Deep Learning


Let MindShare Bring "Deep Learning Demystified" to Life for You:

The major tech giants (e.g. Google, Amazon, Facebook, Microsoft, and Apple) are convinced that artificial intelligence (AI) will transform the world in short order. AI algorithms determine our internet search results. They recognize faces in our photos and identify commands spoken into our smartphones. They translate sentences from one language to another and can defeat the world champion at Go. They power self-driving cars and autonomous robots, drug discovery, and genomics. In other words, the number of applications of AI technology is exploding. Investment in AI is also opening a new frontier for computing hardware, as many companies, from giant semiconductor firms to a myriad of startups, race to develop new AI-specific chips.

Modern AI technology is based on deep learning algorithms. These algorithms learn tasks on their own by analyzing a very large amount of data. To do so, they flow data through multiple processing layers, each of which extracts and refines information obtained from the previous layer. The “deep” in deep learning refers to the fact that these algorithms use a large number, say dozens or hundreds, of processing layers. This depth allows them to learn complex tasks.
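
The layered flow described above can be sketched in a few lines of PyTorch, the library used throughout this course. This is an illustrative sketch only; the layer sizes are arbitrary and are not taken from the course material.

```python
import torch
import torch.nn as nn

# A "deep" network is simply a stack of processing layers: each Linear layer
# extracts features from the previous layer's output, and ReLU keeps the
# stack nonlinear so that added depth actually increases expressive power.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # layer 1: raw input -> features
    nn.Linear(256, 128), nn.ReLU(),   # layer 2: refine those features
    nn.Linear(128, 64),  nn.ReLU(),   # layer 3: refine again
    nn.Linear(64, 10),                # output layer: 10 class scores
)

x = torch.randn(32, 784)              # a batch of 32 flattened 28x28 images
scores = model(x)
print(scores.shape)                   # torch.Size([32, 10])
```

Adding more hidden layers to the stack is what makes the network "deeper."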

Course Overview:

Deep learning algorithms are remarkably simple to understand and easy to code. Through a sequence of hands-on programming labs and straight-to-the-point, no-nonsense slides and explanations, you will be guided toward developing a clear, solid, and intuitive understanding of deep learning algorithms and why they work so well for AI applications.

Practice: Three types of neural networks power 95% of today's commercial deep learning applications: fully connected neural networks, convolutional neural networks, and recurrent neural networks. During this training you will gain a solid understanding of each of these networks and their typical commercial applications. Most importantly, you will learn how to implement them from scratch with PyTorch (the deep learning library developed by Facebook AI). You will then train them on various image recognition and natural language processing tasks and build a feel for what they can accomplish.
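
As a preview, each of the three network families has a characteristic building block in PyTorch. The sizes below are illustrative choices, not values from the course labs:

```python
import torch
import torch.nn as nn

# Fully connected: every input unit connects to every output unit.
fc = nn.Linear(in_features=100, out_features=50)
print(fc(torch.randn(8, 100)).shape)          # torch.Size([8, 50])

# Convolutional: a small filter slides over an image (here 3-channel, 32x32).
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
print(conv(torch.randn(8, 3, 32, 32)).shape)  # torch.Size([8, 16, 32, 32])

# Recurrent: processes a sequence step by step, carrying a hidden state.
rnn = nn.LSTM(input_size=100, hidden_size=50, batch_first=True)
out, (h, c) = rnn(torch.randn(8, 20, 100))    # 8 sequences of length 20
print(out.shape)                              # torch.Size([8, 20, 50])
```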

Theory: Deep neural networks are trained with gradient descent and backpropagation. These are simple yet very powerful concepts, and they provide the mathematical rules that govern the learning process. You will gain a solid, clear and intuitive understanding of these two fundamental concepts.
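
The gradient descent rule itself fits on one line: nudge each parameter in the direction that reduces the loss. A dependency-free sketch on a toy loss L(w) = (w - 3)^2, whose derivative we can write by hand (in a real network, backpropagation computes this derivative for every parameter):

```python
# Gradient descent on a toy loss L(w) = (w - 3)^2.
# The analytic derivative dL/dw = 2*(w - 3) plays the role that
# backpropagation plays in a real network: it tells us which
# direction to move w in order to reduce the loss.

w = 0.0        # initial parameter value
lr = 0.1       # learning rate (step size)

for step in range(100):
    grad = 2 * (w - 3)   # dL/dw, computed analytically here
    w = w - lr * grad    # the gradient descent update rule

print(round(w, 4))       # converges to the minimum at w = 3.0
```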

GPUs and cloud computing: As we need GPUs in order to train deep neural networks, the programming labs will all take place on a modern cloud platform. You will be given access to your own cloud virtual machine instance with a dedicated GPU to implement and train deep neural networks.

Hardware for deep learning: GPUs are currently the go-to computing platform for deep learning, but they were originally designed for computer graphics, not for deep learning. The past few years have witnessed a race among chip makers to develop Application-Specific Integrated Circuits (ASICs) for deep learning. You will learn the main hardware challenges for deep learning applications and the current technological trends for addressing them. In particular, we will take a peek inside Google's Tensor Processing Unit (TPU), Nvidia's Tensor Cores, and Intel Nervana's Neural Network Processor.

MindShare Courses On Deep Learning:

Course Name: Deep Learning Demystified
Virtual Classroom: 3 days

All of MindShare's classroom and virtual classroom courses can be customized to fit the needs of your group.

Deep Learning Demystified Course Info

You Will Learn:

  • Deep learning algorithms -- why they are at the center of the ongoing AI revolution, and what their main commercial applications are.
  • How to implement deep learning algorithms with PyTorch (the deep learning library developed by Facebook’s artificial intelligence research group).
  • Fully connected neural networks, convolutional neural networks, and recurrent neural networks.
  • The inside mechanics of deep learning algorithms and why they work so well.
  • The hardware challenges for deep learning applications and current trends for addressing them.

Course Length: 3 Days

Who Should Attend?
• Anyone who wants to take a first step toward using AI for their own applications.
• Anyone who will work with or around AI algorithms and wants a solid understanding of how these algorithms work and what types of tasks they can accomplish.
• Anyone who wants to understand how AI algorithms are currently being used by the major tech giants (e.g. Google, Amazon, Facebook, Microsoft, and Apple), and how these algorithms will profoundly impact many industries in the near future.

Course Outline:

Day 1:

  •  Introduction
    • Brief history of machine learning, neural networks, deep learning and AI
    • Recent breakthroughs and today’s main commercial applications
    • What is a neural net? Inference versus Training
    • GPUs for executing today’s deep learning algorithms, and the next generation of AI chips
    • Modern deep learning frameworks: PyTorch, Caffe2, TensorFlow
    • Hands-On Lab: data and tensor manipulation in PyTorch
  • One-layer neural networks (a.k.a. logistic regression)
    • Matrix multiplication and Softmax function
    • Hands-On Lab: Constructing a neural net with PyTorch
    • How to update the internal parameters of a network
    • Template matching
    • Hands-On Lab: Training a one-layer neural net and visualizing what has been learned
  • Loss function and stochastic gradient descent
    • Partial derivatives
    • Gradient descent versus stochastic gradient descent
    • Batch size, data parallelism, and GPU computing
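
The "matrix multiplication and softmax" building block from the one-layer network section above amounts to just this (a pure-Python sketch with made-up weights, not code from the labs):

```python
import math

def softmax(scores):
    # Subtract the max for numerical stability, then normalize the
    # exponentials so the outputs are positive and sum to 1
    # (i.e. a probability distribution over the classes).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# A one-layer network: scores = W @ x + b, then softmax over the scores.
W = [[0.2, -0.5], [1.0, 0.3], [-0.4, 0.8]]   # 3 classes, 2 input features
b = [0.1, 0.0, -0.1]
x = [1.0, 2.0]

scores = [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b_j
          for row, b_j in zip(W, b)]
probs = softmax(scores)
print(probs)     # three class probabilities that sum to 1
```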

Day 2:

  • Multilayer fully connected neural networks (a.k.a. multilayer perceptron)
    • Hands-On Lab: Why do we need depth?
    • The backpropagation algorithm
    • Hands-On Lab: classifying handwritten digits
  • Convolutional neural networks (a.k.a. ConvNet)
    • Convolutional and pooling layers
    • Hands-On Lab: visual recognition with ConvNet
    • Bells and whistles: dropout, residual connections and batch normalization
    • Hands-On Lab: visual recognition with ConvNet revisited
  • State-of-the-art and real world applications
    • State-of-the-art network for visual recognition: ResNet
    • Real-time object detection, self-driving cars
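
The Day 2 topics above fit together in a single small model. A sketch of a ConvNet for 32x32 RGB images that includes two of the outlined "bells and whistles" (batch normalization and dropout); the channel counts are illustrative, not the lab's actual architecture:

```python
import torch
import torch.nn as nn

# Convolution extracts local features, pooling shrinks the spatial size,
# batch norm stabilizes training, and dropout regularizes the classifier.
net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16),
    nn.ReLU(),
    nn.MaxPool2d(2),              # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(),
    nn.MaxPool2d(2),              # 16x16 -> 8x8
    nn.Flatten(),
    nn.Dropout(0.5),
    nn.Linear(32 * 8 * 8, 10),    # 10 class scores
)

print(net(torch.randn(4, 3, 32, 32)).shape)   # torch.Size([4, 10])
```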

Day 3:

  • Deep Learning hardware
    • Hardware challenges for deep learning applications
    • Dataflow architectures, low precision arithmetic
    • Facebook’s Big Basin GPU server
    • Nvidia’s Tensor Cores
    • Google’s Tensor Processing Unit (TPU)
    • Nervana, Graphcore, Wave Computing (the next generation of AI chips?)
  • Recurrent Neural Networks (RNNs)
    • Neural networks for sequential data (e.g. natural language and audio)
    • Vanilla RNNs
    • Long Short-Term Memory (LSTM)
    • Hands-On Lab: language modeling and text generation
  • State-of-the-art and real world applications
    • State-of-the-art translation system (Google Translate)
    • State-of-the-art speech recognition system (Baidu’s Deep Speech)
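
The RNN topics above combine into the skeleton of a language model: embed each token, run an LSTM over the sequence, and predict the next token at every step. The vocabulary size and layer widths here are arbitrary illustrations, not the lab's settings:

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 65, 32, 64

embed = nn.Embedding(vocab_size, embed_dim)            # token id -> vector
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)  # carries hidden state
head = nn.Linear(hidden_dim, vocab_size)               # next-token scores

tokens = torch.randint(0, vocab_size, (2, 10))  # 2 sequences of 10 token ids
out, _ = lstm(embed(tokens))                    # one hidden vector per step
logits = head(out)
print(logits.shape)            # torch.Size([2, 10, 65]): scores per position
```

Sampling repeatedly from these per-position scores is how text generation works in the Day 3 lab.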


Lab Exercises:

• Implement a convolutional neural network for visual recognition. Train it on the cloud with a GPU. Then learn what is needed to transform your code into a state-of-the-art visual recognition system.
• Implement from scratch a recurrent neural network for natural language processing. Train it on the cloud with a GPU. Then learn what would be needed to transform your code into a state-of-the-art translation or speech recognition system.
• Observe in real time how a neural network changes its internal parameters to learn useful representations of the data.
• Play with various neural nets and get an intuitive understanding of why a neural net needs many layers (i.e. the “deep” in deep learning) in order to learn complex tasks.

Description of Lab Environment:

In order to have access to GPUs, we will use a modern cloud platform. At the beginning of each lab, you will simply enter an IP address in the local browser of your laptop -- you will then be connected to a Jupyter Notebook that runs on a cloud virtual machine instance with a dedicated GPU. The Jupyter Notebook will provide you with detailed instructions on how to progress through the lab.

Required Equipment:

Students must bring their laptop (Linux, Mac, or Windows).

Recommended Prerequisites:

• Python programming. If you have programming experience in a different language (e.g. C/C++/Matlab/R/JavaScript) you will be fine -- training starts with a Python programming refresher.
• Basic understanding of matrices and derivatives. We will provide quick refreshers when needed.
• Basic understanding of computer architecture.

Supplied Materials:

• Downloadable PDF version of the presentation slides
• Lab exercise solutions