Starts: Aug 15th, 2017

Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization (Coursera)

This course will teach you the "magic" of getting deep learning to work well. Rather than the deep learning process being a black box, you will understand what drives performance, and be able to more systematically get good results. You will also learn TensorFlow.

After 3 weeks, you will:

- Understand industry best practices for building deep learning applications.

- Be able to effectively use the common neural network "tricks", including initialization, L2 and dropout regularization, batch normalization, and gradient checking (gradient checking is sketched after this list).

- Be able to implement and apply a variety of optimization algorithms, such as mini-batch gradient descent, momentum, RMSprop, and Adam, and check them for convergence (an Adam step is sketched after this list).

- Understand new best practices for the deep learning era: how to set up train/dev/test sets and analyze bias/variance.

- Be able to implement a neural network in TensorFlow (see the minimal example after this list).
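
To make the "tricks" bullet concrete, here is a minimal NumPy sketch of gradient checking: the analytic gradient is compared against a centered finite difference, (f(theta + eps) - f(theta - eps)) / (2 eps). The names `gradient_check`, `f`, and `grad_f` are illustrative, not the course's assignment API.

```python
import numpy as np

def gradient_check(f, grad_f, theta, epsilon=1e-7):
    """Compare grad_f(theta) against a centered finite difference.
    theta is a 1-D NumPy array; f returns a scalar loss.
    (Illustrative names, not the course's assignment API.)"""
    analytic = grad_f(theta)
    numeric = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = epsilon
        # centered difference: (f(theta + eps) - f(theta - eps)) / (2 eps)
        numeric[i] = (f(theta + e) - f(theta - e)) / (2 * epsilon)
    # relative error; roughly 1e-7 or less suggests the gradient is correct
    return np.linalg.norm(analytic - numeric) / (
        np.linalg.norm(analytic) + np.linalg.norm(numeric))

# usage on a function whose gradient is known exactly
f = lambda t: np.sum(t ** 2)   # loss
grad_f = lambda t: 2 * t       # its analytic gradient
print(gradient_check(f, grad_f, np.array([1.0, -2.0, 3.0])))  # tiny, e.g. ~1e-10
```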
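
The optimization bullet can likewise be sketched. Below is one Adam step as published by Kingma & Ba, written in plain NumPy for illustration; it combines a momentum-style first-moment estimate with an RMSprop-style second-moment estimate, plus bias correction. This is a sketch of the standard algorithm, not the course's assignment code.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; t is the 1-based step count (for bias correction)."""
    m = beta1 * m + (1 - beta1) * grad         # momentum-style first moment
    v = beta2 * v + (1 - beta2) * grad ** 2    # RMSprop-style second moment
    m_hat = m / (1 - beta1 ** t)               # correct the bias from zero init
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# usage: minimize f(theta) = ||theta||^2, whose gradient is 2 * theta
theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 5001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(theta)  # approaches [0, 0]; watching such iterates is a simple convergence check
```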
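
For the TensorFlow bullet, here is a minimal fully-connected classifier using today's Keras API. The layer sizes and the training call are placeholders, and the course's 2017-era assignment uses TensorFlow's lower-level API, so treat this only as a sketch of the idea.

```python
import tensorflow as tf

# A small fully-connected network; sizes are illustrative
# (e.g. 64x64x3 images flattened to 12288 inputs, 6 classes).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(25, activation="relu", input_shape=(12288,)),
    tf.keras.layers.Dense(12, activation="relu"),
    tf.keras.layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam",  # the same Adam update sketched above
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=20, batch_size=64)  # with your own data
```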

This is the second of five courses in the Deep Learning Specialization.


Who is this class for:

- Learners who took the first course of the specialization, "Neural Networks and Deep Learning".

- Anyone who already understands fully-connected neural networks and wants to learn the practical aspects of making them work well.


Syllabus


WEEK 1

Practical aspects of Deep Learning

Graded: Practical aspects of deep learning

Graded: Initialization

Graded: Regularization

Graded: Gradient Checking


WEEK 2

Optimization algorithms

Graded: Optimization algorithms

Graded: Optimization


WEEK 3

Hyperparameter tuning, Batch Normalization and Programming Frameworks

Graded: Hyperparameter tuning, Batch Normalization, Programming Frameworks

Graded: TensorFlow

