Practical Deep Neural Networks AI:

Best Practices for Gradient Learning

  LEVEL: INTERMEDIATE                    HRDF: CLAIMABLE

    TRAINER: DR ERIC HO TATT WEI

 

WHEN  


22 - 24 NOVEMBER 2021

 

WHERE 


MS TEAMS

 

TIME


9.00AM - 1.00PM

 

RM 1,250 FOR PROFESSIONALS

10% Discount for Early Bird (until 22 November 2021) / Group / Students

CONTENT SUMMARY

INTRODUCTION

Deep neural network artificial intelligence (AI) has brought powerful pattern recognition capabilities to applications across a broad span of industries. Setting up complex image interpretation and recognition software no longer requires deep expertise in machine vision feature selection. Instead, the technical challenge has largely been reduced to acquiring plenty of high-quality labeled image data and applying supervised gradient descent learning to popular architectures such as deep convolutional neural networks, using open-source frameworks like TensorFlow and PyTorch. While it is now easy to set up a deep neural network classifier within a few hours by following one of the many tutorials available online, it remains challenging to ensure that the network is robustly trained for all kinds of data. If not well configured, gradient learning often yields suboptimal classification and sometimes fails to converge altogether.

This course focuses on best practices in designing and configuring gradient learning for deep neural networks. We first introduce the methodology of gradient learning and backpropagation and highlight where gradient learning commonly fails. We then review common training loss functions and regularization strategies that improve the convergence of gradient learning. With a good understanding of these fundamentals, we study the motivation and implementation of the input, weight and activation normalization and clipping techniques commonly used to stabilize gradient learning across different network architectures. We also discuss a numerical technique for checking gradients to assess whether gradient learning has succeeded. Finally, we study methods to enhance learning convergence through adaptive learning algorithms.
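To give a flavour of the kind of diagnostic covered in the course, the sketch below (illustrative only, not taken from the course materials) shows a numerical gradient check: the analytic gradient produced by backpropagation for a tiny linear model is compared against central finite differences. The model, data and error tolerance are assumptions chosen for illustration.

```python
# Minimal numerical gradient check (illustrative sketch, plain NumPy).
import numpy as np

def loss_and_grad(w, x, y):
    """Squared-error loss of a single linear neuron and its analytic gradient."""
    err = x @ w - y
    loss = 0.5 * np.sum(err ** 2)
    grad = x.T @ err            # gradient from backpropagation (chain rule)
    return loss, grad

def numerical_grad(w, x, y, eps=1e-6):
    """Central finite-difference estimate of the same gradient."""
    g = np.zeros_like(w)
    for i in range(w.size):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus[i] += eps
        w_minus[i] -= eps
        g[i] = (loss_and_grad(w_plus, x, y)[0] -
                loss_and_grad(w_minus, x, y)[0]) / (2 * eps)
    return g

rng = np.random.default_rng(0)
x, y, w = rng.normal(size=(8, 3)), rng.normal(size=8), rng.normal(size=3)
_, analytic = loss_and_grad(w, x, y)
numeric = numerical_grad(w, x, y)
# A relative error well below ~1e-6 suggests the backpropagated gradient is correct.
rel_err = np.linalg.norm(analytic - numeric) / (np.linalg.norm(analytic) + np.linalg.norm(numeric))
print(f"relative error: {rel_err:.2e}")
```

The same idea scales to deep networks by spot-checking a small subset of parameters, since evaluating finite differences for every weight is prohibitively expensive.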

COURSE CONTENT

3 Hours

  • Gradient descent and backpropagation learning
  • Challenges of managing gradient learning
  • Training hyperparameters

1 Hour

  • Cost functions
  • Cost function regularization strategies
  • Weightage between data and regularized portions

2 Hours

  • Gradient checking and gradient clipping

1 Hour

  • Dropout regularization

1 Hour

  • Weight initialization and normalization

1 Hour

  • Activation Normalizations (Batch, Layer, Instance, Group, Scale)

1 Hour

  • Input normalization and decorrelation

2 Hours

  • Adaptive gradient learning


OBJECTIVES

Upon completion of this course, participants will be able to:

  • Apply gradient learning best practices to train deep neural networks correctly. 
  • Improve the performance or robustness of deep neural networks.


WHO SHOULD ATTEND?

  • Engineers and researchers from all industries who need to implement deep neural networks AI. 
  • Engineers, researchers and consultants who have difficulty improving the performance of their deep neural network AI systems for Industry 4.0.

Prerequisite: Participants should have some basic knowledge of, and hands-on experience with, setting up and training a deep neural network.

OUR TRAINERS




Dr Eric Ho Tatt Wei (UTP)

Dr Eric Ho Tatt Wei received his MS and PhD degrees in Electrical Engineering from Stanford University in Silicon Valley, USA, specializing in computer hardware and VLSI systems. As part of his PhD research, he developed real-time systems for biological research on fruit flies, conducting automated inspection and guiding robotic manipulation. He is currently pursuing applications of deep neural network technology to network analysis of MRI brain images.

REGISTRATION FEES

Professionals

MYR 1,250

Early Bird / Group / Student

MYR 1,125

*Fees quoted do not include SST, GST/VAT or withholding tax (if applicable).

OUR LOCATION

Centre for Advanced & Professional Education (CAPE)

Level 16, Menara 2, Menara Kembar Bank Rakyat, Jalan Travers, 50470 Kuala Lumpur.

CALL US

+605-368 7558 / +605-368 8485

DROP US AN EMAIL

cape@utp.edu.my