Practical Deep Neural Network AI: Best Practices for Gradient Learning

LEVEL: INTERMEDIATE                    HRDF: CLAIMABLE 

MBOT: Application in Process

TRAINER: 

Ts. Dr. Eric Ho Tatt Wei

 

16 - 17 JUNE 2025

 

MS TEAMS

(ONLINE)

 

9.00AM - 5.00PM

 

RM 1,450 FOR PROFESSIONALS

10% Discount for Early Bird (until 16 May 2025) / Group / Students

CONTENT SUMMARY

INTRODUCTION

Deep neural network artificial intelligence (AI) has brought powerful pattern recognition capabilities to applications across a broad span of industries. Setting up complex image interpretation and recognition software no longer requires deep expertise in machine vision feature selection. Instead, the technical challenge has largely been reduced to acquiring plenty of high-quality labeled image data and applying supervised gradient descent learning to popular architectures, such as deep convolutional neural networks, using open source frameworks like TensorFlow and PyTorch. While it is now easy to set up a deep neural network classifier within a few hours by following one of the many tutorials available online, it remains challenging to ensure that the network is robustly trained across all kinds of data. If not well configured, gradient learning often yields suboptimal classifiers and sometimes fails to converge at all. This course focuses on best practices in designing and configuring gradient learning for deep neural networks. We first introduce the methodology of gradient learning and backpropagation and highlight where gradient learning commonly fails. We review common training loss functions and the regularization strategies that improve the convergence of gradient learning. With these fundamentals in place, we study the motivation and implementation of the input, weight and activation normalization and clipping techniques commonly used to stabilize gradient learning across different network architectures. We also discuss a numerical technique for checking gradients to assess the success of gradient learning. Finally, we study methods to improve learning convergence through adaptive learning algorithms.
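For illustration only, the sketch below shows the kind of training configuration the course examines: a minimal supervised training loop in PyTorch with an adaptive (Adam) optimizer, L2 weight regularization and gradient norm clipping. The model, data loader and hyperparameters are placeholders chosen for this sketch, not course material.

  # Minimal illustrative sketch (assumed setup, not taken from the course notes):
  # supervised gradient descent with Adam, weight decay and gradient clipping.
  import torch
  import torch.nn as nn

  model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                        nn.ReLU(), nn.Linear(128, 10))
  criterion = nn.CrossEntropyLoss()        # training loss function
  optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                               weight_decay=1e-4)   # L2 regularization term

  def train_one_epoch(loader):
      model.train()
      for images, labels in loader:        # batches of labeled image data
          optimizer.zero_grad()
          loss = criterion(model(images), labels)
          loss.backward()                  # backpropagation of gradients
          torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # gradient clipping
          optimizer.step()                 # adaptive gradient update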

COURSE CONTENT

Topic 1

  • Gradient descent and backpropagation learning
  • Challenges of managing gradient learning
  • Training hyperparameters

Topic 2 

  • Cost functions 
  • Cost function regularization strategies
  • Weighting between the data and regularization terms

Topic 3

  • Gradient checking and gradient clipping

Topic 4

  • Dropout regularization

Topic 5

  • Weight initialization and normalization

Topic 6

  • Activation normalization (batch, layer, instance, group, scale)

Topic 7

  • Input normalization and decorrelation

Topic 8

  • Adaptive gradient learning 

OBJECTIVES

Upon completion of this course, participants will be able to:

  • Apply gradient learning best practices to train deep neural networks correctly.
  • Improve the performance or robustness of deep neural networks.


WHO SHOULD ATTEND?

  • Engineers and researchers from all industries who need to implement deep neural network AI.
  • Engineers, researchers and consultants who have difficulty improving the performance of their deep neural network AI systems for Industry 4.0.

Prerequisite: Participants should have some basic knowledge and hands-on experience with setting up and training a deep neural network.





OUR TRAINER

Ts. Dr. Eric Ho Tatt Wei (UTP)

Dr. Eric Ho Tatt Wei received his MS and PhD degrees in Electrical Engineering from Stanford University in Silicon Valley, USA, specializing in computer hardware and VLSI systems. As part of his PhD research, he developed real-time systems for the automated inspection and robotic manipulation of fruit flies in biological research. He is currently pursuing applications of deep neural network technology to network analysis of MRI brain images.


REGISTRATION FEES

PROFESSIONALS

MYR1,450*

*fee quoted does not include SST, HRDF service fee, GST/VAT or withholding tax (if applicable).

EARLY BIRD/ GROUP/ STUDENT

MYR1,305*

*fee quoted does not include SST, HRDF service fee, GST/VAT or withholding tax (if applicable).

OUR LOCATION

Centre for Advanced & Professional Education (CAPE)

Level 8, Permata Sapura, Kuala Lumpur City Centre, 50088 Kuala Lumpur

CALL US

+605 - 368 7558 /

+605 - 368 8485

DROP US AN EMAIL

cape@utp.edu.my