Teaching

13. Accelerate the learning with Parallel training on multiple GPUs

Hands-on description: Deep learning codes are highly parallel, which makes them not only well suited to GPU acceleration but also scalable to multiple GPUs and multiple nodes. In today’s hands-on, we will present how we can parallelize the training of a single Deep Neural Network over many GPUs in one node of CTE-POWER using tf.distribute.MirroredStrategy() …
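
A minimal sketch of the idea, assuming a TensorFlow 2.x / Keras environment; the toy model, batch size, and preprocessing below are illustrative choices of mine, not the hands-on’s exact code:

```python
# Sketch of single-node, multi-GPU data parallelism with
# tf.distribute.MirroredStrategy (illustrative model and hyperparameters).
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()        # one replica per visible GPU
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Build and compile the model inside the strategy scope so its variables
# are mirrored (replicated and kept in sync) across the GPUs.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(32, 32, 3)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# The global batch is split across replicas, so it is usually scaled with
# the number of GPUs to keep the per-GPU batch size constant.
(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0
global_batch_size = 64 * strategy.num_replicas_in_sync
model.fit(x_train, y_train, epochs=1, batch_size=global_batch_size)
```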

12. Parallel and Distributed Deep Learning Frameworks

Hands-on exercise description: In today’s hands-on exercise we will review the basic concepts through an image classification problem (based on the CIFAR10 dataset), using a neural network as the classifier (ResNet50V2, with 25,613,800 parameters). We will introduce and analyze all the required steps to train this classifier sequentially on one GPU. This …
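
A minimal sketch of that sequential, single-GPU baseline, assuming a Keras setup; only CIFAR10 and the ResNet50V2 architecture come from the exercise description, the optimizer, batch size, and epoch count are illustrative assumptions:

```python
# Sequential baseline: train ResNet50V2 on CIFAR10 using a single GPU.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# Randomly initialised ResNet50V2 with a 10-class head; model.summary()
# reports about 25.6 million parameters.
model = tf.keras.applications.ResNet50V2(weights=None,
                                         input_shape=(32, 32, 3),
                                         classes=10)
model.compile(optimizer="sgd",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Plain sequential training: every batch is processed on one GPU.
model.fit(x_train, y_train, batch_size=64, epochs=1,
          validation_data=(x_test, y_test))
```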

11. Using Supercomputers for DL training

Last Monday, all the attendees (including the teacher) agreed that today’s class could be held online in our meet room, using the hands-on exercise described below. The experience will serve as a proof-of-concept class to find out whether we can follow the remaining classes (of this part 2 of the course) without requiring our presence …

Without the Rise of Supercomputers There Will Be No Progress in Artificial Intelligence

We need solutions that bridge these two research worlds. Although artificial intelligence (AI) research began in the 1950s, and the fundamental algorithms of neural networks were already known in the last century, the discipline lay dormant for many years due to the lack of results that could excite both academia and industry. Artificial Intelligence …
