Supercomputers Architecture Course (SA-MIRI)

SA-MIRI: SUPERCOMPUTERS ARCHITECTURE Syllabus

UPC Master in Innovation and Research in Informatics 

(specialization in High Performance Computing)

Official Course Web Site

http://www.fib.upc.edu/en/masters/miri/syllabus.html?assig=SA-MIRI

Course Description (2021 edition):

This course introduces the fundamentals of high-performance and parallel computing. It is targeted at scientists and engineers seeking to develop the skills necessary for working with supercomputers, the leading edge in high-performance computing technology.

In the first part of the course, we will cover the basic building blocks of supercomputers and their system software stack. Then, we will introduce their traditional parallel and distributed programming models, which allow us to exploit parallelism, a central element for scaling applications on this type of high-performance infrastructure.
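
As a first taste of the shared-memory programming model mentioned above (OpenMP, covered in Part 1), the sketch below shows the kind of small C program the hands-on exercises start from. It is an illustrative example, not course material; it only assumes a compiler with OpenMP support (e.g., gcc -fopenmp).

  /* hello_omp.c: each thread of the team prints its id */
  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      #pragma omp parallel
      {
          int id = omp_get_thread_num();        /* id of this thread          */
          int nthreads = omp_get_num_threads(); /* number of threads in team  */
          printf("Hello from thread %d of %d\n", id, nthreads);
      }
      return 0;
  }

Running it with, for example, OMP_NUM_THREADS=4 ./hello_omp prints one greeting per thread.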

In the second part of the course, we will present the motivation behind the current supercomputing systems developed to support the artificial intelligence algorithms required in today's world. This year's syllabus will pay special attention to Deep Learning (DL) algorithms and their scalability on GPU platforms.

This course follows a “learn by doing” approach, based on a set of exercises, consisting of programming problems and paper readings, that students must carry out throughout the course. The course is graded by continuous assessment, which encourages constant, steady work.

All in all, this course seeks to give students practical skills that will help them adapt to, and anticipate, the new technologies that will undoubtedly emerge in the coming years. For the practical part of the exercises, students will use supercomputing facilities of the Barcelona Supercomputing Center (BSC-CNS).

Important warning about course workload and attendance

Students should be aware that the SA-MIRI 2021 edition requires an effort equivalent to 6.0 ECTS credits. Therefore, this course is not recommended for students whose other commitments during the term prevent them from dedicating the required number of hours to it.

Regular and consistent attendance is mandatory unless you have an occasional reason to miss a class that is acceptable to the instructor, for instance, health reasons or visa matters in the case of international students. Missing classes to attend other courses, such as the PATC courses from BSC-CNS, will not be accepted by the instructor. If you expect to miss classes for a reason of this kind, you should wait and enroll in the next edition of the course.

It is the student’s responsibility to obtain documentation, handouts, assignments, etc. given in class for any missed classes (from a fellow student, for example).

Prerequisites

Programming in C and basic Linux skills are expected in the course. In addition, prior exposure to parallel programming constructs, the Python language, linear algebra/matrices, or machine learning will be helpful.

Course Activities:

Class attendance and participation: Regular attendance is expected and is required in order to discuss the concepts covered in class.

Lab activities: Some exercises will be conducted as hands-on sessions during the course using supercomputing facilities. Students will need their own laptop to access these resources during class. Each hands-on session will involve writing a lab report with all the results. There are no separate days for theory classes and laboratory classes: theoretical and practical activities will be interspersed within the same session to facilitate the learning process.

Reading/presentation assignments: Some exercise assignments will consist of reading documentation/papers that expand on the concepts introduced during lectures. Some exercises will involve student presentations (presenters chosen at random).

Assessment: There will be one midterm exam halfway through the course. Students are allowed to use any type of documentation (including digital documentation on their own laptop).

Grading Procedure  

The course grade can be obtained through continuous assessment, which will take into account the following:

  •   25% Attendance + participation
  •   10% Midterm exam
  •   65% Exercises (+ exercise presentations) and Lab exercises (+ Lab reports)

The weight of each course component in the final grade is detailed in the tentative scheduling section.

Course Exam: For those students who have not benefited from the continuous assessment, a course exam will be announced during the course. This exam evaluates knowledge of the entire course (practical, theoretical, and self-learning parts). During this exam, students are not allowed to use any documentation (neither on paper nor in digital form).

Tentative course content (topics):

Welcome

PART 1: WHAT BSC SUPERCOMPUTERS ARE AND HOW TO PROGRAM THEM

  1. Supercomputing basics
  2. General-purpose supercomputers
  3. Parallel programming languages for shared memory platforms: OpenMP basics
  4. Parallel programming languages for distributed platforms: MPI (see the sketch after this list)
  5. Parallel performance
  6. Heterogeneous supercomputers
  7. Parallel programming languages for heterogeneous platforms: CUDA
  8. Emerging trends and challenges in supercomputing
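
Topic 4 above introduces MPI, the message-passing model for distributed-memory platforms. As an illustrative sketch (not course material), a minimal MPI program in C looks like the following; it only assumes an MPI implementation that provides the usual mpicc wrapper and mpirun launcher:

  /* hello_mpi.c: each process (rank) prints its id */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv) {
      int rank, size;
      MPI_Init(&argc, &argv);                /* start the MPI runtime          */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* id of this process             */
      MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes      */
      printf("Hello from rank %d of %d\n", rank, size);
      MPI_Finalize();                        /* shut down the MPI runtime      */
      return 0;
  }

Compiled with mpicc and launched with, for example, mpirun -np 4 ./hello_mpi, it prints one line per process, and the processes may run on different nodes of the machine.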

PART 2: HOW MODERN SUPERCOMPUTERS CAN BE USED TO ACCELERATE DL TRAINING

  1. Getting started with DL
  2. AI is a supercomputing problem
  3. Using supercomputers for DL training
  4. Parallel and distributed deep learning frameworks
  5. Accelerating learning with parallel training on multiple GPUs
  6. Accelerating learning with distributed training on multiple servers

Each topic ends with an exercise.

Tentative list of exercises:

  • Exercise 01: Read and present a paper about exascale computer challenges
  • Exercise 02: Getting started with supercomputing
  • Exercise 03: Getting started with OpenMP
  • Exercise 04: Getting started with MPI
  • Exercise 05: Getting started with parallel performance metrics and models (see the formulas after this list)
  • Exercise 06: Comparing the performance of supercomputers
  • Exercise 07: Getting started with CUDA
  • Exercise 08: Read and present a paper about emerging trends in supercomputing
  • Exercise 09: First contact with deep learning
  • Exercise 10: The 58th edition of the TOP500 list (Nov 2021, St. Louis, MO, USA)
  • Exercise 11: Using a supercomputer for deep learning training
  • Exercise 12: Using prebuilt models for deep learning training
  • Exercise 13: Getting started with parallel deep learning training
  • Exercise 14: Getting started with distributed deep learning training
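
Exercise 05 above refers to parallel performance metrics and models. For reference only, the standard definitions this kind of exercise typically builds on are the following, with T_1 the serial execution time, T_p the execution time on p processors, and f the parallelizable fraction of the work in Amdahl's model:

\[
S(p) = \frac{T_1}{T_p}, \qquad
E(p) = \frac{S(p)}{p}, \qquad
S_{\mathrm{Amdahl}}(p) = \frac{1}{(1 - f) + f/p}
\]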

Tentative course scheduling:

Professor:

Jordi Torres
Office: Mòdul C6-217 (second floor)
https://torres.ai
@JordiTorresAI

Office Hours: 
By appointment

Additional documentation:

 

This syllabus will be revised/updated until the start of the course (last modified 11/Jun/2021)