First steps with PyTorch


This post was originally written for the students of my DLAI course, although I think it may also interest other students. In this blog I am going to share the teaching material I am preparing for the part of the DLAI course that covers the basic principles of Deep Learning from a computational perspective.

It provides a fast-paced introduction to the PyTorch API required to follow the DLAI Labs (Master Course at UPC – Autumn 2017), in which we will review the basics of PyTorch.


PyTorch can be seen as a Python package for GPU-accelerated deep neural network programming that complements or partly replaces existing Python packages for math and statistics, such as NumPy. PyTorch is the Python implementation of the Torch machine learning framework. Torch was originally implemented in C with a wrapper in the Lua scripting language; PyTorch wraps the core Torch binaries in Python and provides GPU acceleration for many functions.

Unlike TensorFlow, PyTorch offers a quick way to modify existing neural networks without rebuilding the network from scratch, and makes it easy to print Tensors on the screen. These are important features for Deep Learning researchers, who usually need several iterations to get an implementation right. Researchers also value the easy interoperability with NumPy: a simple .numpy() call converts a Tensor to a NumPy array.
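As a small illustration of this NumPy interoperability (the variable names here are just examples):

```python
import numpy as np
import torch

# Create a PyTorch tensor and convert it to a NumPy array with .numpy().
t = torch.ones(2, 3)
a = t.numpy()          # a CPU tensor shares its memory with the resulting array
print(type(a))         # <class 'numpy.ndarray'>

# The conversion also works in the other direction with torch.from_numpy().
b = torch.from_numpy(np.arange(6.0).reshape(2, 3))
print(b.shape)         # torch.Size([2, 3])
```
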

However, PyTorch models are not as easy to put into production or ship to mobile as TensorFlow models, since TensorFlow has production-ready deployment options. TensorFlow also supports distributed training, which PyTorch at the moment is crucially lacking. PyTorch is still a young framework, but it is rapidly gaining momentum. If you have the time, my advice is to try both and see which fits your needs best, especially if you are doing research or your production requirements are not very demanding.

Below are the tasks of this lab session. If you don’t finish all of them during the session, please read the last Task before leaving the classroom.

Task 1:  Update DLAI lab docker image

We need to make sure that we have the latest version of the DLAI lab Docker image. Update or download it:

docker pull jorditorresbcn/dlai-met:latest

Next we need to create a new container with the following options:

  • Forward port 8888
docker run -it -p 8888:8888 jorditorresbcn/dlai-met:latest

newfolder refers to an empty folder created by you somewhere inside your home folder; you can mount it into the container with Docker’s -v option if you want to share files with your host.

In your container, clone the course repository inside the /app/code folder:

cd /app/code
git clone

Start a jupyter notebook using this command:

jupyter notebook --ip=0.0.0.0 --allow-root

Open a browser and go to http://localhost:8888; the password is dlaimet.

If you are on Windows and you are experiencing connectivity issues, please check THIS.

In this lab we will use port 8888; if you are using a remote computer, please open this port.

Task 2: Run your first PyTorch program

Run your first PyTorch program following these instructions:

First of all, using Jupyter in your browser, open the PyTorch examples folder and locate the mnist-pytorch-book file. Try to run all the blocks in order to check your PyTorch installation.

The output should be something like:

LeNet (
  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear (400 -> 500)
  (fc2): Linear (500 -> 10)
)
 Train Epoch: 1 [16640/60000 (28%)]	Loss: 2.027052
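To give you an idea of what produces that summary, here is a hedged sketch of a network definition consistent with the printed layers above (the notebook’s actual code may differ in details such as the forward pass):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNet(nn.Module):
    """Sketch of a LeNet-style net matching the printed summary above."""
    def __init__(self):
        super(LeNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5, padding=2)  # 28x28 -> 28x28
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)            # 14x14 -> 10x10
        self.fc1 = nn.Linear(400, 500)   # 16 * 5 * 5 = 400 features after pooling
        self.fc2 = nn.Linear(500, 10)    # 10 MNIST classes

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # -> 6 x 14 x 14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # -> 16 x 5 x 5
        x = x.view(x.size(0), -1)                   # flatten to 400 features
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=1)

net = LeNet()
print(net)  # prints a layer summary like the one shown above
```
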

Task 3: Analyzing the code

Using Jupyter in your browser, look for these parts in the code:

  • Identify how PyTorch reads the data.
  • Identify where the neural net definition is.
  • What layers are used in this net? What activation functions are used?
  • Identify the loss and optimizer functions.
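As a hint, the pieces you are looking for typically take this shape in a PyTorch script. This is a hedged sketch using a toy linear model and in-memory data, so the names and values differ from the notebook’s:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Data reading: the notebook uses a DataLoader; here we feed it a toy
# in-memory dataset instead of the real MNIST images.
inputs = torch.randn(4, 784)
targets = torch.tensor([0, 1, 2, 3])
loader = DataLoader(TensorDataset(inputs, targets), batch_size=2)

# Model: a toy linear classifier standing in for the notebook's network.
model = nn.Linear(784, 10)

# Loss function: cross-entropy is the usual choice for MNIST classification.
criterion = nn.CrossEntropyLoss()

# Optimizer: SGD with momentum, as in the classic MNIST example.
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

# One illustrative training step:
optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()
print(loss.item())
```
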

Task 4: What differences can you see between Keras and PyTorch?

Task 5: Hyperparameters

Going through the shared folder, modify the hyperparameters (batch size, number of epochs, learning rate), the optimizer and loss functions, and the layers of the model. Check the logs to see whether your modifications yield better accuracy or better speed. Make charts comparing accuracy per epoch for the different configurations.

Task 6: Lab Report

Build a lab report with a brief explanation of the previous Tasks. Send the report to with the subject “[DLAI] Lab PyTorch”; include the names of all the group members and put them in copy.

My thanks to Francesc Sastre for helping me with the preparation of this lab.

October 3rd, 2017