
Skip RNN: Skipping State Updates in Recurrent Neural Networks

Victor Campos, our PhD student at BSC, will present his latest paper at the Sixth International Conference on Learning Representations (ICLR 2018). The paper, “Skip RNN: Skipping State Updates in Recurrent Neural Networks”, is the result of a collaboration between the Barcelona Supercomputing Center, Google Inc., Universitat Politècnica de Catalunya and Columbia University.

Recurrent Neural Networks (RNNs) continue to show outstanding performance in sequence modeling tasks. However, training RNNs on long sequences often faces challenges such as slow inference, vanishing gradients and difficulty in capturing long-term dependencies. In backpropagation through time settings, these issues are tightly coupled with the large, sequential computational graph that results from unfolding the RNN in time. We introduce the Skip RNN model, which extends existing RNN models by learning to skip state updates, thereby shortening the effective size of the computational graph. The model can also be encouraged to perform fewer state updates through a budget constraint. In the paper we evaluate the proposed model on a variety of tasks and show that it can reduce the number of required RNN updates while preserving, and sometimes even improving, the performance of the baseline RNN models.
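To make the idea concrete, the following is a minimal sketch of the skip mechanism described above: a binary gate decides at each time step whether to update the hidden state or simply copy the previous one, and the number of updates actually used is what the budget term penalizes. This is plain NumPy with illustrative names (skip_rnn_forward, a toy tanh cell), not the released TensorFlow implementation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def skip_rnn_forward(x_seq, hidden_size, rng=np.random.default_rng(0)):
    """Run a toy Skip RNN over a sequence; return states and binary update gates."""
    input_size = x_seq.shape[1]
    # Parameters of the wrapped cell (a plain tanh RNN here) and of the layer
    # that emits the increment of the update probability.
    W_x = rng.normal(scale=0.1, size=(input_size, hidden_size))
    W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
    b_h = np.zeros(hidden_size)
    w_p = rng.normal(scale=0.1, size=hidden_size)
    b_p = 1.0  # bias towards updating early in the sequence

    s = np.zeros(hidden_size)   # hidden state s_t
    u_tilde = 1.0               # accumulated update probability
    states, gates = [], []

    for x in x_seq:
        u = float(np.round(u_tilde))              # binary update gate
        if u == 1.0:
            s = np.tanh(x @ W_x + s @ W_h + b_h)  # perform the state update
        # otherwise the previous state is copied unchanged (the skipped update)
        delta = sigmoid(s @ w_p + b_p)            # increment of the update probability
        # reset the accumulator after an update, otherwise keep accumulating
        u_tilde = u * delta + (1.0 - u) * min(u_tilde + delta, 1.0)
        states.append(s)
        gates.append(u)

    return np.stack(states), np.array(gates)

if __name__ == "__main__":
    x_seq = np.random.default_rng(1).normal(size=(50, 8))  # 50 steps, 8 features
    states, gates = skip_rnn_forward(x_seq, hidden_size=16)
    # The budget constraint penalizes the total number of updates,
    # i.e. a term proportional to gates.sum() added to the task loss.
    print("updates used:", int(gates.sum()), "of", len(gates))

Fewer gate activations mean fewer cell evaluations and a shorter effective computational graph, which is where the inference and training savings come from.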

We evaluate the Skip RNN model on a series of tasks: (1) the adding task, (2) a frequency discrimination task, (3) digit classification, (4) sentiment analysis, and (5) action recognition. Please see the paper for results and discussion. The code of the project is available on GitHub and was developed with Python 3.6.0 and TensorFlow 1.0.0.

More information can be found on the project's web page.