Featured publications of our research group “Emerging Technologies for Artificial Intelligence” at BSC

Our research group has been conducting research on Artificial Intelligence for several years. After an important paper accepted at ICLR last year, we now have exceptional research results thanks to the intense and excellent work of the PhD students that I co-advise together with Xavier Giró-i-Nieto. Below I present a brief description of the most outstanding accepted paper from each of them. With their work, BSC-CNS is positioned as one of the centers generating outstanding contributions in the area of artificial intelligence. We are tremendously happy!!!
 
Víctor Campos, Xavier Giró-i-Nieto and Jordi Torres. “Importance Weighted Evolution Strategies”, NeurIPS 2018 Deep Reinforcement Learning Workshop, December 2018.
 
Deep reinforcement learning (RL) agents can be trained not only with backpropagation, but also with alternatives such as evolution strategies. This genetics-inspired algorithm creates populations of neural networks that guide the actions of an agent in an environment. The agents that achieve higher rewards are kept and serve as seeds for new ones, while those performing poorly are discarded. In this work we exploit the outstanding computational capabilities of BSC’s MareNostrum to run these agents in parallel across hundreds of CPU cores, which provides almost perfect scalability. The main contribution of this work concerns data efficiency: some of the simulations are reused by different agents of the evolving population.
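
To give a flavour of how such a population-based search works, here is a minimal, illustrative sketch of a vanilla evolution strategies loop in NumPy. It is not the importance-weighted algorithm from the paper, and `evaluate_policy` is a hypothetical stand-in for an environment rollout; in the real setup, those rollouts are what get distributed across MareNostrum’s CPU cores.

```python
# Minimal, illustrative evolution strategies (ES) loop.
# NOT the importance-weighted algorithm from the paper; `evaluate_policy`
# is a hypothetical placeholder for running an RL episode and returning its reward.
import numpy as np

def evaluate_policy(params: np.ndarray) -> float:
    # Toy objective: reward peaks when all parameters equal 1.
    return -np.sum((params - 1.0) ** 2)

def evolution_strategies(dim=10, population=50, sigma=0.1, lr=0.02, iterations=200):
    theta = np.zeros(dim)  # current policy parameters
    for _ in range(iterations):
        # One random perturbation per population member.
        noise = np.random.randn(population, dim)
        rewards = np.array([evaluate_policy(theta + sigma * eps) for eps in noise])
        # Rank-normalise rewards so the update is robust to reward scale.
        ranks = rewards.argsort().argsort()
        weights = (ranks / (population - 1)) - 0.5
        # Gradient estimate: reward-weighted sum of the perturbations.
        theta += lr / (population * sigma) * noise.T @ weights
    return theta

print(evolution_strategies())
```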
 
 
Miriam Bellver, Amaia Salvador, Jordi Torres and Xavier Giró-i-Nieto. “Budget-aware Semi-Supervised Semantic and Instance Segmentation” (link), CVPR 2019 DeepVision Workshop.
 
The annotation cost of large amounts of data is often the bottleneck when training deep neural networks. This is especially true for pixel-wise computer vision tasks such as semantic or instance segmentation. We show that, given a fixed and low annotation budget, it is better to invest it in a few high-quality pixel-wise annotations than in a larger number of weak annotations in the form of image-level labels.
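
As a rough illustration of what a fixed annotation budget means in practice, the snippet below compares how many strong pixel-wise masks versus weak image-level labels the same time budget buys; all per-annotation costs are made-up placeholders, not figures from the paper.

```python
# Illustrative only: the trade-off a fixed annotation budget imposes.
# The per-annotation costs are hypothetical placeholders, not measurements from the paper.
BUDGET_SECONDS = 10_000      # total annotation time available (assumed)
COST_PIXELWISE = 80.0        # seconds per full pixel-wise mask (assumed)
COST_IMAGE_LABEL = 2.0       # seconds per image-level label (assumed)

strong_labels = int(BUDGET_SECONDS / COST_PIXELWISE)   # few, high-quality masks
weak_labels = int(BUDGET_SECONDS / COST_IMAGE_LABEL)   # many, weak labels

print(f"The same budget buys {strong_labels} pixel-wise masks or {weak_labels} image labels")
```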
 
Amanda Duarte, Francisco Roldan, Miquel Tubau, Janna Escur, Santiago Pascual, Amaia Salvador, Eva Mohedano, Kevin McGuinness, Jordi Torres and Xavier Giró-i-Nieto. “Wav2Pix: Speech-conditioned Face Generation using Generative Adversarial Networks” (link), ICASSP 2019.
 
Video sequences from popular YouTubers are used to build a self-supervised pipeline that pairs speech snippets with face images. We train a generative adversarial network (GAN) to generate new faces from speech in an end-to-end fashion, without any pre-computation of audio features. This work was done in collaboration with Dublin City University.
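
For intuition, here is a minimal, illustrative PyTorch sketch of a speech-conditioned generator: a small encoder maps the raw waveform to an embedding (no hand-crafted audio features), and the generator decodes that embedding plus noise into an image. The layer sizes and module names are assumptions for clarity, not the actual Wav2Pix architecture.

```python
# Illustrative sketch of a speech-conditioned GAN generator.
# Layer sizes and module names are assumptions, not the Wav2Pix architecture.
import torch
import torch.nn as nn

class SpeechEncoder(nn.Module):
    """Encodes a raw waveform into a fixed-size conditioning vector."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=32, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=32, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, wav):                 # wav: (batch, 1, samples)
        h = self.net(wav).squeeze(-1)       # (batch, 64)
        return self.proj(h)                 # (batch, embed_dim)

class Generator(nn.Module):
    """Maps the speech embedding plus noise to a small RGB face image."""
    def __init__(self, embed_dim=128, noise_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim + noise_dim, 4 * 4 * 256), nn.ReLU(),
            nn.Unflatten(1, (256, 4, 4)),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, speech_embed, noise):
        return self.net(torch.cat([speech_embed, noise], dim=1))  # (batch, 3, 32, 32)

# Toy forward pass: one second of 16 kHz audio conditions a 32x32 face image.
wav = torch.randn(2, 1, 16000)
z = torch.randn(2, 100)
img = Generator()(SpeechEncoder()(wav), z)
print(img.shape)  # torch.Size([2, 3, 32, 32])
```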