Learning something from (almost) nothing - Naver Labs Europe

The seminar runs from 11am to 12pm. Please register online.

Date: 14th June 2019

Speakers: Cristian Cardellino, Professor in Programming Paradigms, Databases & Operating Systems at National University of Cordoba in Argentina.
Milagro Teruel, computer scientist at National University of Cordoba in Argentina.

Abstract: What can we do when we face a complex problem and a very limited amount of annotated data?
In this talk, we present two approaches for applying neural models when only small labeled corpora are available. The challenge is to train a large model, needed to capture semantically complex language phenomena, on only a few labeled documents. We focus on general modifications to the neural architecture rather than relying on more data or domain-specific knowledge.
First, we will talk about how neural attention mechanisms impact performance and assist human annotation in argument mining tasks. Attention allows the model to separate words that are important for the classification task from irrelevant ones. Furthermore, the attention weights can be visualized to gain further insight into the model’s behaviour and possible annotation errors.
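As a rough illustration of the idea (not the speakers' actual model), attention can be seen as a learned softmax weighting over word representations; the weights both pool the sentence into one vector and expose which words the classifier relied on. The dimensions, random vectors, and query vector below are all hypothetical.

```python
import numpy as np

# Toy setup: a 4-word sentence with 5-dimensional word representations
# and a learned attention query vector (here just random placeholders).
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 5))   # word representations (seq_len, dim)
w = rng.normal(size=(5,))     # attention query vector

scores = H @ w                # one relevance score per word
weights = np.exp(scores - scores.max())
weights /= weights.sum()      # softmax: per-word importance, sums to 1

sentence = weights @ H        # attention-weighted sentence vector
```

The `weights` vector is what one would visualize per document to inspect which words drive the classification, and hence to spot possible annotation errors.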
Then, we will talk about a recent architecture for deep semi-supervised learning: ladder networks. We analyze how unlabeled data improves generalization in a legal named entity recognition task.
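At a very high level, the ladder-network idea is to add, on top of the usual supervised loss, a denoising term: a noisy encoder pass must be reconstructed back to the clean activations, so unlabeled examples also contribute a training signal. The sketch below only illustrates how the two losses combine; the weights, data, and the identity-style "decoder" stand-in are hypothetical placeholders, not the actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 2))          # placeholder encoder weights

def encode(x, noise=0.0):
    # Encoder pass, optionally corrupted with Gaussian noise.
    return x @ W + noise * rng.normal(size=2)

x_lab, y_lab = np.ones(3), np.array([0.0, 1.0])   # one labeled example
x_unlab = np.full(3, 0.5)                         # one unlabeled example

# Supervised loss: noisy encoder output vs. the label.
sup_loss = np.mean((encode(x_lab, noise=0.1) - y_lab) ** 2)

# Unsupervised loss: reconstruct the clean activations of the
# unlabeled input from a noisy pass (the real model uses a decoder
# with lateral connections here).
clean = encode(x_unlab)
noisy = encode(x_unlab, noise=0.1)
unsup_loss = np.mean((noisy - clean) ** 2)

total = sup_loss + 0.5 * unsup_loss  # weighted combination of both terms
```

The key point for the talk's setting is that `unsup_loss` needs no labels, so plentiful unannotated legal text can regularize a model trained on few labeled documents.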

