Global Autoregressive Models for Data-Efficient Sequence Learning
Tetiana Parshakova, Jean-Marc Andreoli, Marc Dymetman
The SIGNLL Conference on Computational Natural Language Learning (CoNLL), Hong Kong, China, 3-4 November 2019
@inproceedings{parshakova-etal-2019-global,
    title = "Global Autoregressive Models for Data-Efficient Sequence Learning",
    author = "Parshakova, Tetiana and Andreoli, Jean-Marc and Dymetman, Marc",
    booktitle = "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/K19-1084",
    doi = "10.18653/v1/K19-1084",
    pages = "900--909",
    abstract = "Standard autoregressive seq2seq models are easily trained by max-likelihood, but tend to show poor results under small-data conditions. We introduce a class of seq2seq models, GAMs (Global Autoregressive Models), which combine an autoregressive component with a log-linear component, allowing the use of global \textit{a priori} features to compensate for lack of data. We train these models in two steps. In the first step, we obtain an \emph{unnormalized} GAM that maximizes the likelihood of the data, but is improper for fast inference or evaluation. In the second step, we use this GAM to train (by distillation) a second autoregressive model that approximates the \emph{normalized} distribution associated with the GAM, and can be used for fast inference and evaluation. Our experiments focus on language modelling under synthetic conditions and show a strong perplexity reduction of using the second autoregressive model over the standard one.",
}
Abstract
Standard autoregressive seq2seq models are easily trained by max-likelihood, but tend to show poor results under small-data conditions. We introduce a class of seq2seq models, GAMs (Global Autoregressive Models), which combine an autoregressive component with a log-linear component, allowing the use of global a priori features to compensate for lack of data. We train these models in two steps. In the first step, we obtain an unnormalized GAM that maximizes the likelihood of the data, but is improper for fast inference or evaluation. In the second step, we use this GAM to train (by distillation) a second autoregressive model that approximates the normalized distribution associated with the GAM, and can be used for fast inference and evaluation. Our experiments focus on language modelling under synthetic conditions and show a strong perplexity reduction when using the second autoregressive model over the standard one.
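The combination described above can be sketched as an energy-based potential: an autoregressive probability modulated by a log-linear term over global, whole-sequence features. The sketch below is purely illustrative and not taken from the paper's code; the feature functions, the `lambdas` weights, and all names are assumptions chosen for clarity.

```python
import math

def ar_log_prob(cond_probs):
    """Log-probability of a sequence under an autoregressive model,
    given its per-token conditional probabilities p(x_t | x_<t)."""
    return sum(math.log(p) for p in cond_probs)

def global_features(sequence):
    """Two toy global (whole-sequence) features: the sequence length,
    and an indicator for whether the sequence is a palindrome.
    Real a priori features would encode domain knowledge."""
    return [float(len(sequence)), 1.0 if sequence == sequence[::-1] else 0.0]

def gam_unnormalized_log_score(sequence, cond_probs, lambdas):
    """Log of the unnormalized GAM potential:
    log p_AR(x) + <lambda, phi(x)>.
    Normalizing this over all sequences is what makes direct
    inference costly, motivating the distillation step."""
    log_linear = sum(l * f for l, f in zip(lambdas, global_features(sequence)))
    return ar_log_prob(cond_probs) + log_linear
```

With a positive weight on the palindrome feature, a palindromic sequence receives a higher unnormalized score than a non-palindromic one of equal autoregressive probability, illustrating how global features can reweight the base model.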