FOR SUMMARIZING LONG DOCUMENTS
International Conference on Learning Representations (ICLR) 2021
Blog: Debiasing large pretrained language models using distributional control
Seminar: Perceptual robot learning
2020 International Conference on Robotics and Automation (ICRA)
A novel framework for controlled NLG called 'Generation with Distributional Control' offers great generality in the types of constraints that can be imposed and has large potential to remedy the problem of bias in language models.
Papers and activities of NAVER LABS Europe at EACL 2021: NLG, scaling to larger contexts, the cost of supervised data, and controlling the output, plus ASR at the AfricaNLP workshop.
Our Global BERT-based Transformer architecture fuses global and local information at every layer, resulting in a reading comprehension model that achieves a deeper understanding of long documents and enables flexibility for downstream tasks.