ACL is the flagship event of the Association for Computational Linguistics and the most prestigious conference dedicated to natural language processing (NLP). It rotates between continents every year, alternating among Asia/Pacific, the Americas and Europe/Africa.
This year it was held in Melbourne, Australia. We asked a jet-lagged Matthias Gallé, who heads up our NLP team, some questions about his experience there.
Interest in NLP has been growing quite a bit over the last few years. Was there anything that surprised you about ACL from a community perspective?
Oh, yes: the number and size of the Asian companies there. Alongside BAT (Baidu, Alibaba, Tencent) and the more traditional corporate giants (Samsung, Huawei, Recruit), there were many companies that haven't been around that long, i.e. less than 20 years, like JD, Bytedance, CVTE and, oh yes, Naver. They/we are investing heavily in AI-related technology, which is where NLP sits.
What about the science? What were the popular themes this year?
It’s always very hard to say. You can only observe so much of such a large conference by yourself. There were six parallel tracks, so I’m not going to even pretend I saw most of the presentations. But from what I did see, my general feeling was that, after a few crazy years of rapid evolution and hype around dialogue, machine translation and neural methods, the pace has slowed a bit. The papers were more reflective, with a certain amount of analysis of what doesn’t work (for now anyway) and suggestions on how to overcome the problems. Here are some of them:
How to combine linguistic knowledge with deep learning is an obvious challenge, with a workshop dedicated to the topic and a keynote by Anton van den Hengel.
There were a lot of papers on personalizing machine learning, be it in translation (based on work which started in our team some years ago), auto-completion or dialogue. The second keynote (“Who is the Bridge Between the What and the How”) was around that topic as well.
I counted at least six papers that try to specialize word embeddings. On the one hand, that shows the impact those methods have had on practitioners; on the other, it shows their shortcomings when applied to specific domains, and in particular to sentiment analysis (it’s a problem when your closest word to cheap is expensive; a small illustration follows this list).
Two areas which have clear potential are neural language generation and machine reading, not least because neural methods have given us tools we didn’t have in the past. But they have their problems too. For neural language generation we know how to generate fluent text very well, but it’s not always adequate and it suffers from problems such as hallucination and repetition. Yejin Choi gave a good overview in a much-praised talk at one of the workshops. For machine reading, existing datasets (SQuAD, CNN/Daily Mail) have received a fair share of criticism, while newer ones (NarrativeQA) are extremely challenging. Many of the talks and discussions at the related MRQA 2018[i] workshop were precisely about the characteristics and issues of existing test beds.
The workshop on Open Source Software[ii] for NLP is also worth calling out. It showed the growing impact open source has in the area (in line with the same trend in machine learning at large). An excellent talk by Joel Nothman, drawing on his experience with nltk, scipy, pandas and ipython, mentioned some of the consequences OSS has on how science is done in the field (e.g. decisions on which algorithms to include, default parameters, API design, etc.).
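To make the word-embedding point above concrete, here is a minimal sketch (not from the interview) of how generic distributional embeddings place antonyms such as cheap and expensive close together, which is exactly the shortcoming the embedding-specialization papers target. It assumes gensim and its downloadable pretrained GloVe vectors ("glove-wiki-gigaword-100"); the model name and probe words are illustrative choices, not anything presented at ACL.

```python
# Minimal sketch: antonyms that share contexts end up with very similar
# generic embeddings, which hurts sentiment-oriented applications.
import gensim.downloader as api

# Pretrained GloVe vectors (downloaded on first use).
vectors = api.load("glove-wiki-gigaword-100")

# Cosine similarity of an antonym pair vs. an unrelated pair.
print("cheap ~ expensive:", vectors.similarity("cheap", "expensive"))
print("cheap ~ banana:   ", vectors.similarity("cheap", "banana"))

# Nearest neighbours of "cheap" typically mix synonyms and antonyms;
# specialised / retrofitted embeddings try to pull the antonyms apart.
print(vectors.most_similar("cheap", topn=5))
```

Because cheap and expensive occur in nearly identical contexts, distributional training gives them similar vectors, so a sentiment model built directly on such embeddings can struggle to tell them apart.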
Is there anything you came home with that you’re sure is going to be important for NLP in ambient intelligence?
Extracting relevant information from text and presenting it to humans in a succinct and intuitive way continues to be a big, relevant topic. We can do this on larger and more complex documents than before, and user-generated content and privacy policies are two areas we’re working on. There’s still a fair amount of work in dialogue, although maybe a little less than a few years back. Doing all of this right is critical in a world of ambient intelligence.
[i] Co-organized by Minjoon SEO, Univ. Washington and NAVER Clova
[ii] Co-organized by Lucy PARK, NAVER Corp.
About the author: Matthias Gallé leads the Natural Language Processing group at NAVER LABS Europe.