NeurIPS 2018 – Part 4/4: Machine Learning for Creativity and Design – NAVER LABS Europe

The Thirty-second Annual Conference on Neural Information Processing Systems
Highlights of what we saw at this year’s conference – Part 4/4
Machine Learning for Creativity and Design & Integration of Deep Learning Theory Workshops

Machine Learning for Creativity and Design.

In his invited talk, Kenneth O. Stanley (UCF) argued that, beyond generating impressive artistic pieces, creativity is needed in machine learning to solve core problems. For him, "intelligence is creativity" and "to solve is to create". In many situations the so-called deception principle applies: the stepping stones that lead to a result can look very different from the result itself, so optimizing a measure of how close we are to the desired outcome may not work. He illustrated this idea with a generative neural network breeding project, where people can visualize images produced by a generative neural network with random weights. When a person picks the image they like most, the weights of the corresponding network are randomly perturbed to create a new set of images, and so on. After a dozen iterations of this process (or at most a couple of hundred), amazing images were generated. What the choice sequences leading to these images had in common was that people were not actively seeking the end result. This observation inspired new algorithms that have no specific objective and instead follow a "novelty search" principle. The idea is behind the paper "Robots that can adapt like animals", which made the cover of Nature in 2015. Another example is the recently released Go-Explore algorithm, which beats previous algorithms and humans at the Atari games Montezuma's Revenge and Pitfall (two challenging benchmarks for deep reinforcement learning). Note that to apply this idea one needs to keep a diverse set of "stepping stones" that have "potential" and explore them further, so a measure of this potential must be defined.
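To make the "novelty search" principle concrete, here is a minimal, hypothetical sketch (not Stanley's actual implementation): candidates are kept as "stepping stones" purely because their behaviour differs from everything seen so far, with no objective measuring closeness to a target. The `novelty` score, the mutation operator and the threshold are all illustrative assumptions.

```python
import random

def novelty(behavior, archive, k=3):
    """Novelty score: mean distance to the k nearest behaviors in the archive.
    An empty archive makes everything maximally novel."""
    if not archive:
        return float("inf")
    dists = sorted(abs(behavior - b) for b in archive)
    return sum(dists[:k]) / len(dists[:k])

def novelty_search(evaluate, mutate, seed, steps=200, threshold=0.5):
    """Search driven by novelty alone: a child is kept as a stepping stone
    if its behavior is sufficiently unlike anything in the archive.
    There is no objective function pointing at a desired end result."""
    archive = []       # behaviors of stepping stones kept so far
    population = [seed]
    for _ in range(steps):
        parent = random.choice(population)   # pick any stepping stone
        child = mutate(parent)               # randomly perturb it
        b = evaluate(child)                  # observe its behavior
        if novelty(b, archive) > threshold:  # novel enough? keep it
            archive.append(b)
            population.append(child)
    return archive
```

On a toy 1-D "genome" (`evaluate` is the identity, `mutate` adds Gaussian noise), the archive steadily accumulates diverse behaviours, mirroring how the image-breeding sequences wander without ever aiming at a final picture.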

In the Integration of Deep Learning Theory workshop, K. Chaudhuri (UCSD) talked about explaining two phenomena in deep learning. The first question was what GANs are learning and how that relates to classical statistical methods. Chaudhuri showed that restricted f-GANs are equivalent to moment matching when there is no model mismatch, and to minimizing an f-divergence subject to matched moments when there is. The second question concerned the existence of adversarial examples: slight, strategic modifications of test inputs that cause misclassification, a very popular topic right now. There are many proposed attacks, many defences (which keep being broken) and a handful of certified ones. The question the speaker addressed was why adversarial examples exist. She distinguished three cases: they can be due to the data distribution, to too few samples, or to a bad algorithm. Each case calls for a different kind of robustness, namely distributional robustness, finite-sample robustness or algorithmic robustness. A more precise analysis was provided for nearest-neighbour algorithms, which you can read about in the ICML 2018 paper.
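To illustrate what a "slight strategic modification" means, here is a toy example (our own, not from the talk): a fast-gradient-sign-style perturbation against a simple linear classifier. The weights, input and step size `eps` are invented for illustration; real attacks target deep networks in the same spirit.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast-gradient-sign-style attack on the linear classifier sign(w.x + b).
    The margin is y * (w.x + b); its gradient w.r.t. x is y * w, so stepping
    each coordinate by eps against sign(y * w) decreases the margin fastest
    under an L-infinity budget."""
    grad_sign = np.sign(w) * y
    return x - eps * grad_sign

# A correctly but barely classified point: margin y*(w.x + b) = 0.1.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.6, 0.5])
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.06)
clean_pred = np.sign(w @ x + b)      # +1: correct on the clean input
adv_pred = np.sign(w @ x_adv + b)    # -1: flipped by a 0.06 shift per coordinate
```

A per-coordinate shift of 0.06 (imperceptible in image space) is enough to cross the decision boundary, which is exactly why points lying near the boundary of the data distribution are one of the three sources of adversarial vulnerability Chaudhuri described.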


Leonid Antsfeld (left), Quentin Grail, Matthias Galle & Chris Dance at the back (right), all at the NAVER LABS Europe booth at NeurIPS 2018.

So, that’s the end of the whirlwind tour of some of the sessions we enjoyed at NeurIPS. When Matthias, Chris, Sofia, Julien and Quentin weren’t in a session, they hung out at the NAVER LABS Europe booth to give machine reading and NLP demos of our technology. They also helped Leonid, who stoically spent 6 days explaining what we’re up to in the LABS in France and handing out our cool neck gaiters (aka buffs) to provide some shelter from the damp and cold :-).

Highlights of what we saw at this year’s conference  –  Part 1/4 Expo Day

Highlights of what we saw at this year’s conference  –  Part 2/4 Visualization and Machine Learning

Highlights of what we saw at this year’s conference  –  Part 3/4 Robotics and Optimization