Julien Perez: To give a short answer I would say yes, but I’ll try to develop it a bit because I think it’s an important point. First of all, I’ve had the opportunity to work in the field of deep learning for almost 15 years now, and I started at the time when people basically believed that a neural network could do anything. When deep learning started to grow after its first noticeable successes in speech recognition (with LSTMs) and in vision (with CNNs), we started to realise that adopting deep learning – some people call it differentiable programming but, whatever – led to overall improvement in each domain of application. Let me give a very specific example that occurred recently. Deep learning applied to text gives us the transformer. And now we realise that this new kind of ‘convolution’ – I will not go into the details – can actually make a lot of sense not only for text but also for images. And, as we go towards using deep learning for robotics, we see that it doesn’t actually work as well as one could have imagined – but – we’re making progress, we’re starting to better understand why it doesn’t work and what the limitations are, and we’re improving the paradigm of deep learning (like we did in the past), but in this case for a new field of application, which is robotics. And what’s interesting with respect to robotics (as you said) is that it’s very difficult to simulate. You end up having to deal with constraints that were less present when classifying images, for example, or even translating text – which are also very challenging tasks of course – but, as we go towards robotics, we realise that there are a lot of new challenges we have to deal with for real-world applications of machine learning algorithms – in the context of an embodied agent that we call a robot.
But one thing that makes me pretty optimistic, I would say, is that we’re beginning to understand the problems – we’re starting to understand the limitations of the paradigm that we have at hand, whether it’s reinforcement learning or deep learning in general. So, there’s still a lot of work to do before having an autonomous embodied agent in a human-crowded environment, but I think we’re going in the right direction and, for simple tasks, we may even very soon start to have embodied agents functioning in environments beyond the factory. I think it’s also a matter of adoption – as we start to have embodied agents in more human-crowded places like a café or a bar – we’ll make progress in our understanding of this difficulty of grounding machine learning in the real world and we’ll get better. For example, if I take the field of computer vision, the more we started using it, the more people started to adopt the models – not forgetting of course the frameworks like TensorFlow or PyTorch that make adoption easier – and the more people use them, the more we learn about the capabilities but also the limitations of the models, and the better the paradigm becomes. So, to conclude, there is still a lot of ground to cover but we’re going in the right direction.