AI for Robotics workshop

2nd NAVER LABS Europe International Workshop on AI for Robotics
29th – 30th November 2021
Online event

Workshop Registration
This edition of the workshop is open and free of charge, but participants are required to register through this registration form. REGISTRATION IS CLOSED.

Confirmed speakers:

 


Mathieu Aubry, École des Ponts ParisTech
Mathieu Aubry is a tenured researcher in the Imagine team of École des Ponts. His work focuses mainly on Computer Vision and Deep Learning, and their intersection with Computer Graphics, Machine Learning, and Digital Humanities. His PhD on 3D shape representations at ENS was co-advised by Josef Sivic (INRIA) and Daniel Cremers (TUM). In 2015, he spent a year as a postdoc with Alexei Efros at UC Berkeley.

Analysis by synthesis for interpretable object discovery
I will present our recent work on analyzing the content of image collections by learning a simple prototype-based model of images. I will start by introducing the idea and framework of Deep Transformation Invariant image analysis in the case of image clustering [1], where I will show that a simple modification of the standard K-means algorithm can lead to state-of-the-art image clustering, while computing distances in pixel space and remaining easy to interpret. I will then show that the same idea can be used to learn 3D shape models and analyze large unstructured 3D model collections [2]. Finally, I will explain how the idea can be extended to perform object discovery [3], decomposing every image in a collection into layers derived from a small set of image prototypes. I will show that this can be applied to real-world data, such as collections of Instagram images, and provide models and segmentations of repeated objects.
[1] Deep Transformation-Invariant Clustering, T. Monnier, T. Groueix, M. Aubry, NeurIPS 2020, http://imagine.enpc.fr/~monniert/DTIClustering/
[2] Representing Shape Collections with Alignment-Aware Linear Models, R. Loiseau, T. Monnier, M. Aubry, L. Landrieu, arXiv 2021, https://romainloiseau.github.io/deep-linear-shapes/
[3] Unsupervised Layered Image Decomposition into Object Prototypes, T. Monnier, E. Vincent, J. Ponce, M. Aubry, ICCV 2021, https://imagine.enpc.fr/~monniert/DTI-Sprites/
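To make the transformation-invariant clustering idea of [1] concrete, here is a deliberately tiny sketch (not the authors' code): integer image shifts stand in for the learned transformation networks, and each image is assigned to the prototype that matches it best in pixel space after the best-aligning shift.

```python
# Toy "transformation-invariant" K-means: prototypes are compared to each image
# under a small family of transformations (here: circular pixel shifts), and the
# best-aligned pixel-space distance drives both assignment and prototype updates.
# Illustrative sketch only; the actual method in [1] learns the transformations.
import numpy as np

def shift(img, dx, dy):
    """Circularly shift an image by (dx, dy) pixels."""
    return np.roll(np.roll(img, dx, axis=0), dy, axis=1)

def ti_kmeans(images, k, shifts=range(-2, 3), iters=20, seed=0):
    """Toy transformation-invariant K-means on an (N, H, W) float array."""
    rng = np.random.default_rng(seed)
    protos = images[rng.choice(len(images), k, replace=False)].astype(float)
    for _ in range(iters):
        # E-step: best prototype and best shift per image (pixel-space L2).
        assign, best = [], []
        for img in images:
            d, j, s = min((np.sum((img - shift(p, dx, dy)) ** 2), j, (dx, dy))
                          for j, p in enumerate(protos)
                          for dx in shifts for dy in shifts)
            assign.append(j)
            best.append(s)
        # M-step: update each prototype from its members, aligned back by the
        # inverse of their best shift.
        for j in range(k):
            members = [shift(img, -s[0], -s[1])
                       for img, a, s in zip(images, assign, best) if a == j]
            if members:
                protos[j] = np.mean(members, axis=0)
    return protos, np.array(assign)
```

In the actual method, the hand-coded shift family is replaced by small networks that predict per-sample transformations (e.g. spatial and color transformations), learned jointly with the prototypes by gradient descent.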


Joschka Boedecker, Albert-Ludwigs-Universität Freiburg
Joschka Boedecker studied computer science at the University of Koblenz-Landau, Germany, and artificial intelligence at the University of Georgia, USA. He received his PhD degree in engineering from Osaka University, Japan, in 2011, and continued to work there as a postdoc until 2012. In 2013, he joined the Machine Learning Lab of Martin Riedmiller at the University of Freiburg, Germany, as a postdoc, and later led the lab as interim professor from 2015 to 2017. Since fall 2017, he has held a position as assistant professor of neurorobotics at the University of Freiburg. His research interests are at the intersection of machine learning and robotics, with a focus on deep reinforcement learning.

A Q-Function decomposition for faster learning, interpretable reward design, and model-free Inverse Reinforcement Learning
Off-policy Reinforcement Learning is a promising paradigm for learning-based robot control. Despite notable recent progress, however, it still suffers from poor data-efficiency, and it remains difficult to specify reward functions accurately and intuitively for many tasks. In this talk, I will present our work on a Q-function decomposition which learns separate short-term and long-term components. I will show how it enables improved learning speed through separable time scales for optimization, interpretable constraint formulations, and a model-free variant of a recently introduced Inverse Reinforcement Learning algorithm for learning from demonstrations efficiently. I will illustrate these results for various robot learning and autonomous driving tasks.
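As a generic illustration of how a Q-value can be split into components on different time scales, the snippet below forms an n-step target whose first term depends only on near-term rewards and whose second term is a discounted long-term bootstrap. This is a hedged sketch of the general idea only, not necessarily the decomposition used in the work presented in this talk.

```python
# Generic n-step split of a Q target into a short-term and a long-term part.
# Illustrative only; names and the exact decomposition are assumptions.
def decomposed_q_target(rewards, q_bootstrap, gamma=0.99):
    """rewards: the next n observed rewards r_0..r_{n-1};
    q_bootstrap: value estimate at the state reached after n steps."""
    n = len(rewards)
    q_short = sum(gamma ** t * r for t, r in enumerate(rewards))  # short-term component
    q_long = gamma ** n * q_bootstrap                             # long-term component
    return q_short, q_long, q_short + q_long                      # full target

# Example: three observed rewards, then bootstrap from a long-term critic.
short, long_term, total = decomposed_q_target([1.0, 0.0, 0.5], q_bootstrap=10.0)
```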


Andrea Cavallaro, Queen Mary University of London, UK
Andrea Cavallaro is Professor of Multimedia Signal Processing and the founding Director of the Centre for Intelligent Sensing at Queen Mary University of London, UK. He is a Fellow of the International Association for Pattern Recognition (IAPR) and a Turing Fellow at the Alan Turing Institute, the UK National Institute for Data Science and Artificial Intelligence. He is Editor-in-Chief of Signal Processing: Image Communication; Chair of the IEEE Image, Video, and Multidimensional Signal Processing Technical Committee; an IEEE Signal Processing Society Distinguished Lecturer; an elected member of the IEEE Video Signal Processing and Communication Technical Committee; and a Senior Area Editor for the IEEE Transactions on Image Processing.

CORSMAL: Collaborative Object Recognition, Shared Manipulation And Learning
Acoustic and visual sensing can support the contactless estimation of the weight of a container and the amount of its content while a person manipulates them, prior to handover to a robot. However, opaqueness and transparency (of both the container and the content) and the variability of materials, shapes and sizes make this problem challenging. I will present an open framework to benchmark methods for the estimation of the capacity of a container and the type, mass, and amount of its content. The framework includes a dataset, well-defined tasks and performance measures, baselines and state-of-the-art methods.


Sonia Chernova, Georgia Tech
Sonia Chernova is an Associate Professor in the College of Computing at Georgia Tech. She directs the Robot Autonomy and Interactive Learning lab, where her research focuses on the development of intelligent and interactive autonomous systems. Chernova’s contributions span robotics and artificial intelligence, including semantic reasoning, adaptive autonomy, human-robot interaction, and explainable AI. She has authored over 100 scientific papers and is the recipient of the NSF CAREER, ONR Young Investigator, and NASA Early Career Faculty awards.

Beyond the Label: Robots that Reason about Object Semantics
Reliable operation in everyday human environments – homes, offices, and businesses – remains elusive for today’s robotic systems. A key challenge is diversity: no two homes or businesses are exactly alike. However, despite the innumerable unique aspects of any home, there are many commonalities as well, particularly in how objects are placed and used. These commonalities can be captured in semantic representations and then used to improve the autonomy of robotic systems by, for example, enabling robots to infer missing information in human instructions, search for objects more efficiently, or manipulate objects more effectively. In this talk, I will discuss recent advances in semantic reasoning, focusing particularly on the semantics of everyday objects, household environments, and the development of robotic systems that intelligently interact with their world.


Sungjoon Choi, Department of AI, Korea University
Sungjoon Choi is currently an assistant professor in the Department of AI at Korea University. Previously, he was a postdoctoral researcher at Disney Research and a research scientist at Kakao Brain. He received his PhD in EECS from Seoul National University.

Towards a Natural Motion of a Robot
In this talk, we will look at several research topics on generating natural motions for human-like robots.

 


Angela Dai, Technical University of Munich
Angela Dai is an Assistant Professor at the Technical University of Munich. Her research focuses on understanding how the 3D world around us can be modeled and semantically understood, leveraging generative deep learning towards enabling understanding and interaction with real-world 3D/4D scenes for content creation and virtual or robotic agents. Previously, she received her PhD in computer science from Stanford in 2018 and her BSE in computer science from Princeton in 2013. Her research has been recognized through a ZDB Junior Research Group Award, an ACM SIGGRAPH Outstanding Doctoral Dissertation Honorable Mention, as well as a Stanford Graduate Fellowship.

Learning from Synthetic Priors for Real-world 3D Scene Understanding

 


Andreas Geiger, University of Tübingen
Andreas Geiger is a full professor at the University of Tübingen and a group leader at the Max Planck Institute for Intelligent Systems. Prior to this, he was a visiting professor at ETH Zürich and a research scientist at MPI-IS. He studied at KIT, EPFL and MIT and received his PhD degree in 2013 from the Karlsruhe Institute of Technology. His research interests are at the intersection of 3D reconstruction, motion estimation, scene understanding and sensory-motor control. He maintains the KITTI vision benchmark.

Driving with Attention
How should representations from complementary sensors be integrated for autonomous driving? Geometry-based sensor fusion has shown great promise for perception tasks such as object detection and motion forecasting. However, for the actual driving task, the global context of the 3D scene is key: a change in traffic light state can affect the behavior of a vehicle geometrically distant from that traffic light. Geometry alone may therefore be insufficient for effectively fusing representations in end-to-end driving models. In this talk, I will demonstrate that existing sensor fusion methods underperform in the presence of a high density of dynamic agents and in complex scenarios that require global contextual reasoning, such as handling oncoming traffic from multiple directions at uncontrolled intersections. To tackle this challenge, I will present TransFuser, a novel Multi-Modal Fusion Transformer that integrates image and LiDAR representations using attention. In the second part of the talk, I will present NEural ATtention fields (NEAT), a novel representation that enables reasoning about the semantic, spatial, and temporal structure of the scene. Both models demonstrate state-of-the-art driving performance on CARLA.
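To illustrate the attention-based fusion idea (a minimal sketch, not the actual TransFuser architecture): feature maps from an image backbone and a LiDAR backbone are flattened into tokens, concatenated, and processed by a transformer encoder, so information can flow between any image location and any LiDAR location regardless of geometric distance. All dimensions below are illustrative assumptions.

```python
# Minimal cross-modal fusion with joint self-attention over image and LiDAR tokens.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim=256, heads=8, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, img_feats, lidar_feats):
        # img_feats:   (B, C, Hi, Wi) from an image backbone (assumed given)
        # lidar_feats: (B, C, Hl, Wl) from a LiDAR bird's-eye-view backbone (assumed given)
        img_tok = img_feats.flatten(2).transpose(1, 2)    # (B, Hi*Wi, C)
        lid_tok = lidar_feats.flatten(2).transpose(1, 2)  # (B, Hl*Wl, C)
        fused = self.encoder(torch.cat([img_tok, lid_tok], dim=1))
        # Split the jointly attended tokens back into the two modalities.
        return fused[:, :img_tok.shape[1]], fused[:, img_tok.shape[1]:]

# Example with random tensors standing in for backbone outputs:
fusion = AttentionFusion()
img_out, lidar_out = fusion(torch.randn(2, 256, 8, 8), torch.randn(2, 256, 8, 8))
```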


Hatice Gunes, University of Cambridge, UK
Hatice Gunes (Senior Member, IEEE) received the Ph.D. degree in computer science from the University of Technology Sydney, Australia. She is a Professor with the Department of Computer Science and Technology, University of Cambridge, UK, leading the Affective Intelligence and Robotics (AFAR) Lab. Her expertise is in the areas of affective computing and social signal processing, cross-fertilizing research in multimodal interaction, computer vision, signal processing, machine learning and social robotics. Dr Gunes’ team has published over 125 papers in these areas (H-index=34, citations > 5,900) and has received various awards and competitive grants, with funding from the Engineering and Physical Sciences Research Council UK (EPSRC), Innovate UK, British Council, the Alan Turing Institute and the EU Horizon 2020 programme. Dr Gunes is the former President of the Association for the Advancement of Affective Computing (2017-2019), the General Co-Chair of ACII 2019, and the Program Co-Chair of ACM/IEEE HRI 2020 and IEEE FG 2017. She also served as the Chair of the Steering Board of IEEE Transactions on Affective Computing (2017-2019) and as a member of the Human-Robot Interaction Steering Committee (2018-2021). In 2019, Dr Gunes was awarded the prestigious EPSRC Fellowship to investigate adaptive robotic emotional intelligence for well-being (2019-2024) and was named a Faculty Fellow of the Alan Turing Institute – the UK’s national centre for data science and artificial intelligence.

Data-driven Robot Socio-emotional Intelligence
Designing artificially intelligent systems and interfaces with socio-emotional skills is a challenging task. Progress in industry and developments in academia provide a positive outlook; however, the artificial social and emotional intelligence of current technology is still limited. My lab’s research has been pushing the state of the art in a wide spectrum of research topics in this area, including the design and creation of new datasets; novel feature representations and learning algorithms for sensing and understanding human nonverbal behaviours in solo, dyadic and group settings; theoretical and practical frameworks for lifelong learning and long-term human-robot interaction with applications to wellbeing; and solutions to mitigate the bias that creeps into these systems. In this talk, I will present my research team’s explorations specifically in the area of continual learning for affective robotics.


Martial Hebert, Carnegie Mellon University
Martial Hebert is a Professor of Robotics and Dean of the School of Computer Science at Carnegie Mellon University. His research interests include computer vision and robotics, especially recognition in images and video data, model building and object recognition from 3D data, and perception for mobile robots and for intelligent vehicles. His group has developed approaches for object recognition and scene analysis in images, 3D point clouds, and video sequences. In the area of machine perception for robotics, his group has developed techniques for people detection, tracking, and prediction, and for understanding the environment of ground vehicles from sensor data. He currently serves as Editor-in-Chief of the International Journal of Computer Vision.

Issues in robust AI for robotics
The past decade has seen a remarkable increase in the level of performance of AI techniques, including through the introduction of effective deep learning techniques. This has led to a rapid expansion of application opportunities. However, translating this progress into operational autonomous systems, e.g., robotics systems that are constrained in computation and have critical failure modes, faces specific challenges. This talk will explore some of the critical questions to be addressed for deployable AI. These include introspection/self-awareness of performance, rapid learning and adaptation, low-sample/reduced-supervision learning, and modeling bias and uncertainty in human-generated data.


Jemin Hwangbo, Korea Advanced Institute of Science and Technology (KAIST)
Jemin Hwangbo is an Assistant Professor at KAIST. He received his Bachelor’s degree in Mechanical Engineering from the University of Toronto, and his Master’s and PhD degrees in Mechanical Engineering from ETH Zurich.

Control of Legged Robots using Reinforcement Learning
Legged robots pose one of the greatest challenges in robotics. Dynamic and agile maneuvers of animals cannot be imitated by existing methods that are crafted by humans. A compelling alternative is reinforcement learning, which requires minimal craftsmanship and promotes the natural evolution of a control policy. However, so far, reinforcement learning research for legged robots has mainly been limited to simulation, and only a few, comparably simple examples have been deployed on real systems. The primary reason is that training with real robots, particularly with dynamically balancing systems, is complicated and expensive. Recent algorithmic improvements have made simulation cheaper and more accurate at the same time. Leveraging such tools to obtain control policies is thus a seemingly promising direction. However, a few simulation-related issues have to be addressed before utilizing them in practice. The biggest obstacle is the so-called reality gap: discrepancies between the simulated and the real system. Hand-crafted models often fail to achieve reasonable accuracy due to the complexity of the actuation systems of existing robots. This talk will focus on how such obstacles can be overcome. The main approaches are twofold: a fast and accurate algorithm for solving contact dynamics, and a data-driven simulation-augmentation method using deep learning. These methods are applied to the ANYmal robot, a sophisticated medium-dog-sized quadrupedal system. Using policies trained in simulation, the quadrupedal machine achieves locomotion skills that go beyond what had been achieved with prior methods: ANYmal is capable of precisely and energy-efficiently following high-level body velocity commands, running faster than ever before, and recovering from falls even in complex configurations.
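One concrete example of such a data-driven simulation augmentation is an actuator network: a small model fitted to logs from the real robot that maps a short history of joint position errors and velocities to the torque the physical actuator actually produces, and then replaces the idealized actuator model inside the simulator. The sketch below uses illustrative layer sizes and history length, not the exact published architecture.

```python
# Hedged sketch of an actuator network for simulation augmentation.
import torch
import torch.nn as nn

class ActuatorNet(nn.Module):
    def __init__(self, history_len=3, hidden=32):
        super().__init__()
        # Input: position error and joint velocity at the last `history_len` steps.
        self.net = nn.Sequential(
            nn.Linear(2 * history_len, hidden), nn.Softsign(),
            nn.Linear(hidden, hidden), nn.Softsign(),
            nn.Linear(hidden, 1),  # predicted joint torque
        )

    def forward(self, pos_err_hist, vel_hist):
        return self.net(torch.cat([pos_err_hist, vel_hist], dim=-1))

# Trained by regressing torques measured on the real robot, then queried by the
# simulator in place of an analytic actuator model.
net = ActuatorNet()
torque = net(torch.zeros(1, 3), torch.zeros(1, 3))  # shape (1, 1)
```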


Sangbae Kim, MIT
Sangbae Kim is the director of the Biomimetic Robotics Laboratory and a professor of Mechanical Engineering at MIT. His research focuses on bio-inspired robot design achieved by extracting principles from animals. Kim’s achievements include creating the world’s first directional adhesive inspired by gecko lizards and a climbing robot named Stickybot that utilizes the directional adhesive to climb smooth surfaces. TIME Magazine named Stickybot one of the best inventions of 2006. One of Kim’s recent achievements is the development of the MIT Cheetah, a robot capable of stable running outdoors up to 13 mph and autonomous jumping over obstacles at the efficiency of animals. Kim is a recipient of best paper awards from the ICRA (2007), King-Sun Fu Memorial TRO (2008) and IEEE/ASME TMECH (2016). Additionally, he received a DARPA YFA (2013), an NSF CAREER award (2014), and a Ruth and Joel Spira Award for Distinguished Teaching (2015).

Physical Intelligence and Human’s Cognitive Biases toward AI
While industrial robots are effective at repetitive, precise kinematic tasks in factories, the design and control of these robots are not suited for the physically interactive tasks that humans perform easily. These tasks require ‘physical intelligence’ through complex dynamic interactions with environments, whereas conventional robots are designed primarily for position control. In order to develop a robot with ‘physical intelligence’, we first need a new type of machine that allows dynamic interactions. This talk will discuss how this new design paradigm enables dynamic interactive tasks. As an embodiment of this robot design paradigm, the latest version of the MIT Cheetah robots and force-feedback teleoperation arms will be presented. These robots are equipped with proprioceptive actuators, a new design paradigm for dynamic robots. This new class of actuators will play a crucial role in developing ‘physical intelligence’ and in future robot applications such as elderly care, home service, delivery, and services in environments unfavorable for humans.


Joohyung Kim, University of Illinois Urbana-Champaign
Joohyung Kim is currently an Associate Professor of Electrical and Computer Engineering at the University of Illinois Urbana-Champaign. His research focuses on design and control of humanoid robots, systems for motion learning in robot hardware, and safe human-robot interaction. He received his BSE and Ph.D. degrees in Electrical Engineering and Computer Science (EECS) from Seoul National University, Korea, in 2001 and 2012, respectively. He was a Research Scientist at Disney Research from 2013 to 2019. Prior to joining Disney, he was a postdoctoral fellow in the Robotics Institute at Carnegie Mellon University for the DARPA Robotics Challenge in 2013. From 2009 to 2012, he was a Research Staff Member at the Samsung Advanced Institute of Technology, Korea, developing biped walking controllers for humanoid robots.

Towards Human-Friendly Robots
The demand for robots that can work closely and interact physically with humans has been growing. Such robots can already be found in public places, such as guiding robots in airports and serving robots in restaurants. However, despite advances in many robotic technologies, there are very few robots for personal use that help humans in daily life. For this, we need a better understanding of home environments and human behaviors, better methods to model and implement these capabilities in robots, and better designs for interacting with humans naturally and safely. In this talk, I will present our efforts to make human-friendly robots and share ongoing projects in this direction.


Beomjoon Kim, Korea Advanced Institute of Science and Technology (KAIST)
Beomjoon Kim is the director of the Intelligent Mobile Manipulation (iM^2) Lab and an Assistant Professor at the Kim Jaechul Graduate School of AI of the Korea Advanced Institute of Science and Technology (KAIST). He obtained his PhD in Computer Science from MIT CSAIL, working on integrating task and motion planning and learning. Before that, he obtained his Master’s at McGill University, working on combining reinforcement learning and learning from demonstrations and applying them to a wheelchair robot. Before that, he received his undergraduate degree in computer science and statistics from the University of Waterloo, dreaming of one day becoming a roboticist. His research goal is to create a mobile manipulator that can work in diverse and unstructured environments.

Applying intuitions from AlphaGo to robot task and motion planning problems
How can we enable robots to efficiently reason both at the discrete task level and at the continuous motion level to achieve high-level goals such as tidying up a room or constructing a building? This is a challenging problem that requires integrated reasoning about combinatorial aspects, such as deciding which object to manipulate, and about the feasibility of each motion, such as collision-free constraints. In this talk, I will present our method for applying intuitions from AlphaGo to solve such challenging problems.


Paul Luff, King’s College London
Professor Paul Luff is a social scientist who has undertaken detailed analysis of work practice and technology. These studies have been undertaken in diverse settings including surgery, general practice consultations, transport control rooms, trading rooms, surveillance centres, and design and architectural practices. In most of these projects he has collaborated with computer scientists and engineers, informing the design and development of advanced technologies including human-robot interaction, AI technologies, ubiquitous technologies and advanced collaboration systems. His approach draws on video-based studies, interaction analysis and quasi-naturalistic experiments. He is currently working on projects related to robotic surgery, autonomous vehicles and approaches to planning and explanation in robotics and AI.

Planning and Situating Actions: challenges for assessing autonomous systems
Lucy Suchman’s seminal work ‘Plans and Situated Actions’ (1987) contrasted different ways of viewing actions and interactions with technology, particularly with ‘intelligent’ systems. Together with related work at the time, it led to a radical shift in how we consider the nature of interaction with, and the capabilities of, technologies. In this talk, we will briefly revisit this work and discuss its relevance to contemporary studies of explainable systems and trustworthy technologies. We will draw on examples of explanations and everyday behaviour to consider the nature of explanations and trust, and discuss research currently underway in two projects.


Jan Peters, TU Darmstadt
Jan Peters is a full professor (W3) for Intelligent Autonomous Systems at the Computer Science Department of the Technische Universitaet Darmstadt. Jan Peters has received the Dick Volz Best 2007 US PhD Thesis Runner-Up Award, the Robotics: Science & Systems – Early Career Spotlight, the INNS Young Investigator Award, and the IEEE Robotics & Automation Society’s Early Career Award as well as numerous best paper awards. In 2015, he received an ERC Starting Grant and in 2019, he was appointed as an IEEE Fellow. Despite being a faculty member at TU Darmstadt only since 2011, Jan Peters has already nurtured a series of outstanding young researchers into successful careers. These include new faculty members at leading universities in the USA, Japan, Germany, Finland and Holland, postdoctoral scholars at top computer science departments (including MIT, CMU, and Berkeley) and young leaders at top AI companies (including Amazon, Google and Facebook). Jan Peters has studied Computer Science, Electrical, Mechanical and Control Engineering at TU Munich and FernUni Hagen in Germany, at the National University of Singapore (NUS) and the University of Southern California (USC). He has received four Master’s degrees in these disciplines as well as a Computer Science PhD from USC. Jan Peters has performed research in Germany at DLR, TU Munich and the Max Planck Institute for Biological Cybernetics (in addition to the institutions above), in Japan at the Advanced Telecommunication Research Center (ATR), at USC and at both NUS and Siemens Advanced Engineering in Singapore. He has led research groups on Machine Learning for Robotics at the Max Planck Institutes for Biological Cybernetics (2007-2010) and Intelligent Systems (2010-2021).

Robot Learning: Quo vadis?
Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only few methods manage to scale to high-dimensional manipulators or humanoid robots. In this talk, we investigate a general framework suitable for learning motor skills in robotics which is based on the principles behind many analytical robotics approaches. It involves generating a representation of motor skills by parameterized motor primitive policies acting as building blocks of movement generation, and a learned task execution module that transforms these movements into motor commands. We discuss learning on three different levels of abstraction: learning of accurate control is needed to execute movements, learning of motor primitives is needed to acquire simple movements, and learning of the task-dependent “hyperparameters” of these motor primitives allows learning complex tasks. We discuss task-appropriate learning approaches for imitation learning, model learning and reinforcement learning for robots with many degrees of freedom. Empirical evaluations on several robot systems illustrate the effectiveness and applicability of learning control on an anthropomorphic robot arm. These robot motor skills range from toy examples (e.g., paddling a ball, ball-in-a-cup) to playing robot table tennis against a human being and manipulating various objects.
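As one concrete instance of a parameterized motor primitive policy, the sketch below rolls out a one-dimensional dynamic movement primitive: a stable spring-damper system pulls the state toward the goal, while a forcing term, expressed as radial basis functions of a decaying phase variable, shapes the trajectory. Gains, basis functions and parameter values are illustrative assumptions, not the speaker's implementation.

```python
# Minimal 1-D dynamic movement primitive (illustrative sketch).
import numpy as np

def rollout_dmp(y0, goal, weights, T=1.0, dt=0.001, alpha=25.0, beta=6.25, tau=1.0):
    """Integrate the primitive and return the position trajectory."""
    n = len(weights)
    centers = np.exp(-3.0 * np.linspace(0.0, 1.0, n))            # RBF centres along the phase
    widths = 1.0 / (np.diff(centers, append=centers[-1] / 2.0) ** 2 + 1e-8)
    y, dy, x = float(y0), 0.0, 1.0                                # position, velocity, phase
    traj = []
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)                # basis activations
        forcing = (psi @ weights) / (psi.sum() + 1e-8) * x * (goal - y0)
        ddy = (alpha * (beta * (goal - y) - dy) + forcing) / tau  # spring-damper + forcing
        dy += ddy * dt
        y += dy * dt
        x += (-3.0 * x / tau) * dt                                # canonical system decay
        traj.append(y)
    return np.array(traj)

# With zero weights the primitive converges smoothly to the goal; learning the
# weights (e.g. from demonstrations or by reinforcement learning) shapes the motion.
trajectory = rollout_dmp(y0=0.0, goal=1.0, weights=np.zeros(10))
```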


Josef Sivic, Czech Technical University in Prague and Inria
Josef Sivic holds a distinguished researcher position at the Institute of Robotics, Informatics and Cybernetics at the Czech Technical University in Prague, where he heads the Intelligent Machine Perception project and the recently established ELLIS Unit Prague. He is currently on leave from a senior researcher position at Inria Paris, where he remains a close external collaborator of the Willow team. He received the habilitation degree from Ecole Normale Superieure in Paris in 2014 and his PhD from the University of Oxford in 2006. After his PhD, he was a postdoctoral associate at the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. He received the British Machine Vision Association Sullivan Thesis Prize, three test-of-time awards at major computer vision conferences, and an ERC Starting Grant.

Learning manipulation skills from instructional videos
People easily learn how to change a flat tire of a car or perform resuscitation by observing other people doing the same task, for example, in an instructional video. This involves advanced visual intelligence abilities such as interpreting sequences of human actions that manipulate objects to achieve a specific task. Currently, however, there is no artificial system with a similar level of cognitive visual competence. In this talk, I will describe our recent progress on learning from instructional videos how people manipulate objects and demonstrate transferring the learnt skill to a robotic manipulator.


Pierre-Brice Wieber, Inria
Pierre-Brice Wieber is a full-time researcher at INRIA Grenoble and has been a visiting researcher at the AIST/CNRS Joint Research Lab in Tsukuba. He has advised 14 PhD students and 6 postdocs on topics covering modeling, optimization and control of autonomous vehicles, humanoid and legged robots, and industrial and collaborative robots. His specific focus of interest is on model-based safety guarantees. He is an IEEE RAS Distinguished Lecturer and has been serving as Associate Editor for IEEE Transactions on Robotics, Robotics and Automation Letters, and conferences such as ICRA and Humanoids.

A mathematical approach to Isaac Asimov’s Three Laws of Robotics
1/ A robot may not injure a human being. 2/ A robot must obey orders except where they conflict with the First Law. 3/ A robot must protect its own existence as long as this does not conflict with the First or Second Law. I propose to discuss how these broad, eight-decade-old statements can be approached and implemented in today’s autonomous vehicles, humanoids, and collaborative robots, introducing a general mathematical approach to provide the corresponding safety guarantees. This will naturally raise the question of models in decision making, and a few ethical issues.


At the end of each session, there will be a panel discussion with the speakers of the session. We will discuss current and future trends and challenges in AI for Robotics. While some questions will be prepared in advance, the audience will have the chance to ask questions as well.

  • Morning Panel 1, Monday November 29, 11:30 AM (UTC+1). Moderator: Martin Humenberger
  • Afternoon Panel 1, Monday November 29, 4:00 PM (UTC+1). Moderator: Gregory Rogez
  • Morning Panel 2, Tuesday November 30, 11:00 AM (UTC+1). Moderator: Julien Perez
  • Afternoon Panel 2, Tuesday November 30, 4:00 PM (UTC+1). Moderator: Gianluca Monaci

Best Poster Prize
To make the workshop more interactive and to give the opportunity to more scientists to actively contribute to the event, we will organize an interactive poster session on gather.town. PhD students and other participants are invited to present and discuss their work on any topic related to the workshop’s themes. The organizing committee will award the best poster a prize of EUR 1000.

The poster session will take place on Tuesday, November 30th 2021, between 1PM and 2PM (UTC+1).

 

Poster Session Program

  • Embedded GPU based ToF image recognition for robots – Benjamin Kelényi, Szilárd Molnár, Levente Tamás
  • ToFNest: Efficient normal estimation for ToF depth cameras – Szilárd Molnár, Benjamin Kelényi, Levente Tamás
  • Next-Best-View estimation for volumetric information gain – Alexandru Pop, Levente Tamás
  • ToF Planar Correction using Convolutional Neural Networks – Marian Pop, Levente Tamás
  • DOPE + PoseBERT: Real-time Hand Mesh Recovery for Animating a Robotic Gripper – Fabien Baradel, Romain Brégier, Philippe Weinzaepfel, Yannis Kalantidis, Grégory Rogez
  • Simultaneous Human Action and Motion Prediction – Kourosh Darvish, Daniele Pucci
  • ADHERENT: Learning Human-like Trajectory Generators for Whole-body Control of Humanoid Robots (***) – Paolo Maria Viceconte, Raffaello Camoriano, Giulio Romualdi, Diego Ferigo, Stefano Dafarra, Silvio Traversaro, Giuseppe Oriolo, Lorenzo Rosasco and Daniele Pucci
  • NeuralDiff: Segmenting 3D objects that move in egocentric videos – Vadim Tschernezki, Diane Larlus, Andrea Vedaldi
  • Online Learning and Control of Dynamical Systems from Sensory Input – Oumayma Bounou, Jean Ponce and Justin Carpentier
  • Learning Dynamic Manipulation Skill from Haptic-Play – Taeyoon Lee, Donghyun Sung, Kyoungyeon Choi, Choongin Lee, Changwoo Park, Keunjun Choi
  • Towards safe human-to-robot handovers of unknown containers – Yik Lung Pang, Alessio Xompero, Changjae Oh, Andrea Cavallaro
  • Learning to Manipulate Tools by Aligning Simulation to Video Demonstration (***) – Kateryna Zorina, Justin Carpentier, Josef Sivic and Vladimir Petrik
  • Differentiable rendering with perturbed optimizers – Quentin Le Lidec, Ivan Laptev, Cordelia Schmid, Justin Carpentier

*** Congratulations !! ***


Monday, November 29

Session | Start time Paris (UTC+1) | Start time Boston (UTC-5) | Start time Seoul (UTC+9)
Opening remarks | 8:50 AM | 2:50 AM | 4:50 PM
Andrea Cavallaro, “CORSMAL: Collaborative Object Recognition, Shared Manipulation And Learning” | 9:00 AM | 3:00 AM | 5:00 PM
Andreas Geiger, “Driving with Attention” | 9:30 AM | 3:30 AM | 5:30 PM
Jan Peters, “Robot Learning: Quo vadis?” | 10:00 AM | 4:00 AM | 6:00 PM
Paul Luff, “Planning and Situating Actions: challenges for assessing autonomous systems” | 10:30 AM | 4:30 AM | 6:30 PM
Josef Sivic, “Learning manipulation skills from instructional videos” | 11:00 AM | 5:00 AM | 7:00 PM
Morning Panel 1 | 11:30 AM | 5:30 AM | 7:30 PM
Break | 12:30 PM | 6:30 AM | 8:30 PM
Sonia Chernova, “Beyond the Label: Robots that Reason about Object Semantics” | 2:00 PM | 8:00 AM | 10:00 PM
Martial Hebert, “Issues in robust AI for robotics” | 2:30 PM | 8:30 AM | 10:30 PM
Angela Dai, “Learning from Synthetic Priors for Real-world 3D Scene Understanding” | 3:00 PM | 9:00 AM | 11:00 PM
Mathieu Aubry, “Analysis by Synthesis for interpretable object discovery” | 3:30 PM | 9:30 AM | 11:30 PM
Afternoon Panel 1 | 4:00 PM | 10:00 AM | 12:00 AM

Tuesday, November 30

Session | Start time Paris (UTC+1) | Start time Boston (UTC-5) | Start time Seoul (UTC+9)
Opening remarks | 8:50 AM | 2:50 AM | 4:50 PM
Beomjoon Kim, “Applying intuitions from AlphaGo to robot task and motion planning problems” | 9:00 AM | 3:00 AM | 5:00 PM
Jemin Hwangbo, “Control of Legged Robots using Reinforcement Learning” | 9:30 AM | 3:30 AM | 5:30 PM
Sungjoon Choi, “Towards a Natural Motion of a Robot” | 10:00 AM | 4:00 AM | 6:00 PM
Joschka Boedecker, “A Q-Function decomposition for faster learning, interpretable reward design, and model-free Inverse Reinforcement Learning” | 10:30 AM | 4:30 AM | 6:30 PM
Morning Panel 2 | 11:00 AM | 5:00 AM | 7:00 PM
Break | 12:00 PM | 6:00 AM | 8:00 PM
Poster Session | 1:00 PM | 7:00 AM | 9:00 PM
Pierre-Brice Wieber, “A mathematical approach to Isaac Asimov’s Three Laws of Robotics” | 2:00 PM | 8:00 AM | 10:00 PM
Hatice Gunes, “Data-driven Socio-emotional Intelligence for Human-Robot Interaction” | 2:30 PM | 8:30 AM | 10:30 PM
Joohyung Kim, “Towards Human-Friendly Robots” | 3:00 PM | 9:00 AM | 11:00 PM
Sangbae Kim, “Physical Intelligence and Human’s Cognitive Biases toward AI” | 3:30 PM | 9:30 AM | 11:30 PM
Best Poster Prize announcement and Afternoon Panel 2 | 4:00 PM | 10:00 AM | 12:00 AM

NAVER LABS is the R&D subsidiary of NAVER Corporation, Korea’s leading internet company and a global tech company with hundreds of millions of users worldwide.

The European branch, NAVER LABS Europe, is the biggest industrial AI research center in France and its sister lab, NAVER LABS Korea, is a leading robotics research organisation with robots like the M1X mapping robot, the service platform AROUND, and the 5G connected robot arm AMBIDEX.

NAVER LABS Korea will operate the world’s first 5G brainless robots at NAVER’s second headquarters, a robot-friendly office building, which will open in Seongnam, Korea in early 2022.

NAVER LABS workshop organizers:

Martin Humenberger
Sangbae Kim
Gianluca Monaci
Julien Perez
Gregory Rogez

Day 1 – Monday, 29th November 2021

Day 2 – Tuesday, 30th November 2021
