The 2nd International Symposium on Human-Robot Interaction (HRI) took place on November 14–15, bringing together an inspiring mix of academic and industry experts. Across two days of insightful sessions, attendees delved into Conversational HRI, Trustworthy and Explainable AI, Design Principles for HRI, and Social Navigation in Public Spaces. A vibrant poster and demo session at NAVER LABS Europe complemented these discussions, fostering dynamic exchanges among students and professionals.
Below, the chairs give an overview of each of their sessions and invite you to explore the full recorded presentations on the Symposium website.
The symposium kicked off with a welcome address by NAVER LABS Europe Director of Science Martin Humenberger and Symposium Program Chair Danilo Gallo, before diving into the first session, which explored how to develop robots capable of engaging in natural, socially aware dialogue.
Conversational HRI faces unique challenges: robots must not only understand and produce language but also interpret and express socio-emotional cues to create effective, human-like interactions.
The three presentations, by Chloe Clavel, Bahar Irfan and Marta Romeo respectively, highlighted the socio-emotional dimensions of interactions, the use of large language models (LLMs) in conversational robots, and the integration of multimodal signals in HRI.
The panel discussion expanded on these topics, beginning with an analysis of the capabilities and limitations of LLMs. While these models enhance conversational abilities, they often fall short in delivering conversational depth, maintaining long-term memory, and grounding interactions in real-world contexts. Social tact (such as effective turn-taking) and the ability to interpret and manage multimodal cues also remain significant challenges.
The discussion emphasised the importance of improving the controllability of LLMs, especially in the context of HRI. Enhanced control is essential to prevent unintended behaviours, particularly in end-to-end systems, where a robot might react inappropriately to a user expressing anger or frustration.
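To make the idea of controllability more concrete, here is a minimal, hypothetical sketch of a control layer sitting between an LLM and a robot's speech output. The function names, affect labels and fallback policy are invented for illustration; they are not taken from the talks or the panel.

```python
# Hypothetical control layer between an LLM and a robot's speech output.
# All names, affect labels and thresholds here are illustrative only.
from dataclasses import dataclass

@dataclass
class Turn:
    user_text: str
    user_affect: str  # e.g. "neutral", "angry", from an upstream perception module

def controlled_reply(llm_generate, turn: Turn) -> str:
    """Constrain the raw LLM output instead of sending it straight to the robot."""
    if turn.user_affect in {"angry", "frustrated"}:
        # De-escalation policy: prefer a vetted template over trusting an
        # end-to-end model to react appropriately to a distressed user.
        return "I'm sorry this is frustrating. Let me find someone who can help."
    reply = llm_generate(turn.user_text)
    return reply if len(reply) <= 300 else reply[:300]  # keep spoken turns short

# Usage with a stubbed model:
print(controlled_reply(lambda t: "Sure, gate B is to your left.",
                       Turn("Where is gate B?", "neutral")))
```

The design choice this sketch illustrates is simply that the LLM's output is treated as a proposal, with an explicit, inspectable policy deciding what the robot actually says.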
Evaluation challenges were also highlighted, with a need for more holistic methods to assess the quality of interactions. Proposed improvements included continuous monitoring of interaction dynamics, the development of novel objective and subjective metrics, and the analysis of micro-events and user reactions across multiple modalities (text, facial expressions, etc.).
Privacy and ethical considerations in HRI were discussed at the end of the session, where participants stressed the importance of designing scenarios that genuinely benefit users while preventing unintended emotional attachment to, or dependency on, robots. Interdisciplinary collaboration was highlighted as crucial for aligning conversational AI innovations with user needs, societal values and ethical standards to ensure meaningful and trustworthy advances in HRI.
The second session, on Trustworthy and Explainable AI, brought together experts to explore the complexities of fostering trust in artificial intelligence systems. The speakers delved into the factors that affect levels of trust in HRI, the double-edged role of anthropomorphic AI and the situated nature of trust.
Antonietta Grasso began by emphasising that trust is a dynamic, context-dependent process driven by users’ practical orientations and ongoing interactions. She introduced the “Trust Mediator”, a framework for aligning AI systems with principles adapted to the specific deployment setting and to user needs through continuous monitoring and feedback.
Alessandra Rossi addressed how factors such as reliability and cultural context shape trust in HRI. Her research showed that human-like robot appearance features and adaptive behaviours can foster trust but can also induce under-trust, for example in disclosing personal information.
Canfer Akbulut examined the risks of anthropomorphism in AI. While human-like cues enhance engagement, they may lead to excessive trust, emotional reliance and privacy concerns. She advocated for safeguards such as clear disclosure of AI capabilities and ethical design principles to prevent harm.
The panel discussion opened with a key question: how to support the continuous collection of feedback on publicly stated ethical principles in order to ensure trustworthy AI. The panellists agreed that embedding adaptive feedback loops is crucial for the ethical oversight needed to uphold trustworthiness principles.
Overall, the session highlighted the importance of aligning AI innovation with societal values to ensure trustworthiness and long-term user confidence. An open question remains on how to design for generalisation while enabling local adaptation and principle monitoring.
The third session brought together designers from various fields of robotics to present and discuss design principles for HRI.
Nazli Cila emphasised the importance of design thinking in improving robot roles and interactions. She presented case studies, including examples of robots used on farms and in restaurants, and highlighted design-driven methodologies as legitimate academic tools, demonstrating their value in advancing the field.
Myeongsoo Shin shared insights on NAVER’s data centre (2nDC) robots in Korea. These robots, now deployed in the field, were developed under the principles of safety, efficiency and adaptability. He explained how, faced with challenges such as COVID-19 restrictions and a data centre still under construction, the team used VR prototyping and remote usability tests to validate safety and ensure seamless workflow integration.
The session continued with Tito Favaro of PAL Robotics, who introduced the TIAGo Pro robot, a flexible platform designed to meet diverse user needs. He highlighted how feedback from businesses, engineers and researchers informed significant design improvements, leading to simplified assembly, enhanced usability and greater scalability.
A panel discussion concluded the session, centred on how design research methodologies can be applied universally across different interaction needs. Members emphasised the critical role of designers in engaging stakeholders early through prototyping and in maintaining clear communication throughout the process.
The last session explored the nature of social interactions in shared and public spaces and the challenges involved in capturing and modelling all the complexity of those interactions.
It began with Barry Brown, a sociologist and computer scientist whose talk focused on the importance of observing interactions between autonomous agents and users or bystanders in naturalistic settings and the risks involved in overestimating the capabilities of current technology to understand and operate within unconstrained environments.
The session continued with Fanjun Bu, whose presentation examined interactions between bystanders and a remotely operated trash-collecting robot in a Wizard-of-Oz experimental setup, focusing on the ways people ascribe intent to and make sense of the robot’s navigation movements. Mattia Racca presented the technical challenges involved in operationalising social norms in a navigation planner for a service robot taking the elevator, and suggested that feature engineering, while not a recent technique, may be useful in bridging the semantic gap between unconstrained environments and machine learning models.
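As a loose illustration of what feature engineering for social navigation can look like (a sketch under invented assumptions, not the planner presented in the talk), hand-crafted features describing the social scene around an elevator door can be combined into a cost term that a standard navigation planner adds to its usual path cost:

```python
# Hypothetical illustration of feature engineering for social navigation.
# The features, weights and scenario are invented for this sketch.
import math

def social_features(robot, people, door):
    """Hand-crafted features encoding elevator etiquette around a door position."""
    d_nearest = min((math.dist(robot, p) for p in people), default=float("inf"))
    n_waiting = sum(1 for p in people if math.dist(p, door) < 1.5)  # people queuing
    blocking = 1.0 if math.dist(robot, door) < 0.8 else 0.0        # robot in the doorway
    return {"inv_clearance": 1.0 / max(d_nearest, 0.1),
            "queue_pressure": float(n_waiting),
            "door_blocking": blocking}

def social_cost(features, weights):
    """Linear cost term a planner can add to its usual path cost."""
    return sum(weights[k] * v for k, v in features.items())

# Usage with made-up positions and weights:
feats = social_features(robot=(1.0, 0.5), people=[(1.5, 1.0), (3.0, 2.0)], door=(1.2, 0.0))
print(social_cost(feats, {"inv_clearance": 2.0, "queue_pressure": 0.5, "door_blocking": 5.0}))
```

The appeal of such features is that they make the planner’s social behaviour inspectable and tunable, which is one way to bridge the semantic gap referred to above.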
The last presentation by Greg Lynn shared the work done by his company to build an extensive dataset that captures pedestrian etiquette in indoor and outdoor public spaces, and how that dataset was used to train Piaggio Fast Forward robots to follow users with smooth, responsive and predictable navigation behaviours.
The symposium closed with a panel in which all the speakers discussed the sensing and actuation requirements for social navigation and the biggest open challenges in the current state of the art.
For a deeper dive into the ideas and discussions from the symposium, we encourage you to watch the full presentations and panels available on the event website. Your feedback and suggestions are invaluable as we shape future editions of this growing event in HRI research.