November 14th & 15th, 2024 - Hybrid
2nd International Symposium
Read about the highlights of the symposium in the blog article written by the symposium session chairs.
Chloé Clavel is a Senior Researcher in the ALMAnaCH team at INRIA Paris (the French national research institute for digital science and technology). Until October 2023, she was a Professor of Affective Computing at LTCI, Telecom-Paris, Institut Polytechnique de Paris, where she coordinated the Social Computing team. Her research interests lie in Affective Computing and Artificial Intelligence, at the crossroads of multiple disciplines including speech and natural language processing, machine learning, and social robotics. She studies computational models of socio-emotional behaviors (e.g., sentiments, social stances, engagement, trust) in interactions, be they human-human (social networks, job interviews) or human-agent (conversational agents, social robots).
A single lapse in social tact on the part of a conversational system (chatbot, voice assistant, social robot) can lead to a decrease in the user’s trust and engagement in the interaction. This lack of social intelligence affects the willingness of a large audience to view conversational systems as acceptable. To understand the state of the user, the affective/social computing research community has drawn on research in artificial intelligence and social science. In recent years, however, the trend has shifted towards a near-monopoly of deep learning methods, which are powerful but opaque, hungry for annotated data, and less suited to integrating social science knowledge. I will present the research we are doing to develop machine learning approaches (from classical approaches to large language models) for modelling the social component of interactions. In particular, I will focus on research that aims to improve the explainability of the models as well as their transferability to new data and new socio-emotional phenomena.
Nazli Cila is an Assistant Professor at the Department of Human-Centered Design at Delft University of Technology, where she seeks to understand how to design symbiotic relationships between humans and embodied AI, such as robots, while preserving individual, ethical and societal values such as autonomy, enrichment, and justice. She is also fascinated by how designers and design researchers produce knowledge. This work occasionally zeroes in on the complex landscape of robot design in academia and industry, and at other times expands to embrace design epistemology and methodology at a broader level. She is the co-director of the AI DeMoS Lab at TU Delft, which investigates how to facilitate the responsible design and use of AI for meaningful democratic engagement.
Designerly thinking and methods are recognized as strategic in advancing robot development, application, and implementation. However, the use of design-specific methods, tools, and techniques is not yet widespread in the HRI community. Drawing from my work in designing human-robot interactions for diverse contexts such as dairy farming and upscale dining, as well as from my studies unpacking HRI design methodologies and epistemology (including questions around what constitutes a design contribution, the nature of knowledge it generates, and its impact), this talk will explore various methods and approaches to designing robots. I will also discuss how to fully harness the potential of design as a discipline to advance HRI.
I am an Assistant Professor at the University of Naples Federico II. I received my PhD as part of the Marie Skłodowska-Curie ETN SECURE project at the University of Hertfordshire (UK). I am co-PI of the AFOSR ERROR project, Project Manager of the Marie Skłodowska-Curie ETN PERSEO, and involved in the scientific coordination of several national and international projects. I was elected a Trustee of RoboCup in 2024, and have been involved in the RoboCup Humanoid League since 2016. I am very active in the scientific community as an organiser of conferences (e.g., Special Session chair at RO-MAN 2024) and events (e.g., special issues and sessions, workshops). My research interests include HRI, social robotics, trust, XAI, and multi-agent systems.
Trust is a fundamental aspect that drives not only human interactions but any human-agent interaction in people’s day-to-day activities. It is therefore understandable that the literature of the last few decades has focused on measuring how much people trust robots, and more generally any agent, in order to foster such trust in these technologies. While it is very important to trust an agent, whether it is another human, a robot or a machine, the embodiment and appearance of robots shape trust in very different ways. Moreover, trust is a complex behaviour that depends on, and is affected by, several factors: those related to the interacting agents (e.g., humans, robots, pets), the agent itself (e.g., capabilities, reliability), the context (e.g., the task), and the environment (e.g., public vs. private vs. working spaces). In this talk, we will explore some of the factors affecting, and affected by, trust in order to build and foster a balanced interaction between humans and agents.
Bahar Irfan is a Postdoctoral Researcher at KTH. Her research focuses on creating personal robots that continually learn and adapt to assist in daily life. Currently, she is working on large language models to create conversational robots. Prior to joining KTH, she held research positions at Disney Research and Evinoks. She has a diverse background in robotics: from personalization in long-term HRI during her PhD at the University of Plymouth and SoftBank Robotics Europe as a Marie Skłodowska-Curie Actions fellow, to user-centered task planning for household robotics during her MSc in computer engineering, and building robots for her BSc in mechanical engineering at Boğaziçi University.
Conversational robots powered by large language models (LLMs) hold promise for providing daily social support to both younger and older adults. However, integrating these models involves navigating multifaceted challenges to achieve smooth, enjoyable, and adaptive interactions. This talk will present recommendations for addressing key challenges in embedding LLMs and foundation models into conversational robots, focusing on technical, ethical, and user experience aspects. Findings from two participatory design studies with older adults will highlight the importance of involving end-users in the development process to ensure these robots align with their unique needs and expectations. Another study will explore mechanisms for smoother turn-taking in conversations and reveal the extent of personal information users might share with robots, underscoring critical privacy and ethical considerations. Additionally, this talk will introduce a new scale for assessing user enjoyment in conversations with a robot and demonstrate how multimodal LLMs could detect user enjoyment in real time to create more adaptive and engaging interactions.
Dr. Mattia Racca is a Research Scientist at NAVER LABS Europe in Grenoble, France. He earned his Doctorate in Robotics from Aalto University, Finland, in 2020, under the supervision of Professor Ville Kyrki, with a thesis titled “Teacher-Learner Interaction for Robot Active Learning”. He holds a B.Sc. and a M.Sc. in Computer Engineering from Politecnico di Torino, Italy. His research primarily explores how AI-driven robots interact with people — whether they are end users, expert users, or bystanders — and investigates the technologies, design strategies, and trade-offs involved in creating effective human-robot interactions.
Taking an elevator is typically uneventful, yet subtly guided by social norms and non-verbal cues that make it an effortless experience for humans. Introducing robots into this shared space requires at least partial automation of these social behaviors to ensure smooth interactions. In this talk, we present our approach to enabling service robots to wait for elevators alongside humans in a socially acceptable manner, while aiming to maintain the uneventfulness of the experience. We discuss how the social norms are operationalized and embedded as engineered features within a data-scarce machine learning pipeline, emphasizing both practical solutions and obstacles faced throughout the process.
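As a purely illustrative aside (this is not the pipeline from the talk), the sketch below shows what encoding a social norm as hand-engineered features in a data-scarce machine learning setup could look like; all feature names, labels, and toy data are hypothetical.

```python
# Hypothetical sketch: hand-engineered "social norm" features with a small,
# interpretable model, as one might do when labelled data is scarce.
# None of this comes from the talk; features and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 60  # deliberately few samples, to mimic data scarcity

# Candidate waiting spots near an elevator, described by engineered features:
X = np.column_stack([
    rng.uniform(0.5, 4.0, n),    # distance_to_door_m
    rng.integers(0, 5, n),       # n_people_already_waiting
    rng.integers(0, 2, n),       # blocks_exit_path (0/1)
])
# Toy labels encoding a simple norm: keep >1.5 m clearance, never block the exit.
y = ((X[:, 0] > 1.5) & (X[:, 2] == 0)).astype(int)

# A low-capacity linear model keeps the learned weights inspectable,
# which matters when the features are meant to map onto explicit norms.
clf = LogisticRegression().fit(X, y)
print("5-fold accuracy:", cross_val_score(LogisticRegression(), X, y, cv=5).mean())
print("feature weights:", clf.coef_[0])
```

The point of the sketch is the design pressure it illustrates: with tens of examples rather than thousands, explicit features plus an interpretable model are often a more defensible choice than end-to-end learning.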
Dr Marta Romeo is an Assistant Professor and Bicentennial Research Leader in Human-Robot Interaction in the School of Mathematical and Computer Sciences at Heriot-Watt University. She received her bachelor’s and master’s degrees in engineering of computing systems at Politecnico di Milano (Italy). She earned her PhD from the University of Manchester on human-robot interaction and deep learning for companionship in elderly care, within the H2020 project MoveCare. She is a Co-I for the UKRI Node on Trust, working on how trust in human-robot interactions is built, maintained and recovered. She has worked on multiparty human-robot interaction within the European project Spring, and she is PI of the TAS-GAIL project, which aims to develop a robotic active listener. She is interested in social robotics, failures and repairs in HRI, and healthcare technologies.
Effective communication in human interactions is facilitated by our ability to seamlessly integrate various non-verbal signals with spoken language. These signals help manage interactions, such as indicating when to take turns, and enhance our understanding of social aspects like emotions and intentions. This capacity to perceive, comprehend, and integrate a wide range of signals is a key trait that defines us as social beings. Therefore, when developing social robots intended for integration into our social environments, it is crucial to equip them with similar capabilities. Over the years, human-robot interaction has utilised techniques from linguistics, natural language processing, computer vision, and social signal processing to try and achieve comparable levels of integration in robotic companions. In this talk, we will explore the progress made in this field, particularly focusing on multi-party interactions, considering human factors and advancements in AI solutions.
Myeongsoo SHIN is a designer experienced in electronic device UX, voice interaction, and multimodal interaction design. He focuses on human-robot interaction, striving to deliver both practical and aesthetic value to digital products.
At NAVER LABS Korea, Myeongsoo SHIN developed robotic delivery services and systems for cafe operations at NAVER’s 1784 office. He later took on product planning and interaction design for the GaRo and SeRo robots, which automate the movement and management of heavy assets at NAVER’s new data center, Gak Sejong. Myeongsoo began his career by designing visible and tangible interactions for consumer electronic devices, including smartphones, smartwatches, and smart TVs. He later expanded into creating intangible interactions, such as conversational UIs and content recommendation features. Combining these experiences, he strives to contribute across various aspects of human-robot interaction. Currently, Myeongsoo is focusing on designing interactions that seamlessly bridge the gap between user expectations and technological capabilities, and on finding the optimal level of intelligence to embody in robot design and behavior.
The 2nDC project is a multidisciplinary initiative by NAVER to establish “Gak Sejong,” NAVER’s second data center, integrating fields like architecture, robotics, and cloud technology. Building on the experience gained from NAVER LABS’ implementation of large-scale robotics systems in the 1784 project, NAVER’s new headquarters, NAVER LABS has developed and applied advanced robotics systems for the 2nDC project. Currently, at Gak Sejong, several robotic solutions are in trial operation. These include “GaRo,” a robot designed to transport servers and assets within the data center; “SeRo,” which manages asset retrieval and storage; “ALT-B,” an autonomous shuttle facilitating easy movement across the data center; and ARC Brain, a cloud service that oversees and coordinates these robotic systems. In this presentation, the application cases of robotics systems within the 2nDC project will be introduced, along with insights and lessons learned, detailing the challenges faced throughout the design, implementation, testing, and deployment phases.
Professor Barry Brown is a research professor at Stockholm University and a Professor at the University of Copenhagen, within the HCC group. At Stockholm he helps to run the STIR group (Stockholm Technology and Interaction Research), which has received funding from Vinnova, SSF, VR, Wallenberg, Microsoft, Nissan, Mozilla and the EU. His two most recent books, published by Sage and MIT Press, focus on how to research the use of digital technology, and on the study and design of leisure technologies. Professor Brown previously worked as the research director of the Mobile Life research centre (2011-2017), and as an associate professor in the Department of Communication at UCSD (2007-2011). He has published over 100 papers in human-computer interaction and social science forums, with five ACM best paper nominations (CHI, CSCW, Ubicomp), one ACM best paper award (CHI) and a recent 10-year impact award from the Ubicomp conference. He has received over $8 million (75 million SEK) in research funding from the UK research councils, NSF, and European and Swedish funding agencies. His research has also been covered in the international press, including the Guardian, Time, the New York Times, the Sydney Morning Herald, Voice of America and Fortune Magazine.
Frank Bu is a Ph.D. candidate at Cornell Tech under the supervision of Prof. Wendy Ju. His research focuses on understanding how people interact with emerging technologies, in particular human-robot interaction and the challenges it presents. His approach involves using the Wizard-of-Oz technique in in-the-wild deployments to simulate robots’ autonomy and elicit natural interaction behaviors. By leveraging the unique interaction data collected through this technique, he aims to bootstrap robots’ social intelligence. Through his research, he aspires to provide valuable guidelines for the design of future technologies that seamlessly integrate into human environments.
With the increasing deployment of robots in public spaces, it is essential to explore how these robots are perceived by the public and how they should respond to various social cues. To address this, researchers must deploy robots in real-world environments and analyze their interactions within those specific contexts. This presentation shares insights from our multiple deployments of trash barrel robots in New York City’s public spaces, providing a detailed look at the entire deployment process—from initial site scouting to field deployment, including both anticipated and unexpected challenges. I will also demonstrate how the data gathered from these deployments has become a valuable resource for advancing research in social intelligence for both social and computer scientists.
This talk discusses the design of robots and their navigation, taking movement with people within a radius of one to two meters as the starting point for design. Topics will include: understanding how people move with one another and with things, and translating this into measurable performance; design specifications for interaction with people that can be translated for use by edge computing devices; structuring and exploiting databases of human motion using ML and AI methods; engineering robots to perform with human-compatible motion; and comprehensive design spanning form, color, materials, motion, lighting, sound, devices, and ergonomics.
Canfer is a Research Scientist at Google DeepMind, working on developing valid and robust evaluations for underexplored model behaviors, capabilities, and potential downstream harms to users. Her public-facing work includes ethical foresight pieces on anthropomorphic AI and persuasion, as well as empirical work surveying the safety evaluation landscape and expanding on red-teaming methodologies. She has a background in psychology and computational social science, and is broadly interested in investigating how methods from both disciplines can be adapted to AI evaluation.
The development of highly capable conversational agents, underwritten by large language models, has the potential to shape user interaction with this technology in profound ways, particularly when the technology is anthropomorphic, or appears human-like. Although the effects of anthropomorphic AI are often benign, anthropomorphic design features also create new kinds of risk. In this talk, I will first explore anthropomorphic features that have been embedded in interactive systems in the past, and leverage this precedent to highlight the current implications of anthropomorphic design. I will then propose research directions for informing the ethical design of anthropomorphic AI.
Juan is a Senior Industrial Designer at PAL Robotics, where he combines human factors, robotics and design research. He was previously Customer Experience Design Lead in ergonomics at Hewlett-Packard and Creative R&D Officer at IDEADED. Juan also teaches at universities and companies, and has published several books and conference papers on the discipline of design. He was a member of the Board of ADIFAD in 2010/11 (appointed by ICSID), and an expert juror for the DELTA international awards in industrial design. He graduated in Industrial Design and holds a Master’s in “Curator of art, design and technology”.
This talk presents the design process of a mobile manipulator robot focused on human-machine interaction from the perspectives of ergonomics and sustainability. To ensure the robot’s adaptability to various scenarios, we applied anthropometric principles and international ergonomics standards that promote quality in the workplace. The primary objective has been to assist and support people in tasks prone to injury from prolonged repetitive movements, equipping the machine with the resources to provide such assistance. The development phase involved multiple iterations of the robot’s arms, head, torso, and mobile base, emphasising the scalability of its production, manufacturing, and assembly. This optimised design enables the effective fabrication of the robot without requiring mass production, maintaining both functionality and ergonomics without relying on large-scale manufacturing. Test results indicate that these enhancements not only enable greater versatility and efficiency in manipulation tasks but also contribute to the product’s sustainability by reducing costs and resource consumption throughout its lifecycle. This work demonstrates the potential of ergonomic and scalable mobile manipulators in robotics applied to human-robot interaction, marking a step towards creating robots that are both functional and sustainable.
She is a Principal Scientist at NAVER LABS Europe, located in France and owned by the Korean internet giant NAVER. NAVER LABS Europe is the largest private AI lab in France. Her work focuses on research in Human-Machine Interaction and Human-Robot Interaction. As Principal Scientist, she plays a cross-functional role across the various research groups within the Interactive Systems group, aiming to define interdisciplinary projects for the application and evaluation of machine learning techniques. Her research contribution lies in identifying user needs and informing technology development to meet them, mainly through qualitative studies. Recently, she has been studying the ethics of AI to inform the design of privacy-aware robotics and trustworthy interaction with AI-enabled technology.
Trust in AI technology relies on a trust-building process that considers both context and the practical reasoning of users. This presentation uses examples from social robots concerning data acquisition and usage and extends these findings to other AI technologies, such as LLMs. It argues that trust is context-dependent and that any development aimed at creating trustworthy AI can benefit from continuous monitoring and system behavior updates based on transparent, locally defined AI principles.
As conversational AI progresses, there is growing interest in integrating it into HRI, allowing robots to engage in natural, socially aware dialogue. However, conversational HRI faces unique challenges: robots must not only understand and produce language but also interpret and express socio-emotional cues to create effective, human-like interactions. Achieving this requires interpretable and controllable models that align with user expectations.
In this session, we’ll look at the socio-emotional aspects of language processing and discuss recent progress in using large language models for conversational robots, aiming to develop robots that encourage positive social interactions and build user trust.
AI-enabled technologies, whether embodied (e.g., robots) or non-embodied (e.g., digital services), are rapidly integrating into our work and daily lives. As these technologies advance, society is calling for trustworthy AI, ensuring that AI services are both reliable and safe. However, there is still uncertainty about how to achieve this, with risks of both over-trusting and under-trusting the technology.
In this session, we will explore key factors influencing user trust, such as anthropomorphism, and discuss how they shape our perceptions of AI. The session will aim to define the trustworthiness problem in actionable terms and propose a vision for how research can guide the design of trustworthy AI technologies.
This session brings together designers from different areas of robotics, each applying unique methodologies—ranging from participatory to interaction and industrial design. Working with various robot typologies, such as humanoid and factory robots, these experts will explore the challenges of creating cohesive and functional robots while balancing technical constraints and user requirements.
The session will explore how they define design principles, collaborate with users and stakeholders, and navigate the balance between functionality and technical limitations. They will look into the different methodologies that shape robot behavior, interaction, and consistency across robot platforms.
Social robots are entering public and semi-public spaces such as roads, offices, hospitals, and shops. These settings present challenges that include, but also go beyond, the possible moments of direct collaborative interaction, as robots need to navigate and occupy the space without intruding on or disrupting the social order that governs those spaces. This session discusses advances toward this goal and the associated methods.
This event is open and free of charge, but participants are required to register. Please select one of the options below.
Ground transportation is provided from Grenoble centre to the workshop venue in Meylan (there and back) on both days, and between the venue and NAVER LABS Europe for the poster session and lunch on Day 1.
NAVER LABS is the R&D subsidiary of NAVER, Korea’s leading internet company and the part of NAVER responsible for creating future technology in AI, robotics, autonomous driving, 3D/HD mapping and AR. The European branch, NAVER LABS Europe, is the biggest industrial AI research center in France and its sister lab, NAVER LABS Korea, is a leading robotics research organisation with robots like AMBIDEX and Rookie.
NAVER opened the world’s first robot-friendly high-rise office building in 2022 as their second HQ in Seongnam, Korea. This building, called ‘1784’, is a massive testbed and ‘tech convergence platform’ where employees and robots coexist on a daily basis.
NAVER LABS Europe, 6-8 chemin de Maupertuis, 38240 Meylan, France
To make robots autonomous in real-world everyday spaces, they should be able to learn, from their interactions within these spaces, how best to execute tasks specified by non-expert users in a safe and reliable way. Doing so requires sequential decision-making skills that combine machine learning, adaptive planning and control in uncertain environments, as well as solving hard combinatorial optimization problems. Our research combines expertise in reinforcement learning, computer vision, robotic control, sim2real transfer, large multimodal foundation models and neural combinatorial optimization to build AI-based architectures and algorithms that improve robot autonomy and robustness when completing everyday complex tasks in constantly changing environments.
For a robot to be useful, it must be able to represent its knowledge of the world, share what it learns and interact with other agents, in particular humans. Our research combines expertise in human-robot interaction, natural language processing, speech, information retrieval, data management and low-code/no-code programming to build AI components that will help next-generation robots perform complex real-world tasks. These components will help robots interact safely with humans and their physical environment, other robots and systems, represent and update their world knowledge and share it with the rest of the fleet.
Visual perception is a necessary part of any intelligent system that is meant to interact with the world. Robots need to perceive the structure, objects, and people in their environment to better understand the world and perform the tasks they are assigned. Our research combines expertise in visual representation learning, self-supervised learning and human behaviour understanding to build AI components that help robots understand and navigate their 3D environment, detect and interact with surrounding objects and people, and continuously adapt themselves when deployed in new environments.
Details on the gender equality index score 2024 (related to year 2023) for NAVER France of 87/100.
1. Difference in female/male salary: 34/40 points
2. Difference in salary increases female/male: 35/35 points
3. Salary increases upon return from maternity leave: Not calculable
4. Number of employees of the under-represented gender among the 10 highest salaries: 5/10 points
The NAVER France targets set in 2022 (Indicator n°1: +2 points in 2024 and Indicator n°4: +5 points in 2025) have been achieved.
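(A check of the total, assuming the index’s standard rescaling rule when an indicator is not calculable: 34 + 35 + 5 = 74 points out of the 85 calculable points, and 74/85 × 100 ≈ 87, matching the published 87/100.)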
The research we conduct on expressive visual representations is applicable to visual search, object detection, image classification and the automatic extraction of 3D human poses and shapes that can be used for human behavior understanding and prediction, human-robot interaction or even avatar animation. We also extract 3D information from images that can be used for intelligent robot navigation, augmented reality and the 3D reconstruction of objects, buildings or even entire cities.
Our work covers the spectrum from unsupervised to supervised approaches, and from very deep architectures to very compact ones. We’re excited about the promise of big data to bring big performance gains to our algorithms but also passionate about the challenge of working in data-scarce and low-power scenarios.
Furthermore, we believe that a modern computer vision system needs to be able to continuously adapt itself to its environment and to improve itself via lifelong learning. Our driving goal is to use our research to deliver embodied intelligence to our users in robotics, autonomous driving, via phone cameras and any other visual means to reach people wherever they may be.