November 14th & 15th, 2022 - Hybrid
International Symposium
Discretion, as a virtue of keeping secrets and respecting the value of personal information and privacy, is central to the concept of trust, a highly valued virtue in both human and technological systems. Yet information privacy has largely been viewed as a property of data, articulated through controlling who gets access to which pieces of information. These properties are largely fixed at the time of data collection or transfer, and are based primarily on ownership and its transfer. As this data gets transferred to other institutional contexts and/or used for different purposes, it becomes increasingly complex to determine who should have access, and for what purposes. How, then, are we supposed to design, or train, a robot to understand the complex rules of privacy and data ownership? What information might be considered private? For and from whom? And for what purposes?
In this presentation I argue that when we start to design information privacy in the context of embodied human-robot interactions, it becomes clear that privacy is a feature of embodied social interactions. By embodied, I mean that information privacy is best understood as socially embedded in social roles involving shared norms and expectations, as well as social hierarchies and relations (e.g. professional-client, child-parent) and social difference (e.g. gender identity). As such, it is better seen as a property of the interactions between a robot and a human, or of a robot’s observation of human interactions. In order for a robot to respect human values like privacy, it needs to understand the social roles of both humans and robots, and the socially relevant categories of privacy that might apply to any and all information a robot might collect in the course of a complex social interaction.
I propose a research project for developing a framework for understanding privacy in the context of leading co-robotics applications, including medical care, home healthcare, education, and childcare. Each of these contexts contains information that a robot might collect which might be considered personal, private or even legally protected (e.g., under HIPAA or FERPA in the United States). But each context also allows us to refer to social roles and norms, and to the practices of contextually structured interactions, which can help us design a framework for structuring the privacy of information that our co-robots might collect.
Emerging interactive and adaptive systems that use emotion, such as affective robots, are changing how we will socialise with machines. These areas inspire critical questions centering on the ethics, the goals and the deployment of innovative products that can change our lives and society. The concept of nudge is already used in social robotics, even if the nature of the mechanisms that characterize it is not always consistent. Nudges in social robotics are being used positively in several domains: health, education, etc. However, AI-enhanced nudges (i.e. all kinds of digital nudges that use AI mechanisms such as statistical inference from users’ behaviour) are raising new ethical concerns. A set of goals defined by human agents may be reached using decision-making mechanisms, recommendations or other interaction influences. In addition, some nudge mechanisms are built unintentionally by the system to achieve its ends. Nudging mechanisms can occur at a very fine level of granularity, e.g. adapting the robot’s tone of voice via online variants of machine learning. These processes may generate a number of ethical risks for individuals, groups or society.
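To make that level of granularity concrete, here is a minimal sketch, not taken from the talk, of what an online-learning nudge could look like: an epsilon-greedy bandit that adapts the robot's tone of voice toward whichever variant elicits the strongest engagement signal. All names and the reward signal are hypothetical.

```python
import random

# A minimal sketch (hypothetical, not from the talk) of a fine-grained,
# AI-enhanced nudge: an epsilon-greedy bandit that adapts the robot's
# tone of voice online toward whichever variant elicits the strongest
# inferred engagement.
TONES = ["neutral", "warm", "encouraging"]

class ToneNudger:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {t: 0 for t in TONES}
        self.values = {t: 0.0 for t in TONES}  # running mean reward per tone

    def choose_tone(self):
        if random.random() < self.epsilon:      # explore occasionally
            return random.choice(TONES)
        return max(TONES, key=self.values.get)  # otherwise exploit

    def update(self, tone, reward):
        # `reward` stands in for an inferred engagement/compliance signal;
        # how it is estimated is exactly where ethical risks creep in.
        self.counts[tone] += 1
        self.values[tone] += (reward - self.values[tone]) / self.counts[tone]

nudger = ToneNudger()
tone = nudger.choose_tone()
# ... the robot speaks with `tone`, user engagement is estimated ...
nudger.update(tone, reward=0.8)
```

Note that nothing in this loop is visible to the user, which is precisely the kind of opaque adaptation the abstract flags as an ethical risk.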
1784 is a robot-friendly building where robots and humans live in harmony. NAVER LABS is envisioning the future of robots through the space called 1784. In this presentation, the NAVER LABS HRI team introduces the delivery robot ‘Rookie’ as researched so far. As of today, Rookie is delivering food, coffee and parcels to staff in the building, and has completed over 3,000 deliveries. From an HRI perspective, we share the records of designing the various elements that enabled the robot to be socially accepted by humans. This is only the beginning of a new robot design to come.
Maria Luce Lupetti is an Assistant Professor in Interaction and Critical Design at the faculty of Industrial Design Engineering, TU Delft (NL), and a core member of the AiTech Initiative on Meaningful Human Control over AI Systems. Her current research, at the intersection of design, ethics, AI and robotics, is focused on understanding and designing responsible human-technology relations. As part of her research, she is setting a research agenda and gathering a community of scientists on the topic of Designerly HRI. She pursued a PhD in Production, Management and Design at Politecnico di Torino, Italy. During her doctoral studies on child-robot playful interactions, which were funded by Telecom Italia, she was also a visiting research fellow at the Haipeng Mi Lab at the Academy of Art and Design, Tsinghua University, China. Prior to her current position, she was a postdoctoral research fellow at AMS Institute, The Netherlands. In 2019, she was awarded the Distinguished Women Scientist Award from the Dutch Network of Women Professors. Website
The development of embodied AI systems intended to coexist and share practices with humans is largely driven by technical ambition. The socio-technical complexity of existing practices and contexts is hardly addressed in the conceptualization and design of these systems, which paves the way for inappropriate, undesirable, or unpredictable consequences. In response, a growing body of research is now looking at how we could better align embodied AI systems, and their promises, with human needs and values, as well as with societal challenges and norms. In short, we are collectively asking how to make embodied AI more meaningful for people and society. In this talk I will illustrate a designerly perspective on how we, as designers and researchers, can answer this question and contribute to the development of embodied AI systems that are grounded in existing practices and respond to relevant challenges.
From trolley problems to AI ethics, much of the ethics discussion in robotics has been distant from the everyday design decisions of HRI practitioners. However, there are pressing, everyday roboethics issues that arise as we try to bring more robots into our buildings and streets. This talk will highlight some of these everyday issues, ranging from the way robots influence our social norms to the problematic market trends in consumer robotics that could make the dream of ‘a robot in every home’ an environmentally and economically costly one.
To make the interaction of a robot with human beings socially acceptable, legible, and adaptable, it is of paramount importance to endow the robot with the ability to model users’ preferences, needs, and motivations. Moreover, the embodied condition of a robot also requires considering the physical characteristics of the interaction, such as user preferences regarding the robot’s physical movements in space (e.g., proxemics, speed, and trajectories). A personalized and adaptive interaction, as opposed to purely reactive strategies, strongly relies on learning such a computational model of human behavior and on integrating it into the robot’s decision-making algorithms. This talk will provide an overview of this problem in the context of a robot interacting with a human, focusing on how to obtain meaningful information from human observation and, consequently, how to adapt the robot’s behavior accordingly. The talk will present examples from current research projects in the area of socially assistive robotics, including assisting people with dementia and children in hospitals.
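As a minimal sketch of what integrating a learned user model into decision-making could look like, consider a hypothetical proxemics example (not from the talk): the robot keeps a running estimate of a user's preferred approach distance and feeds it into motion planning instead of a fixed threshold.

```python
# A minimal, hypothetical sketch (not from the talk): the robot learns a
# user's preferred approach distance (proxemics) online and consumes the
# estimate in its motion planning, rather than reacting to a fixed rule.
class ProxemicsModel:
    def __init__(self, prior_distance_m=1.2, learning_rate=0.3):
        self.preferred_m = prior_distance_m  # population-level prior
        self.learning_rate = learning_rate

    def update(self, observed_comfortable_m):
        # e.g. the distance at which this user stopped stepping back
        self.preferred_m += self.learning_rate * (
            observed_comfortable_m - self.preferred_m)

def plan_approach(model, max_speed_mps=0.5):
    # Decision-making consumes the learned preference.
    return {"stop_distance_m": model.preferred_m,
            "max_speed_mps": max_speed_mps}

model = ProxemicsModel()
model.update(observed_comfortable_m=0.9)  # this user tolerates a closer approach
print(plan_approach(model))  # {'stop_distance_m': 1.11, 'max_speed_mps': 0.5}
```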
With more than 20 years in the creation of robots and interactive characters, Jérôme Monceaux has developed a singular approach that has already proven successful. This experience has shown him that the approaches taken by other players in this field do not allow for levels of interaction sufficient to satisfy the ever-growing curiosity of the public. For years, Jérôme has tested an approach based on animal interactions with very large audiences, and has explored how to integrate it into robotics and HMI. In his work, he also questions the humanoid form and its impact on user expectations. At the end of November, he and the Enchanted Tools team will unveil a new artificial species that allows us to feel perceived and understood by these machines while enchanting us with their behaviours. Jérôme will share his experience as well as his vision of a future where technology will be able to enchant people’s daily lives.
In the current craze for artificial intelligence, the so-called social robot is a singular point that reveals fantasies and realities far more cultural than technological, and that enables, and perhaps favors, the perception of a “subject” that we project onto this technological object. Even if dedicated social robots have not yet penetrated our daily lives, there are already many service robots that manufacturers cannot prevent from being perceived as “social”. The attributes of such a machine are mostly derived from the current practice of artificial intelligence: highly industrialized, essentially based on blind algorithmic learning, and dependent on the incessant donation of our naturally intelligent data, deposited on the digital highways. There are currently no scientifically established facts identifying the characteristics that make this technical object be perceived as an “other” augmenting our social space, which points to the need to reflect on and model the attributes projected onto this machine (strength/fragility, dominance, etc.). This illusion of a living, even empathic, being gives the “social robot” a remarkable status in the ethics of its societal integration. Thus people, particularly those fragile in their social ties or feeling isolated, seem to durably project, onto a robot whose expressive characteristics are controlled, attachments that could relieve their pain of isolation as much as, in the long run, isolate them even further, urgently questioning us about the consequences of these manipulations for all populations, and about the responsibilities thereby endorsed.
But first of all, the question of why we currently desire an artificial “other”, and of the obvious ease, and anxiety, with which we invite them into our intimacies, must undoubtedly be asked as a preliminary to any reflection. Is this artifact merely an attractive but futile novelty, or could this machine mirror of the human usefully favor the reconstruction of our damaged human social space, to the point of making itself useless? Or, on the contrary, does our ignorance of the deep processes of human interaction, and of the state of our socio-relational fabric, make the use of this illusion of an empty other a risk of toxicity?
Tony Belpaeme is Professor at Ghent University and Visiting Professor in Robotics and Cognitive Systems at the University of Plymouth, UK. He received his PhD in Computer Science from the Vrije Universiteit Brussel (VUB) and currently leads a team studying cognitive robotics and human-robot interaction. Starting from the premise that intelligence is rooted in social interaction, Belpaeme and his research team try to further the science and technology behind artificial intelligence and social human-robot interaction. This work yields a spectrum of results, from theoretical insights to practical applications. Website
While interactive systems, and specifically human-robot interaction, have been studied for over 20 years, we do not yet see autonomous systems that can support open-ended social interaction. Instead, we have been taking shortcuts which, for constrained applications, seem to work well. However, the dream, of course, is to build truly autonomous HRI rivalling human-to-human interaction. This talk will look into what would be needed for that and speculate on whether recent advances in data-driven AI can bring us there.
Socially assistive robots are conceived to improve quality of life by better assisting people in need. In this talk we explore how patients could cohabit with a socially assistive robot in rehabilitation centers. Our aim is to open a dialogue on the effect of the presence of social robots in the ecology of people in convalescence. To do so, we present a qualitative analysis of 8 patients living with a Pepper robot for a week. Our analysis proposes an understanding of what worked in terms of a potential relationship between patients and Pepper, while the robot was still under development. Notions like patience, politeness, and a generally positive appreciation of the design of the robot create a social presence for the residents of the rehabilitation center.
Participatory methodologies are now well established in social robotics for generating blueprints of what robots should do to assist humans. The actual implementation of these blueprints, however, remains a technical challenge, and end-users, especially vulnerable ones, are not usually involved at that stage. In three recent studies, we have shown that, under the right conditions, robots can directly learn their behaviours from domain experts, replacing traditional heuristic-based or plan-based robot controllers with autonomously learnt social policies. From these studies we have derived a novel ‘end-to-end’ participatory methodology called LEADOR, which I will introduce during the seminar, also discussing its application to social robots for autistic children.
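As a rough illustration of the teaching idea, and emphatically not the actual LEADOR implementation, the sketch below shows a hypothetical teachable policy: a domain expert selects the robot's actions during live interaction, the logged state-action pairs accumulate, and the policy proposes actions autonomously once it has seen a state often enough.

```python
from collections import Counter, defaultdict

# Hypothetical sketch of expert-in-the-loop policy learning: the expert
# drives the robot at first; the policy takes over per-state once it is
# confident, and defers back to the expert otherwise.
class TeachablePolicy:
    def __init__(self, confidence_threshold=3):
        self.memory = defaultdict(Counter)  # state -> Counter of actions
        self.threshold = confidence_threshold

    def propose(self, state):
        actions = self.memory[state]
        if actions:
            action, count = actions.most_common(1)[0]
            if count >= self.threshold:
                return action  # confident enough to act autonomously
        return None  # defer to the human expert

    def teach(self, state, expert_action):
        self.memory[state][expert_action] += 1

policy = TeachablePolicy()
for _ in range(3):
    policy.teach(state="child_disengaged", expert_action="offer_encouragement")
print(policy.propose("child_disengaged"))  # 'offer_encouragement'
print(policy.propose("child_engaged"))     # None -> ask the expert
```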
This talk will present recent advances in autonomous navigation for cars operating among humans. Autonomous vehicles are starting to share the urban space with other road users. To be accepted, an autonomous vehicle needs to be seen as a social robot that transports people. This implies that the people inside must feel integrated in the environment, as they would in a human-driven car. They, like the people in the surroundings, expect the cybercar to behave accordingly, adhering to social and urban conventions and negotiating its path through crowded environments. This talk will explore the complex problem of navigating autonomously in shared-space environments, where pedestrians and cars share the same space.
This talk will address some key decisional issues that are necessary for a cognitive and interactive robot that shares space and tasks with humans. We adopt a constructive approach based on the identification and effective implementation of individual and collaborative skills. The system is comprehensive in that it aims to deal with a complete set of abilities, articulated so that the robot controller can conduct, in a flexible and fluent manner, a human-robot joint action seen as collaborative problem solving and task achievement. These abilities include geometric reasoning and situation assessment based essentially on perspective taking and affordances; management and exploitation of each agent’s (human and robot) knowledge in a separate cognitive model; human-aware task planning; and interleaved execution of shared plans. We will also discuss the key issues linked to the pertinence and acceptability of the robot’s behaviour to the human, and how these qualitatively influence the robot’s decisional, planning, control and communication processes.
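To illustrate two of these ingredients, separate cognitive models per agent and interleaved execution of a shared plan, here is a minimal, invented sketch; it is not the architecture from the talk, and all names are hypothetical.

```python
from dataclasses import dataclass, field

# One belief base per agent (perspective taking: the robot never assumes
# the human shares its knowledge), plus a shared plan whose steps are
# attributed to either agent and executed in an interleaved fashion.
@dataclass
class BeliefBase:
    facts: set = field(default_factory=set)

@dataclass
class PlanStep:
    action: str
    actor: str  # "robot" or "human"

beliefs = {"robot": BeliefBase({"cup_on_table"}), "human": BeliefBase()}

shared_plan = [PlanStep("pick_up_cup", "robot"),
               PlanStep("hand_over_cup", "robot"),
               PlanStep("take_cup", "human")]

for step in shared_plan:
    if step.actor == "robot":
        print(f"robot executes: {step.action}")
    elif step.action not in beliefs["human"].facts:
        # The human may not know this step is expected of them, so the
        # robot verbalizes it and records that the expectation is shared.
        print(f"robot says: 'your turn to {step.action}'")
        beliefs["human"].facts.add(step.action)
```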
Humans adapt their social behaviors during interactions based on explicit or implicit cues they receive from the interlocutor. Inspired by human-human interaction, researchers and developers have built robots that analyze not only what the user says or gestures at but also more subtle cues, such as head movements or body posture, to infer information about the user’s emotional and attentive state. On the one hand, robots that rely on implicit cues contribute to a more natural interaction. On the other hand, their behavior might confuse users, who are unaware that the adaptation process considers their social signals. In my talk, I will discuss the social and ethical implications of the different kinds of user feedback a robot may consider to shape user interactions.
Mobile robots will likely soon be part of our urban public environments, yet little is known about how human-robot encounters play out in complex and highly dynamic social settings. Our paper reports on a field study that included more than three hundred ‘incidental’ human-robot encounters in a public outdoor space. Using an ethnographic approach informed by ethnomethodology, we applied breaching experiments and membership categorization analysis to reveal how Incidentally Co-present Persons (InCoPs) interact with and make sense of robots in an urban environment. In this talk, I will introduce our approach and encourage research trajectories and projects that extend our conceptual toolbox, both in terms of developing more “light-weight” protocols for HRI and of conducting thoughtful “in the wild” research that accepts the situated complexity of public space.
In this talk, I will describe our latest work on user-centered explainable artificial intelligence in human-robot interaction. We investigated how shared experience-based counterfactual explanations affected people’s performance and robots’ persuasiveness during a decision-making task in a social HRI context. We used the Connect 4 game as a complex decision-making task in which participants and the robot played as a team against the computer. We compared two strategies of explanation generation (classical vs. shared experience-based) and investigated their differences in team performance, the robot’s persuasiveness, and participants’ perception of the robot and of themselves. Our results showed that the two explanation strategies led to comparable performance. Moreover, shared experience-based explanations made the robot’s suggestions more persuasive than classical ones. Finally, we noted that low performers tended to follow the robot more than high performers, providing insights into the potential danger for non-expert users interacting with expert explainable robots.
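For readers unfamiliar with the contrast, here is a minimal, hypothetical sketch of the difference between a classical and a counterfactual explanation of a suggested Connect 4 move; it is not the generation method used in the study, and the move-scoring function is assumed to exist.

```python
# Hypothetical sketch: a classical explanation cites the value of the
# suggested move, while a counterfactual one contrasts it with the
# user's own alternative. We assume some scorer already rates columns.
def classical_explanation(scores, best):
    return (f"I suggest column {best}: "
            f"it has the highest estimated value ({scores[best]:.2f}).")

def counterfactual_explanation(scores, best, user_choice):
    # Ground the advice in what would happen if the user's move
    # were played instead of the suggested one.
    return (f"If we played column {user_choice} instead of {best}, "
            f"our estimated value would drop from {scores[best]:.2f} "
            f"to {scores[user_choice]:.2f}.")

scores = {0: 0.10, 3: 0.80, 5: 0.40}   # column -> team value estimate
best = max(scores, key=scores.get)
print(classical_explanation(scores, best))
print(counterfactual_explanation(scores, best, user_choice=5))
```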
I started my studies in the healthcare sector with a bachelor’s degree in Adapted Physical Activities, which I continued with a master’s degree in Ergonomics specialized in product and service design. It was during this master’s degree that I discovered the world of research, on a robotics project in the field of geriatrics, to which I returned a couple of years later. Through further experience and another diploma, in industrial quality, I developed my sense of observation of work practices. My quest for meaning led me back to research on technologies and their adoption, while keeping a human-centered methodology and an attraction to the user experience in context. I decided to start a PhD in order to define the impacts of social robots on work practices and social interactions in an institution for the elderly. This thesis in sociology is conducted under an industrial contract with Berger-Levrault, a software publisher specialized in public services (education, health, community). Website
Between hope and questioning, social robots appear as an assistive technology for caregivers in contact with the elderly in care settings. However, the state of the art highlights certain gaps, notably the lack of long-term field studies. We therefore wish to study the impact of this technology on the working practices of professionals, as well as its situated acceptance, in institutions for the elderly. An ethnomethodological analytical orientation allowed us to study the practices of care professionals with the elderly. This work is inspired by the situated acceptance approach, whose contributions come from the theories of activity and action. In a participatory approach, we co-designed uses of the Tiago Iron robot (Pal robotics) to meet the needs of existing practices. We placed Tiago for three months in a retirement home, then in a day care center (both in France), to observe its impact on social interactions and work practices. Empirically, we wish to mobilize the methodologies of conversation analysis to explore the recorded phenomena. The study in the retirement home reveals that the professionals have not appropriated the machine: we did not observe caregivers accompanying residents towards the robot, as they do not have the necessary space in their practices to integrate this tool. At the day care center, the organization is different: the groups are smaller and the professionals have a genuine mission of accompaniment (rather than care), which gives them more latitude to support and encourage use of the robot. This accompanying practice encourages the elderly to interact with the machine and leads to better acceptance. Can a new technology be appropriated without human support? What arrangements are made in the interaction? How do professionals normalize the presence of the robot? Through this work, we hope to contribute to better human-robot engagement.
Session Opening.
Social Acceptability in Public Spaces
Catering at INRIA. Included with Registration
Session Opening.
Ethics of Embodied AI
Included with Registration
Session Opening.
Student Spotlight Presentation.
Verbal and Non-verbal Interaction Modalities
Catering at NAVER LABS Europe. Included with Registration
Session Opening.
Student Spotlight Presentation.
Designing for Vulnerable Users & the Need for Adaptation
NAVER LABS is the R&D subsidiary of NAVER, Korea’s leading internet company and the part of NAVER responsible for creating future technology in AI, robotics, autonomous driving, 3D/HD mapping and AR. The European branch, NAVER LABS Europe, is the biggest industrial AI research center in France and its sister lab, NAVER LABS Korea, is a leading robotics research organisation with robots like AMBIDEX and AROUND.
NAVER recently opened the world’s first robot-friendly high-rise office building as its second HQ in Seongnam, Korea. This building, called ‘1784’, is a massive testbed and ‘tech convergence platform’ where employees and robots coexist on a daily basis.
Inria is the French national research institute for digital science and technology. World-class research, technological innovation and entrepreneurial risk are its DNA. In 200 project teams, most of which are shared with major research universities, more than 3,900 researchers and engineers explore new paths, often in an interdisciplinary manner and in collaboration with industrial partners to meet ambitious challenges.
As a technological institute, Inria supports the diversity of innovation pathways: from open source software publishing to the creation of technological startups (Deeptech).
To make robots autonomous in real-world everyday spaces, they should be able to learn, from their interactions within these spaces, how best to execute tasks specified by non-expert users in a safe and reliable way. Doing so requires sequential decision-making skills that combine machine learning, adaptive planning and control in uncertain environments, as well as solving hard combinatorial optimization problems. Our research combines expertise in reinforcement learning, computer vision, robotic control, sim2real transfer, large multimodal foundation models and neural combinatorial optimization to build AI-based architectures and algorithms that improve robot autonomy and robustness when completing everyday complex tasks in constantly changing environments.
For a robot to be useful, it must be able to represent its knowledge of the world, share what it learns and interact with other agents, in particular humans. Our research combines expertise in human-robot interaction, natural language processing, speech, information retrieval, data management and low-code/no-code programming to build AI components that will help next-generation robots perform complex real-world tasks. These components will help robots interact safely with humans and their physical environment, as well as with other robots and systems, and represent, update and share their world knowledge with the rest of the fleet.
Visual perception is a necessary part of any intelligent system meant to interact with the world. Robots need to perceive the structure, objects, and people in their environment to better understand the world and perform the tasks they are assigned. Our research combines expertise in visual representation learning, self-supervised learning and human behaviour understanding to build AI components that help robots understand and navigate their 3D environment, detect and interact with surrounding objects and people, and continuously adapt when deployed in new environments.
Details on the gender equality index score 2024 (related to year 2023) for NAVER France of 87/100.
1. Difference in female/male salary: 34/40 points
2. Difference in salary increases female/male: 35/35 points
3. Salary increases upon return from maternity leave: Not calculable
4. Number of employees in under-represented gender in 10 highest salaries: 5/10 points
The NAVER France targets set in 2022 (Indicator n°1: +2 points in 2024 and Indicator n°4: +5 points in 2025) have been achieved.
The research we conduct on expressive visual representations is applicable to visual search, object detection, image classification and the automatic extraction of 3D human poses and shapes that can be used for human behavior understanding and prediction, human-robot interaction or even avatar animation. We also extract 3D information from images that can be used for intelligent robot navigation, augmented reality and the 3D reconstruction of objects, buildings or even entire cities.
Our work covers the spectrum from unsupervised to supervised approaches, and from very deep architectures to very compact ones. We’re excited about the promise of big data to bring big performance gains to our algorithms but also passionate about the challenge of working in data-scarce and low-power scenarios.
Furthermore, we believe that a modern computer vision system needs to be able to continuously adapt itself to its environment and to improve itself via lifelong learning. Our driving goal is to use our research to deliver embodied intelligence to our users in robotics, autonomous driving, via phone cameras and any other visual means to reach people wherever they may be.