Shreepriya Shreepriya, Danilo Gallo, Sruthi Viswanathan, Jutta Willamowski, Tommaso Colombino
2019
This year the conference was held in the Scottish city of Glasgow, famous for the kindness of its people, the lively music scene of its bars and, like many places in Scotland, its not-so-great weather. Fortunately, the weather didn’t stop the 3,885 very motivated registered participants from joining this edition, making CHI 2019 the biggest CHI conference ever. With over 1,400 accepted papers presented across 23 parallel tracks, our team had to split up and apply a “divide and conquer” strategy to get the most out of these busy days.
The Georgia Tech GVU Center created an interactive visualization that allows you to explore the accepted papers and filter them by topic. Among the top keywords we found VR, accessibility, AR and privacy. We were glad to see areas like accessibility and privacy at the center of the HCI community’s research agenda.
Between paper sessions, we regrouped for coffee, shared our discoveries and tried some of the many demos on display in the main hall, ranging from VR systems augmented with thermal and olfactory feedback to experiments testing ethical conflicts in the design profession. As good researchers, we couldn’t hide our curious side and got our hands on every prototype we could find. Those breaks were just one of the many opportunities we had to meet and interact with the other attendees. Indeed, every evening there were several social events meant to facilitate networking, which was one of the most interesting aspects of the conference.
We also met with some of our Korean colleagues from Naver, Naver Labs Korea and Clova and discussed the work they presented at the conference.
Opening Keynote
The opening keynote for CHI 2019 was given by Dr. Aleks Krotoski, an academic and journalist best known for producing radio content and television documentaries for the BBC, with a focus on technology: the internet, digital media, privacy and so on. She has a PhD in psychology and, while not really a member of the CHI academic community, has an interest in human-machine interaction. Her talk focused on what she considers three key challenges in the development of future human-machine interfaces:
She addressed these challenges with examples from her professional and personal life, for instance discussing the impact of images on cognitive load and why producing content for radio and television requires completely different approaches. In the case of plans and actions, she drew on her side business of baking cakes to discuss the relationship between following instructions (recipes) and a satisfying (as opposed to standard) outcome. The talk was professionally delivered and engaging, but the topics are quite familiar to a CHI audience and there was nothing especially challenging or original in its actual substance.
It all started early for us, with one of the 35 workshops held on the weekend before the main conference began. SIIDs (situationally-induced impairments and disabilities) is an area that studies the temporary accessibility issues users face under certain contextual or personal conditions, e.g. wanting to send an SMS while driving or needing to pick up a call while holding a baby. This is a relevant topic for our team as we develop more natural and unobtrusive interfaces to provide navigation support for runners in unfamiliar places.
During the workshop we reflected on topics such as the ethical considerations involved in conducting user studies, the need to define research parameters that can be replicated across studies, and even the relevance of studying SIIDs at all and the opportunities for transferring findings to the field of disabilities. We had the opportunity to discuss these topics and make connections with some of the most renowned figures in the field of accessibility, such as Jacob O. Wobbrock, professor at the University of Washington, who received the 2017 ACM SIGCHI Social Impact Award for his work on accessible computing, and Yeliz Yesilada, lecturer at the METU Northern Cyprus Campus and co-author of the book “Web Accessibility”, along with a motivated group of researchers who study the field around the world.
SIID Workshop Dinner
With several simultaneous tracks, the feeling was that of always missing something interesting. On many occasions the rooms were full (a problem that should be addressed in future editions) and we had no option but to follow the session from outside via live stream until a spot opened up, or to give up and attend a plan-B session. Still, our strategic planning and splitting the sessions among the four of us helped us cover the sessions most relevant to our activity and get the most out of the conference. Below you’ll find highlights of the most interesting presentations we attended, grouped by topic.
Accessibility and inclusion
AI
Robotics
Recommender Systems
We attended a UX event that focused on the disconnect between practitioners and researchers and explored ways to bridge the gap between the two communities. Leading UX practitioners and researchers, like Stuart Reeves from the Mixed Reality Lab at the University of Nottingham, presented their findings on “Studying Practitioners’ Work”. There was an emphasis on the fact that research on practitioners’ work is valuable to the HCI community. For example, some of the ideas proposed the creation of a new role, a facilitator who can mediate between the two groups and foster dialogue, or the creation of spaces where both groups can collaborate towards a common goal and build understanding along the way.
The closing keynote was given by Ivan Poupyrev, Director of Engineering and Technical Projects Lead at Google’s Advanced Technology and Projects (ATAP) group. He has over 20 years of experience leading the invention, development and productization of breakthrough technologies such as the internet of things, VR/AR and haptic interactions. During the talk, he focused on the need to break free from interactions confined to our digital devices and proposed a vision in which “the world is the interface”: a future where interfaces allow us to interact with technology in a more natural way, taking advantage of the physical objects that surround us in everyday life.
To illustrate this, he presented the work he is currently developing at Google: his most recent explorations in creating a pico-radar sensor for touchless gesture interaction (Project Soli) and a platform for manufacturing interactive, connected soft goods at scale (Project Jacquard). In fact, he wore the Levi’s Commuter Jacket produced during this project to fashionably control his presentation, casually touching his left arm to go through the slides (though we must say he faced some technical difficulties that forced him to fall back on the old clicker!). Even though many questions remain about the actual usefulness and potential problems of these kinds of products, the talk was inspiring and invited us to look to the future. It was interesting to see the approach he took to develop this product: embedding the technology in the very raw material used to produce the jackets, aiming to enable non-technical producers to enter the field of smart garments and thus change the way we interact with the world at large scale. (Keynote video on YouTube)
This keynote marked the end of CHI 2019 and of our visit to Glasgow. CHI was a great platform to learn about the research areas currently popular in the HCI community, and a great place to make connections and learn from one another.