Sruthi Viswanathan | 2020
Millions of people rely on navigation apps every day. Beyond helping them get around, many now expect their phones to help them discover new places, too. But is it really possible to wander around a new city and, without any prior planning, find points of interest (POIs, e.g. museums and restaurants) that genuinely appeal to you?
People looking for POI recommendations often turn to apps like Google Maps, Tripadvisor and Instagram. However, these apps don’t offer personalized results: searching on the go for POIs that suit personal preferences and current needs can be a frustrating and often lengthy process. An app that understands users’ unique tastes, as well as their current mood, could automatically provide a range of personalized, context-specific recommendations.
Our aim is to inform the design of this kind of ‘ambient wandering’ technology. To this end, we’re developing an intelligent, contextualized and personalized recommendation system that delivers POI information to people on the go, based on their location and mindset. Before beginning work on any POI recommendation algorithms, we decided to first research and understand the expectations of end users. To do this, we created the concept of an intelligent POI-recommending assistant for mobile urban exploration that we named Ambient Wanderer.
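To make the idea more concrete, here is a minimal sketch, in Python, of how a recommender of this kind could blend personal preferences with on-the-go context when ranking nearby POIs. The features (tag overlap, walking distance, a simple weather heuristic), the weights and the data layout are illustrative assumptions, not a description of the Ambient Wanderer system.

```python
import math

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def score_poi(poi, user, context, w_pref=0.5, w_dist=0.3, w_ctx=0.2):
    """Toy POI score, higher is better; all fields and weights are hypothetical."""
    # Personalization: fraction of the POI's tags the user has declared an interest in.
    pref = len(set(poi["tags"]) & set(user["interests"])) / max(len(poi["tags"]), 1)
    # Context (location): decays with straight-line distance from the user.
    dist = math.exp(-haversine(user["lat"], user["lon"], poi["lat"], poi["lon"]))
    # Context (weather): favour indoor places when it rains, outdoor ones otherwise.
    ctx = 1.0 if (context["weather"] == "rain") == poi["indoor"] else 0.5
    return w_pref * pref + w_dist * dist + w_ctx * ctx

user = {"interests": ["art", "history"], "lat": 45.188, "lon": 5.724}
poi = {"tags": ["art", "museum"], "lat": 45.195, "lon": 5.720, "indoor": True}
print(score_poi(poi, user, {"weather": "rain"}))
```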
We originally envisioned Ambient Wanderer to be used by everyone, in much the same way as a navigational app. When we looked at studies on urban exploration and information retrieval, however, we noticed that there’s been very little focus on newcomers to a city. So, in a spectrum of urban explorers—ranging from tourists who have limited knowledge of the city they’re visiting to local residents who know it pretty well—we chose to recruit people who had recently relocated to a new city (from within their own country or abroad). The POI needs of these newcomers tend to correspond to those of both locals and tourists. Also, prior research has shown that these people are likely to explore by walking, wandering and wayfinding to get familiar with their new habitat [1]. To better characterise their in-between nature, we refer to these people as ‘new locals’. To validate and strengthen our concept, we elicited the feedback of potential end users. Then, using this information, we derived design implications for the development of the system.
In early evaluations by other groups looking to integrate user feedback into intelligent systems, the use of low-fidelity (pen-and-paper) prototypes [2, 3] and concept testing with storyboarding [4] has helped in the development of better algorithms and features. However, in our case, such low-fidelity approaches are too far removed from the real-life application (i.e. actually using Ambient Wanderer in the street). To test the personalized aspects of Ambient Wanderer, we needed to create profiles for each participant based on their personal information [5]. With these requirements in mind, we customized a hybrid Wizard of Oz prototyping methodology [6]. In a Wizard of Oz experiment, participants believe that the system they are interacting with is autonomous, when it is in fact being operated (partially or wholly) by an unseen human being. Our hybrid approach uses a high-fidelity prototype of Ambient Wanderer, annotated with the participants’ personal preferences and the contextual variables of their environment. By manually inputting recommended POIs in advance, based on each participant’s profile (collected via a pre-questionnaire) and current context (location, weather and time of day), we could do the ‘wizard’ work offline rather than live during the session, so the prototype effectively simulated an ideal recommendation system that supports serendipitous urban exploration.
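As an illustration of how such a session could be prepared, the sketch below bundles a participant profile from the pre-questionnaire, the expected session context and a hand-curated list of POIs per mindset, so that all the ‘wizard’ work can happen before the participant ever opens the prototype. The class names, fields and example values are hypothetical, not our actual study materials.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserProfile:
    participant_id: str
    interests: List[str]        # e.g. ["art", "hiking"], from the pre-questionnaire
    dietary_needs: List[str]    # e.g. ["vegetarian"]

@dataclass
class SessionContext:
    location: str               # neighbourhood where the walk takes place
    weather: str                # e.g. "sunny"
    time_of_day: str            # e.g. "afternoon"

@dataclass
class CuratedPOI:
    name: str
    mindset: str                # which mindset card it appears under
    rationale: str = ""         # why the researcher picked it for this participant

@dataclass
class PrototypeSession:
    profile: UserProfile
    context: SessionContext
    curated_pois: List[CuratedPOI] = field(default_factory=list)

# The prototype is "annotated" offline with hand-picked POIs, so in the field it
# behaves like an ideal recommender without a live wizard operating it.
session = PrototypeSession(
    profile=UserProfile("P03", ["art", "history"], ["vegetarian"]),
    context=SessionContext("Grenoble city centre", "sunny", "afternoon"),
    curated_pois=[
        CuratedPOI("Musée de Grenoble", "Hidden gems", "matches interest in art"),
        CuratedPOI("Vegetarian café nearby", "I'm hungry", "fits dietary needs"),
    ],
)
```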
The user interface of the Ambient Wanderer prototype consisted of three key features: a localized map; POI information; and seven ‘mindset’ options (shown in Figure 1). These mindsets include a range of categories, such as ‘I’m hungry’ and ‘Hidden gems’, that were conceived by our research team during a card-sorting session [7] to guide the selection of recommended POIs. Unlike traditional POI categories that are descriptions of places (e.g. ‘Restaurants’ and ‘Museums’), mindsets enable users to explore POIs based on a description of their state of mind. We hoped the mindsets would act as visual cues that provide ideas for on-the-spot exploration of the city, and that they would also push participants to think about POI needs that weren’t being catered for within the prototype.
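For a rough sense of how mindsets could be layered on top of conventional place categories at recommendation time, here is a small illustrative mapping and filter. Only the two mindset names above come from the prototype; the tag sets and POI records are invented for the example.

```python
# Hypothetical mapping from mindsets to conventional POI tags.
MINDSET_TO_TAGS = {
    "I'm hungry": {"restaurant", "cafe", "bakery", "street_food"},
    "Hidden gems": {"viewpoint", "independent_shop", "local_market", "small_museum"},
}

def pois_for_mindset(pois, mindset):
    """Return the POIs whose tags overlap the tag set of the chosen mindset."""
    wanted = MINDSET_TO_TAGS.get(mindset, set())
    return [poi for poi in pois if wanted & set(poi["tags"])]

pois = [
    {"name": "Café des Arts", "tags": ["cafe", "terrace"]},
    {"name": "Bastille viewpoint", "tags": ["viewpoint", "hiking"]},
]
print(pois_for_mindset(pois, "Hidden gems"))  # -> the Bastille viewpoint entry
```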
To test our prototype, we enlisted 12 new locals from Grenoble, France. We asked the participants to use Ambient Wanderer in the field and conducted critical incident interviews [7] to derive implications for the design of the POI recommendation system that would power it.
A qualitative thematic analysis of the experimental sessions with the Ambient Wanderer app yielded a set of findings on user behaviour.
We also succeeded in validating our concept of Ambient Wanderer. The ‘state of mind’ nature of the search, supported by mindsets, was appreciated by the participants (see Figure 2, left). They described our interface as ‘clean’ and ‘easy’, and all 12 participants agreed that Ambient Wanderer was fun and engaging. Based on the findings from this experiment, we plan to grow our collection of mindsets to provide a wider range of options that more accurately reflect user needs. To this end, we also intend to develop customizable and user-generated mindsets and to work on a system that can automatically classify POIs into suitable mindsets.
Figure 2: Left: Results from our post-experiment questionnaire show a positive overall reception of the Ambient Wanderer concept. Right: A Nightingale rose visualization compares the urban exploration needs of the new locals (pink) with those of locals (yellow) and tourists (blue).
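As a baseline for the automatic classification of POIs into mindsets mentioned above, even simple keyword matching over POI descriptions could serve as a starting point. The keyword lists and threshold below are illustrative assumptions rather than our planned approach.

```python
# Hypothetical keyword lists per mindset; a real system would likely use learned
# text representations rather than literal string matching.
MINDSET_KEYWORDS = {
    "I'm hungry": ["restaurant", "bistro", "menu", "cuisine", "bakery"],
    "Hidden gems": ["tucked away", "local favourite", "off the beaten path"],
}

def classify_poi(description, min_hits=1):
    """Return every mindset whose keywords occur at least `min_hits` times."""
    text = description.lower()
    return [
        mindset
        for mindset, keywords in MINDSET_KEYWORDS.items()
        if sum(text.count(k) for k in keywords) >= min_hits
    ]

print(classify_poi("A tucked away bistro serving seasonal local cuisine."))
# -> ["I'm hungry", "Hidden gems"]
```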
As we’d expected, the results from our study show that the behaviours and needs of new locals are similar to those of both locals and tourists. We’re currently comparing our observations across these groups to build a clearer overall picture of what our POI recommendation system needs to support. Beyond the development of Ambient Wanderer, we aim to perform studies with a larger sample that covers the entire spectrum of locals, new locals and tourists, to assess how well the system fits the specific needs of each user category.
For a detailed description of how we designed our methodology for the hybrid Wizard of Oz experiment, how we uncovered the urban exploration needs of the new locals and the corresponding implications for designing a POI recommender system, see our full paper at the Designing Interactive Systems 2020 conference [8].
Acknowledgements: The Ambient Wanderer contributors are Sruthi Viswanathan, Behrooz Omidvar-Tehrani, Adrien Bruyat, Frédéric Roulland and Antonietta Grasso.
References
[1] Making the City My Own: Uses and Practices of Mobile Location Technologies for Exploration of a New City. Louise Barkhuus and Donghee Yvette Wohn. Personal and Ubiquitous Computing, vol. 23, no. 2, 2019, pp. 269–278. DOI: 10.1007/s00779-018-01191-z.
[2] Toward Harnessing User Feedback for Machine Learning. Simone Stumpf, Vidya Rajaram, Lida Li, Margaret Burnett, Thomas Dietterich, Erin Sullivan, Russell Drummond and Jonathan Herlocker. Proceedings of the 12th International Conference on Intelligent User Interfaces (IUI ’07), Honolulu, HI, 28–31 January 2007, pp. 82–91. DOI: 10.1145/1216295.1216316.
[3] Integrating Rich User Feedback Into Intelligent User Interfaces. Simone Stumpf, Erin Sullivan, Erin Fitzhenry, Ian Oberst, Weng-Keen Wong and Margaret Burnett. Proceedings of the 13th International Conference on Intelligent User Interfaces (IUI ’08), Gran Canaria, Spain, 13–16 January 2008, pp. 50–59. DOI: 10.1145/1378773.1378781.
[4] Triptech: A Method for Evaluating Early Design Concepts. Julie Anne Séguin, Alec Scharff and Kyle Pedersen. Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI EA ’19), Glasgow, Scotland, 4–9 May 2019, pp. 1–8. DOI: 10.1145/3290607.3299061.
[5] Wizard of Oz Prototyping for Machine Learning Experiences. Jacob T. Browne. Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI EA ’19), Glasgow, Scotland, 4–9 May 2019, pp. 1–6. DOI: 10.1145/3290607.3312877.
[6] Hybrid Wizard of Oz: Concept Testing a Recommender System. Sruthi Viswanathan, Behrooz Omidvar-Tehrani, Adrien Bruyat, Frédéric Roulland and Antonietta Maria Grasso. Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA ’20), Honolulu, HI, April 2020, pp. 1–7. DOI: 10.1145/3334480.3383097.
[7] The Critical Incident Technique in Service Research. Dwayne D. Gremler. Journal of Service Research, vol. 7, no. 1, 2004, pp. 65–89. DOI: 10.1177/1094670504266138.
[8] Designing Ambient Wanderer: Mobile Recommendations for Urban Exploration. Sruthi Viswanathan, Behrooz Omidvar-Tehrani, Adrien Bruyat, Frédéric Roulland and Antonietta Grasso. Proceedings of the 2020 ACM Conference on Designing Interactive Systems (DIS ’20), 6–11 July 2020. DOI: 10.1145/3357236.3395518.