Finding pleasant running tours and intuitively navigating through them
The use of combinatorial optimization and UX design, together with head-tracking technology, provides a non-intrusive way of helping runners to discover new itineraries.
Sofia Michel, Jutta Willamowski, Shreepriya Shreepriya, Danilo Gallo, Christophe Legras | 2020
Running is one of the most popular forms of exercise around the world. It doesn’t require special equipment or complex organization and can be practised almost anywhere. In principle, runners have the freedom to just put on their shoes and go.
However, runners often refrain from doing this—especially in unfamiliar environments—when they don’t know where to go or fear getting lost (1). Indeed, in a new environment, it’s not easy for runners to find a pleasant itinerary that matches their constraints and targets in terms of distance and difficulty. Even when they do manage to find a suitable itinerary, they may not want to try it out because of the associated challenges (e.g. needing to memorize the route or having to rely on cumbersome navigation support to follow it). As we will see, itinerary identification and efficient navigation support for running are in fact two complex, complementary problems.
As outdoor sport enthusiasts and scientists, we asked ourselves the question: How can we help runners find pleasant itineraries that match their personal preferences, and then smoothly guide them along these routes?
Among the currently available commercial systems, most (e.g. Garmin, Strava and RunGo) propose itineraries by exploiting user-generated content (i.e. existing collections of running traces) combined with standard visual map-based or voice-based navigation support. However, these approaches have many limitations. First, in terms of itinerary generation, relevant running traces are not always available in a given area. Even in densely populated areas, people are increasingly likely to want to protect their privacy and might therefore prefer not to share their traces. Constructing a tour based on a limited number of traces can be difficult, or even impossible. Although there are a few approaches that generate itineraries using only map data, most of them remain quite basic (e.g. they are based on geometric approaches that don’t take the environment into account, or they consider the environment but limit the user preferences to the distance only). Second, existing navigation support systems are problematic because even those that allow a user to follow an itinerary are often distracting and can disrupt the running activity (2, 3). Visual map-based navigation requires runners to regularly look at their phone or smartwatch, while voice-based guidance requires constant attention from users, who otherwise risk missing an indication.
The objective of our work was to create a solution that generates novel and pleasant itineraries (4, 5), without requiring the availability of running traces, and that provides efficient navigation support which reduces disruption and is pleasant for the runner.
Before designing our solution, we carried out a user study to better understand runners’ needs. The results showed that as well as searching for itineraries that match their personal objectives, runners also desire itineraries to be flexible. In other words, they prefer itineraries that they don’t have to strictly follow from beginning to end, and from which they can deviate during their run. We therefore designed our solution with this additional feature in mind.
With the aim of generating pleasant itineraries, we began by formalizing what makes a path pleasant for a runner. We consider the pleasantness, or quality, of an itinerary as the extent to which its characteristics match runners’ preferences. We split this quality into two aspects: the quality of the path segments that compose the itinerary; and the form, or geometry, of the whole path. The quality of every path segment depends on various criteria (e.g. whether it’s a pedestrian path, as well as its environment and proximity to traffic) that must be translated into a numerical score that can be used as input to the itinerary-generation algorithm.
Using OpenStreetMap (OSM) data and standard routing libraries, we began by building a routing graph. The nodes of this graph are the road crossings, and the arcs are the road segments that link the crossings. Then, based on OSM attributes, we computed a pleasantness score for each arc according to standard runners’ preferences. For example, paths along hiking trails, parks or riversides were assigned a much higher score than paths close to main roads or motorways.
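As a rough illustration of this step, here is a minimal sketch assuming the osmnx library; the tag-to-score table and the score_arc helper are our own illustrative choices, not the actual attributes and weights used in the system.

```python
import osmnx as ox

# Build a pedestrian routing graph around a starting point from OSM data.
# Nodes are road crossings; edges are the road segments linking them.
start = (45.1885, 5.7245)  # example coordinates (Grenoble area)
G = ox.graph_from_point(start, dist=3000, network_type="walk")

# Illustrative pleasantness weights per OSM 'highway' tag; the real system
# combines several criteria (environment, proximity to traffic, ...).
PLEASANTNESS = {
    "path": 1.0, "track": 0.9, "footway": 0.8, "pedestrian": 0.8,
    "residential": 0.4, "secondary": 0.2, "primary": 0.1,
}

def score_arc(data):
    """Map an edge's OSM attributes to a pleasantness score (hypothetical helper)."""
    highway = data.get("highway", "unclassified")
    if isinstance(highway, list):  # OSM tags can be multi-valued
        highway = highway[0]
    return PLEASANTNESS.get(highway, 0.3)

for u, v, k, data in G.edges(keys=True, data=True):
    data["pleasantness"] = score_arc(data)
```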
Given the scored routing graph, a starting point, a target distance and an elevation gain, we focused on generating appropriate tours—i.e. loops of the requested length—that maximize the pleasantness score on the visited arcs. We formulated this optimization problem as an arc-orienteering problem, which can be seen as a combination of two well-known NP-hard problems: a knapsack problem, where the goal is to select the best locations on a map for the runner within a (distance) budget constraint; and a travelling salesman problem, where the goal is to find the shortest route that visits these locations.
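With notation of our own choosing (the article doesn’t give the formal model), the arc-orienteering formulation can be sketched as follows: let $p_a$ and $\ell_a$ be the pleasantness score and length of arc $a$, let $s$ be the start node and $L$ the distance budget, and search for a closed walk $W$:

```latex
\begin{aligned}
\max_{W}\quad & \sum_{a \in A(W)} p_a
  && \text{(total pleasantness of the distinct arcs visited)}\\
\text{s.t.}\quad & \sum_{a \in W} \ell_a \le L
  && \text{(total length within the distance budget)}\\
& W \text{ is a closed walk starting and ending at } s.
\end{aligned}
```

Note that in arc orienteering each arc’s score counts only once, hence $A(W)$, the set of distinct arcs of $W$; an elevation-gain target would presumably add an analogous budget constraint.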
To solve this problem, we proposed a heuristic algorithm based on a greedy randomized adaptive search procedure (6). Starting from a trivial short tour, the algorithm iteratively computes possible extensions of the current tour and randomly selects one among the best of these to increase its length. This procedure can be repeated until no improvement is found, or the processing time limit is reached (see Animation 1(a) below). There’s a trade-off parameter that balances the greediness and randomness of the algorithm to allow for an efficient exploration of the solution space; the algorithm can be executed in parallel with different trade-offs to produce a variety of results.
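The sketch below illustrates the general pattern on a scored graph. It is a deliberately simplified, hypothetical variant (restricted to two-hop detour insertions on a simple undirected networkx-style graph, with the time limit omitted), not the actual algorithm from (6); the alpha parameter plays the role of the greediness/randomness trade-off described above.

```python
import random

def grasp_tour(G, start, budget, alpha=0.3, seed=0):
    """Greedy randomized construction of a pleasant loop (simplified sketch).

    G      : undirected graph; edges carry 'length' and 'pleasantness' attributes
    start  : starting/ending node of the loop
    budget : maximum total tour length
    alpha  : 0 = purely greedy selection, 1 = fully random among all candidates
    """
    rng = random.Random(seed)
    # Trivial initial tour: out-and-back on the most pleasant incident edge.
    v = max(G.neighbors(start), key=lambda w: G[start][w]["pleasantness"])
    tour = [start, v, start]

    def length(t):
        return sum(G[u][w]["length"] for u, w in zip(t, t[1:]))

    while True:
        candidates = []  # (gain, position, detour node)
        for i, (u, w) in enumerate(zip(tour, tour[1:])):
            # Candidate extension: replace edge (u, w) by the detour (u, x, w).
            for x in set(G.neighbors(u)) & set(G.neighbors(w)):
                if x in tour:
                    continue
                extra = G[u][x]["length"] + G[x][w]["length"] - G[u][w]["length"]
                if length(tour) + extra > budget:
                    continue
                gain = G[u][x]["pleasantness"] + G[x][w]["pleasantness"]
                candidates.append((gain, i, x))
        if not candidates:  # no feasible extension left
            return tour
        # Restricted candidate list: pick randomly among the best extensions.
        candidates.sort(reverse=True, key=lambda c: c[0])
        rcl = candidates[: max(1, int(alpha * len(candidates)))]
        _, i, x = rng.choice(rcl)
        tour.insert(i + 1, x)
```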
While this method allowed the computation of high-scoring tours, we observed that some artefacts can appear, such as artificial detours (those that a user would never choose to take) and generally unnatural-looking paths (involving many unnecessary turns). This wasn’t surprising, since we were only focusing on the score. To improve the geometry of the results, we designed a post-processing step that smoothens the tours: see Animation 1(b). We were also able to rank the generated tours according to their geometric properties (e.g. number of turns) to favour the simpler, more natural-looking ones.
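As a simple example of such a geometric criterion, one could count significant direction changes along the tour from node coordinates and rank tours accordingly; the 40-degree threshold below is an arbitrary illustrative value, and the planar bearing computation is an approximation over small distances.

```python
import math

def count_turns(coords, threshold_deg=40.0):
    """Count significant direction changes along a polyline of (lat, lon) points."""
    turns = 0
    for a, b, c in zip(coords, coords[1:], coords[2:]):
        h1 = math.atan2(b[1] - a[1], b[0] - a[0])  # heading of segment a->b
        h2 = math.atan2(c[1] - b[1], c[0] - b[0])  # heading of segment b->c
        diff = abs(math.degrees(h2 - h1))
        diff = min(diff, 360 - diff)  # wrap to [0, 180]
        if diff > threshold_deg:
            turns += 1
    return turns

# Possible ranking: prefer high pleasantness, then fewer turns (hypothetical fields).
# tours.sort(key=lambda t: (-t.score, count_turns(t.coords)))
```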
As pointed out by our user study, runners appreciate flexibility in their itineraries and like to feel free to modify them as they go. Sometimes, they’re simply unable to follow the proposed path due to unplanned factors (e.g. construction work, the weather or tiredness). Also, since it’s impossible to account for the subjective preferences of every runner, we believe it’s important to acknowledge that users know best. To ensure they get the most pleasant run, we should support them by adapting the proposed itinerary to suit their needs.
To support such flexibility, with each computed tour we provide a set of alternative paths—see Animation 1(c)—that the runner may find interesting to explore. The idea is to highlight alternatives to subsections of the original tour that are roughly equivalent, or better, in terms of pleasantness. In addition, we allow users to change their duration or distance targets at any time, and we dynamically update the tour accordingly using a variant of our main heuristic. This makes it possible for runners to deviate and explore beyond the initial itinerary, while still ensuring that the quality of the run matches their preferences.
To guide runners along the generated tours, we wanted efficient navigation support that would be less disruptive than existing solutions. To this end, we designed RunAhead (7), a prototype system that provides navigation feedback to runners in response to their natural head-scanning movements (see Figure 1).
Through prior user studies, we observed that when runners arrive at an intersection, they naturally perform a head-scanning movement as they look at and evaluate the possible path options and their suitability for running. We decided to exploit this head movement to enable runners to query for information about the direction they’re looking in. To keep the navigation feedback simple, we only provide binary signals that tell users whether or not the path they’re looking at is one they’re supposed to follow. We also designed the feedback signals to be intuitive, so that runners instinctively understand what they mean.
Our design is based on defining a circle around each intersection, and then mapping the enclosed angles of the ‘good’ path options onto this circle, as shown in Figure 2. When the runner enters such a circle, the RunAhead head-scanning mechanism activates, continuously monitoring the direction in which the runner is looking (or, more precisely, in which direction they’ve turned their head). It then compares the angle of this direction with the enclosing angles of the good path options—i.e. following the proposed itinerary or deviating on one of the alternatives—and provides navigational feedback, depending on whether the runner is looking in the direction of a good path or not.
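A minimal sketch of this angular test, with hypothetical names: each good path option is represented by its bearing and the half-width of its enclosed angle, and the runner’s current head bearing is checked against these.

```python
def angular_distance(a, b):
    """Smallest absolute difference between two bearings, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def looking_at_good_path(head_bearing, good_paths):
    """good_paths: list of (bearing, half_angle) pairs for the 'good' options.

    Returns True if the head bearing falls within the enclosed angle of any
    good path option (following the itinerary or a valid alternative).
    """
    return any(angular_distance(head_bearing, bearing) <= half_angle
               for bearing, half_angle in good_paths)

# Example: the itinerary continues at 80 deg, an alternative departs at 200 deg.
assert looking_at_good_path(70, [(80, 25), (200, 20)])       # within tolerance
assert not looking_at_good_path(320, [(80, 25), (200, 20)])  # wrong direction
```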
Once the runner leaves this circle, the head-scanning mechanism deactivates and the system double-checks that the runner has left the intersection on a good path. If not, it generates a signal urging the runner to go back to the intersection and take a good path. The feedback system then remains inactive until the runner reaches the next intersection.
We implemented this system with a small compass sensor worn on the head, which sends its readings wirelessly to a mobile app on the runner’s smartphone. We combine these readings with GPS information to trigger the right feedback.
We designed different feedback modes to target two types of runners: those who run listening to music, and those who prefer to run immersed in the environment. For runners who exercise with music, we used a change in volume to communicate the quality of the path the runner is looking at. If the path is good, the music volume remains unchanged. Otherwise, it’s set to low. For runners without music, the haptic feedback mode can use vibration (from their phone or smartwatch) as a negative signal whenever the runner looks in the wrong direction at an intersection. Thus, the absence of vibration implicitly constitutes the positive signal that’s given when the user looks at a good path.
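In sketch form, the two feedback modes reduce to a simple dispatch on the binary path signal; the player and haptics interfaces below are hypothetical placeholders, not the RunAhead API.

```python
def give_feedback(good_direction, mode, player=None, haptics=None):
    """Dispatch binary navigation feedback (illustrative sketch).

    mode: 'music'  -> lower the volume on a bad direction, restore it on a good one
          'haptic' -> vibrate on a bad direction; silence is the positive signal
    """
    if mode == "music":
        player.set_volume(1.0 if good_direction else 0.2)
    elif mode == "haptic" and not good_direction:
        haptics.vibrate()
```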
We tested these modes against a baseline voice condition that provides conventional turn-by-turn voice directions. Through our user tests, shown in Figure 3, we verified that the head-scanning feedback mechanism was indeed effective and found that music and haptic feedback were the preferred and least intrusive feedback modes.
We were curious to understand why, despite the large array of sport-related tools, many runners still can’t find the help they need with itineraries. We discovered that mining map data to propose original tours, and seamlessly guiding users along an unknown path, remained two major problems that hadn’t yet been solved for such an unconstrained activity.
Our current prototype, shown in the video at the top, integrates innovative solutions that have been validated in various user tests. The running itinerary is constructed as a tour that matches the user’s targets (in terms of distance and location) while maximizing pleasantness (in terms of the environment and route smoothness). Further, the tour is augmented by a set of alternative sub-paths that might be interesting to explore. The navigation relies on head-movement detection, via the wearable RunAhead device, and uses non-intrusive cues to efficiently guide the runner in action.
Although we support some flexibility in the itineraries, we acknowledge that our solution probably doesn’t account for the complexity and variety of runners’ personal preferences, including the desired balance between the amount of guidance and freedom of exploration. These are some of the directions that our multidisciplinary team of both optimization and user-experience specialists is currently investigating.