Running into the unknown with RunAhead - Naver Labs Europe

Running into the unknown with RunAhead: finding pleasant running tours and intuitively navigating through them

The use of combinatorial optimization and UX design, together with head-tracking technology, provides a non-intrusive way of helping runners to discover new itineraries.

Running is one of the most popular forms of exercise around the world. It doesn’t require special equipment or complex organization and can be practised almost anywhere. In principle, runners have the freedom to just put on their shoes and go.

However, runners often refrain from doing this—especially in unknown environments—when they don’t know where to go or fear getting lost (1). Indeed, in new environments, it’s not easy for runners to find a pleasant itinerary that matches their constraints and targets in terms of distance and difficulty. Even when they do manage to find an itinerary that suits their needs, they may not want to try it out due to related challenges (i.e. needing to remember the route or having to rely on cumbersome navigation support to follow it). As we will see, itinerary identification and efficient navigation support for running are in fact pretty complex and complementary problems.

As outdoor sport enthusiasts and scientists, we asked ourselves the question: How can we help runners find pleasant itineraries that match their personal preferences, and then smoothly guide them along these routes?

Among the currently available commercial systems, most (e.g. Garmin, Strava and RunGo) propose itineraries by exploiting user-generated content (i.e. existing collections of running traces) combined with standard visual map-based or voice-based navigation support. However, these approaches have many limitations. First, in terms of itinerary generation, relevant running traces are not always available in a given area. Even in densely populated areas, people are increasingly likely to want to protect their privacy and might therefore prefer not to share their traces. Constructing a tour based on a limited number of traces can be difficult, or even impossible. Although there are a few approaches that generate itineraries using only map data, most of them remain quite basic (e.g. they are based on geometric approaches that don’t take the environment into account, or they consider the environment but limit the user preferences to the distance only). Second, existing navigation support systems are problematic because even those that allow a user to follow an itinerary are often distracting and can disrupt the running activity (2, 3). Visual map-based navigation requires runners to regularly look at their phone or smartwatch, while voice-based guidance requires constant attention from users, who otherwise risk missing an indication.

The objective of our work was to create a solution that generates novel and pleasant itineraries (4, 5), without requiring the availability of running traces, and that provides efficient navigation support which reduces disruption and is pleasant for the runner.

Before designing our solution, we carried out a user study to better understand runners’ needs. The results showed that as well as searching for itineraries that match their personal objectives, runners also desire itineraries to be flexible. In other words, they prefer itineraries that they don’t have to strictly follow from beginning to end, and from which they can deviate during their run. We therefore designed our solution with this additional feature in mind.

Generating pleasant running tours

With the aim of generating pleasant itineraries, we began by formalizing what makes a path pleasant for a runner. We consider the pleasantness, or quality, of an itinerary as the extent to which its characteristics match runners’ preferences. We split this quality into two aspects: the quality of the path segments that compose the itinerary; and the form, or geometry, of the whole path. The quality of every path segment depends on various criteria (e.g. whether it’s a pedestrian path, as well as its environment and proximity to traffic) that must be translated into a numerical score that can be used as input to the itinerary-generation algorithm.

Using OpenStreetMap (OSM) data and standard routing libraries, we began by building a routing graph. The nodes of this graph are the road crossings, and the arcs are the road segments that link the crossings. Then, based on OSM attributes, we computed a pleasantness score for each arc according to standard runners’ preferences. For example, paths along hiking trails, parks or riversides were assigned a much higher score than paths close to main roads or motorways.
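As an illustration, here’s a minimal sketch of how such per-arc scores might be computed. The tiny graph, the tag-to-weight tables and the 50/50 combination are all invented for the example—the actual OSM attributes and score values used in our system are not detailed here.

```python
# Sketch of arc scoring on a routing graph (illustrative weights; the
# actual OSM tags and score values are assumptions for this example).

# A tiny routing graph: nodes are crossings, arcs carry length (m)
# and OSM-style attributes.
graph = {
    ("A", "B"): {"length": 400, "highway": "footway", "near": "park"},
    ("B", "C"): {"length": 300, "highway": "path", "near": "riverside"},
    ("C", "A"): {"length": 500, "highway": "primary", "near": "main_road"},
}

# Hypothetical pleasantness weights per environment and way type.
ENV_SCORE = {"park": 1.0, "riverside": 1.0, "hiking": 0.9,
             "residential": 0.4, "main_road": 0.1}
WAY_SCORE = {"footway": 1.0, "path": 0.9, "residential": 0.5, "primary": 0.1}

def arc_pleasantness(attrs):
    """Combine way type and environment into a single per-arc score."""
    way = WAY_SCORE.get(attrs.get("highway"), 0.3)
    env = ENV_SCORE.get(attrs.get("near"), 0.5)
    return 0.5 * way + 0.5 * env

scores = {arc: arc_pleasantness(attrs) for arc, attrs in graph.items()}
```

With this scheme, the park-side footway scores near the top of the scale while the arc along the primary road scores near the bottom, matching the ranking described above.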

Given the scored routing graph, a starting point, a target distance and an elevation gain, we focused on generating appropriate tours—i.e. loops of the requested length—that maximize the pleasantness score on the visited arcs. We formulated this optimization problem as an arc-orienteering problem, which can be seen as a combination of two well-known NP-hard problems: a knapsack problem, where the goal is to select the best locations on a map for the runner within a (distance) budget constraint; and a travelling salesman problem, where the goal is to find the shortest route that visits these locations.

To solve this problem, we proposed a heuristic algorithm based on a greedy randomized adaptive search procedure (6). Starting from a trivial short tour, the algorithm iteratively computes possible extensions of the current tour and randomly selects one among the best of these to increase its length. This procedure can be repeated until no improvement is found, or the processing time limit is reached (see Animation 1(a) below). There’s a trade-off parameter that balances the greediness and randomness of the algorithm to allow for an efficient exploration of the solution space; the algorithm can be executed in parallel with different trade-offs to produce a variety of results.
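A highly simplified sketch of the greedy randomized construction step follows. Candidate extensions are abstracted here to (extra length, extra score) pairs—the real algorithm works on the routing graph itself—and `alpha` is an assumed name for the greediness/randomness trade-off parameter.

```python
import random

def grasp_tour(detours, budget, alpha=0.3, seed=0):
    """Greedy randomized construction (GRASP-style, heavily simplified).

    detours : list of (extra_length, extra_score) candidate extensions
    budget  : remaining distance budget for the tour
    alpha   : 0 = purely greedy; larger values = more random exploration
    """
    rng = random.Random(seed)
    chosen, length, score = [], 0.0, 0.0
    pool = list(detours)
    while True:
        feasible = [d for d in pool if length + d[0] <= budget]
        if not feasible:
            break  # no improvement possible within the budget
        # Greedy criterion: score gained per metre of extra length.
        feasible.sort(key=lambda d: d[1] / d[0], reverse=True)
        # Restricted candidate list: pick randomly among the best candidates.
        k = max(1, int(len(feasible) * alpha))
        pick = rng.choice(feasible[:k])
        pool.remove(pick)
        chosen.append(pick)
        length += pick[0]
        score += pick[1]
    return chosen, length, score
```

Running several instances with different `alpha` values in parallel, as described above, yields a variety of candidate tours to choose from.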

While this method allowed the computation of high-scoring tours, we observed that some artefacts could appear, such as artificial detours (those that a user would never choose to take) and generally unnatural-looking paths (involving many unnecessary turns). This wasn’t surprising, since we were only focusing on the score. To improve the geometry of the results, we designed a post-processing step that smoothens the tours: see Animation 1(b). We were also able to rank the generated tours according to their geometric properties (e.g. number of turns) to favour the simpler, more natural-looking ones.
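Ranking by the number of turns can be sketched as below. The planar coordinates and the 40-degree threshold are illustrative assumptions, not the values used in our system.

```python
import math

def bearing(p, q):
    """Direction (degrees) of the segment p -> q in the plane."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])) % 360

def count_turns(path, threshold=40):
    """Count direction changes sharper than `threshold` degrees
    along a polyline given as a list of (x, y) points."""
    turns = 0
    for a, b, c in zip(path, path[1:], path[2:]):
        diff = abs(bearing(a, b) - bearing(b, c)) % 360
        diff = min(diff, 360 - diff)  # smallest angle between headings
        if diff > threshold:
            turns += 1
    return turns
```

Tours with a lower turn count are then preferred, all else being equal.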

As pointed out by our user study, runners appreciate flexibility in their itineraries and like to feel free to modify them as they go. Sometimes, they’re simply unable to follow the proposed path due to unplanned factors (e.g. construction work, the weather or tiredness). Also, since it’s impossible to account for the subjective preferences of every runner, we believe it’s important to acknowledge that users know best. To ensure they get the most pleasant run, we should support them by adapting the proposed itinerary to suit their needs.

To support such flexibility, we provide with each computed tour a set of alternative paths—see Animation 1(c)—that may be interesting for the runner to explore. The idea behind this is to highlight alternatives to subsections of the original tour that are roughly equivalent, or better, in terms of pleasantness. In addition, we allow users to change their duration or distance targets at any time, and dynamically update the tour accordingly, using a variant of our main heuristic. This makes it possible for runners to deviate and explore beyond the precise initial itinerary, while still ensuring that the quality of the run matches their preferences.

Animation 1: Illustration of the three steps of our generation method: (a) iterative extension of the tour by inserting new segments; (b) post-processing to smoothen the tour; (c) adding relevant alternatives for flexibility.

RunAhead: Exploring head-scanning-based navigation for runners

To guide runners through the generated tours, we wanted efficient navigation support that was also less disruptive than existing solutions. To this end, we designed RunAhead (7), a prototype system that provides navigation feedback to runners in response to their natural head-scanning movements (see Figure 1).


Figure 1: Our RunAhead system. RunAhead, a head-tracking device, is a small battery-powered sensor that combines the readings of a three-axis magnetometer, a three-axis gyroscope and a three-axis accelerometer to make a tilt-compensated compass that detects a runner’s natural head-scanning movements. This data is sent from the wearable device to the runner’s smartphone, where it’s combined with GPS information to trigger feedback that’s given via the runner’s earphones (for example, in the form of modulated volume).
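As a rough illustration of the tilt-compensated compass, the sketch below derives a heading from raw accelerometer and magnetometer readings. Axis conventions differ between sensor packages; this follows one common convention and is only a sketch of the idea, not the on-device implementation.

```python
import math

def tilt_compensated_heading(accel, mag):
    """Heading in degrees (0 = magnetic north) from accelerometer and
    magnetometer triples in the device frame. Axis conventions are an
    assumption for this example."""
    ax, ay, az = accel
    mx, my, mz = mag
    # Estimate device attitude (roll, pitch) from the gravity vector.
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, ay * math.sin(roll) + az * math.cos(roll))
    # Rotate the magnetic field vector back into the horizontal plane.
    bx = (mx * math.cos(pitch)
          + my * math.sin(pitch) * math.sin(roll)
          + mz * math.sin(pitch) * math.cos(roll))
    by = my * math.cos(roll) - mz * math.sin(roll)
    return math.degrees(math.atan2(-by, bx)) % 360
```

The tilt compensation is what lets the compass keep reporting a stable heading while the runner’s head bobs and tilts during the run.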

Through prior user studies, we have observed that when runners arrive at an intersection, they naturally perform a head-scanning movement as they look at and evaluate the possible path options and their suitability for running. We decided to exploit this head movement to enable runners to query for information about the direction they’re looking towards. To keep the navigation feedback simple, we only provide binary signals that tell users whether the path they’re looking at is one they’re supposed to follow (or not). We also designed the feedback signals to be intuitive, so that runners instinctively understand what they mean.

Our design is based on defining a circle around each intersection, and then mapping the enclosed angles of the ‘good’ path options onto this circle, as shown in Figure 2. When the runner enters such a circle, the RunAhead head-scanning mechanism activates, continuously monitoring the direction in which the runner is looking (or, more precisely, in which direction they’ve turned their head). It then compares the angle of this direction with the enclosing angles of the good path options—i.e. following the proposed itinerary or deviating on one of the alternatives—and provides navigational feedback, depending on whether the runner is looking in the direction of a good path or not.
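The geometric test itself is simple; here is a sketch, where the 20-metre radius and 25-degree half-angle are illustrative values (in our system, the enclosing angles come from the map geometry at each intersection).

```python
import math

def angular_diff(a, b):
    """Smallest absolute difference between two angles in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def looking_at_good_path(head_bearing, good_path_bearings, half_angle=25):
    """True if the head direction falls within the angular sector of
    any good path option (half_angle is an illustrative tolerance)."""
    return any(angular_diff(head_bearing, b) <= half_angle
               for b in good_path_bearings)

def in_intersection_circle(pos, center, radius=20):
    """True if the runner is within `radius` metres of the intersection
    (coordinates assumed already projected to metres)."""
    return math.dist(pos, center) <= radius
```

While the runner is inside the circle, each new head bearing is checked against the good-path sectors and the binary feedback signal is updated accordingly.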


Figure 2: Our RunAhead system defines a circle around each intersection and then maps the enclosed angles of the ‘good’ paths (i.e. those that would make up part of the itinerary of a pleasant run) onto this circle. Runners are then given non-intrusive guidance (e.g. in the form of music volume adjustment or haptic feedback) based on the direction of their gaze.

Once the runner leaves this circle, the head-scanning mechanism deactivates and the system checks that the runner has left the intersection on a good path. If not, it generates a signal urging the runner to go back to the intersection and take a good path. The feedback system then remains inactive until the runner reaches the next intersection.

We implemented this system with a small compass sensor worn on the head, which sends its readings wirelessly to a mobile app on the runner’s smartphone. We complement these readings with GPS information to trigger the right feedback.

We designed different feedback modes to target two types of runners: those who run listening to music, and those who prefer to run immersed in the environment. For runners who exercise with music, we used a change in volume to communicate the quality of the path the runner is looking at. If the path is good, the music volume remains unchanged. Otherwise, it’s set to low. For runners without music, the haptic feedback mode can use vibration (from their phone or smartwatch) as a negative signal whenever the runner looks in the wrong direction at an intersection. Thus, the absence of vibration implicitly constitutes the positive signal that’s given when the user looks at a good path.
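The resulting feedback logic is deliberately minimal. A sketch (the mode names and return values are ours, for illustration):

```python
def feedback(mode, looking_at_good_path):
    """Map the binary path-quality signal onto the two feedback modes.

    'music'  : volume stays normal on a good path, drops to low otherwise.
    'haptic' : vibration only as a negative signal (no vibration = good path).
    """
    if mode == "music":
        return {"volume": "normal" if looking_at_good_path else "low"}
    if mode == "haptic":
        return {"vibrate": not looking_at_good_path}
    raise ValueError(f"unknown feedback mode: {mode!r}")
```

Keeping the positive signal implicit (unchanged music, or silence in haptic mode) is what makes the guidance non-intrusive: the runner only notices the system when looking in a wrong direction.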


Figure 3: Results from tests on the efficacy of each RunAhead feedback option (music, haptic and audio cues) against a baseline voice condition show that our system was effective, and that music and haptic feedback from RunAhead were preferred by runners. NASA-TLX: NASA Task Load Index, an assessment tool for rating the perceived workload of a system. SUS (System Usability Scale) Score: A score that measures the usability of a system.

We tested these modes against a baseline voice condition that provides conventional turn-by-turn voice directions. Through our user tests, shown in Figure 3, we verified that the head-scanning feedback mechanism was indeed effective and found that music and haptic feedback were the preferred and least intrusive feedback modes.

Summarizing our tour-generation approach and wearable device for runners

We were curious to understand why, despite the large array of sport-related tools, many runners still can’t find the help they need with itineraries. We discovered that two major, complementary problems remained unsolved for such an unconstrained activity: proposing original tours by mining map data alone, and guiding users seamlessly along an unknown path.

Our current prototype, shown in the video at the top, integrates innovative solutions that have been validated in various user tests. The running itinerary is constructed as a tour that matches the user’s targets (in terms of distance and location) while maximizing pleasantness (in terms of the environment and route smoothness). Further, the tour is augmented by a set of alternative sub-paths that might be interesting to explore. The navigation relies on head-movement detection, via the wearable RunAhead device, and uses non-intrusive cues to efficiently guide the runner in action.

Although we support some flexibility in the itineraries, we acknowledge that our solution probably doesn’t account for the complexity and variety of runners’ personal preferences, including the desired balance between the amount of guidance and freedom of exploration. These are some of the directions that our multidisciplinary team of both optimization and user-experience specialists is currently investigating.

References

  1. Investigating and Supporting Undirected Navigation for Runners. David K. McGookin and Stephen Anthony Brewster. Extended Abstracts on Human Factors in Computing Systems (CHI EA ’13), 2013, pp. 1395–1400.
  2. Situationally Induced Impairment in Navigation Support for Runners. Shreepriya Shreepriya, Danilo Gallo, Sruthi Viswanathan and Jutta Willamowski. Conference on Human Factors in Computing Systems (CHI ’19), Glasgow, Scotland, 4–9 May 2019.
  3. Addressing the Challenges of Situationally-Induced Impairments and Disabilities in Mobile Interaction. Garreth W. Tigwell, Zhanna Sarsenbayeva, Benjamin M. Gorman, David R. Flatla, Jorge Gonçalves, Yeliz Yeşilada and Jacob O. Wobbrock. Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI EA ’19), paper no. W30, 2019, pp. 1–8.
  4. Running Tour Generation for Unknown Environments. Jutta Willamowski, Stephane Clinchant, Christophe Legras, Sofia Michel and Shreepriya Shreepriya. 21st International Conference on Human–Computer Interaction (HCII 2019), Orlando, FL, 26–31 July 2019, pp. 528–535.
  5. Supporting Natural Navigation for Running in Unknown Places. Shreepriya Shreepriya, Jutta Willamowski and Danilo Gallo. ACM Designing Interactive Systems Conference (DIS ’19), San Diego, CA, 2019, 23–28 June, pp. 277–281.
  6. Greedy Randomized Adaptive Search Procedures. Thomas A. Feo and Mauricio G. C. Resende. Journal of Global Optimization, vol. 6, no. 2, 1995, pp. 109–133.
  7. RunAhead: Exploring Head Scanning Based Navigation for Runners. Danilo Gallo, Shreepriya Shreepriya and Jutta Willamowski. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20), 25–30 April 2020, pp. 1–13. DOI: 10.1145/3313831.3376828.