Blog author: Luis R. Ulloa
If you live in a city you’re probably spoilt for choice on how to get around – public transport, bike, car sharing, ride sharing or taxi. Yet, despite being faced with all these options, most people still jump in their car to get to their destination. They do so because it’s simple. The effort it takes to figure out the best way to travel with different operators and modes is so big that most people shy away from the complexity and prefer to use what they’re most familiar with. And they do so even though they know it’s not the most economical, eco-friendly, fastest or even most enjoyable choice.
To tackle this complexity and encourage more sustainable choices, a number of city trip planning apps have appeared to help people organize their travel in the same way GPS navigation systems help drivers on the road. However, building a reliable, easy-to-use trip planning app is a challenge: it must meet some basic requirements before people will give the car the definitive boot.
Dynamic, personal and transparent
For a travel app to become the first step in any trip it must be able to do three things – provide the full array of travel options available, incorporate reliable real-time information on the fly and learn individual travel preferences to make planning fast and smooth. Making these three capabilities a reality has been the focus of Xerox’s European research labs in creating the trip planner engine at the core of ‘Mobility Companion’.
To illustrate what’s behind multimodal, multi-provider planning, let’s go back to the example of the GPS. To model the different routes a vehicle can take, the GPS uses graphs. This works perfectly well if you stay in a single vehicle, but the graph gets too big to handle if you start switching modes. If, for example, you decide to leave your car next to a bus stop and continue your travel via public transit, generating a graph with all the possible bus stops and all the possible options of travel from there is extremely challenging. Moreover, most travel options, and in particular scheduled services such as public transport, require not only an origin and destination but also a time when the service is available.
Altogether these options create a combinatorial complexity that would very quickly explode the size of a graph representation of the network.
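To get a feel for the scale involved, here is a minimal back-of-the-envelope sketch in Python. All network sizes are invented, illustrative values; the point is how fast the node count grows once every (stop, departure time) pair and every mode switch needs its own representation.

```python
# Back-of-the-envelope sketch of the combinatorial blow-up.
# All figures are invented, illustrative values, not real network sizes.

intersections = 50_000        # road-only graph: one node per intersection
stops = 5_000                 # transit stops in the same city
departures_per_stop = 200     # scheduled departures per stop per day
modes = 4                     # e.g. walk, bus, tram, bike share

road_nodes = intersections
# In a time-expanded model, every (stop, departure time) pair is a node:
time_expanded_nodes = stops * departures_per_stop
# Allowing mode switches multiplies the connecting edges further:
mode_switch_edges = time_expanded_nodes * modes

print(f"road-only graph nodes:       {road_nodes:>12,}")
print(f"time-expanded transit nodes: {time_expanded_nodes:>12,}")
print(f"edges with mode switches:    {mode_switch_edges:>12,}")
```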
To address this complexity, a common approach is to simplify the problem by simply ignoring some options. Another way is to allow travellers to switch between modes only at predetermined, specific ‘hubs’ and not just anywhere. This aggregates results at only a few connection points, but it has the same problem as the previous solution in that it excludes a number of options. A third approach is to quite simply ignore the time dimension. None of these solutions is ideal and, in some cases, they may even generate itineraries which don’t make any sense, encouraging the traveller to once again reach for the car keys.
One way to get around the problem of calculating all possible options is to ‘pre-compute’ travel paths. The speed this gives does, however, have two drawbacks: first, it requires expensive storage and computing infrastructure; second, it cannot integrate any changes to network information, as the preprocessing may require hours or days of computation.
Our approach is based on a completely different way of modelling the network and an algorithm that allows us to calculate all possible combinations of transport modes without relying on any preprocessing for the parts of the trip that are subject to constant change, in particular public transport schedules.
Only a few transport services are able to provide ‘real time’ information to passengers within a timeframe that is acceptable to the user.
Even when this information is available, most apps fall victim to the preprocessing limitations described earlier and can’t integrate the information at the time of a trip planning request. Very often the trip planner user interface will just mention delays or changes associated with a leg of the itinerary without actually integrating them. The traveller needs to do the calculation themselves.
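As a rough illustration, the sketch below shows what it means to fold a live delay into the timetable before planning rather than merely displaying it. The `schedule` and `delay_feed` structures are hypothetical stand-ins, not the actual Mobility Companion data model.

```python
from datetime import datetime, timedelta

# Hypothetical timetable: (stop, line) -> scheduled departure time.
schedule = {
    ("Central Station", "Bus 12"): datetime(2015, 6, 1, 8, 30),
    ("Central Station", "Tram T1"): datetime(2015, 6, 1, 8, 35),
}

# Hypothetical live feed: (stop, line) -> current delay in minutes.
delay_feed = {
    ("Central Station", "Bus 12"): 7,
}

def effective_departure(stop, line):
    """Apply the live delay before planning, so the planner reasons
    about the delayed departure instead of just flagging it."""
    base = schedule[(stop, line)]
    delay = delay_feed.get((stop, line), 0)
    return base + timedelta(minutes=delay)

for stop, line in schedule:
    print(stop, line, "->", effective_departure(stop, line).strftime("%H:%M"))
```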
As more and more real-time information becomes available from mobility service systems, we believe our system will be unique in proposing valid alternative travel solutions to its users when unforeseen delays and incidents occur on the move.
Multimodal trip planning represents a whole new challenge in visually presenting results to users. When you plan a trip with your car you basically choose between the fastest or the cheapest route in the satnav. In a multimodal trip, the set of options and user preferences is much bigger and more complex to communicate.
You may prefer one type of service or mode of transport over another (e.g. tram vs bus) and want to completely avoid others. You’ll also typically want to consider criteria other than price and arrival time, such as the number of transfers, the frequency of the service, the walking distance or the waiting time between connections. The fact that you’re going to prefer one option from a set of alternatives relies on a complex, context-dependent combination of criteria that may be totally unpredictable, such as the weather, the other constraints you have that day, why you’re travelling in the first place or the mood that you happen to be in.
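One simple way to picture such a preference model is as a weighted cost over criteria, with weights that shift with context. The criteria, weights and itineraries below are invented for illustration only, not the engine’s actual model.

```python
# Illustrative sketch: a context-dependent weighted cost over criteria.

def itinerary_cost(itin, weights):
    """Lower is better; each criterion is scaled by a user/context weight."""
    return (weights["time"] * itin["minutes"]
            + weights["price"] * itin["euros"]
            + weights["walk"] * itin["walk_minutes"])

itineraries = [
    {"name": "walk + bus", "minutes": 34, "euros": 1.8,  "walk_minutes": 12},
    {"name": "bike share", "minutes": 28, "euros": 2.5,  "walk_minutes": 2},
    {"name": "taxi",       "minutes": 18, "euros": 14.0, "walk_minutes": 0},
]

# The same traveller may weight the criteria very differently by context:
rainy = {"time": 1.0, "price": 1.0, "walk": 4.0}   # avoid walking in the rain
sunny = {"time": 1.0, "price": 3.0, "walk": 0.3}   # happy to walk, watch the price

for label, weights in (("rainy day", rainy), ("sunny day", sunny)):
    best = min(itineraries, key=lambda i: itinerary_cost(i, weights))
    print(f"{label}: {best['name']}")   # rainy day: taxi / sunny day: bike share
```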
In proposing options, the usual method is to submit several queries to the trip planning engine with some predefined combinations of transportation modes and services. This is designed to return the best result for each combination and allows the user to sort the results criterion by criterion, e.g. cheapest, fastest and so on.
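A rough sketch of that conventional pattern follows, with `plan_trip` as a hypothetical stand-in for a single engine query (not a real API) returning canned values.

```python
# Conventional approach sketch: one query per predefined mode combination,
# then let the user sort the per-combination winners criterion by criterion.

PREDEFINED_COMBINATIONS = [
    ("walk", "bus"),
    ("walk", "tram"),
    ("bike_share",),
    ("walk", "bus", "tram"),
]

def plan_trip(origin, dest, modes):
    # Hypothetical stand-in for one engine query; returns the best
    # itinerary found for this fixed combination (canned values here).
    canned = {
        ("walk", "bus"):         {"minutes": 38, "euros": 1.8},
        ("walk", "tram"):        {"minutes": 33, "euros": 1.8},
        ("bike_share",):         {"minutes": 27, "euros": 2.5},
        ("walk", "bus", "tram"): {"minutes": 30, "euros": 1.8},
    }
    return {"modes": " + ".join(modes), **canned[modes]}

results = [plan_trip("A", "B", combo) for combo in PREDEFINED_COMBINATIONS]

print("fastest: ", min(results, key=lambda r: r["minutes"])["modes"])
print("cheapest:", min(results, key=lambda r: r["euros"])["modes"])
```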
The Mobility Companion engine uses a generic model that considers all possible combinations of modes in a single query. This makes it easy to configure the specific user criteria and easily integrate new services or city information as they become available. Yet another mechanism provides alternative paths using the same combination of transport modes to propose different travel experiences.
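By contrast, one way a single-query engine can return diverse alternatives rather than a single “best” answer per combination is to keep every candidate that is not strictly worse than another on all criteria. Below is a minimal sketch of such Pareto filtering, with invented itineraries and criteria; it is an illustration of the idea, not the engine’s actual mechanism.

```python
# Minimal Pareto-filtering sketch over candidate itineraries.
# Itineraries and criteria are invented for illustration.

CRITERIA = ("minutes", "euros", "transfers")  # lower is better for all

def dominates(a, b):
    """a dominates b if it is no worse on every criterion and strictly
    better on at least one."""
    return (all(a[k] <= b[k] for k in CRITERIA)
            and any(a[k] < b[k] for k in CRITERIA))

def pareto_front(candidates):
    return [a for a in candidates
            if not any(dominates(b, a) for b in candidates if b is not a)]

candidates = [
    {"name": "bus + tram", "minutes": 34, "euros": 1.8,  "transfers": 1},
    {"name": "bus + bus",  "minutes": 41, "euros": 1.8,  "transfers": 1},  # dominated
    {"name": "bike share", "minutes": 28, "euros": 2.5,  "transfers": 0},
    {"name": "taxi",       "minutes": 18, "euros": 14.0, "transfers": 0},
]

for itin in pareto_front(candidates):
    print(itin["name"])   # bus + tram, bike share, taxi
```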
To be autonomous in real-world everyday spaces, robots should be able to learn from their interactions within these spaces how best to execute tasks specified by non-expert users in a safe and reliable way. Doing so requires sequential decision-making skills that combine machine learning, adaptive planning and control in uncertain environments, as well as solving hard combinatorial optimization problems. Our research combines expertise in reinforcement learning, computer vision, robotic control, sim2real transfer, large multimodal foundation models and neural combinatorial optimization to build AI-based architectures and algorithms that improve robot autonomy and robustness when completing complex everyday tasks in constantly changing environments.
For a robot to be useful, it must be able to represent its knowledge of the world, share what it learns and interact with other agents, in particular humans. Our research combines expertise in human-robot interaction, natural language processing, speech, information retrieval, data management and low-code/no-code programming to build AI components that will help next-generation robots perform complex real-world tasks. These components will help robots interact safely with humans, their physical environment, other robots and other systems, and represent, update and share their world knowledge with the rest of the fleet.
Visual perception is a necessary part of any intelligent system that is meant to interact with the world. Robots need to perceive the structure, objects and people in their environment to better understand the world and perform the tasks they are assigned. Our research combines expertise in visual representation learning, self-supervised learning and human behaviour understanding to build AI components that help robots understand and navigate their 3D environment, detect and interact with surrounding objects and people, and continuously adapt when deployed in new environments.
Details on the 2024 gender equality index score (based on 2023 data) for NAVER France: 87/100.
Indicator details:
1. Difference in female/male salary: 34/40 points
2. Difference in individual salary increases between women and men: 35/35 points
3. Salary increases upon return from maternity leave: not calculable
4. Number of employees of the under-represented gender among the 10 highest salaries: 5/10 points
The NAVER France targets set in 2022 (Indicator n°1: +2 points in 2024 and Indicator n°4: +5 points in 2025) have been achieved.
The research we conduct on expressive visual representations is applicable to visual search, object detection, image classification and the automatic extraction of 3D human poses and shapes that can be used for human behavior understanding and prediction, human-robot interaction or even avatar animation. We also extract 3D information from images that can be used for intelligent robot navigation, augmented reality and the 3D reconstruction of objects, buildings or even entire cities.
Our work covers the spectrum from unsupervised to supervised approaches, and from very deep architectures to very compact ones. We’re excited about the promise of big data to bring big performance gains to our algorithms but also passionate about the challenge of working in data-scarce and low-power scenarios.
Furthermore, we believe that a modern computer vision system needs to be able to continuously adapt itself to its environment and to improve itself via lifelong learning. Our driving goal is to use our research to deliver embodied intelligence to our users in robotics, autonomous driving, via phone cameras and any other visual means to reach people wherever they may be.