Mobile payments are destined to replace cash, credit cards and even smart cards. Few people would deny this. But what are the pros and cons of the different technical solutions that make mobile payment systems work?
This question is especially important in transportation, where payment systems must be fast, reliable and secure while keeping overhead costs and installation time low.
Payment solutions come in two basic types. The first uses the mobile phone as a ticket. The second, more disruptive approach uses the mobile phone as a payment terminal.
In the first case, the idea is simply to store the equivalent of your traditional ticket or smart card within the phone. You tap the phone on a terminal to validate your travel. The phone tap doesn’t really change your experience because you must still buy something before you travel. Alternatively, to avoid purchasing a ticket at all, the phone can be used as a credit card, with the terminal processing the payment.
These approaches have a major drawback. When NFC (Near Field Communication) technology is used in the phone this way, every transportation provider must work with every phone operator to sort out how the phone will exchange the data, and the two parties also have to agree on the fee the phone operator charges for the service. When the phone is used as a credit card, the terminal requires extra certification and a server connection, and on top of that you’ve got bank charges.
Despite these constraints, these approaches have been the most heavily promoted up to now. Less surprising has been the very low adoption rate and the low return on investment for transportation operators.
The second approach turns the phone itself into the payment terminal. NFC smart tags are installed at the point of payment, such as a bus stop or inside a vehicle. As in the solution described above, the traveler taps the phone on the tag. The phone connects to the payment system’s central server to report the transaction, and the operator subsequently bills the customer. The traveler no longer needs to buy and download a ticket before traveling, and for operators, NFC smart tags are quick, low cost and very easy to deploy.
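As a rough illustration only (the operators’ real APIs are not public), the sketch below shows why this model stands or falls with connectivity: the tap only counts if the phone can reach the server at that moment. The endpoint, payload and the `requests` dependency are all assumptions.

```python
# A minimal sketch, assuming a hypothetical REST endpoint and the third-party
# `requests` library. It illustrates the online-validation model described
# above: no server connection, no validated transaction.
import requests

SERVER_URL = "https://operator.example.com/api/transactions"  # placeholder

def validate_trip_online(tag_id: str, user_id: str) -> bool:
    """Report the tap to the central server so the traveler can be billed."""
    try:
        resp = requests.post(
            SERVER_URL,
            json={"tag": tag_id, "user": user_id},
            timeout=5,
        )
        return resp.status_code == 200   # billed successfully
    except requests.RequestException:
        return False                     # weak or no signal: the tap simply fails
```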
Yet few operators offer this solution, because the phone requires an Internet connection to validate the transaction. When the connection is weak or non-existent (say at a rural station or bus stop, or in a mass transit station with large crowds), processing the transaction is either too slow or impossible. Deploying the solution in such conditions means either accepting massive fraud from travelers who don’t pay, or being unable to operate the service at all if payment must be confirmed before travel.
Xerox researchers have developed a different solution that uses our expertise in how information is stored, carried and shared. The Xerox Seamless™ Transportation Solution offers transportation operators a system that is compatible with existing ticketing infrastructures, and allows travellers to use their NFC smartphone to tap and pay for their travel. It’s based on a patented invention that allows smartphones to operate as a channel between smart tags and a server.
Our European research team came up with a way for smart tags to perform advanced security/encryption operations and to store transactions. This gives transportation operators a secure mobile payment solution that’s not only seamless for commuters, it’s also cost effective and reliable. Here’s how it works:
Let’s say John wants to use his smartphone instead of buying tickets for his daily commute on the underground railway. He downloads an app to his NFC smartphone, then creates a single account he can use to pay for all of his transportation services with his phone. His smartphone stores an encrypted user certificate that allows him to pay. When he taps the smart tag at the station, the tag collects the user certificate from John’s smartphone and verifies that he is an authorized user of the mobile payment ticketing system. The tag then generates an encrypted certificate of the transaction, stores it in its local memory and hands it to John’s smartphone.
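The patented design itself isn’t public, but a minimal sketch helps make the tag’s role concrete: verify the traveler’s certificate, build a transaction certificate that only the server can decrypt, keep a copy in local memory and hand it back to the phone. The keys, data formats and the use of HMAC and Fernet below are illustrative assumptions (Fernet comes from the third-party `cryptography` package), not the actual Xerox implementation.

```python
# Hypothetical tag-side logic: check the traveler's certificate, emit an
# encrypted transaction certificate, and keep a local copy for relaying.
import hmac, hashlib, json, time
from cryptography.fernet import Fernet  # assumed symmetric cipher

OPERATOR_MAC_KEY = b"shared-secret-for-user-certs"  # placeholder key
SERVER_ENC_KEY = Fernet.generate_key()              # in practice provisioned and shared with the server

class SmartTag:
    def __init__(self, tag_id: str):
        self.tag_id = tag_id
        self.local_memory = []  # recent transactions, also handed to later phones for relaying

    def verify_user_certificate(self, cert: dict) -> bool:
        """Check the MAC the operator placed on the user certificate."""
        expected = hmac.new(OPERATOR_MAC_KEY, cert["user_id"].encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, cert["signature"])

    def tap(self, cert: dict) -> bytes | None:
        """Handle a tap: return an encrypted transaction certificate, or None if rejected."""
        if not self.verify_user_certificate(cert):
            return None
        record = json.dumps({"tag": self.tag_id, "user": cert["user_id"], "ts": time.time()})
        token = Fernet(SERVER_ENC_KEY).encrypt(record.encode())  # readable only by the server
        self.local_memory.append(token)  # kept so other phones can carry it too
        return token
```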
If the station John passes through has network connectivity, the smartphone will push the encrypted transaction certificate to the operator’s server where it is decrypted, and the fare is billed to John. If the station has no connectivity, the process will take place as soon as John moves into an area that has a signal.
If John’s smartphone is hacked so that it never pushes the transactions to the server, he’ll be billed anyway. That’s because each time his phone makes a transaction with a tag, the tag also stores the encrypted transaction in its local memory. John’s transaction will piggyback on the phones of several other travelers who tap the same tag after John. Their phones will take care of sending his transaction to the server. As all transactions are encrypted between the tag and the server, the data being carried by the phones cannot be read.
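Under the same assumptions, the phone side reduces to a store-and-forward queue: the app holds its own encrypted certificate plus any certificates the tag asks it to carry for other travelers, and uploads everything the next time it has a signal. The upload endpoint and payload format below are placeholders, not the real Xerox API.

```python
# Hypothetical phone-side behaviour: queue opaque encrypted certificates
# (our own and relayed ones) and push them to the server when online.
import base64
import requests  # assumed HTTP client

UPLOAD_URL = "https://operator.example.com/api/upload"  # placeholder

class TicketingApp:
    def __init__(self):
        self.outbox = []  # encrypted tokens awaiting upload

    def on_tap(self, own_token: bytes, relayed_tokens: list[bytes]):
        """Store our own transaction plus any transactions the tag asks us to carry."""
        self.outbox.append(own_token)
        self.outbox.extend(relayed_tokens)  # opaque to us; they stay encrypted end to end

    def sync(self) -> bool:
        """Push queued certificates to the server; safe to retry until it succeeds."""
        if not self.outbox:
            return True
        payload = [base64.b64encode(t).decode() for t in self.outbox]
        try:
            resp = requests.post(UPLOAD_URL, json={"certificates": payload}, timeout=10)
        except requests.RequestException:
            return False  # still offline: keep the queue and try again later
        if resp.status_code == 200:
            self.outbox.clear()
        return resp.status_code == 200
```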
This invention overcomes the technical barrier that has, until now, prevented a more disruptive model in mobile payments. The new model offers far more compelling advantages for both users and operators, and we’re confident it’s the start of seamless payment for travelers.
Frédéric Roulland is a Xerox computer scientist who leads the Data Intelligence group at Xerox Research Centre Europe. He is the European champion for transportation research activities at Xerox.