A novel approach to indoor localization that uses magnetic field data from smartphone sensors and deep learning
One of the reasons smartphones have become such an integral part of our lives is that their built-in sensors provide information that can be used to create novel user applications. A large number of these applications, such as step-by-step navigation, mobility solutions, personalized advertisements and assistance to the blind and the elderly, rely on a person's location (1). These location-based services depend on positioning technology that needs to be robust in both outdoor and indoor spaces. Although navigation satellite systems such as GPS provide reliable outdoor positioning, a corresponding solution has yet to be found for indoor environments, whose walls GPS signals have difficulty penetrating.
Numerous techniques for smartphone-based indoor positioning have been developed over the years as substitutes for GPS. Most have individual strengths and weaknesses depending on the conditions in which they operate. When combined, they can improve both the accuracy and reliability of a service (2), but no single solution can guarantee a reliable, universal service by itself.
The most common approach to indoor positioning is to use existing Wi-Fi access points and/or dedicated Bluetooth beacons. The basic idea is to use the received radio signal strength as a measure of proximity. Because these systems depend on infrastructure, they require a large number of access points and broad coverage to achieve a good level of performance. Fluctuations in the received signals often lead to inaccurate position estimates, with a typical localization error of 2 to 3 metres.
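To make the proximity idea concrete, here is a minimal Python sketch of the standard log-distance path-loss model often used to turn a received signal strength (RSSI) reading into a rough distance estimate. The reference power and path-loss exponent below are illustrative assumptions; in practice they must be calibrated for each site.

```python
# Minimal sketch: estimating distance from an RSSI reading with the
# standard log-distance path-loss model. The parameters are assumed,
# illustrative values and must be calibrated per environment.

def rssi_to_distance(rssi_dbm: float,
                     rssi_at_1m: float = -40.0,  # power measured at 1 m (assumed)
                     n: float = 2.5) -> float:   # path-loss exponent (assumed)
    """Invert the log-distance model: RSSI = RSSI_1m - 10 * n * log10(d)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * n))

# Example: a -70 dBm reading maps to roughly 16 m with these parameters,
# but multipath and attenuation make single readings unreliable in practice.
print(f"{rssi_to_distance(-70.0):.1f} m")
```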
The source of the Earth's magnetic field, also known as the 'geomagnetic' field, lies inside the planet and extends outwards into space. This pervasive environmental feature exists at every point on the planet, with varying intensity. It is relatively constant over short distances, yet disturbances induced by ferromagnetic objects such as walls, doors and other structural elements create unique magnetic signatures that can be used to determine a position.
We can easily capture the values of the magnetic field along the X, Y and Z axes at a high rate of 50–100 Hz using the magnetic sensor (magnetometer) in a smartphone. However, because it's a smartphone sensor, the data comes in the phone's coordinate frame, which means that, even if you stand still, the reported magnetic values along the X, Y and Z axes will differ depending on whether you're holding the phone up in front of you or it's upside down in your pocket. To make this sensor data usable, we need to convert the recorded values into a global Earth coordinate frame, also known as NED (North-East-Down) or ENU (East-North-Up). This requires knowing the orientation of the phone in space, which can be calculated using the other IMU (Inertial Measurement Unit) sensors, namely the accelerometer and gyroscope. The Madgwick algorithm is commonly used for this calculation, but many other algorithms exist that may perform better depending on the conditions (5).
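As an illustration of the frame conversion, here is a minimal numpy sketch. It assumes an orientation quaternion has already been estimated by an attitude filter such as Madgwick's from the accelerometer and gyroscope data; the (w, x, y, z) quaternion convention and the example values are assumptions.

```python
import numpy as np

def quat_to_rotmat(q: np.ndarray) -> np.ndarray:
    """Rotation matrix from a unit quaternion in (w, x, y, z) convention."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# m_phone: raw magnetometer reading in the phone's coordinate frame (µT).
# q: orientation quaternion estimated by an attitude filter (e.g. Madgwick);
#    the values here are hypothetical.
m_phone = np.array([12.3, -4.1, -38.2])
q = np.array([0.98, 0.05, -0.12, 0.15])

# Rotate the reading into the global (Earth) frame so that it no longer
# depends on how the user is holding the phone.
m_earth = quat_to_rotmat(q) @ m_phone
print(m_earth)
```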
Once we have the readings in the Earth coordinate frame, an offline step of data collection and pre-processing is required to create a magnetic map of the building. Essentially, someone needs to walk through the building and measure the magnetic field along their path. These magnetic values can be visualized as a heat map, as in Figure 2 below. The heat map also shows areas with lower and higher values, which is good for navigation purposes: the more distinct the features, the 'easier' it is to locate the user.
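As a rough illustration of this mapping step (not our exact pipeline), the sketch below averages survey measurements into a regular grid that can then be rendered as a heat map. The input format, with walked (x, y) positions and field magnitudes, is an assumption.

```python
import numpy as np

def build_magnetic_map(xs, ys, magnitudes, cell_size=0.5):
    """Average survey measurements into a 2D grid (a simple magnetic map).

    xs, ys     : walked positions in metres (assumed already georeferenced)
    magnitudes : |B| in µT at each position
    cell_size  : grid resolution in metres
    """
    xs, ys, magnitudes = map(np.asarray, (xs, ys, magnitudes))
    ix = ((xs - xs.min()) / cell_size).astype(int)
    iy = ((ys - ys.min()) / cell_size).astype(int)
    grid_sum = np.zeros((iy.max() + 1, ix.max() + 1))
    grid_cnt = np.zeros_like(grid_sum)
    np.add.at(grid_sum, (iy, ix), magnitudes)
    np.add.at(grid_cnt, (iy, ix), 1)
    with np.errstate(invalid="ignore"):
        return grid_sum / grid_cnt  # NaN where the surveyor never walked

# Hypothetical survey: three measurements along a short walk. The result
# can be rendered as a heat map, e.g. with matplotlib's plt.imshow(grid).
grid = build_magnetic_map(xs=[0.0, 0.4, 1.2], ys=[0.0, 0.1, 0.3],
                          magnitudes=[48.2, 51.7, 45.9])
print(grid)
```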
There are several ways to localize a user on the map based on magnetic sensor readings. We have developed a novel method that converts a 1D time series of magnetic readings into 2D images (3). We then use a Convolutional Neural Network (CNN) to extract hidden features and patterns and regress the user's position.
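To give a flavour of this idea, here is a minimal sketch rather than our exact pipeline: it encodes a window of readings as a Gramian Angular Field image, one of the time-series-to-image encodings proposed in (3), and feeds it to a small PyTorch CNN that regresses an (x, y) position. The layer sizes and window length are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    """Encode a 1D window as a 2D GAF image (Wang & Oates, 2015)."""
    x = (2 * (x - x.min()) / (x.max() - x.min() + 1e-9)) - 1  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1, 1))                        # angular encoding
    return np.cos(phi[:, None] + phi[None, :])                # summation GAF image

class MagCNN(nn.Module):
    """Tiny CNN regressing an (x, y) position from a 3-channel GAF image,
    one channel per magnetic axis. Layer sizes are illustrative."""
    def __init__(self, window: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * (window // 4) ** 2, 2)

    def forward(self, img):
        return self.head(self.features(img).flatten(1))

# Usage: stack the GAF of each axis into a 3-channel image and regress.
window = 64
mag = np.random.randn(3, window)  # placeholder for Earth-frame readings
img = np.stack([gramian_angular_field(axis) for axis in mag])
pos = MagCNN(window)(torch.tensor(img, dtype=torch.float32).unsqueeze(0))
print(pos.shape)  # torch.Size([1, 2]) -> predicted (x, y)
```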
We learned that, occasionally, two subsequent magnetic readings may be mapped to two locations that are far away from each other. This happens when the two locations exhibit similar "magnetic patterns". Without additional information, the algorithm becomes "confused" and has difficulty determining the correct location. Recurrent Neural Networks (RNNs) can help overcome this by giving the localization pipeline more context, i.e. a temporal dimension, and by "constraining" subsequent predictions to be close to each other.
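Here is a minimal sketch of this temporal idea, again with assumed dimensions: an LSTM consumes a sequence of per-step feature vectors (for example, CNN embeddings of the images above) and predicts a position at every step, with the hidden state carrying the history that disambiguates places with similar magnetic patterns.

```python
import torch
import torch.nn as nn

class MagLSTM(nn.Module):
    """Sequence model: per-step features in, per-step (x, y) positions out.
    The hidden state carries history, which helps disambiguate two distant
    locations that happen to share similar magnetic patterns."""
    def __init__(self, feat_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, feats):          # feats: (batch, time, feat_dim)
        out, _ = self.lstm(feats)
        return self.head(out)          # (batch, time, 2)

# Usage with hypothetical shapes: 20 time steps of 128-d features.
seq = torch.randn(1, 20, 128)
print(MagLSTM()(seq).shape)  # torch.Size([1, 20, 2])
```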
Our approach has been tested on the publicly available MagPIE dataset (4) with promising results. This dataset was collected at the University of Illinois and covers three buildings. The resulting localization error was in the range of 0.4 m to 1.1 m; more details are available in our papers (6, 7).
Although the results were good in an "experimental" setting, the real world presents more challenges: a multitude of devices, different sensor manufacturers, uncalibrated sensors and moving objects, such as elevators, that can affect the magnetic field. Nevertheless, we believe this approach has many benefits. It requires no additional infrastructure, it is more stable than Wi-Fi signals, and it is more robust and more privacy-friendly than localization based on phone camera images, so it can greatly improve indoor localization methods for better services. We're currently experimenting with the more advanced 'Transformer' architectures, which have shown improvements over RNNs and LSTMs in other domains, and after that we plan to fuse magnetic-based positioning with visual positioning to further improve accuracy.
1. Location-Based Services (LBS) and Real-Time Location Systems (RTLS) Market by Component (Platform, Services and Hardware), Location Type (Indoor and Outdoor), Application, Vertical, Region – Global Forecast to 2025. Markets and Markets, June 2020.
2. The IPIN 2019 Indoor Localisation Competition – Description and Results, F. Potortì et al., 2019.
3. Imaging Time-Series to Improve Classification and Imputation, Z. Wang and T. Oates, Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI), 2015.
4. Magnetic Positioning Indoor Estimation Dataset (MagPIE), BRETL Research Group.
5. On Attitude Estimation with Smartphones, INRIA.
6. Magnetic sensor-based indoor positioning by multi-channel deep regression, L. Antsfeld, B. Chidlovskii, D. Borisov, ACM Conference on Embedded Network Sensor Systems (SenSys), Yokohama, Japan, 16-19 November, 2020.
7. Magnetic field sensing for pedestrian and robot indoor positioning, L. Antsfeld, B. Chidlovskii, International conference on Indoor Positioning and Indoor Navigation (IPIN) 2021, Lloret de Mar, Spain, 29 November – 2 December, 2021 (to appear).