Hervé Déjean | 2020
A great deal of human history is detailed in handwritten documents that have yet to be analysed. Extracting information from such documents has, until recently, represented an extremely time-consuming and labour-intensive task. For this reason, there remains a treasure trove of untapped information that could help provide insight into, for example, the impact of industrialization on populations. Indeed, some of these records—such as those documenting epidemics—may provide useful context for our understanding of modern problems like the COVID-19 pandemic. Is there a way that we can glean and subsequently analyse data from these handwritten records such that we can quickly spot trends and draw conclusions?
Previous approaches to this problem have generally focused on extracting information from modern (printed or digital) documents. Automatic extraction from archival (handwritten) documents presents a much more nuanced challenge and has only been possible for the last 2–3 years (1). In 2019, we co-organized the first competition for table recognition (TR) with both archival and printed documents, with good results (2). Work has been carried out to extract geographical information from archival geometrical surveys (3), and work on machine-learning systems to digitize handwritten material by other groups is ongoing (4). However, as far as we know, we are the first to develop a platform based on this technology for public use.
Our demonstrator, called Vital Records, has been developed to parse and analyse large volumes of handwritten data. The project forms part of the EU Horizon 2020 project READ, which aims to revolutionize access to archival documents. The Vital Records demonstrator, based on a collection of 80,000 pages, brings to life records that were handwritten by more than 700 different priests from 200 parishes in Germany from 1848 to 1878. The demo illustrates how state-of-the-art deep-learning methods—handwritten text recognition (HTR), TR and information extraction—can be used to transform these records into a digital format that can be queried and visualized in different ways to enrich our knowledge from previously unexplored sources of information.
Vital Records allows users to browse and visualize the data set extracted from these German records using spatio-temporal criteria, in addition to the usual search queries. From here on, some knowledge of German (and even old German) may be useful!
DISCLAIMER: The following examples are used to illustrate what’s possible. They have not been scientifically validated by social historians.
Death records provide a range of useful information. As well as the name, age, date and cause of death, the records include the profession of the deceased. With this information in hand, we are able to visualize trends over a given period. For example, we used Vital Records to trace the evolution of professions between 1847 and 1877. The resulting graph, shown in Figure 1, shows the number of deaths recorded for which the field of profession (Stand) contains weaver (Weber), shoemaker (Schumacher) and miller (Müller). The graph may be considered a proxy for estimating the evolution of these professions during this period. It’s possible, too, to spot locally relevant characteristics. For example, data extracted from these records shows a high number of glassmakers (Glasmacher) around Zwiesel, a region that remains well known for its production of glassware to this day.
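The aggregation behind a graph like Figure 1 is straightforward: filter the extracted records on the profession field and count matches per year. Below is a minimal sketch of that kind of query; the column names are illustrative, not the demo's actual schema.

```python
# Minimal sketch of the aggregation behind Figure 1: count death records
# per year whose profession field (Stand) contains a given string.
# Column names here are illustrative, not the demo's actual schema.
import pandas as pd

records = pd.DataFrame({
    "year": [1850, 1850, 1851, 1851, 1852],
    "stand": ["Weber", "Müller", "Weber", "Schumacher", "Weber"],
})

for job in ["Weber", "Schumacher", "Müller"]:
    counts = (records[records["stand"].str.contains(job, case=False)]
              .groupby("year").size())
    print(job, counts.to_dict())
```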
In addition to the visualization of data using graphs, Vital Records makes possible the development of temporal animations for which the time step can be adjusted by year, month or day. By combining both spatial and temporal information in this way, we are able to visualize the evolution of a given query over space and time.
Our favourite query is the spread of scarlet fever in 1871 as illustrated in the video below:
It is also possible to track specific events, such as the opening of a hospital in a parish or the arrival of a train line. You may even query why, for example, Dr Kufner travelled from Osterhofen to Passau in June 1867; or what happened to railway workers Franzesk, Georg and Luigi in May 1876 in Regen. Answers to questions like these could quickly help solve longstanding puzzles in family trees. (Note: Searching for professions containing the string bahn will provide you with a map of three Bahnstrecke, or train lines: Passau–Regensburg, the Bavarian Forest Railway and München–Simbach.)
To develop this technology and create the Vital Records demo, NAVER LABS Europe and the Passau Diocesan Archives (ABP) began by transforming handwritten tables—like the one shown in Figure 2—into a digital format. The field of handwritten document processing has improved significantly over the past few years, thanks to the neural network paradigm (i.e. pattern-recognition algorithms loosely inspired by the human brain). First, we used the automated recognition and transcription platform Transkribus (part of the READ project, with over 30,000 users) to transcribe 1000 pages from the German records. We then used these transcriptions to train an HTR model that could automatically digitize the information from the Death, Birth and Wedding records of the archive. We found the character error rate to be around 10%, meaning that on average one letter in every 10 is wrongly recognized.
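For readers unfamiliar with the metric: character error rate is typically computed as the edit distance between the recognized text and the ground-truth transcription, divided by the length of the ground truth. Here is a minimal sketch of that computation; the function names are ours, not part of Transkribus.

```python
# Minimal sketch of character error rate (CER): edit distance between the
# HTR output and the reference transcription, divided by reference length.
# Function names are illustrative, not part of Transkribus.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def cer(hypothesis: str, reference: str) -> float:
    """CER of ~0.10 means roughly one character in ten is wrong."""
    return levenshtein(hypothesis, reference) / max(len(reference), 1)

print(cer("Schumacher", "Schuhmacher"))  # one missing letter -> ~0.09
```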
Next, we focused on converting an image of a table into a spreadsheet. Understanding a page means grasping the layout, the relations between textual elements within the page and so on. Although such ‘table understanding’ remains a challenge, it’s one that can now be addressed with deep neural network (DNN) technology. We developed a DNN that would learn how to organize a set of lines such that the information could be arranged into table rows. To achieve this, we enlisted the help of a graph convolutional network (4,5). Using the ABP collection (1000 pages containing tables that had been annotated to train the models), our system was able to recognize nine out of 10 table rows.
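As a rough illustration of the idea (not the actual architecture from (4, 5)), one can picture each text line as a graph node carrying geometric features, with a graph-convolution step followed by a pairwise classifier that decides whether two lines belong to the same table row. A minimal sketch:

```python
# Minimal sketch of the idea behind the table-row model: a graph over text
# lines, one graph-convolution step, and a pairwise "same row?" classifier.
# This is an illustration, not the architecture actually used in (4, 5).
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    def __init__(self, in_dim: int = 4, hid_dim: int = 32):
        super().__init__()
        self.gc = nn.Linear(in_dim, hid_dim)   # graph conv: A_hat @ X @ W
        self.edge = nn.Linear(2 * hid_dim, 1)  # scores a pair of nodes

    def forward(self, feats, adj, pairs):
        # Row-normalize the adjacency (with self-loops) so each node
        # averages over its spatial neighbours.
        a_hat = adj + torch.eye(adj.size(0))
        a_hat = a_hat / a_hat.sum(dim=1, keepdim=True)
        h = torch.relu(self.gc(a_hat @ feats))
        # Concatenate the embeddings of each candidate pair and predict
        # the probability that both text lines belong to the same row.
        e = torch.cat([h[pairs[:, 0]], h[pairs[:, 1]]], dim=1)
        return torch.sigmoid(self.edge(e)).squeeze(-1)

# Toy usage: 3 text lines described by (x, y, width, height), 2 candidate edges.
feats = torch.tensor([[0.1, 0.2, 0.3, 0.05],
                      [0.5, 0.2, 0.3, 0.05],   # same vertical band as line 0
                      [0.1, 0.6, 0.3, 0.05]])  # further down the page
adj = torch.tensor([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
pairs = torch.tensor([[0, 1], [0, 2]])
print(TinyGCN()(feats, adj, pairs))  # untrained scores for "same row"
```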
We didn’t know whether the 10% error rate of the HTR and the TR model was sufficient for the task. This uncertainty is what originally seeded the idea that led to the development of Vital Records. Building a user interface that enables experts to search the collection through space and time would provide an answer to the question, ‘Are the results sufficiently good to be useful to users of the archive or social historians?’
Once the information in the table had been extracted (see Figure 3), we applied a standard state-of-the-art named-entity recognition tool trained on synthetic data. Because the textual data in the records is very regular (e.g. names, dates and family situation), we wrote a text generator that allowed us to produce a large quantity of training data, which we then used to train the named-entity recognition tool to recognize each word category (first name, last name, death date, etc.).
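To give a flavour of the approach, here is a minimal, hypothetical generator that emits labelled tokens in the style of a death record. The vocabularies and templates are made up for illustration; the real generator mirrors the phrasing found in the parish records.

```python
# Minimal sketch of a synthetic-record generator for NER training.
# Vocabularies and the template are invented for illustration only.
import random

FIRST = ["Georg", "Anna", "Maria", "Franz"]
LAST = ["Huber", "Maier", "Weber", "Kufner"]
MONTH = ["Januar", "Februar", "März", "Juni"]

def generate_record():
    """Return one synthetic record as (token, label) pairs in BIO style."""
    first, last = random.choice(FIRST), random.choice(LAST)
    day = random.randint(1, 28)
    month, year = random.choice(MONTH), random.randint(1848, 1878)
    return [(first, "B-FIRST_NAME"),
            (last, "B-LAST_NAME"),
            ("gestorben", "O"),          # "died"
            ("am", "O"),
            (f"{day}.", "B-DEATH_DATE"),
            (month, "I-DEATH_DATE"),
            (str(year), "I-DEATH_DATE")]

# Emit a small labelled corpus for training the NER tool.
for _ in range(3):
    print(generate_record())
```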
Finally, we focused on spatio-temporal indexing. Associating each record with a geographical location and a temporal point enables navigation of the data through space and time. We achieved spatial indexing simply, through metadata at the parish level (since we know the parish associated with each book). We then refined this geographical data by indexing the location/address of each person. This required disambiguation, however, because some location names, even at the level of the diocese, are ambiguous: they can refer to various locations or villages (e.g. the location name Oberndorf refers to more than 10 locations in Lower Bavaria and more than 40 in the whole of Bavaria).

The second indexing step (i.e. temporal indexing) requires information to be extracted from each record. Even though there is a dedicated column for this information in every type of record, its extraction and normalization require some processing: the string extracted from the image by the HTR must be normalized and converted to a timestamp that a computer can handle (see Figure 4). Furthermore, although the month and day are usually written in the record itself, the year may be 'factorized' at the page level: it might only appear in the first record of the page, or could even occur a few pages before the one being processed (something a human would have no trouble inferring). Detecting this is still noisy, and we're currently working on improving our model's ability to extract this data.
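As an illustration of the temporal-normalization step, here is a minimal sketch that parses an HTR'd German date string and inherits the year from page context when the record omits it. The parsing rules here are simplified assumptions, not our production pipeline.

```python
# Minimal sketch of temporal normalization: turn an HTR'd German date
# string into an ISO date, inheriting the year from page context when the
# record itself omits it ('factorized' year). Rules are simplified.
import re
from datetime import date

GERMAN_MONTHS = {"januar": 1, "februar": 2, "märz": 3, "april": 4,
                 "mai": 5, "juni": 6, "juli": 7, "august": 8,
                 "september": 9, "oktober": 10, "november": 11, "dezember": 12}

def normalize(raw: str, year_from_context: int) -> date:
    """Parse strings like '14. Juni 1867' or '14. Juni' (year factorized)."""
    m = re.match(r"(\d{1,2})\.?\s+(\w+)\.?\s*(\d{4})?", raw.strip())
    if not m:
        raise ValueError(f"unparseable date: {raw!r}")
    day, month_name, year = m.groups()
    month = GERMAN_MONTHS[month_name.lower()]
    return date(int(year) if year else year_from_context, month, int(day))

print(normalize("14. Juni 1867", year_from_context=1867))  # 1867-06-14
print(normalize("3. März", year_from_context=1871))        # year from page
```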
The first time we showed colleagues the spatio-temporal result for scarlet fever (shown in the video), they said, 'But this is just random data!' There were two reasons for this response: first, the data hadn't been presented with a cumulative view; and second, they couldn't see where the data had originated from. Their response made clear how important it is to provide cumulative information as well as the means to verify it. For this reason, we have implemented an easy way to go back to the original documents from which the information was extracted.
In summary, Vital Records uses information extracted (via HTR and TR machine-learning technology) from archival documents to enable users to visualize trends through history, and even to track specific events. Our demonstrator, built on data from German parish records between 1848 and 1878, showcases these spatio-temporal capabilities.
We have almost finished processing the Birth and Wedding records and will be adding them to the demo soon, along with the ability to search across the three types of records. We hope that one outcome of this upgrade will be the automatic generation of family trees. A further major milestone will be to infer demographic information from these data, such as population size and statistics like birth and death rates. These can only be computed, or estimated, once the three types of records are linked, enabling identification of the same person across all three.
The Vital Records demonstrator is now accessible online. Please try it out and let us know what you think.
If you’d like to explore the data that we used to create Vital Records and are a Transkribus user, just drop me a line and we’ll grant you access to all three collections (Marriages, Births, Deaths) stored in the Transkribus platform.
References
NAVER LABS Europe is a founding member of the European Cooperative Society READ-COOP and continues to work on document processing and information extraction tools. See https://readcoop.eu/ for more information about the READ-COOP.
About the author: Hervé Déjean is a senior research scientist and expert in document recognition and layout. He has created a number of tools that enrich digital documents with semantics and that transform historical handwritten documents into digital ones to help explore our cultural heritage.
The European Union (EU) project READ, which produced Transkribus, received the EU Horizon Impact 2020 award for "outstanding projects that have used their results to provide value for society".