Article written on the occasion of the XRCE 20th anniversary celebration.
…but in fact, the first markup appeared hundreds of years ago.
Irish monks in the sixth century A.D., for whom the Latin of the continent was a foreign language, first introduced one of the most important tags still in use today: spaces between words[*]. Standardized punctuation quickly followed[*]. Later, during the Middle Ages, a kind of PML (Paragraph Markup Language) was introduced; it was similar to today's SGML, except that only an opening tag marked the beginning of a paragraph. It was not until the 15th century that document layout became what we are familiar with today.
So while we think of markup as modern, it was originally created to help humans read, and has since evolved into an essential tool for computer processing. Where this evolution is headed matters, given the growth in the volume of documents and their role in automated workflows. While the production of documents is not growing at quite the same exponential pace as numerical data, it is still huge and critical to many businesses. Just as the monks relied on tags to help humans read, the effort to "teach" computers how to structure documents will lead to more efficient workflows.
Separating content from form
Markup has a long tradition in print shops, where it was used to format and correct manuscripts. This tradition carried over into photocomposition and then into computing, with languages such as the following, each shown here with a sample of its own tags:
<CC 0,5, 12>Nortext,
:li.Ordered GML (Generalized Markup Language)
.bu .b Troff[†].
These first computerized markup languages were more or less a copy of existing manual practices: textual streams into which procedural tags were added to specify how the content must be laid out. At the end of the 1960s, under the impetus of the Graphic Communications Association, the GenCode (Generic Coding) Committee was created with the goal of reflecting on the separation of a document's information content from its format. Their conclusions, embodied in the "GenCode® concept", were that smaller documents could be incorporated within bigger ones, and that documents are hierarchically organized.
"Furthermore, markup languages [should] greatly facilitate the sharing of data and the integration of diverse types of software, yielding a new era of efficiency and flexibility."[‡]
The first technological achievement to put these principles into practice was GML (Generalized Markup Language), designed by C. F. Goldfarb, E. Mosher and R. Lorie in the 1970s. Then followed its offspring: SGML, HTML and XML. XML, which leveraged the experience of SGML but was easier to manipulate, emerged as an industry standard and is recognized as "the universal format for structured documents and data on the Web."[§] Because XML is well specified and based on international standards such as Unicode, a set of tools was quickly developed by the community and made available to users. In line with the first GenCode concept of incorporating smaller documents within bigger ones, specific languages were also designed, including SMIL for multimedia data, RDF for resources and MathML for mathematical formulae.
Following the second GenCode concept, a structured document is one that is hierarchically organized into elements explicitly marked up with tags. This markup reflects the semantic structure of the document and does not correspond to layout instructions. The layout instructions are given by a style sheet (XSL, for instance), which indicates how the content of each tag must be laid out.
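To make the separation concrete, here is a minimal sketch (the element names are invented for illustration, not taken from any standard vocabulary). The document itself carries only tagged content:

  <?xml version="1.0" encoding="UTF-8"?>
  <article>
    <title>From monks to markup</title>
    <para>Spaces between words were the first tags.</para>
  </article>

A separate XSL style sheet then decides how each element is laid out; swapping it changes the presentation without touching the content:

  <?xml version="1.0" encoding="UTF-8"?>
  <xsl:stylesheet version="1.0"
                  xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <!-- Layout lives here, not in the document: render the article as HTML. -->
    <xsl:template match="/article">
      <html><body><xsl:apply-templates/></body></html>
    </xsl:template>
    <xsl:template match="title">
      <h1><xsl:value-of select="."/></h1>
    </xsl:template>
    <xsl:template match="para">
      <p><xsl:value-of select="."/></p>
    </xsl:template>
  </xsl:stylesheet>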
Finally, a document type, called a schema, may be associated with a document so that the document can be validated, i.e., checked for compliance with a specific model. This guarantee prevents inconsistencies in further processing. The separation of content from form is a benefit for publishers, who now target multiple output channels. It is also essential in automated document processing.
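As a sketch of such a model, a schema for the invented example above could be as simple as this DTD, which states that an article is a title followed by one or more paragraphs:

  <!-- article.dtd (hypothetical): the model the document must comply with. -->
  <!ELEMENT article (title, para+)>
  <!ELEMENT title (#PCDATA)>
  <!ELEMENT para (#PCDATA)>

A standard validator can then check compliance before the document enters a workflow, for instance: xmllint --noout --dtdvalid article.dtd article.xml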
Who uses markup today?
It would be reasonable to assume that anyone who is literate can write a digital document using simple word-processing software. But who is actually able to design a marked-up document? To do so, one must select the correct document model, properly mark up the document's pieces, validate compliance with the chosen model and select the formatting style sheet to display it. After twenty years of working on markup, Brian Reid, father of the Scribe system, presented his reflections on markup technologies in 1998, one of which is still valid to this day: "Most people won't use abstract markup even if you threaten them."[**] Document markup is beyond a layman's skills and is reserved for professionals[††]. Word-processing software provides some assistance with spell checkers and grammar checkers, yet today's markup checkers are at the level of the very first spell checkers from the 1960s, which could only look a word up to see whether it existed in a word list. Unstructured documents accumulate on our PCs every day, and the "new era of efficiency and flexibility" that markup could bring is still a dream.
If most of these documents are unstructured, how can we automate their processing? How can they easily be structured to be fed into automated workflows? If humans are not ready to accept this burden, computers have to be taught to undertake it. Over the years, a team at XRCE has been investigating the challenges related to the automation of document processes, including document understanding, document conversion to XML and schema management. The team addresses research challenges in analyzing and understanding document collections based on their layout and structural organization. These methods can be thought of as going beyond what Optical Character Recognition systems do at the character and word level: they reconstruct higher-level structures, such as document sections, tables of contents and indexes, to automatically mark up documents.
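The output of such a conversion might look like the following sketch (element names invented for illustration), where the table of contents recovered from a scanned document is made explicit and can then be queried or validated like any other XML data:

  <toc>
    <entry><title>Introduction</title><page>1</page></entry>
    <entry><title>Separating content from form</title><page>7</page></entry>
    <entry><title>Who uses markup today?</title><page>12</page></entry>
  </toc>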
The main difficulty lies in the enormous variety of document layouts. Exhaustively inventorying all the possibilities is a never-ending and expensive task. One alternative is to invert this pattern-recognition problem by relying on the constraints that structure the different elements in a given document (for instance, the incremental relation between two consecutive page numbers), without having to describe their layout characteristics.[‡‡] We simply learn, for each document, which layout has been used, and mark the document up accordingly. These methods have been successfully applied to large-scale customer cases in which the required information has been extracted from millions of pages, automatically feeding customer databases. Some examples of this technology are available as online web services on Open Xerox, including the popular Pdf2epub converter, which automatically converts a PDF file into an ePub file so that it reads well on tablets and e-readers.
[*] Malcolm Beckwith Parkes, Pause and Effect: An Introduction to the History of Punctuation in the West, University of California Press, 1993.
[†] http://www.troff.org/
[‡] Charles F. Goldfarb, Yuri Rubinsky, The SGML Handbook, Oxford University Press, USA, 1991.
[§] http://www.w3.org/MarkUp/
[**] 1998 Markup Technologies conference.
[††] For Reid, "Markup is a mathematical abstraction in the field of data/information." Ibid.
[‡‡] H. Déjean, J.-L. Meunier, "Logical document conversion: combining functional and formal knowledge", Proceedings of the 2007 ACM Symposium on Document Engineering, pp. 135–143.