This article was written for the occasion of the XRCE 20th anniversary celebration. Author: Jacki O’Neill
Since everyone from office workers to police officers uses computers nowadays, their design has a major impact on our lives. Successful systems design requires understanding exactly how work is carried out, including the often unnoticed human expertise used every day in what we call unskilled jobs.
Ethnography uses systematic data capture and rigorous analysis to uncover what people actually do instead of just what they say they do. Xerox, a pioneer of applying this research to technology innovation, uses ethnography to identify unmet needs, recommend practices that work and avoid design constraints that impede how work is done.
When carried out prior to design, ethnographic study reveals the real contingencies of the work and helps us design systems which can handle that complexity; post-design ethnography can make the often implicit and taken-for-granted models underlying systems design visible.
In her ground-breaking ethnographic study, Plans and Situated Actions, Suchman [i] demonstrated what happens when design bears little resemblance to how people act. In the study, even engineers had major problems using early photocopiers, because the system design embodied cognitive principles about how people plan and act which bore little relation to how users actually interacted with these machines.
In another study, Whalen et al. [ii] provide an eloquent dismissal of the founding principles underlying the implementation of computerized “expert systems” in a call centre. These systems were intended to allow unskilled call agents to answer customer problems in place of trained experts. However, the so-called expert systems were not able to respond to the contingencies of the interaction in the way human experts could, leaving the untrained agents with little option but to escalate their calls. Most of us have had at least one customer service interaction which ended in frustration because the representative seemed unable or unwilling to deal with our request. Typically this is caused by poorly designed workflow systems rather than truly unhelpful people.
Whilst automation and the pursuit of cost reduction are not inherently bad, the danger comes from an oversimplified model of low-skilled work: just because workers do not need advanced education or long apprenticeships does not mean a computer can do that work better. There is a fundamental difference between human skill and computing power: we are good at interpretation, improvisation and interaction; computers are good at large-scale number crunching.
If ethnographers ruled the world of work, it would be a very different place. Automation of manual tasks that computers can do just as well as or better than humans makes sense. It’s important to recognize, however, that the main use for computers should be to enable people to capitalise on their expertise – whether it be highly refined professional knowledge or mundane, everyday sense-making. We can do this by designing systems which 1) enable people to use their skills and reasoning to deal with the problems which routinely crop up, rather than restricting them to rigid workflows, or 2) consist of unique combinations of human and digital input to create output that neither could complete on their own, e.g. crowdsourced image categorisation, where people identify the semantic parts of an object (e.g. the wing, beak, etc. of a bird) and the machine applies rules to identify exactly which object this is (e.g. this bird is a speckled hummingbird). Call centres provide a good illustration of what can go wrong and right with computer system design.
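The human-machine division of labour in that second kind of system can be sketched in a few lines of code. This is a hypothetical, minimal illustration only: the part labels, attribute values and rule table below are invented for the bird example above, not taken from any real crowdsourcing platform.

```python
# Hypothetical sketch: people supply semantic part labels for an image
# (the human contribution), and the machine applies deterministic rules
# over those labels to name the exact object (the digital contribution).
# All species names and attributes here are illustrative assumptions.

# Human input: crowdworkers describe the semantic parts they can see.
human_labels = {"wing": "speckled", "beak": "long and thin"}

# Machine input: a rule table mapping part attributes to a species.
RULES = [
    ({"wing": "speckled", "beak": "long and thin"}, "speckled hummingbird"),
    ({"wing": "plain", "beak": "short"}, "house sparrow"),
]

def classify(parts):
    """Return the first species whose required attributes all match."""
    for required, species in RULES:
        if all(parts.get(k) == v for k, v in required.items()):
            return species
    return "unknown"

print(classify(human_labels))  # speckled hummingbird
```

Neither side completes the task alone: the rules are useless without the human part labels, and the humans need not know any taxonomy, only how to describe what they see.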
Time and again, ethnographic studies have shown the importance of agency (the capacity of a human to act). These studies have highlighted the myriad human skills which go into even mundane work. For example, we compared two parallel government processes, each consisting of a call centre and a processing centre. Where the teams were collocated, the workers could circumvent the rigid workflow with face-to-face communication to ensure vulnerable citizens got the benefits they were due without delay. In the other setting the teams were distributed, had no agency to step outside of the workflow and could only communicate through official channels. In this situation, the call centre could not answer many queries – resulting in caller frustration – and it was necessary to employ someone full-time to follow up on citizens’ complaints, most of which could have been avoided.
In another example, we worked with a customer service company that wondered why customers chose to call the call centre when there were online resources to help solve customer problems. By putting the same resources used by the call centre agents online, the company was hoping customers would choose to solve problems themselves, making it possible to remove the middleman.
Studying the call centre agents, as well as customers using the online resources, we identified all the extra work that the agents did to solve the customer’s problem – from persuading unwilling customers to troubleshoot in the first place, to translating the customer’s language into that of the online system.
When customers used the online system alone we found that even when a search for answers returned the right results, customers often did not realize it was the right result. Call centre agents, on the other hand, were skilled in making the same search result relevant to the customers’ problem. They used their semantic skills to guide the customer through the problem-solving process. The company realized that simply removing the agents was not an option.
These examples point to an important two-fold lesson. First, even low-skilled workers are often engaging in semantic work which is not apparent at first glance but which is almost impossible for systems to replicate. Secondly, tight control through rigid workflows is rarely optimal, as it tends to stop workers from using their skills effectively. Design should support and enhance human expertise, rather than attempting to automate and control it.
About the author:
Jacki O’Neill was principal scientist at Xerox Research Centre Europe. Her main area of interest is in the design of useful, usable and innovative computer systems, through both the detailed understanding of work practices and a consideration of the interaction of the social and the technical in prototyping and development work.
[i] Suchman, L. A. (1987). Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press.
[ii] Whalen, J., & Vinkhuyzen, E. (2000). Expert systems in (inter)action: diagnosing document machine problems over the telephone. In Workplace Studies: Recovering Work Practice and Informing Systems Design, 92-140.
To be autonomous in real-world everyday spaces, robots should be able to learn, from their interactions within these spaces, how best to execute tasks specified by non-expert users in a safe and reliable way. Doing so requires sequential decision-making skills that combine machine learning, adaptive planning and control in uncertain environments, as well as solving hard combinatorial optimization problems. Our research combines expertise in reinforcement learning, computer vision, robotic control, sim2real transfer, large multimodal foundation models and neural combinatorial optimization to build AI-based architectures and algorithms that improve robot autonomy and robustness when completing everyday complex tasks in constantly changing environments.
For a robot to be useful, it must be able to represent its knowledge of the world, share what it learns and interact with other agents, in particular humans. Our research combines expertise in human-robot interaction, natural language processing, speech, information retrieval, data management and low-code/no-code programming to build AI components that will help next-generation robots perform complex real-world tasks. These components will help robots interact safely with humans and their physical environment, other robots and systems, represent and update their world knowledge, and share it with the rest of the fleet.
Visual perception is a necessary part of any intelligent system that is meant to interact with the world. Robots need to perceive the structure, the objects and the people in their environment to better understand the world and perform the tasks they are assigned. Our research combines expertise in visual representation learning, self-supervised learning and human behaviour understanding to build AI components that help robots understand and navigate their 3D environment, detect and interact with surrounding objects and people, and continuously adapt when deployed in new environments.
NAVER France’s 2024 gender equality index score (based on 2023 data): 87/100.
1. Pay gap between women and men: 34/40 points
2. Gap in individual salary increases between women and men: 35/35 points
3. Salary increases for all employees returning from maternity leave: not calculable
4. Number of employees of the under-represented gender among the 10 highest salaries: 5/10 points
The NAVER France targets set in 2022 (Indicator n°1: +2 points in 2024 and Indicator n°4: +5 points in 2025) have been achieved.
The research we conduct on expressive visual representations is applicable to visual search, object detection, image classification and the automatic extraction of 3D human poses and shapes, which can be used for human behaviour understanding and prediction, human-robot interaction or even avatar animation. We also extract 3D information from images that can be used for intelligent robot navigation, augmented reality, and the 3D reconstruction of objects, buildings or even entire cities.
Our work covers the spectrum from unsupervised to supervised approaches, and from very deep architectures to very compact ones. We’re excited about the promise of big data to bring big performance gains to our algorithms but also passionate about the challenge of working in data-scarce and low-power scenarios.
Furthermore, we believe that a modern computer vision system needs to be able to continuously adapt itself to its environment and to improve itself via lifelong learning. Our driving goal is to use our research to deliver embodied intelligence to our users in robotics, autonomous driving, via phone cameras and any other visual means to reach people wherever they may be.