This article is the first in a mini-series on Business Process Management (BPM), a widely adopted methodology for managing and improving processes across organizations. This piece looks at ‘process design’, also known as ‘process modeling’, which is where you capture the intent you have in mind for your business processes. The resulting process models specify, step by step, how those processes should be executed by people and machines. Modeling is therefore a pretty fundamental BPM activity.
So, if you want to design your processes, keep them constantly up to date and make sure they can be used on different IT systems, what options do you have? Ask any BPM expert and I’d bet my pet Shih Tzu they’d propose a standard Business Process Model and Notation (BPMN) editor. And to a certain extent, they’d be right. Because, as a standard, there’s wide support for executing BPMN models in complex enterprise infrastructure stacks. BPMN models can also be exchanged between tools relatively easily (although certain technical artifacts aren’t compatible between vendors).
However, there is one big problem with BPMN: it’s a technical language in the same way that UML is. Yet, whilst UML is targeted at technical people (software architects), BPMN is meant to be used by business analysts. There’s a pretty important difference there. Business analysts are good at all kinds of things in manufacturing, healthcare, transportation, finance – you name it, but they’re rarely comfortable with drawing logical architectures and using gateways and signals.
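To make that concrete, here is roughly what even a trivial two-step approval process looks like in the BPMN 2.0 XML serialization that tools exchange and engines execute. This is a minimal hand-written illustration: the element names and namespace come from the BPMN 2.0 standard, but the process itself is invented.

```xml
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             targetNamespace="http://example.com/invoice">
  <process id="invoiceApproval" isExecutable="true">
    <startEvent id="start"/>
    <sequenceFlow id="flow1" sourceRef="start" targetRef="receiveInvoice"/>
    <userTask id="receiveInvoice" name="Receive invoice"/>
    <sequenceFlow id="flow2" sourceRef="receiveInvoice" targetRef="approvePayment"/>
    <userTask id="approvePayment" name="Approve payment"/>
    <sequenceFlow id="flow3" sourceRef="approvePayment" targetRef="end"/>
    <endEvent id="end"/>
  </process>
</definitions>
```

Sequence flows, executable flags, XML namespaces: this is the vocabulary of software engineers, not of the people who actually know how invoices get approved.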
There’s an interesting and useful parallel to draw here with the software we use to edit and design. You’ve probably seen and used a whole bunch of software tools, for everything from basic text editing to photo enhancement, programming and software design. These tools vary widely in complexity, quality, ease of use and how good they are at generating the output you expect from them.
And we all love tools that work right away, or at least with hardly any practice. Tools that are intuitive, never fail, flawlessly generate those beautiful PDF reports you need every month or compile that piece of code into a perfectly bug-free app.
Let’s pause for a second on “hardly any practice”. The amount of practice you need is, like anything else, largely a matter of how well you know the field the tool was designed for.
Expert programmers can intuitively find their way around even a fully featured Integrated Development Environment (like Eclipse) because the elements of the user interface and their behavior quite simply match their expectations. Some purists would even argue that, if you’re a master of your trade, your tool should be minimalist (Emacs users, you know who you are). But, for the mere mortals among us, fully featured graphical editors are the best help we can get, and not just when it comes to programming but in any other domain; think Word, Photoshop and even Minecraft! These editors are WYSIWYG (What You See Is What You Get): the user expects the output to match their input exactly, whether it’s typed or drawn.
What’s the equivalent of WYSIWYG for process models? And what would our business analysts be able to do if they had a WYSIWYG editor for their domain of expertise? An editor that, just as for the technical experts, wouldn’t require hours of practice, would be easy to use and would automatically take care of the technical details required to execute and monitor their processes.
Imagine the financial services expert using graphical elements that are intuitive to them to represent invoices, payments, transfers or loans. Or a healthcare specialist with patient admissions, bed allocations, surgical procedures or medication. What if these ‘graphical studios’ could be adapted to the individual needs and domains of the people who design all these kinds of processes? Once designed, the processes would simply need to be executed by the BPM system in place. To make that possible you might need to generate some BPMN, just as a PDF needs to be generated from a Word document to make it easier to share or print. But that’s fine, because the generated BPMN would never need to be seen or edited by the domain experts. It would either be executed immediately or go through some minor enhancements by technical staff before execution. Automatic transformation technologies would ensure that further changes to the original process are always kept in sync with the generated BPMN.
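As a sketch of what such a transformation could look like (purely illustrative, and not the technology described in this article: the function name, the linear ‘list of steps’ domain model and the generated element ids are all invented for the example), a domain-specific process could be mechanically turned into executable BPMN along these lines:

```python
import xml.etree.ElementTree as ET

BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

def domain_model_to_bpmn(process_id: str, steps: list[str]) -> str:
    """Generate a linear BPMN process: start -> one user task per step -> end."""
    ET.register_namespace("", BPMN_NS)  # serialize with BPMN as the default namespace
    definitions = ET.Element(f"{{{BPMN_NS}}}definitions",
                             targetNamespace="http://example.com/generated")
    process = ET.SubElement(definitions, f"{{{BPMN_NS}}}process",
                            id=process_id, isExecutable="true")
    # One start event, one user task per domain step, one end event.
    ET.SubElement(process, f"{{{BPMN_NS}}}startEvent", id="start")
    for i, name in enumerate(steps):
        ET.SubElement(process, f"{{{BPMN_NS}}}userTask", id=f"task{i}", name=name)
    ET.SubElement(process, f"{{{BPMN_NS}}}endEvent", id="end")
    # Wire the nodes together in order with sequence flows.
    node_ids = ["start"] + [f"task{i}" for i in range(len(steps))] + ["end"]
    for i, (src, tgt) in enumerate(zip(node_ids, node_ids[1:])):
        ET.SubElement(process, f"{{{BPMN_NS}}}sequenceFlow",
                      id=f"flow{i}", sourceRef=src, targetRef=tgt)
    return ET.tostring(definitions, encoding="unicode")

# What the financial services expert might have drawn in their own notation:
print(domain_model_to_bpmn("invoiceApproval",
                           ["Receive invoice", "Check amount", "Approve payment"]))
```

A real generator would of course handle gateways, events and domain-specific data rather than a simple linear flow, but the principle is the same: the domain expert never touches the XML.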
In the Xerox Research labs, we’ve always been big fans of developing graphical tools and environments that make work easier. In fact, the first WYSIWYG tool was created at Xerox (https://en.wikipedia.org/wiki/Xerox_Star).
Today we’re exploring technology that can automatically generate modern process design studios for multiple business domains and that interfaces with BPM environments. The generation happens on the fly with little bespoke coding, and it accommodates the fact that domains constantly evolve and change. Unless, as a business person, you’d rather fire up the BPM equivalent of Emacs, you might want to take a closer look.
The next article in this series will be on process monitoring. I’ll explore how monitoring process execution can help you better understand your business processes and enhance the models described here.
For more information, read the paper we presented at BPM 2016 ‘Business Matter Experts do Matter: A Model-Driven Approach for Domain Specific Process Design and Monitoring’ or the ‘Generating Domain-Specific Process Studios’ paper that was recently published at EDOC 2016. Both papers are authored by Adrian Mos and Mario Cortes-Cornax.
This article was originally published on BPTrends.