Systemic AI research - NAVER LABS Europe

Systemic AI

At the crossroads of AI and software engineering research, addressing the challenges of integrating AI/ML components in large dynamic software systems.



Applications and systems that include Artificial Intelligence components have grown significantly in number and complexity since their early days. The promise of AI as a force for improving society is reinforced regularly by innovative companies as well as the press. Consequently, people and organizations using AI applications have higher expectations than ever about their performance, reliability and quality. Their increasing adoption means more and more companies and researchers are working on improving the basic building blocks as well as integrating them into larger systems. We are at an exciting point in time, in which the tremendous opportunities AI brings are mixed with great challenges. This is similar to what happened when other major disruptive technologies, from the steam engine to the internet, were maturing.

The Systemic AI group’s ambition is to explore the specific research challenges at the intersection of AI and software engineering that are key to creating large, high-quality, reliable systems based on AI components. We also address topics related to the continuous evolution of context and behaviour, large-scale integration and reuse, and heterogeneous data exchanges.

The three main research areas that we target are Data Management, Architecture and Lifecycle:

Data Management

Data Management deals with the challenge of providing high-quality data, at the right time, with the right properties. This is essential when training the AI components used in the systems we analyse, but it is also important throughout the lifetime of the components as they make their predictions, in particular given their complex interdependencies with the rest of the system.
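As an illustrative sketch of this concern, a pipeline might place a quality gate in front of training and inference so that batches with too many incomplete records are rejected early. The function name, schema and threshold below are hypothetical, not part of the group's tooling:

```python
# Minimal data-quality gate: reject a batch whose ratio of incomplete
# records (missing required fields) exceeds a tolerance threshold.
# Schema, field names and the 5% threshold are illustrative assumptions.

def validate_records(records, required_fields, max_missing_ratio=0.05):
    """Return (accepted, missing_ratio) for a batch of dict records."""
    missing = sum(
        1 for rec in records
        if any(rec.get(field) is None for field in required_fields)
    )
    ratio = missing / len(records) if records else 1.0
    return ratio <= max_missing_ratio, ratio

batch = [
    {"text": "hello", "label": "greeting"},
    {"text": "bye", "label": None},          # incomplete record
    {"text": "thanks", "label": "gratitude"},
]
accepted, ratio = validate_records(batch, ["text", "label"])
```

Here one record out of three is incomplete, so the batch is rejected; a real gate would typically also check value ranges, schema conformance and freshness.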


Architecture

This area targets the challenges commonly associated with creating large, dependable, resilient and reliable systems. When organizations shift their focus from the ad-hoc creation of stand-alone components for specific AI tasks to the predictable, mature and repeatable creation and integration of complete solutions, several new concerns appear. These relate to the ability to provide an architectural framework that enables rapid prototyping and large-scale building of systems by consistently reusing working components in new contexts, while conforming to the constraints imposed by requirements and regulations.


Lifecycle

This area brings a medium- to long-term temporal perspective to the topics described above. Systems need to evolve over time due to changes in requirements, regulations, or indeed data. Many such changes induce interrelated effects that need to be kept under control to ensure continuously correct system behaviour. Since change is unavoidable, systems need to be designed, built and run in a way that makes them ready to evolve when changes occur. In addition to change management, this area also deals with MLOps aspects such as testing, deployment and monitoring.
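One concrete form monitoring can take is a drift check: comparing the distribution of a model input in production against a training-time baseline and raising an alert when they diverge. The sketch below uses the population stability index (PSI), a standard drift statistic; the sample data, bin count and alert thresholds are illustrative assumptions, not the group's actual setup:

```python
# Hedged sketch of a drift monitor: the population stability index (PSI)
# between a baseline sample and a live sample of one scalar feature.
# Conventional rules of thumb: PSI < 0.1 stable, PSI > 0.25 significant drift.
import math

def psi(expected, actual, bins=10):
    """Population stability index between two samples of a scalar feature."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def smoothed_hist(xs):
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Add-one smoothing so no bin has zero probability (avoids log(0))
        return [(c + 1) / (len(xs) + bins) for c in counts]

    e, a = smoothed_hist(expected), smoothed_hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]              # training-time sample
live_stable = [0.1 * i + 0.01 for i in range(100)]    # near-identical
live_drifted = [0.1 * i + 5.0 for i in range(100)]    # shifted distribution

stable_score = psi(baseline, live_stable)
drift_score = psi(baseline, live_drifted)
```

In a deployed system, a score above the drift threshold would typically trigger an alert and a review or retraining cycle rather than an automatic model swap.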

Flow: by incorporating a set of reusable constructs, or behaviours, that enable advanced functionality, Flow makes platform-agnostic application modelling, creation, distribution and maintenance easy. Blog article by Jose Miguel Pérez-Álvarez and Adrian Mos.
TAMGU: a new open-source programming language to help create, annotate and augment corpora and data. Blog article by Claude Roux.

Systemic AI team:

Nikolaos Lagos
Michel Langlais
Adrian Mos (group lead)
Claude Roux