Flow: enabling non-technical people to create and maintain applications

By incorporating a set of reusable constructs—or behaviours—that enable advanced functionality, Flow makes platform-agnostic application modelling, creation, distribution and maintenance easy.
Nowadays, there are many tools that enable users without technical knowledge to create content on the internet. From social media platforms, such as Facebook and Instagram, to highly customizable content management systems like WordPress, Weebly and Drupal, to more specialized options—such as LINE Webtoon, for webcomics—a range of systems and platforms exist that make the creation and publication of online content simple.
When it comes to creating applications, though, technical knowledge is usually required. One exception is for very basic apps that enable end users to browse ‘static content’ (e.g. the menu of a restaurant, or tourist information for a location). These can be customized quite simply, since the necessary tools don’t demand advanced technical knowledge. Typically, however, applications are more complicated than this.
As an example, take a reservation system for flight tickets. Such a system would involve a series of tasks: asking the end user for the dates and destination; checking prices and availability; booking the flights; accepting payment; sending notifications and so on. All of these tasks and their interconnections (the logical flow of the app, involving end-user interaction as well as automated tasks) together represent what we call the ‘behaviour’.
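To make the notion of a behaviour more concrete, such a flow could be sketched as an ordered list of tasks, some driven by end-user interaction and some automated. The sketch below is purely illustrative: the task structure and names are assumptions, not Flow's actual representation.

```typescript
// Purely illustrative sketch of a flight-booking behaviour as a flow of tasks.
// The structure and names are assumptions, not Flow's actual model.

type TaskKind = "collect" | "display" | "service"; // user input, user output, automated task

interface Task {
  id: string;
  kind: TaskKind;
  description: string;
  next?: string; // id of the following task, forming the logical flow
}

const flightBookingFlow: Task[] = [
  { id: "askTrip",     kind: "collect", description: "Ask for dates and destination", next: "checkOffers" },
  { id: "checkOffers", kind: "service", description: "Check prices and availability", next: "book" },
  { id: "book",        kind: "service", description: "Book the flights",              next: "pay" },
  { id: "pay",         kind: "collect", description: "Accept payment",                next: "notify" },
  { id: "notify",      kind: "display", description: "Send notifications to the user" },
];
```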
For applications that involve user-specified behaviour—such as the ability to make an appointment at a small store, play a trivia game related to a tourist attraction or compose an itinerary to tour a city—the user must have, or acquire, significant technical skills. Furthermore, if we consider deploying such applications for a range of target platforms (including screen-based devices, such as smartphones or tablets, and voice-based ones, like Amazon Echo), the task becomes completely unrealistic for non-technical users. For a visual reference of the skill levels required for each of these kinds of applications, see Figure 1.
With all this in mind, the first objective of our work—Flow (1)—is to empower non-technical users to create apps that contain application logic as easily as they’re currently able to create static content. Our second objective is to simplify the long-term maintenance of the same application for several platforms, which currently poses a major problem for organizations. Typically, different teams of specialized developers are required for each individual platform (as indicated in Figure 2), which multiplies the codebase that must be maintained. As an example, take your favourite music streaming service, online store or social media platform. Each of these can be accessed through a variety of specialized apps which are available on a range of devices, including smartphones, voice assistants, smart TVs and smartwatches. However, these apps—which all run on completely different, platform-specific code—execute the same core logic across all supported devices. Is there a way to employ the logic that’s specific to a given app across all platforms simultaneously, without the need for platform-specific code?
Organizations sometimes use hybrid approaches (e.g. Ionic, React Native and Apache Cordova) with the objective of targeting multiple mobile platforms with the same code. These approaches reduce the number of different codebases that are required by, for example, generating complex platform-specific code from simpler front-end logic. However, hybrid approaches are generally limited in the range of platforms they support and still require skilled developers to take care of the code. Additionally, long after the initial development of an app, updates must be coded and deployed for each specific platform by the respective development team, sometimes with explicit effort required from the end user (i.e. approving and controlling certain updates).
To bring about the simplicity that’s required to enable non-technical users to create and maintain apps for a variety of platforms, our system relies on two main elements under the hood: the modelling infrastructure, which simplifies app design; and the cloud execution engine (CEE), which takes graphical flows and interprets them.
The modelling infrastructure of Flow enables the use of domain-specific constructs to build reusable behaviour. Not all modelling activities are intended for the non-technical end user, however, as some activities—mainly the creation of the set of reusable constructs that represent the ‘bricks’ that can later be used by non-technical people to create application behaviours—must be performed by specialists. While the remainder of this article focuses on app creation, the creation of these behaviours is briefly illustrated at the beginning of Video 1 on ‘Flow Design’.
The CEE is in essence a native execution environment that takes graphical flows and, from the cloud, ensures that the application specification is delivered to the target platform. The platform then interprets the information in a way that makes sense based on its specific capabilities. The CEE understands the flows directly—with no specific translation required—by reading, analysing and then executing each step, either by interacting with the user (for steps requiring data display or collection) or by executing an external service or internal functionality (such as data retrieval) automatically. The CEE differs from existing engines (like BPMN—business process model and notation—engines, such as Bonita BPM, Camunda BPM and jBPM) in that it is not generic, executes flows natively, can interact with many devices, and provides certain monitoring and control features that other engines don’t. Further, it can be used to form the backbone of stand-alone apps.
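As a way to picture that interpretation process, the sketch below shows what a step-by-step execution loop of this kind could look like: automated steps run on the server side, while display and data-collection steps are delegated to whatever device is connected. All interfaces and function names here are hypothetical and do not correspond to the CEE's actual API.

```typescript
// Minimal sketch of a flow-execution loop in the spirit of the CEE.
// All names are hypothetical; this is not the CEE's actual API.

interface FlowStep {
  id: string;
  kind: "display" | "collect" | "service";
  payload: unknown;
  next?: string;
}

interface DeviceChannel {
  // Sends a step to the connected device and, for "collect" steps, waits for the user's answer.
  exchange(step: FlowStep): Promise<unknown>;
}

async function executeFlow(steps: Map<string, FlowStep>, startId: string, device: DeviceChannel) {
  let current = steps.get(startId);
  while (current) {
    if (current.kind === "service") {
      // Automated work (external service calls, data retrieval, ...) stays on the server.
      await callService(current);
    } else {
      // Display and data-collection steps are handed to the device, whatever its capabilities.
      await device.exchange(current);
    }
    current = current.next ? steps.get(current.next) : undefined;
  }
}

async function callService(step: FlowStep): Promise<void> {
  // Placeholder for invoking an external service or internal functionality.
}
```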
In terms of app creation, the main component of the modelling infrastructure is the application behaviour model (ABM), illustrated in Figure 3. The ABM doesn’t contain graphical information or information about the target platform, and it doesn’t implement the behaviour that will be executed within the app. Instead, the ABM describes an application in abstract form, independent of the platform on which it will be executed, as a collection of references to predefined behaviours. The actual behaviours of the app—the flows—are executed solely on the server, and devices only render or collect the data needed in the flows. The CEE then sends the ABM to devices when they connect and provides the appropriate communication to ensure that the right task is received at the right time, without requiring that the device be aware of the flow.
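One hypothetical way to picture an ABM is as a plain data structure that lists references to predefined behaviours, with no layout and no platform details. The schema and field names below are illustrative assumptions, not Flow's actual format.

```typescript
// Illustrative sketch of an application behaviour model (ABM):
// only references to predefined behaviours, no graphical or platform-specific information.
// The schema and names are assumptions, not Flow's actual format.

interface BehaviourReference {
  behaviourId: string;                  // a reusable behaviour previously defined by specialists
  parameters?: Record<string, string>;  // optional configuration for this particular app
}

interface ApplicationBehaviourModel {
  appName: string;
  behaviours: BehaviourReference[];
}

const cityTourApp: ApplicationBehaviourModel = {
  appName: "CityTour",
  behaviours: [
    { behaviourId: "compose-itinerary", parameters: { city: "Grenoble" } },
    { behaviourId: "book-appointment" },
    { behaviourId: "trivia-game", parameters: { topic: "local-landmarks" } },
  ],
};
```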
Non-technical users can easily create an ABM—see Video 2: ‘Application creation’—and/or search for the predefined behaviour that they want to include in their app by using a graphical editor, as shown in Figure 3. Please note that the ABM Modeler shown in Figure 3 is just a possible, non-WYSIWYG (what-you-see-is-what-you-get) ABM editor. This means that the final application may have a different look and feel. In fact, this is quite likely, as the ABM is platform independent—the same app will look quite dissimilar on an iPhone compared to a smartwatch, and will be very different indeed when it’s running on a voice-based assistant with no screen.
While the creation of these reusable behaviours is not within the scope of this article, one way to understand them is by drawing a parallel between apps and WordPress blogs. The community provides countless WordPress plug-ins that can be easily integrated into a blog by a user without specialist knowledge. These plug-ins make it simple to add sophisticated functionality to a website without requiring that the creator understand the underlying code. Our reusable functions work in a similar way to these plug-ins. They’re easily available and can be simply added to create the final app, just as a unique blog experience can be created by picking and choosing specific plug-ins.
Only two inputs are required from the user creating an app: first, the contents of the ABM; and second, the platforms that the app needs to be generated for. Once the ABM is created for an application, it’s then deployed alongside other standard actions (e.g. creating space on the device drive or initializing the app-specific database). Additionally, a token is generated at this stage that identifies the application (and therefore the ABM) in the CEE.
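Under the same assumptions, and reusing the ApplicationBehaviourModel interface sketched earlier, the deployment step could be pictured roughly as follows. The function names and signatures are invented purely for illustration.

```typescript
// Hypothetical sketch of deploying an ABM to the CEE and receiving an identifying token.
// Function names and signatures are illustrative assumptions.

import { randomUUID } from "node:crypto";

type TargetPlatform = "ios" | "android" | "amazon-echo" | "smartwatch";

interface DeploymentRequest {
  abm: ApplicationBehaviourModel; // first input: the contents of the ABM (see earlier sketch)
  platforms: TargetPlatform[];    // second input: the platforms the app should be generated for
}

interface DeploymentResult {
  token: string;                  // identifies the application (and therefore the ABM) in the CEE
}

function deployToCee(request: DeploymentRequest): DeploymentResult {
  // Alongside the other standard actions (creating storage, initializing the app-specific
  // database, ...), the CEE registers the ABM and issues a token for later use.
  registerAbm(request.abm, request.platforms);
  return { token: randomUUID() };
}

function registerAbm(abm: ApplicationBehaviourModel, platforms: TargetPlatform[]): void {
  // Placeholder for persisting the ABM and preparing the platform-specific artefacts.
}
```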
In the next stage of the app-generation process, appropriate templates are selected for the target platforms. These templates do not contain specific code for the application logic of the app being generated; instead, they contain generic logic for interpreting any ABM, platform-specific code for interacting with the user through the capabilities of the device, and the communication logic needed to connect to the CEE.
In other words, a platform template will differ between platforms—and will contain code that takes advantage of the platform capabilities—but will be exactly the same for all applications. It’s through the interpretation of each ABM that an application will perform its functionalities. A template for mobile phones, for example, will contain logic to display buttons and screens, and will leverage phone sensors and capabilities, while driving user interaction with a native look and feel. For a voice assistant, the template will generate voice output and receive user input in a way that ensures the application logic is respected and the various data elements are presented to the user—or obtained from the user—at the right moments in time. Additionally, the template for each target platform is enriched with the token that was generated earlier to identify the application in the CEE. This enables a connection to be made between the code that will run on each individual platform and the ABM that resides on the CEE.
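One way to picture this separation is a small, shared rendering interface that each platform template implements in its own native way, while the ABM interpretation logic stays identical across platforms. The classes below are a simplified, hypothetical sketch rather than the actual template code.

```typescript
// Hypothetical sketch of platform templates: same interface, platform-specific implementations.

interface StepToRender {
  kind: "display" | "collect";
  prompt: string;
}

// Every platform template exposes the same small interface to the shared ABM interpreter.
interface PlatformRenderer {
  present(step: StepToRender): Promise<string | undefined>;
}

class PhoneRenderer implements PlatformRenderer {
  async present(step: StepToRender): Promise<string | undefined> {
    // A real phone template would draw native screens and buttons and use the phone's sensors;
    // this sketch only logs so that it stays runnable.
    console.log(`[screen] ${step.prompt}`);
    return step.kind === "collect" ? "answer typed by the user" : undefined;
  }
}

class VoiceRenderer implements PlatformRenderer {
  async present(step: StepToRender): Promise<string | undefined> {
    // A real voice-assistant template would synthesise speech and listen for the reply.
    console.log(`[voice] ${step.prompt}`);
    return step.kind === "collect" ? "answer spoken by the user" : undefined;
  }
}
```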
The next step is to deploy the generated code on the end device (e.g. a smartphone). This step will depend on the final platform (e.g. Android). Most platforms have application distribution mechanisms (in the case of smartphones, for instance, there are specialized application stores that facilitate their distribution).
Finally, the code is executed on the target device by the end user. Each time the application is started, it will connect to the CEE—by sending the token that identifies it—and the CEE will return the ABM. The application then executes the template code, running the logic to interpret the ABM (as well as to interact with the end user) in the most appropriate way according to the platform. All of these stages are laid out illustratively in Figure 4, and examples of execution are shown in Video 3 below.
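The start-up handshake might therefore look roughly like the sketch below: the generated app identifies itself with its token, receives the ABM from the CEE and hands it to the template's interpretation logic. The endpoint URL and the response shape are assumptions made purely for illustration.

```typescript
// Hypothetical sketch of the start-up handshake between a generated app and the CEE.
// The endpoint URL and response shape are assumptions, not the real protocol.

const APPLICATION_TOKEN = "00000000-0000-0000-0000-000000000000"; // embedded at generation time

async function startApp(): Promise<void> {
  // 1. Identify this application to the CEE by sending its token.
  const response = await fetch("https://cee.example.com/abm", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ token: APPLICATION_TOKEN }),
  });

  // 2. The CEE returns the ABM associated with that token.
  const abm = await response.json();

  // 3. The template code interprets the ABM in the way that best suits this platform
  //    (e.g. driving a PhoneRenderer or VoiceRenderer as sketched earlier).
  interpretAbm(abm);
}

function interpretAbm(abm: unknown): void {
  // Placeholder for the platform template's interpretation loop.
}
```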
Video 4 shows how Flow enables a person without any particular technical skills to create a small game that consists of guessing a number.
In this case, the specification of an ABM enables the generation of two applications—one for iOS and the other for Amazon Echo. Finally, Video 5 shows how these applications can be updated automatically, for all users and all devices, merely by updating the ABM and propagating this change.
Our approach enables non-technical users to create (and update) apps for any target platform simply and easily. Flow could also be used by organizations that maintain the same application for several platforms, reducing the time and specialized knowledge that’s required to push the updates for each of them.
In summary, our approach brings about the following advantages:
- Non-technical users can create apps that contain real application logic, not just static content.
- A single, platform-independent ABM drives the same application across all supported platforms, with no platform-specific application code to write.
- Updates propagate automatically to all users and devices simply by modifying the ABM, without per-platform redeployment.
- Organizations need less time and less specialized knowledge to maintain the same application across several platforms.
A major disadvantage of our approach is that it requires an uninterrupted internet connection. However, due to continuous progress in terms of both coverage and connection speeds (2), we believe that this is becoming less and less of an issue over time.
We’re currently considering developing Flow along a number of lines. First, we intend to expand the built-in capabilities of the modelling and execution methods to bring about better support for applications that use artificial intelligence. Second, we hope to introduce support for different domains, such as robotics. Finally, we would like to explore mechanisms that would enable the apps generated using Flow to have offline working modes.
Paper: “From Abstract Specifications to Application Generation”, Jose Miguel Perez-Alvarez, Adrian Mos, ICSE 2020, SEIS Track (Official link: https://2020.icse-conferences.org/details/icse-2020-Software-Engineering-in-Society/10/From-Abstract-Specifications-to-Application-Generation)
Paper video: https://www.youtube.com/watch?v=c33jiAaHlBc