A unified and interpretable parametric model, available under Apache 2.0 license, that covers the full human lifespan – from infants to the elderly.
| Fabien Baradel, Romain Brégier |
| 2025 |

Reconstructing 3D body shapes and poses from images or video is the foundation of many robotics, AR/VR, sports-analytics and other applications that need a consistent spatial understanding of people.
To date, this reconstruction task, called Human Mesh Recovery (HMR), has relied on parametric 3D human models derived from templates trained on 3D body scans. These scans represent only a limited number of real individuals, mostly healthy adults within narrow demographic ranges, and are inherently discrete samples of the human shape space. As a result, the models inherit these limitations: they struggle to generalize to children, elders and diverse body types, and they carry over the potential privacy risks of their underlying scan data.
Anny is a new, open-source parametric 3D human model that covers a wide spectrum of humanity, from infants to elders, with interpretable and easy-to-use controls.
A more complete model
For years, the SMPL family of models (SMPL-X, STAR, SUPR and their variants) has been the workhorse of HMR research. These models learn their blendshapes from large, expensive collections of 3D scans of real humans, which brings several drawbacks: limited coverage of children and older adults, privacy concerns around biometric scan data, and restrictive licensing.
Recent attempts, such as ATLAS, have improved shape representation but still rely heavily on human scans and don’t include children. In short, the community has lacked a single, all-ages, interpretable, privacy-friendly alternative.
Anny is designed to overcome these shortcomings. It covers all ages, from babies to adults to the elderly, in one unified model. It offers interpretable controls in the form of simple sliders for age, height, weight and muscle, making shape editing intuitive where previous models required advanced technical expertise. Instead of 3D human scans, Anny leverages the anthropometric expertise of the MakeHuman open-source community and is calibrated with World Health Organization (WHO) data, so it represents the worldwide population while remaining privacy-friendly. Finally, its Apache 2.0 license makes Anny freely accessible to organizations of all sizes.
Finally, Anny can be dropped directly into existing pipelines. It produces realistic meshes without training on sensitive scan data and achieves competitive or superior results on HMR benchmarks.
| Category | SMPL Family (SMPL-X, STAR, SUPR, etc.) | Anny |
|---|---|---|
| Data source | Trained on extensive collections of 3D body scans from real individuals. | Built from anthropometric and statistical data (MakeHuman, WHO), without using scans. |
| Population coverage | Limited representation of children and older adults. | Designed to represent all ages and global morphologies within a single model. |
| Shape representation | Struggles with interpolation between age groups (e.g., child–adult transitions can appear unrealistic). | Uses continuous and interpretable parameters that handle age and body variation smoothly. |
| Privacy | Dependent on biometric scan data, which can raise privacy concerns. | Privacy-preserving, since no personal scans or identifiable data are used. |
| User control and interpretability | Parameters often require technical expertise to adjust effectively. | Provides intuitive sliders for attributes such as age, height, weight and muscle mass. |
| Licensing and accessibility | Typically distributed under restricted academic or commercial licenses. | Free and open-source, released under the Apache 2.0 license. |
Table 1: Comparison of the SMPL family of models (SMPL-X, STAR, SUPR, etc.) and the Anny model.
Anny is based on the open source assets developed by the MakeHuman community. The MakeHuman tool provides full-body template meshes with blend shapes that are used by artists to model human-like characters.
Anny starts from a high-quality MakeHuman base mesh (over 13,000 vertices and 163 bones) and uses 564 artist-defined “blendshapes”, each representing interpretable human traits known as phenotypes – age, gender, height, weight, muscle and local features like head fat or foot width. This approach captures far greater body diversity than scan-based models.
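As a sketch of how such a blendshape model works (illustrative NumPy code with toy sizes and hypothetical variable names, not Anny's actual implementation): the shaped mesh is the template plus a weighted sum of per-phenotype vertex offsets.

```python
import numpy as np

# Toy sizes for illustration; Anny itself has over 13,000 vertices
# and 564 phenotype blendshapes.
rng = np.random.default_rng(0)
V, K = 5, 3                               # vertices, blendshapes
template = rng.normal(size=(V, 3))        # base mesh vertex positions
blendshapes = rng.normal(size=(K, V, 3))  # per-phenotype vertex offsets
weights = np.array([0.5, 0.0, 1.0])       # phenotype slider values

# Shaped mesh = template + weighted sum of blendshape offsets.
shaped = template + np.einsum("k,kvc->vc", weights, blendshapes)
```

Each slider value scales one interpretable offset, which is what makes the shape space easy to read and edit.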
Shape variation is created through piecewise interpolation between blending prototypes. For example, a mid-age setting (age = 0.5) blends equally between child and young adult shapes, ensuring smooth transitions through the lifespan. Each slider, such as age, height or weight, has a clear semantic meaning that makes Anny intuitive for different user populations.
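The piecewise interpolation can be sketched as follows; the breakpoint positions and function name here are assumptions for illustration, as the real model defines its own prototypes (baby, child, young adult, elderly, ...).

```python
import numpy as np

# Hypothetical prototype positions on a normalized age slider.
breakpoints = np.array([0.0, 0.25, 0.75, 1.0])

def prototype_weights(age):
    """Piecewise-linear blending weights over the prototypes, age in [0, 1]."""
    age = float(np.clip(age, 0.0, 1.0))
    w = np.zeros(len(breakpoints))
    i = int(np.searchsorted(breakpoints, age, side="right")) - 1
    if i >= len(breakpoints) - 1:   # exactly at the last prototype
        w[-1] = 1.0
        return w
    # Blend linearly between the two prototypes surrounding this age.
    t = (age - breakpoints[i]) / (breakpoints[i + 1] - breakpoints[i])
    w[i], w[i + 1] = 1.0 - t, t
    return w
```

With these assumed breakpoints, age = 0.5 sits midway between the second and third prototypes, so the two blend equally, and weights vary continuously across the whole slider range.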
Anny also adds a skeletal rig with realistic joint hierarchy and, given 3D rotations for each joint, applies blend skinning to deform the mesh consistently, producing lifelike poses.
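A minimal sketch of how blend skinning deforms a mesh (toy data and names; Anny's actual rig has 163 bones and a full joint hierarchy):

```python
import numpy as np

# Toy linear-blend-skinning example: 4 vertices, 2 bones.
rng = np.random.default_rng(0)
rest = np.concatenate([rng.normal(size=(4, 3)),
                       np.ones((4, 1))], axis=1)    # homogeneous rest vertices
skin_weights = np.array([[1.0, 0.0],                # per-vertex bone weights,
                         [0.7, 0.3],                # each row sums to 1
                         [0.3, 0.7],
                         [0.0, 1.0]])
bone_transforms = np.stack([np.eye(4), np.eye(4)])  # 4x4 posed-bone transforms

# Each vertex moves with the weighted blend of its bones' transforms.
blended = np.einsum("vj,jab->vab", skin_weights, bone_transforms)
posed = np.einsum("vab,vb->va", blended, rest)[:, :3]
```

With identity transforms the posed vertices equal the rest vertices; in practice each bone's 4x4 transform is built from the given 3D joint rotations, and the weighted blend keeps the deformed surface consistent across joints.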
Video 1: Anny reconstruction of a child blending smoothly into an older child and adult and back again by adjusting features like height, width or amount of muscle for fine-grained realism.
Despite not being trained on 3D scans of humans, Anny can be accurately fitted to them to produce meshes that match diverse human bodies. The result is a model that generalizes well to underrepresented groups and provides a scalable way to train vision systems without real-world privacy risks.
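To see why fitting such a model to a scan is tractable, here is a hedged sketch (not Anny's actual registration pipeline): with known vertex correspondences and no pose, recovering the shape weights reduces to linear least squares, because the shaped mesh is linear in the blendshape weights.

```python
import numpy as np

# Synthetic toy setup: build a "scan" from known weights, then recover them.
rng = np.random.default_rng(0)
V, K = 50, 4
template = rng.normal(size=(V, 3))
blendshapes = rng.normal(size=(K, V, 3))
true_w = np.array([0.2, 0.8, 0.0, 0.5])
scan = template + np.einsum("k,kvc->vc", true_w, blendshapes)

# Solve scan - template ≈ sum_k w_k * blendshape_k in the least-squares sense.
A = blendshapes.reshape(K, -1).T     # (3V, K) design matrix
b = (scan - template).reshape(-1)    # offsets to explain
w_fit, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Real registration must also handle pose, correspondence search and noise, but the linear shape space is what makes the optimization well-behaved.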

Figure 2: 3D registrations of Anny to 3D scans of adults (top 2 rows) and children (bottom row).
Anny is designed for direct use in HMR systems and has been integrated into both single-person and multi-person methods. The model includes precise mappings to and from SMPL-X and HumGen3D, making it compatible with current HMR benchmarks. It also powers the Anny-One dataset of over 800,000 synthetic images of humans of all ages in diverse poses, clothes and scenes, providing privacy-safe, high-quality training data for computer vision models. On standard benchmarks Anny performs comparably or better, and on the only benchmark that includes children (AGORA) it significantly improves accuracy thanks to its all-ages design.
Performance improves as training data scales, confirming that Anny is a robust foundation for synthetic dataset generation. All benchmark details are available in the Anny paper [1].
Video 2: Robot using Anny for Multi-HMR, recognising both children and adults in the scene.
Computer Vision Researchers: Anny provides a drop-in replacement for SMPL-X in HMR tasks. Its interpretable shape space makes it easier to analyze errors and biases and, thanks to the Anny-One dataset, researchers can train on synthetic yet realistic data.
Robotics and HRI: Robots trained with Anny-based datasets learn to interpret gestures, proximity and body diversity more robustly which is crucial for assistive robots that must adapt to all populations.
Gaming and VR: Developers can easily create avatars that grow, age or change body composition realistically. Instead of separate models for children or adults, one system covers the full lifecycle providing new narrative and gameplay mechanics.
Healthcare and Sports: Posture analysis, rehabilitation tools and motion studies benefit from realistic age- and body-aware models. With WHO-calibrated data, Anny can model worldwide growth curves rather than narrow population subsets.
We’ve made Anny really simple to access under an Apache 2.0 license with no registration or gated downloads.
We’ve introduced Anny – a free, open and interpretable 3D human model that spans the full human lifespan. It fits scans without being trained on them, powers HMR pipelines across diverse datasets and generates synthetic training data at scale. Built as a comprehensive foundation that captures body shape variability across ages, genders and morphologies, Anny supports research, creative industries, healthcare, robotics and beyond.
With its open license, Anny can become the standard starting point for anyone who needs to model people in 3D.
[1] Human Mesh Modeling for Anny Body, Romain Brégier, Guenole Fiche, Laura Bravo-Sanchez, Thomas Lucas, Matthieu Armando, Philippe Weinzaepfel, Gregory Rogez, Fabien Baradel. arXiv.
This work was developed by the HUMANS team at NAVER LABS Europe. We thank the MakeHuman community and the World Health Organization, whose models and data supported the creation of Anny and the Anny-One dataset.