Cesar Roberto De Souza, Yohann Cabon, Adrien Gaidon, Naila Murray, Antonio Lopez
International Journal of Computer Vision, Online First, 23 October 2019
@article{article,
  author  = {De Souza, Cesar and Gaidon, Adrien and Cabon, Yohann and Murray, Naila and López, Antonio},
  title   = {Generating Human Action Videos by Coupling 3D Game Engines and Probabilistic Graphical Models},
  journal = {International Journal of Computer Vision},
  year    = {2019},
  month   = {10},
  doi     = {10.1007/s11263-019-01222-z}
}
Abstract
Deep video action recognition models have been highly successful in recent years but require large quantities of manually-annotated data, which are expensive and laborious to obtain. In this work, we investigate the generation of synthetic training data for video action recognition, as synthetic data have been successfully used to supervise models for a variety of other computer vision tasks. We propose an interpretable parametric generative model of human action videos that relies on procedural generation, physics models and other components of modern game engines. With this model we generate a diverse, realistic, and physically plausible dataset of human action videos, called PHAV for “Procedural Human Action Videos”. PHAV contains a total of 39,982 videos, with more than 1000 examples for each of 35 action categories. Our video generation approach is not limited to existing motion capture sequences: 14 of these 35 categories are procedurally-defined synthetic actions. In addition, each video is represented with 6 different data modalities, including RGB, optical flow and pixel-level semantic labels. These modalities are generated almost simultaneously using the Multiple Render Targets feature of modern GPUs. In order to leverage PHAV, we introduce a deep multi-task (i.e. that considers action classes from multiple datasets) representation learning architecture that is able to simultaneously learn from synthetic and real video datasets, even when their action categories differ. Our experiments on the UCF-101 and HMDB-51 benchmarks suggest that combining our large set of synthetic videos with small real-world datasets can boost recognition performance. Our approach also significantly outperforms video representations produced by fine-tuning state-of-the-art unsupervised generative models of videos.
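To make the multi-task idea concrete, below is a minimal sketch (not the paper's actual implementation) of an architecture that learns jointly from synthetic and real videos whose label spaces differ: a shared video encoder feeds one classification head per dataset, and each mini-batch is routed to the head of the dataset it came from. The module names, backbone, and dimensions are illustrative assumptions in PyTorch.

```python
# Hedged sketch of multi-task action recognition across datasets with
# different label spaces (e.g. a synthetic set with 35 classes and a
# real set with 101 classes). All names and sizes are illustrative.
import torch
import torch.nn as nn

class MultiTaskActionNet(nn.Module):
    def __init__(self, feat_dim=512, num_classes_per_dataset=(35, 101)):
        super().__init__()
        # Shared spatio-temporal backbone over clips of shape
        # (batch, channels, frames, height, width).
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(64, feat_dim),
            nn.ReLU(inplace=True),
        )
        # One classifier head per dataset / label space.
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, n) for n in num_classes_per_dataset
        )

    def forward(self, clips, dataset_id):
        feats = self.backbone(clips)
        return self.heads[dataset_id](feats)

# Joint training loop: each mini-batch carries an id indicating which
# dataset (and therefore which head) it belongs to.
model = MultiTaskActionNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

for clips, labels, dataset_id in [
    (torch.randn(2, 3, 16, 112, 112), torch.tensor([3, 7]), 0),    # synthetic batch
    (torch.randn(2, 3, 16, 112, 112), torch.tensor([10, 42]), 1),  # real batch
]:
    logits = model(clips, dataset_id)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The design choice this illustrates is that the shared backbone absorbs supervision from both the synthetic and real videos, while the per-dataset heads keep their incompatible class vocabularies separate.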