Experts in computer vision from around the world gathered in Venice last week for the International Conference on Computer Vision (ICCV), a biennial premier event in the field lasting over a week and packed with papers, posters and workshops. NAVER LABS Europe was proud to have several papers, posters and invited talks during the week. Below is a list of our presentations and other contributions to the event, with the odd photo!
Main Conference papers
Joint learning of object and action detectors, Vicky Kalogeiton (INRIA and Univ. of Edinburgh), Philippe Weinzaepfel (NAVER LABS Europe), Cordelia Schmid (INRIA) and Vittorio Ferrari (Univ. of Edinburgh). Full paper PDF
Abstract: While most existing approaches for detection in videos focus on objects or human actions separately, we aim at jointly detecting objects performing actions, such as cat eating or dog jumping. We introduce an end-to-end multitask objective that jointly learns object-action relationships. We compare it with different training objectives, validate its effectiveness for detecting object-action pairs in videos, and show that both tasks of object and action detection benefit from this joint learning. Moreover, the proposed architecture can be used for zero-shot learning of actions: our multitask objective leverages the commonalities of an action performed by different objects, e.g. dog and cat jumping, enabling the detection of actions of an object without training with these object-action pairs. In experiments on the A2D dataset, we obtain state-of-the-art results on segmentation of object-action pairs. We finally apply our multitask architecture to detect visual relationships between objects in images of the VRD dataset.
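To make the multitask idea concrete, here is a minimal PyTorch sketch in the spirit of the abstract: shared features feed two separate classifiers, one for objects and one for actions, and their losses are simply summed during training. All class names, dimensions and module names are illustrative assumptions, not the paper's actual end-to-end detector.

```python
import torch
import torch.nn as nn

class JointObjectActionHead(nn.Module):
    """Illustrative multitask head: shared features feed two classifiers,
    one for objects (e.g. cat, dog) and one for actions (e.g. eating, jumping)."""
    def __init__(self, feat_dim, num_objects, num_actions):
        super().__init__()
        self.object_head = nn.Linear(feat_dim, num_objects)
        self.action_head = nn.Linear(feat_dim, num_actions)

    def forward(self, features):
        # Both heads see the same shared features, so training them jointly
        # encourages those features to capture object-action commonalities.
        return self.object_head(features), self.action_head(features)

def joint_loss(obj_logits, act_logits, obj_labels, act_labels):
    # Sum of the two classification losses: the multitask objective.
    ce = nn.functional.cross_entropy
    return ce(obj_logits, obj_labels) + ce(act_logits, act_labels)
```

Because the action classifier is shared across all object classes in this sketch, it can in principle score an action (e.g. jumping) for an object class it never saw performing that action, which is the intuition behind the zero-shot setting described above.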
Action Tubelet Detector for Spatio-Temporal Action Localization, Vicky Kalogeiton (INRIA and Univ. of Edinburgh), Philippe Weinzaepfel (NAVER LABS Europe), Cordelia Schmid (INRIA) and Vittorio Ferrari (Univ. of Edinburgh). Full paper PDF
Abstract: Current state-of-the-art approaches for spatio-temporal action localization rely on detections at the frame level that are then linked or tracked across time. In this paper, we leverage the temporal continuity of videos instead of operating at the frame level. We propose the ACtion Tubelet detector (ACT-detector) that takes as input a sequence of frames and outputs tubelets, i.e., sequences of bounding boxes with associated scores. The same way state-of-the-art object detectors rely on anchor boxes, our ACT-detector is based on anchor cuboids. We build upon the SSD framework. Convolutional features are extracted for each frame, while scores and regressions are based on the temporal stacking of these features, thus exploiting information from a sequence. Our experimental results show that leveraging sequences of frames significantly improves detection performance over using individual frames. The gain of our tubelet detector can be explained by both more accurate scores and more precise localization. Our ACT-detector outperforms the state-of-the-art methods for frame-mAP and video-mAP on the J-HMDB and UCF-101 datasets, in particular at high overlap thresholds.
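For the curious, below is a rough PyTorch sketch of the temporal-stacking idea from the abstract: per-frame convolutional features are concatenated along the channel axis, and a single head predicts class scores plus per-frame box regressions for each anchor cuboid. Layer shapes, names and the choice of a single 3x3 conv per output are assumptions for illustration, not the ACT-detector's actual configuration.

```python
import torch
import torch.nn as nn

class TubeletHead(nn.Module):
    """Illustrative tubelet head: per-frame convolutional features are
    stacked along the channel axis, then class scores and per-frame box
    regressions are predicted for each anchor cuboid at each location."""
    def __init__(self, feat_channels, seq_len, num_anchors, num_classes):
        super().__init__()
        stacked_channels = feat_channels * seq_len
        # One score per class per anchor cuboid.
        self.cls = nn.Conv2d(stacked_channels, num_anchors * num_classes,
                             kernel_size=3, padding=1)
        # Four box offsets per frame per anchor cuboid, so a whole
        # tubelet (a sequence of boxes) is regressed in one shot.
        self.reg = nn.Conv2d(stacked_channels, num_anchors * seq_len * 4,
                             kernel_size=3, padding=1)

    def forward(self, frame_feats):
        # frame_feats: list of seq_len tensors, each of shape (B, C, H, W),
        # e.g. one SSD-style feature map per frame of the input sequence.
        stacked = torch.cat(frame_feats, dim=1)  # (B, C * seq_len, H, W)
        return self.cls(stacked), self.reg(stacked)
```

The key point the sketch tries to capture is that scoring and regression see the whole stacked sequence at once, rather than one frame at a time, which is what lets the detector exploit temporal continuity.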