Speaker: Edouard Oyallon, assistant professor at CentraleSupélec, Paris, France.
Abstract: The outstanding supervised classification performance obtained by Convolutional Neural Networks (CNNs) indicates that they have the ability to create invariants relevant for classification. We show numerically that this can be achieved through architectures that progressively incorporate invariances, and that such invariances can still preserve most of the signal attributes. On the other hand, we build perfectly invertible CNN architectures, showing that there is no need to build representations that discard information in order to obtain good performance on ImageNet. Illustrations are given through Hybrid Scattering Networks [1], based on a geometric representation, and $i$-RevNets [2], a class of invertible CNNs. We make explicit several empirical properties, such as progressive linear separability [2,3], in order to shed light on the inner mechanisms implemented by CNNs.
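The invertibility claim rests on bijective building blocks. As a minimal, hypothetical NumPy sketch (not the authors' implementation), the additive coupling scheme used by reversible networks such as $i$-RevNet can be inverted exactly, so no information is discarded; residual_fn and the shapes below are illustrative stand-ins:

    import numpy as np

    def residual_fn(x):
        # Stand-in for a small convolutional block; invertibility of the
        # coupling never requires inverting residual_fn itself.
        return 0.5 * np.tanh(x)

    def coupling_forward(x1, x2):
        # Additive coupling: one half is updated by a function of the other.
        y1 = x2
        y2 = x1 + residual_fn(x2)
        return y1, y2

    def coupling_inverse(y1, y2):
        # Exact inversion: the inputs are recovered with no information loss.
        x2 = y1
        x1 = y2 - residual_fn(x2)
        return x1, x2

    # Round-trip check on random activations.
    x1, x2 = np.random.randn(2, 8, 8), np.random.randn(2, 8, 8)
    y1, y2 = coupling_forward(x1, x2)
    r1, r2 = coupling_inverse(y1, y2)
    assert np.allclose(x1, r1) and np.allclose(x2, r2)

Because inversion only re-evaluates residual_fn and subtracts, the block is bijective regardless of how complex residual_fn is; a full invertible architecture composes such couplings with invertible downsampling.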
Bibliography: