The longstanding paradigm of collaborative filtering in recommender systems posits that users with similar behavior tend to exhibit similar preferences. A graph formulation arises naturally from this view: user-item interactions form a bipartite graph, which can be leveraged to refine recommendations by integrating similarities in users’ historical preferences. This perspective has inspired numerous graph-based recommendation approaches over the years.
More recently, the success of deep learning has led to the development of graph neural networks (GNNs). The key idea of GNNs is to propagate high-order information through the graph so that a node learns a representation similar to that of its neighborhood. GNNs were initially applied to traditional machine learning problems such as classification and regression, and later to recommendation and search, where they have in particular set a new state of the art in top-k recommendation and next-item recommendation.
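As a minimal sketch of this neighborhood-aggregation idea, the following toy example propagates embeddings over a user-item bipartite graph; the interaction matrix, symmetric normalization and embedding size are illustrative assumptions rather than any specific published model:

```python
import torch

# Toy bipartite interaction matrix R: 4 users x 5 items (1 = interaction observed).
R = torch.tensor([[1, 0, 1, 0, 0],
                  [1, 1, 0, 0, 0],
                  [0, 1, 0, 1, 0],
                  [0, 0, 0, 1, 1]], dtype=torch.float32)
n_users, n_items = R.shape
dim = 8

# Random initial embeddings: users first, then items.
E = torch.randn(n_users + n_items, dim)

# Adjacency of the bipartite graph in block form.
A = torch.zeros(n_users + n_items, n_users + n_items)
A[:n_users, n_users:] = R
A[n_users:, :n_users] = R.T

# Symmetric normalization D^{-1/2} A D^{-1/2}, a common choice in GNNs.
d_inv_sqrt = A.sum(dim=1).clamp(min=1).pow(-0.5)
A_norm = d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)

# Each propagation step averages neighbor embeddings, pulling a node and its
# neighborhood toward similar representations; k steps mix in k-hop information.
for _ in range(2):
    E = A_norm @ E

user_emb, item_emb = E[:n_users], E[n_users:]
scores = user_emb @ item_emb.T  # user-item preference scores
print(scores)
```

After two steps, a user’s embedding already reflects items liked by behaviorally similar users, which is exactly the collaborative-filtering signal described above.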
The GReS workshop on Graph Neural Networks for Recommendation and Search is an endeavor to bridge the gap between the RecSys and GNN communities and to promote collaboration between them, creating a dedicated space to foster GNN contributions to the RecSys domain.
The GReS workshop accepts papers of up to 14 pages following the standard single-column ACM RecSys template. This limit excludes references, and reviewers will be asked to comment on whether the length is appropriate for the contribution. Reviewing is double-blind, so submissions should be anonymized and provided as a single PDF file.
Submission link: https://cmt3.research.microsoft.com/GReS2021/
Paper submission deadline: July 29th, 2021 (AoE)
Author notification: August 21st, 2021 (AoE)
Camera-ready version deadline: September 4th, 2021 (AoE)
3:00 – 3:05 PM: Workshop’s Opening
3:05 – 3:45 PM: Keynote I – Graph Neural Networks for Recommendation by Xiang Wang (National University of Singapore)
3:45 – 3:50 PM: Short Break
3:50 – 4:10 PM: Accepted Paper I – Knowledge Graph Attention for Sequential Recommendations by Amjadi et al.
4:10 – 4:50 PM: Keynote II – Towards GNN explainability: Random Walk Graph Neural Networks by Michalis Vazirgiannis (Ecole Polytechnique)
4:50 – 5:20 PM: Long Break on wonder.me
5:20 – 5:40 PM: Accepted Paper II – CoRGi: Content-Rich Graph Neural Networks with Attention by Kim et al.
5:40 – 6:00 PM: Accepted Paper III – Referral Prediction in Healthcare Using Graph Neural Networks by Duarte et al.
6:00 – 6:10 PM: Short Break
6:10 – 6:50 PM: Keynote III – Powering Pinterest Recommendations with Graph Neural Networks by Andrew Zhai (Pinterest)
6:50 – 7:00 PM: Workshop’s Closing
Xiang Wang (National University of Singapore)
Title: Graph Neural Networks for Recommendation
Abstract: Graph Neural Networks (GNNs) have achieved remarkable success in many domains and shown great potential in personalized recommendation. In this talk, I will give a brief introduction to why GNNs are suitable for recommendation and how to incorporate them into various recommendation scenarios (e.g., collaborative filtering, knowledge-aware recommendation). I will also share some of the advances we are currently researching (e.g., self-supervised learning, contrastive learning). In the last part, I will discuss the “Achilles’ heel” of GNN-based recommender models (e.g., amplifying popularity bias), as well as some possible solutions (e.g., causal inference).
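As a loose illustration of the collaborative-filtering setting mentioned in the talk, the sketch below trains user and item embeddings on implicit feedback with a pairwise BPR objective; the toy interactions and hyperparameters are assumptions, and in a full GNN recommender the scores would be computed from propagated embeddings rather than raw ones:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_users, n_items, dim = 4, 5, 8

# Trainable embeddings; in a GNN-based model these would be the layer-0
# embeddings, scored after graph propagation.
user_emb = torch.nn.Parameter(0.1 * torch.randn(n_users, dim))
item_emb = torch.nn.Parameter(0.1 * torch.randn(n_items, dim))
opt = torch.optim.Adam([user_emb, item_emb], lr=0.01)

# Observed (user, positive item) pairs.
pos_pairs = torch.tensor([[0, 0], [0, 2], [1, 0], [1, 1], [2, 3], [3, 4]])

for step in range(200):
    u, i = pos_pairs[:, 0], pos_pairs[:, 1]
    j = torch.randint(0, n_items, (len(u),))  # random negatives
    pos_score = (user_emb[u] * item_emb[i]).sum(-1)
    neg_score = (user_emb[u] * item_emb[j]).sum(-1)
    # BPR: rank observed items above unobserved ones for each user.
    loss = -F.logsigmoid(pos_score - neg_score).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print((user_emb @ item_emb.T).detach())  # final preference scores
```

Note that a real negative sampler would avoid drawing items the user has already interacted with.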
Michalis Vazirgiannis (Ecole Polytechnique)
Title: Towards GNN explainability: Random Walk Graph Neural Networks
Abstract: In recent years, graph neural networks (GNNs) have become the de facto tool for performing machine learning tasks on graphs. Most GNNs belong to the family of message passing neural networks (MPNNs). These models employ an iterative neighborhood aggregation scheme to update vertex representations. Then, to compute vector representations of graphs, they aggregate the representations of the vertices using some permutation invariant function. One would expect the hidden layers of a GNN to be composed of parameters that take the form of graphs. However, this is not the case for MPNNs since their update procedure is parameterized by fully-connected layers. In this work, we propose a more intuitive and transparent architecture for graph-structured data, the Random Walk Graph Neural Network (RWNN). The first layer of the model consists of a number of trainable “hidden graphs” which are compared against the input graphs using a random walk kernel to produce graph representations. These representations are then passed on to a fully-connected neural network which produces the output. The employed random walk kernel is differentiable, and therefore, the proposed model is end-to-end trainable. We demonstrate the model’s transparency on synthetic datasets. Furthermore, we empirically evaluate the model on graph classification datasets and show that it achieves competitive performance.
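A minimal sketch of the computation the abstract describes may help: a trainable “hidden graph” is compared to an input graph with a P-step random walk kernel evaluated on their direct product graph. The normalization and node-attribute handling of the actual paper are omitted here, so this is an assumption-laden illustration rather than a faithful reimplementation:

```python
import torch

def random_walk_kernel(A_g, A_h, p=2):
    """Counts walks of length 1..p common to the two graphs: the direct
    product graph has adjacency kron(A_g, A_h), and its walk counts give
    one kernel value per walk length."""
    A_prod = torch.kron(A_g, A_h)
    walks, counts = torch.eye(A_prod.shape[0]), []
    for _ in range(p):
        walks = walks @ A_prod
        counts.append(walks.sum())
    return torch.stack(counts)  # one feature per walk length

# Input graph: a 4-cycle.
A_g = torch.tensor([[0., 1., 0., 1.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [1., 0., 1., 0.]])

# Trainable hidden graph on 3 nodes: free symmetric weights squashed into
# [0, 1], keeping the kernel differentiable and the model end-to-end trainable.
W = torch.nn.Parameter(torch.randn(3, 3))
A_h = torch.sigmoid((W + W.T) / 2)

features = random_walk_kernel(A_g, A_h, p=2)  # representation w.r.t. this hidden graph
features.sum().backward()                     # gradients reach the hidden graph weights
print(features.detach(), W.grad is not None)
```

In the full model, one such feature vector per hidden graph is concatenated and fed to the fully-connected output network.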
Andrew Zhai (Pinterest)
Title: Powering Pinterest Recommendations with Graph Neural Networks
Abstract: Pinterest is the home of inspiration for over 450M monthly active users. Inspiration comes from our personalized recommender systems, which surface content from our catalog of billions of ideas. Graph Neural Networks (GNNs) have been a key method in improving the predictive performance of these systems by helping us understand content, search queries, and users more comprehensively. In our talk we will discuss the evolution of these representations, starting with (1) using GNNs to model content (pins) by combining multimodal node features with our web-scale pin-board graph of billions of nodes and edges, and (2) showing how our content embeddings enable us to learn good representations of search queries and users, combining our GNN embeddings with techniques such as sequence models to capture the temporal behavior of users across all of Pinterest. Beyond sharing technical details, we will quantify our learnings with online experimentation showing the impact of our methods.
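As a generic illustration of the second step (not Pinterest’s actual system), pretrained GNN pin embeddings for a user’s recent actions can be fed to a small sequence model to obtain a user embedding; all sizes and the GRU choice below are assumptions:

```python
import torch

dim, n_pins = 16, 1000

# Stand-in for pretrained GNN pin embeddings (a frozen lookup table here).
pin_emb = torch.randn(n_pins, dim)

# A small GRU encodes a user's recent pin sequence, layering temporal
# behavior on top of the graph-derived content signal.
encoder = torch.nn.GRU(input_size=dim, hidden_size=dim, batch_first=True)

user_history = torch.tensor([[3, 17, 256, 42, 999]])  # recent pin ids, oldest first
seq = pin_emb[user_history]                           # (batch, seq_len, dim)
_, h_n = encoder(seq)
user_vec = h_n[-1]                                    # (batch, dim)

# Recommend by similarity between the user embedding and all pin embeddings.
scores = user_vec @ pin_emb.T
print(scores.topk(5).indices)
```

At production scale such retrieval would typically use an approximate nearest-neighbor index rather than a full dot product against the catalog.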
To make robots autonomous in real-world everyday spaces, they should be able to learn, from their interactions within these spaces, how to best execute tasks specified by non-expert users in a safe and reliable way. To do so requires sequential decision-making skills that combine machine learning, adaptive planning and control in uncertain environments, as well as solving hard combinatorial optimization problems. Our research combines expertise in reinforcement learning, computer vision, robotic control, sim2real transfer, large multimodal foundation models and neural combinatorial optimization to build AI-based architectures and algorithms that improve robot autonomy and robustness when completing everyday complex tasks in constantly changing environments. More details on our research can be found in the Explore section below.
For a robot to be useful it must be able to represent its knowledge of the world, share what it learns and interact with other agents, in particular humans. Our research combines expertise in human-robot interaction, natural language processing, speech, information retrieval, data management and low code/no code programming to build AI components that will help next-generation robots perform complex real-world tasks. These components will help robots interact safely with humans and their physical environment, other robots and systems, represent and update their world knowledge and share it with the rest of the fleet. More details on our research can be found in the Explore section below.
Visual perception is a necessary part of any intelligent system that is meant to interact with the world. Robots need to perceive the structure, the objects, and people in their environment to better understand the world and perform the tasks they are assigned. Our research combines expertise in visual representation learning, self-supervised learning and human behaviour understanding to build AI components that help robots understand and navigate in their 3D environment, detect and interact with surrounding objects and people and continuously adapt themselves when deployed in new environments. More details on our research can be found in the Explore section below.
Details on the gender equality index score 2024 (related to year 2023) for NAVER France: 87/100.
Detail of indicators:
1. Pay gap between women and men: 34/40 points
2. Gap in individual salary increases between women and men: 35/35 points
3. Salary increases upon return from maternity leave: not calculable
4. Number of employees of the under-represented gender among the 10 highest salaries: 5/10 points
(The 87/100 score is consistent with the 34 + 35 + 5 = 74 points obtained being rescaled over the 85 calculable points: 74/85 ≈ 87.)
The NAVER France targets set in 2022 (Indicator n°1: +2 points in 2024 and Indicator n°4: +5 points in 2025) have been achieved.
The research we conduct on expressive visual representations is applicable to visual search, object detection, image classification and the automatic extraction of 3D human poses and shapes that can be used for human behavior understanding and prediction, human-robot interaction or even avatar animation. We also extract 3D information from images that can be used for intelligent robot navigation, augmented reality and the 3D reconstruction of objects, buildings or even entire cities.
Our work covers the spectrum from unsupervised to supervised approaches, and from very deep architectures to very compact ones. We’re excited about the promise of big data to bring big performance gains to our algorithms but also passionate about the challenge of working in data-scarce and low-power scenarios.
Furthermore, we believe that a modern computer vision system needs to continuously adapt to its environment and improve itself via lifelong learning. Our driving goal is to use our research to deliver embodied intelligence to our users through robotics, autonomous driving, phone cameras and any other visual means, reaching people wherever they may be.