Chirag Varun Shukla

PhD Student

Ludwig-Maximilians-Universität München

Hello there!

I am a PhD student at the Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence at LMU Munich, supervised by Prof. Dr. Gitta Kutyniok. My research focuses on the interaction between interpretability and Graph Neural Networks (GNNs), with the aim of making GNNs theoretically rigorous and reliable. As part of the MaGriDo project, I currently work on building mathematical foundations for post-hoc explainers as well as self-interpretable models for molecules.

I graduated with an M.Sc. in Mathematics in 2019, with a focus on graph theory and fluid mechanics, and with a B.Sc. in Physics, Chemistry, and Mathematics in 2017.

Curriculum Vitae

Interests

Interpretability

Graph Neural Networks

Molecular Machine Learning

Education

PhD in Mathematics, 2021-Present

LMU Munich, Germany

M.Sc. in Mathematics, 2017-2019

Christ University, India

B.Sc. in Physics, Chemistry, Mathematics, 2014-2017

Christ University, India

Publications


Towards Training GNNs using Explanation Directed Message Passing

Valentina Giunchiglia*, Chirag Varun Shukla*, Guadalupe Gonzalez, Chirag Agarwal

With the increasing use of Graph Neural Networks (GNNs) in critical real-world applications, several post hoc explanation methods have been proposed to understand their predictions. However, there has been no work on generating explanations on the fly during model training and utilizing them to improve the expressive power of the underlying GNN models. In this work, we introduce a novel explanation-directed neural message passing framework for GNNs, EXPASS (EXplainable message PASSing), which aggregates only embeddings from nodes and edges identified as important by a GNN explanation method. EXPASS can be used with any existing GNN architecture and subgraph-optimizing explainer to learn accurate graph embeddings. We theoretically show that EXPASS alleviates the oversmoothing problem in GNNs by slowing the layer-wise loss of Dirichlet energy, and that the embedding difference between the vanilla message passing and EXPASS frameworks can be upper bounded by the difference of their respective model weights. Our empirical results show that graph embeddings learned using EXPASS improve predictive performance and alleviate the oversmoothing problem of GNNs, opening up new frontiers in graph machine learning to develop explanation-based training frameworks.
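The core idea of the abstract above — aggregating messages only along edges an explainer marks as important — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function name `expass_layer`, the dense-matrix formulation, and the fixed `threshold` are all simplifying assumptions of mine, and a real explainer would supply the importance scores.

```python
import numpy as np

def expass_layer(X, A, edge_importance, W, threshold=0.5):
    """One explanation-directed message-passing step (illustrative sketch).

    X               : (n, d) node feature matrix
    A               : (n, n) adjacency matrix
    edge_importance : (n, n) importance scores from some explainer (assumed given)
    W               : (d, k) learnable weight matrix

    Messages are aggregated only along edges whose importance reaches the
    threshold; everything else mirrors a vanilla message-passing layer.
    """
    mask = (A > 0) & (edge_importance >= threshold)  # keep only important edges
    M = np.where(mask, A, 0.0)                       # explanation-masked adjacency
    H = M @ X                                        # aggregate neighbor embeddings
    return np.maximum(H @ W, 0.0)                    # linear transform + ReLU
```

In a vanilla layer, the aggregation would use `A` directly; here, unimportant edges contribute nothing, which is the mechanism the abstract credits with slowing the layer-wise loss of Dirichlet energy.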

Interested? Contact me:

I love discussing research!