Workshop Program

Workshop Day: November 30, 2022

Invited Talk

  • When: November 30, h.14.00
  • Speaker: Pasquale Minervini, University of Edinburgh and University College London
  • Title: Backpropagating through complex discrete distributions and symbolic algorithms

  • Abstract: Combining discrete probability distributions and combinatorial optimization problems with neural network components has numerous applications but poses several challenges. We propose Implicit Maximum Likelihood Estimation (I-MLE), a framework for end-to-end learning of models combining discrete exponential family distributions and differentiable neural components. I-MLE is widely applicable as it only requires the ability to compute the most probable states and does not rely on smooth relaxations. The framework encompasses several approaches such as perturbation-based implicit differentiation and recent methods to differentiate through black-box combinatorial solvers. We introduce a novel class of noise distributions for approximating marginals via perturb-and-MAP. Moreover, we show that I-MLE simplifies to maximum likelihood estimation when used in some recently studied learning settings that involve combinatorial solvers. Experiments on several datasets suggest that I-MLE is competitive with and often outperforms existing approaches which rely on problem-specific relaxations.

  • Bio: Pasquale is a Lecturer in Natural Language Processing at the School of Informatics, University of Edinburgh, and an Honorary Lecturer at University College London (UCL). Previously, he was a Senior Research Fellow at UCL (2017-2022); a postdoc at the INSIGHT Centre for Data Analytics, Ireland (2016); and a postdoc at the University of Bari, Italy (2015). Pasquale's research interests are in NLP and ML, with a focus on relational learning and learning from graph-structured data, solving knowledge-intensive tasks, hybrid neuro-symbolic models, compositional generalisation, and designing data-efficient and robust deep learning models. Pasquale has published over 60 peer-reviewed papers in top-tier AI conferences, receiving multiple awards (including an Outstanding Paper Award at ICLR 2021), and has delivered several tutorials on Explainable AI and relational learning (including four AAAI tutorials). He is the main inventor of a patent assigned to Fujitsu Ltd. In 2019 he was awarded a seven-figure EU Horizon 2020 research grant on applications of relational learning to cancer research, and in 2020 his team won two of the three tracks of the Efficient Open-Domain Question Answering Challenge at NeurIPS 2020. He routinely collaborates with researchers across both academia and industry.
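The core idea of the talk's I-MLE framework can be illustrated with a minimal sketch. The code below is an illustrative toy implementation for a k-subset (top-k) exponential-family distribution, not the speaker's released code: the forward pass takes a perturb-and-MAP sample (MAP state under Gumbel noise), and the backward pass estimates the score gradient as the difference between that state and the MAP state of a target distribution shifted against the downstream loss gradient. All function names and hyperparameters (`lam`, `noise_scale`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def map_topk(theta, k):
    # MAP state of a k-subset distribution: a 0/1 mask over the k largest scores.
    z = np.zeros_like(theta)
    z[np.argsort(theta)[-k:]] = 1.0
    return z

def imle_gradient(theta, k, dloss_dz, lam=10.0, noise_scale=1.0):
    # Perturb-and-MAP forward sample: MAP state under Gumbel noise.
    eps = rng.gumbel(scale=noise_scale, size=theta.shape)
    z = map_topk(theta + eps, k)
    # Target distribution: scores shifted against the loss gradient on z.
    z_target = map_topk(theta + eps - lam * dloss_dz, k)
    # I-MLE gradient estimate w.r.t. theta: difference of the two MAP states.
    return (z - z_target) / lam, z
```

Note that only a MAP solver (`map_topk` here) is required, no smooth relaxation of the discrete distribution; the same recipe applies when the argmax is computed by a black-box combinatorial solver.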


Schedule

  • h.10.00 - 10.15 - Workshop Opening
  • Cataldo Musto, University of Bari

  • h.10.15 - 10.45 - Session 1
  • (12 min. presentation + 5 min. discussion per paper)
  • Francesca Naretto, Francesco Bodria, Fosca Giannotti and Dino Pedreschi - Benchmark analysis of black box local explanation methods

  • Lorenzo Malandri, Fabio Mercorio, Mario Mezzanzanica, Navid Nobani and Andrea Seveso - ContrXT: Generating contrastive explanations from any text classifier

  • h.10.45 - 11.15 - Break

  • h.11.15 - 13.00 - Session 2
  • (12 min. presentation + 5 min. discussion per paper)
  • Nina Spreitzer, Hinda Haned and Ilse van der Linden - Evaluating the Practicality of Counterfactual Explanations

  • Mario Alfonso Prado-Romero, Bardh Prenkaj, Giovanni Stilo, Alessandro Celi, Ernesto Estevanell-Valladares and Daniel Alejandro Valdés-Pérez - Ensemble approaches for Graph Counterfactual Explanations

  • Mario Alviano, Francesco Bartoli, Marco Botta, Roberto Esposito, Laura Giordano, Valentina Gliozzi and Daniele Theseider Dupre - Towards a Conditional and Multi-preferential Approach to Explainability of Neural Network Models in Computational Logic (Extended Abstract)

  • Federico Cabitza, Matteo Cameli, Andrea Campagner, Chiara Natali and Luca Ronzio - Painting the black box white: experimental findings from applying XAI to an ECG reading setting

  • Andrea Apicella, Francesco Isgro, Andrea Pollastro and Roberto Prevete - Toward the application of XAI methods in EEG-based systems

  • Luca Putelli, Alfonso Emilio Gerevini, Alberto Lavelli, Tahir Mehmood and Ivan Serina - On the Behaviour of BERT's Attention for the Classification of Medical Reports

  • h.13.00 - 14.00 - Lunch Break

  • h.14.00 - 15.30 - Session 3
  • (12 min. presentation + 5 min. discussion per paper)
  • Invited Talk - Pasquale Minervini (University of Edinburgh and University College London)

  • Erasmo Purificato, Saijal Shahania and Ernesto William De Luca - Tell Me Why It's Fake: Developing an Explainable User Interface for a Fake News Detection System

  • Simona Colucci, Francesco M. Donini and Eugenio Di Sciascio - A Human-readable Explanation for the Similarity of RDF Resources

  • José Luis Corcuera Bárcena, Mattia Daole, Pietro Ducange, Francesco Marcelloni, Alessandro Renda, Fabrizio Ruffini and Alessio Schiavo - Fed-XAI: Federated Learning of Explainable Artificial Intelligence Models