Accepted Papers

  • 1) Federico Cabitza, Matteo Cameli, Andrea Campagner, Chiara Natali and Luca Ronzio - Painting the Black Box White: Experimental Findings from Applying XAI to an ECG Reading Setting

  • 2) Andrea Apicella, Francesco Isgrò, Andrea Pollastro and Roberto Prevete - Toward the Application of XAI Methods in EEG-based Systems

  • 3) Luca Putelli, Alfonso Emilio Gerevini, Alberto Lavelli, Tahir Mehmood and Ivan Serina - On the Behaviour of BERT's Attention for the Classification of Medical Reports

  • 4) Nina Spreitzer, Hinda Haned and Ilse van der Linden - Evaluating the Practicality of Counterfactual Explanations

  • 5) Erasmo Purificato, Saijal Shahania and Ernesto William De Luca - Tell Me Why It's Fake: Developing an Explainable User Interface for a Fake News Detection System

  • 6) Mario Alviano, Francesco Bartoli, Marco Botta, Roberto Esposito, Laura Giordano, Valentina Gliozzi and Daniele Theseider Dupré - Towards a Conditional and Multi-preferential Approach to Explainability of Neural Network Models in Computational Logic

  • 7) Lorenzo Malandri, Fabio Mercorio, Mario Mezzanzanica, Navid Nobani and Andrea Seveso - ContrXT: Generating contrastive explanations from any text classifier

  • 8) Francesca Naretto, Francesco Bodria, Fosca Giannotti and Dino Pedreschi - Benchmark analysis of black box local explanation methods

  • 9) Mario Alfonso Prado-Romero, Bardh Prenkaj, Giovanni Stilo, Alessandro Celi, Ernesto Estevanell-Valladares and Daniel Alejandro Valdés-Pérez - Ensemble approaches for Graph Counterfactual Explanations

  • 10) Simona Colucci, Francesco M. Donini and Eugenio Di Sciascio - A Human-readable Explanation for the Similarity of RDF Resources

  • 11) José Luis Corcuera Bárcena, Mattia Daole, Pietro Ducange, Francesco Marcelloni, Alessandro Renda, Fabrizio Ruffini and Alessio Schiavo - Fed-XAI: Federated Learning of Explainable Artificial Intelligence Models