Workshop Program

Proceedings

Proceedings of XAI.it 2020 are now available on CEUR-WS.org (volume 2742)

Invited Talk

  • When: November 25, 2020 - h.15.15
  • Speaker: Dino Pedreschi, Università di Pisa
  • Title: The dimensions of eXplainable Artificial Intelligence

  • Abstract: Black-box AI systems for automated decision making, often based on machine learning over (big) data, map a user’s features into a class or a score without exposing the reasons why. This is problematic not only because of the lack of transparency, but also because of possible biases that the algorithms inherit from human prejudices and from collection artefacts hidden in the training data, which may lead to unfair or wrong decisions. The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any effective collaboration, this requires good communication, trust, clarity and understanding. Explainable AI addresses these challenges, and for years different AI communities have studied the topic, leading to different definitions, evaluation protocols, motivations, and results. This lecture provides a reasoned introduction to the work on Explainable AI (XAI) to date, focussing on the urgent open challenge of how to construct meaningful explanations of opaque AI/ML black-box decision systems, and introduces our early results on the local-to-global framework as a way towards explainable AI.

  • Bio: Dino Pedreschi is a professor of computer science at the University of Pisa, and a pioneering scientist in data science and artificial intelligence. He co-leads the Pisa KDD Lab - Knowledge Discovery and Data Mining Laboratory, a joint research initiative of the University of Pisa and the Italian National Research Council - CNR. He is currently shaping the research frontier of Human-centered Artificial Intelligence, as a leading figure in the European network of research labs Humane-AI-Net (scientific director of the line “Societal AI”). He is a founder of SoBigData.eu, the European H2020 Research Infrastructure “Big Data Analytics and Social Mining Ecosystem”. Dino is currently the Italian member of the Responsible AI working group of GPAI – the Global Partnership on AI, and the coordinator of the working group “Big Data & AI for Policy” of the Italian Government’s “data-driven” task force for the COVID-19 emergency. His research focuses on big data analytics and mining, machine learning and AI, and their impact on society: human mobility and sustainable cities, social network analysis, complex social and economic systems, data ethics, discrimination-preventing and privacy-preserving data analytics, and explainable AI.

Program

  • *** November 25, 2020 - h.14.00 - 15.15 ***
  • Session 1 - METHODS FOR BUILDING EXPLAINABLE AI SYSTEMS
  • (Pre-recorded videos + 5 min. discussion per paper)
  • - Luca Capone and Marta Bertolaso - A Philosophical Approach for a Human-centered Explainable AI
  • - Ivan Donadello and Mauro Dragoni - SeXAI: Introducing Concepts into Black Boxes for Explainable Artificial Intelligence
  • - Roberta Calegari, Andrea Omicini and Giovanni Sartor - Argumentation and Logic Programming for Explainable and Ethical AI
  • - Stefania Costantini and Valentina Pitoni - Towards a Logic of "Inferable" for Self-Aware Transparent Logical Agents
  • - Laura Giordano, Daniele Theseider Dupré and Valentina Gliozzi - Towards a Conditional Interpretation of Self-Organizing Maps

  • *** November 25, 2020 - h.15.15 - 16.00 ***
  • Session 2 - INVITED TALK
  • (Live Event + 10 min. discussion)
  • - Dino Pedreschi - University of Pisa - The dimensions of eXplainable Artificial Intelligence

  • *** November 26, 2020 - h.10.30 - 11.30 ***
  • Session 3 - EXPLAINING NEURAL NETWORKS BEHAVIOR
  • (Pre-recorded videos + 5 min. discussion per paper)
  • - Federico Maria Cau, Lucio Davide Spano and Nava Tintarev - Considerations for Applying Logical Reasoning to Explain Neural Network Outputs
  • - Bruno Apolloni and Ernesto Damiani - Learning simplified functions to understand
  • - Pierangela Bruno, Cinzia Marte and Francesco Calimeri - Understanding Automatic COVID-19 Classification using Chest X-ray images
  • - Fabio Massimo Zanzotto, Dario Onorati, Pierfrancesco Tommasino, Andrea Santilli, Leonardo Ranaldi and Francesca Fallucchi - Pat-in-the-loop: Syntax-based Neural Networks with Activation Visualization and Declarative Control
  • - Francesco Craighero, Alex Graudenzi, Fabrizio Angaroni, Fabio Stella and Marco Antoniotti - Understanding Deep Learning with Activation Pattern Diagrams

  • *** November 26, 2020 - h.11.30 - 12.30 ***
  • Session 4 - EXPLAINABLE AI SYSTEMS
  • (Pre-recorded videos + 5 min. discussion per paper)
  • - Nazanin Fouladgar, Marjan Alirezaie and Kary Främling - Decision Explanation: Applying Contextual Importance and Contextual Utility in Affect Detection
  • - Matteo Baldoni, Cristina Baroglio, Roberto Micalizio and Stefano Tedeschi - Is Explanation the Real Key Factor for Innovation?
  • - Luca Marconi, Ricardo Anibal Matamoros Aragon, Italo Zoppis, Sara Manzoni, Giancarlo Mauri and Francesco Epifania - Approaching Explainable Recommendations for Personalized Social Learning: the current stage in the educational platform "WhoTeach"
  • - Carmelo Ardito, Yashar Deldjoo, Eugenio Di Sciascio and Fatemeh Nazary - Interacting with Features: Visual Inspection of Black-box Fault Type Classification Systems in Electrical Grids