Motivation

Background and Topic Relevance

Nowadays we are witnessing a new summer of Artificial Intelligence, as AI-based algorithms are being adopted in a growing number of contexts and application domains, ranging from media and entertainment to medical, financial and legal decision-making.
While the very first AI systems were easily interpretable, the current trend shows the rise of opaque methodologies such as those based on Deep Neural Networks (DNNs), whose remarkable effectiveness is contrasted by the enormous complexity of the models, due to the huge number of layers and parameters that characterize them.
As intelligent systems become more and more widely applied (especially in very “sensitive” domains), it is no longer acceptable to adopt opaque or inscrutable black-box models, or to ignore the general rationale that guides the algorithms in the tasks they carry out. Moreover, the metrics that are usually adopted to evaluate the effectiveness of these algorithms reward very opaque methodologies that maximize the accuracy of the model at the expense of transparency and explainability.
This issue is felt even more acutely in light of recent developments, such as the General Data Protection Regulation (GDPR) and DARPA's Explainable AI Project, which further emphasized the need and the right for scrutable and transparent methodologies that can guide the user toward a complete comprehension of the information held and managed by AI-based systems.
Accordingly, the question motivating the workshop is simple and straightforward: how can we deal with this dichotomy between the need for effective intelligent systems and the right to transparency and interpretability?
This question triggers several research lines that are particularly relevant for current research in AI. The workshop addresses these lines and aims to provide a forum for the Italian community to discuss problems, challenges and innovative approaches in the area.