Workshop Details

The ExUM workshop aims to provide a forum to discuss and investigate the role of transparency and explainability in the development of novel methodologies for building user models and personalized systems. Research lines of interest for ExUM include: building scrutable user models and transparent algorithms, analyzing the impact of opaque algorithms on end users, studying the role of explanation strategies, and investigating how to give users more control over personalized and adaptive systems.


Adaptive and personalized systems have become pervasive technologies that play an increasingly important role in our daily lives. Indeed, we are now used to interacting every day with algorithms that assist us in a range of scenarios, from services that suggest music to listen to or movies to watch, to personal assistants able to proactively support us in complex decision-making tasks.

As the importance of these technologies in our everyday lives grows, it is fundamental that the internal mechanisms guiding their algorithms be as clear as possible. It is no coincidence that the recent General Data Protection Regulation (GDPR) emphasized the users' right to explanation when people face machine learning-based (or, more generally, artificial intelligence-based) systems. Unfortunately, current research tends to go in the opposite direction, since most approaches try to maximize the effectiveness of the personalization strategy (e.g., recommendation accuracy) at the expense of the explainability and transparency of the model.

The main research question arising from this scenario is simple and straightforward: how can we deal with this dichotomy between the need for effective adaptive systems and the right to transparency and interpretability?

Several research lines are triggered by this question: building scrutable user models and transparent algorithms, analyzing the impact of opaque algorithms on end users, studying the role of explanation strategies, and investigating how to give users more control in personalization and adaptation processes.



Topics of interest include but are not limited to:

Transparent and Explainable Personalization Strategies

o   Scrutable User Models

o   Transparent User Profiling and Personal Data Extraction

o   Explainable Personalization and Adaptation Methodologies

o   Novel strategies (e.g., conversational recommender systems) for building transparent algorithms


Designing Explanation Algorithms

o   Explanation algorithms based on item description and item properties

o   Explanation algorithms based on user-generated content (e.g., reviews)

o   Explanation algorithms based on collaborative information

o   Building explanation algorithms for opaque personalization techniques (e.g., neural networks, matrix factorization)


Designing Transparent and Explainable User Interfaces

o   Transparent User Interfaces

o   Designing Transparent Interaction methodologies

o   Novel paradigms (e.g., chatbots) for building transparent models


Evaluating Transparency and Explainability

o   Evaluating Transparency in Interaction and Personalization

o   Evaluating the Explainability of Algorithms

o   Designing User Studies for evaluating transparency and explainability

o   Novel metrics and experimental protocols


Open Issues in Transparent and Explainable User Models and Personalized Systems

o   Ethical issues (Fairness and Biases) in User Models and Personalized Systems

o   Privacy management of Personal and Social data

o   Discussing Recent Regulations (GDPR) and future directions