These considerations raise several research questions:
1. How can we build transparent user models? Can we design transparent data extraction strategies?
2. Can we think about novel recommendation and personalization strategies that consider transparency and explainability?
3. What is the role of explanation algorithms in building more transparent and explainable personalization pipelines?
4. Can we introduce explanation strategies into opaque models, such as neural networks and matrix factorization techniques?
5. Can we think about novel metrics that go beyond accuracy and reward more transparent and explainable recommendations?
6. Can we think about novel personalization paradigms (e.g., chatbots, conversational recommender systems) that enable a more transparent interaction?
7. What is the role of end users in personalization and adaptation algorithms?
The spread of adaptive and personalized systems has its roots in the recent growth of (personal) data, which led to two distinct phenomena. On the one hand, the uncontrolled growth of information emphasized the need for systems able to support users in sifting through this huge flow of data. On the other, the many data points about users that are now available (what they like, who their friends are, which places they often visit, etc.) enabled the definition of very precise and fine-grained user models, which in turn made very effective personalization and adaptation mechanisms possible.
Nowadays we are used to interacting with algorithms that exploit such personal data to support us in several scenarios, such as suggesting music to listen to or movies to watch. These personalized and adaptive services are continuously evolving and are becoming part of our everyday lives, increasingly acting as personal assistants able to proactively help us in complex decision-making tasks.
Unfortunately, most of these systems adopt black-box models whose internal mechanisms are opaque to end users. Users typically enjoy personalized suggestions and like to be supported in their decision-making tasks, but they are not aware of the general rationale that guides the algorithms in the adaptation and personalization process. Moreover, the metrics usually adopted to evaluate the effectiveness of these algorithms reward very opaque methodologies, such as matrix factorization and neural network-based techniques, which maximize the accuracy of the suggestions at the expense of the transparency and explainability of the model.
This issue is felt even more keenly in light of the recent General Data Protection Regulation (GDPR), which further emphasized the need and the right for scrutable and transparent methodologies that help users fully understand both the information about them held by a system and the internal behavior of its personalization algorithms. As a consequence, the main motivation of the workshop is simple and straightforward: how can we deal with this dichotomy between the need for effective adaptive systems and the right to transparency and interpretability?