Program

Tentative Program

Tuesday, July 5, 2022

    Opening (09.00 - 09.10 CEST)
    Session Chair: Cataldo Musto

    Session 1 (09.10 - 10.35 CEST)
    Session Chair: Marco Polignano
    •    09.10 - 09.50    Invited Talk - Martijn Willemsen - Explainability in AI and Recommender systems: let’s make it interactive!
    •    09.50 - 10.05    Mohammed Muheeb Ghori, Arman Dehpanah, Jonathan Gemmell, Hamed Qahri-Saremi and Bamshad Mobasher: "Does the User Have A Theory of the Recommender? A Grounded Theory"
    •    10.05 - 10.20    Mouadh Guesmi, Mohamed Amine Chatti, Laura Vorgerd, Thao Ngo, Shoeb Joarder, Qurat Ul Ain and Arham Muslim: "Explaining User Models with Different Levels of Detail for Transparent Recommendation: A User Study"
    •    10.20 - 10.35    Owen Chambers, Robin Cohen, Maura R. Grossman and Queenie Chen: "Creating a User Model to Support User-specific Explanations of AI Systems"

    Break (10.35 - 11.00 CEST)

    Session 2 (11.00 - 12.25 CEST)
    Session Chair: Oana Inel
    •    11.00 - 11.40    Invited Talk - Alain Starke - Using Explanatory Nudges to Support ‘Better’ Decision-Making in Recommender Systems
    •    11.40 - 11.55    Zhirun Zhang, Yucheng Jin and Li Chen: "A Diary Study of Social Explanations for Recommendations in Daily Life"
    •    11.55 - 12.10    Alisa Rieger, Qurat-Ul-Ain Shaheen, Carles Sierra, Mariët Theune and Nava Tintarev: "Towards Healthy Online Debate: An Investigation of Debate Summaries and Personalized Persuasive Suggestions"
    •    12.10 - 12.25    Marco Polignano, Giuseppe Colavito, Cataldo Musto, Marco de Gemmis and Giovanni Semeraro: "Lexicon Enriched Hybrid Hate Speech Detection with Human-Centered Explanations"

    Closing (12.25 - 12.30 CEST)

    Invited Talks

    Using Explanatory Nudges to Support ‘Better’ Decision-Making in Recommender Systems

    dr.ir. Alain Starke - Wageningen University & Research, Department of Social Sciences, Subdivision of Marketing and Consumer Behaviour

    Abstract
    This talk addresses recommender systems in domains where users may seek behavioral change. Food recommender systems have become popular for helping users find foods to buy and eat. One issue is that the most popular recommendations tend to be unhealthy, making it difficult for users to be exposed to new types of food or to adopt healthier or more sustainable eating habits. Starke will describe recommender studies in which it is not the presented content that is changed, but rather how that content is explained, examining how users can be supported in making ‘better’ decisions through different types of explanatory nudges, such as food labels, health-based justifications and normative nudges.

    Bio
    Alain Starke is a researcher on recommender systems and nudging, examining how decision-making interfaces can support changes in preferences and behavior, particularly in the food domain. He obtained his PhD in 2019 at Eindhoven University of Technology, Netherlands, on energy recommender systems. Starke has a dual affiliation. He is a postdoctoral researcher at the Marketing and Consumer Behaviour group, Wageningen, Netherlands, where he investigates consumer acceptance of personalized dietary advice. He is also an adjunct associate professor at the Department of Information Science and Media Studies, University of Bergen, Norway, where he performs user studies with recommender systems in the food and news domains.

    Explainability in AI and Recommender systems: let’s make it interactive!

    prof. Martijn Willemsen - Human Technology Interaction group at Eindhoven University of Technology (TU/e), Jheronimus Academy of Data Science in Den Bosch (JADS)

    Abstract
    Explainability has become an important topic both in Data Science and AI in general and in recommender systems in particular, as algorithms have become much less inherently explainable. However, explainability has different interpretations and goals in different fields. For example, interpretability and explainability tools in machine learning are predominantly developed for data scientists to understand and scrutinize their models; current tools are therefore often quite technical and not very ‘user-friendly’. I will illustrate this with our recent work on improving the explainability of model-agnostic tools such as LIME and SHAP. Another stream of research on explainability, in the HCI and XAI fields, focuses more on users’ needs for explainability, such as contrastive and selective explanations, and explanations that fit with the mental models and beliefs of the user. However, how to satisfy those needs is still an open question. Based on recent work in interactive AI and machine learning, I will propose that explainability goes hand in hand with interactivity, and will illustrate this with examples from our own work on music genre exploration, which combines visualizations and interactive tools to help users understand and tune our exploration model.

    Bio
    Martijn Willemsen (www.martijnwillemsen.nl) is an Associate Professor on human decision making in interactive systems in the Human Technology Interaction group at Eindhoven University of Technology (TU/e) and at the Jheronimus Academy of Data Science in Den Bosch (JADS). He researches the cognitive aspects of Human-Technology Interaction, with a strong focus on judgment and decision making in online environments. From a theoretical perspective, he has a special interest in process-tracing technologies to capture and analyze the information processing of decision makers. His applied research focuses on how (online) decisions can be supported by recommender systems, and includes domains such as movies, music, health-related decisions (food, lifestyle, exercise) and energy-saving measures. His recent focus is on interactive recommender systems that help users move forward, developing new preferences and/or healthier behavior, rather than reinforcing their current behaviors. Such systems can support personalized behavioral change. Martijn also focuses on interactive and explainable AI, with recent work studying health and sport coaches interacting with prediction models.

    Accepted Papers

    • Personalized XAI for AI-driven Personalization (not in the proceedings)
      Cristina Conati
    • Towards Healthy Online Debate: An Investigation of Debate Summaries and Personalized Persuasive Suggestions
      Alisa Rieger, Qurat-Ul-Ain Shaheen, Carles Sierra, Mariët Theune and Nava Tintarev
    • Explaining User Models with Different Levels of Detail for Transparent Recommendation: A User Study
      Mouadh Guesmi, Mohamed Amine Chatti, Laura Vorgerd, Thao Ngo, Shoeb Joarder, Qurat Ul Ain and Arham Muslim
    • Lexicon Enriched Hybrid Hate Speech Detection with Human-Centered Explanations
      Marco Polignano, Giuseppe Colavito, Cataldo Musto, Marco de Gemmis and Giovanni Semeraro
    • Does the User Have A Theory of the Recommender? A Grounded Theory
      Mohammed Muheeb Ghori, Arman Dehpanah, Jonathan Gemmell, Hamed Qahri-Saremi and Bamshad Mobasher
    • A Diary Study of Social Explanations for Recommendations in Daily Life
      Zhirun Zhang, Yucheng Jin and Li Chen
    • Creating a User Model to Support User-specific Explanations of AI Systems
      Owen Chambers, Robin Cohen, Maura R. Grossman and Queenie Chen