University of Bari

Department of Informatics

Intelligent Interfaces



Dialog Simulator





Objective

This testbed was initially designed and implemented within the scope of Magicster, a European project of the IST-Future and Emerging Technologies Program. Its aim was to simulate affective dialogs with an Embodied Conversational Agent, to show how the dialog is influenced by the social context and by the agent's emotional state, and how this state is, in turn, dynamically influenced by the dialog. We employed the testbed to adjust the system components after evaluating their behavior in various situations: we tested the role of context and personality in the activation of multiple emotions, upgraded the dialog strategy and the plan library, revised the interpretation of user moves and improved the rendering of agent moves.
The next step, within the scope of HUMAINE, will be to build a new prototype in Java, in which the 'emotional persuasion' component will be upgraded and an affective user modeling module will be added.

Overview

The system is driven by a Graphical Interface, which interacts with the user and coordinates the activation of the various modules and the exchange of information among them:

  • Mind initially receives information about the setting conditions and selects "personality", "context" and "domain" files accordingly. It subsequently receives an interpreted user move and sends back a list of "emotion intensities" that this move activates in the agent;
  • Dialog Manager receives initial information about the dialog conditions. At every user turn, it receives an interpreted user move together with a description of the changes produced in the agent's affective state. It sends back an agent move, which is played by the TTS and shown by the body. This move is annotated in APML, an "Affective Markup Language", and is stored as an XML file;
  • Body reads this file and generates the ECA, which is displayed in the Interface. Thanks to the mind-body independence of our tool, several Embodied Agents may be employed to express the agent move. So far, we have integrated a realistic 3D character in a DLL of the Interface (face animation by Pelachaud et al., 1996, and speech by Festival) and developed a wrapper for MS-Agents. A sketch of one dialog turn appears after this list.
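
To make this pipeline concrete, here is a minimal sketch (in Java, the language of the planned HUMAINE prototype) of how the Interface might coordinate one dialog turn across the three modules; all class and method names are illustrative assumptions, not the actual system API.

    import java.util.Map;

    // Sketch of one dialog turn as coordinated by the Graphical Interface.
    // All names here are illustrative assumptions, not the real system API.
    public class TurnCycleSketch {

        interface Mind {           // emotion activation component
            Map<String, Double> emotionIntensities(String interpretedUserMove);
        }

        interface DialogManager {  // TRINDIKIT-based move planner
            String planAgentMove(String interpretedUserMove,
                                 Map<String, Double> affectiveState);
        }

        interface Body {           // ECA renderer (face animation + TTS)
            void render(String apmlMove);
        }

        static void oneTurn(Mind mind, DialogManager dm, Body body,
                            String userMove) {
            // 1. Mind returns the intensities of the emotions the move activates.
            Map<String, Double> intensities = mind.emotionIntensities(userMove);
            // 2. The Dialog Manager plans the agent move, given the new state.
            String apmlMove = dm.planAgentMove(userMove, intensities);
            // 3. The Body reads the APML-annotated move and renders it.
            body.render(apmlMove);
        }
    }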

Participants

Valeria Carofiglio
Addolorata Cavalluzzi
Giuseppe Cellamare
B. Nadja De Carolis
Vittorio De Frenzi
Fiorella de Rosis
Roberto Grassano
Sebastiano Pizzutilo

Architecture and Language

The system architecture is shown in the figure below.

The Interface was implemented in Visual C++, while the sockets ensuring communication among the different processes are built-in classes of the Interface code. The dialog manager is implemented with TRINDIKIT; emotion activation and argumentation strategies are modeled with belief networks and implemented with the HUGIN APIs.
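
As an illustration of this inter-process communication, the sketch below shows how a module client might exchange an interpreted user move for a list of emotion intensities over a socket. Host, port and the line-based message format are assumptions made for this example only; the actual sockets are built-in C++ classes of the Interface.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    // Sketch of an Interface-to-Mind socket exchange (names and format assumed).
    public class MindSocketSketch {
        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("localhost", 5000);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()))) {
                // Send the interpreted user move to the Mind process...
                out.println("USER_MOVE ask(advice)");
                // ...and read back the emotion intensities it activates.
                String intensities = in.readLine(); // e.g. "sorry-for=0.6 hope=0.2"
                System.out.println("Activated emotions: " + intensities);
            }
        }
    }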

Publications

  • A. Cavalluzzi, V. Carofiglio and F. de Rosis:
    Affective Advice Giving Dialogs.
    Tutorial and Research Workshop on "Affective Dialogue Systems".
    Kloster Irsee, June 2004.
  • F. de Rosis, B. De Carolis, V. Carofiglio and S. Pizzutilo:
    Shallow and inner forms of emotional intelligence in advisory dialog simulation.
    In H. Prendinger and M. Ishizuka (Eds): "Life-like Characters. Tools, Affective Functions and Applications".
    Springer, 2003.
  • A. Cavalluzzi, B. De Carolis, V. Carofiglio and G. Grassano:
    Emotional dialogs with an embodied agent.
    In P. Brusilovsky, A. Corbett and F. de Rosis (Eds): "User Modeling '03".
    Springer LNAI 2702.
  • I. Poggi, C. Pelachaud, F. de Rosis, V. Carofiglio and B. De Carolis:
    Greta. A believable embodied conversational agent.
    In O. Stock and M. Zancanaro (Eds): "Intelligent Information Presentation".
    Kluwer Academic Publishers.
  • C. Matheson, C. Pelachaud, F. de Rosis and T. Rist:
    Magicster: Believable Agents and Dialogue.
    Künstliche Intelligenz, 2003.
  • F. de Rosis, C. Pelachaud, I. Poggi, V. Carofiglio and B. De Carolis:
    From Greta's mind to her face: modelling the dynamics of affective states in a conversational embodied agent.
    International Journal of Human-Computer Studies, Special Issue on "Applications of Affective Computing in HCI", E. Hudlicka and M. McNeese (Eds), 59, 81-118, 2003.

How it works

The first step is to set the dialog parameters by specifying the agent personality, the application domain, the context in which the interaction occurs, a threshold for removing emotion noise, the displayed agent, and an empathic/non-empathic variable. All these variables are employed by Mind; domain and agent are employed by TRINDIKIT, respectively, to select the directory from which to read the application files and to play the appropriate agent move. A dedicated interface guides this setting phase.
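
As a rough illustration, the setting parameters could be collected in a structure like the following; field names and example values are assumptions made for this sketch.

    // Sketch of the parameters fixed in the setting phase (names assumed).
    public record DialogSettings(
            String personality,    // used by Mind to select the personality file
            String domain,         // selects the TRINDIKIT application directory
            String context,        // social context in which interaction occurs
            double noiseThreshold, // emotions below this intensity are discarded
            String displayedAgent, // which embodied agent renders the moves
            boolean empathic) {    // empathic vs non-empathic behavior

        public static void main(String[] args) {
            DialogSettings settings = new DialogSettings(
                    "agreeable", "advice-giving", "formal", 0.1, "3d-face", true);
            System.out.println(settings);
        }
    }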

Once the initial setting has been completed, the scheduler triggers the dialog: the agent moves first. Users may then enter their answers or questions by touching a control on a touch screen or on a handheld device. This interface includes:

  • a window in which the agent's body is displayed;
  • a list of enabled moves;
  • a balloon tooltip to clarify the meaning of moves represented with graphical controls;
  • a confirm button to send the move.
The user selects a move in the list and clicks the 'Send' button. The system receives the user move, processes it (interpreting the move, planning the next one and so on) and produces the output: the agent move, rendered by the ECA, and the list of moves from which the user will select her next one.
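
The sketch below summarizes what one such processing step might return; the record and the sample moves are illustrative assumptions.

    import java.util.List;

    // Sketch of the output of one processing step: the agent's APML move
    // plus the new list of enabled user moves (names assumed).
    public class SendMoveSketch {

        record TurnOutput(String agentMoveFile, List<String> enabledUserMoves) {}

        static TurnOutput process(String selectedUserMove) {
            // Interpretation, emotion update and move planning happen here.
            String apmlFile = "agent-move.xml";  // rendered by the ECA
            List<String> nextMoves = List.of("ask-more", "thank", "quit");
            return new TurnOutput(apmlFile, nextMoves);
        }

        public static void main(String[] args) {
            System.out.println(process("ask(advice)"));
        }
    }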


