DEMO-13: inSIDE Fair Dialogues: Assessing and Maintaining Fairness in Human-Computer-Interaction

Exhibitors: Sabine Janzen¹, Ralf Bleymehl², Aftab Alam², Sascha Xu¹, Hannah Stein²
¹ Deutsches Forschungszentrum für Künstliche Intelligenz
² Information and Service Systems, Saarland University

To simulate human-like intelligence in dialogue systems, the individual and partially conflicting motives of participants have to be processed in dialogue planning. Little attention has been given to this topic in dialogue planning, in contrast to dialogues that are fully aligned with anticipated user motives. When handling dialogues with both congruent and incongruent participant motives, such as sales dialogues, dialogue systems need to find a balance between competition and cooperation. As a means of balancing such mixed motives in dialogues, we adopt the concept of fairness, defined as a combination of a fairness state and a fairness maintenance process. Focusing on dialogues between humans and robots, we show the application of the SatIsficing Dialogue Engine (inSIDE) – a platform for assessing and maintaining fairness in dialogues that combines a mixed-motive model with a game-theoretical equilibrium approach. Our demo shows the application of the inSIDE platform for realizing sales dialogues between customers and a service robot in a retail store. Customer and robot have different motives for participating in the sales dialogue, e.g., searching for the best price or increasing revenue. Nonetheless, empowered by inSIDE, the robot is able to find a balance between selfishness, i.e., pursuing individual motives, and fair play, i.e., responding to the anticipated motives of the customer in order to create a dialogue perceived as fair. The main features of the demo are: (1) assessing and maintaining fairness in dialogues with mixed motives in human-robot interaction by means of inSIDE; (2) proactive behavior by the robot as well as spatial guiding of the customer when required during the interaction.
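The balance between selfishness and fair play described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the actual inSIDE implementation: the candidate moves, utility values, and the `fairness_weight` parameter are all illustrative assumptions, standing in for the platform's mixed-motive model and equilibrium computation.

```python
# Hypothetical sketch (NOT the actual inSIDE implementation): selecting
# the robot's next dialogue move by weighing its own motive (e.g. revenue)
# against the anticipated customer motive (e.g. best price).

# Assumed candidate moves with utilities in [0, 1] for each party.
CANDIDATES = {
    "recommend_premium_product":  {"robot_utility": 0.9, "customer_utility": 0.3},
    "recommend_discount_product": {"robot_utility": 0.4, "customer_utility": 0.9},
    "recommend_midrange_product": {"robot_utility": 0.7, "customer_utility": 0.7},
}

def select_move(candidates, fairness_weight=0.5):
    """Pick the move maximizing a weighted mix of both parties' utilities.

    fairness_weight = 0 -> purely selfish robot;
    fairness_weight = 1 -> robot fully serves anticipated customer motives.
    """
    def score(utils):
        return ((1 - fairness_weight) * utils["robot_utility"]
                + fairness_weight * utils["customer_utility"])
    return max(candidates, key=lambda move: score(candidates[move]))

# A balanced weight favors the compromise move:
print(select_move(CANDIDATES, fairness_weight=0.5))  # recommend_midrange_product
# A selfish robot picks the revenue-maximizing move:
print(select_move(CANDIDATES, fairness_weight=0.0))  # recommend_premium_product
```

The weighted sum is only one simple way to trade off mixed motives; the paper's game-theoretical equilibrium approach is more involved, but the sketch conveys the core tension between pursuing the robot's own motives and responding to the customer's.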

YouTube Video: