Jelte van Waterschoot, Wieke Harmsen and Iris Hendrickx
In our project BLISS (Behaviour-based Language-Interactive Speaking Systems), we have developed a speech-based chatbot, or voicebot, that can talk in Dutch with participants about their daily lives. The purpose of the chatbot is threefold. First, we gather information about participants' wellbeing and happiness and use this information to make follow-up conversations more personal. Second, talking about their daily lives with a voicebot raises awareness and gives participants the opportunity to reflect on how they are doing. Third, we want to make the conversation with the voicebot a pleasant experience that entertains the participant.
Recent studies by Ravichander and Black (2019) and Lee et al. (2020) have shown that self-disclosure by a voicebot can lead to deeper self-disclosure by the participant. We aimed to replicate this effect in our study. Additionally, asking questions about topics discussed in previous conversations (“memory questions”) could increase engagement with a voicebot, if done properly (Campos et al., 2018).
Our previous experiments (Van Waterschoot et al., 2020) showed that participants experienced our pilot version of the voicebot as an interviewer rather than as an equal interlocutor. To remedy this, we altered the voicebot. In the latest experiment we addressed two research questions: 1) does self-disclosure by the voicebot lead to more engaging conversations, and 2) does asking memory questions lead to more engaging conversations? Participants spoke with one of three versions of the voicebot: with self-disclosure, without self-disclosure, or mixed. Each participant spoke with the voicebot once. A week later they received an email inviting them to talk with the voicebot a second time, during which the voicebot asked them memory questions. We measured the effects of the different conditions on utterance length and number of content words. In this talk we present the results of this experiment and discuss possible avenues for future research.
Campos, J., Kennedy, J., & Lehman, J. F. (2018). Challenges in Exploiting Conversational Memory in Human-Agent Interaction. In M. Dastani, G. Sukthankar, E. André, & S. Koenig (Eds.), Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems (pp. 1649–1657). IFAAMAS. https://dl.acm.org/doi/10.5555/3237383.3237945
Lee, Y.-C., Yamashita, N., Huang, Y., & Fu, W. (2020). “I Hear You, I Feel You”: Encouraging Deep Self-disclosure through a Chatbot. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM. https://doi.org/10.1145/3313831.3376175
Ravichander, A., & Black, A. W. (2019). An Empirical Study of Self-Disclosure in Spoken Dialogue Systems. In Proceedings of the 19th Annual SIGDIAL Meeting on Discourse and Dialogue (pp. 253–263). https://doi.org/10.18653/v1/w18-5030
van Waterschoot, J., Hendrickx, I., Khan, A., Klabbers, E., De Korte, M., Strik, H., Cucchiarini, C., & Theune, M. (2020). BLISS: An Agent for Collecting Spoken Dialogue Data about Health and Well-being. In Proceedings of the 12th Language Resources and Evaluation Conference (LREC 2020). ELRA.