Trust in a Human-Computer Collaborative Task with or Without Lexical Alignment

Sumit Srivastava, Mariët Theune, Alejandro Catala, Chris Reed

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Lexical alignment is a form of personalization frequently found in human-human conversations. Recently, attempts have been made to incorporate it in human-computer conversations. We describe an experiment investigating users' trust in the performance of a conversational agent that either lexically aligns or misaligns with them during a collaborative task. Participants completed a travel planning task with the help of the agent, which involved rescuing residents and minimizing the travel path on a fictional map. We found that trust in the conversational agent was not significantly affected by the agent's alignment capability.

Original language: English
Title of host publication: UMAP 2024 - Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization
Publisher: Association for Computing Machinery
Pages: 189-194
Number of pages: 6
ISBN (Electronic): 9798400704666
DOIs
Publication status: Published - 28 Jun 2024
Event: 32nd ACM Conference on User Modeling, Adaptation and Personalization, UMAP 2024 - Cagliari, Italy
Duration: 1 Jul 2024 - 4 Jul 2024
Conference number: 32
https://www.um.org/umap2024/

Conference

Conference: 32nd ACM Conference on User Modeling, Adaptation and Personalization, UMAP 2024
Country/Territory: Italy
City: Cagliari
Period: 1/07/24 - 4/07/24
Internet address: https://www.um.org/umap2024/

Keywords

  • conversational agents
  • human-agent interaction
  • lexical alignment
  • performance trust

ASJC Scopus subject areas

  • Software
