Abstract
Limited linguistic coverage for Intelligent Personal Assistants (IPAs) means that many users interact with them in a non-native language. Yet we know little about how IPAs currently support or hinder these users. Through a study in which native (L1) and non-native (L2) English speakers interacted with Google Assistant on a smartphone and a smart speaker, we aim to understand this more deeply. Interviews revealed that L2 speakers prioritised planning their utterances around perceived linguistic limitations, whereas L1 speakers prioritised succinctness because of system limitations. L2 speakers saw IPAs as insensitive to their linguistic needs, resulting in failed interactions. L2 speakers clearly preferred using smartphones, as visual feedback supported the diagnosis of communication breakdowns whilst allowing time to process query results. Conversely, L1 speakers preferred smart speakers, with audio feedback seen as sufficient. We discuss the need to tailor the IPA experience for L2 users, emphasising visual feedback whilst reducing the burden of language production.
| Original language | English |
| --- | --- |
| Title of host publication | Mobile HCI 2020 |
| Publisher | Association for Computing Machinery (ACM) |
| ISBN (Electronic) | 9781450375160 |
| Publication status | Published - Oct 2020 |
| Event | 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services (5 Oct 2020 → 9 Oct 2020) |
Conference
| Conference | 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services |
| --- | --- |
| Period | 5 Oct 2020 → 9 Oct 2020 |
Keywords
- Intelligent personal assistants
- Non-native speakers
- Speech interface
- Voice user interface
ASJC Scopus subject areas
- Computer Networks and Communications
- Information Systems
- Human-Computer Interaction
- Software