Abstract
Purpose
AI and, more specifically, large language models (LLMs) have great scope for use in voice output communication aids (VOCAs), and this is being realised as the technology finds a greater foothold in mainstream systems.
Design/methodology/approach
In this paper, we examine what is known to be important in VOCA design and apply the approach of casuistry to explore the potential ethical implications of using LLMs in VOCAs.
Findings
We suggest that there is relevant similarity between some potential applications of LLMs and the discredited technique of facilitated communication (FC). We highlight risks related to authorship and authenticity of the message produced by LLM-enabled VOCAs and discuss the importance of a holistic view of communication, which is multi-modal and co-constructed by all participants, generating agreed meaning with a shared understanding of where that meaning originates. We also draw attention to the potential impact of LLMs on language and communication development, where they may remove important opportunities for co-construction, correction and non-VOCA interactions that are so vital to development.
Originality/value
Ultimately, we recognise the potential benefits of LLMs in VOCAs but counsel against a technoableist, technology-led implementation of LLMs without due consideration of the communication needs of VOCA users. We counsel against the uncritical inclusion of LLMs within new and existing VOCAs, and encourage a deeper engagement with the ethical risks of doing so, as well as with important concepts such as authorship, humanness and user-centred design.
| Original language | English |
|---|---|
| Pages (from-to) | 102-112 |
| Number of pages | 11 |
| Journal | Journal of Enabling Technologies |
| Volume | 19 |
| Issue number | 2 |
| Early online date | 21 May 2025 |
| DOIs | |
| Publication status | Published - 13 Jun 2025 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 3 Good Health and Well-being
Keywords
- Augmentative and alternative communication (AAC)
- Facilitated communication
- Artificial intelligence (AI)
- Large language models (LLMs)
- Ethics
- Casuistry
- Voice output communication aids (VOCAs)
- Authorship
- Critical disability studies
- Ethical AI
ASJC Scopus subject areas
- Artificial Intelligence
- Speech and Hearing
- Human-Computer Interaction
- Occupational Therapy
- Rehabilitation
- Issues, ethics and legal aspects
- Communication
- Health (social science)
- Computer Science Applications
- Management of Technology and Innovation
Fingerprint
Dive into the research topics of 'AI, Communication Aids and the Challenge of Authentic Authorship - Whose Line Is It Anyway‽'. Together they form a unique fingerprint.
Research output
- 2 Citations
- 1 Article
- Use of artificial intelligence (AI) in augmentative and alternative communication (AAC): community consultation on risks, benefits and the need for a code of practice
Griffiths, T., Slaughter, R. & Waller, A., 8 Nov 2024, In: Journal of Enabling Technologies. 18, 4, p. 232-247, 16 p.
Research output: Contribution to journal › Article › peer-review
Open Access · File · 7 Citations (Scopus) · 575 Downloads (Pure)
Activities
- Electronic Assistive Technology: Now and Next
Broomfield, K. (Invited speaker) & Griffiths, T. (Invited speaker)
21 Nov 2025
Activity: Talk or presentation types › Invited talk
- Project Spinnaker Advisory Group
Griffiths, T. (Member) & Waller, A. (Member)
3 Apr 2024 → …
Activity: Other activity types › Public engagement and outreach - work on advisory panels
- AI in AAC systems: current work and future research priorities
Slaughter, R. (Speaker), Griffiths, T. (Speaker) & Waller, A. (Speaker)
10 Sept 2024
Activity: Talk or presentation types › Oral presentation