TY - JOUR
T1 - AI, Communication Aids and the Challenge of Authentic Authorship - Whose Line Is It Anyway‽
AU - Griffiths, Tom
AU - Broomfield, Katherine
AU - Hrastelj, Laura
AU - Judge, Simon
AU - Toogood, Jonathan
N1 - Copyright © 2025, Emerald Publishing Limited
PY - 2025/6/13
Y1 - 2025/6/13
N2 - Purpose: AI and, more specifically, large language models (LLMs) have great scope for use in voice output communication aids (VOCAs), and this is being realised as the technology finds a greater foothold in mainstream systems. Design/methodology/approach: In this paper, we examine what we know is important in VOCA design and use the approach of casuistry to examine the potential ethical implications of the use of LLMs in VOCAs. Findings: We suggest that there is relevant similarity between some potential applications of LLMs and the discredited technique of facilitated communication (FC). We highlight risks related to the authorship and authenticity of messages produced by LLM-enabled VOCAs and discuss the importance of a holistic view of communication, which is multi-modal and co-constructed by all participants, generating agreed meaning with a shared understanding of where that meaning originates. We also draw attention to the potential impact of LLMs on language and communication development, where they may remove important opportunities for co-construction, correction and non-VOCA interactions that are so vital to development. Originality/value: Ultimately, we recognise the potential benefits of LLMs in VOCAs but counsel against a technoableist, technology-led implementation without due consideration of the communication needs of VOCA users. We caution against the uncritical inclusion of LLMs within new and existing VOCAs and encourage deeper engagement with the ethical risks of doing so, as well as with important concepts such as authorship, humanness and user-centred design.
KW - Augmentative and alternative communication (AAC)
KW - Facilitated communication
KW - Artificial intelligence (AI)
KW - Large language models (LLMs)
KW - Ethics
KW - Casuistry
KW - Voice output communication aids (VOCAs)
KW - Authorship
KW - Critical disability studies
KW - Ethical AI
UR - http://www.scopus.com/inward/record.url?scp=105005589886&partnerID=8YFLogxK
U2 - 10.1108/JET-01-2025-0005
DO - 10.1108/JET-01-2025-0005
M3 - Article
SN - 2398-6263
VL - 19
JO - Journal of Enabling Technologies
JF - Journal of Enabling Technologies
IS - 2
ER -