Abstract
Touch-screens are becoming increasingly ubiquitous. Their appeal lies in the new forms of interaction they support, including rich gestural input, flexible user interfaces and multi-user interaction. However, the technology also creates new challenges and barriers for users with low levels of vision and motor ability. The PhD work described in this paper proposes a technique that combines Shared User Models (SUM) with adaptive interfaces to improve the accessibility of touch-screen devices for people with low levels of vision and motor ability. A SUM, built from an individual's interaction data across multiple applications and devices, is used to infer knowledge of that individual's abilities and characteristics without continuous calibration exercises or manual configuration. The approach has been realized through an open source software framework that supports the creation of applications which use the SUM to adapt their interfaces to the needs of individual users.
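The framework itself is not reproduced in this record, but the core idea described above can be illustrated with a minimal, hypothetical sketch: interaction evidence logged by several applications and devices is pooled in one shared user model, which applications then query to adapt their interface parameters. All class names, method names and the adaptation rule below are illustrative assumptions, not the published framework's API.

```python
# Minimal, hypothetical sketch of the Shared User Model (SUM) idea:
# interaction data gathered across applications/devices feeds one model,
# which other applications query to adapt their interfaces.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class TouchSample:
    """One observed touch, logged by any application on any device."""
    source_app: str
    target_size_mm: float   # size of the widget the user aimed at
    error_mm: float         # distance between touch point and target centre


@dataclass
class SharedUserModel:
    """Pools interaction evidence contributed by multiple applications."""
    samples: list[TouchSample] = field(default_factory=list)

    def record(self, sample: TouchSample) -> None:
        self.samples.append(sample)

    def estimated_touch_error_mm(self) -> float:
        """Infer the user's typical touch error; no explicit calibration step."""
        if not self.samples:
            return 1.0  # neutral default before any evidence exists
        return mean(s.error_mm for s in self.samples)


def adapt_button_size(model: SharedUserModel, base_size_mm: float = 7.0) -> float:
    """Illustrative adaptation rule: enlarge targets for users whose observed
    touch error is larger (a real framework would expose richer adaptations)."""
    return base_size_mm + 2.0 * model.estimated_touch_error_mm()


if __name__ == "__main__":
    model = SharedUserModel()
    # Evidence contributed implicitly by two different applications.
    model.record(TouchSample("mail_client", target_size_mm=7.0, error_mm=3.2))
    model.record(TouchSample("web_browser", target_size_mm=9.0, error_mm=2.6))
    print(f"Adapted button size: {adapt_button_size(model):.1f} mm")
```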
Original language | English |
---|---|
Title of host publication | Adjunct Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology |
Subtitle of host publication | UIST'12 |
Place of publication | New York |
Publisher | Association for Computing Machinery |
Pages | 39-42 |
Number of pages | 4 |
ISBN (Print) | 9781450315821 |
Publication status | Published - 2012 |
Event | 25th Annual ACM Symposium on User Interface Software and Technology, Cambridge Marriott Hotel, Cambridge, Mass., United States, 7 Oct 2012 → 10 Oct 2012, http://www.acm.org/uist/uist2012/ |
Conference
Conference | 25th Annual ACM Symposium on User Interface Software and Technology |
---|---|
Abbreviated title | UIST '12 |
Country/Territory | United States |
City | Cambridge, Mass. |
Period | 7/10/12 → 10/10/12 |
Internet address | http://www.acm.org/uist/uist2012/ |