Speech recognition has matured to the point that companies can seriously consider its use. We developed a distributed framework that enables multimodal user interfaces with speech recognition (dictation and command/control) on any type of mobile device. But do users already accept speech as an additional input modality, and if so, which patterns help to cope with usability challenges in multimodal applications? We found that existing guidelines for designing multimodal interfaces concentrate on high-level issues but fail to provide applicable patterns and implementation guidelines. Usability patterns are a proven way to pass on expert knowledge about solving specific problems, enabling developers to meet such challenges efficiently. This paper discusses patterns that help developers build multimodal interfaces for mobile devices that support speech as an additional input modality.
Number of pages: 6
Publication status: Published - 2008