Abstract
Motivation: Interacting with a computer by speech, the way humans interact with each other, is a dream that software engineers have been working to realize since the 1960s. On devices that lack adequate input capabilities, such as personal digital assistants (PDAs) or mobile phones, speech would be the more convenient form of interaction. Two major factors, however, stand in the way: the processing power of these devices is not yet sufficient, and speech recognition engines lack support for application developers.
Results: We propose a highly configurable software architecture that divides speech recognition into processing steps which can be distributed among several devices, thus enabling speech recognition even on resource-limited devices. Additionally, we use Model Driven Development to unify graphical and voice user interface development: graphical and voice user interfaces need not be developed separately, but are generated from a single model. To lower the entry barrier for application developers, the user interface framework is based on the same paradigms and components (button, text field, etc.) as graphical user interface frameworks.
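The minimal Java sketch below illustrates the single-model idea described above: abstract components such as a button or a text field are defined once and then rendered either graphically or as voice prompts. All class and method names (Component, Button, TextField, Renderer, GraphicalRenderer, VoiceRenderer) are illustrative assumptions, not the paper's actual framework API.

```java
import java.util.ArrayList;
import java.util.List;

// A renderer turns abstract components into a concrete interface (hypothetical API).
interface Renderer {
    void render(Component c);
}

// Abstract UI component shared by both modalities.
abstract class Component {
    final String label;
    Component(String label) { this.label = label; }
    abstract void accept(Renderer r);
}

class Button extends Component {
    Button(String label) { super(label); }
    void accept(Renderer r) { r.render(this); }
}

class TextField extends Component {
    TextField(String label) { super(label); }
    void accept(Renderer r) { r.render(this); }
}

// Graphical rendering: here simply printed as widget descriptions.
class GraphicalRenderer implements Renderer {
    public void render(Component c) {
        System.out.println("Draw " + c.getClass().getSimpleName() + " '" + c.label + "'");
    }
}

// Voice rendering: the same model becomes spoken prompts.
class VoiceRenderer implements Renderer {
    public void render(Component c) {
        if (c instanceof TextField) {
            System.out.println("Prompt: 'Please say your " + c.label + "'");
        } else {
            System.out.println("Prompt: 'Say \"" + c.label + "\" to activate'");
        }
    }
}

public class UnifiedUiDemo {
    public static void main(String[] args) {
        // One model of the form...
        List<Component> form = new ArrayList<>();
        form.add(new TextField("destination"));
        form.add(new Button("search"));

        // ...rendered to two modalities.
        Renderer gui = new GraphicalRenderer();
        Renderer vui = new VoiceRenderer();
        for (Component c : form) { c.accept(gui); }  // graphical user interface
        for (Component c : form) { c.accept(vui); }  // voice user interface
    }
}
```

Keeping the component model free of modality-specific code, with renderers supplied separately, is one plausible way to realize the "generated from a single model" claim; the actual generation step in the paper may instead be performed at build time by a model-driven toolchain.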
Original language | English |
---|---|
Title of host publication | Proceedings FH Science Day 2006 |
Publisher | Shaker Verlag |
Pages | 216-223 |
ISBN (Print) | 3-8322-555-9 |
Publication status | Published - 2006 |
Event | FH Science Day 2006, Hagenberg, Austria, 25 Oct 2006 → 25 Oct 2006 |
Conference
Conference | FH Science Day 2006 |
---|---|
Country/Territory | Austria |
City | Hagenberg |
Period | 25.10.2006 → 25.10.2006 |