Model-Driven Development of Speech-Enabled Applications

Werner Kurschl, Stefan Mitsch, Rene Prokop, Johannes Schönböck

Research output: Chapter in Book/Report/Conference proceedings › Conference contribution


Motivation: Interacting with a computer by speech, the way humans interact with each other, is a dream software engineers have been working to realize since the 1960s. On devices that lack adequate input capabilities, such as personal digital assistants (PDAs) or mobile phones, speech would be a far more convenient form of interaction. However, two major factors stand in the way: the processing power of these devices is not yet sufficient, and speech recognition engines offer little support to application developers. Results: We propose a highly configurable software architecture that divides speech recognition into processing steps that can be distributed across several devices, thereby enabling speech recognition even on resource-limited devices. Additionally, we use Model-Driven Development to unify graphical and voice user interface development: graphical and voice user interfaces need not be developed separately, but are generated from a single model. To lower the entry barrier for application developers, the user interface framework is based on the same paradigms and components (button, text field, etc.) as graphical user interface frameworks.
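The single-model idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' actual framework or API; the component names (`Button`, `TextField`) follow the GUI paradigms the abstract mentions, and the two renderers are hypothetical stand-ins for a graphical back end and a VoiceXML-style voice back end, both generated from the same model.

```python
# Hypothetical sketch: one abstract UI model, rendered to two modalities.
# Component and renderer names are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class Button:
    label: str

@dataclass
class TextField:
    prompt: str

def render_gui(component):
    """Render a model component as a (stub) graphical widget description."""
    if isinstance(component, Button):
        return f"[Button: {component.label}]"
    if isinstance(component, TextField):
        return f"[TextField: {component.prompt}]"
    raise TypeError(type(component))

def render_voice(component):
    """Render the same component as a simplified VoiceXML-style element."""
    if isinstance(component, Button):
        return f"<prompt>Say '{component.label}' to continue</prompt>"
    if isinstance(component, TextField):
        return f"<field><prompt>{component.prompt}</prompt></field>"
    raise TypeError(type(component))

# A single model yields both user interfaces.
model = [TextField("Which city?"), Button("Search")]
gui = [render_gui(c) for c in model]
voice = [render_voice(c) for c in model]
```

Because both renderers consume the same model, the graphical and voice interfaces stay consistent by construction, which is the point of generating them from a single source.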
Original language: English
Title of host publication: Proceedings FH Science Day 2006
Publisher: Shaker Verlag
ISBN (Print): 3-8322-555-9
Publication status: Published - 2006
Event: FH Science Day 2006 - Hagenberg, Austria
Duration: 25 Oct 2006 – 25 Oct 2006

