
Research On 3G Mobile Voice Control Of A Multimodal Health Information Web Portal

Posted on: 2011-08-15
Degree: Master
Type: Thesis
Country: China
Candidate: Y K Luo
GTID: 2178360305973470
Subject: Biomedical engineering
Abstract/Summary:
Health information systems can now be accessed in a variety of ways, including from mobile devices. However, their use with speech recognition has thus far been limited. The main objective of this thesis is to research and provide a method for using speech recognition together with a 3G mobile phone interface to view and access health information. To approach this problem differently, a distributed multimodal system is proposed, in which the components are physically separated but work together in synchronisation. To achieve this distribution, synchronisation and interoperability, the thesis concentrates on adhering to and implementing international telecommunication, web and multimodal architecture standards, the relevant details of which are provided later.

The multimodal system consists of two modality components, i.e. modes of interaction. A mobile web browser was chosen to act as the 3G mobile interface and form the graphical modality. The voice modality consists of a speech framework that performs speech recognition on a remote server rather than on the phone. A simulated, transformed web-based health portal was chosen as the interface to the health information data. An interaction mechanism was implemented to synchronise the viewing of these web portals via the graphical modality with speech from the voice modality. The framework also had to consider other technologies to transform the web portal data and to update the graphical modality.

The final integrated and implemented prototype system is then presented. The results show the voice and graphical modalities being synchronised after a web-initiated session, and demonstrate that distributed components must conform to standards in order to interoperate with the system. This standards-based multimodal system can now be used for continuing speech recognition and health web portal research. Suggestions are provided on how the system could be fully integrated with a standardised health information portal, and on how physician workflow could then be evaluated in future research.

By providing a distributed, standardised multimodal interaction system, components with standardised interfaces and communication achieved mutual interoperability. Components could also be developed independently and separately, without knowledge of each other's internal details, as long as the standardised interfaces were conformed to. Distributing the components allowed more powerful components to carry out data processing on behalf of components with fewer resources. In future projects, with a standardised distributed multimodal interaction system, not just graphical and speech but other sensor modalities from other research teams may cooperate to allow seamless interaction with health information systems.
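The abstract names SCXML and the W3C multimodal interaction (MMI) architecture among its keywords but does not reproduce the interaction manager's state chart. The sketch below is a minimal, hypothetical SCXML interaction manager illustrating the synchronisation pattern described above: the graphical modality (the mobile browser) opens a session, the interaction manager starts recognition on the remote voice modality, and each recognition result is relayed back to update the browser view. All state names, the simplified MMI life-cycle event names and the send targets are assumptions made for illustration, not the thesis's actual implementation.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical interaction manager sketch; event names are simplified
     forms of the MMI life-cycle events, and the send targets are
     placeholders for the two modality components. -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <state id="idle">
    <!-- Session opened by the mobile web browser (graphical modality) -->
    <transition event="mmi.newContextRequest" target="session"/>
  </state>
  <state id="session" initial="startVoice">
    <state id="startVoice">
      <onentry>
        <!-- Ask the remote speech server (voice modality) to start listening -->
        <send event="mmi.startRequest" target="voiceModality"/>
      </onentry>
      <transition event="mmi.startResponse" target="listening"/>
    </state>
    <state id="listening">
      <!-- A recognition result arrives from the voice modality -->
      <transition event="mmi.doneNotification" target="updateGraphical"/>
    </state>
    <state id="updateGraphical">
      <onentry>
        <!-- Push the result to the browser so the portal view stays in sync -->
        <send event="mmi.extensionNotification" target="graphicalModality"/>
      </onentry>
      <!-- Return to listening for the next spoken command -->
      <transition target="startVoice"/>
    </state>
  </state>
</scxml>

In the MMI architecture, such send targets would resolve to each modality component's event transport (for example, an HTTP endpoint), which is what allows the components to be developed independently behind standardised interfaces, as the abstract describes.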
Keywords/Search Tags: speech recognition, health information system, web portals, 3G, MMI, multimodal system, interaction manager, IVR, DSR, protocols, SCXML, VoiceXML