
Design And Research Of Intelligent Device For Museum Based On Multi-modal Human-computer Interaction

Posted on: 2017-11-11
Degree: Master
Type: Thesis
Country: China
Candidate: R Ma
Full Text: PDF
GTID: 2348330542469491
Subject: Industrial design engineering
Abstract/Summary:
With the rapid development of artificial intelligence and the deepening fusion of modern products and information technology, a wide variety of intelligent hardware is emerging rapidly. Meanwhile, new technology keeps pushing the development of the human-computer interaction interface. The history of the human-computer interaction interface has passed through three stages: the command line interface (CLI), the graphical user interface (GUI), and the natural user interface (NUI), with the NUI now in general use. This trend shows that people want to communicate with machines more naturally. In recent years, gestures, voice, eye movements, and other modalities have been used in human-computer interaction, greatly increasing its naturalness.

This paper investigates the feasibility of multi-modal interaction involving head, gesture, and voice channels in a museum setting, and establishes a multi-modal interaction prototype based on BaiduEye, an intelligent hardware device made by Baidu IDL. Three experimental stages have been completed. Stage 1 is a preliminary investigation: by studying existing explanation devices, users' visiting behavior, and users' creative visiting behavior in breadth and depth, guidance and reference are provided for designing the multi-modal human-computer interaction scheme. Stage 2 is product design: according to the user behavior library, the interaction scheme is designed for the head, gesture, and voice channels using the storyboard method; an experiential prototype is built, channel integration is studied, and the multi-modal human-computer interaction scheme is finally produced. Stage 3 is realization and user testing: the preliminary multi-modal interaction scheme is implemented, a technology prototype is designed, and the product is iteratively optimized.

According to the tests, users are satisfied with the multi-modal interaction based on head, gesture, and voice. Its advantages are that it is convenient, interesting, and advanced, but it also has disadvantages; for example, voice interaction is affected by the surrounding environment. With the further development of human-computer interaction, a multi-modal interaction system including voice interaction should become more accurate and mature. In conclusion, the author adopts a prototype design method based on storyboards, produces a series of multi-modal human-computer interaction schemes covering the head, gesture, and voice channels, and completes usability testing in the laboratory and the museum.
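The channel integration described in Stage 2 can be pictured as a small fusion layer that merges recognizer outputs from the head, gesture, and voice channels within a short time window. The following is a minimal, hypothetical sketch in Python; the thesis does not publish code, BaiduEye's actual SDK is not used, and all names (ChannelEvent, MultiModalFuser, the fusion rule) are illustrative assumptions rather than the author's implementation.

```python
from dataclasses import dataclass
from typing import Optional
import time

@dataclass
class ChannelEvent:
    channel: str      # "head", "gesture", or "voice"
    command: str      # e.g. "dwell", "confirm", "next_exhibit"
    timestamp: float
    confidence: float

class MultiModalFuser:
    """Combine events from the three channels inside a short fusion window."""

    def __init__(self, window_s: float = 1.0):
        self.window_s = window_s
        self.pending: list[ChannelEvent] = []

    def push(self, event: ChannelEvent) -> Optional[str]:
        now = time.time()
        # Keep only events that still fall inside the fusion window.
        self.pending = [e for e in self.pending if now - e.timestamp <= self.window_s]
        self.pending.append(event)
        return self._resolve()

    def _resolve(self) -> Optional[str]:
        # Example rule: a head "dwell" on an exhibit plus a voice or gesture
        # confirmation triggers playback of the exhibit explanation.
        channels = {e.channel for e in self.pending}
        if "head" in channels and ({"voice", "gesture"} & channels):
            self.pending.clear()
            return "play_explanation"
        return None

# Usage: feed recognizer outputs into the fuser as they arrive.
fuser = MultiModalFuser(window_s=1.0)
fuser.push(ChannelEvent("head", "dwell", time.time(), 0.9))
action = fuser.push(ChannelEvent("voice", "confirm", time.time(), 0.8))
print(action)  # -> "play_explanation"
```

A rule-based window like this is only one possible integration strategy; the thesis evaluates its scheme through user testing rather than specifying a particular fusion algorithm.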
Keywords/Search Tags: Intelligent Hardware, Multi-modal Human-computer Interaction, Natural User Interface, User Behavior