
Software Design Of Interactive Display System Based On Augmented Reality

Posted on: 2021-02-01  Degree: Master  Type: Thesis
Country: China  Candidate: Q L He  Full Text: PDF
GTID: 2428330623967364  Subject: Control engineering
Abstract/Summary:
Augmented reality, which superimposes virtual scenes onto real scenes on a display screen to deliver a sensory experience beyond reality, has been applied to data-model visualization, virtual training, entertainment, and artistic creation. Because the modes of human-computer interaction in existing augmented reality application systems are limited, this dissertation, aiming at applications in public places such as museums, galleries, and shopping scenes, uses the Unity3D rendering engine together with a somatosensory camera to design and implement interactive display software based on augmented reality. The software provides face detection, speech recognition, gesture recognition, and virtual-object generation and superposition. It can superimpose different virtual models on the face, the palm, and the background environment, can be applied in a variety of augmented reality application systems, and has good practical value. The main work of the dissertation is as follows:

(1) Overall software design. Based on an analysis of the functional and performance requirements of the augmented reality interactive display system, the Unity3D game engine was selected as the software development platform and the Kinect somatosensory camera as the hardware platform. The software is divided into three modules: human-computer interaction, three-dimensional registration, and virtual-real fusion. A development plan was formulated, and the software and hardware development environment of the system was built.

(2) Human-computer interaction module design. This module comprises two parts, gesture recognition and speech recognition. The gesture recognition part uses the Kinect camera's skeleton tracking to obtain the coordinates of the user's hand, elbow, and shoulder joints, determines from the spatial relationship of these three points whether a wave gesture has occurred, and converts the wave gesture into an interaction instruction. The speech recognition part uses the iFlytek online speech recognition cloud platform: collected speech is sent to the iFlytek cloud server through the speech recognition interface, and once the server completes recognition, the result is downloaded and parsed to obtain the user's voice interaction instruction.

(3) 3D registration module design. This module performs the spatial coordinate transformation and camera pose estimation needed to locate virtual objects precisely in real space, adopting a visual tracking registration method. The camera is calibrated with Zhengyou Zhang's calibration method, the Dlib face detection library is used to detect facial feature points in the color images, and camera pose estimation based on a standard 3D face model completes the design of the module.

(4) Virtual-real fusion module design. This module generates the virtual-scene data and fuses the virtual and real scenes through the Unity3D rendering mechanism. It covers three kinds of augmented reality models (hand, face, and environment), produced with 3D modeling software and image editing software; the Unity3D game engine is used to implement the virtual scenes of the three AR modes, completing the interactive augmented reality display software.
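The wave-gesture logic in item (2) can be illustrated with a small sketch. The joint layout, thresholds, and crossing count below are illustrative assumptions for a pinhole of the idea, not the thesis's exact values or the Kinect SDK's API: a frame is a wave candidate when the hand is raised near shoulder height, and a wave is declared when the hand crosses from one side of the elbow to the other often enough.

```python
def is_wave_frame(hand, elbow, shoulder):
    """A frame is a wave candidate when the hand (x, y, z; y up) is above
    the elbow and the elbow is roughly at shoulder height or higher.
    The 0.05 m tolerance is an assumed value."""
    return hand[1] > elbow[1] and elbow[1] >= shoulder[1] - 0.05


def detect_wave(frames, min_crossings=3):
    """Count left/right crossings of the hand relative to the elbow over a
    sequence of (hand, elbow, shoulder) joint triples; enough crossings
    within candidate frames is treated as a wave gesture."""
    crossings, prev_side = 0, None
    for hand, elbow, shoulder in frames:
        if not is_wave_frame(hand, elbow, shoulder):
            prev_side = None  # arm dropped: reset the crossing streak
            continue
        side = 'left' if hand[0] < elbow[0] else 'right'
        if prev_side is not None and side != prev_side:
            crossings += 1
        prev_side = side
    return crossings >= min_crossings
```

In the actual system, each triple would come from one Kinect skeleton frame, and a detected wave would be converted into an interaction instruction for the display software.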
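The registration step in item (3) ultimately maps a virtual object's 3D coordinates to screen coordinates through the calibrated intrinsics and the estimated pose. A minimal pinhole-projection sketch of that transformation follows; the intrinsic values in the usage example are illustrative, not the thesis's calibration results.

```python
def project(point, R, t, fx, fy, cx, cy):
    """Project a 3D world point to pixel coordinates.

    R (3x3 rotation, nested lists) and t (translation) are the estimated
    camera pose; fx, fy, cx, cy are the intrinsics obtained from a
    Zhang-style calibration."""
    # World -> camera coordinates: Pc = R @ Pw + t
    X, Y, Z = (sum(R[i][j] * point[j] for j in range(3)) + t[i]
               for i in range(3))
    # Pinhole projection onto the image plane
    return (fx * X / Z + cx, fy * Y / Z + cy)
```

For example, with identity pose and assumed intrinsics fx = fy = 500, cx = 320, cy = 240, the point (0, 0, 2) projects to the principal point (320, 240), which is how the fusion module would know where to draw a virtual model over the real scene.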
Keywords/Search Tags:Augmented reality, Human-computer interaction, Unity3D, somatosensory camera, Voice recognition