Stereoscopic display based on virtual reality (VR) allows surgeons to observe three-dimensional anatomical structures with depth cues and to understand intuitively the spatial relationships among anatomical structures and between anatomy and surgical instruments. It also helps patients understand the state of their own bodies, surgeons design surgical plans, and engineers augment surgical navigation. In the operating room, however, surgeons cannot control the virtual anatomical models directly with a mouse or keyboard. To meet this requirement, this paper proposes a non-contact interaction method based on gesture recognition and speech recognition. Combining this non-contact interaction with VR allows surgeons to obtain the information of interest about a patient's anatomical structures conveniently.

This paper mainly studies non-contact interactive stereoscopic display methods for the human anatomical model, including: VR-based stereoscopic display using Oculus, speech interaction based on speech recognition, and gesture interaction based on gesture recognition.

In the VR-based stereoscopic display part, the principle of stereoscopic display based on binocular parallax is introduced, and the advantages and disadvantages of two binocular parallax models are analyzed. Using a binocular projection model and the API provided by the Oculus SDK, the anatomical model was rendered to the Oculus headset and displayed. The Blinn-Phong illumination model was implemented in shader programs to distinguish different anatomical models and to bring out the details of a single anatomical structure. Because the human anatomical model may lie outside the camera's viewing frustum, an initial position and a scale factor were calculated to guarantee that all anatomical models are accommodated within the frustum.

In the speech interaction part, speech commands for the display, rotation, scaling, and displacement modes and for the reset operation were defined according to the system requirements. The syntax of the speech commands was designed following the offline speech-recognition grammar specification, and the required command-word recognition network was implemented. With this network and the speech recognition SDK, switching among the different system modes can be achieved by speech command.

In the gesture interaction part, RGB-D images of the hand captured by the RealSense R300 were used to recognize gestures. A Kalman filter was used to smooth the hand motion, and its parameters were tuned to cope with the instability of the RealSense recognition results. Model rotation was implemented with a virtual trackball and quaternions, taking the recognized gesture and hand motion as input.

Finally, a non-contact interactive stereoscopic display system was built to display the virtual human anatomical model. The system uses VR technology to present an immersive virtual anatomical model to surgeons, helping them efficiently obtain the information they are interested in from a vast amount of medical information. By adopting speech and gesture interaction, the system allows surgeons to control the display mode of the virtual anatomical structures in a non-contact manner without sacrificing speed or efficiency. The feasibility and practicability of the system were verified by raters through hands-on use and questionnaires.
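The frustum-fitting step mentioned for the stereoscopic display (computing a scale factor so that the anatomical model fits inside the viewing frustum) can be sketched as follows. This is a minimal illustration, not the system's actual code: it assumes the model is bounded by a sphere of known radius viewed by a symmetric perspective camera, and the function name and parameters are hypothetical.

```python
import math

def fit_scale(bound_radius, fov_y_deg, aspect, cam_dist):
    """Scale factor so a bounding sphere of `bound_radius` fits the frustum.

    The limiting half-angle is the smaller of the vertical FOV half-angle
    and the horizontal one derived from the aspect ratio.
    """
    half_v = math.radians(fov_y_deg) / 2.0
    half_h = math.atan(math.tan(half_v) * aspect)
    half = min(half_v, half_h)
    # Largest sphere radius fully visible at distance cam_dist from the eye.
    max_radius = cam_dist * math.sin(half)
    return max_radius / bound_radius
```

For example, a model with bounding radius 10 viewed at distance 2 under a 60-degree square frustum would be scaled down by `fit_scale(10.0, 60.0, 1.0, 2.0)`; a model already exactly filling the frustum gets a factor of 1.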
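The Kalman smoothing of the hand motion can be illustrated with a minimal constant-velocity filter for a single coordinate; the noise parameters `q` and `r` play the role of the tuning described above (larger `r` trusts the noisy RealSense measurement less, smoothing more at the cost of lag). All names here are illustrative, not taken from the system.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of the hand position."""

    def __init__(self, q=1e-3, r=5e-2):
        self.x = [0.0, 0.0]                 # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q, self.r = q, r               # process / measurement noise

    def update(self, z, dt=1.0 / 30.0):
        # Predict with the constant-velocity model x' = F x, P' = F P F^T + Q.
        x0 = self.x[0] + dt * self.x[1]
        x1 = self.x[1]
        P = self.P
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        # Correct with the measured position z (measurement matrix H = [1, 0]).
        k0 = p00 / (p00 + self.r)
        k1 = p10 / (p00 + self.r)
        self.x = [x0 + k0 * (z - x0), x1 + k1 * (z - x0)]
        self.P = [[(1.0 - k0) * p00, (1.0 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x[0]
```

In practice one such filter would run per coordinate of the tracked hand point, with the raw per-frame positions fed to `update` and the smoothed output driving the interaction.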
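The trackball-and-quaternion rotation can be sketched with the classic virtual-trackball mapping from two screen points to a rotation quaternion; this is a common formulation assumed here for illustration, not necessarily the exact variant used in the system.

```python
import math

def trackball_quat(p1, p2, radius=1.0):
    """Quaternion (w, x, y, z) rotating a virtual trackball from p1 to p2.

    p1 and p2 are (x, y) screen points in normalized coordinates. Each is
    projected onto a sphere near the center (and onto a hyperbolic sheet
    outside it, to avoid a hard edge); the rotation axis is the cross
    product of the two projected vectors and the angle follows from the
    chord length between them.
    """
    def project(x, y):
        d = math.hypot(x, y)
        if d < radius / math.sqrt(2.0):
            z = math.sqrt(radius * radius - d * d)   # on the sphere
        else:
            z = radius * radius / (2.0 * d)          # on the hyperbola
        return (x, y, z)

    v1, v2 = project(*p1), project(*p2)
    axis = (v1[1] * v2[2] - v1[2] * v2[1],
            v1[2] * v2[0] - v1[0] * v2[2],
            v1[0] * v2[1] - v1[1] * v2[0])
    chord = math.dist(v1, v2)
    angle = 2.0 * math.asin(min(1.0, chord / (2.0 * radius)))
    n = math.sqrt(sum(a * a for a in axis)) or 1.0   # avoid dividing by zero
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0),
            axis[0] / n * s, axis[1] / n * s, axis[2] / n * s)
```

Successive per-frame quaternions from the hand's on-screen motion are composed by quaternion multiplication and applied to the model, which avoids the gimbal-lock problems of Euler-angle rotation.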