
Research And Design Of Multimodal Grammar And Gesture Set In VR Modeling Scene

Posted on: 2022-07-01
Degree: Master
Type: Thesis
Country: China
Candidate: Y T Cheng
Full Text: PDF
GTID: 2518306332967849
Subject: Art
Abstract/Summary:
As a key stage of product design, automobile design, and architectural design, conceptual design has a far-reaching impact on the transformation and upgrading of the manufacturing industry. Although VR-based 3D modeling is developing steadily, research on multimodal interaction in modeling scenes is still not deep enough. This thesis introduces gesture, speech, and eye movement into the VR environment and studies multimodal modeling across different forms and structures. The main contents and achievements are as follows:

(1) Multimodal interaction analysis and evaluation: for multimodal interaction with high structural flexibility and complexity, an innovative two-stage elicitation method is adopted to collect users' intuitive behavior data in modeling scenarios. The multimodal interaction is then decomposed into four layers: grammatical structure, modal form, cooperative relationship, and temporal relationship, and a highly generalizable and practical task-modality mapping diagram is produced.

(2) Gesture interaction analysis and evaluation: for gestures with a high degree of formal freedom and rich semantic connotation, a user elicitation method is used to collect and filter a gesture set that accords with users' cognition. From the semantic tree structure, gesture relationships, gesture classification, and key representations, the features of task interaction and the mapping between gestures and tasks are summarized.

(3) Natural interface model based on the heuristic link: an abstract model of the heuristic link is proposed, and a natural interface model framework is put forward based on the experimental conclusions, guiding natural interface design from six key elements. On this basis, a modeling platform is designed and developed, and user suggestions and feedback are collected to verify the application value of the research results.

This thesis explores natural and applicable interaction from the perspectives of multimodality and gesture. At the theoretical level, it accumulates systematic interaction analysis methods; at the application level, it distills highly instructive interaction design recommendations and reference diagrams. In addition, the modeling platform is designed and developed according to the research results, completing the research-application-verification loop.
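To make the four-layer decomposition in contribution (1) and the elicitation scoring in contribution (2) more concrete, the sketch below shows one possible way to represent an elicited multimodal command along those layers and to compute an agreement score over users' proposals. The class names, fields, and the specific agreement formula (the commonly used A = Σ(|Pi|/|P|)²) are illustrative assumptions, not the data model or metric actually used in the thesis.

```python
# Illustrative sketch only: a possible representation of the four analysis layers
# named in the abstract (grammatical structure, modal form, cooperative
# relationship, temporal relationship), plus a standard agreement score used in
# gesture/multimodal elicitation studies. All names and fields are assumptions.
from collections import Counter
from dataclasses import dataclass
from enum import Enum


class Modality(Enum):
    GESTURE = "gesture"
    SPEECH = "speech"
    GAZE = "gaze"


class Cooperation(Enum):              # cooperative relationship between modalities
    COMPLEMENTARY = "complementary"   # each modality carries part of the meaning
    REDUNDANT = "redundant"           # modalities repeat the same meaning


class Temporal(Enum):                 # temporal relationship between modal inputs
    SEQUENTIAL = "sequential"
    PARALLEL = "parallel"


@dataclass
class ModalUnit:
    modality: Modality
    role: str                 # grammatical role, e.g. "action", "object", "parameter"
    description: str          # e.g. "pinch", "say 'scale'", "fixate on target face"


@dataclass
class MultimodalCommand:
    task: str                 # modeling task, e.g. "scale object"
    units: list[ModalUnit]    # grammatical structure and modal form
    cooperation: Cooperation  # cooperative relationship layer
    temporal: Temporal        # temporal relationship layer


def agreement_score(proposals: list[str]) -> float:
    """Agreement score for one task's elicited proposals:
    A = sum over groups of identical proposals of (|Pi| / |P|)^2."""
    total = len(proposals)
    if total == 0:
        return 0.0
    return sum((n / total) ** 2 for n in Counter(proposals).values())


# Example: one hypothetical elicited command for scaling, plus an agreement check.
scale_cmd = MultimodalCommand(
    task="scale object",
    units=[
        ModalUnit(Modality.GAZE, "object", "fixate on the target solid"),
        ModalUnit(Modality.SPEECH, "action", "say 'scale'"),
        ModalUnit(Modality.GESTURE, "parameter", "spread both hands apart"),
    ],
    cooperation=Cooperation.COMPLEMENTARY,
    temporal=Temporal.PARALLEL,
)
print(agreement_score(["pinch-spread", "pinch-spread", "grab-pull", "pinch-spread"]))
```

In a representation like this, the task-modality mapping diagram described in the abstract could be read as a lookup from each modeling task to its elicited MultimodalCommand variants, with the agreement score used to select the consensus variant.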
Keywords/Search Tags:3D modeling, multimodality, in-air gestures, elicitation method, multimodal grammar