
Modeling Motion-to-Photon Latency Perception In Virtual Reality

Posted on: 2021-10-27    Degree: Master    Type: Thesis
Country: China    Candidate: M X Yang    Full Text: PDF
GTID: 2518306131476584    Subject: Information and Communication Engineering
Abstract/Summary:
In virtual reality (VR), high motion-to-photon (MTP) latency prevents the viewer's actions from receiving timely visual feedback, causing uncomfortable symptoms such as dizziness and nausea and degrading the quality of experience. Current rendering and communication technologies can reduce latency, but only at the cost of reduced visual quality or more expensive software and hardware. With limited computing and communication resources, the large user population brought by the growing popularity of VR will further exacerbate the latency problem. To address it, a standard for imperceptible latency should be determined by measuring and modeling MTP latency perception, thereby helping designers exploit the perceptual tolerance for latency to balance the practicality and cost of low-latency systems.

This work focuses on the perception of MTP latency when the viewer actively turns the head, and establishes models of the MTP latency perception threshold. The three contributions are as follows:

(1) The first MTP latency perception database collected while viewers freely watched 360-degree video. In this database, the motion of the region of interest guides observers to move their heads naturally and actively, consistent with how users watch videos in practical applications. This remedies the unnatural head movements found in existing databases and provides real, reliable data for evaluating and analyzing models of the MTP latency perception threshold.

(2) An MTP latency perception threshold model based on single motion events. Exploiting characteristics of the human eye, visual cues are used to guide observers to turn their heads actively and purposefully, which ensures spatiotemporal consistency of movement across observers and lays a foundation for quantitative analysis of the relationship between head movement and latency perception. Complex head movement is decomposed into multiple simple movements, and a latency perception model is established for each single movement. The model uses the direction changes and angular velocity of the head movement to predict the perceptual threshold of MTP latency. On the database, the model detects perceptible latency with an accuracy of 93.94% at a misjudgment rate of 0.00%.

(3) An MTP latency perception threshold model based on joint multiple motion events. This model considers the interaction between adjacent motion events in complex head movements, remedying a limitation of the single-event model. It fuses the pre-event stimulus intensity, the post-event stimulus intensity, and the time interval between events. On the database, at a misjudgment rate of 0.00%, the accuracy is 96.97%, an improvement of 3.03%. Modeling combinations of multiple motion events better matches the complexity and continuity of real head movement and is effective for modeling latency perception.

The MTP latency perception threshold models proposed in this thesis can be used to estimate the perceptual threshold of latency and to detect whether latency is perceptible when users actually watch 360-degree videos. They play an important guiding role in evaluating quality of experience and designing low-latency systems for virtual reality.
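The single-event model described in contribution (2) can be sketched as follows. The thesis states only that the threshold is predicted from the head movement's direction changes and angular velocity; the linear functional form, the constants, and all function names below are illustrative assumptions, not the thesis's actual model or fitted values.

```python
# Hypothetical sketch of a single-motion-event MTP latency threshold model.
# Faster head turns and direction reversals are assumed to make latency
# easier to notice, so both lower the predicted threshold.
# All coefficients are placeholders, not values from the thesis.

def mtp_threshold_ms(angular_velocity_deg_s: float,
                     direction_changed: bool,
                     base_ms: float = 60.0,
                     velocity_gain: float = 0.3,
                     reversal_penalty_ms: float = 15.0) -> float:
    """Estimate the MTP latency perception threshold (ms) for one motion event."""
    threshold = base_ms - velocity_gain * angular_velocity_deg_s
    if direction_changed:
        threshold -= reversal_penalty_ms
    return max(threshold, 0.0)


def latency_perceptible(latency_ms: float,
                        angular_velocity_deg_s: float,
                        direction_changed: bool) -> bool:
    """Classify a latency as perceptible if it exceeds the event's threshold."""
    return latency_ms > mtp_threshold_ms(angular_velocity_deg_s, direction_changed)


# A slow, steady head turn tolerates more latency than a fast reversal.
print(mtp_threshold_ms(10.0, False))   # higher threshold
print(mtp_threshold_ms(100.0, True))   # lower threshold
```

Per-event binary decisions like `latency_perceptible` are what the reported accuracy and misjudgment rate would be computed over.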
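The joint multi-event model of contribution (3) fuses the pre-event stimulus intensity, the post-event stimulus intensity, and the time interval between the two events. One plausible way to realize such a fusion is to let the earlier event's influence decay over the inter-event interval; the exponential decay, constants, and function name here are assumptions for illustration only.

```python
import math

# Illustrative sketch of a joint multi-motion-event threshold model:
# the pre-event stimulus still lowers the threshold of the following
# event, with an influence that decays exponentially over the time
# interval between the two events. Constants are placeholders.

def joint_threshold_ms(pre_intensity: float,
                       post_intensity: float,
                       interval_s: float,
                       base_ms: float = 60.0,
                       gain: float = 0.25,
                       decay_s: float = 0.5) -> float:
    """Estimate the MTP latency threshold (ms) for an event, accounting
    for carry-over from the preceding motion event."""
    carry_over = pre_intensity * math.exp(-interval_s / decay_s)
    effective_intensity = post_intensity + carry_over
    return max(base_ms - gain * effective_intensity, 0.0)


# Back-to-back events interact strongly; widely separated events
# behave almost like independent single events.
print(joint_threshold_ms(100.0, 100.0, 0.0))   # strong interaction
print(joint_threshold_ms(100.0, 100.0, 10.0))  # nearly independent
```

As the interval grows, the model converges to a single-event prediction, which matches the thesis's finding that accounting for adjacent-event interaction mainly improves accuracy on continuous, complex head movement.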
Keywords/Search Tags: Virtual reality, Motion-to-Photon latency, latency perception threshold, head movement