Stereoscopic display technology was first presented in the early 19th century and now has well over a hundred years of history. The real world around us is three-dimensional: we see objects and perceive their spatial relationships. Today's display devices, such as televisions, monitors, and projectors, generally present only two-dimensional information, so what we see feels flat. As technology advances and living standards improve, two-dimensional display can no longer satisfy public demand, and stereoscopic display has emerged to present objects and scenes together with their spatial relationships. Since its appearance, stereoscopic display has produced numerous achievements and innovations; although it is still not mature enough for large-scale commercial application, its widespread use is not far off. Current stereoscopic display methods fall mainly into two types: technology based on binocular parallax and true three-dimensional display technology.

Integral Imaging, the focus of our study, is a true three-dimensional display technology proposed by G. Lippmann in 1908. It provides continuous parallax, full-color display, and clear spatial relationships, and it requires no special viewing equipment. It is inexpensive to implement, can be realized by computer simulation, and its display process needs only a lens array, which makes it a promising direction for the future development of three-dimensional stereoscopic display.

The basic principle of Integral Imaging is to use a lens array to collect the three-dimensional information of stationary or moving objects. Each small lens records the spatial information seen from its own perspective, and the information collected by each lens is saved as a unit image or a unit video. Following the Integral Imaging principle, all the unit images or unit videos are then composed into the elemental image or the elemental video. For three-dimensional display, the elemental image or elemental video is placed behind the lens array; with appropriate illumination and by the reversibility of the optical path, the object is imaged again and the viewer can observe the three-dimensional information.

In this article we use virtual acquisition: dynamic scenes are built in Maya, and a virtual camera array is generated to capture the scene information. The collected information is stored as a group of videos; in general we collect an M × N group of videos, each with a resolution of P × P. A dedicated video mapping algorithm then maps this group of videos into a single video, called the elemental video, which can be viewed as a 3D animation on a three-dimensional display. As far as we know, there has been no research on this at home or abroad; existing work generally stores the collected information as pictures, mainly because the amount of information in video acquisition is very large. With the traditional method, the acquisition time cannot be kept under control as the camera array grows: as the number of rows and columns of cameras increases, the time to capture the unit videos rises rapidly and becomes a major bottleneck of Integral Imaging.
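To make the composition step concrete, the following is a minimal sketch, not the paper's exact pipeline, of how an M × N group of unit images (each P × P pixels) could be tiled into one elemental image: each unit image is simply placed in the block corresponding to its camera and lens position, while a real system may also flip or reorder pixels depending on the optical setup. The array shapes and the function name are illustrative assumptions.

import numpy as np

def compose_elemental_image(unit_images):
    """Tile unit images of shape (M, N, P, P, C) into an (M*P, N*P, C) elemental image."""
    M, N, P, _, C = unit_images.shape
    elemental = np.zeros((M * P, N * P, C), dtype=unit_images.dtype)
    for m in range(M):
        for n in range(N):
            # Unit image from camera (m, n) fills the block behind lens (m, n).
            elemental[m * P:(m + 1) * P, n * P:(n + 1) * P] = unit_images[m, n]
    return elemental

# Random data standing in for rendered unit images (4 x 5 cameras, 8 x 8 pixels each).
units = np.random.randint(0, 256, size=(4, 5, 8, 8, 3), dtype=np.uint8)
print(compose_elemental_image(units).shape)  # (32, 40, 3)

The same tiling, applied frame by frame, composes the unit videos into the elemental video.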
In this paper we have implemented a fast acquisition method that effectively shortens the acquisition time. To shorten the rendering time, we capture the unit videos by a mapping principle: a new mapping camera array is generated at new positions to collect the information. Where the original approach generates an M × N group of cameras to capture unit videos of P × P resolution, the mapping approach only needs to generate a P × P group of cameras capturing videos of M × N resolution. Since the acquisition time in Maya depends primarily on the number of cameras, the mapping acquisition method shortens the acquisition time of the video group. The number of cameras in the mapping camera array is determined by the resolution of the unit videos of the original camera array: as the number of cameras in the original array changes, the number of cameras in the mapping array remains unchanged and only the resolution of its unit videos changes. Because the rendering speed in Maya is decided mainly by the number of cameras, the recording time grows slowly even when the resolution increases. For example, in our experiments the original approach generates a set of 53 × 33 cameras to capture a 53 × 33 video group at 24 × 24 resolution; by the mapping method we only need to generate a 24 × 24 set of cameras capturing a video group at 53 × 33 resolution. The number of cameras decreases, and a great deal of time is saved.

In this paper we give a brief description of the various parameters and positions of the mapping cameras, together with a detailed analysis of the capture process of the unit videos. The author has also developed an auxiliary module to help Maya batch render, and gives a detailed description of how the unit videos are composed into the elemental video. By collecting information with the mapping camera array, we not only speed up the acquisition process but also capture the real image and the virtual image together, so the depth of field of the integral imaging is doubled compared with the traditional method.
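The sketch below illustrates why the mapping array needs only P × P cameras. It assumes that pixel (m, n) of mapping camera (p, q) corresponds to pixel (p, q) of original camera (m, n), so that one frame of the elemental video can be rebuilt by interleaving the mapped frames; the exact correspondence used in the paper may differ, and the shapes and names here are hypothetical.

import numpy as np

def elemental_frame_from_mapped(mapped_frames):
    """Interleave frames of shape (P, P, M, N, C) into an (M*P, N*P, C) elemental frame."""
    P, _, M, N, C = mapped_frames.shape
    frame = np.empty((M * P, N * P, C), dtype=mapped_frames.dtype)
    for p in range(P):
        for q in range(P):
            # Pixel (m, n) of mapping camera (p, q) lands at (m*P + p, n*P + q).
            frame[p::P, q::P] = mapped_frames[p, q]
    return frame

# 8 x 8 mapping cameras, each recording a 4 x 5 frame, rebuild a 32 x 40 elemental frame.
mapped = np.random.randint(0, 256, size=(8, 8, 4, 5, 3), dtype=np.uint8)
print(elemental_frame_from_mapped(mapped).shape)  # (32, 40, 3)

Under this arrangement the camera count stays at P × P no matter how large the M × N array grows, which is why enlarging the original array only increases the resolution of each mapped video rather than the number of cameras.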