With the continuing development of computer science and multimedia techniques, simple image textures can no longer meet the needs of all applications. Video textures are widely used in video production, virtual simulation, animation, and game engines. A video texture provides a continuous, infinitely varying stream of images. While individual frames of a video texture may be repeated from time to time, the video sequence as a whole is never repeated exactly. A flickering flame, a waterfall, a flag flapping in the breeze, flying birds: each of these phenomena has an inherently dynamic quality and occurs commonly in nature. Owing to the limits of shooting conditions and storage equipment, a captured video lasts only a finite time. To some extent, a video reflects the changing behavior of the filmed subject over time, but many applications require a continuous, infinitely varying stream of frames. For example, a web page advertising a scenic destination could use a video texture of a beach with palm trees blowing in the wind rather than a static photograph. Video textures could also serve as dynamic backdrops or foreground elements for scenes composited from live and synthetic elements, for example in computer games.

Video texture synthesis has long been an important and challenging topic in computer graphics. This paper investigates key techniques of interactive video texture synthesis based on segmentation. The traditional video texture is synthesized from a finite set of images by randomly rearranging original frames from a source video, which yields a continuous, infinitely varying stream of video images. This is an effective way of producing video textures; however, the source video is generally large. In contrast to the video texture method, we use as input a small collection of still images that samples the dynamic scene.
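The traditional approach mentioned above maps frame-to-frame distances to transition probabilities and then plays the video as a random walk over its own frames. The sketch below is a minimal illustration of that idea, not the exact formulation used by any particular system; the `sigma` scaling and the helper names are assumptions:

```python
import numpy as np

def transition_probabilities(frames, sigma=0.05):
    """Map pairwise frame distances to jump probabilities.

    frames: array of shape (N, H, W), e.g. grayscale frames in [0, 1].
    After showing frame i, jumping to frame j looks seamless when frame j
    resembles frame i+1, so row i of the result is built from D[i+1, :].
    """
    flat = frames.reshape(len(frames), -1).astype(float)
    # D[i, j] = L2 distance between frame i and frame j
    D = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)
    P = np.exp(-D[1:, :] / (sigma * D.max() + 1e-12))
    P /= P.sum(axis=1, keepdims=True)
    return P  # shape (N-1, N): row i gives successor probabilities of frame i

def sample_sequence(P, start, length, rng=None):
    """Generate an arbitrarily long frame index sequence by a random walk."""
    rng = rng or np.random.default_rng(0)
    seq = [start]
    for _ in range(length - 1):
        i = min(seq[-1], len(P) - 1)  # clamp: the last frame has no own row
        seq.append(int(rng.choice(P.shape[1], p=P[i])))
    return seq
```

Because similar frames receive high transition probability, the walk can loop back through the source material indefinitely without an exact repetition of the whole sequence.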
This collection combines the benefit of a small input volume with some indication of the dynamic quality of the scene, assuming that the scene exhibits some degree of regularity in motion. Our system starts by constructing a graph that connects similar images. These images form the "key frames" of the video texture. Initially, however, they are disordered and cannot reflect the dynamic quality of the scene. We adopt a similarity function to recover the temporal order among the still images. Subsequently, image sequences are generated by sampling the graph with a second-order Markov chain model. To alleviate the discontinuity and inconsistency between adjacent images, we further perform thin-plate spline (TPS) warping and frame interpolation. Finally, a visually plausible video texture sequence is synthesized.

In this paper, we focus on generating a visually plausible video texture from a small collection of still images. Our work makes two novel contributions. The first is an image similarity measure that takes both the color of the image and the shape of the moving object into consideration; when the moving objects have distinct shape features, it describes the distance between two images better than the ordinary Euclidean distance. The second is that we use the thin-plate spline warping technique for frame interpolation, together with an inverse distance weighting interpolation algorithm to eliminate the holes generated in interpolated frames.

In conclusion, research on video texture synthesis techniques has important application value. A well-designed video texture synthesis algorithm can enhance the quality of the generated video texture and reduce the time and space cost of generating it, while also laying a good foundation for other applications.
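The exact form of the combined color-and-shape similarity measure is not given here; as an illustration of the idea, the sketch below blends a per-channel color histogram distance with a crude shape descriptor computed from a segmentation mask of the moving object. The descriptors, the `alpha` weighting, and all function names are assumptions for this sketch:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Normalized per-channel color histogram of an RGB image in [0, 1]."""
    h = [np.histogram(img[..., c], bins=bins, range=(0.0, 1.0))[0]
         for c in range(3)]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

def shape_descriptor(mask):
    """Simple shape feature: normalized second-order central moments
    of the binary mask of the moving object."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return np.zeros(3)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    s = mu20 + mu02 + 1e-12   # scale normalization
    return np.array([mu20 / s, mu02 / s, mu11 / s])

def image_distance(img_a, mask_a, img_b, mask_b, alpha=0.5):
    """Weighted sum of color-histogram and shape distances."""
    d_color = np.linalg.norm(color_histogram(img_a) - color_histogram(img_b))
    d_shape = np.linalg.norm(shape_descriptor(mask_a) - shape_descriptor(mask_b))
    return alpha * d_color + (1 - alpha) * d_shape
```

Unlike a plain per-pixel Euclidean distance, this kind of measure can still rank images sensibly when the moving object shifts position but keeps a distinctive silhouette.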
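In a second-order Markov chain, the next frame depends on the last two frames rather than only the most recent one, which helps preserve the direction of motion. A minimal sketch of such a sampler, with the `pair_probs` table format assumed for illustration, might look like:

```python
import numpy as np

def second_order_sample(pair_probs, start, length, rng=None):
    """Random walk in which the next frame depends on the last TWO frames.

    pair_probs: dict mapping a pair (i, j) of consecutive frame indices
                to a 1-D probability vector over the next frame index.
    start: initial pair of frame indices, e.g. (0, 1).
    """
    rng = rng or np.random.default_rng(0)
    seq = list(start)
    while len(seq) < length:
        p = pair_probs[(seq[-2], seq[-1])]
        seq.append(int(rng.choice(len(p), p=p)))
    return seq
```

Conditioning on the previous pair rules out transitions that would immediately reverse the motion, something a first-order chain cannot express.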
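Thin-plate spline warping fits a smooth 2-D deformation through a set of corresponding control points, which can then be used to warp one image toward the next before blending. The following is a standard TPS solve in its basic form, a sketch rather than the specific implementation used here (the regularization term `reg` is an assumption):

```python
import numpy as np

def tps_fit(src, dst, reg=1e-6):
    """Solve thin-plate-spline coefficients mapping src points onto dst points.

    src, dst: (N, 2) arrays of corresponding 2-D control points.
    Returns (W, A): nonlinear weights (N, 2) and affine part (3, 2).
    """
    n = len(src)
    d = np.linalg.norm(src[:, None] - src[None, :], axis=2)
    # radial basis U(r) = r^2 log r (the factor 2 is absorbed into W)
    K = np.where(d > 0, d**2 * np.log(d**2 + 1e-12), 0.0)
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K + reg * np.eye(n)
    L[:n, n:] = P
    L[n:, :n] = P.T
    Y = np.vstack([dst, np.zeros((3, 2))])
    sol = np.linalg.solve(L, Y)
    return sol[:n], sol[n:]

def tps_apply(pts, src, W, A):
    """Evaluate the fitted TPS mapping at arbitrary points pts (M, 2)."""
    d = np.linalg.norm(pts[:, None] - src[None, :], axis=2)
    U = np.where(d > 0, d**2 * np.log(d**2 + 1e-12), 0.0)
    return U @ W + np.hstack([np.ones((len(pts), 1)), pts]) @ A
```

Evaluating the fitted mapping on a pixel grid gives the warp field for frame interpolation; intermediate frames can be produced by scaling the displacement toward the target.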
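Warping leaves holes where no source pixel maps to a destination pixel; inverse distance weighting fills each hole from the surrounding known pixels, with nearer pixels weighted more heavily. A brute-force single-channel sketch of the idea (illustrative only, and quadratic in image size):

```python
import numpy as np

def fill_holes_idw(img, hole_mask, power=2.0):
    """Fill masked pixels by inverse-distance-weighted averages of known pixels.

    img: 2-D float array (one channel); hole_mask: True where a value is missing.
    Each hole pixel gets sum(w * known) / sum(w) with w = 1 / d**power.
    """
    out = img.copy()
    ky, kx = np.nonzero(~hole_mask)            # coordinates of known pixels
    known_vals = img[~hole_mask].astype(float)
    for y, x in zip(*np.nonzero(hole_mask)):
        d = np.hypot(ky - y, kx - x)           # distances to all known pixels
        w = 1.0 / np.power(d, power)           # d > 0: holes are not known
        out[y, x] = np.sum(w * known_vals) / np.sum(w)
    return out
```

A practical implementation would restrict the weighting to a local window around each hole; the global form above keeps the sketch short.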