
The Research And Performance Optimization Of Cloud Transcoding System Based On Hadoop

Posted on: 2015-01-13
Degree: Master
Type: Thesis
Country: China
Candidate: L F Wang
Full Text: PDF
GTID: 2268330425988785
Subject: Communication and Information System
Abstract/Summary:
ABSTRACT: Video traffic has become the dominant traffic on the network, and video applications of every kind are emerging, from digital high-definition TV to IPTV. The terminals used to consume Internet video have also diversified, from PCs to mobile phones. However, different video platforms and terminals support different content and formats, differing in coding format, resolution, frame rate, and so on. To serve users across these platforms, video transcoding is required, that is, conversion of coding format, resolution, frame rate, etc. Transcoding is a time-consuming and resource-intensive task, and with the sharp increase in the volume of video, traditional single-machine or centralized transcoding systems can no longer meet the requirements on efficiency and quality. By concentrating and allocating resources, cloud computing provides powerful computation capability together with good scalability and high fault tolerance, so transcoding work can be moved onto a cloud platform. A cloud platform can absorb the massive demand for video storage and transcoding, and thanks to its aggregation of resources it is easy to use and low in cost. Among the many cloud computing platforms, Hadoop, being open source, is the most widely used.

This paper exploits the MapReduce model to process media content in a distributed, parallel fashion. The system consists of three major components: a proxy server, a video transcoding module, and a Cache module. The proxy server handles users' video service requests, transcoding takes place in the video transcoding module, and the Cache module manages the original and transcoded video files.

A series of tests was then carried out: the system was compared with a single-machine system, the effect of segment number and segment size on system performance was measured, and the proportion of time taken by each stage of execution was analyzed.

During execution the system reads from and writes to HDFS many times. When a client reads data from HDFS, the original replica selection strategy picks the nearest node by network topological distance, which leads to resource contention when hot replicas are placed on the same node or the same rack. This paper presents a new replica selection strategy based on cluster load balancing: a node's load is described by a linear weighted method, and the node with the lightest load is chosen as the reading node. Simulation experiments show that the improved algorithm effectively reduces replica transfer time and increases the throughput of the HDFS cluster.
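The abstract describes the load-balanced replica selection only at a high level. The sketch below illustrates one way a linear weighted load score and lightest-load selection could look; the metric names, the weights, and the node data are illustrative assumptions, and the code does not call any real HDFS replica-selection API.

```java
import java.util.Comparator;
import java.util.List;

// Minimal sketch of the linear-weighted load idea from the abstract:
// each candidate node holding the requested block is scored as a weighted
// sum of (assumed) resource utilizations, and the lightest-loaded node is
// chosen as the reading node instead of the topologically nearest one.
public class ReplicaSelectionSketch {

    // Hypothetical per-node load metrics, each normalized to [0, 1].
    record NodeLoad(String nodeId, double cpu, double memory, double diskIo, double network) {
        // Linear weighted load score; the weights are illustrative only.
        double score(double wCpu, double wMem, double wDisk, double wNet) {
            return wCpu * cpu + wMem * memory + wDisk * diskIo + wNet * network;
        }
    }

    // Pick the replica-holding node with the lowest weighted load.
    static NodeLoad selectLightestNode(List<NodeLoad> candidates) {
        final double wCpu = 0.4, wMem = 0.2, wDisk = 0.2, wNet = 0.2; // assumed weights
        return candidates.stream()
                .min(Comparator.comparingDouble((NodeLoad n) -> n.score(wCpu, wMem, wDisk, wNet)))
                .orElseThrow(() -> new IllegalArgumentException("no replica holders"));
    }

    public static void main(String[] args) {
        List<NodeLoad> replicaHolders = List.of(
                new NodeLoad("datanode-1", 0.85, 0.70, 0.60, 0.50),
                new NodeLoad("datanode-2", 0.30, 0.40, 0.20, 0.35),
                new NodeLoad("datanode-3", 0.55, 0.65, 0.75, 0.40));

        NodeLoad chosen = selectLightestNode(replicaHolders);
        System.out.println("Read block from: " + chosen.nodeId());
    }
}
```

In this toy run the selection would return datanode-2, the node with the lowest weighted score, rather than whichever replica happens to be topologically closest.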
Keywords/Search Tags: cloud computing, Hadoop, cloud transcoding, HDFS, load balancing