In recent years, the growing demand for information resources has driven storage systems toward ever larger scale and more complex management, sharpening the contradiction between the rapid growth of information resources and limited management capability. This growth also challenges the reliability and scalability of storage systems, while the sharing of information resources becomes increasingly important. Data grid technology can satisfy the demand for high-performance, large-scale distributed storage and distributed management.

The resource management system is an indispensable component of a grid system. A good structural model for resource management offers reliability and scalability, cooperates with other components, and encourages resource owners to share their resources so that consumers can use shared resources fairly. Moreover, it administers resources sensibly, mapping users' resource requirements onto the right set of resources. The economic model is one of the common structural models for network resource management systems, and auction theory has become the most popular economic-model architecture.

The goal of the auction protocol is to select the cheapest replica of a file needed by a job running on a computing element. To this end we use a variant of the Vickrey auction. In our case the role of the auctioneer is played by the access mediator, while the Storage Elements play the role of bidders. However, the access mediator does not start an auction to sell a file but to buy one: each Storage Element bids the price at which it is willing to sell the file, and the winning Storage Element is paid the second-lowest bid by the access mediator. In other words, our economic model uses a reverse Vickrey auction. Because sites are geographically distributed, it is impractical for all grid users to access the same data files.
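The selection rule of the reverse Vickrey auction described above can be sketched in a few lines. This is an illustrative sketch only; the names (`Bid`, `run_reverse_vickrey`, `SE-A`, etc.) are assumptions for the example and do not come from the paper or from OptorSim.

```python
# Sketch of reverse Vickrey (second-price) selection: the lowest bidder
# wins, but is paid the second-lowest bid. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Bid:
    storage_element: str  # bidder: the Storage Element offering the file
    price: float          # price it is willing to sell the replica for

def run_reverse_vickrey(bids):
    """Return (winner, payment): the access mediator buys the file from
    the lowest bidder and pays it the second-lowest bid."""
    if len(bids) < 2:
        raise ValueError("a Vickrey auction needs at least two bidders")
    ordered = sorted(bids, key=lambda b: b.price)
    winner, runner_up = ordered[0], ordered[1]
    return winner.storage_element, runner_up.price

bids = [Bid("SE-A", 12.0), Bid("SE-B", 8.5), Bid("SE-C", 10.0)]
winner, payment = run_reverse_vickrey(bids)
# SE-B wins with the lowest bid (8.5) but is paid the
# second-lowest price (10.0).
```

Paying the second-lowest price is what makes truthful bidding the dominant strategy for the Storage Elements, just as in a standard Vickrey auction.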
An effective approach is to spread data replicas toward end users according to their dynamic access patterns, which reduces data movement among grid sites and decreases both user access time and the network cost of fetching remote data. At the same time, multiple replicas improve load balance, data reliability, and the overall robustness of the system. Replica management is therefore an important constituent of the resource management system.

This paper mainly studies the two primary problems in replica management: replica creation and replica replacement. Replica creation decides the appropriate time and sites at which to create a replica; replica replacement decides which existing replicas to delete when a Storage Element lacks sufficient space. According to their creation and replacement strategies, replica optimization strategies can be broadly classified into four categories: no replication, least recently used, least frequently used, and economy-based replication.

Using the reverse Vickrey auction, the economy-based replica optimization strategy buys and sells files among the Storage Elements of grid sites. When a new file request arrives and the Storage Element is full, the prediction function is evaluated for the new file and for every file already in storage. If no stored file has a value lower than that of the new file, no replication occurs. Otherwise, the least valuable file is deleted and a replica of the new file is created on the Storage Element; if several files share the minimum value, the one with the earliest last-access time is deleted. In this model, choosing a good prediction function is the key problem. Depending on how the prediction function is designed, we consider two economy-based models, namely a binomial-based prediction function and a Zipf-based prediction function.
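The replacement rule above can be sketched as follows. The concrete prediction function is deliberately left as a parameter, since the paper considers several (binomial-based and Zipf-based); the field names (`name`, `last_access`, `value`) and the function name `maybe_replicate` are assumptions for this illustration.

```python
# Sketch of the economy-based replacement decision: replicate only if some
# stored file is predicted to be less valuable than the requested file.
def maybe_replicate(new_file, stored_files, predict_value):
    """stored_files: list of dicts with 'name' and 'last_access' keys.
    Returns the name of the evicted file, or None if no replication occurs."""
    new_value = predict_value(new_file)
    values = [(predict_value(f), f) for f in stored_files]
    min_value = min(v for v, _ in values)
    if min_value >= new_value:
        return None  # every stored file is at least as valuable: keep them all
    # among equally least-valuable files, evict the earliest-accessed one
    candidates = [f for v, f in values if v == min_value]
    victim = min(candidates, key=lambda f: f["last_access"])
    stored_files.remove(victim)
    stored_files.append(new_file)  # create the replica in the freed slot
    return victim["name"]

files = [{"name": "a", "last_access": 1, "value": 5},
         {"name": "b", "last_access": 3, "value": 2},
         {"name": "c", "last_access": 2, "value": 2}]
evicted = maybe_replicate({"name": "d", "last_access": 10, "value": 4},
                          files, lambda f: f["value"])
# "b" and "c" tie at the minimum value 2; "c" has the earlier
# last-access time, so it is evicted and "d" is replicated.
```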
While analyzing the binomial-based prediction function, we found that it does not take the file access history into account, so the predicted file values are necessarily somewhat inaccurate. Since the binomial distribution is a discrete probability distribution, and by the De Moivre-Laplace central limit theorem the normal distribution can approximate the binomial distribution, we improve the binomial-based optimization algorithm by using the normal distribution, which is continuous. The improved algorithm predicts the future value of a file with a prediction function based on its past access history.

OptorSim is a simulation package written in Java. It was developed to mimic the structure of a real Data Grid and study the effectiveness of replica...
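The De Moivre-Laplace approximation underlying the improved algorithm can be illustrated numerically: a Binomial(n, p) probability mass is close to a normal density with mean np and variance np(1-p). This sketch only demonstrates the approximation step; it is not the paper's actual prediction function, and the function names are assumptions for the example.

```python
# De Moivre-Laplace: Binomial(n, p) at k is approximated by the
# Normal(n*p, n*p*(1-p)) density evaluated at k.
import math

def binomial_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

n, p = 100, 0.3
mean, var = n * p, n * p * (1 - p)  # mean 30, variance 21
for k in (25, 30, 35):
    print(f"k={k}: binomial={binomial_pmf(k, n, p):.5f}  "
          f"normal approx={normal_pdf(k, mean, var):.5f}")
```

Replacing the discrete binomial with its continuous normal approximation lets the prediction function be evaluated cheaply and smoothly over the access history.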