With the rapid development of cloud computing, more and more data are being stored on cloud storage servers. As a new type of storage service, cloud storage offers high scalability, high reliability, low cost, and anytime-anywhere access, providing a new model for mass data storage and management. However, because the data are outsourced to a cloud service provider, users effectively relinquish ultimate control over the fate of their data, which raises many challenges. One important issue is how users can ensure the integrity of the data stored on cloud storage servers. Traditional data integrity verification methods require the data to be downloaded completely to the local side for validation. Since the amount of data in the cloud is enormous, downloading the data blocks would place a heavy burden on the network, so traditional methods and techniques are not suitable for cloud storage environments. Among existing solutions, some methods based on RSA public key cryptography perform relatively well: they support an unlimited number of verifications, preserve privacy against verifiers, and incur low communication overhead. However, these algorithms involve a large number of modular exponentiations, so their computation is very expensive, and the data block size must be strictly limited. Taking into account the dynamic nature of cloud storage data, namely that a data block may be modified or deleted several times during its storage period, which forces the original verification metadata to be updated frequently and further increases the computational overhead, we present a data integrity verification algorithm based on the Counting Bloom Filter (CBF), which supports third-party verification.
Because of these dynamic characteristics, a data block does not need to be verified an unlimited number of times, so the algorithm uses cryptographic hash functions in place of the large number of modular exponentiations, effectively reducing the computational overhead. Meanwhile, since the CBF is highly space-efficient, we use it to build probabilistic verification metadata that compresses the checksum values at the cost of only a small loss of accuracy, saving storage and communication overhead. In addition, the impact of the data block size on the overall cost is greatly reduced compared with methods based on RSA public key cryptography. Finally, analysis and simulation results show that, in an environment where the data change dynamically, the algorithm achieves lightweight integrity verification throughout the data lifecycle; in particular, in the challenge-response process, the computational overhead of the cloud server and the third-party auditor is reduced.
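The abstract does not give the paper's concrete construction. As background on the underlying data structure, the following is a minimal sketch of a Counting Bloom Filter: unlike an ordinary Bloom filter, each slot holds a counter rather than a bit, so previously inserted items (here, block checksums) can also be removed when a block is modified or deleted. All names and parameters are illustrative, not taken from the paper.

```python
import hashlib

class CountingBloomFilter:
    """Minimal Counting Bloom Filter sketch: counters per slot allow
    deletion, at the cost of a small false-positive probability on
    membership queries (never a false negative for live items)."""

    def __init__(self, size=1024, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.counters = [0] * size

    def _indexes(self, item):
        # Derive k slot indexes from salted SHA-256 digests; only
        # cheap hashing is used, no modular exponentiation.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for idx in self._indexes(item):
            self.counters[idx] += 1

    def remove(self, item):
        # Only valid for items that were previously added.
        for idx in self._indexes(item):
            if self.counters[idx] > 0:
                self.counters[idx] -= 1

    def __contains__(self, item):
        return all(self.counters[idx] > 0 for idx in self._indexes(item))
```

In a verification scheme of this kind, the checksum of each stored block would be inserted into the filter; when a block is updated, its old checksum is removed and the new one added, so the metadata can track dynamic data without being rebuilt from scratch.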