
Toward attack-resistant distributed information systems by means of social trust

Degree: Ph.D. (Thesis), 2011
University: Duke University
Candidate: Michael Sirivianos
Subject: Computer Science
Abstract:
The first system proposed in this thesis assesses the ability of Online Social Networking (OSN) users to correctly classify online identity assertions. The second system assesses the ability of OSN users to correctly configure devices that classify spamming hosts. In both systems, an OSN user explicitly ascribes to his friends a value that reflects how trustworthy he considers their classifications. In addition, both solutions compare the classification input of friends to obtain a more accurate measure of their pairwise trust. Our solutions also exploit trust transitivity over the social network to assign trust values to OSN users. These values are used to weigh the classification input of each user in order to derive an aggregate trust score for the identity assertions or the hosts.

In particular, the first problem involves assessing the veracity of assertions that online users make about their identity attributes. Anonymity is one of the main virtues of the Internet: it protects privacy and freedom of speech, but it makes it hard to assess the veracity of assertions that online users make concerning their identity attributes (e.g., age or profession). We propose FaceTrust, the first system that uses OSN services to provide lightweight identity credentials while preserving a user's anonymity. FaceTrust employs a "game with a purpose" design to elicit the opinions of a user's friends about the user's self-claimed identity attributes, and uses attack-resistant trust inference to compute veracity scores for the attributes. FaceTrust then provides credentials that a user can use to corroborate his online identity assertions.

We evaluated FaceTrust using a crawled social network graph as well as a real-world deployment. The results show that our veracity scores correlate strongly with the ground truth, even when a large fraction of the social network's users are dishonest. We have drawn the following lessons from the design and deployment of FaceTrust: (a) it is plausible to obtain a relatively reliable measure of the veracity of identity assertions by relying on the friends of the user who made the assertions to classify them, and by employing social trust to determine the trustworthiness of those classifications; (b) it is plausible to employ trust inference over the social graph to effectively mitigate Sybil attacks; and (c) users tend to classify their friends' identity assertions mostly correctly.
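To make the trust-transitivity and weighted-aggregation ideas concrete, the following is a minimal Python sketch of one plausible instantiation. It is an assumption-laden illustration, not FaceTrust's actual algorithm: the per-hop decay, the [0, 1] trust scale, and the function names are all choices made for this example.

```python
# Illustrative sketch (not FaceTrust's actual algorithm): propagate trust
# transitively over a social graph, then compute a trust-weighted veracity
# score for an identity assertion from friends' yes/no classifications.

def propagate_trust(direct_trust, seed, iterations=3, decay=0.5):
    """direct_trust maps user -> {friend: trust in [0, 1]}.
    Returns trust values inferred from the seed's viewpoint, where
    trust decays with each hop along a social path."""
    trust = {seed: 1.0}
    for _ in range(iterations):
        updated = dict(trust)
        for user, score in trust.items():
            for friend, edge in direct_trust.get(user, {}).items():
                candidate = score * edge * decay
                if candidate > updated.get(friend, 0.0):
                    updated[friend] = candidate
        trust = updated
    return trust

def veracity_score(votes, trust):
    """votes maps voter -> True/False on an identity assertion.
    Each vote is weighed by the voter's inferred trust value."""
    total = sum(trust.get(v, 0.0) for v in votes)
    if total == 0.0:
        return 0.0
    agree = sum(trust.get(v, 0.0) for v, ok in votes.items() if ok)
    return agree / total

# Hypothetical example: friends vote on a user's self-claimed attribute.
graph = {"alice": {"bob": 0.9, "carol": 0.6}, "bob": {"dave": 0.8}}
trust = propagate_trust(graph, seed="alice")
print(veracity_score({"bob": True, "carol": True, "dave": False}, trust))
```

Because inferred trust decays with social distance, distant (and in particular Sybil) identities carry little weight in the aggregate, which is the intuition behind the attack resistance described above.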
The second problem in which we apply social trust involves assessing the trustworthiness of reporters (detectors) of spamming hosts in a collaborative spam mitigation system. Spam mitigation can be broadly classified into two main approaches: (a) centralized security infrastructures that rely on a limited number of trusted monitors (reporters) to detect and report malicious traffic; and (b) highly distributed systems that leverage the experiences of multiple nodes within distinct trust domains. The first approach offers limited threat coverage and slow response times, and it is often proprietary. The second is not widely adopted, partly due to the lack of assurances regarding the trustworthiness of the reporters.

Our proposal, SocialFilter, aims to combine the trustworthiness of centralized security services with the wide coverage, responsiveness, and low cost of large-scale collaborative spam mitigation. It enables nodes with no email classification functionality to query the network on whether a host is a spammer. SocialFilter employs trust inference to weigh the reports concerning spamming hosts that collaborating reporters submit to the system. To the best of our knowledge, it is the first collaborative threat mitigation system that assesses the trustworthiness of reporters both by auditing their reports and by leveraging the social network of the reporters' human administrators.

We performed a simulation-based evaluation of SocialFilter, which indicates its potential: during a simulated spam campaign, SocialFilter correctly classified 99% of spam while yielding no false positives. The design and evaluation of SocialFilter taught us the following lessons: (a) it is plausible to introduce Sybil-resilient, OSN-based trust inference mechanisms to improve the reliability and attack-resilience of collaborative spam mitigation; (b) using social links to obtain the trustworthiness of reports concerning spammers (spammer reports) can achieve spam-blocking effectiveness comparable to that of approaches that use social links to rate-limit spam (e.g., Ostra [64]); and (c) unlike Ostra, SocialFilter yields no false positives. (Abstract shortened by UMI.)
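The report-weighing and report-auditing steps can likewise be sketched in a few lines of Python. Again, this is a hedged illustration under assumed semantics (report confidences in [0, 1], additive trust updates), not SocialFilter's published design; spammer_score and audit are hypothetical helpers named for this example.

```python
# Illustrative sketch (assumed semantics, not SocialFilter's published
# design): weigh each spammer report by its reporter's trust, and adjust
# a reporter's trust when its report is later verified (the audit step).

def spammer_score(reports, reporter_trust):
    """reports maps reporter -> confidence in [0, 1] that a host is
    spamming. Returns the trust-weighted aggregate confidence."""
    total = sum(reporter_trust.get(r, 0.0) for r in reports)
    if total == 0.0:
        return 0.0
    weighted = sum(reporter_trust.get(r, 0.0) * c for r, c in reports.items())
    return weighted / total

def audit(reporter_trust, reporter, was_correct, step=0.1):
    """Nudge a reporter's trust up or down after verifying one report."""
    current = reporter_trust.get(reporter, 0.5)
    delta = step if was_correct else -step
    reporter_trust[reporter] = min(1.0, max(0.0, current + delta))

# Hypothetical example: two reporters flag a host; the trusted one dominates.
trust = {"reporter_a": 0.9, "reporter_b": 0.2}
print(spammer_score({"reporter_a": 0.8, "reporter_b": 0.1}, trust))
audit(trust, "reporter_b", was_correct=False)
```

A querying node with no classification capability of its own could simply threshold the aggregate score, which is one way the "query the network" interface described above might be realized.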