
Research On Group Fairness Classification Algorithm

Posted on: 2024-04-15    Degree: Master    Type: Thesis
Country: China    Candidate: Z H Hu    Full Text: PDF
GTID: 2568306932455914    Subject: Information and Communication Engineering
Abstract/Summary: PDF Full Text Request
As artificial intelligence technology continues to be applied across a range of fields, machine learning has become increasingly ubiquitous. However, in practice, machine learning models can produce predictions that are unfair toward specific groups because of biases inherent in the training data. This can have serious consequences for society, making group fairness a critical area of research within machine learning.

Bias in machine learning models stems primarily from two sources: biases inherent in the dataset and biases introduced by algorithm design. Unfortunately, current research has largely overlooked the broader range of biases that exist in real-world scenarios, making it difficult to evaluate group fairness algorithms accurately. In addition, many existing group fairness algorithms suffer from a generalization problem: they cannot reliably maintain their fairness performance on the test set. Furthermore, current algorithms tend to overlook individual differences among the samples within each group, which results in suboptimal fairness performance.

In light of these challenges, this thesis proposes a new approach to addressing model unfairness caused by data and algorithmic biases, leveraging generative adversarial networks to enrich the dataset and introducing two effective group fairness algorithms. The main contributions and innovations of this work are summarized below.

Firstly, to evaluate fairness algorithms comprehensively, this thesis proposes a full-spectrum discrimination tabular data generation network and a full-spectrum discrimination fairness metric. By introducing a fairness regularization term, the generation network can synthesize tabular datasets at any desired level of discrimination. The full-spectrum discrimination fairness metric combines the model's classification accuracy with its fairness performance; it not only enables a comprehensive evaluation of algorithms across the full spectrum of discrimination levels but also identifies the bias ranges to which different fairness algorithms are applicable, guiding the design of subsequent fairness algorithms.

Secondly, to address the fairness generalization problem, this thesis proposes an adaptive priority reweighting fairness classification method. The method first satisfies the model's fairness constraints through adaptive weighting at the group level and then reassigns weights to the samples within each group, giving higher weights to samples closer to the decision boundary. This secondary weight assignment pushes predicted values away from the decision boundary, improving the generalization of fairness. The method also resolves the issue of non-converging sample weights through streaming training, which further improves the model's fairness performance while maintaining classification accuracy.

Finally, to alleviate algorithmic bias in recommendation scenarios, this thesis proposes a fair recommendation method that improves the experience of under-represented user groups. Based on distributionally robust optimization, the method accounts for the differences between users within the same subgroup and assigns higher weights to error-prone samples in each subgroup, achieving optimal fairness performance.
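The abstract does not give a formula for the full-spectrum discrimination fairness metric, only that it combines classification accuracy with fairness performance. As a minimal illustrative sketch (the harmonic-mean combination, the demographic-parity gap as the fairness term, and all function names here are assumptions, not the thesis's actual definition):

```python
def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    rates = []
    for g in (0, 1):
        preds = [p for p, s in zip(y_pred, group) if s == g]
        rates.append(sum(preds) / len(preds))
    return abs(rates[0] - rates[1])


def full_spectrum_score(y_true, y_pred, group):
    """Illustrative combined score: harmonic mean of accuracy and (1 - DP gap).

    A single number near 1 requires the model to be BOTH accurate and fair,
    so the score can be compared across any discrimination level of the data.
    """
    acc = sum(int(t == p) for t, p in zip(y_true, y_pred)) / len(y_true)
    fair = 1.0 - demographic_parity_gap(y_pred, group)
    if acc + fair == 0:
        return 0.0
    return 2 * acc * fair / (acc + fair)
```

A harmonic mean (rather than a plain average) is one natural choice here because it punishes a model that trades one property entirely for the other.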
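The two-stage weighting described for the adaptive priority reweighting method (a group-level fairness correction followed by a within-group boost for samples near the decision boundary) can be sketched as follows. The exponential group update, the `eta` and `margin` parameters, and all names are illustrative assumptions, not the thesis's actual update rule:

```python
import math


def adaptive_priority_weights(probs, groups, group_pos_rate, target_rate,
                              eta=1.0, margin=1.0):
    """Two-stage sketch: (1) up-weight groups whose positive-prediction rate
    lags a target rate; (2) within each group, boost samples whose predicted
    probability lies near the decision boundary p = 0.5."""
    weights = []
    for p, g in zip(probs, groups):
        # Stage 1: multiplicative group weight from the fairness violation.
        gw = math.exp(eta * (target_rate - group_pos_rate[g]))
        # Stage 2: closeness to the boundary, scaled to [0, 1].
        boundary = 1.0 - 2.0 * abs(p - 0.5)
        weights.append(gw * (1.0 + margin * boundary))
    # Normalize so the weights average to 1 over the batch.
    scale = len(weights) / sum(weights)
    return [w * scale for w in weights]
```

Training against these weights penalizes boundary samples more heavily, which pushes their predictions away from the threshold and is the intuition behind the improved fairness generalization claimed above.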
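For the recommendation method, one common way distributionally robust optimization concentrates weight on error-prone samples is an exponential (KL-ball) reweighting of per-sample losses within each subgroup. This is a generic DRO sketch under that assumption, not the thesis's specific objective; the `temperature` parameter and function name are illustrative:

```python
import math


def dro_sample_weights(losses, temperature=1.0):
    """KL-ball DRO-style reweighting: weight grows exponentially with loss,
    so the error-prone samples in a subgroup dominate the training objective.
    Lower temperature concentrates more weight on the worst samples."""
    m = max(losses)  # subtract the max before exponentiating, for stability
    exp_w = [math.exp((l - m) / temperature) for l in losses]
    z = sum(exp_w)
    return [w / z for w in exp_w]
```

Applied per subgroup, this assigns the highest weights to the users the model currently serves worst, which matches the stated goal of improving the experience of under-represented groups.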
Keywords/Search Tags: Machine Learning, Group Fairness, Fairness Metric, Reweighting Algorithm