Academic literature on the topic 'Fairness-Accuracy Trade-Off'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Fairness-Accuracy Trade-Off.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Fairness-Accuracy Trade-Off"

1. Jang, Taeuk, Pengyi Shi, and Xiaoqian Wang. "Group-Aware Threshold Adaptation for Fair Classification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 6988–95. http://dx.doi.org/10.1609/aaai.v36i6.20657.

Abstract: The fairness in machine learning is getting increasing attention, as its applications in different fields continue to expand and diversify. To mitigate the discriminated model behaviors between different demographic groups, we introduce a novel post-processing method to optimize over multiple fairness constraints through group-aware threshold adaptation. We propose to learn adaptive classification thresholds for each demographic group by optimizing the confusion matrix estimated from the probability distribution of a classification model output. As we only need an estimated probability distribution…

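As an illustration of the post-processing idea, here is a minimal sketch, not the authors' algorithm (which optimizes an estimated confusion matrix under multiple fairness constraints): it grid-searches one threshold per group on held-out scores, trading accuracy against a demographic-parity-style gap. The function names and the `lam` penalty weight are illustrative assumptions.

```python
import numpy as np

def fit_group_thresholds(scores, labels, groups, lam=1.0,
                         grid=np.linspace(0.05, 0.95, 19)):
    """Grid-search one decision threshold per demographic group.

    Per-group objective: accuracy minus lam * |group positive rate -
    overall base rate| (a demographic-parity-style penalty).
    """
    overall_rate = labels.mean()
    thresholds = {}
    for g in np.unique(groups):
        m = groups == g
        best_t, best_obj = 0.5, -np.inf
        for t in grid:
            preds = (scores[m] >= t).astype(int)
            obj = (preds == labels[m]).mean() - lam * abs(preds.mean() - overall_rate)
            if obj > best_obj:
                best_t, best_obj = t, obj
        thresholds[g] = best_t
    return thresholds

def predict_with_thresholds(scores, groups, thresholds):
    """Apply each example's group-specific threshold."""
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, groups)])
```
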
2. Plecko, Drago, and Elias Bareinboim. "Fairness-Accuracy Trade-Offs: A Causal Perspective." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 25 (2025): 26344–53. https://doi.org/10.1609/aaai.v39i25.34833.

Abstract: With the widespread adoption of AI systems, many of the decisions once made by humans are now delegated to automated systems. Recent works in the literature demonstrate that these automated systems, when used in socially sensitive domains, may exhibit discriminatory behavior based on sensitive characteristics such as gender, sex, religion, or race. In light of this, various notions of fairness and methods to quantify discrimination have been proposed, also leading to the development of numerous approaches for constructing fair predictors. At the same time, imposing fairness constraints may decrease…

3. Langenberg, Anna, Shih-Chi Ma, Tatiana Ermakova, and Benjamin Fabian. "Formal Group Fairness and Accuracy in Automated Decision Making." Mathematics 11, no. 8 (2023): 1771. http://dx.doi.org/10.3390/math11081771.

Abstract: Most research on fairness in Machine Learning assumes the relationship between fairness and accuracy to be a trade-off, with an increase in fairness leading to an unavoidable loss of accuracy. In this study, several approaches for fair Machine Learning are studied to experimentally analyze the relationship between accuracy and group fairness. The results indicated that group fairness and accuracy may even benefit each other, which emphasizes the importance of selecting appropriate measures for performance evaluation. This work provides a foundation for further studies on the adequate objective…

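To make the measurement question concrete, here is a minimal sketch of evaluating accuracy alongside one group-fairness measure (statistical parity difference; the study compares several). All names are illustrative.

```python
import numpy as np

def accuracy_and_parity_gap(y_true, y_pred, groups):
    """Overall accuracy plus the largest gap in positive-prediction
    rates between any two demographic groups (statistical parity)."""
    acc = (y_true == y_pred).mean()
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return acc, max(rates) - min(rates)

# Comparing these two numbers across trained models shows whether
# fairness and accuracy actually trade off on a given dataset.
```
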
4. Gupta, Soumyajit, Venelin Kovatchev, Anubrata Das, Maria De-Arteaga, and Matthew Lease. "Finding Pareto Trade-Offs in Fair and Accurate Detection of Toxic Speech." Information Research: An International Electronic Journal 30, iConf (2025): 123–41. https://doi.org/10.47989/ir30iconf47572.

Abstract: Introduction. Optimizing NLP models for fairness poses many challenges. Lack of differentiable fairness measures prevents gradient-based loss training or requires surrogate losses that diverge from the true metric of interest. In addition, competing objectives (e.g., accuracy vs. fairness) often require making trade-offs based on stakeholder preferences, but stakeholders may not know their preferences before seeing system performance under different trade-off settings. Method. We formulate the GAP loss, a differentiable version of a fairness measure, Accuracy Parity, to provide balanced accuracy…

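A hedged sketch of the general idea behind a differentiable accuracy-parity surrogate (not the paper's exact GAP formulation): soft per-example correctness replaces the non-differentiable 0/1 accuracy, and the penalty is the squared difference between group means. The `beta` weight and two-group 0/1 coding are assumptions.

```python
import torch

def accuracy_parity_penalty(probs, labels, groups, beta=1.0):
    """Differentiable surrogate for accuracy parity between two groups.

    Soft correctness is p for positives and 1 - p for negatives, so
    each group's mean is a differentiable stand-in for its accuracy.
    probs, labels, groups: tensors; groups coded 0/1.
    """
    soft_correct = torch.where(labels == 1, probs, 1.0 - probs)
    acc_a = soft_correct[groups == 0].mean()
    acc_b = soft_correct[groups == 1].mean()
    return beta * (acc_a - acc_b) ** 2  # add to the usual training loss
```
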
5. Tae, Ki Hyun, Hantian Zhang, Jaeyoung Park, Kexin Rong, and Steven Euijong Whang. "Falcon: Fair Active Learning Using Multi-Armed Bandits." Proceedings of the VLDB Endowment 17, no. 5 (2024): 952–65. http://dx.doi.org/10.14778/3641204.3641207.

Abstract: Biased data can lead to unfair machine learning models, highlighting the importance of embedding fairness at the beginning of data analysis, particularly during dataset curation and labeling. In response, we propose Falcon, a scalable fair active learning framework. Falcon adopts a data-centric approach that improves machine learning model fairness via strategic sample selection. Given a user-specified group fairness measure, Falcon identifies samples from "target groups" (e.g., (attribute=female, label=positive)) that are the most informative for improving fairness. However, a challenge arises…

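The bandit component can be illustrated with a generic UCB1 rule (an assumption for illustration; the paper designs its own policies over labeling strategies): each arm is a candidate sample-selection strategy and the reward is the observed fairness improvement after labeling.

```python
import math

def ucb1_pick(counts, rewards, t):
    """UCB1: pick the arm with the best mean reward plus an
    exploration bonus; try each untried arm once first.
    counts[a]: pulls of arm a; rewards[a]: summed reward; t: round."""
    for a, n in enumerate(counts):
        if n == 0:
            return a
    return max(range(len(counts)),
               key=lambda a: rewards[a] / counts[a]
               + math.sqrt(2 * math.log(t) / counts[a]))
```
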
6. Badar, Maryam, Sandipan Sikdar, Wolfgang Nejdl, and Marco Fisichella. "FairTrade: Achieving Pareto-Optimal Trade-Offs between Balanced Accuracy and Fairness in Federated Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (2024): 10962–70. http://dx.doi.org/10.1609/aaai.v38i10.28971.

Abstract: As Federated Learning (FL) gains prominence in distributed machine learning applications, achieving fairness without compromising predictive performance becomes paramount. The data being gathered from distributed clients in an FL environment often leads to class imbalance. In such scenarios, balanced accuracy rather than accuracy is the true representation of model performance. However, most state-of-the-art fair FL methods report accuracy as the measure of performance, which can lead to misguided interpretations of the model's effectiveness to mitigate discrimination. To the best of our knowledge…

7. Li, Xuran, Peng Wu, and Jing Su. "Accurate Fairness: Improving Individual Fairness without Trading Accuracy." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 14312–20. http://dx.doi.org/10.1609/aaai.v37i12.26674.

Abstract: Accuracy and individual fairness are both crucial for trustworthy machine learning, but these two aspects are often incompatible with each other so that enhancing one aspect may sacrifice the other inevitably with side effects of true bias or false fairness. We propose in this paper a new fairness criterion, accurate fairness, to align individual fairness with accuracy. Informally, it requires the treatments of an individual and the individual's similar counterparts to conform to a uniform target, i.e., the ground truth of the individual. We prove that accurate fairness also implies typical group fairness…

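One illustrative reading of the criterion as a test-time check, a sketch under assumptions rather than the paper's formal definition: a prediction counts as accurately fair only if the model returns the ground truth for the individual and for counterparts that differ only in the sensitive attribute.

```python
import numpy as np

def accurate_fairness_rate(model, X, y, sensitive_col, sensitive_values):
    """Fraction of test points predicted correctly both for the
    individual and for every sensitive-attribute counterpart.
    model: any object with an sklearn-style .predict (assumed)."""
    ok = 0
    for i in range(len(X)):
        variants = []
        for v in sensitive_values:
            x = X[i].copy()
            x[sensitive_col] = v          # counterpart differing only here
            variants.append(x)
        preds = model.predict(np.stack(variants))
        ok += int(np.all(preds == y[i]))  # all must match the ground truth
    return ok / len(X)
```
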
8

Silvia, Chiappa, Jiang Ray, Stepleton Tom, Pacchiano Aldo, Jiang Heinrich, and Aslanides John. "A General Approach to Fairness with Optimal Transport." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3633–40. http://dx.doi.org/10.1609/aaai.v34i04.5771.

Abstract: We propose a general approach to fairness based on transporting distributions corresponding to different sensitive attributes to a common distribution. We use optimal transport theory to derive target distributions and methods that allow us to achieve fairness with minimal changes to the unfair model. Our approach is applicable to both classification and regression problems, can enforce different notions of fairness, and enable us to achieve a Pareto-optimal trade-off between accuracy and fairness. We demonstrate that it outperforms previous approaches in several benchmark fairness datasets.

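In one dimension the optimal transport map is quantile matching, so the core idea can be sketched as mapping each group's scores onto a shared target distribution. Using the pooled scores as the target is an assumption for illustration; the paper derives principled target distributions.

```python
import numpy as np

def transport_scores(scores, groups):
    """Map each group's scores to the pooled score distribution via
    1-D quantile matching (the optimal transport map on the line)."""
    target = np.sort(scores)
    out = np.empty_like(scores, dtype=float)
    for g in np.unique(groups):
        m = groups == g
        ranks = scores[m].argsort().argsort()  # within-group ranks 0..n-1
        q = (ranks + 0.5) / m.sum()            # empirical quantiles in (0, 1)
        out[m] = np.quantile(target, q)        # push onto the common target
    return out
```
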
9. Pinzón, Carlos, Catuscia Palamidessi, Pablo Piantanida, and Frank Valencia. "On the Impossibility of Non-trivial Accuracy in Presence of Fairness Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (2022): 7993–8000. http://dx.doi.org/10.1609/aaai.v36i7.20770.

Abstract: One of the main concerns about fairness in machine learning (ML) is that, in order to achieve it, one may have to trade off some accuracy. To overcome this issue, Hardt et al. proposed the notion of equality of opportunity (EO), which is compatible with maximal accuracy when the target label is deterministic with respect to the input features. In the probabilistic case, however, the issue is more complicated: It has been shown that under differential privacy constraints, there are data sources for which EO can only be achieved at the total detriment of accuracy, in the sense that a classifier…

10. Pasupuleti, Murali Krishna. "AI-Based Credit Scoring Models: Balancing Accuracy and Fairness." International Journal of Academic and Industrial Research Innovations (IJAIRI) 5, no. 5 (2025): 631–40. https://doi.org/10.62311/nesx/rphcr23.

Abstract: The integration of artificial intelligence (AI) into credit scoring has transformed traditional risk assessment methodologies by enabling the analysis of complex, multidimensional data. While these models demonstrate superior predictive performance, concerns persist regarding fairness and potential bias against underrepresented demographic groups. This study investigates the trade-off between predictive accuracy and algorithmic fairness in AI-based credit scoring systems. A comparative evaluation of logistic regression, random forest, and XGBoost models was conducted using a publicly…

Dissertations / Theses on the topic "Fairness-Accuracy Trade-Off"

1. Alves da Silva, Guilherme. "Traitement hybride pour l'équité algorithmique" [Hybrid processing for algorithmic fairness]. Electronic thesis or dissertation, Université de Lorraine, 2022. http://www.theses.fr/2022LORR0323.

Abstract: Algorithmic decisions are now used on a daily basis. These decisions often rely on machine learning (ML) algorithms that can produce complex and opaque models. Recent studies have raised fairness concerns by revealing discriminatory outcomes produced by ML models against minorities and unprivileged groups. Since ML models are capable of amplifying discrimination through unfair outcomes, this reveals the need for approaches that uncover and remove unexpected biases. …

Book chapters on the topic "Fairness-Accuracy Trade-Off"

1. Wang, Jingbo, Yannan Li, and Chao Wang. "Synthesizing Fair Decision Trees via Iterative Constraint Solving." In Computer Aided Verification. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13188-2_18.

Abstract: Decision trees are increasingly used to make socially sensitive decisions, where they are expected to be both accurate and fair, but it remains a challenging task to optimize the learning algorithm for fairness in a predictable and explainable fashion. To overcome the challenge, we propose an iterative framework for choosing decision attributes, or features, at each level by formulating feature selection as a series of mixed integer optimization problems. Both fairness and accuracy requirements are encoded as numerical constraints and solved by an off-the-shelf constraint solver. As a…

2. Andrae, Silvio. "Fairness and Bias in Machine Learning Models for Credit Decisions." In Advances in Finance, Accounting, and Economics. IGI Global, 2025. https://doi.org/10.4018/979-8-3693-8186-1.ch001.

Abstract: Machine learning (ML) models transform credit decision-making, improving efficiency, predictive accuracy, and scalability. However, these innovations raise critical concerns about fairness and bias. This chapter explores the complexities of applying ML in lending. Balancing fairness with predictive accuracy is a core challenge, often requiring trade-offs highlighting current ML systems' limitations. The analysis delves into statistical fairness metrics and examines how these metrics address—or exacerbate—the bias-fairness trade-off. It underscores the importance of transparency and interpretability…

3. Yang, Jingran, Lingfeng Zhang, and Min Zhang. "Making Fair Classification via Correlation Alignment." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240570.

Abstract: Machine learning learns patterns from data to improve the performance of decision-making systems through computing, and gradually affects people's lives. However, current research shows that machine learning algorithms may reinforce human discrimination and exacerbate negative impacts on unprivileged groups. To mitigate potential unfairness in machine learning classifiers, we propose a fair classification approach by quantifying the difference in the prediction distribution with the idea of correlation alignment in transfer learning, which improves fairness efficiently by minimizing…

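Correlation alignment can be sketched as a CORAL-style penalty that pulls the second-order statistics of two groups' representation (or prediction) distributions together. This is a generic sketch of the alignment idea, not the chapter's exact objective.

```python
import torch

def coral_penalty(feats_a, feats_b):
    """Squared Frobenius distance between the two groups' feature
    covariance matrices (correlation alignment); add it to the
    classification loss to discourage group-dependent statistics."""
    def cov(x):
        x = x - x.mean(dim=0, keepdim=True)      # center each feature
        return x.T @ x / (x.shape[0] - 1)        # sample covariance
    diff = cov(feats_a) - cov(feats_b)
    return (diff ** 2).sum()
```
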
4. Gao, Jiashi, Xin Yao, and Xuetao Wei. "Anti-Matthew FL: Bridging the Performance Gap in Federated Learning to Counteract the Matthew Effect." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240712.

Abstract: Federated learning (FL) stands as a paradigmatic approach that facilitates model training across heterogeneous and diverse datasets originating from various data providers. However, conventional FLs fall short of achieving consistent performance, potentially leading to performance degradation for clients who are disadvantaged in data resources. Influenced by the Matthew effect, deploying a performance-imbalanced global model in applications further impedes the generation of high-quality data from disadvantaged clients, exacerbating the disparities in data resources among clients. In this work, …

5. Rane, Nitin Liladhar, and Mallikarjuna Paramesha. "Explainable Artificial Intelligence (XAI) as a Foundation for Trustworthy Artificial Intelligence." In Trustworthy Artificial Intelligence in Industry and Society. Deep Science Publishing, 2024. http://dx.doi.org/10.70593/978-81-981367-4-9_1.

Abstract: The rapid integration of artificial intelligence (AI) into various sectors necessitates a focus on trustworthiness, characterized by principles such as fairness, transparency, accountability, robustness, privacy, and ethics. Explainable AI has become essential and central to the achievement of trustworthy AI by answering the "black box" nature of top-of-the-line AI models through its interpretability. The research further develops the core principles relating to trustworthy AI, providing a comprehensive overview of important techniques falling under the XAI rubric, among them LIME (Local Interpretable Model-agnostic Explanations)…

6. Ahuja, Nikita, and Jyothi Pillai. "Exploring Recommender Systems: Types, Evaluation Metrics, and Challenges." In Futuristic Trends in Computing Technologies and Data Sciences Volume 3 Book 8. Iterative International Publishers, Selfypage Developers Pvt Ltd, 2024. http://dx.doi.org/10.58532/v3bkct8p2ch2.

Abstract: Recommender systems have emerged as powerful tools for personalized information filtering and recommendation generation. However, these systems are not without their challenges and issues. This abstract explores the various issues faced by recommender systems and the implications they have on recommendation quality and user satisfaction. Data sparsity is a prevalent issue where recommender systems struggle to generate accurate recommendations due to limited or sparse user-item interaction data. This poses a significant challenge as it becomes difficult to capture user preferences and identify…

7. Boyle, Alan. "Popular Audiences on the Web." In A Field Guide for Science Writers. Oxford University Press, 2005. http://dx.doi.org/10.1093/oso/9780195174991.003.0019.

Abstract: Let's face it: We're all Web journalists now. You might be working for a newspaper or magazine, a television or radio outlet, but your story is still likely to end up on the Web as well as in its original medium. You or your publication may even provide supplemental material that appears only on the Web—say, a behind-the-scenes notebook, an interactive graphic, or a blog. Or you might even be a journalist whose work appears almost exclusively on the Web—like me. I worked at daily newspapers for 19 years before joining MSNBC, a combined Web/television news organization. So I still tend to think…

Conference papers on the topic "Fairness-Accuracy Trade-Off"

1. Liu, Yazheng, Xi Zhang, and Sihong Xie. "Trade Less Accuracy for Fairness and Trade-Off Explanation for GNN." In 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022. http://dx.doi.org/10.1109/bigdata55660.2022.10020318.

2. Cooper, A. Feder, and Ellen Abrams. "Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research." In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. ACM, 2021. http://dx.doi.org/10.1145/3461702.3462519.

3. Silva, Bruno Pires M., and Lilian Berton. "Analyzing the Trade-Off Between Fairness and Model Performance in Supervised Learning: A Case Study in the MIMIC Dataset." In Simpósio Brasileiro de Computação Aplicada à Saúde. Sociedade Brasileira de Computação - SBC, 2025. https://doi.org/10.5753/sbcas.2025.6994.

Abstract: Fairness has become a key area in machine learning (ML), aiming to ensure equitable outcomes across demographic groups and mitigate biases. This study examines fairness in healthcare using the MIMIC III dataset, comparing traditional and fair ML approaches in pre-, in-, and post-processing stages. Methods include Correlation Remover and Adversarial Learning from Fairlearn, and Equalized Odds Post-processing from AI Fairness 360. We evaluate performance (accuracy, F1-score) alongside fairness metrics (equal opportunity, equalized odds) considering different sensitive attributes. Notably, Equalized Odds…

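As a sketch of this kind of evaluation loop (a manual implementation for illustration; the study itself uses Fairlearn and AI Fairness 360), performance and fairness metrics can be reported side by side:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def equalized_odds_gap(y_true, y_pred, groups):
    """Largest between-group gap in TPR (y=1) or FPR (y=0);
    assumes every group has examples of both classes."""
    gaps = []
    for y in (1, 0):
        rates = [y_pred[(groups == g) & (y_true == y)].mean()
                 for g in np.unique(groups)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

def report(y_true, y_pred, groups):
    return {"accuracy": accuracy_score(y_true, y_pred),
            "f1": f1_score(y_true, y_pred),
            "equalized_odds_gap": equalized_odds_gap(y_true, y_pred, groups)}
```
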
4. Bufi, Salvatore, Vincenzo Paparella, Vito Walter Anelli, and Tommaso Di Noia. "Legal but Unfair: Auditing the Impact of Data Minimization on Fairness and Accuracy Trade-off in Recommender Systems." In UMAP '25: 33rd ACM Conference on User Modeling, Adaptation and Personalization. ACM, 2025. https://doi.org/10.1145/3699682.3728356.

5. Xie, Junsong, Yonghui Yang, Zihan Wang, and Le Wu. "Learning Fair Representations for Recommendation via Information Bottleneck Principle." In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/273.

Abstract: User-oriented recommender systems (RS) characterize users' preferences based on observed behaviors and are widely deployed in personalized services. However, RS may unintentionally capture biases related to sensitive attributes (e.g., gender) from behavioral data, leading to unfair issues and discrimination against particular groups (e.g., females). Adversarial training is a popular technique for fairness-aware RS, when filtering sensitive information in user modeling. Despite advancements in fairness, achieving a good accuracy-fairness trade-off remains a challenge in adversarial training. In…

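The adversarial-training baseline the authors build on can be sketched with a gradient-reversal layer: an adversary predicts the sensitive attribute from the user representation, and the reversed gradient pushes the encoder to discard that information. This is a generic sketch; the paper's contribution is the information-bottleneck objective, and the layer sizes here are assumptions.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad

adversary = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

def adversarial_loss(user_repr, sensitive):
    """BCE loss of the adversary on gradient-reversed user representations;
    minimizing it trains the adversary while the encoder unlearns the
    sensitive attribute. sensitive: float tensor of 0./1. labels."""
    logits = adversary(GradReverse.apply(user_repr)).squeeze(-1)
    return nn.functional.binary_cross_entropy_with_logits(logits, sensitive)
```
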
6. Bell, Andrew, Ian Solano-Kamaiko, Oded Nov, and Julia Stoyanovich. "It’s Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy." In FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency. ACM, 2022. http://dx.doi.org/10.1145/3531146.3533090.
