Academic literature on the topic 'ML fairness'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'ML fairness.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "ML fairness"

1. Weinberg, Lindsay. "Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches." Journal of Artificial Intelligence Research 74 (May 6, 2022): 75–109. http://dx.doi.org/10.1613/jair.1.13196.

Abstract:
This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions in machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society’s most marginalized. …

2. Bærøe, Kristine, Torbjørn Gundersen, Edmund Henden, and Kjetil Rommetveit. "Can medical algorithms be fair? Three ethical quandaries and one dilemma." BMJ Health & Care Informatics 29, no. 1 (2022): e100445. http://dx.doi.org/10.1136/bmjhci-2021-100445.

Abstract:
Objective: To demonstrate what it takes to reconcile the idea of fairness in medical algorithms and machine learning (ML) with the broader discourse of fairness and health equality in health research. Method: The methodological approach used in this paper is theoretical and ethical analysis. Result: We show that the question of ensuring comprehensive ML fairness is interrelated to three quandaries and one dilemma. Discussion: As fairness in ML depends on a nexus of inherent justice and fairness concerns embedded in health research, a comprehensive conceptualisation is called for to make the notion useful…

3. Li, Yanjun, Huan Huang, Qiang Geng, Xinwei Guo, and Yuyu Yuan. "Fairness Measures of Machine Learning Models in Judicial Penalty Prediction." Journal of Internet Technology (網際網路技術學刊) 23, no. 5 (2022): 1109–16. http://dx.doi.org/10.53106/160792642022092305019.

Abstract:
Machine learning (ML) has been widely adopted in many software applications across domains. However, accompanying the outstanding performance, the behaviors of the ML models, which are essentially a kind of black-box software, could be unfair and hard to understand in many cases. In our human-centered society, an unfair decision could potentially damage human value, even causing severe social consequences, especially in decision-critical scenarios such as legal judgment. …

4. Alotaibi, Dalha Alhumaidi, Jianlong Zhou, Yifei Dong, Jia Wei, Xin Janet Ge, and Fang Chen. "Quantile Multi-Attribute Disparity (QMAD): An Adaptable Fairness Metric Framework for Dynamic Environments." Electronics 14, no. 8 (2025): 1627. https://doi.org/10.3390/electronics14081627.

Abstract:
Fairness is becoming indispensable for ethical machine learning (ML) applications. However, it remains a challenge to identify unfairness if there are changes in the distribution of underlying features among different groups and machine learning outputs. This paper proposes a novel fairness metric framework considering multiple attributes, including ML outputs and feature variations for bias detection. The framework comprises two principal components, comparison and aggregation functions, which collectively ensure fairness metrics with high adaptability across various contexts and scenarios. …
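
The comparison-then-aggregation structure described in this abstract can be illustrated with a minimal sketch. The function name, the choice of quantiles, and the worst-case aggregation below are our own assumptions for illustration, not the authors' implementation.

    import numpy as np

    def quantile_disparity(scores_a, scores_b, quantiles=(0.25, 0.5, 0.75)):
        # Comparison step: contrast the two groups' ML output
        # distributions at several quantiles, not just at the mean.
        qa = np.quantile(scores_a, quantiles)
        qb = np.quantile(scores_b, quantiles)
        gaps = np.abs(qa - qb)
        # Aggregation step: collapse per-quantile gaps into one score
        # (here the worst case; a weighted mean is another option).
        return gaps.max()

    # Toy usage: model scores for two demographic groups.
    rng = np.random.default_rng(0)
    print(quantile_disparity(rng.normal(0.55, 0.10, 1000),
                             rng.normal(0.50, 0.15, 1000)))

Comparing whole distributions at several quantiles is what lets a metric of this kind react to shifts in the underlying feature and output distributions that a single mean difference would miss.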

5. Ghosh, Bishwamittra, Debabrota Basu, and Kuldeep S. Meel. "Algorithmic Fairness Verification with Graphical Models." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (2022): 9539–48. http://dx.doi.org/10.1609/aaai.v36i9.21187.

Abstract:
In recent years, machine learning (ML) algorithms have been deployed in safety-critical and high-stake decision-making, where the fairness of algorithms is of paramount importance. Fairness in ML centers on detecting bias towards certain demographic populations induced by an ML classifier and proposes algorithmic solutions to mitigate the bias with respect to different fairness definitions. To this end, several fairness verifiers have been proposed that compute the bias in the prediction of an ML classifier—essentially beyond a finite dataset—given the probability distribution of input features…
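
The central idea above, measuring a classifier's bias with respect to an assumed probability distribution of inputs rather than a fixed dataset, can be sketched with plain sampling. The Gaussian input model and logistic classifier are stand-ins of our own; the paper itself achieves this with graphical models rather than brute-force sampling.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Train a toy classifier on data with a binary sensitive attribute s.
    X = rng.normal(size=(2000, 3))
    s = rng.integers(0, 2, 2000)
    y = (X[:, 0] + 0.8 * s + rng.normal(scale=0.5, size=2000) > 0).astype(int)
    clf = LogisticRegression().fit(np.column_stack([X, s]), y)

    # Verification step: estimate the statistical-parity gap under an
    # *assumed* input distribution by drawing fresh samples from it,
    # instead of reusing the finite training set.
    Xv = rng.normal(size=(100_000, 3))
    p1 = clf.predict(np.column_stack([Xv, np.ones(len(Xv))])).mean()
    p0 = clf.predict(np.column_stack([Xv, np.zeros(len(Xv))])).mean()
    print("estimated statistical-parity gap:", p1 - p0)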

6. Kuzucu, Selim, Jiaee Cheong, Hatice Gunes, and Sinan Kalkan. "Uncertainty as a Fairness Measure." Journal of Artificial Intelligence Research 81 (October 13, 2024): 307–35. http://dx.doi.org/10.1613/jair.1.16041.

Abstract:
Unfair predictions of machine learning (ML) models impede their broad acceptance in real-world settings. Tackling this arduous challenge first necessitates defining what it means for an ML model to be fair. This has been addressed by the ML community with various measures of fairness that depend on the prediction outcomes of the ML models, either at the group-level or the individual-level. These fairness measures are limited in that they utilize point predictions, neglecting their variances, or uncertainties, making them susceptible to noise, missingness and shifts in data. …
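
Because the paper argues that point-prediction fairness measures ignore predictive uncertainty, a bootstrap-ensemble sketch shows one generic way to compare uncertainty across groups. This variance-based illustration rests on our own assumptions (including that both classes appear in every resample) and is not the authors' specific measure.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def groupwise_uncertainty_gap(X, y, s, n_models=25, seed=0):
        # Fit a small ensemble on bootstrap resamples; the spread of
        # its predicted probabilities is a crude per-example uncertainty.
        rng = np.random.default_rng(seed)
        probs = []
        for _ in range(n_models):
            idx = rng.integers(0, len(X), len(X))
            m = DecisionTreeClassifier(max_depth=4).fit(X[idx], y[idx])
            probs.append(m.predict_proba(X)[:, 1])
        spread = np.std(probs, axis=0)
        # Fairness reading: do the two groups face different levels of
        # predictive uncertainty, regardless of the point predictions?
        return abs(spread[s == 1].mean() - spread[s == 0].mean())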

7. Weerts, Hilde, Florian Pfisterer, Matthias Feurer, et al. "Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML." Journal of Artificial Intelligence Research 79 (February 17, 2024): 639–77. http://dx.doi.org/10.1613/jair.1.14747.

Abstract:
The field of automated machine learning (AutoML) introduces techniques that automate parts of the development of machine learning (ML) systems, accelerating the process and reducing barriers for novices. However, decisions derived from ML models can reproduce, amplify, or even introduce unfairness in our societies, causing harm to (groups of) individuals. In response, researchers have started to propose AutoML systems that jointly optimize fairness and predictive performance to mitigate fairness-related harm. However, fairness is a complex and inherently interdisciplinary subject…
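
The joint optimization of fairness and predictive performance performed by fairness-aware AutoML systems can be shown in miniature as constrained model selection. The constraint form, the statistical-parity criterion, and the threshold below are our own simplifications.

    import numpy as np

    def select_model(fitted_candidates, X_val, y_val, s_val, eps=0.05):
        # Keep candidates whose statistical-parity gap on validation
        # data stays below eps, then pick the most accurate survivor.
        best, best_acc = None, -1.0
        for model in fitted_candidates:
            pred = model.predict(X_val)
            gap = abs(pred[s_val == 1].mean() - pred[s_val == 0].mean())
            acc = (pred == y_val).mean()
            if gap <= eps and acc > best_acc:
                best, best_acc = model, acc
        return best  # None if no candidate meets the fairness constraint

A real AutoML system would search model classes and hyperparameters and often treats the two objectives via multi-objective optimization; the article's guidelines address which of these choices can reasonably be automated at all.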

8. Singh, Vivek K., and Kailash Joshi. "Integrating Fairness in Machine Learning Development Life Cycle: Fair CRISP-DM." e-Service Journal 14, no. 2 (2022): 1–24. http://dx.doi.org/10.2979/esj.2022.a886946.

Abstract:
Developing efficient processes for building machine learning (ML) applications is an emerging topic for research. One of the well-known frameworks for organizing, developing, and deploying predictive machine learning models is the cross-industry standard for data mining (CRISP-DM). However, the framework does not provide any guidelines for detecting and mitigating different types of fairness-related biases in the development of ML applications. The study of these biases is a relatively recent stream of research. To address this significant theoretical and practical gap, we propose a new…

9. Makhlouf, Karima, Sami Zhioua, and Catuscia Palamidessi. "On the Applicability of Machine Learning Fairness Notions." ACM SIGKDD Explorations Newsletter 23, no. 1 (2021): 14–23. http://dx.doi.org/10.1145/3468507.3468511.

Abstract:
Machine Learning (ML) based predictive systems are increasingly used to support decisions with a critical impact on individuals' lives such as college admission, job hiring, child custody, criminal risk assessment, etc. As a result, fairness emerged as an important requirement to guarantee that ML predictive systems do not discriminate against specific individuals or entire sub-populations, in particular, minorities. Given the inherent subjectivity of viewing the concept of fairness, several notions of fairness have been introduced in the literature. This paper is a survey of fairness notions…
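
Several of the fairness notions such surveys catalogue can be computed in a few lines, which also makes their mutual tensions easy to probe empirically. A minimal sketch for a binary sensitive attribute; the selection of notions and the naming are ours, and each group is assumed to contain predicted positives.

    import numpy as np

    def fairness_notions(y_true, y_pred, s):
        # Selection rate P(y_pred = 1) restricted to a boolean mask.
        def rate(mask):
            return y_pred[mask].mean()
        # Statistical parity: equal selection rates across groups.
        stat_parity = rate(s == 1) - rate(s == 0)
        # Equal opportunity: equal true-positive rates.
        eq_opp = rate((s == 1) & (y_true == 1)) - rate((s == 0) & (y_true == 1))
        # Predictive parity: equal precision among predicted positives.
        prec_gap = (y_true[(s == 1) & (y_pred == 1)].mean()
                    - y_true[(s == 0) & (y_pred == 1)].mean())
        return {"statistical_parity": stat_parity,
                "equal_opportunity": eq_opp,
                "predictive_parity": prec_gap}

Which of these notions applies in a given setting is exactly the question the survey addresses: they encode different, often mutually incompatible, readings of non-discrimination.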

10. Zhou, Zijian, Xinyi Xu, Rachael Hwee Ling Sim, Chuan Sheng Foo, and Bryan Kian Hsiang Low. "Probably Approximate Shapley Fairness with Applications in Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 5 (2023): 5910–18. http://dx.doi.org/10.1609/aaai.v37i5.25732.

Abstract:
The Shapley value (SV) is adopted in various scenarios in machine learning (ML), including data valuation, agent valuation, and feature attribution, as it satisfies their fairness requirements. However, as exact SVs are infeasible to compute in practice, SV estimates are approximated instead. This approximation step raises an important question: do the SV estimates preserve the fairness guarantees of exact SVs? We observe that the fairness guarantees of exact SVs are too restrictive for SV estimates. Thus, we generalise Shapley fairness to probably approximate Shapley fairness…
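
Exact Shapley values require evaluating all 2^n coalitions, so ML applications rely on estimates such as the permutation-sampling scheme sketched below; the paper asks when such estimates keep the fairness guarantees of exact SVs. The additive toy game is our own choice, picked because its exact Shapley values are known to equal the players' weights.

    import numpy as np

    def shapley_estimates(value_fn, n_players, n_perms=2000, seed=0):
        # Monte Carlo Shapley: average each player's marginal
        # contribution over random orderings of the players.
        rng = np.random.default_rng(seed)
        phi = np.zeros(n_players)
        for _ in range(n_perms):
            coalition, prev = [], value_fn(frozenset())
            for p in rng.permutation(n_players):
                coalition.append(p)
                cur = value_fn(frozenset(coalition))
                phi[p] += cur - prev
                prev = cur
        return phi / n_perms

    # Toy usage: additive game, so estimates should match the weights.
    w = np.array([0.1, 0.4, 0.5])
    print(shapley_estimates(lambda S: sum(w[i] for i in S), 3))
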
More sources

Dissertations / Theses on the topic "ML fairness"

1. Kaplan, Caelin. "Compromis inhérents à l'apprentissage automatique préservant la confidentialité" [Inherent trade-offs in privacy-preserving machine learning]. Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4045.

Abstract:
As machine learning (ML) models are increasingly integrated into a wide range of applications, it becomes more important than ever to guarantee the privacy of individuals' data. However, current techniques often entail a loss of utility and can affect factors such as fairness and interpretability. This thesis aims to deepen the understanding of the trade-offs involved in three privacy-preserving ML techniques, differential privacy, empirical defenses, and federated learning, and to propose methods…
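
One trade-off the thesis examines, between differential privacy and utility, shows up in the smallest possible example: a privately released mean gets noisier as the privacy budget epsilon shrinks. This is a textbook Laplace-mechanism sketch, independent of the thesis's own methods.

    import numpy as np

    def dp_mean(values, lower, upper, epsilon, seed=0):
        # Clip to a known range so the sensitivity of the mean is
        # bounded by (upper - lower) / n, then add noise calibrated
        # to sensitivity / epsilon (smaller epsilon, more noise).
        rng = np.random.default_rng(seed)
        x = np.clip(values, lower, upper)
        sensitivity = (upper - lower) / len(x)
        return x.mean() + rng.laplace(scale=sensitivity / epsilon)

    ages = np.random.default_rng(1).integers(18, 90, 500)
    for eps in (0.1, 1.0, 10.0):
        print(eps, round(dp_mean(ages, 18, 90, eps), 3))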

Book chapters on the topic "ML fairness"

1. Steif, Ken. "People-based ML Models: Algorithmic Fairness." In Public Policy Analytics. CRC Press, 2021. http://dx.doi.org/10.1201/9781003054658-7.

2. d’Aloisio, Giordano, Antinisca Di Marco, and Giovanni Stilo. "Democratizing Quality-Based Machine Learning Development through Extended Feature Models." In Fundamental Approaches to Software Engineering. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30826-0_5.

Abstract:
ML systems have become an essential tool for experts of many domains, data scientists and researchers, allowing them to find answers to many complex business questions starting from raw datasets. Nevertheless, the development of ML systems able to satisfy the stakeholders’ needs requires an appropriate amount of knowledge about the ML domain. Over the years, several solutions have been proposed to automate the development of ML systems. However, an approach taking into account the new quality concerns needed by ML systems (like fairness, interpretability, privacy, and others) is still…

3. Roberts-Licklider, Karen, and Theodore Trafalis. "Fairness in Optimization and ML: A Survey Part 2." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-81010-7_17.

4. Gonçalves, Rafael, Filipe Gouveia, Inês Lynce, and José Fragoso Santos. "Proxy Attribute Discovery in Machine Learning Datasets via Inductive Logic Programming." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-90653-4_17.

Abstract:
The issue of fairness is a well-known challenge in Machine Learning (ML) that has gained increased importance with the emergence of Large Language Models (LLMs) and generative AI. Algorithmic bias can manifest during the training of ML models due to the presence of sensitive attributes, such as gender or racial identity. One approach to mitigate bias is to avoid making decisions based on these protected attributes. However, indirect discrimination can still occur if sensitive information is inferred from proxy attributes. To prevent this, there is a growing interest in detecting potential…
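
The chapter discovers proxies with inductive logic programming; a much cruder, self-contained alternative is to rank features by how much information they carry about the sensitive attribute. The screening heuristic below is our own stand-in, not the chapter's method.

    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    def rank_proxy_candidates(X, sensitive, feature_names):
        # High mutual information with the sensitive attribute flags a
        # feature as a potential proxy for indirect discrimination.
        mi = mutual_info_classif(X, sensitive, random_state=0)
        order = np.argsort(mi)[::-1]
        return [(feature_names[i], float(mi[i])) for i in order]

Unlike such a per-feature screen, the ILP approach can surface combinations of features that jointly reveal the protected attribute even when each looks innocuous on its own.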

5. Silva, Inês Oliveira e., Carlos Soares, Inês Sousa, and Rayid Ghani. "Systematic Analysis of the Impact of Label Noise Correction on ML Fairness." In Lecture Notes in Computer Science. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8391-9_14.

6. Chopra, Deepti, and Roopal Khurana. "Bias and Fairness in ML." In Introduction to Machine Learning with Python. Bentham Science Publishers, 2023. http://dx.doi.org/10.2174/9789815124422123010012.

Abstract:
In machine learning and AI, future predictions are based on past observations, and bias is based on prior information. Harmful biases occur because of human biases which are learned by an algorithm from the training data. In the previous chapter, we discussed training versus testing, bounding the testing error, and VC dimension. In this chapter, we will discuss bias and fairness.

7. Zhang, Wenbin, Zichong Wang, Juyong Kim, et al. "Individual Fairness Under Uncertainty." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230621.

Abstract:
Algorithmic fairness, the research field of making machine learning (ML) algorithms fair, is an established area in ML. As ML technologies expand their application domains, including ones with high societal impact, it becomes essential to take fairness into consideration during the building of ML systems. Yet, despite its wide range of socially sensitive applications, most work treats the issue of algorithmic bias as an intrinsic property of supervised learning, i.e., the class label is given as a precondition. Unlike prior studies in fairness, we propose an individual fairness measure…
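
Individual fairness is commonly read as 'similar individuals should receive similar predictions', which suggests a direct pairwise audit. The distance measure and thresholds below are our own assumptions; the chapter's contribution is a measure that additionally copes with uncertain class labels.

    import numpy as np

    def individual_fairness_violations(X, scores, dist_eps=0.1, out_eps=0.05):
        # Count pairs closer than dist_eps in feature space whose model
        # scores nevertheless differ by more than out_eps. O(n^2), so
        # intended for a small audit sample.
        violations = 0
        for i in range(len(X)):
            d = np.linalg.norm(X[i + 1:] - X[i], axis=1)
            close = d < dist_eps
            gaps = np.abs(scores[i + 1:][close] - scores[i])
            violations += int(np.sum(gaps > out_eps))
        return violations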

8. Andrae, Silvio. "Fairness and Bias in Machine Learning Models for Credit Decisions." In Advances in Finance, Accounting, and Economics. IGI Global, 2025. https://doi.org/10.4018/979-8-3693-8186-1.ch001.

Abstract:
Machine learning (ML) models transform credit decision-making, improving efficiency, predictive accuracy, and scalability. However, these innovations raise critical concerns about fairness and bias. This chapter explores the complexities of applying ML in lending. Balancing fairness with predictive accuracy is a core challenge, often requiring trade-offs highlighting current ML systems' limitations. The analysis delves into statistical fairness metrics and examines how these metrics address—or exacerbate—the bias-fairness trade-off. It underscores the importance of transparency and interpretability…

9. Muralidhar, L. B., N. Sathyanarayana, H. R. Swapna, Sheetal V. Hukkeri, and P. H. Reshma Sultana. "Machine Learning Advancing Diversity Equity and Inclusion in Data-Driven HR Practices." In Advances in Human Resources Management and Organizational Development. IGI Global, 2025. https://doi.org/10.4018/979-8-3373-0149-5.ch019.

Abstract:
This work explores the transformative potential of Machine Learning (ML) in advancing Diversity, Equity, and Inclusion (DEI) within data-driven Human Resource (HR) practices. ML addresses systemic inequities in recruitment, promotions, and workplace engagement through tools that identify biases, automate fairness-focused evaluations, and analyse inclusion metrics. While offering unparalleled efficiency and scalability, ML introduces algorithmic bias, ethical complexities, and regulatory compliance challenges. Organisations can foster transparency, trust, and accountability by integrating fairness…

10. Cohen-Inger, Nurit, Guy Rozenblatt, Seffi Cohen, Lior Rokach, and Bracha Shapira. "FairUS - UpSampling Optimized Method for Boosting Fairness." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240585.

Abstract:
The increasing application of machine learning (ML) in critical areas such as healthcare and finance highlights the importance of fairness in ML models, challenged by biases in training data that can lead to discrimination. We introduce ‘FairUS’, a novel pre-processing method for reducing bias in ML models utilizing the Conditional Generative Adversarial Network (CTGAN) to synthesize upsampled data. Unlike traditional approaches that focus solely on balancing subgroup sample sizes, FairUS strategically optimizes the quantity of synthesized data. This optimization aims to achieve an ideal balance…
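
FairUS synthesizes upsampled rows with CTGAN and optimizes how many to add. The naive baseline it improves on, duplicating rows of an under-represented subgroup, fits in a few lines; the helper below is hypothetical and for illustration only.

    import pandas as pd

    def upsample_subgroup(df, group_col, label_col, group, label,
                          target_n, seed=0):
        # Randomly duplicate rows of one (group, label) subgroup until
        # it reaches target_n rows. FairUS instead adds CTGAN-generated
        # rows and tunes the amount to balance fairness against accuracy.
        sub = df[(df[group_col] == group) & (df[label_col] == label)]
        extra = sub.sample(n=max(0, target_n - len(sub)), replace=True,
                           random_state=seed)
        return pd.concat([df, extra], ignore_index=True)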

Conference papers on the topic "ML fairness"

1. Sabuncuoglu, Alpay, and Carsten Maple. "Towards Proactive Fairness Monitoring of ML Development Pipelines." In 2025 IEEE Symposium on Trustworthy, Explainable and Responsible Computational Intelligence (CITREx Companion). IEEE, 2025. https://doi.org/10.1109/citrexcompanion65208.2025.10981495.

2. She, Yining, Sumon Biswas, Christian Kästner, and Eunsuk Kang. "FairSense: Long-Term Fairness Analysis of ML-Enabled Systems." In 2025 IEEE/ACM 47th International Conference on Software Engineering (ICSE). IEEE, 2025. https://doi.org/10.1109/icse55347.2025.00159.

3. Hertweck, Corinna, Michele Loi, and Christoph Heitz. "Group Fairness Refocused: Assessing the Social Impact of ML Systems." In 2024 11th IEEE Swiss Conference on Data Science (SDS). IEEE, 2024. http://dx.doi.org/10.1109/sds60720.2024.00034.

4. Li, Zhiwei, Carl Kesselman, Mike D’Arcy, Michael Pazzani, and Benjamin Yizing Xu. "Deriva-ML: A Continuous FAIRness Approach to Reproducible Machine Learning Models." In 2024 IEEE 20th International Conference on e-Science (e-Science). IEEE, 2024. http://dx.doi.org/10.1109/e-science62913.2024.10678671.

5. Yu, Normen, Luciana Carreon, Gang Tan, and Saeid Tizpaz-Niari. "FairLay-ML: Intuitive Debugging of Fairness in Data-Driven Social-Critical Software." In 2025 IEEE/ACM 47th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). IEEE, 2025. https://doi.org/10.1109/icse-companion66252.2025.00016.

6. Robles Herrera, Salvador, Verya Monjezi, Vladik Kreinovich, Ashutosh Trivedi, and Saeid Tizpaz-Niari. "Predicting Fairness of ML Software Configurations." In PROMISE '24: 20th International Conference on Predictive Models and Data Analytics in Software Engineering. ACM, 2024. http://dx.doi.org/10.1145/3663533.3664040.

7. Smith, Jessie J., Michael Madaio, Robin Burke, and Casey Fiesler. "Pragmatic Fairness: Evaluating ML Fairness Within the Constraints of Industry." In FAccT '25: The 2025 ACM Conference on Fairness, Accountability, and Transparency. ACM, 2025. https://doi.org/10.1145/3715275.3732040.

8. Makhlouf, Karima, Sami Zhioua, and Catuscia Palamidessi. "Identifiability of Causal-based ML Fairness Notions." In 2022 14th International Conference on Computational Intelligence and Communication Networks (CICN). IEEE, 2022. http://dx.doi.org/10.1109/cicn56167.2022.10008263.

9. Baresi, Luciano, Chiara Criscuolo, and Carlo Ghezzi. "Understanding Fairness Requirements for ML-based Software." In 2023 IEEE 31st International Requirements Engineering Conference (RE). IEEE, 2023. http://dx.doi.org/10.1109/re57278.2023.00046.

10. Silva, Bruno Pires M., and Lilian Berton. "Analyzing the Trade-off Between Fairness and Model Performance in Supervised Learning: A Case Study in the MIMIC dataset." In Simpósio Brasileiro de Computação Aplicada à Saúde. Sociedade Brasileira de Computação - SBC, 2025. https://doi.org/10.5753/sbcas.2025.6994.

Abstract:
Fairness has become a key area in machine learning (ML), aiming to ensure equitable outcomes across demographic groups and mitigate biases. This study examines fairness in healthcare using the MIMIC-III dataset, comparing traditional and fair ML approaches in pre-, in-, and post-processing stages. Methods include Correlation Remover and Adversarial Learning from Fairlearn, and Equalized Odds Post-processing from AI Fairness 360. We evaluate performance (accuracy, F1-score) alongside fairness metrics (equal opportunity, equalized odds) considering different sensitive attributes. …
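
Since the abstract names the toolkits explicitly, here is how two fairness metrics of the kind evaluated in the study can be computed with Fairlearn. The function names below are from Fairlearn's public metrics API (verify against your installed version); the data is a toy stand-in, not MIMIC-III.

    # pip install fairlearn
    import numpy as np
    from fairlearn.metrics import (demographic_parity_difference,
                                   equalized_odds_difference)

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, 1000)   # toy labels, not MIMIC-III
    y_pred = rng.integers(0, 2, 1000)   # toy model predictions
    sex = rng.integers(0, 2, 1000)      # toy sensitive attribute

    print(demographic_parity_difference(y_true, y_pred,
                                        sensitive_features=sex))
    print(equalized_odds_difference(y_true, y_pred,
                                    sensitive_features=sex))

Equal opportunity, also evaluated in the study, is the true-positive-rate gap alone; Fairlearn's MetricFrame with recall as the base metric yields it per group.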