Academic literature on the topic 'ML fairness'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'ML fairness.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "ML fairness"

1

Weinberg, Lindsay. "Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches." Journal of Artificial Intelligence Research 74 (May 6, 2022): 75–109. http://dx.doi.org/10.1613/jair.1.13196.

Full text
Abstract:
This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions in machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society’s most marginalized. The article is organized according to nine major themes of critique wherein these different fields intersect: 1) how "fairness" in AI fairness research gets defined; 2) how problems for AI systems to address get formulated; 3) the impacts of abstraction on how AI tools function and its propensity to lead to technological solutionism; 4) how racial classification operates within AI fairness research; 5) the use of AI fairness measures to avoid regulation and engage in ethics washing; 6) an absence of participatory design and democratic deliberation in AI fairness considerations; 7) data collection practices that entrench “bias,” are non-consensual, and lack transparency; 8) the predatory inclusion of marginalized groups into AI systems; and 9) a lack of engagement with AI’s long-term social and ethical outcomes. Drawing from these critiques, the article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.
APA, Harvard, Vancouver, ISO, and other styles
2

Bærøe, Kristine, Torbjørn Gundersen, Edmund Henden, and Kjetil Rommetveit. "Can medical algorithms be fair? Three ethical quandaries and one dilemma." BMJ Health & Care Informatics 29, no. 1 (2022): e100445. http://dx.doi.org/10.1136/bmjhci-2021-100445.

Full text
Abstract:
Objective: To demonstrate what it takes to reconcile the idea of fairness in medical algorithms and machine learning (ML) with the broader discourse of fairness and health equality in health research. Method: The methodological approach used in this paper is theoretical and ethical analysis. Result: We show that the question of ensuring comprehensive ML fairness is interrelated to three quandaries and one dilemma. Discussion: As fairness in ML depends on a nexus of inherent justice and fairness concerns embedded in health research, a comprehensive conceptualisation is called for to make the notion useful. Conclusion: This paper demonstrates that more analytical work is needed to conceptualise fairness in ML so it adequately reflects the complexity of justice and fairness concerns within the field of health research.
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Yanjun, Huan Huang, Qiang Geng, Xinwei Guo, and Yuyu Yuan. "Fairness Measures of Machine Learning Models in Judicial Penalty Prediction." 網際網路技術學刊 (Journal of Internet Technology) 23, no. 5 (2022): 1109–16. http://dx.doi.org/10.53106/160792642022092305019.

Full text
Abstract:
Machine learning (ML) has been widely adopted in many software applications across domains. However, accompanying the outstanding performance, the behaviors of the ML models, which are essentially a kind of black-box software, could be unfair and hard to understand in many cases. In our human-centered society, an unfair decision could potentially damage human value, even causing severe social consequences, especially in decision-critical scenarios such as legal judgment. Although some existing works investigated the ML models in terms of robustness, accuracy, security, privacy, quality, etc., the study on the fairness of ML is still in the early stage. In this paper, we first propose a set of fairness metrics for ML models from different perspectives. Based on this, we perform a comparative study on the fairness of existing widely used classic ML and deep learning models in the domain of real-world judicial judgments. The experiment results reveal that the current state-of-the-art ML models could still raise concerns for unfair decision-making. ML models with both high accuracy and fairness are urgently needed.
APA, Harvard, Vancouver, ISO, and other styles
4

Alotaibi, Dalha Alhumaidi, Jianlong Zhou, Yifei Dong, Jia Wei, Xin Janet Ge, and Fang Chen. "Quantile Multi-Attribute Disparity (QMAD): An Adaptable Fairness Metric Framework for Dynamic Environments." Electronics 14, no. 8 (2025): 1627. https://doi.org/10.3390/electronics14081627.

Full text
Abstract:
Fairness is becoming indispensable for ethical machine learning (ML) applications. However, it remains a challenge to identify unfairness if there are changes in the distribution of underlying features among different groups and machine learning outputs. This paper proposes a novel fairness metric framework considering multiple attributes, including ML outputs and feature variations for bias detection. The framework comprises two principal components, comparison and aggregation functions, which collectively ensure fairness metrics with high adaptability across various contexts and scenarios. The comparison function evaluates individual variations in the attribute across different groups to generate a fairness score for an individual attribute, while the aggregation function combines individual fairness scores of single or multiple attributes for the overall fairness understanding. Both the comparison and aggregation functions can be customized based on the context for the optimized fairness evaluations. Three innovative comparison–aggregation function pairs are proposed to demonstrate the effectiveness and robustness of the novel framework. The novel framework underscores the importance of dynamic fairness, where ML systems are designed to adapt to changing societal norms and population demographics. The new metric can monitor bias as a dynamic fairness indicator for robustness in ML systems.
APA, Harvard, Vancouver, ISO, and other styles
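The abstract above describes a comparison function that scores a single attribute across groups and an aggregation function that combines per-attribute scores. As a rough illustration of that two-step structure only (not the authors' QMAD definitions), the sketch below uses a quantile-gap comparison and a weighted average as the aggregation; the function names, toy data, and weights are illustrative assumptions.

```python
import numpy as np

def quantile_gap_comparison(values, groups, q=(0.25, 0.5, 0.75)):
    """Compare one attribute across groups via the largest gap in its quantiles.

    Returns a score >= 0; 0 means the groups' quantiles coincide.
    """
    per_group = np.array([np.quantile(values[groups == g], q) for g in np.unique(groups)])
    return float(np.max(per_group.max(axis=0) - per_group.min(axis=0)))

def weighted_aggregation(scores, weights=None):
    """Combine per-attribute disparity scores into a single fairness indicator."""
    scores = np.asarray(scores, dtype=float)
    weights = np.ones_like(scores) if weights is None else np.asarray(weights, dtype=float)
    return float(np.average(scores, weights=weights))

# Toy usage: two attributes (a model score and a feature) observed for two groups.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=1000)
model_score = rng.normal(loc=0.5 + 0.1 * groups, scale=0.2, size=1000)
feature = rng.normal(loc=groups, scale=1.0, size=1000)

per_attribute = [quantile_gap_comparison(model_score, groups),
                 quantile_gap_comparison(feature, groups)]
print("overall disparity:", weighted_aggregation(per_attribute, weights=[2.0, 1.0]))
```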
5

Ghosh, Bishwamittra, Debabrota Basu, and Kuldeep S. Meel. "Algorithmic Fairness Verification with Graphical Models." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (2022): 9539–48. http://dx.doi.org/10.1609/aaai.v36i9.21187.

Full text
Abstract:
In recent years, machine learning (ML) algorithms have been deployed in safety-critical and high-stake decision-making, where the fairness of algorithms is of paramount importance. Fairness in ML centers on detecting bias towards certain demographic populations induced by an ML classifier and proposes algorithmic solutions to mitigate the bias with respect to different fairness definitions. To this end, several fairness verifiers have been proposed that compute the bias in the prediction of an ML classifier—essentially beyond a finite dataset—given the probability distribution of input features. In the context of verifying linear classifiers, existing fairness verifiers are limited by accuracy due to imprecise modeling of correlations among features and scalability due to restrictive formulations of the classifiers as SSAT/SMT formulas or by sampling. In this paper, we propose an efficient fairness verifier, called FVGM, that encodes the correlations among features as a Bayesian network. In contrast to existing verifiers, FVGM proposes a stochastic subset-sum based approach for verifying linear classifiers. Experimentally, we show that FVGM leads to an accurate and scalable assessment for more diverse families of fairness-enhancing algorithms, fairness attacks, and group/causal fairness metrics than the state-of-the-art fairness verifiers. We also demonstrate that FVGM facilitates the computation of fairness influence functions as a stepping stone to detect the source of bias induced by subsets of features.
APA, Harvard, Vancouver, ISO, and other styles
6

Kuzucu, Selim, Jiaee Cheong, Hatice Gunes, and Sinan Kalkan. "Uncertainty as a Fairness Measure." Journal of Artificial Intelligence Research 81 (October 13, 2024): 307–35. http://dx.doi.org/10.1613/jair.1.16041.

Full text
Abstract:
Unfair predictions of machine learning (ML) models impede their broad acceptance in real-world settings. Tackling this arduous challenge first necessitates defining what it means for an ML model to be fair. This has been addressed by the ML community with various measures of fairness that depend on the prediction outcomes of the ML models, either at the group level or the individual level. These fairness measures are limited in that they utilize point predictions, neglecting their variances, or uncertainties, making them susceptible to noise, missingness and shifts in data. In this paper, we first show that an ML model may appear to be fair with existing point-based fairness measures but biased against a demographic group in terms of prediction uncertainties. Then, we introduce new fairness measures based on different types of uncertainties, namely, aleatoric uncertainty and epistemic uncertainty. We demonstrate on many datasets that (i) our uncertainty-based measures are complementary to existing measures of fairness, and (ii) they provide more insights about the underlying issues leading to bias.
APA, Harvard, Vancouver, ISO, and other styles
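As a rough illustration of the idea of comparing prediction uncertainty across demographic groups (not the paper's aleatoric/epistemic measures), the sketch below uses the spread of a random forest's per-tree probabilities as a stand-in for epistemic uncertainty and reports the gap in mean uncertainty between two groups; the synthetic data and the simple "mean gap" summary are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data with a binary sensitive attribute stored in the last column.
rng = np.random.default_rng(1)
n = 2000
sensitive = rng.integers(0, 2, size=n)
X = np.column_stack([rng.normal(size=(n, 3)), sensitive])
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0)

# The variance of per-tree positive-class probabilities is used here as a crude
# stand-in for epistemic uncertainty of the ensemble on each test example.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
tree_probs = np.stack([t.predict_proba(X_te)[:, 1] for t in model.estimators_])
uncertainty = tree_probs.var(axis=0)

# Uncertainty-based "fairness gap": difference in mean uncertainty between groups.
gap = abs(uncertainty[s_te == 0].mean() - uncertainty[s_te == 1].mean())
print(f"group uncertainty gap: {gap:.4f}")
```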
7

Weerts, Hilde, Florian Pfisterer, Matthias Feurer, et al. "Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML." Journal of Artificial Intelligence Research 79 (February 17, 2024): 639–77. http://dx.doi.org/10.1613/jair.1.14747.

Full text
Abstract:
The field of automated machine learning (AutoML) introduces techniques that automate parts of the development of machine learning (ML) systems, accelerating the process and reducing barriers for novices. However, decisions derived from ML models can reproduce, amplify, or even introduce unfairness in our societies, causing harm to (groups of) individuals. In response, researchers have started to propose AutoML systems that jointly optimize fairness and predictive performance to mitigate fairness-related harm. However, fairness is a complex and inherently interdisciplinary subject, and solely posing it as an optimization problem can have adverse side effects. With this work, we aim to raise awareness among developers of AutoML systems about such limitations of fairness-aware AutoML, while also calling attention to the potential of AutoML as a tool for fairness research. We present a comprehensive overview of different ways in which fairness-related harm can arise and the ensuing implications for the design of fairness-aware AutoML. We conclude that while fairness cannot be automated, fairness-aware AutoML can play an important role in the toolbox of ML practitioners. We highlight several open technical challenges for future work in this direction. Additionally, we advocate for the creation of more user-centered assistive systems designed to tackle challenges encountered in fairness work. This article appears in the AI & Society track.
APA, Harvard, Vancouver, ISO, and other styles
8

Singh, Vivek K., and Kailash Joshi. "Integrating Fairness in Machine Learning Development Life Cycle: Fair CRISP-DM." e-Service Journal 14, no. 2 (2022): 1–24. http://dx.doi.org/10.2979/esj.2022.a886946.

Full text
Abstract:
Developing efficient processes for building machine learning (ML) applications is an emerging topic for research. One of the well-known frameworks for organizing, developing, and deploying predictive machine learning models is the Cross-Industry Standard Process for Data Mining (CRISP-DM). However, the framework does not provide any guidelines for detecting and mitigating different types of fairness-related biases in the development of ML applications. The study of these biases is a relatively recent stream of research. To address this significant theoretical and practical gap, we propose a new framework—Fair CRISP-DM, which groups and maps these biases corresponding to each phase of an ML application development. Through this study, we contribute to the literature on ML development and fairness. We present recommendations to ML researchers on including fairness as part of the ML evaluation process. Further, ML practitioners can use our framework to identify and mitigate fairness-related biases in each phase of an ML project development. Finally, we also discuss emerging technologies which can help developers to detect and mitigate biases in different stages of ML application development.
APA, Harvard, Vancouver, ISO, and other styles
9

Makhlouf, Karima, Sami Zhioua, and Catuscia Palamidessi. "On the Applicability of Machine Learning Fairness Notions." ACM SIGKDD Explorations Newsletter 23, no. 1 (2021): 14–23. http://dx.doi.org/10.1145/3468507.3468511.

Full text
Abstract:
Machine Learning (ML) based predictive systems are increasingly used to support decisions with a critical impact on individuals' lives such as college admission, job hiring, child custody, criminal risk assessment, etc. As a result, fairness emerged as an important requirement to guarantee that ML predictive systems do not discriminate against specific individuals or entire sub-populations, in particular, minorities. Given the inherent subjectivity of viewing the concept of fairness, several notions of fairness have been introduced in the literature. This paper is a survey of fairness notions that, unlike other surveys in the literature, addresses the question of "which notion of fairness is most suited to a given real-world scenario and why?". Our attempt to answer this question consists in (1) identifying the set of fairness-related characteristics of the real-world scenario at hand, (2) analyzing the behavior of each fairness notion, and then (3) fitting these two elements to recommend the most suitable fairness notion in every specific setup. The results are summarized in a decision diagram that can be used by practitioners and policy makers to navigate the relatively large catalogue of ML fairness notions.
APA, Harvard, Vancouver, ISO, and other styles
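Two of the notions such surveys compare, demographic parity and equal opportunity, can disagree on the same predictions. A minimal sketch, assuming binary labels and a binary sensitive attribute:

```python
import numpy as np

def demographic_parity_diff(y_pred, sensitive):
    """|P(Y_hat=1 | S=0) - P(Y_hat=1 | S=1)|: compares positive-prediction rates."""
    return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

def equal_opportunity_diff(y_true, y_pred, sensitive):
    """|TPR(S=0) - TPR(S=1)|: compares true-positive rates among actual positives."""
    tpr = lambda s: y_pred[(sensitive == s) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy example where the two notions disagree.
y_true    = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred    = np.array([1, 0, 1, 0, 1, 1, 0, 0])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("demographic parity diff:", demographic_parity_diff(y_pred, sensitive))
print("equal opportunity diff: ", equal_opportunity_diff(y_true, y_pred, sensitive))
```

In this toy example the classifier satisfies demographic parity exactly (difference 0) yet shows a 0.5 gap in true-positive rates, which is the kind of mismatch that motivates choosing a fairness notion to fit the scenario at hand.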
10

Zhou, Zijian, Xinyi Xu, Rachael Hwee Ling Sim, Chuan Sheng Foo, and Bryan Kian Hsiang Low. "Probably Approximate Shapley Fairness with Applications in Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 5 (2023): 5910–18. http://dx.doi.org/10.1609/aaai.v37i5.25732.

Full text
Abstract:
The Shapley value (SV) is adopted in various scenarios in machine learning (ML), including data valuation, agent valuation, and feature attribution, as it satisfies their fairness requirements. However, as exact SVs are infeasible to compute in practice, SV estimates are approximated instead. This approximation step raises an important question: do the SV estimates preserve the fairness guarantees of exact SVs? We observe that the fairness guarantees of exact SVs are too restrictive for SV estimates. Thus, we generalise Shapley fairness to probably approximate Shapley fairness and propose fidelity score, a metric to measure the variation of SV estimates, that determines how probable the fairness guarantees hold. Our last theoretical contribution is a novel greedy active estimation (GAE) algorithm that will maximise the lowest fidelity score and achieve a better fairness guarantee than the de facto Monte-Carlo estimation. We empirically verify GAE outperforms several existing methods in guaranteeing fairness while remaining competitive in estimation accuracy in various ML scenarios using real-world datasets.
APA, Harvard, Vancouver, ISO, and other styles
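The abstract above contrasts the proposed GAE algorithm with de facto Monte Carlo estimation. Below is a minimal sketch of that Monte Carlo permutation baseline only (not GAE or the fidelity score), with a toy coalition value function standing in for, e.g., validation accuracy as a function of contributed data points.

```python
import random

def monte_carlo_shapley(players, value_fn, n_permutations=2000, seed=0):
    """Estimate each player's Shapley value by averaging marginal contributions
    over random permutations (the standard Monte Carlo baseline)."""
    rng = random.Random(seed)
    estimates = {p: 0.0 for p in players}
    for _ in range(n_permutations):
        order = list(players)
        rng.shuffle(order)
        coalition = set()
        prev_value = value_fn(coalition)
        for p in order:
            coalition.add(p)
            new_value = value_fn(coalition)
            estimates[p] += (new_value - prev_value) / n_permutations
            prev_value = new_value
    return estimates

# Toy value function: a coalition's value is the square root of its size.
value = lambda coalition: len(coalition) ** 0.5
players = ["a", "b", "c", "d"]
print(monte_carlo_shapley(players, value))  # by symmetry, each estimate is near 0.5
```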

Dissertations / Theses on the topic "ML fairness"

1

Kaplan, Caelin. "Compromis inhérents à l'apprentissage automatique préservant la confidentialité." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4045.

Full text
Abstract:
As machine learning (ML) models are increasingly integrated into a wide range of applications, ensuring the privacy of individuals' data is becoming more important than ever. However, privacy-preserving ML techniques often result in reduced task-specific utility and may negatively impact other essential factors like fairness, robustness, and interpretability. These challenges have limited the widespread adoption of privacy-preserving methods. This thesis aims to address these challenges through two primary goals: (1) to deepen the understanding of key trade-offs in three privacy-preserving ML techniques—differential privacy, empirical privacy defenses, and federated learning; (2) to propose novel methods and algorithms that improve utility and effectiveness while maintaining privacy protections.
The first study in this thesis investigates how differential privacy impacts fairness across groups defined by sensitive attributes. While previous assumptions suggested that differential privacy could exacerbate unfairness in ML models, our experiments demonstrate that selecting an optimal model architecture and tuning hyperparameters for DP-SGD (Differentially Private Stochastic Gradient Descent) can mitigate fairness disparities. Using standard ML fairness datasets, we show that group disparities in metrics like demographic parity, equalized odds, and predictive parity are often reduced or remain negligible when compared to non-private baselines, challenging the prevailing notion that differential privacy worsens fairness for underrepresented groups.
The second study focuses on empirical privacy defenses, which aim to protect training data privacy while minimizing utility loss. Most existing defenses assume access to reference data—an additional dataset from the same or a similar distribution as the training data. However, previous works have largely neglected to evaluate the privacy risks associated with reference data. To address this, we conducted the first comprehensive analysis of reference data privacy in empirical defenses. We proposed a baseline defense method, Weighted Empirical Risk Minimization (WERM), which allows for a clearer understanding of the trade-offs between model utility, training data privacy, and reference data privacy. In addition to offering theoretical guarantees on model utility and the relative privacy of training and reference data, WERM consistently outperforms state-of-the-art empirical privacy defenses in nearly all relative privacy regimes.
The third study addresses the convergence-related trade-offs in Collaborative Inference Systems (CISs), which are increasingly used in the Internet of Things (IoT) to enable smaller nodes in a network to offload part of their inference tasks to more powerful nodes. While Federated Learning (FL) is often used to jointly train models within CISs, traditional methods have overlooked the operational dynamics of these systems, such as heterogeneity in serving rates across nodes. We propose a novel FL approach explicitly designed for CISs, which accounts for varying serving rates and uneven data availability. Our framework provides theoretical guarantees and consistently outperforms state-of-the-art algorithms, particularly in scenarios where end devices handle high inference request rates.
In conclusion, this thesis advances the field of privacy-preserving ML by addressing key trade-offs in differential privacy, empirical privacy defenses, and federated learning. The proposed methods provide new insights into balancing privacy with utility and other critical factors, offering practical solutions for integrating privacy-preserving techniques into real-world applications. These contributions aim to support the responsible and ethical deployment of AI technologies that prioritize data privacy and protection.
APA, Harvard, Vancouver, ISO, and other styles
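The first study above tunes DP-SGD. As a reminder of what distinguishes a DP-SGD update from plain SGD, here is a minimal numpy sketch of per-example gradient clipping followed by Gaussian noising; the hyperparameter values are placeholders, and this is not the thesis's implementation (which would normally rely on a library such as Opacus).

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One DP-SGD update: clip each example's gradient to `clip_norm`, average,
    then add Gaussian noise with std `noise_multiplier * clip_norm / batch_size`."""
    rng = np.random.default_rng(0) if rng is None else rng
    batch_size = len(per_example_grads)          # per_example_grads: shape (batch, dim)
    clipped = []
    for g in per_example_grads:                  # each g has shape (dim,)
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch_size,
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

# Toy usage: linear-model gradients for a batch of 4 examples in 3 dimensions.
params = np.zeros(3)
grads = np.random.default_rng(1).normal(size=(4, 3))
print(dp_sgd_step(params, grads))
```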

Book chapters on the topic "ML fairness"

1

Steif, Ken. "People-based ML Models: Algorithmic Fairness." In Public Policy Analytics. CRC Press, 2021. http://dx.doi.org/10.1201/9781003054658-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

d’Aloisio, Giordano, Antinisca Di Marco, and Giovanni Stilo. "Democratizing Quality-Based Machine Learning Development through Extended Feature Models." In Fundamental Approaches to Software Engineering. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30826-0_5.

Full text
Abstract:
ML systems have become an essential tool for experts of many domains, data scientists and researchers, allowing them to find answers to many complex business questions starting from raw datasets. Nevertheless, the development of ML systems able to satisfy the stakeholders’ needs requires an appropriate amount of knowledge about the ML domain. Over the years, several solutions have been proposed to automate the development of ML systems. However, an approach taking into account the new quality concerns needed by ML systems (like fairness, interpretability, privacy, and others) is still missing. In this paper, we propose a new engineering approach for the quality-based development of ML systems by realizing a workflow formalized as a Software Product Line through Extended Feature Models to generate an ML System satisfying the required quality constraints. The proposed approach leverages an experimental environment that applies all the settings to enhance a given Quality Attribute, and selects the best one. The experimental environment is general and can be used for future quality methods’ evaluations. Finally, we demonstrate the usefulness of our approach in the context of multi-class classification problem and fairness quality attribute.
APA, Harvard, Vancouver, ISO, and other styles
3

Roberts-Licklider, Karen, and Theodore Trafalis. "Fairness in Optimization and ML: A Survey Part 2." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-81010-7_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gonçalves, Rafael, Filipe Gouveia, Inês Lynce, and José Fragoso Santos. "Proxy Attribute Discovery in Machine Learning Datasets via Inductive Logic Programming." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-90653-4_17.

Full text
Abstract:
The issue of fairness is a well-known challenge in Machine Learning (ML) that has gained increased importance with the emergence of Large Language Models (LLMs) and generative AI. Algorithmic bias can manifest during the training of ML models due to the presence of sensitive attributes, such as gender or racial identity. One approach to mitigate bias is to avoid making decisions based on these protected attributes. However, indirect discrimination can still occur if sensitive information is inferred from proxy attributes. To prevent this, there is a growing interest in detecting potential proxy attributes before training ML models. In this case study, we report on the use of Inductive Logic Programming (ILP) to discover proxy attributes in training datasets, with a focus on the ML classification problem. While ILP has established applications in program synthesis and data curation, we demonstrate that it can also advance the state of the art in proxy attribute discovery by removing the need for prior domain knowledge. Our evaluation shows that this approach is effective at detecting potential sources of indirect discrimination, having successfully identified proxy attributes in several well-known datasets used in fairness-awareness studies.
APA, Harvard, Vancouver, ISO, and other styles
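A simpler, commonly used screen for the same problem the chapter tackles is to flag any feature that by itself predicts the sensitive attribute well. The sketch below does this with scikit-learn; it is not the chapter's ILP approach, and the column names, toy data, and threshold are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def find_proxy_candidates(df, sensitive_col, threshold=0.75):
    """Flag features whose values alone predict the sensitive attribute with
    balanced accuracy above `threshold` (a crude proxy-attribute screen)."""
    y = df[sensitive_col]
    candidates = {}
    for col in df.columns.drop(sensitive_col):
        X = pd.get_dummies(df[[col]], drop_first=True)   # handles categorical columns
        score = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                                cv=5, scoring="balanced_accuracy").mean()
        if score >= threshold:
            candidates[col] = round(float(score), 3)
    return candidates

# Toy data: `zip_code` is a strong proxy for the sensitive attribute, `age` is not.
rng = np.random.default_rng(0)
n = 1000
sensitive = rng.integers(0, 2, size=n)
df = pd.DataFrame({
    "sensitive": sensitive,
    "zip_code": np.where(rng.random(n) < 0.9, sensitive, 1 - sensitive),
    "age": rng.integers(18, 80, size=n),
})
print(find_proxy_candidates(df, "sensitive"))
```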
5

Silva, Inês Oliveira e., Carlos Soares, Inês Sousa, and Rayid Ghani. "Systematic Analysis of the Impact of Label Noise Correction on ML Fairness." In Lecture Notes in Computer Science. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8391-9_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chopra, Deepti, and Roopal Khurana. "Bias and Fairness in ML." In Introduction to Machine Learning with Python. BENTHAM SCIENCE PUBLISHERS, 2023. http://dx.doi.org/10.2174/9789815124422123010012.

Full text
Abstract:
In machine learning and AI, future predictions are based on past observations, and bias is based on prior information. Harmful biases occur because of human biases which are learned by an algorithm from the training data. In the previous chapter, we discussed training versus testing, bounding the testing error, and VC dimension. In this chapter, we will discuss bias and fairness.
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Wenbin, Zichong Wang, Juyong Kim, et al. "Individual Fairness Under Uncertainty." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230621.

Full text
Abstract:
Algorithmic fairness, the research field of making machine learning (ML) algorithms fair, is an established area in ML. As ML technologies expand their application domains, including ones with high societal impact, it becomes essential to take fairness into consideration during the building of ML systems. Yet, despite its wide range of socially sensitive applications, most work treats the issue of algorithmic bias as an intrinsic property of supervised learning, i.e., the class label is given as a precondition. Unlike prior studies in fairness, we propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels, while enforcing similar individuals to be treated similarly from a ranking perspective, free of the Lipschitz condition in the conventional individual fairness definition. We argue that this perspective represents a more realistic model of fairness research for real-world application deployment and show how learning with such a relaxed precondition draws new insights that better explains algorithmic fairness. We conducted experiments on four real-world datasets to evaluate our proposed method compared to other fairness models, demonstrating its superiority in minimizing discrimination while maintaining predictive performance with uncertainty present.
APA, Harvard, Vancouver, ISO, and other styles
8

Andrae, Silvio. "Fairness and Bias in Machine Learning Models for Credit Decisions." In Advances in Finance, Accounting, and Economics. IGI Global, 2025. https://doi.org/10.4018/979-8-3693-8186-1.ch001.

Full text
Abstract:
Machine learning (ML) models transform credit decision-making, improving efficiency, predictive accuracy, and scalability. However, these innovations raise critical concerns about fairness and bias. This chapter explores the complexities of applying ML in lending. Balancing fairness with predictive accuracy is a core challenge, often requiring trade-offs highlighting current ML systems' limitations. The analysis delves into statistical fairness metrics and examines how these metrics address—or exacerbate—the bias-fairness trade-off. It underscores the importance of transparency and interpretability in ML models, as these are not just desirable but necessary for fostering trust and making these systems comprehensible without sacrificing performance. Empirical studies from the US and UK underscore the importance of data quality, model design, and interdisciplinary collaboration in mitigating bias. This chapter addresses technical and societal dimensions and provides actionable insights for leveraging ML responsibly in lending while minimizing ethical and economic risks.
APA, Harvard, Vancouver, ISO, and other styles
9

Muralidhar, L. B., N. Sathyanarayana, H. R. Swapna, Sheetal V. Hukkeri, and P. H. Reshma Sultana. "Machine Learning Advancing Diversity Equity and Inclusion in Data-Driven HR Practices." In Advances in Human Resources Management and Organizational Development. IGI Global, 2025. https://doi.org/10.4018/979-8-3373-0149-5.ch019.

Full text
Abstract:
This work explores the transformative potential of Machine Learning (ML) in advancing Diversity, Equity, and Inclusion (DEI) within data-driven Human Resource (HR) practices. ML addresses systemic inequities in recruitment, promotions, and workplace engagement through tools that identify biases, automate fairness-focused evaluations, and analyse inclusion metrics. While offering unparalleled efficiency and scalability, ML introduces algorithmic bias, ethical complexities, and regulatory compliance challenges. Organisations can foster transparency, trust, and accountability by integrating fairness-aware ML models, real-time DEI dashboards, and federated learning. This framework underscores a human-centred, collaborative approach, ensuring ML aligns with DEI principles and promoting equitable workplace cultures globally. The study highlights best practices, ethical considerations, and interdisciplinary strategies to balance innovation with fairness, paving the way for a future where technology drives inclusivity.
APA, Harvard, Vancouver, ISO, and other styles
10

Cohen-Inger, Nurit, Guy Rozenblatt, Seffi Cohen, Lior Rokach, and Bracha Shapira. "FairUS - UpSampling Optimized Method for Boosting Fairness." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240585.

Full text
Abstract:
The increasing application of machine learning (ML) in critical areas such as healthcare and finance highlights the importance of fairness in ML models, challenged by biases in training data that can lead to discrimination. We introduce ‘FairUS’, a novel pre-processing method for reducing bias in ML models utilizing the Conditional Generative Adversarial Network (CTGAN) to synthesize upsampled data. Unlike traditional approaches that focus solely on balancing subgroup sample sizes, FairUS strategically optimizes the quantity of synthesized data. This optimization aims to achieve an ideal balance between enhancing fairness and maintaining the overall performance of the model. Extensive evaluations of our method over several canonical datasets show that the proposed method enhances fairness by 2.7 times more than the related work and 4 times more than the baseline without mitigation, while preserving the performance of the ML model. Moreover, less than a third of the amount of synthetic data was needed on average. Uniquely, the proposed method enables decision-makers to choose the working point between improved fairness and model’s performance according to their preferences.
APA, Harvard, Vancouver, ISO, and other styles
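FairUS's contribution is optimizing how much synthetic data to add; as a minimal illustration of the underlying pre-processing step only, the sketch below fits the open-source ctgan package on an under-represented subgroup and appends synthetic rows (assuming ctgan's CTGAN fit/sample interface). The subgroup definition, column names, and sample size are hypothetical, and no quantity optimization is performed.

```python
import pandas as pd
from ctgan import CTGAN  # pip install ctgan

def upsample_subgroup(df, subgroup_mask, n_synthetic, discrete_columns, epochs=100):
    """Fit CTGAN on an under-represented subgroup and append synthetic rows."""
    subgroup = df[subgroup_mask]
    model = CTGAN(epochs=epochs)
    model.fit(subgroup, discrete_columns)
    synthetic = model.sample(n_synthetic)
    return pd.concat([df, synthetic], ignore_index=True)

# Hypothetical usage on a tabular credit dataset with a sensitive 'gender' column:
# df = pd.read_csv("credit.csv")
# mask = (df["gender"] == "female") & (df["label"] == 1)   # under-represented subgroup
# augmented = upsample_subgroup(df, mask, n_synthetic=500,
#                               discrete_columns=["gender", "label", "housing"])
```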

Conference papers on the topic "ML fairness"

1

Sabuncuoglu, Alpay, and Carsten Maple. "Towards Proactive Fairness Monitoring of ML Development Pipelines." In 2025 IEEE Symposium on Trustworthy, Explainable and Responsible Computational Intelligence (CITREx Companion). IEEE, 2025. https://doi.org/10.1109/citrexcompanion65208.2025.10981495.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

She, Yining, Sumon Biswas, Christian Kästner, and Eunsuk Kang. "FairSense: Long-Term Fairness Analysis of ML-Enabled Systems." In 2025 IEEE/ACM 47th International Conference on Software Engineering (ICSE). IEEE, 2025. https://doi.org/10.1109/icse55347.2025.00159.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hertweck, Corinna, Michele Loi, and Christoph Heitz. "Group Fairness Refocused: Assessing the Social Impact of ML Systems." In 2024 11th IEEE Swiss Conference on Data Science (SDS). IEEE, 2024. http://dx.doi.org/10.1109/sds60720.2024.00034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Zhiwei, Carl Kesselman, Mike D’Arcy, Michael Pazzani, and Benjamin Yizing Xu. "Deriva-ML: A Continuous FAIRness Approach to Reproducible Machine Learning Models." In 2024 IEEE 20th International Conference on e-Science (e-Science). IEEE, 2024. http://dx.doi.org/10.1109/e-science62913.2024.10678671.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yu, Normen, Luciana Carreon, Gang Tan, and Saeid Tizpaz-Niari. "FairLay-ML: Intuitive Debugging of Fairness in Data-Driven Social-Critical Software." In 2025 IEEE/ACM 47th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). IEEE, 2025. https://doi.org/10.1109/icse-companion66252.2025.00016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Robles Herrera, Salvador, Verya Monjezi, Vladik Kreinovich, Ashutosh Trivedi, and Saeid Tizpaz-Niari. "Predicting Fairness of ML Software Configurations." In PROMISE '24: 20th International Conference on Predictive Models and Data Analytics in Software Engineering. ACM, 2024. http://dx.doi.org/10.1145/3663533.3664040.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Smith, Jessie J., Michael Madaio, Robin Burke, and Casey Fiesler. "Pragmatic Fairness: Evaluating ML Fairness Within the Constraints of Industry." In FAccT '25: The 2025 ACM Conference on Fairness, Accountability, and Transparency. ACM, 2025. https://doi.org/10.1145/3715275.3732040.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Makhlouf, Karima, Sami Zhioua, and Catuscia Palamidessi. "Identifiability of Causal-based ML Fairness Notions." In 2022 14th International Conference on Computational Intelligence and Communication Networks (CICN). IEEE, 2022. http://dx.doi.org/10.1109/cicn56167.2022.10008263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Baresi, Luciano, Chiara Criscuolo, and Carlo Ghezzi. "Understanding Fairness Requirements for ML-based Software." In 2023 IEEE 31st International Requirements Engineering Conference (RE). IEEE, 2023. http://dx.doi.org/10.1109/re57278.2023.00046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Silva, Bruno Pires M., and Lilian Berton. "Analyzing the Trade-off Between Fairness and Model Performance in Supervised Learning: A Case Study in the MIMIC dataset." In Simpósio Brasileiro de Computação Aplicada à Saúde. Sociedade Brasileira de Computação - SBC, 2025. https://doi.org/10.5753/sbcas.2025.6994.

Full text
Abstract:
Fairness has become a key area in machine learning (ML), aiming to ensure equitable outcomes across demographic groups and mitigate biases. This study examines fairness in healthcare using the MIMIC III dataset, comparing traditional and fair ML approaches in pre, in, and post-processing stages. Methods include Correlation Remover and Adversarial Learning from Fairlearn, and Equalized Odds Post-processing from AI Fairness 360. We evaluate performance (accuracy, F1-score) alongside fairness metrics (equal opportunity, equalized odds) considering different sensitive attributes. Notably, Equalized Odds Post-processing improved fairness with less performance loss, highlighting the trade-off between fairness and predictive accuracy in healthcare models.
APA, Harvard, Vancouver, ISO, and other styles
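As a minimal sketch of the kind of pipeline the case study evaluates, the code below trains a plain classifier on synthetic data, applies Fairlearn's ThresholdOptimizer with an equalized-odds constraint, and compares the equalized-odds difference before and after post-processing. It substitutes Fairlearn's post-processor for the AIF360 Equalized Odds Post-processing used in the paper and uses synthetic data rather than MIMIC-III.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from fairlearn.postprocessing import ThresholdOptimizer
from fairlearn.metrics import equalized_odds_difference

# Synthetic stand-in for a clinical dataset with a binary sensitive attribute.
rng = np.random.default_rng(0)
n = 4000
sensitive = rng.integers(0, 2, size=n)
X = np.column_stack([rng.normal(size=(n, 4)), sensitive])
y = ((X[:, 0] + 0.8 * sensitive + rng.normal(scale=0.7, size=n)) > 0.5).astype(int)
X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0)

# Baseline classifier and its equalized-odds gap on the test set.
base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("before:", equalized_odds_difference(y_te, base.predict(X_te),
                                           sensitive_features=s_te))

# Equalized-odds post-processing of the fitted baseline.
postproc = ThresholdOptimizer(estimator=base, constraints="equalized_odds",
                              prefit=True, predict_method="predict_proba")
postproc.fit(X_tr, y_tr, sensitive_features=s_tr)
y_fair = postproc.predict(X_te, sensitive_features=s_te, random_state=0)
print("after: ", equalized_odds_difference(y_te, y_fair, sensitive_features=s_te))
```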