Journal articles on the topic 'ML fairness'

Consult the top 50 journal articles for your research on the topic 'ML fairness.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Weinberg, Lindsay. "Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches." Journal of Artificial Intelligence Research 74 (May 6, 2022): 75–109. http://dx.doi.org/10.1613/jair.1.13196.

Full text
Abstract:
This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions in machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society’s most marginalized. The article is organized according to ni
APA, Harvard, Vancouver, ISO, and other styles
2

Bærøe, Kristine, Torbjørn Gundersen, Edmund Henden, and Kjetil Rommetveit. "Can medical algorithms be fair? Three ethical quandaries and one dilemma." BMJ Health & Care Informatics 29, no. 1 (2022): e100445. http://dx.doi.org/10.1136/bmjhci-2021-100445.

Full text
Abstract:
Objective: To demonstrate what it takes to reconcile the idea of fairness in medical algorithms and machine learning (ML) with the broader discourse of fairness and health equality in health research. Method: The methodological approach used in this paper is theoretical and ethical analysis. Result: We show that the question of ensuring comprehensive ML fairness is interrelated to three quandaries and one dilemma. Discussion: As fairness in ML depends on a nexus of inherent justice and fairness concerns embedded in health research, a comprehensive conceptualisation is called for to make the notion useful
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Yanjun, Huan Huang, Qiang Geng, Xinwei Guo, and Yuyu Yuan. "Fairness Measures of Machine Learning Models in Judicial Penalty Prediction." Journal of Internet Technology 23, no. 5 (2022): 1109–16. http://dx.doi.org/10.53106/160792642022092305019.

Full text
Abstract:
Machine learning (ML) has been widely adopted in many software applications across domains. However, accompanying the outstanding performance, the behaviors of the ML models, which are essentially a kind of black-box software, could be unfair and hard to understand in many cases. In our human-centered society, an unfair decision could potentially damage human value, even causing severe social consequences, especially in decision-critical scenarios such as legal judgment. Although some existing works investigated the ML models in terms of robustness, accuracy, security, privacy, qualit
APA, Harvard, Vancouver, ISO, and other styles
4

Alotaibi, Dalha Alhumaidi, Jianlong Zhou, Yifei Dong, Jia Wei, Xin Janet Ge, and Fang Chen. "Quantile Multi-Attribute Disparity (QMAD): An Adaptable Fairness Metric Framework for Dynamic Environments." Electronics 14, no. 8 (2025): 1627. https://doi.org/10.3390/electronics14081627.

Full text
Abstract:
Fairness is becoming indispensable for ethical machine learning (ML) applications. However, it remains a challenge to identify unfairness if there are changes in the distribution of underlying features among different groups and machine learning outputs. This paper proposes a novel fairness metric framework considering multiple attributes, including ML outputs and feature variations for bias detection. The framework comprises two principal components, comparison and aggregation functions, which collectively ensure fairness metrics with high adaptability across various contexts and scenarios. T
APA, Harvard, Vancouver, ISO, and other styles
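To make the comparison-then-aggregation idea in entry 4 concrete, here is a generic, hypothetical sketch of a quantile-based disparity check between two groups' model outputs. The synthetic data and quantile grid are assumptions for illustration; this is not the authors' QMAD metric.

```python
# Illustrative only: a generic quantile-based disparity check between two groups'
# model outputs. NOT the QMAD framework; data and quantile grid are synthetic.
import numpy as np

rng = np.random.default_rng(0)
scores_a = rng.beta(2.0, 5.0, size=500)   # model outputs for group A (synthetic)
scores_b = rng.beta(2.5, 5.0, size=500)   # model outputs for group B (synthetic)

quantiles = np.linspace(0.1, 0.9, 9)
gaps = np.abs(np.quantile(scores_a, quantiles) -
              np.quantile(scores_b, quantiles))   # "comparison" step
disparity = gaps.max()                            # "aggregation" step
print(f"max quantile gap between groups: {disparity:.3f}")
```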
5

Ghosh, Bishwamittra, Debabrota Basu, and Kuldeep S. Meel. "Algorithmic Fairness Verification with Graphical Models." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (2022): 9539–48. http://dx.doi.org/10.1609/aaai.v36i9.21187.

Full text
Abstract:
In recent years, machine learning (ML) algorithms have been deployed in safety-critical and high-stake decision-making, where the fairness of algorithms is of paramount importance. Fairness in ML centers on detecting bias towards certain demographic populations induced by an ML classifier and proposes algorithmic solutions to mitigate the bias with respect to different fairness definitions. To this end, several fairness verifiers have been proposed that compute the bias in the prediction of an ML classifier—essentially beyond a finite dataset—given the probability distribution of input feature
APA, Harvard, Vancouver, ISO, and other styles
6

Kuzucu, Selim, Jiaee Cheong, Hatice Gunes, and Sinan Kalkan. "Uncertainty as a Fairness Measure." Journal of Artificial Intelligence Research 81 (October 13, 2024): 307–35. http://dx.doi.org/10.1613/jair.1.16041.

Full text
Abstract:
Unfair predictions of machine learning (ML) models impede their broad acceptance in real-world settings. Tackling this arduous challenge first necessitates defining what it means for an ML model to be fair. This has been addressed by the ML community with various measures of fairness that depend on the prediction outcomes of the ML models, either at the group-level or the individual-level. These fairness measures are limited in that they utilize point predictions, neglecting their variances, or uncertainties, making them susceptible to noise, missingness and shifts in data. In this paper, we f
APA, Harvard, Vancouver, ISO, and other styles
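As a rough illustration of the idea in entry 6 that prediction uncertainty, not just point predictions, can be compared across groups, the sketch below contrasts group-level ensemble disagreement. The data, the derived group indicator, and the bagging setup are assumptions; this is a generic reading of uncertainty-based fairness, not the paper's measure.

```python
# Illustrative only: comparing group-level predictive uncertainty (ensemble
# disagreement) rather than point predictions. Synthetic data, not the paper's measure.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
group = (X[:, 0] > 0).astype(int)   # stand-in for a protected attribute

ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                             random_state=0).fit(X, y)
# Spread of per-member probabilities = a simple per-individual uncertainty proxy.
member_probs = np.stack([m.predict_proba(X)[:, 1] for m in ensemble.estimators_])
uncertainty = member_probs.std(axis=0)

for g in (0, 1):
    print(f"group {g}: mean predictive std = {uncertainty[group == g].mean():.3f}")
```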
7

Weerts, Hilde, Florian Pfisterer, Matthias Feurer, et al. "Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML." Journal of Artificial Intelligence Research 79 (February 17, 2024): 639–77. http://dx.doi.org/10.1613/jair.1.14747.

Full text
Abstract:
The field of automated machine learning (AutoML) introduces techniques that automate parts of the development of machine learning (ML) systems, accelerating the process and reducing barriers for novices. However, decisions derived from ML models can reproduce, amplify, or even introduce unfairness in our societies, causing harm to (groups of) individuals. In response, researchers have started to propose AutoML systems that jointly optimize fairness and predictive performance to mitigate fairness-related harm. However, fairness is a complex and inherently interdisciplinary subject, and solely p
APA, Harvard, Vancouver, ISO, and other styles
8

Singh, Vivek K., and Kailash Joshi. "Integrating Fairness in Machine Learning Development Life Cycle: Fair CRISP-DM." e-Service Journal 14, no. 2 (2022): 1–24. http://dx.doi.org/10.2979/esj.2022.a886946.

Full text
Abstract:
Developing efficient processes for building machine learning (ML) applications is an emerging topic for research. One of the well-known frameworks for organizing, developing, and deploying predictive machine learning models is the cross-industry standard process for data mining (CRISP-DM). However, the framework does not provide any guidelines for detecting and mitigating different types of fairness-related biases in the development of ML applications. The study of these biases is a relatively recent stream of research. To address this significant theoretical and practical gap, we propose a new
APA, Harvard, Vancouver, ISO, and other styles
9

Makhlouf, Karima, Sami Zhioua, and Catuscia Palamidessi. "On the Applicability of Machine Learning Fairness Notions." ACM SIGKDD Explorations Newsletter 23, no. 1 (2021): 14–23. http://dx.doi.org/10.1145/3468507.3468511.

Full text
Abstract:
Machine Learning (ML) based predictive systems are increasingly used to support decisions with a critical impact on individuals' lives such as college admission, job hiring, child custody, criminal risk assessment, etc. As a result, fairness emerged as an important requirement to guarantee that ML predictive systems do not discriminate against specific individuals or entire sub-populations, in particular, minorities. Given the inherent subjectivity of viewing the concept of fairness, several notions of fairness have been introduced in the literature. This paper is a survey of fairness notions
APA, Harvard, Vancouver, ISO, and other styles
10

Zhou, Zijian, Xinyi Xu, Rachael Hwee Ling Sim, Chuan Sheng Foo, and Bryan Kian Hsiang Low. "Probably Approximate Shapley Fairness with Applications in Machine Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 5 (2023): 5910–18. http://dx.doi.org/10.1609/aaai.v37i5.25732.

Full text
Abstract:
The Shapley value (SV) is adopted in various scenarios in machine learning (ML), including data valuation, agent valuation, and feature attribution, as it satisfies their fairness requirements. However, as exact SVs are infeasible to compute in practice, SV estimates are approximated instead. This approximation step raises an important question: do the SV estimates preserve the fairness guarantees of exact SVs? We observe that the fairness guarantees of exact SVs are too restrictive for SV estimates. Thus, we generalise Shapley fairness to probably approximate Shapley fairness and propose fide
APA, Harvard, Vancouver, ISO, and other styles
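Entry 10 concerns approximating Shapley values because exact computation is infeasible in practice. Below is a minimal permutation-sampling Monte Carlo estimator for feature attribution on a toy model; it illustrates the standard approximation step the paper analyses, not the authors' fidelity-guaranteed estimator, and the model, baseline, and sample counts are assumptions.

```python
# Illustrative only: plain Monte Carlo (permutation-sampling) Shapley estimates for
# feature attribution on a toy model; not the paper's proposed estimator.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = LinearRegression().fit(X, y)
baseline = X.mean(axis=0)   # reference values for "absent" features
x = X[0]                    # instance to explain
rng = np.random.default_rng(0)

n_features, n_samples = X.shape[1], 2000
phi = np.zeros(n_features)
for _ in range(n_samples):
    perm = rng.permutation(n_features)
    z = baseline.copy()
    prev = model.predict(z.reshape(1, -1))[0]
    for j in perm:
        z[j] = x[j]                          # add feature j to the coalition
        curr = model.predict(z.reshape(1, -1))[0]
        phi[j] += (curr - prev) / n_samples  # average marginal contribution
        prev = curr

print("estimated Shapley attributions:", np.round(phi, 3))
```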
11

Sreerama, Jeevan, and Gowrisankar Krishnamoorthy. "Ethical Considerations in AI Addressing Bias and Fairness in Machine Learning Models." Journal of Knowledge Learning and Science Technology 1, no. 1 (2022): 130–38. http://dx.doi.org/10.60087/jklst.vol1.n1.p138.

Full text
Abstract:
The proliferation of artificial intelligence (AI) and machine learning (ML) technologies has brought about unprecedented advancements in various domains. However, concerns surrounding bias and fairness in ML models have gained significant attention, raising ethical considerations that must be addressed. This paper explores the ethical implications of bias in AI systems and the importance of ensuring fairness in ML models. It examines the sources of bias in data collection, algorithm design, and decision-making processes, highlighting the potential consequences of biased AI systems on individua
APA, Harvard, Vancouver, ISO, and other styles
12

Blow, Christina Hastings, Lijun Qian, Camille Gibson, Pamela Obiomon, and Xishuang Dong. "Comprehensive Validation on Reweighting Samples for Bias Mitigation via AIF360." Applied Sciences 14, no. 9 (2024): 3826. http://dx.doi.org/10.3390/app14093826.

Full text
Abstract:
Fairness Artificial Intelligence (AI) aims to identify and mitigate bias throughout the AI development process, spanning data collection, modeling, assessment, and deployment—a critical facet of establishing trustworthy AI systems. Tackling data bias through techniques like reweighting samples proves effective for promoting fairness. This paper undertakes a systematic exploration of reweighting samples for conventional Machine-Learning (ML) models, utilizing five models for binary classification on datasets such as Adult Income and COMPAS, incorporating various protected attributes. In particu
APA, Harvard, Vancouver, ISO, and other styles
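A minimal sketch of the reweighting idea validated in entry 12, assuming AIF360 is installed; the toy data frame, column names, and privileged/unprivileged encodings are invented for illustration and do not reproduce the paper's experiments.

```python
# A minimal sketch of pre-processing reweighting with AIF360, assuming the library
# is installed. The toy data, column names, and group encodings are invented.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "sex":   [1, 1, 0, 0, 1, 0, 0, 1],   # 1 = privileged, 0 = unprivileged (assumed)
    "score": [0.9, 0.4, 0.2, 0.8, 0.7, 0.1, 0.6, 0.3],
    "label": [1, 0, 0, 1, 1, 0, 0, 1],
})
dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])

priv, unpriv = [{"sex": 1}], [{"sex": 0}]
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
print("statistical parity difference before:", metric.statistical_parity_difference())

rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
reweighted = rw.fit_transform(dataset)        # assigns per-instance weights
print("instance weights:", reweighted.instance_weights)
```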
13

Ajarra, Ayoub, Bishwamittra Ghosh, and Debabrota Basu. "Active Fourier Auditor for Estimating Distributional Properties of ML Models." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 15 (2025): 15330–38. https://doi.org/10.1609/aaai.v39i15.33682.

Full text
Abstract:
With the pervasive deployment of Machine Learning (ML) models in real-world applications, verifying and auditing properties of ML models have become a central concern. In this work, we focus on three properties: robustness, individual fairness, and group fairness. We discuss two approaches for auditing ML model properties: estimation with and without reconstruction of the target model under audit. Though the first approach is studied in the literature, the second approach remains unexplored. For this purpose, we develop a new framework that quantifies different properties in terms of the Fouri
APA, Harvard, Vancouver, ISO, and other styles
14

Ravichandran, Nischal, Anil Chowdary Inaganti, Senthil Kumar Sundaramurthy, and Rajendra Muppalaneni. "Bias and Fairness in Machine Learning: A Systematic Review of Mitigation Techniques." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 9, no. 2 (2018): 753–87. https://doi.org/10.61841/turcomat.v9i2.15141.

Full text
Abstract:
Bias and fairness in machine learning (ML) algorithms are critical concerns that impact decision-making processes across various domains, including healthcare, finance, and criminal justice. This systematic review explores the state-of-the-art mitigation techniques employed to address bias and ensure fairness in ML systems. The review identifies and categorizes methods into pre-processing, in-processing, and post-processing strategies, while analyzing their effectiveness and limitations. Key findings indicate that although significant progress has been made, challenges remain in balancing fair
APA, Harvard, Vancouver, ISO, and other styles
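Entry 14 categorises mitigation into pre-, in-, and post-processing strategies. As one hypothetical example of the post-processing family, the sketch below picks group-specific thresholds so that selection rates roughly match a target; the scores, groups, and target rate are synthetic and not drawn from the review.

```python
# Illustrative only: a toy post-processing step -- group-specific thresholds chosen
# so that selection rates roughly match a target. Synthetic scores and groups.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.random(1000)                 # model scores in [0, 1]
group = rng.integers(0, 2, size=1000)     # synthetic group membership
scores[group == 1] *= 0.8                 # induce a score disparity

target_rate = 0.3
thresholds = {g: np.quantile(scores[group == g], 1 - target_rate) for g in (0, 1)}
selected = scores >= np.where(group == 1, thresholds[1], thresholds[0])

for g in (0, 1):
    print(f"group {g}: selection rate = {selected[group == g].mean():.2f}")
```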
15

Saha, Sanjit Kumar. "A Comparative Analysis of Logistic Regression and Random Forest for Individual Fairness in Machine Learning." International Journal of Advanced Engineering Research and Science 12, no. 5 (2025): 33–37. https://doi.org/10.22161/ijaers.125.5.

Full text
Abstract:
In high-stakes domains such as finance, healthcare, and criminal justice, machine learning (ML) systems must balance predictive performance with fairness and transparency. This paper presents a comparative analysis of two widely used ML models, logistic regression and random forest, evaluated through the lens of individual fairness. Using the UCI Adult Income and COMPAS datasets, we assess performance in terms of accuracy, F1 score, individual consistency, and disparate treatment. Our findings indicate that while random forests offer marginally higher accuracy (by approximately 1%), logistic r
APA, Harvard, Vancouver, ISO, and other styles
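Entry 15 evaluates models on individual consistency. A common way to operationalise this is the k-nearest-neighbour consistency score (similar individuals should receive similar predictions); the sketch below computes that generic metric on synthetic data and is not the paper's exact protocol.

```python
# Illustrative only: k-nearest-neighbour "consistency" as an individual-fairness
# proxy; dataset, model, and k are arbitrary choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
pred = LogisticRegression(max_iter=1000).fit(X, y).predict(X)

k = 5
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1: each point is its own neighbour
_, idx = nn.kneighbors(X)
consistency = 1 - np.mean(np.abs(pred[:, None] - pred[idx[:, 1:]]))
print(f"consistency score: {consistency:.3f}")     # 1.0 = perfectly consistent
```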
16

Chappidi, Shreya, and Andra V. Krauze. "Abstract B003: Towards machine learning fairness in glioblastoma: An evaluation of protected attributes in publicly available clinical datasets." Clinical Cancer Research 31, no. 13_Supplement (2025): B003. https://doi.org/10.1158/1557-3265.aimachine-b003.

Full text
Abstract:
Introduction: As machine learning (ML) algorithms are increasingly developed for clinical applications, there are growing concerns over the real-world benefits of ML-assisted decision-making applications. These issues include reduced generalizability across medical institutions, lack of clinician uptake during algorithmic deployment, and observed disparate performances across various demographics, including race, gender, and socioeconomic status. Glioblastoma (GBM) is a rare brain cancer with poor outcomes and few publicly available datasets, resulting in limited opportunities for alg
APA, Harvard, Vancouver, ISO, and other styles
17

Pessach, Dana, and Erez Shmueli. "A Review on Fairness in Machine Learning." ACM Computing Surveys 55, no. 3 (2023): 1–44. http://dx.doi.org/10.1145/3494672.

Full text
Abstract:
An increasing number of decisions regarding the daily lives of human beings are being controlled by artificial intelligence and machine learning (ML) algorithms in spheres ranging from healthcare, transportation, and education to college admissions, recruitment, provision of loans, and many more realms. Since they now touch on many aspects of our lives, it is crucial to develop ML algorithms that are not only accurate but also objective and fair. Recent studies have shown that algorithmic decision making may be inherently prone to unfairness, even when there is no intention for it. This articl
APA, Harvard, Vancouver, ISO, and other styles
18

Teodorescu, Mike, Lily Morse, Yazeed Awwad, and Gerald Kane. "Failures of Fairness in Automation Require a Deeper Understanding of Human-ML Augmentation." MIS Quarterly 45, no. 3 (2021): 1483–500. http://dx.doi.org/10.25300/misq/2021/16535.

Full text
Abstract:
Machine learning (ML) tools reduce the costs of performing repetitive, time-consuming tasks yet run the risk of introducing systematic unfairness into organizational processes. Automated approaches to achieving fairness often fail in complex situations, leading some researchers to suggest that human augmentation of ML tools is necessary. However, our current understanding of human–ML augmentation remains limited. In this paper, we argue that the Information Systems (IS) discipline needs a more sophisticated view of and research into human–ML augmentation. We introduce a typology of augmentat
APA, Harvard, Vancouver, ISO, and other styles
19

Rashed, Ahmed, Abdelkrim Kallich, and Mohamed Eltayeb. "Analyzing Fairness of Computer Vision and Natural Language Processing Models." Information 16, no. 3 (2025): 182. https://doi.org/10.3390/info16030182.

Full text
Abstract:
Machine learning (ML) algorithms play a critical role in decision-making across various domains, such as healthcare, finance, education, and law enforcement. However, concerns about fairness and bias in these systems have raised significant ethical and social challenges. To address these challenges, this research utilizes two prominent fairness libraries, Fairlearn by Microsoft and AIF360 by IBM. These libraries offer comprehensive frameworks for fairness analysis, providing tools to evaluate fairness metrics, visualize results, and implement bias mitigation algorithms. The study focuses on as
APA, Harvard, Vancouver, ISO, and other styles
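Entry 19 audits models with Fairlearn and AIF360. The sketch below shows a Fairlearn-style audit on synthetic data, assuming fairlearn and scikit-learn are installed; the metric choices and the derived sensitive feature are illustrative rather than taken from the study.

```python
# A minimal sketch of a Fairlearn-style fairness audit, assuming fairlearn and
# scikit-learn are installed. Data and sensitive feature are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
sensitive = (X[:, 0] > 0).astype(int)   # synthetic sensitive feature
pred = LogisticRegression(max_iter=1000).fit(X, y).predict(X)

mf = MetricFrame(metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
                 y_true=y, y_pred=pred, sensitive_features=sensitive)
print(mf.by_group)                                   # metrics broken down by group
print("demographic parity difference:",
      demographic_parity_difference(y, pred, sensitive_features=sensitive))
```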
20

Ghosh, Bishwamittra, Debabrota Basu, and Kuldeep S. Meel. "Justicia: A Stochastic SAT Approach to Formally Verify Fairness." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (2021): 7554–63. http://dx.doi.org/10.1609/aaai.v35i9.16925.

Full text
Abstract:
As a technology ML is oblivious to societal good or bad, and thus, the field of fair machine learning has stepped up to propose multiple mathematical definitions, algorithms, and systems to ensure different notions of fairness in ML applications. Given the multitude of propositions, it has become imperative to formally verify the fairness metrics satisfied by different algorithms on different datasets. In this paper, we propose a stochastic satisfiability (SSAT) framework, Justicia, that formally verifies different fairness measures of supervised learning algorithms with respect to the underly
APA, Harvard, Vancouver, ISO, and other styles
21

Drira, Mohamed, Sana Ben Hassine, Michael Zhang, and Steven Smith. "Machine Learning Methods in Student Mental Health Research: An Ethics-Centered Systematic Literature Review." Applied Sciences 14, no. 24 (2024): 11738. https://doi.org/10.3390/app142411738.

Full text
Abstract:
This study conducts an ethics-centered analysis of the AI/ML models used in Student Mental Health (SMH) research, considering the ethical principles of fairness, privacy, transparency, and interpretability. First, this paper surveys the AI/ML methods used in the extant SMH literature published between 2015 and 2024, as well as the main health outcomes, to inform future work in the SMH field. Then, it leverages advanced topic modeling techniques to depict the prevailing themes in the corpus. Finally, this study proposes novel measurable privacy, transparency (reporting and replicability), inter
APA, Harvard, Vancouver, ISO, and other styles
22

Chen, Zhenpeng, Xinyue Li, Jie M. Zhang, et al. "Software Fairness Dilemma: Is Bias Mitigation a Zero-Sum Game?" Proceedings of the ACM on Software Engineering 2, FSE (2025): 1780–801. https://doi.org/10.1145/3729350.

Full text
Abstract:
Fairness is a critical requirement for Machine Learning (ML) software, driving the development of numerous bias mitigation methods. Previous research has identified a leveling-down effect in bias mitigation for computer vision and natural language processing tasks, where fairness is achieved by lowering performance for all groups without benefiting the unprivileged group. However, it remains unclear whether this effect applies to bias mitigation for tabular data tasks, a key area in fairness research with significant real-world applications. This study evaluates eight bias mitigation methods f
APA, Harvard, Vancouver, ISO, and other styles
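The "leveling-down" effect discussed in entry 22 can be checked by comparing per-group performance before and after mitigation. Below is a toy sketch with mocked predictions; it only illustrates the check, not the study's evaluation of the eight mitigation methods.

```python
# Illustrative only: detecting the leveling-down effect -- fairness "improved" by
# lowering performance for every group. Predictions here are mocked, not real models.
import numpy as np

def per_group_accuracy(y_true, y_pred, group):
    return {int(g): float((y_pred[group == g] == y_true[group == g]).mean())
            for g in np.unique(group)}

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
pred_before = np.where(rng.random(1000) < 0.85, y_true, 1 - y_true)  # mock original model
pred_after  = np.where(rng.random(1000) < 0.78, y_true, 1 - y_true)  # mock "mitigated" model

before = per_group_accuracy(y_true, pred_before, group)
after = per_group_accuracy(y_true, pred_after, group)
print("before:", before, "after:", after)
print("leveling down:", all(after[g] <= before[g] for g in before))
```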
23

Ezzeldin, Yahya H., Shen Yan, Chaoyang He, Emilio Ferrara, and A. Salman Avestimehr. "FairFed: Enabling Group Fairness in Federated Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (2023): 7494–502. http://dx.doi.org/10.1609/aaai.v37i6.25911.

Full text
Abstract:
Training ML models which are fair across different demographic groups is of critical importance due to the increased integration of ML in crucial decision-making scenarios such as healthcare and recruitment. Federated learning has been viewed as a promising solution for collaboratively training machine learning models among multiple parties while maintaining their local data privacy. However, federated learning also poses new challenges in mitigating the potential bias against certain populations (e.g., demographic groups), as this typically requires centralized access to the sensitive informa
APA, Harvard, Vancouver, ISO, and other styles
24

Sikstrom, Laura, Marta M. Maslej, Katrina Hui, Zoe Findlay, Daniel Z. Buchman, and Sean L. Hill. "Conceptualising fairness: three pillars for medical algorithms and health equity." BMJ Health & Care Informatics 29, no. 1 (2022): e100459. http://dx.doi.org/10.1136/bmjhci-2021-100459.

Full text
Abstract:
Objectives: Fairness is a core concept meant to grapple with different forms of discrimination and bias that emerge with advances in Artificial Intelligence (eg, machine learning, ML). Yet, claims to fairness in ML discourses are often vague and contradictory. The response to these issues within the scientific community has been technocratic. Studies either measure (mathematically) competing definitions of fairness, and/or recommend a range of governance tools (eg, fairness checklists or guiding principles). To advance efforts to operationalise fairness in medicine, we synthesised a broad range
APA, Harvard, Vancouver, ISO, and other styles
25

Kumbo, Lazaro Inon, Victor Simon Nkwera, and Rodrick Frank Mero. "Evaluating the Ethical Practices in Developing AI and ML Systems in Tanzania." ABUAD Journal of Engineering Research and Development (AJERD) 7, no. 2 (2024): 340–51. http://dx.doi.org/10.53982/ajerd.2024.0702.33-j.

Full text
Abstract:
Artificial Intelligence (AI) and Machine Learning (ML) present transformative opportunities for sectors in developing countries like Tanzania that were previously hindered by manual processes and data inefficiencies. Despite these advancements, the ethical challenges of bias, fairness, transparency, privacy, and accountability are critical during AI and ML system design and deployment. This study explores these ethical dimensions from the perspective of Tanzanian IT professionals, given the country's nascent AI landscape. The research aims to understand and address these challenges using a mix
APA, Harvard, Vancouver, ISO, and other styles
26

Fessenko, Dessislava. "Ethical Requirements for Achieving Fairness in Radiology Machine Learning: An Intersectionality and Social Embeddedness Approach." Journal of Health Ethics 20, no. 1 (2024): 37–49. http://dx.doi.org/10.18785/jhe.2001.04.

Full text
Abstract:
Radiodiagnostics by machine-learning (ML) systems is often perceived as objective and fair. It may, however, exhibit bias towards certain patient sub-groups. The typical reasons for this are the selection of disease features for ML systems to screen, that ML systems learn from human clinical judgements, which are often biased, and that fairness in ML is often inappropriately conceptualized as “equality”. ML systems with such parameters fail to accurately diagnose and address patients’ actual health needs and how they depend on patients’ social identities (i.e. intersectionality) and broader so
APA, Harvard, Vancouver, ISO, and other styles
27

Nandamuri, Sravankumar. "Comprehensive guide to monitoring and observability in machine learning infrastructure: From metrics to implementation." World Journal of Advanced Research and Reviews 26, no. 2 (2025): 2068–77. https://doi.org/10.30574/wjarr.2025.26.2.1823.

Full text
Abstract:
Monitoring and observability have become critical components in the successful deployment and maintenance of machine learning systems in production. This article presents a comprehensive framework for implementing robust ML observability, covering foundational principles, model performance tracking, drift detection, operational health monitoring, fairness evaluation, and platform construction. It explores both technical implementation details and strategic considerations for ML teams looking to enhance their monitoring capabilities. The proposed architecture emphasizes proactive detection of i
APA, Harvard, Vancouver, ISO, and other styles
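Entry 27 covers drift detection as part of ML observability. One common drift signal is the population stability index (PSI); the sketch below computes it for a score distribution, with the bin count and the ~0.2 alert threshold being conventional choices rather than anything specified in the article.

```python
# Illustrative only: population stability index (PSI) for score drift, a common
# monitoring signal. Bin count and the ~0.2 alert threshold are conventional choices.
import numpy as np

def psi(reference, current, bins=10):
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_frac = np.histogram(np.clip(current, edges[0], edges[-1]),
                            bins=edges)[0] / len(current)
    ref_frac = np.clip(ref_frac, 1e-6, None)   # avoid log(0)
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)   # score distribution at training time
live_scores = rng.normal(0.3, 1.1, 10_000)    # shifted score distribution in production
print(f"PSI = {psi(train_scores, live_scores):.3f}  (values above ~0.2 often flag drift)")
```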
28

Cheng, Lu. "Demystifying Algorithmic Fairness in an Uncertain World." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (2024): 22662. http://dx.doi.org/10.1609/aaai.v38i20.30278.

Full text
Abstract:
Significant progress in the field of fair machine learning (ML) has been made to counteract algorithmic discrimination against marginalized groups. However, fairness remains an active research area that is far from settled. One key bottleneck is the implicit assumption that environments, where ML is developed and deployed, are certain and reliable. In a world that is characterized by volatility, uncertainty, complexity, and ambiguity, whether what has been developed in algorithmic fairness can still serve its purpose is far from obvious. In this talk, I will first discuss how to improve algori
APA, Harvard, Vancouver, ISO, and other styles
29

Arslan, Ayse. "Mitigation Techniques to Overcome Data Harm in Model Building for ML." International Journal of Artificial Intelligence & Applications 13, no. 1 (2022): 73–82. http://dx.doi.org/10.5121/ijaia.2022.13105.

Full text
Abstract:
Given the impact of Machine Learning (ML) on individuals and society, understanding how harm might occur throughout the ML life cycle becomes more critical than ever. By offering a framework to determine distinct potential sources of downstream harm in the ML pipeline, the paper demonstrates the importance of choices throughout distinct phases of data collection, development, and deployment that extend far beyond just model training. Relevant mitigation techniques are also suggested for use, instead of merely relying on generic notions of what counts as fairness.
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Hao, Nethra Sambamoorthi, Nathan Hoot, David Bryant, and Usha Sambamoorthi. "Evaluating fairness of machine learning prediction of prolonged wait times in Emergency Department with Interpretable eXtreme gradient boosting." PLOS Digital Health 4, no. 3 (2025): e0000751. https://doi.org/10.1371/journal.pdig.0000751.

Full text
Abstract:
It is essential to evaluate performance and assess quality before applying artificial intelligence (AI) and machine learning (ML) models to clinical practice. This study utilized ML to predict patient wait times in the Emergency Department (ED), determine model performance accuracies, and conduct fairness evaluations to further assess ethnic disparities in using ML for wait time prediction among different patient populations in the ED. This retrospective observational study included adult patients (age ≥18 years) in the ED (n=173,856 visits) who were assigned an Emergency Severity Index (ESI)
APA, Harvard, Vancouver, ISO, and other styles
31

Buchner, Valentin Leonhard, Philip Onno Olivier Schutte, Yassin Ben Allal, and Hamed Ahadi. "[Re] Fairness Guarantees under Demographic Shift." ReScience C 9, no. 2 (2023): #13. https://doi.org/10.5281/zenodo.8173680.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Arjunan, Gopalakrishnan. "Enhancing Data Quality and Integrity in Machine Learning Pipelines: Approaches for Detecting and Mitigating Bias." International Journal of Scientific Research and Management (IJSRM) 10, no. 09 (2022): 940–45. http://dx.doi.org/10.18535/ijsrm/v10i9.ec04.

Full text
Abstract:
Machine learning (ML) has become a cornerstone of innovation in numerous industries, including healthcare, finance, marketing, and criminal justice. However, the growing reliance on ML models has revealed the critical importance of data quality and integrity in ensuring fair and reliable predictions. As AI technologies are deployed in sensitive decision-making areas, the presence of hidden biases within data has become a major concern. These biases can perpetuate systemic inequalities and result in unethical outcomes, undermining trust in AI systems. The accuracy and fairness of ML models are
APA, Harvard, Vancouver, ISO, and other styles
33

Vartak, Manasi. "From ML models to intelligent applications." Proceedings of the VLDB Endowment 14, no. 13 (2021): 3419. http://dx.doi.org/10.14778/3484224.3484240.

Full text
Abstract:
The last 5+ years in ML have focused on building the best models, hyperparameter optimization, parallel training, massive neural networks, etc. Now that the building of models has become easy, models are being integrated into every piece of software and device - from smart kitchens to radiology to detecting performance of turbines. This shift from training ML models to building intelligent, ML-driven applications has highlighted a variety of problems going from "a model" to a whole application or business process running on ML. These challenges range from operational challenges (how to package
APA, Harvard, Vancouver, ISO, and other styles
34

Aditya, Gadiko. "Navigating Bias in Machine Learning (ML) Models for Clinical Applications." European Journal of Advances in Engineering and Technology 6, no. 10 (2019): 54–59. https://doi.org/10.5281/zenodo.11213893.

Full text
Abstract:
As machine learning (ML) technologies increasingly influence diverse sectors—from healthcare and finance to recruitment and criminal justice—the critical issue of bias within ML models has garnered significant attention. This paper explores the mechanisms through which biases infiltrate ML algorithms, highlighting the dual challenges of overt biases stemming from prejudiced data sources and subtle biases arising from algorithmic decision-making processes. Through a meticulous examination of case studies, including the abandonment of Amazon’s ML recruiting tool due to ge
APA, Harvard, Vancouver, ISO, and other styles
35

Nuka, Tambari Faith, and Amos Abidemi Ogunola. "AI and machine learning as tools for financial inclusion: challenges and opportunities in credit scoring." International Journal of Science and Research Archive 13, no. 2 (2024): 1052–67. http://dx.doi.org/10.30574/ijsra.2024.13.2.2258.

Full text
Abstract:
Financial inclusion remains a pressing global challenge, with millions of underserved individuals excluded from traditional credit systems due to systemic biases and outdated evaluation models. Artificial Intelligence [AI] and Machine Learning [ML] have emerged as transformative tools for addressing these inequities, offering opportunities to redefine how creditworthiness is assessed. By leveraging the predictive power of AI and ML, financial institutions can expand access to credit, improve fairness, and reduce disparities in underserved communities. This paper begins by exploring the broad p
APA, Harvard, Vancouver, ISO, and other styles
36

Singh, Arashdeep, Jashandeep Singh, Ariba Khan, and Amar Gupta. "Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair." Machine Learning and Knowledge Extraction 4, no. 1 (2022): 240–53. http://dx.doi.org/10.3390/make4010011.

Full text
Abstract:
Machine learning (ML) models are increasingly being used for high-stake applications that can greatly impact people’s lives. Sometimes, these models can be biased toward certain social groups on the basis of race, gender, or ethnicity. Many prior works have attempted to mitigate this “model discrimination” by updating the training data (pre-processing), altering the model learning process (in-processing), or manipulating the model output (post-processing). However, more work can be done in extending this situation to intersectional fairness, where we consider multiple sensitive parameters (e.g
APA, Harvard, Vancouver, ISO, and other styles
37

Keswani, Vijay, and L. Elisa Celis. "Algorithmic Fairness From the Perspective of Legal Anti-discrimination Principles." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 724–37. http://dx.doi.org/10.1609/aies.v7i1.31674.

Full text
Abstract:
Real-world applications of machine learning (ML) algorithms often propagate negative stereotypes and social biases against marginalized groups. In response, the field of fair machine learning has proposed technical solutions for a variety of settings that aim to correct the biases in algorithmic predictions. These solutions remove the dependence of the final prediction on the protected attributes (like gender or race) and/or ensure that prediction performance is similar across demographic groups. Yet, recent studies assessing the impact of these solutions in practice demonstrate their ineffect
APA, Harvard, Vancouver, ISO, and other styles
38

Detassis, Fabrizio, Michele Lombardi, and Michela Milano. "Teaching the Old Dog New Tricks: Supervised Learning with Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (2021): 3742–49. http://dx.doi.org/10.1609/aaai.v35i5.16491.

Full text
Abstract:
Adding constraint support in Machine Learning has the potential to address outstanding issues in data-driven AI systems, such as safety and fairness. Existing approaches typically apply constrained optimization techniques to ML training, enforce constraint satisfaction by adjusting the model design, or use constraints to correct the output. Here, we investigate a different, complementary, strategy based on "teaching" constraint satisfaction to a supervised ML method via the direct use of a state-of-the-art constraint solver: this enables taking advantage of decades of research on constrained o
APA, Harvard, Vancouver, ISO, and other styles
39

Khosla, Atulya Aman, Mohammad Arfat Ganiyani, Manas Pustake, et al. "Development and fairness assessment of machine learning models for predicting 30-day readmission after lung cancer surgery." Journal of Clinical Oncology 43, no. 16_suppl (2025): 1532. https://doi.org/10.1200/jco.2025.43.16_suppl.1532.

Full text
Abstract:
Background: Predicting post-surgical readmissions is essential for improving patient outcomes and reducing healthcare costs. While machine learning (ML) models offer high predictive accuracy, they may perpetuate healthcare disparities if not rigorously evaluated for algorithmic bias. In this study, we examine the limitations of ML-based readmission prediction models, highlighting how bias can persist despite strong performance metrics. We also explore the impact of integrating fairness constraints to mitigate these disparities, ensuring equitable clinical decision-making across racial and
APA, Harvard, Vancouver, ISO, and other styles
40

Oladosu, Sunday Adeola, Christian Chukwuemeka Ike, Peter Adeyemo Adepoju, Adeoye Idowu Afolabi, Adebimpe Bolatito Ige, and Olukunle Oladipupo Amoo. "Frameworks for ethical data governance in machine learning: Privacy, fairness, and business optimization." Magna Scientia Advanced Research and Reviews 7, no. 2 (2023): 096–106. https://doi.org/10.30574/msarr.2023.7.2.0043.

Full text
Abstract:
The rapid growth of machine learning (ML) technologies has transformed industries by enabling data-driven decision-making, yet it has also raised critical ethical concerns. Frameworks for ethical data governance are essential to ensure that ML systems uphold privacy, fairness, and business optimization while addressing societal and organizational needs. This review explores the intersection of these three pillars, providing a structured approach to balance competing priorities in ML applications. Privacy concerns focus on safeguarding individuals' data through strategies such as anonymization,
APA, Harvard, Vancouver, ISO, and other styles
41

Czarnowska, Paula, Yogarshi Vyas, and Kashif Shah. "Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics." Transactions of the Association for Computational Linguistics 9 (2021): 1249–67. http://dx.doi.org/10.1162/tacl_a_00425.

Full text
Abstract:
Measuring bias is key for better understanding and addressing unfairness in NLP/ML models. This is often done via fairness metrics, which quantify the differences in a model’s behaviour across a range of demographic groups. In this work, we shed more light on the differences and similarities between the fairness metrics used in NLP. First, we unify a broad range of existing metrics under three generalized fairness metrics, revealing the connections between them. Next, we carry out an extensive empirical comparison of existing metrics and demonstrate that the observed differences in bi
APA, Harvard, Vancouver, ISO, and other styles
42

Islam, Rashidul, Huiyuan Chen, and Yiwei Cai. "Fairness without Demographics through Shared Latent Space-Based Debiasing." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (2024): 12717–25. http://dx.doi.org/10.1609/aaai.v38i11.29167.

Full text
Abstract:
Ensuring fairness in machine learning (ML) is crucial, particularly in applications that impact diverse populations. The majority of existing works heavily rely on the availability of protected features like race and gender. However, practical challenges such as privacy concerns and regulatory restrictions often prohibit the use of this data, limiting the scope of traditional fairness research. To address this, we introduce a Shared Latent Space-based Debiasing (SLSD) method that transforms data from both the target domain, which lacks protected features, and a separate source domain, which co
APA, Harvard, Vancouver, ISO, and other styles
43

Park, Sojung, Eunhye Ahn, Tae-Hyuk Ahn, et al. "ROLE OF MACHINE LEARNING (ML) IN AGING IN PLACE RESEARCH: A SCOPING REVIEW." Innovation in Aging 8, Supplement_1 (2024): 1215. https://doi.org/10.1093/geroni/igae098.3890.

Full text
Abstract:
As global aging accelerates, Aging in Place (AIP) is increasingly central to improving older adults’ quality of life. Machine Learning (ML) is widely used in aging research, particularly in health monitoring and personalized care. However, most studies focus on clinical settings, leaving a gap in understanding ML’s application in non-clinical AIP contexts. This review addresses this gap by exploring the themes, policy implications, and ethical concerns of AIP-ML studies, including AI bias and fairness. The review examined 32 peer-reviewed studies sourced from databases like PsycINFO,
APA, Harvard, Vancouver, ISO, and other styles
44

Shah, Kanan, Yassamin Neshatvar, Elaine Shum, and Madhur Nayan. "Optimizing the fairness of survival prediction models for racial/ethnic subgroups: A study on predicting post-operative survival in stage IA and IB non-small cell lung cancer." JCO Oncology Practice 20, no. 10_suppl (2024): 380. http://dx.doi.org/10.1200/op.2024.20.10_suppl.380.

Full text
Abstract:
Background: The recent surge of utilizing machine learning (ML) to develop prediction models for clinical decision-making aids is promising. However, these models can demonstrate racial bias due to inequities in real-world training data. In lung cancer, multiple models have been developed to predict prognosis, but none have been optimized to mitigate bias in performance among racial/ethnic subgroups. We developed a ML model to predict five-year survival in Stage 1A-1B non-small cell lung cancer (NSCLC), ensuring fairness on race. Methods: In the National Cancer Database, we identified pati
APA, Harvard, Vancouver, ISO, and other styles
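Entries 39 and 44 evaluate clinical prediction models for performance gaps across racial/ethnic subgroups. Below is a minimal sketch of such a subgroup audit using per-group AUC; the data, subgroup labels, and model are synthetic placeholders, not the studies' cohorts or pipelines.

```python
# Illustrative only: a subgroup performance audit via per-group AUC. Data, subgroup
# labels, and model are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=12, random_state=0)
subgroup = np.random.default_rng(0).integers(0, 3, size=len(y))   # e.g. three subgroups

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, subgroup, test_size=0.3, random_state=0)
prob = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

aucs = {int(g): roc_auc_score(y_te[g_te == g], prob[g_te == g]) for g in np.unique(g_te)}
print("per-subgroup AUC:", {g: round(a, 3) for g, a in aucs.items()})
print("max AUC gap:", round(max(aucs.values()) - min(aucs.values()), 3))
```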
45

Lamba, Hemank, Kit T. Rodolfa, and Rayid Ghani. "An Empirical Comparison of Bias Reduction Methods on Real-World Problems in High-Stakes Policy Settings." ACM SIGKDD Explorations Newsletter 23, no. 1 (2021): 69–85. http://dx.doi.org/10.1145/3468507.3468518.

Full text
Abstract:
Applications of machine learning (ML) to high-stakes policy settings - such as education, criminal justice, healthcare, and social service delivery - have grown rapidly in recent years, sparking important conversations about how to ensure fair outcomes from these systems. The machine learning research community has responded to this challenge with a wide array of proposed fairness-enhancing strategies for ML models, but despite the large number of methods that have been developed, little empirical work exists evaluating these methods in real-world settings. Here, we seek to fill this research
APA, Harvard, Vancouver, ISO, and other styles
46

Shook, Jim, Robyn Smith, and Alex Antonio. "Transparency and Fairness in Machine Learning Applications." Symposium Edition - Artificial Intelligence and the Legal Profession 4, no. 5 (2018): 443–63. http://dx.doi.org/10.37419/jpl.v4.i5.2.

Full text
Abstract:
Businesses and consumers increasingly use artificial intelligence (“AI”)— and specifically machine learning (“ML”) applications—in their daily work. ML is often used as a tool to help people perform their jobs more efficiently, but increasingly it is becoming a technology that may eventually replace humans in performing certain functions. An AI recently beat humans in a reading comprehension test, and there is an ongoing race to replace human drivers with self-driving cars and trucks. Tomorrow there is the potential for much more—as AI is even learning to build its own AI. As the use of AI tec
APA, Harvard, Vancouver, ISO, and other styles
47

Galhotra, Sainyam, Karthikeyan Shanmugam, Prasanna Sattigeri, and Kush R. Varshney. "Interventional Fairness with Indirect Knowledge of Unobserved Protected Attributes." Entropy 23, no. 12 (2021): 1571. http://dx.doi.org/10.3390/e23121571.

Full text
Abstract:
The deployment of machine learning (ML) systems in applications with societal impact has motivated the study of fairness for marginalized groups. Often, the protected attribute is absent from the training dataset for legal reasons. However, datasets still contain proxy attributes that capture protected information and can inject unfairness in the ML model. Some deployed systems allow auditors, decision makers, or affected users to report issues or seek recourse by flagging individual samples. In this work, we examine such systems and consider a feedback-based framework where the protected attr
APA, Harvard, Vancouver, ISO, and other styles
48

Ding, Xueying, Rui Xi, and Leman Akoglu. "Outlier Detection Bias Busted: Understanding Sources of Algorithmic Bias through Data-centric Factors." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 384–95. http://dx.doi.org/10.1609/aies.v7i1.31644.

Full text
Abstract:
The astonishing successes of ML have raised growing concern for the fairness of modern methods when deployed in real world settings. However, studies on fairness have mostly focused on supervised ML, while unsupervised outlier detection (OD), with numerous applications in finance, security, etc., has attracted little attention. While a few studies proposed fairness-enhanced OD algorithms, they remain agnostic to the underlying driving mechanisms or sources of unfairness. Even within the supervised ML literature, there exists debate on whether unfairness stems solely from algorithmic biases (i
APA, Harvard, Vancouver, ISO, and other styles
49

Xiao, Ying, Jie M. Zhang, Yepang Liu, Mohammad Reza Mousavi, Sicen Liu, and Dingyuan Xue. "MirrorFair: Fixing Fairness Bugs in Machine Learning Software via Counterfactual Predictions." Proceedings of the ACM on Software Engineering 1, FSE (2024): 2121–43. http://dx.doi.org/10.1145/3660801.

Full text
Abstract:
With the increasing utilization of Machine Learning (ML) software in critical domains such as employee hiring, college admission, and credit evaluation, ensuring fairness in the decision-making processes of underlying models has emerged as a paramount ethical concern. Nonetheless, existing methods for rectifying fairness issues can hardly strike a consistent trade-off between performance and fairness across diverse tasks and algorithms. Informed by the principles of counterfactual inference, this paper introduces MirrorFair, an innovative adaptive ensemble approach designed to mitigate fairnes
APA, Harvard, Vancouver, ISO, and other styles
50

Jewel, Rasel Mahmud. "Forecasting Healthcare Results in Rural and Resource-Limited Settings Using the Machine Learning Algorithm." Journal of Information Systems Engineering and Management 10, no. 16s (2025): 557–67. https://doi.org/10.52783/jisem.v10i16s.2646.

Full text
Abstract:
In this research, we investigate machine learning (ML) applications in the healthcare domain: predicting obesity, perinatal mortality, and diabetes risk, and integrating blockchain into healthcare. The work shows that ML is promising for improving disease prediction, optimizing healthcare policies, and securing data. ML achieves high predictive accuracy, though it suffers from interpretability and fairness challenges. To achieve equitable healthcare outcomes, future work needs to develop better models for causal inference and embed ethics in the process.
APA, Harvard, Vancouver, ISO, and other styles