
Journal articles on the topic 'Bias mitigation techniques'


Consult the top 50 journal articles for your research on the topic 'Bias mitigation techniques.'


1

Gallaher, Joshua P., Alexander J. Kamrud, and Brett J. Borghetti. "Detection and Mitigation of Inefficient Visual Searching." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (2020): 47–51. http://dx.doi.org/10.1177/1071181320641015.

Abstract:
A commonly known cognitive bias is confirmation bias: the overweighting of evidence supporting a hypothesis and underweighting of evidence countering that hypothesis. Due to high-stress and fast-paced operations, military decisions can be affected by confirmation bias. One military decision task prone to confirmation bias is visual search. During a visual search, the operator scans an environment to locate a specific target. If confirmation bias causes the operator to scan the wrong portion of the environment first, the search is inefficient. This study has two primary goals: 1) detect inefficient visual search using machine learning and electroencephalography (EEG) signals, and 2) apply various mitigation techniques in an effort to improve the efficiency of searches. Early findings are presented showing how machine learning models can use EEG signals to detect when a person might be performing an inefficient visual search. Four mitigation techniques were evaluated: a nudge which indirectly slows search speed, a hint on how to search efficiently, an explanation for why the participant was receiving a nudge, and explicit instructions to search efficiently. These mitigation techniques are evaluated, revealing the most effective mitigations to be the nudge and hint techniques.
2

Siddique, Sunzida, Mohd Ariful Haque, Roy George, Kishor Datta Gupta, Debashis Gupta, and Md Jobair Hossain Faruk. "Survey on Machine Learning Biases and Mitigation Techniques." Digital 4, no. 1 (2023): 1–68. http://dx.doi.org/10.3390/digital4010001.

Abstract:
Machine learning (ML) has become increasingly prevalent in various domains. However, ML algorithms sometimes give unfair outcomes and discrimination against certain groups. Thereby, bias occurs when our results produce a decision that is systematically incorrect. At various phases of the ML pipeline, such as data collection, pre-processing, model selection, and evaluation, these biases appear. Bias reduction methods for ML have been suggested using a variety of techniques. By changing the data or the model itself, adding more fairness constraints, or both, these methods try to lessen bias. The best technique relies on the particular context and application because each technique has advantages and disadvantages. Therefore, in this paper, we present a comprehensive survey of bias mitigation techniques in machine learning (ML) with a focus on in-depth exploration of methods, including adversarial training. We examine the diverse types of bias that can afflict ML systems, elucidate current research trends, and address future challenges. Our discussion encompasses a detailed analysis of pre-processing, in-processing, and post-processing methods, including their respective pros and cons. Moreover, we go beyond qualitative assessments by quantifying the strategies for bias reduction and providing empirical evidence and performance metrics. This paper serves as an invaluable resource for researchers, practitioners, and policymakers seeking to navigate the intricate landscape of bias in ML, offering both a profound understanding of the issue and actionable insights for responsible and effective bias mitigation.
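For readers who want to see what the surveyed pre-processing family looks like in practice, the sketch below implements the classic reweighing idea: give each (group, label) cell a weight that decouples group membership from the outcome. It is a generic illustration with invented column names, not code from the surveyed paper.

```python
# Illustrative sketch (not from the surveyed paper): classic reweighing, where
# each (group, label) cell gets weight P(A=a) * P(Y=y) / P(A=a, Y=y) so that
# group membership and outcome become statistically independent in the
# weighted data.
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n

    def weight(row):
        a, y = row[group_col], row[label_col]
        return (p_group[a] * p_label[y]) / p_joint[(a, y)]

    return df.apply(weight, axis=1)

# Toy usage; column names are made up for illustration.
toy = pd.DataFrame({
    "sex":   ["F", "F", "F", "M", "M", "M", "M", "M"],
    "hired": [0,   0,   1,   1,   1,   1,   0,   1],
})
toy["sample_weight"] = reweighing_weights(toy, "sex", "hired")
print(toy)  # the weights can be passed to most sklearn estimators via sample_weight
```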
3

Devasenapathy, K., and Arun Padmanabhan. "Uncovering Bias: Exploring Machine Learning Techniques for Detecting and Mitigating Bias in Data – A Literature Review." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (2023): 776–81. http://dx.doi.org/10.17762/ijritcc.v11i9.8965.

Abstract:
The presence of Bias in models developed using machine learning algorithms has emerged as a critical issue. This literature review explores the topic of uncovering the existence of bias in data and the application of techniques for detecting and mitigating Bias. The review provides a comprehensive analysis of the existing literature, focusing on pre-processing techniques, post-pre-processing techniques, and fairness constraints employed to uncover and address the existence of Bias in machine learning models. The effectiveness, limitations, and trade-offs of these techniques are examined, highlighting their impact on advocating fairness and equity in decision-making processes.
 The methodology consists of two key steps: data preparation and bias analysis, followed by machine learning model development and evaluation. In the data preparation phase, the dataset is analyzed for biases and pre-processed using techniques like reweighting or relabeling to reduce bias. In the model development phase, suitable algorithms are selected, and fairness metrics are defined and optimized during the training process. The models are then evaluated using performance and fairness measures and the best-performing model is chosen. The methodology ensures a systematic exploration of machine learning techniques to detect and mitigate bias, leading to more equitable decision-making.
 The review begins by examining the techniques of pre-processing, which involve cleaning the data, selecting the features, feature engineering, and sampling. These techniques play an important role in preparing the data to reduce bias and promote fairness in machine learning models. The analysis highlights various studies that have explored the effectiveness of these techniques in uncovering and mitigating bias in data, contributing to the development of more equitable and unbiased machine learning models. Next, the review delves into post-pre-processing techniques that focus on detecting and mitigating bias after the initial data preparation steps. These techniques include bias detection methods that assess the disparate impact or disparate treatment in model predictions, as well as bias mitigation techniques that modify model outputs to achieve fairness across different groups. The evaluation of these techniques, their performance metrics, and potential trade-offs between fairness and accuracy are discussed, providing insights into the challenges and advancements in bias mitigation. Lastly, the review examines fairness constraints, which involve the imposition of rules or guidelines on machine learning algorithms to ensure fairness in predictions or decision-making processes. The analysis explores different fairness constraints, such as demographic parity, equalized odds, and predictive parity, and their effectiveness in reducing bias and advocating fairness in machine learning models. Overall, this literature review provides a comprehensive understanding of the techniques employed to uncover and mitigate the existence of bias in machine learning models. By examining pre-processing techniques, post-pre-processing techniques, and fairness constraints, the review contributes to the development of more fair and unbiased machine learning models, fostering equity and ethical decision-making in various domains. By examining relevant studies, this review provides insights into the effectiveness and limitations of various pre-processing techniques for bias detection and mitigation via Pre-processing, Adversarial learning, Fairness Constraints, and Post-processing techniques.
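As a companion to the fairness constraints discussed above (demographic parity, equalized odds, predictive parity), here is a minimal, self-contained sketch of how those gaps can be computed from binary predictions. The toy arrays and function names are invented for illustration.

```python
# Hedged sketch of the three fairness constraints named in the abstract,
# computed as simple group-wise gaps on binary predictions.
import numpy as np

def rate(mask, values):
    return values[mask].mean() if mask.any() else float("nan")

def fairness_gaps(y_true, y_pred, group):
    g0, g1 = (group == 0), (group == 1)
    # Demographic parity: P(yhat=1 | A=0) - P(yhat=1 | A=1)
    dp = rate(g0, y_pred) - rate(g1, y_pred)
    # Equalized odds: gaps in true positive rate and false positive rate
    tpr_gap = rate(g0 & (y_true == 1), y_pred) - rate(g1 & (y_true == 1), y_pred)
    fpr_gap = rate(g0 & (y_true == 0), y_pred) - rate(g1 & (y_true == 0), y_pred)
    # Predictive parity: P(y=1 | yhat=1) per group
    ppv_gap = rate(g0 & (y_pred == 1), y_true) - rate(g1 & (y_pred == 1), y_true)
    return {"demographic_parity": dp, "tpr_gap": tpr_gap,
            "fpr_gap": fpr_gap, "predictive_parity": ppv_gap}

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(fairness_gaps(y_true, y_pred, group))
```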
4

Pasupuleti, Murali Krishna. "Bias and Fairness in Large Language Models: Evaluation and Mitigation Techniques." International Journal of Academic and Industrial Research Innovations (IJAIRI) 5, no. 5 (2025): 442–51. https://doi.org/10.62311/nesx/rphcr6.

Abstract:
Large Language Models (LLMs) such as GPT, BERT, and LLaMA have transformed natural language processing, yet they exhibit social biases that can reinforce unfair outcomes. This paper systematically evaluates bias and fairness in LLMs across gender, race, and socioeconomic dimensions using benchmark datasets and fairness metrics. We assess bias through template-based probing, stereotype score measurement, and downstream task performance. We implement mitigation strategies including adversarial training, counterfactual data augmentation, and fairness-aware loss functions. Regression and predictive analysis reveal that token frequency and representation distance significantly correlate with bias scores. Post-mitigation analysis shows up to a 48% reduction in bias indicators with minimal accuracy trade-offs.
Keywords: Large Language Models, Bias, Fairness, Evaluation, Mitigation, GPT, BERT, Counterfactual Augmentation, SHAP, LIME
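One of the mitigation strategies named in this abstract, counterfactual data augmentation, is easy to sketch: pair each training sentence with a copy whose gendered terms are swapped. The tiny word list below is an illustrative stand-in for the larger lexicons such work typically uses, not the paper's own resource.

```python
# Minimal sketch of counterfactual data augmentation: each sentence is paired
# with a copy in which gendered terms are swapped. The word list is a tiny
# illustrative subset and ignores ambiguities such as possessive "her".
import re

SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "his": "her", "man": "woman", "woman": "man"}

def counterfactual(sentence: str) -> str:
    def swap(match):
        word = match.group(0)
        repl = SWAPS.get(word.lower(), word)
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, swap, sentence, flags=re.IGNORECASE)

corpus = ["He is a brilliant engineer.", "The nurse said she was tired."]
augmented = corpus + [counterfactual(s) for s in corpus]
print(augmented)
```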
5

Wongvorachan, Tarid, Okan Bulut, Joyce Xinle Liu, and Elisabetta Mazzullo. "A Comparison of Bias Mitigation Techniques for Educational Classification Tasks Using Supervised Machine Learning." Information 15, no. 6 (2024): 326. http://dx.doi.org/10.3390/info15060326.

Abstract:
Machine learning (ML) has become integral in educational decision-making through technologies such as learning analytics and educational data mining. However, the adoption of machine learning-driven tools without scrutiny risks perpetuating biases. Despite ongoing efforts to tackle fairness issues, their application to educational datasets remains limited. To address the mentioned gap in the literature, this research evaluates the effectiveness of four bias mitigation techniques in an educational dataset aiming at predicting students’ dropout rate. The overarching research question is: “How effective are the techniques of reweighting, resampling, and Reject Option-based Classification (ROC) pivoting in mitigating the predictive bias associated with high school dropout rates in the HSLS:09 dataset?" The effectiveness of these techniques was assessed based on performance metrics including false positive rate (FPR), accuracy, and F1 score. The study focused on the biological sex of students as the protected attribute. The reweighting technique was found to be ineffective, showing results identical to the baseline condition. Both uniform and preferential resampling techniques significantly reduced predictive bias, especially in the FPR metric but at the cost of reduced accuracy and F1 scores. The ROC pivot technique marginally reduced predictive bias while maintaining the original performance of the classifier, emerging as the optimal method for the HSLS:09 dataset. This research extends the understanding of bias mitigation in educational contexts, demonstrating practical applications of various techniques and providing insights for educators and policymakers. By focusing on an educational dataset, it contributes novel insights beyond the commonly studied datasets, highlighting the importance of context-specific approaches in bias mitigation.
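The study's headline metric, the false positive rate compared across a protected attribute, can be reproduced in a few lines; the sketch below uses synthetic labels and predictions rather than the HSLS:09 data.

```python
# Illustrative computation of the false positive rate (FPR) gap between two
# sex groups, the kind of metric used to judge reweighting, resampling, and
# ROC pivoting. All arrays below are synthetic placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix

def fpr(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return fp / (fp + tn) if (fp + tn) else float("nan")

rng = np.random.default_rng(0)
sex = rng.integers(0, 2, size=200)       # protected attribute
y_true = rng.integers(0, 2, size=200)    # dropout label
y_pred = rng.integers(0, 2, size=200)    # model predictions (placeholder)

fpr_gap = fpr(y_true[sex == 0], y_pred[sex == 0]) - fpr(y_true[sex == 1], y_pred[sex == 1])
print(f"FPR gap between groups: {fpr_gap:+.3f}")
```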
6

Djebrouni, Yasmine, Nawel Benarba, Ousmane Touat, et al. "Bias Mitigation in Federated Learning for Edge Computing." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, no. 4 (2023): 1–35. http://dx.doi.org/10.1145/3631455.

Abstract:
Federated learning (FL) is a distributed machine learning paradigm that enables data owners to collaborate on training models while preserving data privacy. As FL effectively leverages decentralized and sensitive data sources, it is increasingly used in ubiquitous computing including remote healthcare, activity recognition, and mobile applications. However, FL raises ethical and social concerns as it may introduce bias with regard to sensitive attributes such as race, gender, and location. Mitigating FL bias is thus a major research challenge. In this paper, we propose Astral, a novel bias mitigation system for FL. Astral provides a novel model aggregation approach to select the most effective aggregation weights to combine FL clients' models. It guarantees a predefined fairness objective by constraining bias below a given threshold while keeping model accuracy as high as possible. Astral handles the bias of single and multiple sensitive attributes and supports all bias metrics. Our comprehensive evaluation on seven real-world datasets with three popular bias metrics shows that Astral outperforms state-of-the-art FL bias mitigation techniques in terms of bias mitigation and model accuracy. Moreover, we show that Astral is robust against data heterogeneity and scalable in terms of data size and number of FL clients. Astral's code base is publicly available.
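The core idea described here, choosing aggregation weights that keep estimated bias under a threshold while preserving accuracy, can be illustrated with a toy grid search. The sketch below is a generic stand-in, not Astral's actual algorithm, and it assumes the caller supplies an evaluate() function returning (accuracy, bias).

```python
# Generic, hedged sketch: pick client aggregation weights whose aggregated
# model keeps estimated bias below a threshold while maximizing accuracy.
# This is NOT Astral's algorithm, just a toy illustration of the selection step.
import itertools
import numpy as np

def aggregate(client_params, weights):
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

def select_weights(client_params, evaluate, bias_threshold=0.05, step=0.25):
    """evaluate(params) -> (accuracy, bias) is assumed to be supplied by the caller."""
    grid = np.arange(0.0, 1.0 + step, step)
    best, best_acc = None, -1.0
    for cand in itertools.product(grid, repeat=len(client_params)):
        if sum(cand) == 0:
            continue
        acc, bias = evaluate(aggregate(client_params, cand))
        if bias <= bias_threshold and acc > best_acc:
            best, best_acc = cand, acc
    return best, best_acc

# Toy usage: "models" are 2-D parameter vectors and evaluate() is a stand-in.
clients = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
fake_eval = lambda p: (1.0 - abs(p[0] - 0.4), abs(p[1] - 0.5))
print(select_weights(clients, fake_eval))
```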
7

McDaniel, Gail. "Understanding and Mitigating Bias and Noise in Data Collection, Imputation and Analysis." International Multidisciplinary Journal of Science, Technology and Business 4, no. 1 (2025): 1–29. https://doi.org/10.5281/zenodo.15004631.

Abstract:
Bias and noise in data significantly impact the accuracy and reliability of research findings and data-driven decision-making. This paper provides a comprehensive overview of various types of bias and noise affecting data quality, their impact on research and decision-making, and strategies for mitigation. We examine sampling bias, nonresponse bias, measurement bias, imputation bias, and analysis bias, as well as the role of noise as a source of bias. The paper also explores bias in survey design and interpretation, emphasizing the importance of careful question wording, structure, and consideration of cultural and linguistic factors. To address these issues, we propose several strategies, including appropriate sampling techniques, methods to encourage participation, improved measurement tools and protocols, suitable imputation methods, and transparent data analysis practices. We discuss the ethical implications of biased data and the responsibility of researchers, decision-makers, and institutions to prioritize bias and noise mitigation. The paper concludes by calling for future research on methodological tools for detecting and mitigating bias and noise. It stresses the need for interdisciplinary collaboration to ensure the integrity and trustworthiness of data-driven insights.
8

Tripathi, Manish, and Raghav Agarwal. "Bias Mitigation in NLP: Automated Detection and Correction." International Journal of Research in Modern Engineering & Emerging Technology 13, no. 5 (2025): 45–60. https://doi.org/10.63345/ijrmeet.org.v13.i5.130503.

Abstract:
Natural Language Processing (NLP) systems have shown remarkable capabilities, but they often inherit biases from the datasets they are trained on, resulting in outcomes that can be unfair or even harmful. These biases can appear in different forms, such as those related to gender, race, or socioeconomic status. Addressing and mitigating bias in NLP has become a critical area of research, aiming to ensure that machine learning models generate fair and impartial results. This paper delves into the automation of bias detection and correction within NLP systems. It reviews current methods for identifying biases, including fairness metrics, sensitivity analyses, and adversarial testing. Additionally, it examines techniques for mitigating bias, such as data augmentation, algorithmic adjustments, and post-processing methods. The paper also discusses the limitations and challenges of these approaches, emphasizing the balance between maintaining accuracy and promoting fairness. Lastly, it explores potential research directions, such as embedding ethical considerations into model development and establishing more comprehensive frameworks for continuous bias detection and mitigation.
9

Gujar, Harshila. "Addressing Unconscious Bias: Tools and Techniques to Mitigate Bias in the Workplace." Journal of Scientific and Engineering Research 11, no. 4 (2024): 351–54. https://doi.org/10.5281/zenodo.13604160.

Abstract:
Unconscious bias in the workplace can undermine diversity and inclusion efforts, leading to inequitable outcomes and a less inclusive work environment. This book explores the tools and techniques to identify, address, and mitigate unconscious bias in organizations. By understanding the roots of unconscious bias and implementing targeted strategies, organizations can foster a more inclusive culture, enhance employee engagement, and improve overall performance.
10

V S, Aswathy, Nandini Padmakumar, and Liji Thomas P. "Fairness in Predictive Modeling: Addressing Gender Bias in Income Prediction through Bias Mitigation Techniques." Indian Journal of Computer Science and Engineering 15, no. 6 (2024): 450–56. https://doi.org/10.21817/indjcse/2024/v15i6/241506010.

11

Lee, Yu-Hao, Norah E. Dunbar, Claude H. Miller, et al. "Training Anchoring and Representativeness Bias Mitigation Through a Digital Game." Simulation & Gaming 47, no. 6 (2016): 751–79. http://dx.doi.org/10.1177/1046878116662955.

Abstract:
Objective. Humans systematically make poor decisions because of cognitive biases. Can digital games train people to avoid cognitive biases? The goal of this study is to investigate the affordance of different educational media in training people about cognitive biases and to mitigate cognitive biases within their decision-making processes. Method. A between-subject experiment was conducted to compare a digital game, a traditional slideshow, and a combined condition in mitigating two types of cognitive biases: anchoring bias and representativeness bias. We measured both immediate effects and delayed effects after four weeks. Results. The digital game and slideshow conditions were effective in mitigating cognitive biases immediately after the training, but the effects decayed after four weeks. By providing the basic knowledge through the slideshow, then allowing learners to practice bias-mitigation techniques in the digital game, the combined condition was most effective at mitigating the cognitive biases both immediately and after four weeks.
12

Katoch, Naman. "Addressing Bias in AI: Ethical Concerns, Challenges, and Mitigation Strategies." International Journal of Scientific Research in Engineering and Management 9, no. 4 (2025): 1–9. https://doi.org/10.55041/ijsrem45370.

Abstract:
Artificial Intelligence (AI) has revolutionized most industries, but it is also susceptible to bias, which gives rise to ethical problems and unintended consequences. AI bias can arise from biased training data, flawed algorithms, or institutional biases, which give rise to discriminatory judgments in healthcare, finance, law enforcement, and the workplace. This paper explores the reasons behind AI bias, its ethical dimensions, and how biased algorithms affect society. In addition, it covers various mitigation strategies, including data preprocessing techniques, fairness-aware algorithms, model auditing, and regulatory frameworks. Mitigation of AI bias is critical to constructing transparent, fair, and accountable AI systems that promote inclusivity and ethical decision-making.
Keywords: AI bias, ethical concerns, fairness in AI, algorithmic bias, data preprocessing, transparency, model auditing, accountability, mitigation strategies
13

Yin, Maxwell J., Boyu Wang, and Charles Ling. "MABR: Multilayer Adversarial Bias Removal Without Prior Bias Knowledge." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 24 (2025): 25724–32. https://doi.org/10.1609/aaai.v39i24.34764.

Abstract:
Models trained on real-world data often mirror and exacerbate existing social biases. Traditional methods for mitigating these biases typically require prior knowledge of the specific biases to be addressed, and the social groups associated with each instance. In this paper, we introduce a novel adversarial training strategy that operates without relying on prior bias-type knowledge (e.g., gender or racial bias) and protected attribute labels. Our approach dynamically identifies biases during model training by utilizing auxiliary bias detectors. These detected biases are simultaneously mitigated through adversarial training. Crucially, we implement these bias detectors at various levels of the feature maps of the main model, enabling the detection of a broader and more nuanced range of bias features. Through experiments on racial and gender biases in sentiment and occupation classification tasks, our method effectively reduces social biases without the need for demographic annotations. Moreover, our approach not only matches but often surpasses the efficacy of methods that require detailed demographic insights, marking a significant advancement in bias mitigation techniques.
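A common way to implement the kind of adversarial bias removal described here is a gradient reversal layer feeding an auxiliary bias-detector head. The PyTorch sketch below is a generic single-level illustration of that mechanism, not the MABR architecture itself.

```python
# Hedged sketch of adversarial bias removal with a gradient reversal layer:
# a small bias-detector head tries to predict a protected attribute from
# intermediate features, while reversed gradients push the encoder to remove
# that signal. Generic illustration only, not the MABR model.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DebiasedClassifier(nn.Module):
    def __init__(self, in_dim=32, hidden=16, n_classes=2, n_groups=2, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.task_head = nn.Linear(hidden, n_classes)
        self.bias_head = nn.Linear(hidden, n_groups)   # adversarial bias detector

    def forward(self, x):
        h = self.encoder(x)
        y_logits = self.task_head(h)
        a_logits = self.bias_head(GradReverse.apply(h, self.lambd))
        return y_logits, a_logits

model = DebiasedClassifier()
x = torch.randn(8, 32)
y_logits, a_logits = model(x)
loss = nn.CrossEntropyLoss()(y_logits, torch.randint(0, 2, (8,))) \
     + nn.CrossEntropyLoss()(a_logits, torch.randint(0, 2, (8,)))
loss.backward()  # encoder gradients coming from the bias head are reversed
```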
14

Prater, James, Konstantinos Kirytopoulos, and Tony Ma. "Optimism bias within the project management context." International Journal of Managing Projects in Business 10, no. 2 (2017): 370–85. http://dx.doi.org/10.1108/ijmpb-07-2016-0063.

Abstract:
Purpose: One of the major challenges for any project is to prepare and develop an achievable baseline schedule and thus set the project up for success, rather than failure. The purpose of this paper is to explore and investigate research outputs in one of the major causes, optimism bias, to identify problems with developing baseline schedules and analyse mitigation techniques and their effectiveness recommended by research to minimise the impact of this bias.
Design/methodology/approach: A systematic quantitative literature review was followed, examining Project Management Journals, documenting the mitigation approaches recommended and then reviewing whether these approaches were validated by research.
Findings: Optimism bias proved to be widely accepted as a major cause of unrealistic scheduling for projects, and there is a common understanding as to what it is and the effects that it has on original baseline schedules. Based upon this review, the most recommended mitigation method is Flyvbjerg’s “Reference class,” which has been developed based upon Kahneman’s “Outside View”. Both of these mitigation techniques are based upon using an independent third party to review the estimate. However, within the papers reviewed, apart from the engineering projects, there has been no experimental and statistically validated research into the effectiveness of this method. The majority of authors who have published on this topic are based in Europe.
Research limitations/implications: The short-listed papers for this review referred mainly to non-engineering projects which included information technology focussed ones. Thus, on one hand, empirical research is needed for engineering projects, while on the other hand, the lack of tangible evidence for the effectiveness of methods related to the alleviation of optimism bias issues calls for greater research into the effectiveness of mitigation techniques for not only engineering projects, but for all projects.
Originality/value: This paper documents the growth within the project management research literature over time on the topic of optimism bias. Specifically, it documents the various methods recommended to mitigate the phenomenon and highlights quantitatively the research undertaken on the subject. Moreover, it introduces paths for further research.
15

Vega-Gonzalo, María, and Panayotis Christidis. "Fair Models for Impartial Policies: Controlling Algorithmic Bias in Transport Behavioural Modelling." Sustainability 14, no. 14 (2022): 8416. http://dx.doi.org/10.3390/su14148416.

Abstract:
The increasing use of new data sources and machine learning models in transport modelling raises concerns with regards to potentially unfair model-based decisions that rely on gender, age, ethnicity, nationality, income, education or other socio-economic and demographic data. We demonstrate the impact of such algorithmic bias and explore the best practices to address it using three different representative supervised learning models of varying levels of complexity. We also analyse how the different kinds of data (survey data vs. big data) could be associated with different levels of bias. The methodology we propose detects the model’s bias and implements measures to mitigate it. Specifically, three bias mitigation algorithms are implemented, one at each stage of the model development pipeline—before the classifier is trained (pre-processing), when training the classifier (in-processing) and after the classification (post-processing). As these debiasing techniques have an inevitable impact on the accuracy of predicting the behaviour of individuals, the comparison of different types of models and algorithms allows us to determine which techniques provide the best balance between bias mitigation and accuracy loss for each case. This approach improves model transparency and provides an objective assessment of model fairness. The results reveal that mode choice models are indeed affected by algorithmic bias, and it is proven that the implementation of off-the-shelf mitigation techniques allows us to achieve fairer classification models.
16

Broadbent, Craig D. "Evaluating mitigation and calibration techniques for hypothetical bias in choice experiments." Journal of Environmental Planning and Management 57, no. 12 (2013): 1831–48. http://dx.doi.org/10.1080/09640568.2013.839447.

17

Pablo, Dalia Ortiz, Sushruth Badri, Erik Norén, and Christoph Nötzli. "Bias mitigation techniques in Image Classification: Fair Machine Learning in Human Heritage Collections." Journal of WSCG 31, no. 1-2 (2023): 53–62. http://dx.doi.org/10.24132/jwscg.2023.6.

Abstract:
A major problem with using automated classification systems is that if they are not engineered correctly and with fairness considerations, they could be detrimental to certain populations. Furthermore, while engineers have developed cutting-edge technologies for image classification, there is still a gap in the application of these models in human heritage collections, where data sets usually consist of low-quality pictures of people with diverse ethnicity, gender, and age. In this work, we evaluate three bias mitigation techniques using two state-of-the-art neural networks, Xception and EfficientNet, for gender classification. Moreover, we explore the use of transfer learning using a fair data set to overcome the training data scarcity. We evaluated the effectiveness of the bias mitigation pipeline on a cultural heritage collection of photographs from the 19th and 20th centuries, and we used the FairFace data set for the transfer learning experiments. After the evaluation, we found that transfer learning is a good technique that allows better performance when working with a small data set. Moreover, the fairest classifier was found to be accomplished using transfer learning, threshold change, re-weighting and image augmentation as bias mitigation methods.
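Of the mitigation methods listed (threshold change, re-weighting, image augmentation), the threshold change is the simplest to illustrate: pick a separate decision threshold per group so positive-prediction rates line up. The sketch below uses synthetic scores, not the heritage collection data.

```python
# Simple illustration of per-group threshold adjustment: choose a separate
# decision threshold for each group so that positive-prediction rates roughly
# match. Scores and group labels are synthetic.
import numpy as np

def per_group_thresholds(scores, groups, target_rate):
    """Return a threshold per group giving approximately the target positive rate."""
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        thresholds[g] = np.quantile(s, 1.0 - target_rate)
    return thresholds

rng = np.random.default_rng(1)
scores = np.concatenate([rng.beta(2, 5, 500), rng.beta(5, 2, 500)])  # skewed per group
groups = np.array([0] * 500 + [1] * 500)

thr = per_group_thresholds(scores, groups, target_rate=0.3)
preds = np.array([scores[i] >= thr[groups[i]] for i in range(len(scores))]).astype(int)
for g in (0, 1):
    print(f"group {g}: threshold={thr[g]:.2f}, positive rate={preds[groups == g].mean():.2f}")
```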
18

Querol, J., A. Perez, and A. Camps. "A Review of RFI Mitigation Techniques in Microwave Radiometry." Remote Sensing 11, no. 24 (2019): 3042. http://dx.doi.org/10.3390/rs11243042.

Abstract:
Radio frequency interference (RFI) is a well-known problem in microwave radiometry (MWR). Any undesired signal overlapping the MWR protected frequency bands introduces a bias in the measurements, which can corrupt the retrieved geophysical parameters. This paper presents a literature review of RFI detection and mitigation techniques for microwave radiometry from space. The reviewed techniques are divided between real aperture and aperture synthesis. A discussion and assessment of the application of RFI mitigation techniques is presented for each type of radiometer.
19

Aninze, Ashionye. "Artificial Intelligence Life Cycle: The Detection and Mitigation of Bias." International Conference on AI Research 4, no. 1 (2024): 40–49. https://doi.org/10.34190/icair.4.1.3131.

Abstract:
The rapid expansion of Artificial Intelligence (AI) has outpaced the development of ethical guidelines and regulations, raising concerns about the potential for bias in AI systems. These biases in AI can manifest in real-world applications, leading to unfair or discriminatory outcomes in areas like job hiring, loan approvals or criminal justice predictions. For example, a biased AI model used for loan prediction may deny loans to qualified applicants based on demographic factors such as race or gender. This paper investigates the presence and mitigation of bias in Machine Learning (ML) models trained on the Adult Census Income dataset, known to have limitations in gender and race. Through comprehensive data analysis, focusing on sensitive attributes like gender, race and relationship status, this research sheds light on complex relationships between societal biases and algorithmic outcomes and how societal biases can be rooted in and amplified by ML algorithms. Utilising fairness metrics like demographic parity (DP) and equalised odds (EO), this paper quantifies the impact of bias on model predictions. The results demonstrated that biased datasets often lead to biased models even after applying pre-processing techniques. The effectiveness of mitigation techniques such as reweighting (Exponential Gradient (EG)) to reduce disparities was examined, resulting in a measurable reduction in bias disparities. However, these improvements came with trade-offs in accuracy and sometimes in other fairness metrics, highlighting the complex nature of bias mitigation and the need for careful consideration of ethical implications. The findings of this research highlight the critical importance of addressing bias at all stages of the AI life cycle, from data collection to model deployment. The limitation of this research, especially the use of EG, demonstrates the need for further development of bias mitigation techniques that can address complex relationships while maintaining accuracy. This paper concludes with recommendations for best practices in Artificial Intelligence development, emphasising the need for ongoing research and collaboration to mitigate bias by prioritising ethical considerations, transparency, explainability, and accountability to ensure fairness in AI systems.
20

Pasupuleti, Murali Krishna. "Detecting and Mitigating Algorithmic Discrimination in Recruitment Platforms." International Journal of Academic and Industrial Research Innovations (IJAIRI) 5, no. 5 (2025): 586–97. https://doi.org/10.62311/nesx/rphcr19.

Abstract:
The rapid adoption of artificial intelligence (AI) in recruitment has introduced efficiency gains but also raised concerns over fairness and bias. This study investigates the presence and mitigation of algorithmic discrimination in recruitment platforms through a data-driven and model-agnostic framework. Utilizing both synthetic and real-world recruitment datasets, the research applies logistic regression and random forest classifiers to detect disparities in hiring outcomes based on sensitive attributes such as gender and ethnicity. Fairness metrics—including Statistical Parity Difference (SPD) and Equal Opportunity Difference (EOD)—are employed to quantify bias before and after applying mitigation techniques. Among the techniques tested, adversarial debiasing demonstrated the most effective balance between fairness enhancement and predictive performance. Regression analysis revealed statistically significant negative coefficients for female and minority candidates, confirming bias in model decision-making. Post-mitigation results showed substantial improvement in fairness metrics (SPD improved from -0.28 to -0.05), with minimal compromise on accuracy. The findings underscore the necessity of integrating algorithmic audits and fairness-aware learning into AI-driven hiring systems. These insights contribute to the development of transparent, equitable recruitment platforms aligned with ethical AI principles and regulatory requirements.
Keywords: algorithmic discrimination, recruitment platforms, fairness metrics, statistical parity, adversarial debiasing, AI ethics, bias mitigation, logistic regression, equal opportunity, algorithmic accountability
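The regression check described in the abstract, inspecting the sign and significance of coefficients on protected attributes, can be sketched as follows with synthetic hiring data; the feature names are invented for the example.

```python
# Hedged sketch of a coefficient audit on synthetic hiring data: fit a logistic
# model and inspect the coefficient on the protected attribute. The real
# study's features and data differ.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
female = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)
# Synthetic outcome with a small negative effect of 'female' built in.
logits = 0.8 * skill - 0.5 * female
hired = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X = sm.add_constant(np.column_stack([skill, female]))
result = sm.Logit(hired, X).fit(disp=0)
for name, coef, p in zip(["const", "skill", "female"], result.params, result.pvalues):
    print(f"{name:7s} coef={coef:+.3f}  p={p:.4f}")
# A significant negative 'female' coefficient would flag a disparity to audit.
```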
21

Maripova, Tamanno. "Mitigating Algorithmic Bias in Predictive Models." American Journal of Engineering and Technology 7, no. 5 (2025): 192–201. https://doi.org/10.37547/tajet/volume07issue05-19.

Abstract:
This article considers the issue of systematic errors in predictive machine-learning models generating disparate outcomes for different social groups and proposes a holistic approach to its mitigation. The risks and increasing legal requirements, along with corporate commitments to ethical AI, drive the relevance of this study. The work develops a bias-source taxonomy covering the data collection and annotation, proxy-feature selection, model training, and deployment stages, and compares the effectiveness of pre-, in-, and post-processing methods on representative datasets, measured by demographic parity, equalized error rates, and disparate impact. The article's novelty lies in its two-level approach: first, a systematic review of regulatory definitions (NIST, IBM) and case studies (COMPAS, healthcare-service prediction, face recognition) that identified key bias factors from sample imbalance to feedback loops; second, an empirical comparison of Reweighing, adversarial debiasing, and threshold post-processing techniques alongside flexible multi-objective strategies such as YODO (via the AI Fairness 360 and Fairlearn libraries), considering acceptable accuracy losses. The root source of unfairness remains data bias; hence, pre-processing must be undertaken (rebalancing, synthetic oversampling), while in- and post-processing can substantially harmonize group metrics at some cost in accuracy. Furthermore, without continuous online monitoring and documentation (datasheets, model cards), a balanced model risks losing fairness due to dynamic feedback effects. Combining technical fixes with regulation and formalizing the audit process ensures reproducibility and transparency, which are key to long-term trust in AI systems. This article will help machine-learning practitioners, responsible-AI specialists, and auditors identify, measure, and mitigate algorithmic bias in deployed models.
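The threshold post-processing step compared in the article can be sketched with the Fairlearn library it cites; the dataset below is synthetic, and the parameter choices (demographic parity constraint, probability-based thresholding) are illustrative assumptions rather than the article's exact setup.

```python
# Sketch of threshold post-processing with Fairlearn's ThresholdOptimizer on
# synthetic data. Column choices and the constraint are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.postprocessing import ThresholdOptimizer

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
sensitive = rng.integers(0, 2, 1000)
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=1000) > 0).astype(int)

postproc = ThresholdOptimizer(
    estimator=LogisticRegression(),
    constraints="demographic_parity",   # could also be "equalized_odds"
    predict_method="predict_proba",
)
postproc.fit(X, y, sensitive_features=sensitive)
y_fair = postproc.predict(X, sensitive_features=sensitive)
print("positive rate by group:",
      [y_fair[sensitive == g].mean() for g in (0, 1)])
```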
22

Ravichandran, Nischal, Anil Chowdary Inaganti, Senthil Kumar Sundaramurthy, and Rajendra Muppalaneni. "Bias and Fairness in Machine Learning: A Systematic Review of Mitigation Techniques." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 9, no. 2 (2018): 753–87. https://doi.org/10.61841/turcomat.v9i2.15141.

Abstract:
Bias and fairness in machine learning (ML) algorithms are critical concerns that impact decision-making processes across various domains, including healthcare, finance, and criminal justice. This systematic review explores the state-of-the-art mitigation techniques employed to address bias and ensure fairness in ML systems. The review identifies and categorizes methods into pre-processing, in-processing, and post-processing strategies, while analyzing their effectiveness and limitations. Key findings indicate that although significant progress has been made, challenges remain in balancing fairness with other performance metrics such as accuracy and efficiency. The review highlights the need for more standardized benchmarks and improved algorithms that provide equitable outcomes without compromising system performance. We provide insights into future directions for enhancing fairness across machine learning models.
23

Kambhampati, Aditya. "Mitigating bias in financial decision systems through responsible machine learning." World Journal of Advanced Engineering Technology and Sciences 15, no. 2 (2025): 1415–21. https://doi.org/10.30574/wjaets.2025.15.2.0687.

Abstract:
Algorithmic bias in financial decision systems perpetuates and sometimes amplifies societal inequities, affecting millions of consumers through discriminatory lending practices, inequitable pricing, and exclusionary fraud detection. Minority borrowers face interest rate premiums that collectively cost communities hundreds of millions of dollars annually, while technological barriers to financial inclusion affect tens of millions of "credit invisible" Americans. This article provides a comprehensive framework for detecting, measuring, and mitigating algorithmic bias across the machine learning development lifecycle in financial services. Through examination of statistical fairness metrics, technical mitigation strategies, feature engineering approaches, and regulatory considerations, the article demonstrates that financial institutions can significantly reduce discriminatory outcomes while maintaining model performance. Pre-processing techniques like reweighing and data transformation, in-processing methods such as adversarial debiasing, and post-processing adjustments including threshold optimization provide complementary strategies that together constitute effective bias mitigation. Feature selection emerges as particularly impactful, with proxy variable detection and alternative data integration expanding opportunities for underserved populations. As regulatory expectations evolve toward mandatory fairness testing and explainability requirements, financial institutions implementing comprehensive fairness frameworks not only reduce compliance risks but also expand market opportunities through more inclusive algorithmic systems.
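The proxy-variable detection step mentioned above can be approximated with a simple screen: measure how strongly each candidate feature predicts the protected attribute. The feature names and data below are invented for illustration.

```python
# Illustrative proxy-variable screen: correlate each candidate feature with
# the protected attribute as a first-pass audit. Data and names are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 2000
race_minority = rng.integers(0, 2, n)
df = pd.DataFrame({
    "zip_code_income": rng.normal(60, 15, n) - 10 * race_minority,  # strong proxy
    "debt_to_income":  rng.normal(0.3, 0.1, n),                     # weak proxy
    "loan_amount":     rng.normal(20, 5, n),
})

# Point-biserial correlation with the protected attribute as a quick screen.
proxy_scores = df.apply(lambda col: abs(np.corrcoef(col, race_minority)[0, 1]))
print(proxy_scores.sort_values(ascending=False))
# Features with high correlation deserve scrutiny or removal before training.
```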
24

Cibaca, Khandelwal. "Bias Mitigation Strategies in AI Models for Financial Data." International Journal of Innovative Research and Creative Technology 11, no. 2 (2025): 1–9. https://doi.org/10.5281/zenodo.15318708.

Abstract:
Artificial intelligence (AI) has become integral to financial systems, enabling automation in credit scoring, fraud detection, and investment management. However, the presence of bias in AI models can propagate systemic inequities, leading to ethical, operational, and regulatory challenges. This paper examines strategies to mitigate bias in AI systems applied to financial data. It discusses challenges associated with biased datasets, feature selection, and algorithmic decisions, alongside practical mitigation approaches such as data balancing, algorithmic fairness techniques, and post-processing adjustments. Insights from case studies demonstrate the real-world application of these strategies, highlighting their effectiveness in promoting fairness, enhancing transparency, and reducing adverse outcomes. By providing a comprehensive framework, this paper contributes to fostering equitable financial decision-making.
25

Shahul Hameed, Mohamed Ashik, Asifa Mehmood Qureshi, and Abhishek Kaushik. "Bias Mitigation via Synthetic Data Generation: A Review." Electronics 13, no. 19 (2024): 3909. http://dx.doi.org/10.3390/electronics13193909.

Abstract:
Artificial intelligence (AI) is widely used in healthcare applications to perform various tasks. Although these models have great potential to improve the healthcare system, they have also raised significant ethical concerns, including biases that increase the risk of health disparities in medical applications. The under-representation of a specific group can lead to bias in the datasets that are being replicated in the AI models. These disadvantaged groups are disproportionately affected by bias because they may have less accurate algorithmic forecasts or underestimate the need for treatment. One solution to eliminate bias is to use synthetic samples or artificially generated data to balance datasets. Therefore, the purpose of this study is to review and evaluate how synthetic data can be generated and used to mitigate biases, specifically focusing on the medical domain. We explored high-quality peer-reviewed articles that were focused on synthetic data generation to eliminate bias. These studies were selected based on our defined inclusion criteria and exclusion criteria and the quality of the content. The findings reveal that generated synthetic data can help improve accuracy, precision, and fairness. However, the effectiveness of synthetic data is closely dependent on the quality of the data generation process and the initial datasets used. The study also highlights the need for continuous improvement in synthetic data generation techniques and the importance of evaluation metrics for fairness in AI models.
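As a minimal stand-in for the generative approaches this review surveys, the sketch below uses SMOTE-style synthetic oversampling to rebalance an under-represented group before training; the data and group sizes are invented.

```python
# Minimal sketch of synthetic oversampling to rebalance an under-represented
# group. SMOTE stands in here for the more elaborate generative methods the
# review covers; all data is synthetic.
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(3)
X_major = rng.normal(0, 1, size=(900, 5))
X_minor = rng.normal(1, 1, size=(100, 5))   # under-represented group
X = np.vstack([X_major, X_minor])
group = np.array([0] * 900 + [1] * 100)

X_bal, group_bal = SMOTE(random_state=0).fit_resample(X, group)
print("before:", Counter(group), "after:", Counter(group_bal))
```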
26

Siebert, Jana, and Johannes Ulrich Siebert. "Effective mitigation of the belief perseverance bias after the retraction of misinformation: Awareness training and counter-speech." PLOS ONE 18, no. 3 (2023): e0282202. http://dx.doi.org/10.1371/journal.pone.0282202.

Abstract:
The spread and influence of misinformation have become a matter of concern in society as misinformation can negatively impact individuals’ beliefs, opinions and, consequently, decisions. Research has shown that individuals persevere in their biased beliefs and opinions even after the retraction of misinformation. This phenomenon is known as the belief perseverance bias. However, research on mitigating the belief perseverance bias after the retraction of misinformation has been limited. Only a few debiasing techniques with limited practical applicability have been proposed, and research on comparing various techniques in terms of their effectiveness has been scarce. This paper contributes to research on mitigating the belief perseverance bias after the retraction of misinformation by proposing counter-speech and awareness-training techniques and comparing them in terms of effectiveness to the existing counter-explanation technique in an experiment with N = 251 participants. To determine changes in opinions, the extent of the belief perseverance bias and the effectiveness of the debiasing techniques in mitigating the belief perseverance bias, we measure participants’ opinions four times in the experiment by using Likert items and phi-coefficient measures. The effectiveness of the debiasing techniques is assessed by measuring the difference between the baseline opinions before exposure to misinformation and the opinions after exposure to a debiasing technique. Further, we discuss the efforts of the providers and recipients of debiasing and the practical applicability of the debiasing techniques. The CS technique, with a very large effect size, is the most effective among the three techniques. The CE and AT techniques, with medium effect sizes, are close to being equivalent in terms of their effectiveness. The CS and AT techniques are associated with less cognitive and time effort of the recipients of debiasing than the CE technique, while the AT and CE techniques require less effort from the providers of debiasing than the CS technique.
27

Chu, Charlene, Simon Donato-Woodger, Shehroz Khan, et al. "STRATEGIES TO MITIGATE MACHINE LEARNING BIAS AFFECTING OLDER ADULTS: RESULTS FROM A SCOPING REVIEW." Innovation in Aging 7, Supplement_1 (2023): 717–18. http://dx.doi.org/10.1093/geroni/igad104.2325.

Abstract:
Digital ageism, defined as age-related bias in artificial intelligence (AI) and technological systems, has emerged as a significant concern for its potential impact on society, health, equity, and older people’s well-being. This scoping review aims to identify mitigation strategies used in research studies to address age-related bias in machine learning literature. We conducted a scoping review following Arksey & O’Malley’s methodology, and completed a comprehensive search strategy of five databases (Web of Science, CINAHL, EMBASE, IEEE Xplore, and ACM digital library). Articles were included if there was an AI application, age-related bias, and the use of a mitigation strategy. Efforts to mitigate digital ageism were sparse: our search generated 7595 articles, but only a limited number of them met the inclusion criteria. Upon screening, we identified only nine papers which attempted to mitigate digital ageism. Of these, eight involved computer vision models (facial, age prediction, brain age) while one predicted activity based on accelerometer and vital sign measurements. Three broad categories of approaches to mitigating bias in AI were identified: i) sample modification: creating a smaller, more balanced sample from the existing dataset; ii) data augmentation: modifying images to create more training data from the existing datasets without adding additional images; and iii) application of statistical or algorithmic techniques to reduce bias. Digital ageism is a newly-established topic of research, and can affect machine learning models through multiple pathways. Our results advance research on digital ageism by presenting the challenges and possibilities for mitigating digital ageism in machine learning models.
28

Fajar, Jonny. "Approaches for identifying and managing publication bias in meta-analysis." Deka in Medicine 1, no. 1 (2024): e865. http://dx.doi.org/10.69863/dim.v1i1.1.

Abstract:
The consequences of publication bias in meta-analysis pose significant risks, potentially leading to erroneous conclusions within the meta-analytic framework. The objective of this article was to explore the methodologies for identifying publication bias and approaches for mitigating its effects. The techniques employed to detect publication bias can generally be distinguished into two major categories: graphical and statistical methodologies. Graphical approaches utilize techniques such as funnel plots and meta-plots, which visually depict the distribution of effect sizes and standard errors across studies. Statistical methods encompass various computations, including Fail-Safe N, rank correlation, Egger regression, tests for excess significance (TES), and selection models tailored for evaluating publication bias through quantitative analyses. The combination of these methods is recommended for a more comprehensive assessment, rather than relying on individual approaches. Methods for addressing publication bias include the trim and fill (T&F) method, Publication Error and True Effect Size Estimation (PET-PEESE) method, and the Weight-Function Model, each offering unique strategies for adjusting effect size estimates. The selection of these methods should consider the specific characteristics of the meta-analysis under consideration, ensuring the most appropriate approach is employed. Publication bias poses a significant risk in the field of meta-analysis, and selecting methods for its identification and mitigation requires comprehensive consideration.
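Of the statistical detection methods listed, Egger's regression test has a compact worked form: regress the standardized effect on precision and test whether the intercept differs from zero. The effect sizes below are invented for illustration.

```python
# Worked sketch of Egger's regression test: regress the standardized effect
# (effect / SE) on precision (1 / SE); a non-zero intercept suggests
# small-study or publication bias. Effect sizes are invented.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.42, 0.31, 0.55, 0.20, 0.65, 0.12, 0.48, 0.70])
ses     = np.array([0.10, 0.12, 0.18, 0.08, 0.25, 0.07, 0.15, 0.30])

z = effects / ses          # standardized effects
precision = 1.0 / ses
model = sm.OLS(z, sm.add_constant(precision)).fit()
intercept, slope = model.params
print(f"Egger intercept = {intercept:.2f} (p = {model.pvalues[0]:.3f})")
# An intercept far from zero with a small p-value points to funnel-plot asymmetry.
```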
29

Mishra, Isha, Vedika Kashyap, Nancy Yadav, and Ritu Pahwa. "Harmonizing Intelligence: A Holistic Approach to Bias Mitigation in Artificial Intelligence (AI)." International Research Journal on Advanced Engineering Hub (IRJAEH) 2, no. 7 (2024): 1978–85. http://dx.doi.org/10.47392/irjaeh.2024.0270.

Abstract:
Artificial intelligence (AI) is transforming the way we interact with data, leading to a growing concern about bias. This study aims to address this issue by developing intelligent algorithms that can identify and prevent new biases in AI systems. The strategy involves combining innovative machine-learning techniques, ethical considerations, and interdisciplinary perspectives to address bias at various stages, including data collection, model training, and decision-making processes. The proposed strategy uses robust model evaluation techniques, adaptive learning strategies, and fairness-aware machine learning algorithms to ensure AI systems function fairly across diverse demographic groups. The paper also highlights the importance of diverse and representative datasets and the inclusion of underrepresented groups in training. The goal is to develop AI models that reduce prejudice while maintaining moral norms, promoting user acceptance and trust. Empirical evaluations and case studies demonstrate the effectiveness of this approach, contributing to the ongoing conversation about bias reduction in AI.
30

Badr, Youakim, and Rahul Sharma. "Data Transparency and Fairness Analysis of the NYPD Stop-and-Frisk Program." Journal of Data and Information Quality 14, no. 2 (2022): 1–14. http://dx.doi.org/10.1145/3460533.

Abstract:
Given the increased concern of racial disparities in the stop-and-frisk programs, the New York Police Department (NYPD) requires publicly displaying detailed data for all the stops conducted by police authorities, including the suspected offense and race of the suspects. By adopting a public data transparency policy, it becomes possible to investigate racial biases in stop-and-frisk data and demonstrate the benefit of data transparency to approve or disapprove social beliefs and police practices. Thus, data transparency becomes a crucial need in the era of Artificial Intelligence (AI), where police and justice increasingly use different AI techniques not only to understand police practices but also to predict recidivism, crimes, and terrorism. In this study, we develop a predictive analytics method, including bias metrics and bias mitigation techniques, to analyze the NYPD Stop-and-Frisk datasets and discover whether underlying bias patterns are responsible for stops and arrests. In addition, we perform a fairness analysis on two protected attributes, namely, the race and the gender, and investigate their impacts on arrest decisions. We also apply bias mitigation techniques. The experimental results show that the NYPD Stop-and-Frisk dataset is not biased toward colored and Hispanic individuals and thus law enforcement authorities can apply the bias predictive analytics method to inculcate more fair decisions before making any arrests.
31

Pagano, Tiago P., Rafael B. Loureiro, Fernanda V. N. Lisboa, et al. "Bias and Unfairness in Machine Learning Models: A Systematic Review on Datasets, Tools, Fairness Metrics, and Identification and Mitigation Methods." Big Data and Cognitive Computing 7, no. 1 (2023): 15. http://dx.doi.org/10.3390/bdcc7010015.

Abstract:
One of the difficulties of artificial intelligence is to ensure that model decisions are fair and free of bias. In research, datasets, metrics, techniques, and tools are applied to detect and mitigate algorithmic unfairness and bias. This study examines the current knowledge on bias and unfairness in machine learning models. The systematic review followed the PRISMA guidelines and is registered on the OSF platform. The search was carried out between 2021 and early 2022 in the Scopus, IEEE Xplore, Web of Science, and Google Scholar knowledge bases and found 128 articles published between 2017 and 2022, of which 45 were chosen based on search string optimization and inclusion and exclusion criteria. We discovered that the majority of retrieved works focus on bias and unfairness identification and mitigation techniques, offering tools, statistical approaches, important metrics, and datasets typically used for bias experiments. In terms of the primary forms of bias, data, algorithm, and user interaction were addressed in connection to the preprocessing, in-processing, and postprocessing mitigation methods. The use of Equalized Odds, Opportunity Equality, and Demographic Parity as primary fairness metrics emphasizes the crucial role of sensitive attributes in mitigating bias. The 25 datasets chosen span a wide range of areas, including criminal justice, image enhancement, finance, education, product pricing, and health, with the majority including sensitive attributes. In terms of tools, Aequitas is the most often referenced, yet many of the tools were not employed in empirical experiments. A limitation of current research is the lack of multiclass and multimetric studies, which are found in just a few works and constrain the investigation to binary-focused methods. Furthermore, the results indicate that different fairness metrics do not present uniform results for a given use case, and that more research with varied model architectures is necessary to standardize which ones are more appropriate for a given context. We also observed that all research addressed the transparency of the algorithm, or its capacity to explain how decisions are taken.
32

Jayadi, Usman, Nur Sayidah, and Sri Utami Ady. "Psychology of Strategic Leadership: Unveiling Cognitive Biases and Emotional Influence." MORFAI Journal 4, no. 4 (2025): 1216–24. https://doi.org/10.54443/morfai.v4i4.2297.

Abstract:
Strategic leadership plays a crucial role in steering organizations through the complexities of a rapidly evolving global landscape. In today's competitive environment, effective leadership transcends informed decision-making and technical expertise, encompassing a deep understanding of the psychological dynamics that influence decisions. This study examines the psychological foundations of strategic leadership, emphasizing the impact of cognitive biases and emotional intelligence on leadership behaviors and decision-making processes. Key cognitive biases such as overconfidence, confirmation bias, and anchoring are explored for their potential to distort leaders' perceptions and strategic choices. Concurrently, the role of emotional intelligence, including self-awareness, self-regulation, empathy, and relationship management, is analyzed for its capacity to enhance leadership effectiveness by mitigating the adverse effects of these biases. Utilizing a qualitative approach, the research incorporates literature review, expert interviews, and case studies to investigate how cognitive biases disrupt decision-making and how emotional intelligence can counterbalance these distortions. Findings indicate that integrating emotional intelligence training and bias mitigation strategies into leadership development programs significantly improves strategic leadership effectiveness. However, challenges remain in consistently applying these techniques, particularly in high-pressure situations. Therefore, ongoing and dynamic leadership development initiatives are essential to ensure the effective application of emotional intelligence and cognitive bias mitigation in strategic decision-making.
33

Blow, Christina Hastings, Lijun Qian, Camille Gibson, Pamela Obiomon, and Xishuang Dong. "Comprehensive Validation on Reweighting Samples for Bias Mitigation via AIF360." Applied Sciences 14, no. 9 (2024): 3826. http://dx.doi.org/10.3390/app14093826.

Full text
Abstract:
Fairness Artificial Intelligence (AI) aims to identify and mitigate bias throughout the AI development process, spanning data collection, modeling, assessment, and deployment—a critical facet of establishing trustworthy AI systems. Tackling data bias through techniques like reweighting samples proves effective for promoting fairness. This paper undertakes a systematic exploration of reweighting samples for conventional Machine-Learning (ML) models, utilizing five models for binary classification on datasets such as Adult Income and COMPAS, incorporating various protected attributes. In particular, AI Fairness 360 (AIF360) from IBM, a versatile open-source library aimed at identifying and mitigating bias in machine-learning models throughout the entire AI application lifecycle, is employed as the foundation for conducting this systematic exploration. The evaluation of prediction outcomes employs five fairness metrics from AIF360, elucidating the nuanced and model-specific efficacy of reweighting samples in fostering fairness within traditional ML frameworks. Experimental results illustrate that reweighting samples effectively reduces bias in traditional ML methods for classification tasks. For instance, after reweighting samples, the balanced accuracy of Decision Tree (DT) improves to 100%, and its bias, as measured by fairness metrics such as Average Odds Difference (AOD), Equal Opportunity Difference (EOD), and Theil Index (TI), is mitigated to 0. However, reweighting samples does not effectively enhance the fairness performance of K Nearest Neighbor (KNN). This sheds light on the intricate dynamics of bias, underscoring the complexity involved in achieving fairness across different models and scenarios.
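The reweighting idea explored in the paper can be sketched directly: each (group, label) cell is weighted by its expected joint frequency over its observed one, the Kamiran–Calders scheme that AIF360's Reweighing preprocessor is built around. The minimal version below uses pandas; the column names and toy data are placeholders, not the Adult Income or COMPAS datasets used in the study.

```python
# Sketch of group/label reweighting (Kamiran & Calders style), assuming a
# binary protected attribute and binary label; columns are illustrative.
import pandas as pd

def reweighing_weights(df, group_col, label_col):
    n = len(df)
    weights = pd.Series(1.0, index=df.index)
    for g, n_g in df[group_col].value_counts().items():
        for y, n_y in df[label_col].value_counts().items():
            mask = (df[group_col] == g) & (df[label_col] == y)
            n_gy = mask.sum()
            if n_gy > 0:
                # expected joint frequency / observed joint frequency
                weights[mask] = (n_g * n_y) / (n * n_gy)
    return weights

df = pd.DataFrame({"sex":   [0, 0, 0, 1, 1, 1, 1, 1],
                   "label": [1, 0, 0, 1, 1, 1, 0, 1]})
df["w"] = reweighing_weights(df, "sex", "label")
print(df)
# The resulting weights can be passed as sample_weight to most scikit-learn
# estimators, which is the effect the AIF360 pipeline achieves.
```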
APA, Harvard, Vancouver, ISO, and other styles
34

Researcher. "ADVANCEMENTS IN REDUCING BIAS IN RECOMMENDATION SYSTEMS: A TECHNICAL OVERVIEW." International Journal of Computer Engineering and Technology (IJCET) 15, no. 6 (2024): 1139–46. https://doi.org/10.5281/zenodo.14328046.

Full text
Abstract:
This article explores recent advancements in addressing bias within recommendation systems, focusing on three key approaches: Ranking-based Equal Opportunity (RBEO), post-processing adjustments, and future challenges in implementation. The article examines how these methods effectively reduce demographic disparities while maintaining recommendation quality across various platforms. It investigates the technical architecture of RBEO, the flexibility of post-processing techniques, and the complex balance between fairness and system performance. The article also addresses critical challenges in scaling bias mitigation techniques and managing ethical considerations in algorithmic decision-making, providing insights into future directions for developing more equitable recommendation systems.
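For readers unfamiliar with post-processing adjustments in recommendation, the sketch below shows a generic fairness-aware re-ranking step that guarantees a minimum share of protected-group items in the top-k list while otherwise preserving score order. It is only an illustration of the post-processing idea, not the RBEO method discussed in the article; the threshold and candidate list are invented.

```python
# Generic re-ranking sketch: enforce a minimum protected-group share in top-k.
def rerank(items, k, min_protected_share=0.3):
    """items: list of (item_id, score, is_protected)."""
    items = sorted(items, key=lambda x: x[1], reverse=True)
    protected = [i for i in items if i[2]]
    others = [i for i in items if not i[2]]
    out = []
    while len(out) < k and (protected or others):
        # does the next slot need a protected item to keep the quota?
        need = sum(1 for i in out if i[2]) < min_protected_share * (len(out) + 1)
        pool = protected if (need and protected) else (others or protected)
        out.append(pool.pop(0))
    return out

candidates = [("a", 0.90, False), ("b", 0.85, False), ("c", 0.80, True),
              ("d", 0.70, False), ("e", 0.60, True)]
print(rerank(candidates, k=4))
```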
APA, Harvard, Vancouver, ISO, and other styles
35

Raftopoulos, George, Gregory Davrazos, and Sotiris Kotsiantis. "Evaluating Fairness Strategies in Educational Data Mining: A Comparative Study of Bias Mitigation Techniques." Electronics 14, no. 9 (2025): 1856. https://doi.org/10.3390/electronics14091856.

Full text
Abstract:
Ensuring fairness in machine learning models applied to educational data is crucial for mitigating biases that can reinforce systemic inequities. This paper compares various fairness-enhancing algorithms across preprocessing, in-processing, and post-processing stages. Preprocessing methods such as Reweighting, Learning Fair Representations, and Disparate Impact Remover aim to adjust training data to reduce bias before model learning. In-processing techniques, including Adversarial Debiasing and Prejudice Remover, intervene during model training to directly minimize discrimination. Post-processing approaches, such as Equalized Odds Post-Processing, Calibrated Equalized Odds Post-Processing, and Reject Option Classification, adjust model predictions to improve fairness without altering the underlying model. We evaluate these methods on educational datasets, examining their effectiveness in reducing disparate impact while maintaining predictive performance. Our findings highlight tradeoffs between fairness and accuracy, as well as the suitability of different techniques for various educational applications.
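As a small example of the post-processing family compared in the paper, the sketch below derives a separate decision threshold per group so that true-positive rates are approximately equalized (an equal-opportunity-style adjustment). The data is synthetic and the target rate is arbitrary; this is not the AIF360 implementation evaluated by the authors.

```python
# Group-specific thresholds chosen so each group reaches roughly the same TPR.
import numpy as np

def equal_opportunity_thresholds(scores, y_true, groups, target_tpr=0.8):
    thresholds = {}
    for g in np.unique(groups):
        pos_scores = scores[(groups == g) & (y_true == 1)]
        # threshold at the (1 - target_tpr) quantile of positive-class scores
        thresholds[g] = np.quantile(pos_scores, 1 - target_tpr)
    return thresholds

rng = np.random.default_rng(0)
groups = rng.integers(0, 2, 1000)
y_true = rng.integers(0, 2, 1000)
# simulate scores that are systematically lower for positives in group 1
scores = y_true * (0.7 - 0.2 * groups) + rng.normal(0, 0.2, 1000)

thr = equal_opportunity_thresholds(scores, y_true, groups)
y_pred = scores > np.vectorize(thr.get)(groups)
for g in (0, 1):
    mask = (groups == g) & (y_true == 1)
    print(f"group {g}: threshold={thr[g]:.3f}, TPR={y_pred[mask].mean():.2f}")
```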
APA, Harvard, Vancouver, ISO, and other styles
36

Sai, Mani Krishna Sistla, Periyasamy Vathsala, and Jeyaraman Jawaharbabu. "Explainable AI Techniques and Applications in Healthcare." International Journal of Innovative Science and Research Technology (IJISRT) 9, no. 2 (2024): 7. https://doi.org/10.5281/zenodo.10776780.

Full text
Abstract:
Explainable AI techniques are increasingly crucial in healthcare, where transparency and interpretability of artificial intelligence (AI) models are paramount. In domains like medical imaging and clinical decision-making, XAI serves to elucidate the rationale behind AI-driven decisions, emulating human reasoning to bolster trust and acceptance. However, the implementation of XAI in healthcare is not without challenges, including algorithmic bias, operational speed, and the necessity for multidisciplinary collaboration to navigate technical, legal, medical, and patient-centric considerations effectively. By providing explanations to healthcare professionals, XAI fosters trust and ensures the judicious use of AI tools. Overcoming issues such as bias in algorithmic outputs derived from data and interactions is essential for maintaining fairness in personalized medicine applications. Various XAI techniques, such as causal explanations and interactive interfaces, facilitate improved human-computer interactions by making AI decisions comprehensible and reliable. The development and deployment of XAI in clinical settings offer transparency to AI models but require concerted efforts to address practical concerns like speed, bias mitigation, and interdisciplinary cooperation to uphold the ethical and efficient utilization of AI in healthcare. Through the strategic application of XAI techniques, healthcare practitioners can leverage transparent and trustworthy AI systems to enhance decision-making processes and patient outcomes. Keywords: Explainable AI (XAI), Healthcare, Radiomics, Human Judgment, Medical Imaging, Deep Learning.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhou, Ren. "Empirical Study and Mitigation Methods of Bias in LLM-Based Robots." Academic Journal of Science and Technology 12, no. 1 (2024): 86–93. http://dx.doi.org/10.54097/re9qp070.

Full text
Abstract:
Our study provides a comprehensive analysis of biased behaviors exhibited by robots utilizing large language models (LLMs) in real-world applications, focusing on five experimental scenarios: customer service, education, healthcare, recruitment, and social interaction. The analysis reveals significant differences in user experiences based on race, health status, work experience, and social status. For instance, the average satisfaction score for white customers is 4.2, compared to 3.5 for black customers, and the response accuracy for white students is 92%, versus 85% for black students. To address these biases, we propose several mitigation methods, including data resampling, model regularization, post-processing techniques, diversity assessment, and user feedback mechanisms. These methods aim to enhance the fairness and inclusivity of robotic systems, promoting healthy human-robot interactions. By combining our quantitative data analysis with existing research, we affirm the importance of bias detection and mitigation, and propose various improvement strategies. Future research should further explore data balancing strategies, fairness-constrained models, real-time monitoring and adjustment mechanisms, and cross-domain studies to comprehensively evaluate and improve the performance of LLM-based robotic systems across various tasks.
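A first step toward the bias detection the authors advocate can be as simple as auditing interaction logs for group-level gaps, as in the hedged sketch below; the data frame is synthetic and only mimics the kind of satisfaction-score disparity reported in the study.

```python
# Illustrative audit: compare mean satisfaction scores across demographic
# groups and flag large gaps. All values are synthetic placeholders.
import pandas as pd

logs = pd.DataFrame({
    "group": ["white", "white", "black", "black", "white", "black"],
    "satisfaction": [4.5, 4.0, 3.6, 3.4, 4.1, 3.5],
})
by_group = logs.groupby("group")["satisfaction"].agg(["mean", "count"])
gap = by_group["mean"].max() - by_group["mean"].min()
print(by_group)
print(f"max satisfaction gap: {gap:.2f}")
if gap > 0.5:
    print("gap exceeds tolerance -> trigger resampling / regularization review")
```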
APA, Harvard, Vancouver, ISO, and other styles
38

Mohammad Aljanabi. "Safeguarding Connected Health: Leveraging Trustworthy AI Techniques to Harden Intrusion Detection Systems Against Data Poisoning Threats in IoMT Environments." Babylonian Journal of Internet of Things 2023 (May 17, 2023): 31–37. http://dx.doi.org/10.58496/bjiot/2023/005.

Full text
Abstract:
Internet of Medical Things (IoMT) environments introduce vast security exposures including vulnerabilities to data poisoning threats that undermine integrity of automated patient health analytics like diagnosis models. This research explores applying trustworthy artificial intelligence (AI) methodologies including explainability, bias mitigation, and adversarial sample detection to substantially enhance resilience of medical intrusion detection systems. We architect an integrated anomaly detector featuring purpose-built modules for model interpretability, bias quantification, and advanced malicious input recognition alongside conventional classifier pipelines. Additional infrastructure provides full-lifecycle accountability via independent auditing. Our experimental intrusion detection system design embodying multiple trustworthy AI principles is rigorously evaluated against staged electronic record poisoning attacks emulating realistic threats to healthcare IoMT ecosystems spanning wearables, edge devices, and hospital information systems. Results demonstrate significantly strengthened threat response capabilities versus baseline detectors lacking safeguards. Explainability mechanisms build justified trust in model behaviors by surfacing rationale for each prediction to human operators. Continuous bias tracking enables preemptively identifying and mitigating unfair performance gaps before they widen into operational exposures over time. SafeML classifiers reliably detect even camouflaged data manipulation attempts with 97% accuracy. Together the integrated modules restore classification performance to baseline levels even when overwhelmed with 30% contaminated data across all samples. Findings strongly motivate prioritizing adoption of ethical ML practices to fulfill duty of care around patient safety and data integrity as algorithmic capabilities advance.
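The paper's poisoning defenses are purpose-built, but the underlying idea of screening training records before they reach the detector can be sketched with a generic unsupervised anomaly detector. The example below is an assumption-laden stand-in using scikit-learn's IsolationForest on synthetic telemetry, not the SafeML module described by the author.

```python
# Generic sketch: flag potentially poisoned training records before the IDS
# classifier ever sees them. Data is synthetic; 30% contamination mirrors the
# evaluation scenario mentioned in the abstract.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
clean = rng.normal(0, 1, size=(700, 8))        # legitimate telemetry features
poisoned = rng.normal(4, 0.5, size=(300, 8))   # shifted, attacker-injected rows
X = np.vstack([clean, poisoned])

detector = IsolationForest(contamination=0.3, random_state=0).fit(X)
flags = detector.predict(X)                    # -1 = anomalous, 1 = normal
print("flagged as suspicious:", int((flags == -1).sum()), "of", len(X))
# Suspicious rows would be quarantined for audit rather than used for training.
```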
APA, Harvard, Vancouver, ISO, and other styles
39

Dev, Sunipa, Tao Li, Jeff M. Phillips, and Vivek Srikumar. "On Measuring and Mitigating Biased Inferences of Word Embeddings." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 7659–66. http://dx.doi.org/10.1609/aaai.v34i05.6267.

Full text
Abstract:
Word embeddings carry stereotypical connotations from the text they are trained on, which can lead to invalid inferences in downstream models that rely on them. We use this observation to design a mechanism for measuring stereotypes using the task of natural language inference. We demonstrate a reduction in invalid inferences via bias mitigation strategies on static word embeddings (GloVe). Further, we show that for gender bias, these techniques extend to contextualized embeddings when applied selectively only to the static components of contextualized embeddings (ELMo, BERT).
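The debiasing applied to the static components can be illustrated with the standard projection step: estimate a gender direction from a definitional pair and remove that component from target word vectors. The sketch below uses random vectors as stand-ins for GloVe embeddings and a single he–she pair; published debiasing methods typically aggregate several definitional pairs.

```python
# Projection-based debiasing sketch for static embeddings.
import numpy as np

def remove_component(v, direction):
    direction = direction / np.linalg.norm(direction)
    return v - np.dot(v, direction) * direction

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["he", "she", "doctor", "nurse"]}

gender_direction = emb["he"] - emb["she"]
for word in ["doctor", "nurse"]:
    emb[word] = remove_component(emb[word], gender_direction)

# After projection, the debiased vectors carry (approximately) no component
# along the he-she axis, which is what reduces biased inferences downstream.
unit = gender_direction / np.linalg.norm(gender_direction)
print(np.dot(emb["doctor"], unit))  # ~0
```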
APA, Harvard, Vancouver, ISO, and other styles
40

de Castro Vieira, José Rômulo, Flavio Barboza, Daniel Cajueiro, and Herbert Kimura. "Towards Fair AI: Mitigating Bias in Credit Decisions—A Systematic Literature Review." Journal of Risk and Financial Management 18, no. 5 (2025): 228. https://doi.org/10.3390/jrfm18050228.

Full text
Abstract:
The increasing adoption of artificial intelligence algorithms is redefining decision-making across various industries. In the financial sector, where automated credit granting has undergone profound changes, this transformation raises concerns about biases perpetuated or introduced by AI systems. This study investigates the methods used to identify and mitigate biases in AI models applied to credit granting. We conducted a systematic literature review using the IEEE, Scopus, Web of Science, and Science Direct databases, covering the period from 1 January 2013 to 1 October 2024. From the 414 identified articles, 34 were selected for detailed analysis. Most studies are empirical and quantitative, focusing on fairness in outcomes and biases present in datasets. Preprocessing techniques dominated as the approach for bias mitigation, often relying on public academic datasets. Gender and race were the most studied sensitive attributes, with statistical parity being the most commonly used fairness metric. The findings reveal a maturing research landscape that prioritizes fairness in model outcomes and the mitigation of biases embedded in historical data. However, only a quarter of the papers report more than one fairness metric, limiting comparability across approaches. The literature remains largely focused on a narrow set of sensitive attributes, with little attention to intersectionality or alternative sources of bias. Furthermore, no study employed causal inference techniques to identify proxy discrimination. Despite some promising results—where fairness gains exceed 30% with minimal accuracy loss—significant methodological gaps persist, including the lack of standardized metrics, overreliance on legacy data, and insufficient transparency in model pipelines. Future work should prioritize developing advanced bias mitigation methods, exploring sensitive attributes, standardizing fairness metrics, improving model explainability, reducing computational complexity, enhancing synthetic data generation, and addressing the legal and ethical challenges of algorithms.
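Since statistical parity is the metric most often reported in the reviewed credit studies, here is a minimal, self-contained check expressed as the disparate impact ratio (the 80% rule); the approval data and column names are illustrative only.

```python
# Disparate impact ratio on toy credit decisions grouped by a sensitive attribute.
import pandas as pd

decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
    "approved": [0,    1,   0,   1,   1,   0,   1,   1],
})
rates = decisions.groupby("gender")["approved"].mean()
disparate_impact = rates.min() / rates.max()
print(rates)
print(f"disparate impact ratio: {disparate_impact:.2f}")
print("passes 80% rule" if disparate_impact >= 0.8 else "fails 80% rule")
```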
APA, Harvard, Vancouver, ISO, and other styles
41

Mohammed, K., and G. George. "IDENTIFICATION AND MITIGATION OF BIAS USING EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) FOR BRAIN STROKE PREDICTION." Open Journal of Physical Science (ISSN: 2734-2123) 4, no. 1 (2023): 19–33. http://dx.doi.org/10.52417/ojps.v4i1.457.

Full text
Abstract:
Stroke is a time-sensitive illness that, without rapid care and diagnosis, can have detrimental effects on the person. Caretakers need to enhance patient management by procedurally mining and storing the patient's medical records because of the increasing synergy between technology and medical diagnosis. Therefore, it is essential to explore how these risk variables interconnect with each other in patient health records and understand how they each individually affect stroke prediction. Using explainable Artificial Intelligence (XAI) techniques, we exposed the dataset imbalance and improved our model's accuracy: we showed how oversampling improves the model's performance and used XAI to investigate its decisions and oversample a specific feature for even better performance. We proposed explainable AI as a technique to improve model performance and to serve as a level of trustworthiness for practitioners, using four evaluation metrics: recall, precision, accuracy, and F1 score. The F1 score with the original data was 0% because the data were imbalanced, with non-stroke records significantly outnumbering stroke records; the second model achieved an F1 score of 81.78%. We then applied the explainable AI techniques Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) to analyse how the model reached its decisions, which led us to investigate and oversample a specific feature, yielding a new F1 score of 83.34%. We suggest the use of explainable AI as a technique to further investigate a model's method for decision-making.
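The oversampling step described in the abstract can be reproduced in outline with imbalanced-learn's SMOTE; the sketch below runs on a synthetic, heavily imbalanced dataset rather than the study's patient records, and it omits the LIME/SHAP inspection that guided the authors' feature-level oversampling.

```python
# SMOTE oversampling on a synthetic imbalanced binary classification problem.
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

print("before:", Counter(y_tr))
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
print("after: ", Counter(y_res))

clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
print("minority-class F1:", round(f1_score(y_te, clf.predict(X_te)), 3))
```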
APA, Harvard, Vancouver, ISO, and other styles
42

Para, Raghu K. "The Role of Explainable AI in Bias Mitigation for Hyper-personalization." Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023 6, no. 1 (2024): 625–35. https://doi.org/10.60087/jaigs.v6i1.289.

Full text
Abstract:
Hyper-personalization involves leveraging advanced data analytics and machine learning models to deliver highly personalized recommendations and consumer experiences. While these methods provide substantial user experience benefits, they raise ethical and technical concerns, notably the risk of propagating or escalating biases. As personalization algorithms become increasingly intricate and complex, biases may inadvertently shape the hyper-personalized content consumers receive, potentially reinforcing stereotypes, limiting exposure to diverse information, and entrenching social inequalities. Explainable AI (XAI) has emerged as a critical approach to enhance transparency, trust, and accountability in complex data models. By making the inner workings and decision-making processes of machine learning models more interpretable, XAI enables stakeholders, from developers to policy regulators and end-users, to detect and mitigate biases. This paper provides a comprehensive literature-driven exploration of how XAI methods can assist in bias identification, audits, and mitigation in hyper-personalized systems. We examine state-of-the-art explainability techniques, discuss their applicability, strengths, and limitations, highlight related fairness frameworks, and propose a conceptual roadmap for integrating XAI into hyper-personalized pipelines. We conclude with a discussion of future research directions and the need for interdisciplinary efforts to ensure ethical and inclusive hyper-personalization strategies.
APA, Harvard, Vancouver, ISO, and other styles
43

Augustin, NYEMBO MPAMPI. "Bias in Content-Generating AI Algorithms: Technical Analysis, Detection, And Mitigation with Python." International Journal of Mathematics and Computer Research 13, no. 04 (2025): 5087–95. https://doi.org/10.5281/zenodo.15245614.

Full text
Abstract:
Generative artificial intelligence models such as GPT-4, DALL·E, and Stable Diffusion are now essential tools for automated text and image production. However, these models are influenced by algorithmic biases resulting from training data, learning mechanisms, and user interactions. These biases, often unconscious, can have significant consequences by reinforcing social stereotypes, excluding certain populations, and altering the diversity of generated content. This article provides an in-depth technical analysis of the biases present in generative AI. We first explore their origins, highlighting the biases of training corpora, biases introduced by learning algorithms, and those induced by users. Then, we present methods for detecting and evaluating biases, using natural language processing (NLP), computer vision, and data modeling tools. Experiments in Python illustrate how these biases manifest themselves in text and image models. Finally, we propose bias mitigation strategies based on several technical approaches: data rebalancing, embedding debiasing, adjustment of cost functions, regulation of outputs, and model auditability. Integrating these techniques helps make AI models fairer and more transparent. The goal is to provide a pragmatic and rigorous approach to designing responsible generative AI models that respect the principles of fairness and diversity.
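In the spirit of the article's Python experiments, a very small detection sketch is shown below: count gendered words in model outputs grouped by occupation prompt. The outputs are hard-coded stand-ins; a real audit would query a generative model and use far larger samples.

```python
# Toy stereotype check: tally gendered words per occupation prompt.
from collections import Counter
import re

outputs = {
    "Describe a nurse":     ["She checked on her patients.", "She was caring."],
    "Describe an engineer": ["He designed the bridge.", "He fixed the code."],
}
MALE, FEMALE = {"he", "him", "his"}, {"she", "her", "hers"}

for prompt, texts in outputs.items():
    tokens = Counter(re.findall(r"[a-z']+", " ".join(texts).lower()))
    m = sum(tokens[w] for w in MALE)
    f = sum(tokens[w] for w in FEMALE)
    print(f"{prompt!r}: male terms={m}, female terms={f}")
# Systematic skews per occupation indicate the stereotype reinforcement that
# rebalancing and embedding-debiasing strategies aim to reduce.
```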
APA, Harvard, Vancouver, ISO, and other styles
44

Kolberg, Jascha, Yannik Schäfer, Christian Rathgeb, and Christoph Busch. "On the Potential of Algorithm Fusion for Demographic Bias Mitigation in Face Recognition." IET Biometrics 2024 (February 23, 2024): 1–18. http://dx.doi.org/10.1049/2024/1808587.

Full text
Abstract:
With the rise of deep neural networks, the performance of biometric systems has increased tremendously. Biometric systems for face recognition are now used in everyday life, e.g., border control, crime prevention, or personal device access control. Although the accuracy of face recognition systems is generally high, they are not without flaws. Many biometric systems have been found to exhibit demographic bias, resulting in different demographic groups being not recognized with the same accuracy. This is especially true for facial recognition due to demographic factors, e.g., gender and skin color. While many previous works already reported demographic bias, this work aims to reduce demographic bias for biometric face recognition applications. In this regard, 12 face recognition systems are benchmarked regarding biometric recognition performance as well as demographic differentials, i.e., fairness. Subsequently, multiple fusion techniques are applied with the goal to improve the fairness in contrast to single systems. The experimental results show that it is possible to improve the fairness regarding single demographics, e.g., skin color or gender, while improving fairness for demographic subgroups turns out to be more challenging.
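Score-level fusion, one of the techniques benchmarked in the paper, can be sketched in a few lines: normalise each system's comparison scores to a common range and average them before thresholding. The scores below are synthetic and only two systems are fused, whereas the paper evaluates twelve.

```python
# Min-max score normalisation followed by simple average fusion.
import numpy as np

def minmax(s):
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

scores_sys_a = np.array([0.31, 0.72, 0.55, 0.90, 0.12])   # system A comparison scores
scores_sys_b = np.array([410., 650., 500., 700., 300.])   # system B, different scale

fused = 0.5 * minmax(scores_sys_a) + 0.5 * minmax(scores_sys_b)
decisions = fused >= 0.5          # single fused acceptance threshold
print(fused.round(3), decisions)
# Fairness would then be assessed by comparing error rates of `decisions`
# across demographic groups, as in the paper's demographic-differential analysis.
```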
APA, Harvard, Vancouver, ISO, and other styles
45

Foka, Anna, and Gabriele Griffin. "AI, Cultural Heritage, and Bias: Some Key Queries That Arise from the Use of GenAI." Heritage 7, no. 11 (2024): 6125–36. http://dx.doi.org/10.3390/heritage7110287.

Full text
Abstract:
Our article AI, cultural heritage, and bias examines the challenges and potential solutions for using machine learning to interpret and classify human memory and cultural heritage artifacts. We argue that bias is inherent in cultural heritage collections (CHCs) and their digital versions and that AI pipelines may amplify this bias. We hypothesise that effective AI methods require vast, well-annotated datasets with structured metadata, which CHCs often lack due to diverse digitisation practices and limited interconnectivity. This paper discusses the definition of bias in CHCs and other datasets, exploring how it stems from training data and insufficient humanities expertise in generative platforms. We conclude that scholarship, guidelines, and policies on AI and CHCs should address bias as both inherent and augmented by AI technologies. We recommend implementing bias mitigation techniques throughout the process, from collection to curation, to support meaningful curation, embrace diversity, and cater to future heritage audiences.
APA, Harvard, Vancouver, ISO, and other styles
46

Gawali, Apurva. "Bias Checker AI Web Application: A Framework for Identifying Bias in AI Models." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem48266.

Full text
Abstract:
Artificial Intelligence (AI) models are widely deployed in decision-making systems, but they often exhibit bias due to skewed training data or inherent algorithmic issues. This paper presents a Bias Checker AI Web Application designed to analyze and detect biases in AI-generated outputs. The system uses natural language processing (NLP) and statistical analysis techniques to assess potential biases in text-based predictions. The web-based interface enables real-time bias evaluation, ensuring transparency and fairness in AI systems. The proposed system provides a user-friendly platform for developers and stakeholders to assess their models and mitigate discriminatory outcomes. Additionally, this paper explores the ethical implications of biased AI, potential mitigation techniques, and the importance of transparency in AI-driven decision-making processes. The issue of AI bias extends beyond technical flaws, influencing societal and economic structures by reinforcing stereotypes and discriminatory practices. Addressing bias in AI models is crucial for ensuring fairness in automated decision-making. As AI continues to permeate sectors like finance, healthcare, and law enforcement, biased models can perpetuate historical injustices, leading to tangible negative consequences for marginalized groups. This paper emphasizes the role of bias detection tools in fostering trust and accountability in AI applications. Furthermore, we discuss the significance of incorporating explainability in AI-driven bias detection. The Bias Checker AI Web Application aims to bridge the gap between technical bias analysis and user interpretability, ensuring that results are accessible to both developers and non-technical stakeholders. By integrating intuitive visualization tools and user feedback mechanisms, our system enhances the accessibility of bias detection methodologies. Keywords: Bias detection, AI fairness, Natural Language Processing, Machine Learning, Web Application, Ethical AI, Algorithmic Transparency, AI Ethics.
APA, Harvard, Vancouver, ISO, and other styles
47

Wagoner, Erika L., Eduardo Rozo, Xiao Fang, Martín Crocce, Jack Elvin-Poole, and Noah Weaverdyck. "Linear systematics mitigation in galaxy clustering in the Dark Energy Survey Year 1 Data." Monthly Notices of the Royal Astronomical Society 503, no. 3 (2021): 4349–62. http://dx.doi.org/10.1093/mnras/stab717.

Full text
Abstract:
We implement a linear model for mitigating the effect of observing conditions and other sources of contamination in galaxy clustering analyses. Our treatment improves upon the fiducial systematics treatment of the Dark Energy Survey (DES) Year 1 (Y1) cosmology analysis in four crucial ways. Specifically, our treatment (1) does not require decisions as to which observable systematics are significant and which are not, allowing for the possibility of multiple maps adding coherently to give rise to significant bias even if no single map leads to a significant bias by itself, (2) characterizes both the statistical and systematic uncertainty in our mitigation procedure, allowing us to propagate said uncertainties into the reported cosmological constraints, (3) explicitly exploits the full spatial structure of the galaxy density field to differentiate between cosmology-sourced and systematics-sourced fluctuations within the galaxy density field, and (4) is fully automated, and can therefore be trivially applied to any data set. The updated correlation function for the DES Y1 redMaGiC catalogue minimally impacts the cosmological posteriors from that analysis. Encouragingly, our analysis does improve the goodness-of-fit statistic of the DES Y1 3 × 2pt data set (Δχ² = −6.5 with no additional parameters). This improvement is due in nearly equal parts to both the change in the correlation function and the added statistical and systematic uncertainties associated with our method. We expect the difference in mitigation techniques to become more important in future work as the size of cosmological data sets grows.
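A schematic and heavily simplified version of the linear mitigation idea is sketched below: regress the observed galaxy overdensity on survey-property templates and subtract the fitted contamination. The mock maps, template count, and coefficients are invented; the actual DES pipeline additionally propagates the statistical and systematic uncertainty of this fit into the cosmological constraints.

```python
# Toy linear systematics mitigation: fit overdensity on survey-property maps.
import numpy as np

rng = np.random.default_rng(0)
npix = 5000
templates = rng.normal(size=(npix, 3))        # e.g. depth, seeing, airmass maps
true_delta = rng.normal(0, 0.05, npix)        # cosmological fluctuations
coeffs_true = np.array([0.04, -0.02, 0.01])   # injected contamination
observed = true_delta + templates @ coeffs_true

# least-squares fit of the observed overdensity on the systematics templates
coeffs_fit, *_ = np.linalg.lstsq(templates, observed, rcond=None)
cleaned = observed - templates @ coeffs_fit
print("fitted coefficients:", coeffs_fit.round(3))
print("residual std before/after:", observed.std().round(3), cleaned.std().round(3))
```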
APA, Harvard, Vancouver, ISO, and other styles
48

Gitonga, Charles Kinyua, Dennis Murithi, and Edna Chebet. "Mitigating Demographic Bias in ImageNet: A Comprehensive Analysis of Disparities and Fairness in Deep Learning Models." European Journal of Artificial Intelligence and Machine Learning 4, no. 2 (2025): 15–26. https://doi.org/10.24018/ejai.2025.4.2.51.

Full text
Abstract:
Deep learning has transformed artificial intelligence (AI), yet fairness concerns persist due to biases in training datasets. ImageNet, a key dataset in computer vision, contains demographic imbalances in its “person” categories, raising concerns about biased AI models. This study examines these biases, evaluates their impact on model performance, and implements fairness-aware mitigation strategies. Using a fine-tuned EfficientNet-B0 model, we achieved 98.44% accuracy. Subgroup analysis revealed higher error rates for darker-skinned individuals and women compared to lighter-skinned individuals and men. Mitigation techniques, including data augmentation and re-sampling, improved fairness metrics by 1.4% for underrepresented groups. Confidence analysis showed 99.25% accuracy for predictions with over 80% confidence. To enhance reproducibility, we deployed our demographic bias detection model on Hugging Face Spaces. The study’s limitations include a focus on “person” categories, computational constraints, and potential annotation biases. Future research should extend fairness-aware interventions across diverse datasets.
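The re-sampling mitigation mentioned in the abstract can be sketched with PyTorch's WeightedRandomSampler, which oversamples under-represented demographic groups during training; the tensors and the 90/10 group split below are synthetic placeholders, not the ImageNet “person” subsets used in the study.

```python
# Oversample a minority demographic group with inverse-frequency weights.
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

images = torch.randn(1000, 3, 32, 32)            # placeholder image tensors
labels = torch.randint(0, 10, (1000,))
groups = torch.cat([torch.zeros(900), torch.ones(100)]).long()  # 90/10 imbalance

group_counts = torch.bincount(groups).float()
sample_weights = (1.0 / group_counts)[groups]    # inverse-frequency per sample
sampler = WeightedRandomSampler(sample_weights, num_samples=len(groups),
                                replacement=True)
loader = DataLoader(TensorDataset(images, labels, groups),
                    batch_size=32, sampler=sampler)

batch = next(iter(loader))
print("minority share in one batch:", batch[2].float().mean().item())
```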
APA, Harvard, Vancouver, ISO, and other styles
49

Schwerk, Anne, and Armin Grasnick. "PHANTOMATRIX: Explainability for Detecting Gender Bias in Affective Computing." International Conference on Gender Research 8, no. 1 (2025): 515–18. https://doi.org/10.34190/icgr.8.1.3199.

Full text
Abstract:
The PHANTOMATRIX project is a research incubator running at the International University of Applied Sciences and aims to advance the field of Human-Machine Interaction by integrating machine learning (ML) techniques to predict emotional states using physiological and facial expression data within Virtual Reality environments. A major focus of the PHANTOMATRIX project is on employing trustworthy ML models by using explainable AI (XAI) methods that make it possible to rank features according to their predictive power, which aids in understanding the most influential factors in emotional state predictions. In addition, applying a comparative analysis of XAI techniques to emotion prediction models allows us to assess and correct for the effect of gender on predictive performance. As affective computing is a highly sensitive research arena, it is of the utmost importance to ensure bias-free models. Key XAI methods such as Deep Taylor Decomposition (DTD) and SHapley Additive exPlanations (SHAP) are employed to clarify the contributions of features towards model predictions, providing insights into how specific signals influence emotion detection across individuals. This allows for a comprehensive comparison of different XAI approaches and their utility in gender bias detection and mitigation. To further our understanding of gender dynamics within emotional predictions, we develop intuitive visualizations that graphically represent the link between multimodal input data and the resulting emotional predictions to support the interpretation of complex model outputs and to make them more accessible not only to researchers but also to novice users of the system. Our background research demonstrates the effectiveness of XAI methods in identifying and mitigating gender bias in emotion prediction models. By applying XAI, the project reduces the influence of gender-based disparities in affective computing, leading to more equitable model performance across demographics. This research not only highlights the importance of transparent, bias-free AI-affect models but also sets a foundation for future developments in responsible affective computing. The findings contribute to advancing trust in AI-driven emotion analysis, promoting fairer and more inclusive applications of this highly relevant technology.
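The project's gender-bias checks rely on SHAP and Deep Taylor Decomposition; as a simpler, model-agnostic stand-in, the sketch below compares permutation feature importances computed separately per gender subgroup, which surfaces the same kind of gender-dependent attributions. The features, data, and gender-correlated signal are synthetic assumptions, not PHANTOMATRIX data.

```python
# Per-gender feature-importance comparison as a proxy for XAI-based bias checks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1200
gender = rng.integers(0, 2, n)
heart_rate = rng.normal(70, 10, n)
skin_conductance = rng.normal(5, 1, n) + 0.8 * gender   # gender-correlated signal
X = np.column_stack([heart_rate, skin_conductance])
y = (heart_rate + 3 * skin_conductance + rng.normal(0, 3, n) > 86).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
for g in (0, 1):
    mask = gender == g
    imp = permutation_importance(model, X[mask], y[mask], n_repeats=10,
                                 random_state=0).importances_mean
    print(f"gender={g}: heart_rate={imp[0]:.3f}, skin_conductance={imp[1]:.3f}")
# Large differences between the two rows flag gender-dependent reliance on
# particular physiological features, prompting mitigation.
```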
APA, Harvard, Vancouver, ISO, and other styles
50

Zhou, Zhongwen, Yue Xi, Suchuan Xing, and Yizhe Chen. "Cultural Bias Mitigation in Vision-Language Models for Digital Heritage Documentation: A Comparative Analysis of Debiasing Techniques." Artificial Intelligence and Machine Learning Review 5, no. 3 (2024): 28–40. https://doi.org/10.69987/aimlr.2024.50303.

Full text
APA, Harvard, Vancouver, ISO, and other styles