Academic literature on the topic 'Bias mitigation techniques'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Bias mitigation techniques.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Bias mitigation techniques"

1

Gallaher, Joshua P., Alexander J. Kamrud, and Brett J. Borghetti. "Detection and Mitigation of Inefficient Visual Searching." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (2020): 47–51. http://dx.doi.org/10.1177/1071181320641015.

Full text
Abstract:
A commonly known cognitive bias is a confirmation bias: the overweighting of evidence supporting a hypothesis and underweighting evidence countering that hypothesis. Due to high-stress and fast-paced operations, military decisions can be affected by confirmation bias. One military decision task prone to confirmation bias is a visual search. During a visual search, the operator scans an environment to locate a specific target. If confirmation bias causes the operator to scan the wrong portion of the environment first, the search is inefficient. This study has two primary goals: 1) detect inefficient visual search using machine learning and Electroencephalography (EEG) signals, and 2) apply various mitigation techniques in an effort to improve the efficiency of searches. Early findings are presented showing how machine learning models can use EEG signals to detect when a person might be performing an inefficient visual search. Four mitigation techniques were evaluated: a nudge which indirectly slows search speed, a hint on how to search efficiently, an explanation for why the participant was receiving a nudge, and instructions to instruct the participant to search efficiently. These mitigation techniques are evaluated, revealing the most effective mitigations found to be the nudge and hint techniques.
APA, Harvard, Vancouver, ISO, and other styles
2

Siddique, Sunzida, Mohd Ariful Haque, Roy George, Kishor Datta Gupta, Debashis Gupta, and Md Jobair Hossain Faruk. "Survey on Machine Learning Biases and Mitigation Techniques." Digital 4, no. 1 (2023): 1–68. http://dx.doi.org/10.3390/digital4010001.

Full text
Abstract:
Machine learning (ML) has become increasingly prevalent in various domains. However, ML algorithms sometimes give unfair outcomes and discrimination against certain groups. Thereby, bias occurs when our results produce a decision that is systematically incorrect. At various phases of the ML pipeline, such as data collection, pre-processing, model selection, and evaluation, these biases appear. Bias reduction methods for ML have been suggested using a variety of techniques. By changing the data or the model itself, adding more fairness constraints, or both, these methods try to lessen bias. The best technique relies on the particular context and application because each technique has advantages and disadvantages. Therefore, in this paper, we present a comprehensive survey of bias mitigation techniques in machine learning (ML) with a focus on in-depth exploration of methods, including adversarial training. We examine the diverse types of bias that can afflict ML systems, elucidate current research trends, and address future challenges. Our discussion encompasses a detailed analysis of pre-processing, in-processing, and post-processing methods, including their respective pros and cons. Moreover, we go beyond qualitative assessments by quantifying the strategies for bias reduction and providing empirical evidence and performance metrics. This paper serves as an invaluable resource for researchers, practitioners, and policymakers seeking to navigate the intricate landscape of bias in ML, offering both a profound understanding of the issue and actionable insights for responsible and effective bias mitigation.
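The survey's taxonomy of pre-, in-, and post-processing methods is easiest to grasp from one concrete instance. Below is a minimal sketch of a classic pre-processing method in that family, Kamiran-and-Calders-style reweighing; the function name and toy data are our own illustration, not code from the paper.

```python
# Reweighing (pre-processing): weight each instance by
# w = P(s) * P(y) / P(s, y) so that the protected attribute s and the
# label y become statistically independent in the weighted data.
import numpy as np

def reweighing_weights(s: np.ndarray, y: np.ndarray) -> np.ndarray:
    w = np.empty(len(y), dtype=float)
    for s_val in np.unique(s):
        for y_val in np.unique(y):
            mask = (s == s_val) & (y == y_val)
            p_joint = mask.mean()
            if p_joint > 0:
                w[mask] = (s == s_val).mean() * (y == y_val).mean() / p_joint
    return w

# Toy usage: labels are skewed in favor of group s == 1.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, 1000)
y = (rng.random(1000) < np.where(s == 1, 0.7, 0.4)).astype(int)
weights = reweighing_weights(s, y)
# Pass `weights` as sample_weight to any classifier that supports it.
```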
APA, Harvard, Vancouver, ISO, and other styles
3

Devasenapathy, K., and Arun Padmanabhan. "Uncovering Bias: Exploring Machine Learning Techniques for Detecting and Mitigating Bias in Data – A Literature Review." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (2023): 776–81. http://dx.doi.org/10.17762/ijritcc.v11i9.8965.

Full text
Abstract:
The presence of Bias in models developed using machine learning algorithms has emerged as a critical issue. This literature review explores the topic of uncovering the existence of bias in data and the application of techniques for detecting and mitigating Bias. The review provides a comprehensive analysis of the existing literature, focusing on pre-processing techniques, post-pre-processing techniques, and fairness constraints employed to uncover and address the existence of Bias in machine learning models. The effectiveness, limitations, and trade-offs of these techniques are examined, highlighting their impact on advocating fairness and equity in decision-making processes.
 The methodology consists of two key steps: data preparation and bias analysis, followed by machine learning model development and evaluation. In the data preparation phase, the dataset is analyzed for biases and pre-processed using techniques like reweighting or relabeling to reduce bias. In the model development phase, suitable algorithms are selected, and fairness metrics are defined and optimized during the training process. The models are then evaluated using performance and fairness measures and the best-performing model is chosen. The methodology ensures a systematic exploration of machine learning techniques to detect and mitigate bias, leading to more equitable decision-making.
 The review begins by examining the techniques of pre-processing, which involve cleaning the data, selecting the features, feature engineering, and sampling. These techniques play an important role in preparing the data to reduce bias and promote fairness in machine learning models. The analysis highlights various studies that have explored the effectiveness of these techniques in uncovering and mitigating bias in data, contributing to the development of more equitable and unbiased machine learning models. Next, the review delves into post-pre-processing techniques that focus on detecting and mitigating bias after the initial data preparation steps. These techniques include bias detection methods that assess the disparate impact or disparate treatment in model predictions, as well as bias mitigation techniques that modify model outputs to achieve fairness across different groups. The evaluation of these techniques, their performance metrics, and potential trade-offs between fairness and accuracy are discussed, providing insights into the challenges and advancements in bias mitigation. Lastly, the review examines fairness constraints, which involve the imposition of rules or guidelines on machine learning algorithms to ensure fairness in predictions or decision-making processes. The analysis explores different fairness constraints, such as demographic parity, equalized odds, and predictive parity, and their effectiveness in reducing bias and advocating fairness in machine learning models. Overall, this literature review provides a comprehensive understanding of the techniques employed to uncover and mitigate the existence of bias in machine learning models. By examining pre-processing techniques, post-pre-processing techniques, and fairness constraints, the review contributes to the development of more fair and unbiased machine learning models, fostering equity and ethical decision-making in various domains. By examining relevant studies, this review provides insights into the effectiveness and limitations of various pre-processing techniques for bias detection and mitigation via Pre-processing, Adversarial learning, Fairness Constraints, and Post-processing techniques.
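Since the review leans on demographic parity, equalized odds, and predictive parity, here is a hedged sketch of how those three constraints are typically measured over binary predictions; all names and data are illustrative assumptions, not the review's code.

```python
import numpy as np

def selection_rate(y_pred, s, g):
    return y_pred[s == g].mean()          # demographic parity compares these

def tpr(y_true, y_pred, s, g):
    return y_pred[(s == g) & (y_true == 1)].mean()

def fpr(y_true, y_pred, s, g):
    return y_pred[(s == g) & (y_true == 0)].mean()

def ppv(y_true, y_pred, s, g):
    return y_true[(s == g) & (y_pred == 1)].mean()  # predictive parity compares these

rng = np.random.default_rng(0)
y_true, y_pred, s = (rng.integers(0, 2, 500) for _ in range(3))
dp_gap = abs(selection_rate(y_pred, s, 0) - selection_rate(y_pred, s, 1))
eo_gap = max(abs(tpr(y_true, y_pred, s, 0) - tpr(y_true, y_pred, s, 1)),
             abs(fpr(y_true, y_pred, s, 0) - fpr(y_true, y_pred, s, 1)))
pp_gap = abs(ppv(y_true, y_pred, s, 0) - ppv(y_true, y_pred, s, 1))
# A model satisfies each constraint when the corresponding gap is ~0.
```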
APA, Harvard, Vancouver, ISO, and other styles
4

Pasupuleti, Murali Krishna. "Bias and Fairness in Large Language Models: Evaluation and Mitigation Techniques." International Journal of Academic and Industrial Research Innovations (IJAIRI) 5, no. 5 (2025): 442–51. https://doi.org/10.62311/nesx/rphcr6.

Full text
Abstract:
Large Language Models (LLMs) such as GPT, BERT, and LLaMA have transformed natural language processing, yet they exhibit social biases that can reinforce unfair outcomes. This paper systematically evaluates bias and fairness in LLMs across gender, race, and socioeconomic dimensions using benchmark datasets and fairness metrics. We assess bias through template-based probing, stereotype score measurement, and downstream task performance. We implement mitigation strategies including adversarial training, counterfactual data augmentation, and fairness-aware loss functions. Regression and predictive analysis reveal that token frequency and representation distance significantly correlate with bias scores. Post-mitigation analysis shows up to a 48% reduction in bias indicators with minimal accuracy trade-offs.
Keywords: Large Language Models, Bias, Fairness, Evaluation, Mitigation, GPT, BERT, Counterfactual Augmentation, SHAP, LIME
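Of the mitigation strategies listed, counterfactual data augmentation is the most mechanical; the sketch below shows the core idea under a toy word list. Real CDA uses curated lexicons, and nothing here is taken from the paper's implementation.

```python
# Counterfactual data augmentation: pair each training sentence with a
# copy in which gendered terms are swapped, so the model sees both variants.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "man": "woman", "woman": "man", "father": "mother", "mother": "father"}

def counterfactual(sentence: str) -> str:
    return " ".join(SWAPS.get(tok.lower(), tok) for tok in sentence.split())

corpus = ["the doctor said he would call"]
augmented = corpus + [counterfactual(sent) for sent in corpus]
# augmented now also contains "the doctor said she would call";
# fine-tuning on both halves weakens the gender-occupation association.
```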
APA, Harvard, Vancouver, ISO, and other styles
5

Wongvorachan, Tarid, Okan Bulut, Joyce Xinle Liu, and Elisabetta Mazzullo. "A Comparison of Bias Mitigation Techniques for Educational Classification Tasks Using Supervised Machine Learning." Information 15, no. 6 (2024): 326. http://dx.doi.org/10.3390/info15060326.

Full text
Abstract:
Machine learning (ML) has become integral in educational decision-making through technologies such as learning analytics and educational data mining. However, the adoption of machine learning-driven tools without scrutiny risks perpetuating biases. Despite ongoing efforts to tackle fairness issues, their application to educational datasets remains limited. To address the mentioned gap in the literature, this research evaluates the effectiveness of four bias mitigation techniques in an educational dataset aiming at predicting students’ dropout rate. The overarching research question is: “How effective are the techniques of reweighting, resampling, and Reject Option-based Classification (ROC) pivoting in mitigating the predictive bias associated with high school dropout rates in the HSLS:09 dataset?” The effectiveness of these techniques was assessed based on performance metrics including false positive rate (FPR), accuracy, and F1 score. The study focused on the biological sex of students as the protected attribute. The reweighting technique was found to be ineffective, showing results identical to the baseline condition. Both uniform and preferential resampling techniques significantly reduced predictive bias, especially in the FPR metric but at the cost of reduced accuracy and F1 scores. The ROC pivot technique marginally reduced predictive bias while maintaining the original performance of the classifier, emerging as the optimal method for the HSLS:09 dataset. This research extends the understanding of bias mitigation in educational contexts, demonstrating practical applications of various techniques and providing insights for educators and policymakers. By focusing on an educational dataset, it contributes novel insights beyond the commonly studied datasets, highlighting the importance of context-specific approaches in bias mitigation.
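For readers unfamiliar with the ROC technique the study recommends, here is a minimal sketch of reject option-based classification as it is usually described (after Kamiran et al.); the band width and group coding are assumptions, not the study's settings.

```python
import numpy as np

def reject_option_classify(scores, s, unprivileged=0, theta=0.1):
    """scores: predicted P(y = 1); s: protected attribute per instance."""
    y_pred = (scores >= 0.5).astype(int)
    critical = np.abs(scores - 0.5) <= theta      # low-confidence "reject" band
    y_pred[critical & (s == unprivileged)] = 1    # favorable label to unprivileged
    y_pred[critical & (s != unprivileged)] = 0    # unfavorable to privileged
    return y_pred

# Only instances the classifier is unsure about are relabeled, which is why
# the technique tends to preserve overall accuracy, as the study observed.
```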
APA, Harvard, Vancouver, ISO, and other styles
6

Djebrouni, Yasmine, Nawel Benarba, Ousmane Touat, et al. "Bias Mitigation in Federated Learning for Edge Computing." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, no. 4 (2023): 1–35. http://dx.doi.org/10.1145/3631455.

Full text
Abstract:
Federated learning (FL) is a distributed machine learning paradigm that enables data owners to collaborate on training models while preserving data privacy. As FL effectively leverages decentralized and sensitive data sources, it is increasingly used in ubiquitous computing including remote healthcare, activity recognition, and mobile applications. However, FL raises ethical and social concerns as it may introduce bias with regard to sensitive attributes such as race, gender, and location. Mitigating FL bias is thus a major research challenge. In this paper, we propose Astral, a novel bias mitigation system for FL. Astral provides a novel model aggregation approach to select the most effective aggregation weights to combine FL clients' models. It guarantees a predefined fairness objective by constraining bias below a given threshold while keeping model accuracy as high as possible. Astral handles the bias of single and multiple sensitive attributes and supports all bias metrics. Our comprehensive evaluation on seven real-world datasets with three popular bias metrics shows that Astral outperforms state-of-the-art FL bias mitigation techniques in terms of bias mitigation and model accuracy. Moreover, we show that Astral is robust against data heterogeneity and scalable in terms of data size and number of FL clients. Astral's code base is publicly available.
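Astral's public code base is the authoritative reference; purely to illustrate the idea of choosing aggregation weights under a fairness bound, here is a generic sketch (not Astral's algorithm) that grid-searches mixing weights for two clients' parameter vectors and keeps the most accurate mixture whose bias stays below a threshold.

```python
import numpy as np

def aggregate(models, weights):
    """FedAvg-style weighted average of client parameter vectors."""
    return sum(w * m for w, m in zip(weights, models))

def select_weights(models, accuracy_fn, bias_fn, max_bias=0.05, steps=21):
    best, best_acc = None, -1.0
    for w in np.linspace(0.0, 1.0, steps):
        cand = aggregate(models, [w, 1.0 - w])
        if bias_fn(cand) <= max_bias and accuracy_fn(cand) > best_acc:
            best, best_acc = cand, accuracy_fn(cand)
    return best  # None if no mixture satisfies the fairness bound

# `accuracy_fn` and `bias_fn` are assumed callbacks that evaluate a candidate
# model on validation data and return accuracy and a bias metric, respectively.
```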
APA, Harvard, Vancouver, ISO, and other styles
7

McDaniel, Gail. "Understanding and Mitigating Bias and Noise in Data Collection, Imputation and Analysis." International Multidisciplinary Journal of Science, Technology and Business 4, no. 1 (2025): 1–29. https://doi.org/10.5281/zenodo.15004631.

Full text
Abstract:
Bias and noise in data significantly impact the accuracy and reliability of research findings and data-driven decision-making. This paper provides a comprehensive overview of various types of bias and noise affecting data quality, their impact on research and decision-making, and strategies for mitigation. We examine sampling bias, nonresponse bias, measurement bias, imputation bias, and analysis bias, as well as the role of noise as a source of bias. The paper also explores bias in survey design and interpretation, emphasizing the importance of careful question wording, structure, and consideration of cultural and linguistic factors. To address these issues, we propose several strategies, including appropriate sampling techniques, methods to encourage participation, improved measurement tools and protocols, suitable imputation methods, and transparent data analysis practices. We discuss the ethical implications of biased data and the responsibility of researchers, decision-makers, and institutions to prioritize bias and noise mitigation. The paper concludes by calling for future research on methodological tools for detecting and mitigating bias and noise. It stresses the need for interdisciplinary collaboration to ensure the integrity and trustworthiness of data-driven insights.
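The interplay of nonresponse bias and imputation bias the abstract describes can be seen in a few lines of synthetic data; this is our illustration, not the paper's example.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(10, 2, 10_000)                 # true variable, mean 10
p_miss = np.clip((x - 10) / 6, 0, 1)          # higher values go missing more often
observed = np.where(rng.random(10_000) < p_miss, np.nan, x)

naive_mean = np.nanmean(observed)             # biased low by nonresponse
imputed = np.where(np.isnan(observed), naive_mean, observed)
print(x.mean(), naive_mean, imputed.mean())
# Mean imputation simply reproduces the biased estimate: the imputation
# method must model the missingness mechanism, which is the paper's point.
```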
APA, Harvard, Vancouver, ISO, and other styles
8

Tripathi, Manish, and Raghav Agarwal. "Bias Mitigation in NLP: Automated Detection and Correction." International Journal of Research in Modern Engineering & Emerging Technology 13, no. 5 (2025): 45–60. https://doi.org/10.63345/ijrmeet.org.v13.i5.130503.

Full text
Abstract:
Natural Language Processing (NLP) systems have shown remarkable capabilities, but they often inherit biases from the datasets they are trained on, resulting in outcomes that can be unfair or even harmful. These biases can appear in different forms, such as those related to gender, race, or socioeconomic status. Addressing and mitigating bias in NLP has become a critical area of research, aiming to ensure that machine learning models generate fair and impartial results. This paper delves into the automation of bias detection and correction within NLP systems. It reviews current methods for identifying biases, including fairness metrics, sensitivity analyses, and adversarial testing. Additionally, it examines techniques for mitigating bias, such as data augmentation, algorithmic adjustments, and post-processing methods. The paper also discusses the limitations and challenges of these approaches, emphasizing the balance between maintaining accuracy and promoting fairness. Lastly, it explores potential research directions, such as embedding ethical considerations into model development and establishing more comprehensive frameworks for continuous bias detection and mitigation.
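One family of detection methods the paper reviews scores associations in the model's representation space (WEAT-style association tests); the sketch below uses random vectors as stand-ins for real embeddings, so only the computation, not the numbers, is meaningful.

```python
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect(X, Y, A, B):
    """Effect size: how differently target sets X, Y relate to attributes A, B."""
    x = [association(w, A, B) for w in X]
    y = [association(w, A, B) for w in Y]
    return (np.mean(x) - np.mean(y)) / np.std(x + y, ddof=1)

rng = np.random.default_rng(1)
vec = lambda: rng.normal(size=50)
X, Y = [vec() for _ in range(4)], [vec() for _ in range(4)]  # e.g. career/family terms
A, B = [vec() for _ in range(4)], [vec() for _ in range(4)]  # e.g. male/female terms
print(weat_effect(X, Y, A, B))  # ~0 for random vectors; large |value| flags bias
```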
APA, Harvard, Vancouver, ISO, and other styles
9

Gujar, Harshila. "Addressing Unconscious Bias: Tools and Techniques to Mitigate Bias in the Workplace." Journal of Scientific and Engineering Research 11, no. 4 (2024): 351–54. https://doi.org/10.5281/zenodo.13604160.

Full text
Abstract:
Unconscious bias in the workplace can undermine diversity and inclusion efforts, leading to inequitable outcomes and a less inclusive work environment. This book explores the tools and techniques to identify, address, and mitigate unconscious bias in organizations. By understanding the roots of unconscious bias and implementing targeted strategies, organizations can foster a more inclusive culture, enhance employee engagement, and improve overall performance.
APA, Harvard, Vancouver, ISO, and other styles
10

Aswathy, V. S., Nandini Padmakumar, and Liji Thomas P. "Fairness in Predictive Modeling: Addressing Gender Bias in Income Prediction through Bias Mitigation Techniques." Indian Journal of Computer Science and Engineering 15, no. 6 (2024): 450–56. https://doi.org/10.21817/indjcse/2024/v15i6/241506010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Bias mitigation techniques"

1

Salomon, Sophie. "Bias Mitigation Techniques and a Cost-Aware Framework for Boosted Ranking Algorithms." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1586450345426827.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Bias mitigation techniques"

1

da Silva, Mickaelle Caldeira, Marcelo Fantinato, and Sarajane Marques Peres. "Towards Fairness-Aware Predictive Process Monitoring: Evaluating Bias Mitigation Techniques." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-81375-7_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Shah, Prerak. "Enhanced prompts for ethical representation: Bias mitigation in text-to-image generation models." In Intelligent Computing and Communication Techniques. CRC Press, 2025. https://doi.org/10.1201/9781003635680-18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lidén, Moa. "Overall Summary and Conclusion." In Confirmation Bias in Criminal Cases. Oxford University PressOxford, 2023. http://dx.doi.org/10.1093/oso/9780192867643.003.0005.

Full text
Abstract:
Abstract Since the 1960s and Wason’s famous experiments, the research into confirmation bias has not only become more field specific and realistic but it has also gradually shifted towards bias mitigation rather than studying the phenomenon of confirmation bias per se. This has become clear not least when it comes to confirmation bias in criminal investigations and proceedings. The chapter summarizes manifestations as well as possible debiasing techniques for specific settings during investigative and litigation phases. This includes interviews, suspect-line ups, crime scene investigations, forensic analysis, and so on (investigative phase) as well as prosecutors’ and judges’ decision-making (litigation phase). It is also clear that mitigating confirmation bias in criminal cases in a wider sense will require joint efforts of all actors included, and is to a certain extent a question of legal policy. It is essential that adjustments are made with a firm basis in research. In this chapter, important suggestions for future research, primarily focusing on debiasing techniques, are outlined.
APA, Harvard, Vancouver, ISO, and other styles
4

Kasyap, Harsh, Ugur Ilker Atmaca, Michela Iezzi, Toby Walsh, and Carsten Maple. "Mitigating Bias: Model Pruning for Enhanced Model Fairness and Efficiency." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240589.

Full text
Abstract:
Machine learning models have been instrumental in making decisions across domains, like mortgage lending and risk assessment in finance. However, these models have been found susceptible to biases, causing unfair decisions for a specific group of individuals. Such bias is generally based on some protected (or sensitive) attributes, such as age, sex, or race, and is still prevalent due to historical context or algorithmic bias. There have been several efforts to ensure equal opportunities for each individual/group, based on creditworthiness, rather than any social bias. Several pre-, in- and post-processing bias mitigation techniques have been proposed. However, these techniques perform data transformation or design new constraint/cost functions, which are task-specific, to achieve a fair prediction. Such techniques even require further access to the complete training/testing data. This paper proposes a novel post-processing bias mitigation technique that employs a model interpretation strategy to find the responsible model weights causing the bias. Pruning only a few model weights exhibits group fairness in model predictions while maintaining competitive accuracy levels, thus aligning with the goals of fairness and efficiency in decision-making. The proposed scheme requires access to only a few data samples representing the protected attributes, without exposing the complete training data. Through extensive experiments with multiple census datasets/methods, we demonstrate the efficacy of our approach, achieving up to a significant 50% reduction in bias while preserving the overall accuracy.
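The chapter's own implementation is not reproduced here; as a hedged approximation of "prune the responsible weights", the sketch below attributes a group-gap loss to individual weights via a crude gradient probe and zeroes the top-scoring ones. All names, the probe, and the toy model are our assumptions.

```python
import torch

def prune_for_fairness(model, group_gap, prune_frac=0.05):
    """group_gap: scalar tensor measuring a disparity between protected groups."""
    group_gap.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            score = (p.grad * p).abs()                 # per-weight attribution
            k = max(1, int(prune_frac * p.numel()))
            thresh = score.flatten().topk(k).values.min()
            p[score >= thresh] = 0.0                   # zero the "responsible" weights

# Toy usage with an assumed output-gap disparity between two sample groups:
model = torch.nn.Linear(8, 1)
xa, xb = torch.randn(32, 8), torch.randn(32, 8)        # group A / group B inputs
gap = (model(xa).mean() - model(xb).mean()).abs()
prune_for_fairness(model, gap)
```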
APA, Harvard, Vancouver, ISO, and other styles
5

Aldosari, Bakheet, and Abdullah Alanazi. "Pitfalls of Artificial Intelligence in Medicine." In Studies in Health Technology and Informatics. IOS Press, 2024. http://dx.doi.org/10.3233/shti240474.

Full text
Abstract:
Artificial Intelligence (AI) offers great promise for healthcare, but integrating it comes with challenges. Over-reliance on AI systems can lead to automation bias, necessitating human oversight. Ethical considerations, transparency, and collaboration between healthcare providers and AI developers are crucial. Pursuing ethical frameworks, bias mitigation techniques, and transparency measures is key to advancing AI’s role in healthcare while upholding patient safety and quality care.
APA, Harvard, Vancouver, ISO, and other styles
6

Lidén, Moa. "The ‘Human Factor’ in Criminal Cases." In Confirmation Bias in Criminal Cases. Oxford University PressOxford, 2023. http://dx.doi.org/10.1093/oso/9780192867643.003.0001.

Full text
Abstract:
Abstract The importance of the so-called human factor for the outcome and accuracy of criminal law and procedure has been acknowledged for a long time but only recently has it been specified and more directly addressed in research. Confirmation bias specifically has been identified as a particularly problematic aspect of human reasoning that impacts on legal actors and criminal practitioners subconsciously. This means that even judges or prosecutors who are expected to be objective, and who may perceive of themselves as objective, may display this form of bias. Following the subconscious nature of confirmation bias, legal actors and criminal practitioners are unlikely to detect and mitigate the bias in themselves. Instead, effective bias mitigation requires scientific evaluations by researchers using established methodologies to evaluate different debiasing techniques.
APA, Harvard, Vancouver, ISO, and other styles
7

Prater, James, Konstantinos Kirytopoulos, and Tony Ma. "Dilbert Moments." In Research Anthology on Agile Software, Software Development, and Testing. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-3702-5.ch095.

Full text
Abstract:
Developing and delivering a project to an agreed schedule is fundamentally what project managers do. There is still an ongoing debate about schedule delays. This research investigates the development of schedules through semi-structured in-depth interviews. The findings reveal that half of the respondents believe that delays reported in the media are not real and should be attributed to scope changes. IT project managers estimating techniques include bottom-up estimates, analogy, and expert judgement. Impeding factors reported for the development of realistic schedules were technical (e.g. honest mistakes) and political (e.g. completion dates imposed by the sponsor). Respondents did not mention any psychological factors, although most were aware of optimism bias. However, they were not familiar with approaches to mitigate its impacts. Yet, when these techniques were mentioned, the overwhelming majority agreed that these mitigation approaches would change their schedule estimate.
APA, Harvard, Vancouver, ISO, and other styles
8

Kunduru, Krishnamohan Reddy, Yagya Dutta Dwivedi, R. Aruna, G. R. Thippeswamy, Subramanian Selvakumar, and M. Sudhakar. "Elevating Performance for Enhancing AI-Powered Humanoid Robots Through Innovation." In Applied AI and Humanoid Robotics for the Ultra-Smart Cyberspace. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-2399-1.ch004.

Full text
Abstract:
This chapter explores strategies for enhancing the performance of AI-powered humanoid robots through innovation. It begins with an overview of the current landscape of humanoid robots, highlighting their diverse applications across industries. The review examines existing performance metrics and identifies areas for improvement. Subsequent sections delve into specific avenues for innovation, including advancements in cognitive capabilities, motor skills, emotional intelligence, and human-robot interaction. Leveraging machine learning techniques for continuous improvement is also explored. Ethical considerations, such as privacy concerns and bias mitigation, are addressed, along with challenges associated with societal impact. The chapter concludes with case studies showcasing successful implementations of performance-enhancing strategies and outlines potential future directions for research and development in the field.
APA, Harvard, Vancouver, ISO, and other styles
9

Samuthira Pandi, V., M. Kavitha, and Vijaya Vardan Reddy S P. "Exploration of supervised and unsupervised learning techniques applied to prosthetic device functionality." In The Role of Artificial Intelligence in Advanced Prosthetics and Implantable Devices. RADemics Research Institute, 2025. https://doi.org/10.71443/9789349552975-02.

Full text
Abstract:
The integration of artificial intelligence (AI) in prosthetic technology has led to significant advancements in adaptive control, improving functionality and user experience. Among AI-driven approaches, the combination of unsupervised learning and reinforcement learning has emerged as a transformative solution for real-time prosthetic adaptation. Unsupervised learning enables prosthetic devices to extract latent patterns from high-dimensional sensory data, allowing for autonomous identification of movement dynamics. Reinforcement learning, on the other hand, refines prosthetic control strategies through continuous reward-driven optimization, ensuring personalized adaptability without extensive manual calibration. This chapter explores the synergy between these learning paradigms, addressing key challenges such as sample efficiency, real-time implementation, and computational constraints. The impact of clustering techniques, dimensionality reduction, and generative models in enhancing prosthetic adaptability is analyzed. Ethical concerns, including data privacy, bias mitigation, and transparency, are also examined to ensure responsible AI deployment in prosthetic applications. The discussion highlights future research directions, emphasizing the need for efficient model generalization, lightweight AI architectures, and cloud-integrated learning frameworks to advance the next generation of intelligent prosthetic systems.
APA, Harvard, Vancouver, ISO, and other styles
10

Aldosari, Bakheet, Hanan Aldosari, and Abdullah Alanazi. "Challenges of Artificial Intelligence in Medicine." In Studies in Health Technology and Informatics. IOS Press, 2025. https://doi.org/10.3233/shti250039.

Full text
Abstract:
Artificial Intelligence (AI) holds great promise for healthcare, promising improved patient outcomes and streamlining processes. Nevertheless, this transformational journey comes with numerous potential pitfalls that warrant attention. This comprehensive review explores some key challenges involved with integrating AI into medicine. First and foremost is the risk of over-reliance on AI systems. Users often rely on recommendations provided by AI to follow without question, potentially causing automation bias. Human oversight is essential to avoid mistakes and patient harm; failure to provide such oversight could have serious repercussions that necessitate having someone in control at all times - emphasizing the necessity for having a human-in-the-loop approach. Ethical considerations must always come first when developing AI systems, with privacy, informed consent, and data protection as non-negotiable obligations for patients and organizations. Transparency and accountability within AI systems are necessary to quickly identify biases or errors to enable AI development with integrity that mitigates bias, ensures fairness, and maintains transparency. Ethical AI development involves ongoing efforts made with great diligence by developers to mitigate any bias, ensure fairness, and maintain transparency. These principles form the bedrock upon which ethical development depends. Collaboration between healthcare providers and AI developers is of utmost importance for patient safety and well-being; healthcare providers must protect patient data while developers must ensure AI systems adhere to legal and ethical requirements. AI and healthcare present significant challenges. Ethical frameworks, bias mitigation techniques, and transparency measures must all be pursued to advance AI’s role within healthcare delivery systems. We can unleash AI’s full potential by overcoming such hurdles while upholding patient safety, ethics, and quality care as the cornerstones of healthcare innovation.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Bias mitigation techniques"

1

Akhter, Farheen, Nabeel Alzahrani, and Khalil Dajani. "Bias Mitigation in Deep Learning: A Survey of Modern Techniques." In 2025 IEEE Conference on Artificial Intelligence (CAI). IEEE, 2025. https://doi.org/10.1109/cai64502.2025.00105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Deokar, Ruchira, Preethi Nanjundan, and Sachi Nandan Mohanty. "Transparency in Translation: A Deep Dive into Explainable AI Techniques for Bias Mitigation." In 2024 Asia Pacific Conference on Innovation in Technology (APCIT). IEEE, 2024. http://dx.doi.org/10.1109/apcit62007.2024.10673712.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Boughareb, Djalila, Hana Bordjiba, Rima Boughareb, and Hamid Seridi. "Bias Mitigation in Medical Datasets: Enhancing GBM and KNN Predictive Models with SMOTE Techniques." In 2024 2nd International Conference on Computing and Data Analytics (ICCDA). IEEE, 2024. https://doi.org/10.1109/iccda64887.2024.10867315.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Garimella, Aparna, Rada Mihalcea, and Akhash Amarnath. "Demographic-Aware Language Model Fine-tuning as a Bias Mitigation Technique." In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.aacl-short.38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sidana, Sagar, Sujata Negi Thakur, Sandeep Kumar, et al. "Mitigating Racial Bias in Facial Expression Recognition Using Enhanced Data Diversification Techniques." In 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2024. http://dx.doi.org/10.1109/icccnt61001.2024.10723863.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sokolová, Zuzana, Maroš Harahus, Ján Staš, et al. "Measuring and Mitigating Stereotype Bias in Language Models: An Overview of Debiasing Techniques." In 2024 International Symposium ELMAR. IEEE, 2024. http://dx.doi.org/10.1109/elmar62909.2024.10694175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Singh, Aarushi, Akshita Goel, Jyotika Jaichand, Vaishali Kikan, and Ashwni Kumar. "Algorithms for Fair Hiring: A Review of Techniques for Detecting and Mitigating Bias." In 2024 3rd Edition of IEEE Delhi Section Flagship Conference (DELCON). IEEE, 2024. https://doi.org/10.1109/delcon64804.2024.10867161.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Patrikar, Ajay M., Arjuna Mahenthiran, and Ahmad Said. "Leveraging synthetic data for AI bias mitigation." In Synthetic Data for Artificial Intelligence and Machine Learning: Tools, Techniques, and Applications, edited by Kimberly E. Manser, Raghuveer M. Rao, and Christopher L. Howell. SPIE, 2023. http://dx.doi.org/10.1117/12.2662276.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Shrestha, Robik, Kushal Kafle, and Christopher Kanan. "An Investigation of Critical Issues in Bias Mitigation Techniques." In 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2022. http://dx.doi.org/10.1109/wacv51458.2022.00257.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Calegari, Roberta, Gabriel G. Castañé, Michela Milano, and Barry O'Sullivan. "Assessing and Enforcing Fairness in the AI Lifecycle." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/735.

Full text
Abstract:
A significant challenge in detecting and mitigating bias is creating a mindset amongst AI developers to address unfairness. The current literature on fairness is broad, and the learning curve to distinguish where to use existing metrics and techniques for bias detection or mitigation is difficult. This survey systematises the state-of-the-art about distinct notions of fairness and relative techniques for bias mitigation according to the AI lifecycle. Gaps and challenges identified during the development of this work are also discussed.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Bias mitigation techniques"

1

Tipton, Kelley, Brian F. Leas, Emilia Flores, et al. Impact of Healthcare Algorithms on Racial and Ethnic Disparities in Health and Healthcare. Agency for Healthcare Research and Quality (AHRQ), 2023. http://dx.doi.org/10.23970/ahrqepccer268.

Full text
Abstract:
Objectives. To examine the evidence on whether and how healthcare algorithms (including algorithm-informed decision tools) exacerbate, perpetuate, or reduce racial and ethnic disparities in access to healthcare, quality of care, and health outcomes, and examine strategies that mitigate racial and ethnic bias in the development and use of algorithms. Data sources. We searched published and grey literature for relevant studies published between January 2011 and February 2023. Based on expert guidance, we determined that earlier articles are unlikely to reflect current algorithms. We also hand-searched reference lists of relevant studies and reviewed suggestions from experts and stakeholders. Review methods. Searches identified 11,500 unique records. Using predefined criteria and dual review, we screened and selected studies to assess one or both Key Questions (KQs): (1) the effect of algorithms on racial and ethnic disparities in health and healthcare outcomes and (2) the effect of strategies or approaches to mitigate racial and ethnic bias in the development, validation, dissemination, and implementation of algorithms. Outcomes of interest included access to healthcare, quality of care, and health outcomes. We assessed studies’ methodologic risk of bias (ROB) using the ROBINS-I tool and piloted an appraisal supplement to assess racial and ethnic equity-related ROB. We completed a narrative synthesis and cataloged study characteristics and outcome data. We also examined four Contextual Questions (CQs) designed to explore the context and capture insights on practical aspects of potential algorithmic bias. CQ 1 examines the problem’s scope within healthcare. CQ 2 describes recently emerging standards and guidance on how racial and ethnic bias can be prevented or mitigated during algorithm development and deployment. CQ 3 explores stakeholder awareness and perspectives about the interaction of algorithms and racial and ethnic disparities in health and healthcare. We addressed these CQs through supplemental literature reviews and conversations with experts and key stakeholders. For CQ 4, we conducted an in-depth analysis of a sample of six algorithms that have not been widely evaluated before in the published literature to better understand how their design and implementation might contribute to disparities. Results. Fifty-eight studies met inclusion criteria, of which three were included for both KQs. One study was a randomized controlled trial, and all others used cohort, pre-post, or modeling approaches. The studies included numerous types of clinical assessments: need for intensive care or high-risk care management; measurement of kidney or lung function; suitability for kidney or lung transplant; risk of cardiovascular disease, stroke, lung cancer, prostate cancer, postpartum depression, or opioid misuse; and warfarin dosing. We found evidence suggesting that algorithms may: (a) reduce disparities (i.e., revised Kidney Allocation System, prostate cancer screening tools); (b) perpetuate or exacerbate disparities (e.g., estimated glomerular filtration rate [eGFR] for kidney function measurement, cardiovascular disease risk assessments); and/or (c) have no effect on racial or ethnic disparities. Algorithms for which mitigation strategies were identified are included in KQ 2. 
We identified six types of strategies often used to mitigate the potential of algorithms to contribute to disparities: removing an input variable; replacing a variable; adding one or more variables; changing or diversifying the racial and ethnic composition of the patient population used to train or validate a model; creating separate algorithms or thresholds for different populations; and modifying the statistical or analytic techniques used by an algorithm. Most mitigation efforts improved proximal outcomes (e.g., algorithmic calibration) for targeted populations, but it is more challenging to infer or extrapolate effects on longer term outcomes, such as racial and ethnic disparities. The scope of racial and ethnic bias related to algorithms and their application is difficult to quantify, but it clearly extends across the spectrum of medicine. Regulatory, professional, and corporate stakeholders are undertaking numerous efforts to develop standards for algorithms, often emphasizing the need for transparency, accountability, and representativeness. Conclusions. Algorithms have been shown to potentially perpetuate, exacerbate, and sometimes reduce racial and ethnic disparities. Disparities were reduced when race and ethnicity were incorporated into an algorithm to intentionally tackle known racial and ethnic disparities in resource allocation (e.g., kidney transplant allocation) or disparities in care (e.g., prostate cancer screening that historically led to Black men receiving more low-yield biopsies). It is important to note that in such cases the rationale for using race and ethnicity was clearly delineated and did not conflate race and ethnicity with ancestry and/or genetic predisposition. However, when algorithms include race and ethnicity without clear rationale, they may perpetuate the incorrect notion that race is a biologic construct and contribute to disparities. Finally, some algorithms may reduce or perpetuate disparities without containing race and ethnicity as an input. Several modeling studies showed that applying algorithms out of context of original development (e.g., illness severity scores used for crisis standards of care) could perpetuate or exacerbate disparities. On the other hand, algorithms may also reduce disparities by standardizing care and reducing opportunities for implicit bias (e.g., Lung Allocation Score for lung transplantation). Several mitigation strategies have been shown to potentially reduce the contribution of algorithms to racial and ethnic disparities. Results of mitigation efforts are highly context specific, relating to unique combinations of algorithm, clinical condition, population, setting, and outcomes. Important future steps include increasing transparency in algorithm development and implementation, increasing diversity of research and leadership teams, engaging diverse patient and community groups in the development to implementation lifecycle, promoting stakeholder awareness (including patients) of potential algorithmic risk, and investing in further research to assess the real-world effect of algorithms on racial and ethnic disparities before widespread implementation.
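Several of the six mitigation strategies the review catalogs are simple to state in code; as one hedged illustration (ours, not the report's), "creating separate algorithms or thresholds for different populations" can be sketched as picking group-specific cutoffs that equalize selection rates.

```python
import numpy as np

def group_thresholds(scores, s, target_rate=0.2):
    """Per-group cutoffs so each group has the same selection rate."""
    return {g: np.quantile(scores[s == g], 1 - target_rate) for g in np.unique(s)}

def predict(scores, s, cutoffs):
    return np.array([scores[i] >= cutoffs[g] for i, g in enumerate(s)], dtype=int)

rng = np.random.default_rng(3)
s = rng.integers(0, 2, 1000)
scores = rng.random(1000) + 0.1 * s         # group 1's risk scores skew higher
y_hat = predict(scores, s, group_thresholds(scores, s))
# Both groups are now selected at ~20%, regardless of the score shift.
```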
APA, Harvard, Vancouver, ISO, and other styles
2

Eslava, Marcela, Alessandro Maffioli, and Marcela Meléndez Arjona. Second-tier Government Banks and Access to Credit: Micro-Evidence from Colombia. Inter-American Development Bank, 2012. http://dx.doi.org/10.18235/0011364.

Full text
Abstract:
Government-owned development banks have often been justified by the need to respond to financial market imperfections that hinder the establishment and growth of promising businesses, and as a result, stifle economic development more generally. However, evidence on the effectiveness of these banks in mitigating financial constraints is still lacking. To fill this gap, this paper analyzes the impact of Bancoldex, Colombia's publicly owned development bank, on access to credit. It uses a unique dataset that contains key characteristics of all loans issued to businesses in Colombia, including the financial intermediary through which the loan was granted and whether the loan was funded with Bancoldex resources. The paper assesses effects on access to credit by comparing Bancoldex loans to loans from other sources and studying the impact of receiving credit from Bancoldex on a firm's subsequent credit history. To address concerns about selection bias, it uses a combination of models that control for fixed effects and matching techniques. The findings herein show that credit relationships involving Bancoldex funding are characterized by lower interest rates, larger loans, and loans with longer terms. These characteristics translated into lower average interest rates and larger average loans for firms that used Bancoldex credit. Average loans of Bancoldex' beneficiaries also exhibit longer terms, although this effect can take two years to materialize. Finally, the findings show evidence of a demonstration effect of Bancoldex: beneficiary firms that have access to Bancoldex credit are able to significantly expand the number of intermediaries with whom they have credit relationships.
APA, Harvard, Vancouver, ISO, and other styles