Academic literature on the topic 'Unfairness mitigation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Unfairness mitigation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Unfairness mitigation"

1. Jiang, Zifan, Salman Seyedi, Emily Griner, et al. "Evaluating and mitigating unfairness in multimodal remote mental health assessments." PLOS Digital Health 3, no. 7 (2024): e0000413. http://dx.doi.org/10.1371/journal.pdig.0000413.

Abstract:
Research on automated mental health assessment tools has been growing in recent years, often aiming to address the subjectivity and bias that existed in the current clinical practice of the psychiatric evaluation process. Despite the substantial health and economic ramifications, the potential unfairness of those automated tools was understudied and required more attention. In this work, we systematically evaluated the fairness level in a multimodal remote mental health dataset and an assessment system, where we compared the fairness level in race, gender, education level, and age. Demographic…
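
The study above compares fairness levels across race, gender, education level, and age. As a minimal sketch of that kind of group-level audit (all data and column names below are hypothetical), the following computes the demographic parity gap, i.e., the spread in positive-prediction rates across the groups of a protected attribute:

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, pred_col: str, attr: str) -> float:
    """Spread between the highest and lowest positive-prediction rate
    across the groups of one protected attribute (0 = parity)."""
    rates = df.groupby(attr)[pred_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical binary screening outputs with demographic metadata.
df = pd.DataFrame({
    "pred_positive": [1, 0, 1, 1, 0, 1, 0, 0],
    "gender": ["f", "f", "f", "f", "m", "m", "m", "m"],
    "age_band": ["<40", ">=40", "<40", ">=40", "<40", ">=40", "<40", ">=40"],
})
for attr in ("gender", "age_band"):
    print(attr, demographic_parity_gap(df, "pred_positive", attr))
```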

2. Arnaiz-Rodriguez, Adrian, Georgina Curto Rex, and Nuria Oliver. "Structural Group Unfairness: Measurement and Mitigation by Means of the Effective Resistance." Proceedings of the International AAAI Conference on Web and Social Media 19 (June 7, 2025): 83–106. https://doi.org/10.1609/icwsm.v19i1.35805.

Abstract:
Social networks contribute to the distribution of social capital, defined as the relationships, norms of trust and reciprocity within a community or society that facilitate cooperation and collective action. Therefore, better positioned members in a social network benefit from faster access to diverse information and higher influence on information dissemination. A variety of methods have been proposed in the literature to measure social capital at an individual level. However, there is a lack of methods to quantify social capital at a group level, which is particularly important when the grou…
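
The measures above are built on the effective resistance between nodes. As a minimal sketch of the underlying pairwise quantity (the paper's group-level aggregation is not reproduced here), effective resistance can be computed from the Moore-Penrose pseudoinverse of the graph Laplacian:

```python
import numpy as np
import networkx as nx

def effective_resistance(G: nx.Graph, u, v) -> float:
    """R(u, v) = L+[u, u] + L+[v, v] - 2 * L+[u, v], with L+ the
    pseudoinverse of the graph Laplacian."""
    nodes = list(G.nodes)
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    Lp = np.linalg.pinv(L)
    i, j = nodes.index(u), nodes.index(v)
    return float(Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j])

G = nx.path_graph(4)                  # 0 - 1 - 2 - 3, unit-weight edges
print(effective_resistance(G, 0, 3))  # ~3.0: three unit resistors in series
```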

3. Balayn, Agathe, Christoph Lofi, and Geert-Jan Houben. "Managing bias and unfairness in data for decision support: a survey of machine learning and data engineering approaches to identify and mitigate bias and unfairness within data management and analytics systems." VLDB Journal 30, no. 5 (2021): 739–68. http://dx.doi.org/10.1007/s00778-021-00671-8.

Abstract:
The increasing use of data-driven decision support systems in industry and governments is accompanied by the discovery of a plethora of bias and unfairness issues in the outputs of these systems. Multiple computer science communities, and especially machine learning, have started to tackle this problem, often developing algorithmic solutions to mitigate biases to obtain fairer outputs. However, one of the core underlying causes for unfairness is bias in training data which is not fully covered by such approaches. Especially, bias in data is not yet a central topic in data engineering a…

4. Singh, Nimisha, Amita Kapoor, and Neha Soni. "A sociotechnical perspective for explicit unfairness mitigation techniques for algorithm fairness." International Journal of Information Management Data Insights 4, no. 2 (2024): 100259. http://dx.doi.org/10.1016/j.jjimei.2024.100259.


5. Pagano, Tiago P., Rafael B. Loureiro, Fernanda V. N. Lisboa, et al. "Bias and Unfairness in Machine Learning Models: A Systematic Review on Datasets, Tools, Fairness Metrics, and Identification and Mitigation Methods." Big Data and Cognitive Computing 7, no. 1 (2023): 15. http://dx.doi.org/10.3390/bdcc7010015.

Abstract:
One of the difficulties of artificial intelligence is to ensure that model decisions are fair and free of bias. In research, datasets, metrics, techniques, and tools are applied to detect and mitigate algorithmic unfairness and bias. This study examines the current knowledge on bias and unfairness in machine learning models. The systematic review followed the PRISMA guidelines and is registered on the OSF platform. The search was carried out between 2021 and early 2022 in the Scopus, IEEE Xplore, Web of Science, and Google Scholar knowledge bases and found 128 articles published between 2017 and…

6. Ding, Xueying, Rui Xi, and Leman Akoglu. "Outlier Detection Bias Busted: Understanding Sources of Algorithmic Bias through Data-centric Factors." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 384–95. http://dx.doi.org/10.1609/aies.v7i1.31644.

Abstract:
The astonishing successes of ML have raised growing concern for the fairness of modern methods when deployed in real world settings. However, studies on fairness have mostly focused on supervised ML, while unsupervised outlier detection (OD), with numerous applications in finance, security, etc., has attracted little attention. While a few studies proposed fairness-enhanced OD algorithms, they remain agnostic to the underlying driving mechanisms or sources of unfairness. Even within the supervised ML literature, there exists debate on whether unfairness stems solely from algorithmic biases (i…

7. Abdullah, Nurhidayah, and Zuhairah Ariff Abd Ghadas. "The Application of Good Faith in Contracts during a Force Majeure Event and Beyond with Special Reference to the COVID-19 Act 2020." UUM Journal of Legal Studies 14, no. 1 (2023): 141–60. http://dx.doi.org/10.32890/uumjls2023.14.1.6.

Abstract:
Many parties face difficulties in performing contracts due to the economic dislocation since the outbreak of COVID-19. The extraordinary nature of this pandemic situation calls for good faith in contractual settings. The discussion of this paper focuses on the imposition in a force majeure event which will cause many contracts to be unenforceable. The research method used doctrinal analysis to discuss the force majeure clause in the context of the COVID-19 pandemic and the obligation of good faith in contracts. This paper will discuss the COVID-19 pandemic as a force majeure event, arguing tha…

8. Verger, Mélina, Chunyang Fan, Sébastien Lallé, François Bouchet, and Vanda Luengo. "A Comprehensive Study on Evaluating and Mitigating Algorithmic Unfairness with the MADD Metric." Journal of Educational Data Mining (JEDM) 16, no. 1 (2024): 365–409. https://doi.org/10.5281/zenodo.12180668.

Abstract:
Predictive student models are increasingly used in learning environments due to their ability to enhance educational outcomes and support stakeholders in making informed decisions. However, predictive models can be biased and produce unfair outcomes, leading to potential discrimination against certain individuals and harmful long-term implications. This has prompted research on fairness metrics meant to capture and quantify such biases. Nonetheless, current metrics primarily focus on predictive performance comparisons between groups, without considering the behavior of the models or the severity of…
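
The MADD compares model behavior between two groups through the distributions of predicted probabilities rather than through predictive performance alone. A minimal sketch of that idea, assuming a histogram-based density comparison on [0, 1]; bin count and normalization details may differ from the authors' exact definition:

```python
import numpy as np

def madd_like(probs_a: np.ndarray, probs_b: np.ndarray, n_bins: int = 100) -> float:
    """Sum of absolute differences between the two groups' normalized
    predicted-probability histograms: 0 = identical behavior, 2 = disjoint."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    dens_a = np.histogram(probs_a, bins=edges)[0] / len(probs_a)
    dens_b = np.histogram(probs_b, bins=edges)[0] / len(probs_b)
    return float(np.abs(dens_a - dens_b).sum())

rng = np.random.default_rng(42)
same = madd_like(rng.beta(2, 5, 5000), rng.beta(2, 5, 5000))   # small value
apart = madd_like(rng.beta(2, 5, 5000), rng.beta(5, 2, 5000))  # large value
print(round(same, 3), round(apart, 3))
```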

9. Popoola, Gideon, and John Sheppard. "Investigating and Mitigating the Performance–Fairness Tradeoff via Protected-Category Sampling." Electronics 13, no. 15 (2024): 3024. http://dx.doi.org/10.3390/electronics13153024.

Abstract:
Machine learning algorithms have become common in everyday decision making, and decision-assistance systems are ubiquitous in our everyday lives. Hence, research on the prevention and mitigation of potential bias and unfairness of the predictions made by these algorithms has been increasing in recent years. Most research on fairness and bias mitigation in machine learning often treats each protected variable separately, but in reality, it is possible for one person to belong to multiple protected categories. Hence, in this work, combining a set of protected variables and generating new columns…
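
The core move above is to treat combinations of protected variables as protected categories in their own right and to sample on those. A minimal sketch of one such scheme, simple oversampling of every intersection up to the size of the largest one; the authors' actual sampling strategy may differ:

```python
import pandas as pd

def oversample_intersections(df: pd.DataFrame, protected: list, seed: int = 0) -> pd.DataFrame:
    """Build a combined protected category from several columns, then
    oversample every intersection up to the size of the largest one."""
    combo = df[protected].astype(str).agg("|".join, axis=1)
    largest = combo.value_counts().max()
    resampled = [
        group.sample(n=largest, replace=True, random_state=seed)
        for _, group in df.groupby(combo)
    ]
    return pd.concat(resampled, ignore_index=True)

# e.g. balanced = oversample_intersections(train_df, ["race", "gender"])
```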

10. Menziwa, Yolanda, Eunice Lebogang Sesale, and Solly Matshonisa Seeletse. "Challenges in research data collection and mitigation interventions." International Journal of Research in Business and Social Science 13, no. 2 (2024): 336–44. http://dx.doi.org/10.20525/ijrbs.v13i2.3187.

Abstract:
This paper investigated the challenges that researchers in a health sciences university can experience, and ways to counterbalance the negative effects of these challenges. Focus was on the extent to which gatekeepers at higher education institutions (HEIs) can restrict research, and the way natural sciences researchers often experience gatekeeper biasness in denying them access as compared to the way health sciences researchers are treated. The method compared experiences of researchers for Master of Science (MSc) degrees in selected science subjects, and the projects undertaken by health sci…

Dissertations / Theses on the topic "Unfairness mitigation"

1. Yao, Sirui. "Evaluating, Understanding, and Mitigating Unfairness in Recommender Systems." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103779.

Abstract:
Recommender systems are information filtering tools that discover potential matchings between users and items and benefit both parties. This benefit can be considered a social resource that should be equitably allocated across users and items, especially in critical domains such as education and employment. Biases and unfairness in recommendations raise both ethical and legal concerns. In this dissertation, we investigate the concept of unfairness in the context of recommender systems. In particular, we study appropriate unfairness evaluation metrics, examine the relation between bias in recom…
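
The unfairness evaluation metrics studied in this line of work include group-difference measures over prediction quality. A minimal sketch of one absolute-unfairness-style score comparing two user groups' per-item prediction errors; the matrix layout and names are assumptions, and the dissertation's exact definitions may differ:

```python
import numpy as np

def absolute_unfairness(pred: np.ndarray, true: np.ndarray, is_g1: np.ndarray) -> float:
    """pred, true: user x item rating matrices; is_g1: boolean user mask.
    Averages, over items, the gap between the two groups' mean absolute
    prediction errors (0 = both groups are served equally well)."""
    err_g1 = np.abs(pred[is_g1] - true[is_g1]).mean(axis=0)    # per-item error, group 1
    err_g2 = np.abs(pred[~is_g1] - true[~is_g1]).mean(axis=0)  # per-item error, group 2
    return float(np.abs(err_g1 - err_g2).mean())

rng = np.random.default_rng(1)
true = rng.uniform(1, 5, size=(100, 20))
pred = true + rng.normal(0, 0.3, size=(100, 20))
pred[:50] += 0.5                     # predictions systematically off for group 1
print(absolute_unfairness(pred, true, np.arange(100) < 50))
```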

2. Alves da Silva, Guilherme. "Traitement hybride pour l'équité algorithmique" [Hybrid processing for algorithmic fairness]. Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0323.

Abstract:
Algorithmic decisions are now used on a daily basis. These decisions often rely on machine learning (ML) algorithms that can produce complex and opaque models. Recent studies have raised fairness concerns by revealing discriminatory outcomes produced by ML models against minorities and underprivileged groups. Since ML models can amplify discrimination through unfair outcomes, this reveals the need for approaches that uncover and remove unexpected biases…

3. Verger, Mélina. "Algorithmic fairness analyses of supervised machine learning in education." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS600.

Abstract:
This thesis aims to evaluate and reduce the algorithmic unfairness of machine learning models widely used in education. These predictive models, built on increasingly abundant educational data and learning traces, are intended to improve the human learning experience. They make it possible, for example, to predict school dropout or to personalize learning by adapting educational content to each learner's needs. However, it has been repeatedly shown that these models can…

Book chapters on the topic "Unfairness mitigation"

1. Xu, Zikang, Shang Zhao, Quan Quan, Qingsong Yao, and S. Kevin Zhou. "FairAdaBN: Mitigating Unfairness with Adaptive Batch Normalization and Its Application to Dermatological Disease Classification." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43895-0_29.


2. Yi, Kun, Xisha Jin, Zhengyang Bai, Yuntao Kong, and Qiang Ma. "An Empirical User Study on Congestion-Aware Route Recommendation." In Information and Communication Technologies in Tourism 2024. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58839-6_35.

Abstract:
Overtourism has become a significant concern in many popular travel destinations around the world. As one approach to handling overtourism, congestion-aware methods can be effective in mitigating overcrowding at popular attractions by spreading tourists to less-visited areas. However, they may lead to a potential Hawk-Dove game: some tourists who share the same preference may be assigned worse routes than others to avoid congestion, raising the possibility that tourists assigned to relatively unfavorable routes may feel dissatisfactio…

3. Taylor, Steven. "Social Distancing." In The New Psychology of Pandemics. Oxford University Press, New York, NY, 2025. https://doi.org/10.1093/9780197811009.003.0006.

Abstract:
This chapter discusses social distancing as a pandemic-mitigation strategy, including its positive and adverse effects, objections, and alternatives. Social distancing is effective for controlling infection but also a source of contention and conflict. Highly restrictive forms of social distancing, such as lockdown and quarantine, have been controversial for centuries. Criticisms concern the necessity and efficacy of the interventions, their unfairness in some circumstances, and the hardships they create. An alternative to community-wide lockdown involves targeted self-isolation of po…

4. Chakrobartty, Shuvro, and Omar F. El-Gayar. "Fairness Challenges in Artificial Intelligence." In Encyclopedia of Data Science and Machine Learning. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-9220-5.ch101.

Abstract:
Fairness is a highly desirable human value in day-to-day decisions that affect human life. In recent years many successful applications of AI systems have been developed, and increasingly, AI methods are becoming part of many new applications for decision-making tasks that were previously carried out by human beings. Questions have been raised: 1) Can the decision be trusted? 2) Is it fair? Overall, are the AI-based systems making fair decisions, or are they increasing the unfairness in society? This article presents a systematic literature review (SLR) of existing works on AI fairness challen…

5. Wang, Zichong, and Wenbin Zhang. "Group Fairness with Individual and Censorship Constraints." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240578.

Abstract:
The widespread use of Artificial Intelligence (AI) based decision-making systems has raised a lot of concerns regarding potential discrimination, particularly in domains with high societal impact. Most existing fairness research focused on tackling bias relies heavily on the presence of class labels, an assumption that often mismatches real-world scenarios, which ignores the ubiquity of censored data. Further, existing works regard group fairness and individual fairness as two disparate goals, overlooking their inherent interconnection, i.e., addressing one can degrade the other. This paper pr…

Conference papers on the topic "Unfairness mitigation"

1. Calegari, Roberta, Gabriel G. Castañé, Michela Milano, and Barry O'Sullivan. "Assessing and Enforcing Fairness in the AI Lifecycle." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/735.

Abstract:
A significant challenge in detecting and mitigating bias is creating a mindset amongst AI developers to address unfairness. The current literature on fairness is broad, and the learning curve to distinguish where to use existing metrics and techniques for bias detection or mitigation is difficult. This survey systematises the state-of-the-art about distinct notions of fairness and relative techniques for bias mitigation according to the AI lifecycle. Gaps and challenges identified during the development of this work are also discussed.

2. Boratto, Ludovico, Francesco Fabbri, Gianni Fenu, Mirko Marras, and Giacomo Medda. "Counterfactual Graph Augmentation for Consumer Unfairness Mitigation in Recommender Systems." In CIKM '23: The 32nd ACM International Conference on Information and Knowledge Management. ACM, 2023. http://dx.doi.org/10.1145/3583780.3615165.


3. Mahmud, Md Sultan, and Md Forkan Uddin. "Unfairness problem in WLANs due to asymmetric co-channel interference and its mitigation." In 2013 16th International Conference on Computer and Information Technology (ICCIT). IEEE, 2014. http://dx.doi.org/10.1109/iccitechn.2014.6997322.


4. Meerza, Syed Irfan Ali, and Jian Liu. "EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning." In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/51.

Abstract:
Federated Learning (FL) is a technique that allows multiple parties to train a shared model collaboratively without disclosing their private data. It has become increasingly popular due to its distinct privacy advantages. However, FL models can suffer from biases against certain demographic groups (e.g., racial and gender groups) due to the heterogeneity of data and party selection. Researchers have proposed various strategies for characterizing the group fairness of FL algorithms to address this issue. However, the effectiveness of these strategies in the face of deliberate adversarial attack…

5. Liu, Zhining, Ruizhong Qiu, Zhichen Zeng, Yada Zhu, Hendrik Hamann, and Hanghang Tong. "AIM: Attributing, Interpreting, Mitigating Data Unfairness." In KDD '24: The 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. ACM, 2024. http://dx.doi.org/10.1145/3637528.3671797.


6. Kim, Dohyung, Sungho Park, Sunhee Hwang, Minsong Ki, Seogkyu Jeon, and Hyeran Byun. "Resampling Strategy for Mitigating Unfairness in Face Attribute Classification." In 2020 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2020. http://dx.doi.org/10.1109/ictc49870.2020.9289379.


7. Li, Tianlin, Zhiming Li, Anran Li, et al. "Fairness via Group Contribution Matching." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/49.

Abstract:
Fairness issues in Deep Learning models have recently received increasing attention due to their significant societal impact. Although methods for mitigating unfairness are constantly proposed, little research has been conducted to understand how discrimination and bias develop during the standard training process. In this study, we propose analyzing the contribution of each subgroup (i.e., a group of data with the same sensitive attribute) in the training process to understand the cause of such bias development process. We propose a gradient-based metric to assess training subgroup contributi…
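
The paper above proposes a gradient-based metric for subgroup contributions during training. A minimal sketch of the general idea in PyTorch, measuring per-subgroup loss-gradient norms; the authors' actual metric and matching procedure may differ:

```python
import torch

def subgroup_grad_norms(model, loss_fn, x, y, groups):
    """Norm of the loss gradient w.r.t. model parameters, per subgroup."""
    norms = {}
    for g in groups.unique():
        model.zero_grad()
        mask = groups == g
        loss_fn(model(x[mask]), y[mask]).backward()
        sq = sum(float(p.grad.norm()) ** 2
                 for p in model.parameters() if p.grad is not None)
        norms[int(g)] = sq ** 0.5
    return norms

model = torch.nn.Linear(3, 1)
x, y = torch.randn(8, 3), torch.randn(8, 1)
groups = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(subgroup_grad_norms(model, torch.nn.functional.mse_loss, x, y, groups))
```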

8. Lin, Yin, Samika Gupta, and H. V. Jagadish. "Mitigating Subgroup Unfairness in Machine Learning Classifiers: A Data-Driven Approach." In 2024 IEEE 40th International Conference on Data Engineering (ICDE). IEEE, 2024. http://dx.doi.org/10.1109/icde60146.2024.00171.


9. Singhal, Anmol, Preethu Rose Anish, Shirish Karande, and Smita Ghaisas. "Towards Mitigating Perceived Unfairness in Contracts from a Non-Legal Stakeholder’s Perspective." In Proceedings of the Natural Legal Language Processing Workshop 2023. Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.nllp-1.11.


10. Cirino, Fernanda R. P., Carlos D. Maia, Marcelo S. Balbino, and Cristiane N. Nobre. "Proposal of a Method for Identifying Unfairness in Machine Learning Models based on Counterfactual Explanations." In Symposium on Knowledge Discovery, Mining and Learning. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/kdmile.2023.232900.

Abstract:
As machine learning models continue impacting diverse areas of society, the need to ensure fairness in decision-making becomes increasingly vital. Unfair outcomes resulting from biased data can have profound societal implications. This work proposes a method for identifying unfairness and mitigating biases in machine learning models based on counterfactual explanations. By analyzing the model’s equity implications after training, we provide insight into the potential of the method proposed to address equity issues. The findings of this study contribute to advancing the understanding of fairnes…
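
In its simplest form, the identification step in counterfactual approaches like the one above probes a trained model with inputs in which only a sensitive attribute is changed. A minimal sketch of that probe, assuming a scikit-learn-style classifier whose pipeline can consume the raw column (all names hypothetical):

```python
import pandas as pd

def counterfactual_flip_rate(model, X: pd.DataFrame, sensitive: str, swap: dict) -> float:
    """Fraction of rows whose prediction changes when only the sensitive
    attribute is replaced by its counterfactual value."""
    X_cf = X.copy()
    X_cf[sensitive] = X_cf[sensitive].map(swap)
    return float((model.predict(X) != model.predict(X_cf)).mean())

# e.g. counterfactual_flip_rate(clf, X_test, "sex", {"male": "female", "female": "male"})
```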