Journal articles on the topic 'Algorithmic decision systems'

Consult the top 50 journal articles for your research on the topic 'Algorithmic decision systems.'

1

Waldman, Ari, and Kirsten Martin. "Governing algorithmic decisions: The role of decision importance and governance on perceived legitimacy of algorithmic decisions." Big Data & Society 9, no. 1 (2022): 20539517221100449. http://dx.doi.org/10.1177/20539517221100449.

Abstract:
The algorithmic accountability literature to date has primarily focused on procedural tools to govern automated decision-making systems. That prescriptive literature elides a fundamentally empirical question: whether and under what circumstances, if any, is the use of algorithmic systems to make public policy decisions perceived as legitimate? The present study begins to answer this question. Using factorial vignette survey methodology, we explore the relative importance of the type of decision, the procedural governance, the input data used, and outcome errors on perceptions of the legitimacy of algorithmic public policy decisions as compared to similar human decisions. Among other findings, we find that the type of decision—low importance versus high importance—impacts the perceived legitimacy of automated decisions. We find that human governance of algorithmic systems (aka human-in-the-loop) increases perceptions of the legitimacy of algorithmic decision-making systems, even when those decisions are likely to result in significant errors. Notably, we also find the penalty to perceived legitimacy is greater when human decision-makers make mistakes than when algorithmic systems make the same errors. The positive impact on perceived legitimacy from governance—such as human-in-the-loop—is greatest for highly pivotal decisions such as parole, policing, and healthcare. After discussing the study’s limitations, we outline avenues for future research.
2

Dawson, April. "Algorithmic Adjudication and Constitutional AI—The Promise of A Better AI Decision Making Future?" SMU Science and Technology Law Review 27, no. 1 (2024): 11. http://dx.doi.org/10.25172/smustlr.27.1.3.

Abstract:
Algorithmic governance is the use of algorithms, often in the form of AI, to make decisions, predict outcomes, and manage resources across various aspects of governance. This approach can be applied in areas like public administration, legal systems, policy-making, and urban planning. Algorithmic adjudication involves using AI to assist in or decide legal disputes. This often includes the analysis of legal documents, case precedents, and relevant laws to provide recommendations or even final decisions. The models typically used in these emerging decision-making systems are AI systems traditionally trained on large data sets, so that the system can render a decision or prediction based on past practices. However, the decisions often perpetuate existing biases and can be difficult to explain. Algorithmic decision-making models using a constitutional AI framework (like Anthropic's LLM Claude) may produce results that are more explainable and aligned with societal values. The constitutional AI framework integrates core legal and ethical standards directly into the algorithm’s design and operation, ensuring decisions are made with consideration for fairness, equality, and justice. This article will discuss society’s movement toward algorithmic governance and adjudication, the challenges associated with using traditionally trained AI in these decision-making models, and the potential for better outcomes with constitutional AI models.
3

Herzog, Lisa. "Algorithmisches Entscheiden, Ambiguitätstoleranz und die Frage nach dem Sinn." Deutsche Zeitschrift für Philosophie 69, no. 2 (2021): 197–213. http://dx.doi.org/10.1515/dzph-2021-0016.

Abstract:
In more and more contexts, human decision-making is replaced by algorithmic decision-making. While promising to deliver efficient and objective decisions, algorithmic decision systems have specific weaknesses, some of which are particularly dangerous if data are collected and processed by profit-oriented companies. In this paper, I focus on two problems that are at the root of the logic of algorithmic decision-making: (1) (in)tolerance for ambiguity, and (2) instantiations of Campbell’s law, i.e., of indicators that are used for “social decision-making” being subject to “corruption pressures” and tending to “distort and corrupt” the underlying social processes. As a result, algorithmic decision-making can risk missing the point of the social practice in question. These problems are intertwined with problems of structural injustice; hence, if algorithms are to deliver on their promises of efficiency and objectivity, accountability and critical scrutiny are needed.
4

Kleinberg, Jon, and Manish Raghavan. "Algorithmic monoculture and social welfare." Proceedings of the National Academy of Sciences 118, no. 22 (2021): e2018340118. http://dx.doi.org/10.1073/pnas.2018340118.

Abstract:
As algorithms are increasingly applied to screen applicants for high-stakes decisions in employment, lending, and other domains, concerns have been raised about the effects of algorithmic monoculture, in which many decision-makers all rely on the same algorithm. This concern invokes analogies to agriculture, where a monocultural system runs the risk of severe harm from unexpected shocks. Here, we show that the dangers of algorithmic monoculture run much deeper, in that monocultural convergence on a single algorithm by a group of decision-making agents, even when the algorithm is more accurate for any one agent in isolation, can reduce the overall quality of the decisions being made by the full collection of agents. Unexpected shocks are therefore not needed to expose the risks of monoculture; it can hurt accuracy even under “normal” operations and even for algorithms that are more accurate when used by only a single decision-maker. Our results rely on minimal assumptions and involve the development of a probabilistic framework for analyzing systems that use multiple noisy estimates of a set of alternatives.
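
The paper's central effect can be reproduced in a few lines of simulation. The sketch below is an illustrative toy model, not the authors' framework, and all parameters are invented for the demo: two firms hire in sequence from one candidate pool. When both rank candidates by a single shared noisy score (monoculture), their errors are perfectly correlated and the second firm inherits the first firm's blind spots; with independent evaluations, total hire quality is typically higher even though each score is equally accurate in isolation.

```python
# Toy monoculture simulation (illustrative assumptions, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)

def hiring_round(shared: bool, n_candidates: int = 20, noise: float = 1.0):
    """Return the true quality of firm 1's and firm 2's picks."""
    quality = rng.normal(size=n_candidates)
    if shared:
        # monoculture: both firms rank candidates by one shared noisy score
        scores1 = scores2 = quality + rng.normal(scale=noise, size=n_candidates)
    else:
        # each firm evaluates independently (independent noise)
        scores1 = quality + rng.normal(scale=noise, size=n_candidates)
        scores2 = quality + rng.normal(scale=noise, size=n_candidates)
    pick1 = int(np.argmax(scores1))                   # firm 1 chooses first
    remaining = [i for i in range(n_candidates) if i != pick1]
    pick2 = max(remaining, key=lambda i: scores2[i])  # firm 2 takes next best
    return quality[pick1], quality[pick2]

for shared in (True, False):
    picks = np.array([hiring_round(shared) for _ in range(20_000)])
    print(f"shared algorithm={shared}: "
          f"mean total quality of hires={picks.sum(axis=1).mean():.3f}")
```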
5

Duran, Sergi Gálvez. "Opening the Black-Box in Private-Law Employment Relationships: A Critical Review of the Newly Implemented Spanish Workers’ Council’s Right to Access Algorithms." Global Privacy Law Review 4, no. 1 (2023): 17–30. http://dx.doi.org/10.54648/gplr2023003.

Abstract:
Article 22 of the General Data Protection Regulation (GDPR) provides individuals with the right not to be subject to automated decisions. In this article, the author questions the extent to which the legal framework for automated decision-making in the GDPR is attuned to the employment context. More specifically, the author argues that an individual’s right may not be the most appropriate approach to contesting artificial intelligence (AI) based decisions in situations involving dependency contracts, such as employment relationships. Furthermore, Article 22 GDPR derogations rarely apply in the employment context, which puts organizations on the wrong track when deploying AI systems to make decisions about hiring, performance, and termination. In this scenario, emerging initiatives are calling for a shift from an individual rights perspective to a collective governance approach over data as a way to leverage collective bargaining power. Taking inspiration from these initiatives, I propose ‘algorithmic co-governance’ to address the lack of accountability and transparency in AI-based employment decisions. Algorithmic co-governance implies giving third parties (ideally, the workforce’s legal representatives) the power to negotiate, correct, and overturn AI-based employment decision tools. In this context, Spain has implemented a law reform requiring that Workers’ Councils be informed about the ‘parameters, rules, and instructions’ on which algorithmic decision-making is based, becoming the first law in the European Union requiring employers to share information about AI-based decisions with Workers’ Councils. I use this reform to evaluate a potential algorithmic co-governance model in the workplace, highlighting some shortcomings that may undermine its quality and effectiveness. Keywords: Algorithms, Artificial Intelligence, AI Systems, Automated Decision-Making, Algorithmic Co-governance, Algorithmic Management, Data Protection, Privacy, GDPR, Employment Decisions, Right To Access Algorithms, Workers’ Council.
6

Morchhale, Yogesh. "Ethical Considerations in Artificial Intelligence: Addressing Bias and Fairness in Algorithmic Decision-Making." International Journal of Scientific Research in Engineering and Management 8, no. 4 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem31693.

Abstract:
The expanding use of artificial intelligence (AI) in decision-making across a range of industries has given rise to serious ethical questions about prejudice and justice. This study examines the moral ramifications of using AI algorithms in decision-making and considers methods to combat prejudice and advance justice. Drawing on prior research and real-world examples, it investigates the underlying causes of prejudice in AI systems, the effects of biased algorithms on people and society, and the moral obligations of stakeholders in reducing bias. The study also addresses new frameworks and strategies for advancing justice in algorithmic decision-making, emphasizing the value of openness, responsibility, and diversity in dataset gathering and algorithm development. It concludes with suggestions for further investigation and legislative actions to guarantee that AI systems respect moral standards and advance justice and equity in decision-making processes. Keywords: Ethical considerations, Artificial intelligence, Bias, Fairness, Algorithmic decision-making, Ethical implications, Ethical responsibilities, Stakeholders, Bias in AI systems, Impact of biased algorithms, Strategies for addressing bias, Promoting fairness, Algorithmic transparency.
7

Starke, Christopher, Janine Baleis, Birte Keller, and Frank Marcinkowski. "Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature." Big Data & Society 9, no. 2 (2022): 20539517221115189. http://dx.doi.org/10.1177/20539517221115189.

Abstract:
Algorithmic decision-making increasingly shapes people's daily lives. Given that such autonomous systems can cause severe harm to individuals and social groups, fairness concerns have arisen. A human-centric approach demanded by scholars and policymakers requires considering people's fairness perceptions when designing and implementing algorithmic decision-making. We provide a comprehensive, systematic literature review synthesizing the existing empirical insights on perceptions of algorithmic fairness from 58 empirical studies spanning multiple domains and scientific disciplines. Through thorough coding, we systemize the current empirical literature along four dimensions: (1) algorithmic predictors, (2) human predictors, (3) comparative effects (human decision-making vs. algorithmic decision-making), and (4) consequences of algorithmic decision-making. While we identify much heterogeneity around the theoretical concepts and empirical measurements of algorithmic fairness, the insights come almost exclusively from Western-democratic contexts. By advocating for more interdisciplinary research adopting a society-in-the-loop framework, we hope our work will contribute to fairer and more responsible algorithmic decision-making.
8

Hoeppner, Sven, and Martin Samek. "Procedural Fairness as Stepping Stone for Successful Implementation of Algorithmic Decision-Making in Public Administration: Review and Outlook." AUC IURIDICA 70, no. 2 (2024): 85–99. http://dx.doi.org/10.14712/23366478.2024.24.

Abstract:
Algorithmic decision-making (ADM) is becoming more and more prevalent in everyday life. Due to their promise of producing faster, better, and less biased decisions, automated and data-driven processes also receive increasing attention in many different administrative settings. However, as a result of human mistakes, ADM also poses the threat of producing unfair outcomes. Looming algorithmic discrimination can undermine the legitimacy of administrative decision-making. While lawyers and lawmakers face the age-old question of regulation, many decision-makers tasked with designing ADM for, and implementing ADM in, public administration wrestle with harnessing its advantages and limiting its disadvantages. "Algorithmic fairness" has evolved as a key concept in developing algorithmic systems to counter detrimental outcomes. We provide a review of the vast literature on algorithmic fairness and show how key dimensions alter people's perception of whether an algorithm is fair. In doing so, we provide an entry point into this literature for anybody who is required to think about algorithmic fairness, particularly in a public administration context. We also pinpoint critical concerns about algorithmic fairness that public officials and researchers should note.
9

Saxena, Devansh, Karla Badillo-Urquiola, Pamela J. Wisniewski, and Shion Guha. "A Framework of High-Stakes Algorithmic Decision-Making for the Public Sector Developed through a Case Study of Child-Welfare." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (2021): 1–41. http://dx.doi.org/10.1145/3476089.

Abstract:
Algorithms have permeated throughout civil government and society, where they are being used to make high-stakes decisions about human lives. In this paper, we first develop a cohesive framework of algorithmic decision-making adapted for the public sector (ADMAPS) that reflects the complex socio-technical interactions between human discretion, bureaucratic processes, and algorithmic decision-making by synthesizing disparate bodies of work in the fields of Human-Computer Interaction (HCI), Science and Technology Studies (STS), and Public Administration (PA). We then applied the ADMAPS framework to conduct a qualitative analysis of an in-depth, eight-month ethnographic case study of algorithms in daily use within a child-welfare agency that serves approximately 900 families and 1300 children in the mid-western United States. Overall, we found that there is a need to focus on strength-based algorithmic outcomes centered in social ecological frameworks. In addition, algorithmic systems need to support existing bureaucratic processes and augment human discretion, rather than replace it. Finally, collective buy-in in algorithmic systems requires trust in the target outcomes at both the practitioner and bureaucratic levels. As a result of our study, we propose guidelines for the design of high-stakes algorithmic decision-making tools in the child-welfare system, and more generally, in the public sector. We empirically validate the theoretically derived ADMAPS framework to demonstrate how it can be useful for systematically making pragmatic decisions about the design of algorithms for the public sector.
10

Zerilli, John, Alistair Knott, James Maclaurin, and Colin Gavaghan. "Algorithmic Decision-Making and the Control Problem." Minds and Machines 29, no. 4 (2019): 555–78. http://dx.doi.org/10.1007/s11023-019-09513-7.

Abstract:
The danger of human operators devolving responsibility to machines and failing to detect cases where they fail has been recognised for many years by industrial psychologists and engineers studying the human operators of complex machines. We call it "the control problem", understood as the tendency of the human within a human–machine control loop to become complacent, over-reliant or unduly diffident when faced with the outputs of a reliable autonomous system. While the control problem has been investigated for some time, up to this point its manifestation in machine learning contexts has not received serious attention. This paper aims to fill that gap. We argue that, except in certain special circumstances, algorithmic decision tools should not be used in high-stakes or safety-critical decisions unless the systems concerned are significantly "better than human" in the relevant domain or subdomain of decision-making. More concretely, we recommend three strategies to address the control problem, the most promising of which involves a complementary (and potentially dynamic) coupling between highly proficient algorithmic tools and human agents working alongside one another. We also identify six key principles which all such human–machine systems should reflect in their design. These can serve as a framework both for assessing the viability of any such human–machine system and for guiding the design and implementation of such systems generally.
11

Romanova, Anna. "Fundamentals of Modeling Algorithmic Decisions in Corporate Management." Artificial Societies 19, no. 3 (2024). http://dx.doi.org/10.18254/s207751800032184-1.

Abstract:
Since 2014, several artificial intelligence systems have been officially appointed to management positions in international companies. Thus, it can be said that a paradigm shift in management decision-making is currently taking place: from a situation where artificial intelligence simply serves as a tool to support directors or board committees, we are moving to a situation where artificial intelligence controls the decision-making process. One of the key questions now is whether it is worthwhile to deal with law as a computation at all. High-quality computational law is the most important condition for the successful development of modern human civilization. The need to create algorithmic legislation for technical systems has already been considered by famous mathematicians, in particular, Gottfried Leibniz and Pierre Laplace. Despite the fact that the great thinkers of the past were able to foresee the basic principles that could form the basis for constructing reasoning by technical systems, at that time they did not have the necessary technical and social tools. The article proposes a methodology for creating a reference book (dictionary) for formulating algorithmic foundations of management decisions. The considered reference (dictionary) model is aimed at offering a reference base for the formation of policies of autonomous AI systems, which will allow all interested users to express informed consent or disagreement with the decisions made by AI systems. To the best of the author's knowledge, this is the first reference (dictionary) model for modeling algorithmic decisions of autonomous AI systems for corporate governance purposes.
12

Maw, Maw, Su-Cheng Haw, and Kok-Why Ng. "Perspectives of Defining Algorithmic Fairness in Customer-oriented Applications: A Systematic Literature Review." International Journal on Advanced Science, Engineering and Information Technology 14, no. 5 (2024): 1504–13. http://dx.doi.org/10.18517/ijaseit.14.5.11676.

Abstract:
Automated decision-making systems are now deployed extensively across different types of businesses, including customer-oriented sectors, and have brought countless achievements in persuading customers with more personalized experiences. However, recent studies have observed that the decisions made by these algorithms can be unfair to a person or a group of people. Thus, algorithmic fairness has become a spotlight research area, and defining a concrete version of fairness notions has become a significant research problem. In the existing literature, there are more than 21 definitions of algorithmic fairness. Many studies have shown that each notion has an incompatibility problem, and it is still necessary to make those notions more adaptable to the legal and social principles of the desired sectors. Yet, the constraints of algorithmic fairness for customer-oriented areas have not been thoroughly studied. This motivates us to conduct a systematic literature review investigating which sectors treat algorithmic fairness as a significant concern when using machine-based decision-making systems, which algorithmic fairness notions are well applied and why they can or cannot be directly applied to customer-oriented sectors, and what the possible algorithmic fairness constraints for customer-oriented sectors are. By applying the standard guidelines of systematic literature review, we explored 65 prominent articles thoroughly. The findings show 43 different formulations of algorithmic fairness notions across a variety of domains. We also identify three important perspectives to be considered for enhancing algorithmic fairness notions in customer-oriented sectors.
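
The incompatibility problem the review highlights is easy to demonstrate concretely. The toy example below (invented numbers, not drawn from the review) builds a classifier that satisfies equal opportunity, i.e. the same true-positive rate for both groups, yet necessarily violates demographic parity and predictive parity because the groups' base rates differ.

```python
# Toy demonstration that common fairness notions conflict (assumed data).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B
base_rate = np.where(group == 0, 0.3, 0.6)    # groups differ in prevalence
y = rng.random(n) < base_rate                 # true outcome
score = 0.5 * y + rng.normal(0.0, 0.3, n)     # same score noise given y
yhat = score > 0.25                           # one shared decision threshold

for g in (0, 1):
    m = group == g
    print(f"group {'AB'[g]}: "
          f"selection rate={yhat[m].mean():.2f} (demographic parity), "
          f"TPR={yhat[m & y].mean():.2f} (equal opportunity), "
          f"PPV={y[m & yhat].mean():.2f} (predictive parity)")
```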
13

Kambhampati, Aditya. "Mitigating bias in financial decision systems through responsible machine learning." World Journal of Advanced Engineering Technology and Sciences 15, no. 2 (2025): 1415–21. https://doi.org/10.30574/wjaets.2025.15.2.0687.

Abstract:
Algorithmic bias in financial decision systems perpetuates and sometimes amplifies societal inequities, affecting millions of consumers through discriminatory lending practices, inequitable pricing, and exclusionary fraud detection. Minority borrowers face interest rate premiums that collectively cost communities hundreds of millions of dollars annually, while technological barriers to financial inclusion affect tens of millions of "credit invisible" Americans. This article provides a comprehensive framework for detecting, measuring, and mitigating algorithmic bias across the machine learning development lifecycle in financial services. Through examination of statistical fairness metrics, technical mitigation strategies, feature engineering approaches, and regulatory considerations, the article demonstrates that financial institutions can significantly reduce discriminatory outcomes while maintaining model performance. Pre-processing techniques like reweighing and data transformation, in-processing methods such as adversarial debiasing, and post-processing adjustments including threshold optimization provide complementary strategies that together constitute effective bias mitigation. Feature selection emerges as particularly impactful, with proxy variable detection and alternative data integration expanding opportunities for underserved populations. As regulatory expectations evolve toward mandatory fairness testing and explainability requirements, financial institutions implementing comprehensive fairness frameworks not only reduce compliance risks but also expand market opportunities through more inclusive algorithmic systems.
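
As one concrete instance of the pre-processing family the abstract mentions, the sketch below implements Kamiran-and-Calders-style reweighing on a made-up approvals table; the column names and data are illustrative assumptions, not from the paper. Each (group, label) cell receives the weight P(group)·P(label) / P(group, label), which makes group membership and outcome statistically independent in the weighted training set.

```python
# Minimal reweighing sketch (pre-processing bias mitigation), assumed data.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [ 1,   0,   0,   1,   1,   1,   0,   1 ],
})

p_group = df["group"].value_counts(normalize=True)
p_label = df["approved"].value_counts(normalize=True)
p_joint = df.groupby(["group", "approved"]).size() / len(df)

weights = df.apply(
    lambda r: p_group[r["group"]] * p_label[r["approved"]]
              / p_joint[(r["group"], r["approved"])],
    axis=1,
)
# These weights can be passed to most scikit-learn estimators,
# e.g. LogisticRegression().fit(X, y, sample_weight=weights).
print(df.assign(weight=weights.round(3)))
```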
14

Žliobaitė, Indrė. "Measuring discrimination in algorithmic decision making." Data Mining and Knowledge Discovery 31, no. 4 (2017): 1060–89. http://dx.doi.org/10.1007/s10618-017-0506-1.

15

Cheng, Lingwei, and Alexandra Chouldechova. "Heterogeneity in Algorithm-Assisted Decision-Making: A Case Study in Child Abuse Hotline Screening." Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (2022): 1–33. http://dx.doi.org/10.1145/3555101.

Abstract:
Algorithmic risk assessment tools are now commonplace in public sector domains such as criminal justice and human services. These tools are intended to aid decision makers in systematically using rich and complex data captured in administrative systems. In this study we investigate sources of heterogeneity in the alignment between worker decisions and algorithmic risk scores in the context of a real world child abuse hotline screening use case. Specifically, we focus on heterogeneity related to worker experience. We find that senior workers are far more likely to screen in referrals for investigation, even after we control for the observed algorithmic risk score and other case characteristics. We also observe that the decisions of less-experienced workers are more closely aligned with algorithmic risk scores than those of senior workers who had decision-making experience prior to the tool being introduced. While screening decisions vary across child race, we do not find evidence of racial differences in the relationship between worker experience and screening decisions. Our findings indicate that it is important for agencies and system designers to consider ways of preserving institutional knowledge when introducing algorithms into high employee turnover settings such as child welfare call screening.
16

Doshi, Paras. "From Insight to Impact: The Evolution of Data-Driven Decision Making in the Age of AI." International Journal of Artificial Intelligence & Applications 16, no. 3 (2025): 83–92. https://doi.org/10.5121/ijaia.2025.16306.

Abstract:
This paper presents a comprehensive critical review of contemporary technical solutions and approaches to artificial intelligence-based decision-making systems in executive strategy scenarios. Drawing on a systematic review of deployed technical solutions, algorithmic approaches, and empirical studies, the survey classifies and delineates the current decision support technology landscape and outlines future directions. Based on an extensive review of current research and business applications, the paper explains how AI technologies are redefining strategic decision frameworks across industries. It systematically contrasts machine learning algorithms, decision support architectures, and human-AI hybrid systems along various performance dimensions. The research points out prevailing trends such as the growth of augmented intelligence systems, the integration of predictive analytics with human intelligence, and emerging ethical paradigms. Simulation results indicate that hybrid decision models that combine algorithmic precision with human intuition achieve 23% higher decision quality scores than algorithm-only or human-only approaches. The review concludes that effective executive strategy in the AI age calls for systematic organizational change involving technological infrastructure, leadership capability, and cultural adjustment.
17

Yáñez, J., J. Montero, and D. Gómez. "An Algorithmic Approach to Preference Representation." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 16, suppl. 2 (2008): 1–18. http://dx.doi.org/10.1142/s0218488508005455.

Abstract:
In a previous paper, the authors proposed an alternative approach to classical dimension theory, based upon a general representation of strict preferences that is not restricted to partially ordered sets. Without any relevant restriction, the proposed approach was conceived as a potentially powerful tool for decision-making problems where basic information has been modeled by means of valued binary preference relations. In fact, assuming that each decision maker is able to consistently manage intensity values for preferences is a strong assumption even when few alternatives are involved (if the number of alternatives is large, the same criticism applies to crisp preferences). Any representation tool, such as the one proposed by the authors, will in principle play a key role in helping decision makers understand their preference structure. In this paper we introduce an alternative approach that avoids certain complexity issues of the initial proposal, allowing a close representation that is easier to obtain in practice.
18

Sharma, Shefali, Priyanka Sharma, and Rohit Sharma. "The Influence of Algorithmic Management on Employees’ Perceptions of Organisational Justice – A Conceptual Paper." Journal of Technology Management for Growing Economies 15, no. 1 (2024): 31–40. https://doi.org/10.15415/jtmge/2024.151004.

Abstract:
Background: The use of algorithms in organizations has become increasingly common, with many companies transitioning managerial control from humans to algorithms. While algorithmic management promises significant efficiency gains, it overlooks an important factor: employees’ perceptions of fairness in their workplace. Employees pay close attention to how they perceive justice in the workplace, and their behavior is shaped by these perceptions. Purpose: This study aims to explore how algorithmic management systems can be designed to balance efficiency with transparency and fairness. To foster a sense of justice, organizations should design algorithmic systems in such a way as to ensure a transparent decision-making process where employees clearly understand how decisions are made and can trust the outcomes. Method: A review of the literature on algorithmic management and workplace fairness was conducted, focusing on the impact of transparent decision-making processes, equitable resource distribution, and constructive feedback mechanisms on employee trust and engagement. Results: Providing timely and constructive feedback and distributing resources equitably can foster a strong sense of fairness, making employees feel valued and connected to the organization. This enhances employee trust, engagement, and overall organizational performance, as workers are more likely to align with the company’s goals when they are treated fairly. Conclusion: Organizations should prioritize fairness and transparency in the design of algorithmic management systems. Treating employees fairly not only strengthens their connection to the organization but also aligns their efforts with its objectives.
19

Erlei, Alexander, Franck Nekdem, Lukas Meub, Avishek Anand, and Ujwal Gadiraju. "Impact of Algorithmic Decision Making on Human Behavior: Evidence from Ultimatum Bargaining." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 8 (October 1, 2020): 43–52. http://dx.doi.org/10.1609/hcomp.v8i1.7462.

Abstract:
Recent advances in machine learning have led to the widespread adoption of ML models for decision support systems. However, little is known about how the introduction of such systems affects the behavior of human stakeholders. This pertains both to the people using the system, as well as those who are affected by its decisions. To address this knowledge gap, we present a series of ultimatum bargaining game experiments comprising 1178 participants. We find that users are willing to use a black-box decision support system and thereby make better decisions. This translates into higher levels of cooperation and better market outcomes. However, because users under-weigh algorithmic advice, market outcomes remain far from optimal. Explanations increase the number of unique system inquiries, but users appear less willing to follow the system’s recommendation. People who negotiate with a user who has a decision support system, but cannot use one themselves, react to its introduction by demanding a better deal for themselves, thereby decreasing overall cooperation levels. This effect is largely driven by the percentage of participants who perceive the system’s availability as unfair. Interpretability mitigates perceptions of unfairness. Our findings highlight the potential for decision support systems to further human cooperation, but also the need for regulators to consider heterogeneous stakeholder reactions. In particular, higher levels of transparency might inadvertently hurt cooperation through changes in fairness perceptions.
20

Raja Kumar, Jambi Ratna, Aarti Kalnawat, Avinash M. Pawar, Varsha D. Jadhav, P. Srilatha, and Vinit Khetani. "Transparency in Algorithmic Decision-making: Interpretable Models for Ethical Accountability." E3S Web of Conferences 491 (2024): 02041. http://dx.doi.org/10.1051/e3sconf/202449102041.

Abstract:
The spread of algorithmic decision-making systems across a variety of fields has raised concerns regarding their opacity and potential ethical ramifications. By promoting the use of interpretable machine learning models, this research addresses the critical requirement for openness and moral responsibility in these systems. Interpretable models provide a transparent and intelligible depiction of how decisions are made, as opposed to complicated black-box algorithms. Users and stakeholders need this openness in order to understand, verify, and hold accountable the decisions made by these algorithms. Furthermore, interpretability promotes fairness in algorithmic results by making it easier to detect and reduce biases. In this article, we give an overview of the difficulties brought on by algorithmic opacity, highlighting how crucial it is to solve these difficulties in a variety of settings, including healthcare, banking, criminal justice, and more. From linear models to rule-based systems to surrogate models, we give a thorough analysis of interpretable machine learning techniques, highlighting their benefits and drawbacks. We suggest that incorporating interpretable models into the design and use of algorithms can result in a more responsible and moral application of AI in society, ultimately benefiting people and communities while lowering the risks connected to opaque decision-making processes.
21

Atmaja, Subhanjaya Angga. "Ethical Considerations in Algorithmic Decision-making: Towards Fair and Transparent AI Systems." Riwayat: Educational Journal of History and Humanities 8, no. 1 (2025): 620–27. https://doi.org/10.24815/jr.v8i1.44112.

Abstract:
Artificial intelligence (AI)-based algorithmic decision-making is a major concern in the digital age due to its potential to improve efficiency in various sectors, including healthcare, law, and finance. However, its implementation poses significant ethical challenges, such as bias in data and a lack of transparency that can affect fairness and public trust. This research aims to explore ethical considerations in algorithmic decision-making with a focus on fairness and transparency, identify key challenges, and provide policy recommendations to improve the accountability of AI systems. The research method uses a qualitative approach through literature studies that include academic articles, books, and research reports. The results show that algorithmic bias often arises due to unrepresentative historical data, while low transparency makes it difficult to understand the decision-making process. To overcome this problem, independent algorithm audits, the application of explainable AI, progressive regulations, public education, and the use of more diverse data are needed. This recommendation aims to create a fair, transparent, and trustworthy AI system, thereby supporting wider acceptance of the technology in society.
22

Mazijn, Carmen, Carina Prunkl, Andres Algaba, Jan Danckaert, and Vincent Ginis. "LUCID: Exposing Algorithmic Bias through Inverse Design." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 14391–99. http://dx.doi.org/10.1609/aaai.v37i12.26683.

Abstract:
AI systems can create, propagate, support, and automate bias in decision-making processes. To mitigate biased decisions, we both need to understand the origin of the bias and define what it means for an algorithm to make fair decisions. Most group fairness notions assess a model's equality of outcome by computing statistical metrics on the outputs. We argue that these output metrics encounter intrinsic obstacles and present a complementary approach that aligns with the increasing focus on equality of treatment. By Locating Unfairness through Canonical Inverse Design (LUCID), we generate a canonical set that shows the desired inputs for a model given a preferred output. The canonical set reveals the model's internal logic and exposes potential unethical biases by repeatedly interrogating the decision-making process. We evaluate LUCID on the UCI Adult and COMPAS data sets and find that some biases detected by a canonical set differ from those of output metrics. The results show that by shifting the focus towards equality of treatment and looking into the algorithm's internal workings, the canonical sets are a valuable addition to the toolbox of algorithmic fairness evaluation.
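
The inverse-design idea can be sketched in a few lines. The code below is a simplified stand-in for LUCID, not the authors' implementation, and the model, weights, and feature roles are assumptions: for a logistic "model under audit", gradient ascent pushes random inputs toward the preferred output, and the resulting canonical set is inspected for reliance on a protected feature.

```python
# Simplified inverse-design sketch in the spirit of LUCID (assumed model).
import numpy as np

rng = np.random.default_rng(2)

# Stand-in trained model: logistic weights over 4 features, where
# feature index 3 plays the role of a protected attribute.
w = np.array([1.5, -0.8, 0.4, 1.2])
b = -0.2
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def canonical_set(n_samples: int = 5, steps: int = 200, lr: float = 0.1):
    X = rng.normal(size=(n_samples, len(w)))
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        # gradient of log p (preferred class = 1) w.r.t. the inputs: (1 - p) * w
        X += lr * (1.0 - p)[:, None] * w[None, :]
    return X

X_canon = canonical_set()
print("mean canonical input per feature:", X_canon.mean(axis=0).round(2))
# A large mean value on the protected feature (index 3) would expose that
# the model's preferred inputs systematically lean on it.
```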
23

Pasupuleti, Murali Krishna. "Auditing Black-Box AI Systems Using Counterfactual Explanations." International Journal of Academic and Industrial Research Innovations (IJAIRI) 5, no. 5 (2025): 598–608. https://doi.org/10.62311/nesx/rphcr20.

Abstract:
The widespread deployment of black-box artificial intelligence (AI) systems in high-stakes domains such as healthcare, finance, and criminal justice has intensified the demand for transparent and accountable decision-making. This study investigates the utility of counterfactual explanations as a method for auditing opaque AI models. By answering the question of what minimal change in input would alter a model’s output, counterfactuals offer a practical and interpretable means of understanding algorithmic decisions. The methodology employs Random Forest and Neural Network classifiers on two benchmark datasets—German Credit (finance) and MIMIC-III (healthcare)—to examine prediction outcomes before and after applying counterfactual explanations. Techniques including DiCE (Diverse Counterfactual Explanations) and gradient-based methods are evaluated using fidelity, sparsity, proximity, and fairness metrics. Results indicate that counterfactual interventions significantly reduce statistical disparity while maintaining high predictive accuracy, with interpretability scores improving by an average of 22%. Additionally, retraining models using counterfactual insights leads to enhanced fairness without notable performance degradation. The findings underscore the potential of counterfactual auditing to uncover bias, enhance transparency, and facilitate compliance with emerging AI governance standards. This research contributes to the development of interpretable, ethically aligned AI systems suitable for critical decision-support environments. Keywords: Counterfactual explanations, black-box AI, model auditing, interpretability, fairness, algorithmic transparency, DiCE, AI governance, ethical AI, decision-support systems
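
A minimal version of the counterfactual question, i.e. the smallest input change that flips a decision, can be written from scratch. The sketch below uses an assumed logistic model rather than the paper's Random Forest/Neural Network classifiers or the DiCE library, and minimizes a proximity term plus prediction loss by gradient descent.

```python
# From-scratch counterfactual search sketch (assumed stand-in model).
import numpy as np

w = np.array([2.0, -1.0, 0.5])   # stand-in trained credit model
b = -0.5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, target=1, lam=0.1, lr=0.05, steps=500):
    """Minimise lam * ||x' - x||^2 + BCE(f(x'), target) by gradient descent."""
    x_cf = x.copy()
    for _ in range(steps):
        p = sigmoid(x_cf @ w + b)
        if (p > 0.5) == bool(target):          # decision flipped: stop early
            break
        grad = 2.0 * lam * (x_cf - x) + (p - target) * w
        x_cf -= lr * grad
    return x_cf

x = np.array([-0.5, 1.0, 0.0])                 # rejected applicant
print("original prediction:", sigmoid(x @ w + b).round(3))
x_cf = counterfactual(x)
print("counterfactual input:", x_cf.round(3),
      "new prediction:", sigmoid(x_cf @ w + b).round(3))
```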
24

Green, Ben, and Yiling Chen. "Algorithm-in-the-Loop Decision Making." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 09 (2020): 13663–64. http://dx.doi.org/10.1609/aaai.v34i09.7115.

Abstract:
We introduce a new framework for conceiving of and studying algorithms that are deployed to aid human decision making: “algorithm-in-the-loop” systems. The algorithm-in-the-loop framework centers human decision making, providing a more precise lens for studying the social impacts of algorithmic decision making aids. We report on two experiments that evaluate algorithm-in-the-loop decision making and find significant limits to these systems.
25

Yella, Sravan. "Algorithmic Campaign Orchestration: A Framework for Automated Multi-Channel Marketing Decisions." Journal of Computer Science and Technology Studies 7, no. 2 (2025): 323–31. https://doi.org/10.32996/jcsts.2025.7.2.33.

Abstract:
This article examines the paradigm shift from traditional rule-based marketing automation to continuous experience optimization enabled by AI-driven decision engines. The article presents an architectural framework for real-time campaign orchestration systems that leverage predictive analytics, reinforcement learning, and natural language processing to dynamically personalize customer interactions across channels. Through multiple case studies across different industry sectors, the article demonstrates how these systems process multi-source data streams to make intelligent decisions in milliseconds, creating responsive customer journeys that adapt to behavioral signals and contextual cues. The article indicates significant improvements in engagement metrics, customer retention, and marketing return on investment compared to conventional batch-processing approaches. The article identifies implementation challenges, including technical integration barriers, data quality dependencies, and organizational readiness factors, while proposing solutions to these obstacles. This article contributes to the growing field of algorithmic marketing by establishing methodological guidelines for evaluating the performance of real-time decision systems and outlining a roadmap for future advancements in continuous optimization technologies.
26

Rubel, Alan, Clinton Castro, and Adam Pham. "Algorithms, Agency, and Respect for Persons." Social Theory and Practice 46, no. 3 (2020): 547–72. http://dx.doi.org/10.5840/soctheorpract202062497.

Abstract:
Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that in understanding the moral salience of algorithmic systems it is essential to understand the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline four key ways in which issues of agency, autonomy, and respect for persons can conflict with algorithmic decision-making. Three of these involve failures to treat individual agents with the respect they deserve. The fourth involves distancing oneself from a morally suspect action by attributing one’s decision to take that action to an algorithm, thereby laundering one’s agency.
27

Nizar, Qashta, Abdalla Kheiri Murtada, and Issa Aljaradat Dorgham. "Algorithmic Bias in Artificial Intelligence Systems and its Legal Dimensions." Journal of Asian American Studies 27, no. 3 (2025): 520–36. https://doi.org/10.5281/zenodo.15074440.

Abstract:
Artificial intelligence algorithms form the foundation of modern intelligent systems, enabling machines to learn, infer, and make decisions independently. These algorithms have achieved significant progress due to the abundance of data and increased computational power, enabling them to solve many complex problems. Despite these benefits, such systems face ethical and legal challenges related to algorithmic bias and its impact on marginalized groups. This necessitates strict legislative interventions. For these reasons and others, this research focuses on the legal and ethical challenges associated with algorithmic bias in artificial intelligence systems. This issue is one of the most prominent challenges facing these modern systems. The research adopts an integrated scientific approach that combines both descriptive and analytical methods. It can also be said that the research adopted a mixed methodology, combining the quantitative approach, by examining previous statistics from reliable sources, with the qualitative approach, based on analyzing information and correlating results. The research examines the development of artificial intelligence and the effects of algorithmic bias on marginalized groups, such as women and minorities. It clarifies the nature and causes of algorithmic bias, highlights the danger of algorithmic bias to individuals and societies, and sheds light on the legislative challenges related to algorithmic bias. Additionally, the research reviews global legislation that attempts to address this challenge and examines some Arab legislation, particularly Omani legislation. The research reached several key findings, including the identification of algorithmic bias. This bias refers to unfair discrimination in intelligent system decisions based on factors such as gender, race, or religion. Algorithmic bias can reinforce patterns of racial discrimination due to distorted data or flawed algorithm design. European law requires platforms to assess the risks associated with algorithms, including bias, and to submit annual reports to an independent oversight system. As for Arab legislation, the UAE has established a charter for the development and use of artificial intelligence, focusing on algorithmic bias, while Oman explicitly criminalizes all forms of racial discrimination. The research recommends increasing academic research on legal frameworks to address the challenges of artificial intelligence and justice. It calls on research centers and decision-makers to create a unified Arab law that addresses algorithmic bias, encourages updating current laws to keep pace with technological advancements, and urges the Omani legislator to introduce specific laws or clear legislative provisions regarding algorithmic bias.
28

Engelmann, Alana. "Algorithmic transparency as a fundamental right in the democratic rule of law." Brazilian Journal of Law, Technology and Innovation 1, no. 2 (2023): 169–88. http://dx.doi.org/10.59224/bjlti.v1i2.169-188.

Abstract:
This article scrutinizes the escalating apprehensions surrounding algorithmic transparency, positing it as a pivotal facet of ethics and accountability in the development and deployment of artificial intelligence (AI) systems. By delving into legislative and regulatory initiatives across various jurisdictions, the article discerns how different countries and regions endeavor to institute guidelines fostering ethical and responsible AI systems. Both the US Algorithmic Accountability Act of 2022 and the European Artificial Intelligence Act share a common objective of establishing governance frameworks to hold errant entities accountable, ensuring the ethical, legal, and secure implementation of AI systems. A key emphasis in both pieces of legislation is placed on algorithmic transparency and the elucidation of system functionalities, with the overarching goal of instilling accountability in AI operations. This examination extends to Brazil, where legislative proposals such as PL 2.338/2023 grapple with the intricacies of AI deployment and algorithmic transparency. Furthermore, PEC 29/2023 endeavors to enshrine algorithmic transparency as a fundamental right, recognizing its pivotal role in safeguarding users' mental integrity in the face of advancing neurotechnology and algorithmic utilization. To ascertain the approaches adopted by Europe, the United States, and Brazil in realizing the concept of algorithmic transparency in AI systems employed for decision-making, a comparative and deductive methodology is employed, aligned with bibliographical analysis incorporating legal doctrines, legislative texts, and jurisprudential considerations from the respective legal systems. The analysis encompasses Algorithmic Transparency, Digital Due Process, and Accountability as inherent legal constructs, offering a comprehensive comparative perspective. However, the mere accessibility of source codes is deemed insufficient to guarantee effective comprehension and scrutiny by end-users. Recognizing this, the imperative of explainability in elucidating how AI systems function becomes evident, enabling citizens to comprehend the rationale behind decisions made by these systems. Legislative initiatives, exemplified by Resolution No. 332/2020 of the National Council of Justice (CNJ), underscore the acknowledgment of the imperative for transparency and accountability in AI systems utilized within the Judiciary.
29

Neyyila, Saibabu, Chaitanya Neyyala, and Saumendra Das. "Leveraging AI and behavioral economics to enhance decision-making." World Journal of Advanced Research and Reviews 26, no. 2 (2025): 1721–30. https://doi.org/10.30574/wjarr.2025.26.2.1581.

Abstract:
This research examines the integration of artificial intelligence (AI) within behavioral economics, specifically its impact on decision-making processes. Behavioral economics explores the psychological, cognitive, and emotional factors that influence economic choices, and AI offers innovative tools for analyzing, predicting, and shaping these decisions. The paper highlights recent advancements in AI technologies, such as machine learning, natural language processing, and predictive analytics, and their role in deepening our understanding of human economic behavior. It also investigates how AI-driven decision-making systems are affecting both individuals and organizations, with a focus on ethical considerations and practical applications. The study explores AI's transformative effect on decision-making in areas like digital markets and finance. Using India's e-commerce and fintech sectors as examples, the research looks at AI’s role in pricing strategies, consumer decisions, and financial behavior. It demonstrates how businesses can enhance sales and customer engagement through behavioral nudges, dynamic pricing, and AI-based recommendation systems. The case study of Flipkart’s AI-powered recommendation engine shows a 30% increase in user engagement and a 25% boost in sales. However, challenges such as algorithmic bias, data privacy concerns, and the need for ethical transparency remain. The findings highlight that 58% of users are worried about algorithmic bias in financial decisions. The study calls for stronger data protection laws, greater human interpretability of AI models, and the responsible, ethical development of AI, urging future research to focus on explainable AI and equitable, transparent systems.
30

Sankaran, Ganesh, Marco A. Palomino, Martin Knahl, and Guido Siestrup. "A Modeling Approach for Measuring the Performance of a Human-AI Collaborative Process." Applied Sciences 12, no. 22 (2022): 11642. http://dx.doi.org/10.3390/app122211642.

Abstract:
Despite the unabated growth of algorithmic decision-making in organizations, there is a growing consensus that numerous situations will continue to require humans in the loop. However, the blending of a formal machine and bounded human rationality also amplifies the risk of what is known as local rationality. Therefore, it is crucial, especially in a data-abundant environment that characterizes algorithmic decision-making, to devise means to assess performance holistically. In this paper, we propose a simulation-based model to address the current lack of research on quantifying algorithmic interventions in a broader organizational context. Our approach allows the combining of causal modeling and data science algorithms to represent decision settings involving a mix of machine and human rationality to measure performance. As a testbed, we consider the case of a fictitious company trying to improve its forecasting process with the help of a machine learning approach. The example demonstrates that a myopic assessment obscures problems that only a broader framing reveals. It highlights the value of a systems view since the effects of the interplay between human and algorithmic decisions can be largely unintuitive. Such a simulation-based approach can be an effective tool in efforts to delineate roles for humans and algorithms in hybrid contexts.
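
A stripped-down version of such a simulation illustrates why the paper argues for the broader framing. In the toy model below (all rules, costs, and parameters are invented assumptions, not the paper's testbed), a human planner pads an ML-style forecast after a stockout: judged myopically by deviation from the forecast the intervention looks like pure noise, while the system-level cost metric can show it paying for itself under asymmetric stockout costs.

```python
# Toy human-plus-algorithm ordering simulation (all assumptions invented).
import numpy as np

rng = np.random.default_rng(3)
T = 5_000
demand = np.clip(rng.normal(100, 25, T), 0, None)

def run(pad_after_stockout: float):
    deviations, costs = [], []
    stocked_out = False
    for t in range(5, T):
        forecast = demand[t - 5:t].mean()           # stand-in ML point forecast
        order = forecast * (1 + pad_after_stockout * stocked_out)
        sold = min(order, demand[t])
        stocked_out = demand[t] > order
        deviations.append(abs(order - forecast))    # "myopic" adherence metric
        costs.append(1.0 * (order - sold)                 # holding cost
                     + 20.0 * max(demand[t] - order, 0))  # stockout penalty
    return np.mean(deviations), np.mean(costs)

for pad in (0.0, 0.15):
    dev, cost = run(pad)
    print(f"human padding={pad:.2f}: "
          f"mean deviation from forecast={dev:.1f}, mean cost={cost:.1f}")
```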
31

Zweig, Katharina A., Georg Wenzelburger, and Tobias D. Krafft. "On Chances and Risks of Security Related Algorithmic Decision Making Systems." European Journal for Security Research 3, no. 2 (2018): 181–203. http://dx.doi.org/10.1007/s41125-018-0031-2.

32

Cundiff, W. E. "Database organisation: An algorithmic framework for decision support systems in APL." Decision Support Systems 5, no. 1 (1989): 57–64. http://dx.doi.org/10.1016/0167-9236(89)90028-6.

33

Cibaca, Khandelwal. "Bias Mitigation Strategies in AI Models for Financial Data." International Journal of Innovative Research and Creative Technology 11, no. 2 (2025): 1–9. https://doi.org/10.5281/zenodo.15318708.

Abstract:
Artificial intelligence (AI) has become integral to financial systems, enabling automation in credit scoring, fraud detection, and investment management. However, the presence of bias in AI models can propagate systemic inequities, leading to ethical, operational, and regulatory challenges. This paper examines strategies to mitigate bias in AI systems applied to financial data. It discusses challenges associated with biased datasets, feature selection, and algorithmic decisions, alongside practical mitigation approaches such as data balancing, algorithmic fairness techniques, and post-processing adjustments. Insights from case studies demonstrate the real-world application of these strategies, highlighting their effectiveness in promoting fairness, enhancing transparency, and reducing adverse outcomes. By providing a comprehensive framework, this paper contributes to fostering equitable financial decision-making.
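
One of the simplest audit checks in the monitoring toolbox the abstract alludes to is the disparate impact ratio (the "four-fifths rule"). The sketch below applies it to an invented approvals table; the data and the 0.80 threshold are illustrative assumptions, not figures from the paper.

```python
# Disparate impact ratio check on assumed approval decisions.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A"] * 50 + ["B"] * 50,
    "approved": [1] * 30 + [0] * 20 + [1] * 18 + [0] * 32,
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()            # disparate impact ratio
print(rates)
print(f"disparate impact ratio = {ratio:.2f}"
      + ("  (below the 0.80 rule-of-thumb: investigate)" if ratio < 0.8 else ""))
```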
34

Veldurthi, Anil Kumar. "Ethical Considerations in AI-Driven Financial Decision-Making." European Journal of Computer Science and Information Technology 13, no. 31 (2025): 49–64. https://doi.org/10.37745/ejcsit.2013/vol13n314964.

Abstract:
This article examines the ethical dimensions of artificial intelligence in financial decision-making systems. As AI increasingly permeates critical functions across the financial services industry—from credit underwriting and fraud detection to algorithmic trading and personalized financial advice—it introduces profound ethical challenges that demand careful examination. It explores how algorithmic bias manifests through training data, feature selection, and algorithmic design, creating disparate outcomes for marginalized communities despite the absence of explicit discriminatory intent. The article provides a technical analysis of fairness-aware machine learning techniques, including pre-processing, in-processing, and post-processing approaches that financial institutions can implement to mitigate bias. Further, it examines explainability approaches necessary for transparency, privacy preservation methods to protect sensitive financial data, and human oversight frameworks essential for responsible governance. The regulatory landscape across multiple jurisdictions is analyzed, with particular attention to evolving compliance requirements and emerging best practices. Through a comprehensive examination of these interconnected ethical considerations, the article offers a framework for financial institutions to develop AI systems that balance innovation with responsibility, ensuring technological advancement aligns with core human values of fairness, transparency, privacy, and accountability. This paper recommends a multi-pronged approach combining fairness-aware modeling, explainable AI, privacy-preserving technologies, and strong governance structures. Financial institutions should embed these principles throughout the AI lifecycle to ensure compliance, build consumer trust, and promote responsible innovation.
APA, Harvard, Vancouver, ISO, and other styles
36

Boateng, Obed, and Bright Boateng. "Algorithmic bias in educational systems: Examining the impact of AI-driven decision making in modern education." World Journal of Advanced Research and Reviews 25, no. 1 (2025): 2012–17. https://doi.org/10.30574/wjarr.2025.25.1.0253.

Full text
Abstract:
The increasing integration of artificial intelligence and algorithmic systems in educational settings has raised critical concerns about their impact on educational equity. This paper examines the manifestation and implications of algorithmic bias across various educational domains, including admissions processes, assessment systems, and learning management platforms. Through analysis of current research and studies, we investigate how these biases can perpetuate or exacerbate existing educational disparities, particularly affecting students from marginalized communities. The study reveals that algorithmic bias in education operates through multiple channels, from data collection and algorithm design to implementation practices and institutional policies. Our findings indicate that biased algorithms can significantly impact students' educational trajectories, creating new forms of systemic barriers in education. We propose a comprehensive framework for addressing these challenges, combining technical solutions with policy reforms and institutional guidelines. This research contributes to the growing discourse on ethical AI in education and provides practical strategies for creating more equitable educational systems in an increasingly digitized world.
APA, Harvard, Vancouver, ISO, and other styles
37

Collins, Robert, Johan Redström, and Marco Rozendaal. "The Right to Contestation: Towards Repairing Our Interactions with Algorithmic Decision Systems." International Journal of Design 18, no. 1 (2024): 95–106. https://doi.org/10.57698/v18i1.06.

Full text
Abstract:
This paper looks at how contestation in the context of algorithmic decision systems is essentially the progeny of repair for our more decentralised and abstracted digital world. The act of repair has often been a way for users to contest bad design, substandard products, and disappointing outcomes - not to mention often being a necessary aspect of ensuring effective use over time. As algorithmic systems continue to make more decisions about our lives and futures, we need to look for new ways to contest their outcomes and repair potentially broken systems. By looking at examples of contemporary repair and contestation and tracing the history of electronics repair from discrete components into the decentralised systems of today, we examine how the shared values of repair and contestation help surface ways to approach contestation using tactics of the Right to Repair movement and the instincts of the Fixer. Finally, we speculate on roles, communities and a move towards an agonistic interaction space where response-ability rests more equally across user, designer and system.
APA, Harvard, Vancouver, ISO, and other styles
38

Pääkkönen, Juho. "Bureaucratic routines and error management in algorithmic systems." Hallinnon Tutkimus 40, no. 4 (2021): 243–53. http://dx.doi.org/10.37450/ht.107880.

Full text
Abstract:
This article discusses how an analogy between algorithms and bureaucratic decision-making could help conceptualize error management in algorithmic systems. It argues that a view of algorithms as irreflexive bureaucratic processes is insufficient as an account of errors in complex public sector contexts, where algorithms operate jointly with other organizational work practices. To conceptualize such contexts, the article proposes that algorithms could be viewed as analogous to more traditional work routines in bureaucratic organizations. Doing so helps clarify that algorithmic irreflexivity becomes problematic when the coordination of routine work around automation fails. The challenges of error management thus also come to concern the wider context of organized work. This argument is illustrated using known examples from the critical literature on algorithms. Finally, drawing on recent studies in routine dynamics, the article formulates empirical research directions on error management in algorithmic systems.
APA, Harvard, Vancouver, ISO, and other styles
39

Chandra, Rushil, Karun Sanjaya, AR Aravind, Ahmed Radie Abbas, Ruzieva Gulrukh, and T. S. Senthil kumar. "Algorithmic Fairness and Bias in Machine Learning Systems." E3S Web of Conferences 399 (2023): 04036. http://dx.doi.org/10.1051/e3sconf/202339904036.

Full text
Abstract:
In recent years, research into and concern over algorithmic fairness and bias in machine learning systems has grown significantly. It is vital to make sure that these systems are fair, impartial, and do not support discrimination or social injustices since machine learning algorithms are becoming more and more prevalent in decision-making processes across a variety of disciplines. This abstract gives a general explanation of the idea of algorithmic fairness, the difficulties posed by bias in machine learning systems, and different solutions to these problems. Algorithmic bias and fairness in machine learning systems are crucial issues in this regard that demand the attention of academics, practitioners, and policymakers. Building fair and unbiased machine learning systems that uphold equality and prevent discrimination requires addressing biases in training data, creating fairness-aware algorithms, encouraging transparency and interpretability, and encouraging diversity and inclusivity.
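One way to ground this abstract's mention of fairness-aware algorithms is an in-processing sketch: a logistic regression whose loss penalizes the covariance between the sensitive attribute and the decision score. This is a generic illustration, not the authors' method; the penalty form, hyperparameters, and synthetic data are all assumptions.

```python
import numpy as np

def train_fair_logreg(X, y, group, lam=1.0, lr=0.1, steps=2000):
    """In-processing sketch: logistic regression with a squared penalty on
    cov(sensitive attribute, decision score), a demographic-parity-style
    regularizer. lam trades accuracy for fairness."""
    w = np.zeros(X.shape[1])
    g_centered = group - group.mean()
    for _ in range(steps):
        z = X @ w
        p = 1.0 / (1.0 + np.exp(-z))
        grad_ll = X.T @ (p - y) / len(y)                   # log-loss gradient
        cov = g_centered @ z / len(y)                      # cov(g, score)
        grad_fair = 2 * cov * (X.T @ g_centered) / len(y)  # gradient of cov^2
        w -= lr * (grad_ll + lam * grad_fair)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
group = rng.integers(0, 2, 500).astype(float)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=500) > 0).astype(float)
w = train_fair_logreg(X, y, group, lam=5.0)
```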
APA, Harvard, Vancouver, ISO, and other styles
40

K, Suraj, Yogesh K, Malavya Manivarnnan, and Dr Mahesh Kumar Sarva. "Exploring Youth Perspectives on Algorithmic Trading: Knowledge, Trust, and Adoption." International Journal of Scientific Research in Engineering and Management 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem43789.

Full text
Abstract:
The increasing adoption of algorithmic trading has significantly transformed financial markets by enabling automated decision-making and high-speed trade execution. While institutional investors and hedge funds have widely embraced this technology, its understanding and acceptance among young retail investors, particularly those aged 18 to 25, remain relatively unexplored. As digital trading platforms and fintech innovations continue to gain popularity, assessing the awareness, perception, and preferences of youth regarding algorithmic trading is crucial. This study aims to examine the extent to which young investors are familiar with algorithmic trading, their perceptions of its advantages and risks, and their willingness to adopt it. It quantitatively examines youth engagement with algorithmic trading, revealing low awareness, moderate trust, and key adoption factors such as transparency and cost. The study explores their level of awareness and primary sources of information, evaluates their trust in automated trading systems and concerns about fairness and risks, and identifies key factors influencing their decision to use algorithmic trading, such as cost, transparency, and control. The findings of this research offer valuable insights for fintech companies, trading platforms, financial educators, and policymakers, helping them design financial literacy programs and trading solutions tailored to the next generation of investors. As young traders continue to influence market trends, understanding their perspective on algorithmic trading will be essential in shaping the future of digital investing and automated financial systems.
Keywords: Algorithmic Trading, Youth Investors, Financial Markets, Trading Automation, Awareness, Perception, Investment Preferences, Fintech, Digital Trading Platforms, Market Risks, Trading Strategies, Retail Investors, Financial Literacy, Automated Trading Systems, Investment Behaviour.
APA, Harvard, Vancouver, ISO, and other styles
41

K, Dr. B. Lakshminarayana. "Ethical AI in the IT Industry: Addressing Bias, Transparency, and Accountability in Algorithmic Decision-Making." International Journal of Scientific Research in Engineering and Management 09, no. 03 (2025): 1–9. https://doi.org/10.55041/ijsrem42652.

Full text
Abstract:
The rapid integration of artificial intelligence (AI) into the IT industry has raised significant ethical concerns, particularly regarding bias, transparency, and accountability in algorithmic decision-making. While AI systems offer transformative potential, their deployment often perpetuates existing biases, lacks transparency, and fails to ensure accountability, leading to unintended societal consequences. This study examines the ethical challenges posed by AI in the IT sector, focusing on the mechanisms through which bias is embedded in algorithms, the opacity of decision-making processes, and the inadequacy of accountability frameworks. Through a systematic review of existing literature and case studies, the research identifies critical gaps in current approaches to ethical AI, including the lack of standardized methodologies for bias detection, insufficient regulatory oversight, and limited stakeholder engagement in AI development. The study employs a mixed-methods approach, combining qualitative analysis of industry practices with quantitative assessments of algorithmic outcomes, to provide a comprehensive understanding of these issues. Findings reveal that while efforts to address bias and improve transparency are underway, significant disparities persist in the implementation of ethical principles across organizations. The research highlights the need for robust, interdisciplinary frameworks that integrate technical, legal, and ethical perspectives to ensure fair and accountable AI systems. Recommendations include the development of industry-wide standards for bias mitigation, enhanced transparency through explainable AI techniques, and the establishment of independent oversight bodies to monitor algorithmic decision-making. By addressing these challenges, the IT industry can foster trust in AI technologies and ensure their alignment with societal values. This study contributes to the ongoing discourse on ethical AI by identifying actionable pathways for achieving fairness, transparency, and accountability in algorithmic systems.
Keywords: Ethical AI, algorithmic bias, transparency, accountability, IT industry, decision-making, bias mitigation, explainable AI.
APA, Harvard, Vancouver, ISO, and other styles
42

Sinclair, Sean R. "Adaptivity, Structure, and Objectives in Sequential Decision-Making." ACM SIGMETRICS Performance Evaluation Review 51, no. 3 (2024): 38–41. http://dx.doi.org/10.1145/3639830.3639846.

Full text
Abstract:
Sequential decision-making algorithms are ubiquitous in the design and optimization of large-scale systems due to their practical impact. The typical algorithmic paradigm ignores the sequential notion of these problems: use a historical dataset to predict future uncertainty and solve the resulting offline planning problem.
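The predict-then-plan paradigm this abstract critiques can be shown in a few lines. The toy newsvendor example below, with invented demand history and cost parameters, estimates future uncertainty from a historical dataset and then solves the resulting offline planning problem; it is our illustration, not the author's.

```python
import numpy as np

# Toy predict-then-plan pipeline: estimate demand uncertainty from history,
# then solve the resulting offline (newsvendor-style) stocking problem.
history = np.array([12, 15, 9, 14, 11, 13])             # past daily demand
unit_cost, unit_price = 3.0, 7.0
critical_ratio = (unit_price - unit_cost) / unit_price  # optimal service level
stock = np.quantile(history, critical_ratio)            # plan against history
print(f"order {stock:.1f} units")
```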
APA, Harvard, Vancouver, ISO, and other styles
43

Hou, Yoyo Tsung-Yu, and Malte F. Jung. "Who is the Expert? Reconciling Algorithm Aversion and Algorithm Appreciation in AI-Supported Decision Making." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (2021): 1–25. http://dx.doi.org/10.1145/3479864.

Full text
Abstract:
The increased use of algorithms to support decision making raises questions about whether people prefer algorithmic or human input when making decisions. Two streams of research on algorithm aversion and algorithm appreciation have yielded contradicting results. Our work attempts to reconcile these contradictory findings by focusing on the framings of humans and algorithms as a mechanism. In three decision making experiments, we created an algorithm appreciation result (Experiment 1) as well as an algorithm aversion result (Experiment 2) by manipulating only the description of the human agent and the algorithmic agent, and we demonstrated how different choices of framings can lead to inconsistent outcomes in previous studies (Experiment 3). We also showed that these results were mediated by the agent's perceived competence, i.e., expert power. The results provide insights into the divergence of the algorithm aversion and algorithm appreciation literature. We hope to shift the attention from these two contradicting phenomena to how we can better design the framing of algorithms. We also call the attention of the community to the theory of power sources, as it is a systemic framework that can open up new possibilities for designing algorithmic decision support systems.
APA, Harvard, Vancouver, ISO, and other styles
44

Phillips, Camryn, and Scott Ransom. "Algorithmic Pulsar Timing." Astronomical Journal 163, no. 2 (2022): 84. http://dx.doi.org/10.3847/1538-3881/ac403e.

Full text
Abstract:
Pulsar timing is a process of iteratively fitting pulse arrival times to constrain the spindown, astrometric, and possibly binary parameters of a pulsar, by enforcing integer numbers of pulsar rotations between the arrival times. Phase connection is the process of unambiguously determining those rotation numbers between the times of arrival while determining a pulsar timing solution. Pulsar timing currently requires a manual process of step-by-step phase connection performed by individuals. In an effort to quantify and streamline this process, we created the Algorithmic Pulsar Timer (APT), an algorithm that can accurately phase-connect and time isolated pulsars. Using the statistical F-test and knowledge of parameter uncertainties and covariances, the algorithm decides what new data to include in a fit, when to add additional timing parameters, and which model to attempt in subsequent iterations. Using these tools, the algorithm can phase-connect timing data that previously required substantial manual effort. We tested the algorithm on 100 simulated systems, with a 99% success rate. APT combines statistical tests and techniques with a logical decision-making process, very similar to the manual one used by pulsar astronomers for decades, and some computational brute force, to automate the often tricky process of isolated pulsar phase connection, setting the foundation for automated fitting of binary pulsar systems.
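APT's use of the F-test to decide when to add timing parameters rests on the standard nested-model comparison, sketched below. The chi-squared values, degrees of freedom, and significance threshold here are illustrative; the paper defines APT's actual decision criteria.

```python
from scipy import stats

def f_test_nested(chi2_simple, dof_simple, chi2_complex, dof_complex):
    """Compare nested least-squares fits (e.g., a timing model with and
    without an extra spin or astrometric parameter). Returns the p-value
    for the hypothesis that the extra parameters are unnecessary."""
    d_dof = dof_simple - dof_complex   # number of added parameters
    f_stat = ((chi2_simple - chi2_complex) / d_dof) / (chi2_complex / dof_complex)
    return stats.f.sf(f_stat, d_dof, dof_complex)

# e.g., adding one parameter drops chi^2 from 250 (dof=100) to 180 (dof=99)
p = f_test_nested(250.0, 100, 180.0, 99)
if p < 0.0027:  # a 3-sigma-style cutoff; APT's actual threshold may differ
    print("keep the extra parameter")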
APA, Harvard, Vancouver, ISO, and other styles
45

Pasupuleti, Murali Krishna. "Human-in-the-Loop AI: Enhancing Transparency and Accountability." International Journal of Academic and Industrial Research Innovations (IJAIRI) 05, no. 05 (2025): 574–85. https://doi.org/10.62311/nesx/rphcr18.

Full text
Abstract:
As artificial intelligence (AI) increasingly informs decisions in critical sectors such as healthcare, finance, and governance, concerns regarding algorithmic opacity and fairness have intensified. This research investigates the integration of Human-in-the-Loop (HITL) mechanisms as a strategy to enhance transparency, interpretability, and accountability in AI systems. Using real-world datasets from financial fraud detection and healthcare triage, we evaluate the comparative performance of machine learning models with and without human intervention. The study employs fairness metrics—such as disparate impact ratio—and interpretability tools like SHAP and LIME to quantify model transparency. Regression analysis further explores the influence of HITL elements on outcome variance and bias mitigation. Results indicate that HITL models significantly improve interpretability and reduce algorithmic bias, with only marginal reductions in predictive accuracy. Additionally, human evaluators corrected edge-case errors that purely algorithmic systems often misclassified. These findings suggest that strategically designed HITL frameworks can bridge the gap between high-performance AI and ethical, responsible decision-making. The paper concludes by proposing a scalable governance model for HITL integration in high-stakes AI applications.
Keywords: Human-in-the-Loop AI, Explainable AI, Algorithmic Transparency, Model Interpretability, Fairness Metrics, SHAP, LIME, Ethical AI, Bias Mitigation, Accountable AI Systems, Responsible Machine Learning, AI Governance
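The human-in-the-loop pattern this abstract evaluates can be reduced to a routing rule: automate confident cases and defer ambiguous ones to a reviewer. The sketch below is a generic illustration with made-up confidence bands, not the paper's pipeline; in practice the band edges would be tuned against error costs and reviewer load.

```python
def hitl_decide(model_score, threshold_low=0.35, threshold_high=0.65):
    """Human-in-the-loop routing sketch: auto-decide confident cases,
    defer ambiguous ones to a human reviewer. Band edges are illustrative."""
    if model_score >= threshold_high:
        return "auto-approve"
    if model_score <= threshold_low:
        return "auto-deny"
    return "defer-to-human"

queue = [hitl_decide(s) for s in (0.9, 0.5, 0.1)]
# ['auto-approve', 'defer-to-human', 'auto-deny']
```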
APA, Harvard, Vancouver, ISO, and other styles
46

Monachou, Faidra, and Ana-Andreea Stoica. "Fairness and equity in resource allocation and decision-making." ACM SIGecom Exchanges 20, no. 1 (2022): 64–66. http://dx.doi.org/10.1145/3572885.3572891.

Full text
Abstract:
Fairness and equity considerations in the allocation of social goods and the development of algorithmic systems pose new challenges for decision-makers and interesting questions for the EC community. We overview a list of papers that point towards emerging directions in this research area.
APA, Harvard, Vancouver, ISO, and other styles
47

Hunt, Robert, and Fenwick McKelvey. "Algorithmic Regulation in Media and Cultural Policy: A Framework to Evaluate Barriers to Accountability." Journal of Information Policy 9, no. 1 (2019): 307–35. http://dx.doi.org/10.5325/jinfopoli.9.1.0307.

Full text
Abstract:
The word “algorithm” is best understood as a generic term for automated decision-making. Algorithms can be coded by humans or they can become self-taught through machine learning. Cultural goods and news increasingly pass through information intermediaries known as platforms that rely on algorithms to filter, rank, sort, classify, and promote information. Algorithmic content recommendation acts as an important and increasingly contentious gatekeeper. Numerous controversies around the nature of content being recommended—from disturbing children's videos to conspiracies and political misinformation—have undermined confidence in the neutrality of these systems. Amid a generational challenge for media policy, algorithmic accountability has emerged as one area of regulatory innovation. Algorithmic accountability seeks to explain automated decision-making, ultimately locating responsibility and improving the overall system. This article focuses on the technical, systemic issues related to algorithmic accountability, highlighting that deployment matters as much as development when explaining algorithmic outcomes. After outlining the challenges faced by those seeking to enact algorithmic accountability, we conclude by comparing some emerging approaches to addressing cultural discoverability by different international policymakers.
APA, Harvard, Vancouver, ISO, and other styles
48

Rigopoulos, Georgios. "Weighted OWA (Ordered Weighted Averaging Operator) Preference Aggregation for Group Multicriteria Decisions." International Journal of Computational and Applied Mathematics & Computer Science 3 (May 8, 2023): 10–17. http://dx.doi.org/10.37394/232028.2023.3.2.

Full text
Abstract:
Group decision making is an integral part of operations and management functions in almost every business domain, with substantial applications in finance and economics. In parallel to human decision makers, software agents operate in business systems and environments, where they collaborate, compete, and perform algorithmic decision-making tasks as well. In both settings, information aggregation of decision problem parameters and agent preferences is a necessary step to generate a group decision outcome. Although plenty of information aggregation approaches exist, the overcomplexity of the underlying aggregation operation in most of them is a drawback, especially for human-based group decisions in practice. In this work we introduce an aggregation method for the group decision setting, based on the Weighted Ordered Weighted Averaging operator (WOWA). The aggregation is applied to decision maker preferences, following the majority concept to generate a unique set of preferences as input for the decision algorithm. We present the theoretical construction of the model and an application to a group multicriteria assignment decision problem, along with detailed numerical results. The proposed method contributes to the field, as it offers a novel approach that is simple and intuitive and avoids overcomplexity during the group decision process. The method can also be easily deployed into artificial environments and algorithmic decision-making mechanisms.
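The operators at the heart of the paper are compact enough to sketch. Below are plain OWA and a WOWA built with the common piecewise-linear interpolation of cumulative OWA weights (Torra's construction); the example weights are invented, and the paper's exact formulation may differ in its details.

```python
import numpy as np

def owa(values, w):
    """Ordered Weighted Averaging: weights attach to rank positions,
    not to specific sources, after sorting values in descending order."""
    return np.dot(np.sort(values)[::-1], w)

def wowa(values, p, w):
    """Weighted OWA (Torra): combines source-importance weights p with
    OWA rank weights w via piecewise-linear interpolation of the
    cumulative OWA weights. Reduces to OWA when p is uniform."""
    order = np.argsort(values)[::-1]          # sort sources by value, descending
    p_sorted = np.asarray(p)[order]
    grid = np.linspace(0, 1, len(w) + 1)      # nodes (i/n, sum_{j<=i} w_j)
    cum_w = np.concatenate(([0.0], np.cumsum(w)))
    cum_p = np.concatenate(([0.0], np.cumsum(p_sorted)))
    omega = np.diff(np.interp(cum_p, grid, cum_w))
    return np.dot(omega, np.asarray(values)[order])

prefs = [0.8, 0.6, 0.3]   # three decision makers' scores for one option
w = [0.5, 0.3, 0.2]       # majority-leaning rank weights
p = [0.2, 0.5, 0.3]       # decision-maker importance
print(owa(prefs, w), wowa(prefs, p, w))
```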
APA, Harvard, Vancouver, ISO, and other styles
49

Talapina, Elvira V. "Administrative Algorithmic Solutions." Administrative law and procedure, October 10, 2024, 39–42. http://dx.doi.org/10.18572/2071-1166-2024-10-39-42.

Full text
Abstract:
Examples of algorithms being involved in administrative practice are multiplying. However, the process of their use remains unsettled. For the legal evaluation of decisions made by algorithms and for subsequent legal regulation, it is important to distinguish between automated decision-making systems and machine learning systems. But even if the prospects for the use of algorithms by both courts and the executive branch seem obvious, such use should certainly take into account possible risks and disadvantages. This is especially true for machine learning algorithms. First, decision-making about humans inevitably leads to the problem of scoring, which has not only an ethical but also a legal dimension. Second, there is the problem of algorithm errors. A statistical decision, even one based on a working algorithm, can be wrong. Statistics using probability calculus, even when applied to a mass of cases, leads to probable rather than certain conclusions. This means that the personalized decision of a machine learning algorithm cannot be final, but must be “sanctioned” by a human. Hence the inevitability of human control.
APA, Harvard, Vancouver, ISO, and other styles
50

Turek, Matt. "DARPA’s In the Moment (ITM) Program: Human-Aligned Algorithms for Making Difficult Battlefield Triage Decisions." Disaster Medicine and Public Health Preparedness 18 (2024). http://dx.doi.org/10.1017/dmp.2024.196.

Full text
Abstract:
DARPA’s In the Moment (ITM) program seeks to develop algorithmic decision makers for battlefield triage that are aligned with key decision-making attributes of trusted humans. ITM also seeks to develop a quantitative alignment score (based on the decision-making attributes) as a method for establishing appropriate trust in algorithmic decision-making systems. ITM is interested in a specific notion of trust: the willingness of a human to delegate difficult decision-making to an algorithmic system. While the AI community often identifies technical performance characteristics (e.g., error rate) as trust factors for autonomous systems, ITM focuses on human attributes and characteristics (e.g., risk tolerance, rule following, or other personality characteristics; subject matter expertise; and human values, to name a few) that could be encoded into algorithmic systems. This presentation will provide an overview of the ITM program, including the quantitative alignment framework that will produce an alignment score between the human trustor and algorithmic trustee, as well as the evaluation planned to assess the contribution of alignment to the willingness to delegate.
Learning Objectives: Define how difficult decisions are understood in the context of the In the Moment program. Describe the role of trust and decision-maker alignment for the In the Moment program. Discuss the elements of the In the Moment evaluation, including the role of human delegation of difficult decisions.
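The abstract does not specify how the alignment score is computed, so the following is purely a toy analogue: it encodes trustor and trustee as vectors of decision-making attributes (risk tolerance, rule following, and so on) and scores alignment as one minus a normalized distance. Every attribute name and number here is a hypothetical stand-in, not ITM's framework.

```python
import math

# Hypothetical attribute profiles on a 0-1 scale; ITM's real alignment
# framework is not described in the abstract, so this is only a toy analogue.
ATTRIBUTES = ["risk_tolerance", "rule_following", "urgency_weighting"]

def alignment_score(human, algorithm):
    """Toy alignment score: 1 minus the normalized Euclidean distance
    between a human's and an algorithm's decision-attribute profiles."""
    dist = math.sqrt(sum((human[a] - algorithm[a]) ** 2 for a in ATTRIBUTES))
    return 1.0 - dist / math.sqrt(len(ATTRIBUTES))

medic = {"risk_tolerance": 0.4, "rule_following": 0.9, "urgency_weighting": 0.7}
triage_ai = {"risk_tolerance": 0.5, "rule_following": 0.8, "urgency_weighting": 0.6}
print(alignment_score(medic, triage_ai))  # closer to 1.0 = better aligned
```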
APA, Harvard, Vancouver, ISO, and other styles