To view the other types of publications on this topic, follow this link: Algorithmic decision systems.

Journal articles on the topic "Algorithmic decision systems"


Consult the top 50 journal articles for research on the topic "Algorithmic decision systems".

Next to every entry in the list there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant details are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Waldman, Ari, and Kirsten Martin. "Governing algorithmic decisions: The role of decision importance and governance on perceived legitimacy of algorithmic decisions." Big Data & Society 9, no. 1 (2022): 205395172211004. http://dx.doi.org/10.1177/20539517221100449.

Annotation:
The algorithmic accountability literature to date has primarily focused on procedural tools to govern automated decision-making systems. That prescriptive literature elides a fundamentally empirical question: whether and under what circumstances, if any, is the use of algorithmic systems to make public policy decisions perceived as legitimate? The present study begins to answer this question. Using factorial vignette survey methodology, we explore the relative importance of the type of decision, the procedural governance, the input data used, and outcome errors on perceptions of the legitimacy
2

Dawson, April. "Algorithmic Adjudication and Constitutional AI—The Promise of A Better AI Decision Making Future?" SMU Science and Technology Law Review 27, no. 1 (2024): 11. http://dx.doi.org/10.25172/smustlr.27.1.3.

Annotation:
Algorithmic governance is when algorithms, often in the form of AI, make decisions, predict outcomes, and manage resources in various aspects of governance. This approach can be applied in areas like public administration, legal systems, policy-making, and urban planning. Algorithmic adjudication involves using AI to assist in or decide legal disputes. This often includes the analysis of legal documents, case precedents, and relevant laws to provide recommendations or even final decisions. The AI models typically used in these emerging decision-making systems use traditionally trained AI syste
3

Herzog, Lisa. "Algorithmisches Entscheiden, Ambiguitätstoleranz und die Frage nach dem Sinn." Deutsche Zeitschrift für Philosophie 69, no. 2 (2021): 197–213. http://dx.doi.org/10.1515/dzph-2021-0016.

Annotation:
Abstract In more and more contexts, human decision-making is replaced by algorithmic decision-making. While promising to deliver efficient and objective decisions, algorithmic decision systems have specific weaknesses, some of which are particularly dangerous if data are collected and processed by profit-oriented companies. In this paper, I focus on two problems that are at the root of the logic of algorithmic decision-making: (1) (in)tolerance for ambiguity, and (2) instantiations of Campbell’s law, i. e. of indicators that are used for “social decision-making” being subject to “corruption pr
4

Kleinberg, Jon, and Manish Raghavan. "Algorithmic monoculture and social welfare." Proceedings of the National Academy of Sciences 118, no. 22 (2021): e2018340118. http://dx.doi.org/10.1073/pnas.2018340118.

Annotation:
As algorithms are increasingly applied to screen applicants for high-stakes decisions in employment, lending, and other domains, concerns have been raised about the effects of algorithmic monoculture, in which many decision-makers all rely on the same algorithm. This concern invokes analogies to agriculture, where a monocultural system runs the risk of severe harm from unexpected shocks. Here, we show that the dangers of algorithmic monoculture run much deeper, in that monocultural convergence on a single algorithm by a group of decision-making agents, even when the algorithm is more accurate
5

Duran, Sergi Gálvez. "Opening the Black-Box in Private-Law Employment Relationships: A Critical Review of the Newly Implemented Spanish Workers’ Council’s Right to Access Algorithms." Global Privacy Law Review 4, no. 1 (2023): 17–30. http://dx.doi.org/10.54648/gplr2023003.

Annotation:
Article 22 of the General Data Protection Regulation (GDPR) provides individuals with the right not to be subject to automated decisions. In this article, the author questions the extent to which the legal framework for automated decision-making in the GDPR is attuned to the employment context. More specifically, the author argues that an individual’s right may not be the most appropriate approach to contesting artificial intelligence (AI) based decisions in situations involving dependency contracts, such as employment relationships. Furthermore, Article 22 GDPR derogations rarely apply in the
6

Morchhale, Yogesh. "Ethical Considerations in Artificial Intelligence: Addressing Bias and Fairness in Algorithmic Decision-Making." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem31693.

Annotation:
The expanding use of artificial intelligence (AI) in decision-making across a range of industries has given rise to serious ethical questions about prejudice and justice. This study looks at the moral ramifications of using AI algorithms in decision-making and looks at methods to combat prejudice and advance justice. The study investigates the underlying causes of prejudice in AI systems, the effects of biased algorithms on people and society, and the moral obligations of stakeholders in reducing bias, drawing on prior research and real-world examples. The study also addresses new frameworks a
7

Starke, Christopher, Janine Baleis, Birte Keller, and Frank Marcinkowski. "Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature." Big Data & Society 9, no. 2 (2022): 205395172211151. http://dx.doi.org/10.1177/20539517221115189.

Annotation:
Algorithmic decision-making increasingly shapes people's daily lives. Given that such autonomous systems can cause severe harm to individuals and social groups, fairness concerns have arisen. A human-centric approach demanded by scholars and policymakers requires considering people's fairness perceptions when designing and implementing algorithmic decision-making. We provide a comprehensive, systematic literature review synthesizing the existing empirical insights on perceptions of algorithmic fairness from 58 empirical studies spanning multiple domains and scientific disciplines. Through thor
8

Hoeppner, Sven, and Martin Samek. "Procedural Fairness as Stepping Stone for Successful Implementation of Algorithmic Decision-Making in Public Administration: Review and Outlook." AUC IURIDICA 70, no. 2 (2024): 85–99. http://dx.doi.org/10.14712/23366478.2024.24.

Annotation:
Algorithmic decision-making (ADM) is becoming more and more prevalent in everyday life. Due to their promise of producing faster, better, and less biased decisions, automated and data-driven processes also receive increasing attention in many different administrative settings. However, as a result of human mistakes ADM also poses the threat of producing unfair outcomes. Looming algorithmic discrimination can undermine the legitimacy of administrative decision-making. While lawyers and lawmakers face the age-old question of regulation, many decision-makers tasked with designing ADM for and impl
9

Saxena, Devansh, Karla Badillo-Urquiola, Pamela J. Wisniewski, and Shion Guha. "A Framework of High-Stakes Algorithmic Decision-Making for the Public Sector Developed through a Case Study of Child-Welfare." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (2021): 1–41. http://dx.doi.org/10.1145/3476089.

Annotation:
Algorithms have permeated throughout civil government and society, where they are being used to make high-stakes decisions about human lives. In this paper, we first develop a cohesive framework of algorithmic decision-making adapted for the public sector (ADMAPS) that reflects the complex socio-technical interactions between human discretion, bureaucratic processes, and algorithmic decision-making by synthesizing disparate bodies of work in the fields of Human-Computer Interaction (HCI), Science and Technology Studies (STS), and Public Administration (PA). We then applied the ADMAPS framework
10

Zerilli, John, Alistair Knott, James Maclaurin, and Colin Gavaghan. "Algorithmic Decision-Making and the Control Problem." Minds and Machines 29, no. 4 (2019): 555–78. http://dx.doi.org/10.1007/s11023-019-09513-7.

Annotation:
Abstract: The danger of human operators devolving responsibility to machines and failing to detect cases where they fail has been recognised for many years by industrial psychologists and engineers studying the human operators of complex machines. We call it “the control problem”, understood as the tendency of the human within a human–machine control loop to become complacent, over-reliant or unduly diffident when faced with the outputs of a reliable autonomous system. While the control problem has been investigated for some time, up to this point its manifestation in machine learning contexts h
11

Romanova, Anna. "Fundamentals of Modeling Algorithmic Decisions in Corporate Management." Artificial societies 19, no. 3 (2024): 0. http://dx.doi.org/10.18254/s207751800032184-1.

Annotation:
Since 2014, several artificial intelligence systems have been officially appointed to management positions in international companies. Thus, it can be said that a paradigm shift in management decision-making is currently taking place: from a situation where artificial intelligence simply serves as a tool to support directors or board committees, we are moving to a situation where artificial intelligence controls the decision-making process. One of the key questions now is whether it is worthwhile to deal with law as a computation at all. High-quality computational law is the most important con
12

Maw, Maw, Su-Cheng Haw, and Kok-Why Ng. "Perspectives of Defining Algorithmic Fairness in Customer-oriented Applications: A Systematic Literature Review." International Journal on Advanced Science, Engineering and Information Technology 14, no. 5 (2024): 1504–13. http://dx.doi.org/10.18517/ijaseit.14.5.11676.

Annotation:
Automated decision-making systems are massively engaged in different types of businesses, including customer-oriented sectors, and bring countless achievements in persuading customers with more personalized experiences. However, it was observed that the decisions made by the algorithms could bring unfairness to a person or a group of people, according to recent studies. Thus, algorithmic fairness has become a spotlight research area, and defining a concrete version of fairness notions has also become significant research. In existing literature, there are more than 21 definitions of algorithmi
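The annotation above refers to the many competing definitions of algorithmic fairness. As a purely illustrative sketch that is not taken from the paper, the snippet below computes two of the most widely used group-fairness notions, demographic parity and equal opportunity, on hypothetical predictions and group labels:

```python
# Illustrative only: toy labels, predictions, and protected-group membership.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

def demographic_parity_diff(y_pred, group):
    """Gap in positive-prediction rates between the two groups."""
    return abs(y_pred[group == "a"].mean() - y_pred[group == "b"].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    """Gap in true-positive rates (recall) between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr("a") - tpr("b"))

print(demographic_parity_diff(y_pred, group))        # 0.0
print(equal_opportunity_diff(y_true, y_pred, group)) # ~0.33
```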
13

Žliobaitė, Indrė. "Measuring discrimination in algorithmic decision making." Data Mining and Knowledge Discovery 31, no. 4 (2017): 1060–89. http://dx.doi.org/10.1007/s10618-017-0506-1.

14

Kambhampati, Aditya. "Mitigating bias in financial decision systems through responsible machine learning." World Journal of Advanced Engineering Technology and Sciences 15, no. 2 (2025): 1415–21. https://doi.org/10.30574/wjaets.2025.15.2.0687.

Annotation:
Algorithmic bias in financial decision systems perpetuates and sometimes amplifies societal inequities, affecting millions of consumers through discriminatory lending practices, inequitable pricing, and exclusionary fraud detection. Minority borrowers face interest rate premiums that collectively cost communities hundreds of millions of dollars annually, while technological barriers to financial inclusion affect tens of millions of "credit invisible" Americans. This article provides a comprehensive framework for detecting, measuring, and mitigating algorithmic bias across the machine learning
15

Cheng, Lingwei, and Alexandra Chouldechova. "Heterogeneity in Algorithm-Assisted Decision-Making: A Case Study in Child Abuse Hotline Screening." Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (2022): 1–33. http://dx.doi.org/10.1145/3555101.

Annotation:
Algorithmic risk assessment tools are now commonplace in public sector domains such as criminal justice and human services. These tools are intended to aid decision makers in systematically using rich and complex data captured in administrative systems. In this study we investigate sources of heterogeneity in the alignment between worker decisions and algorithmic risk scores in the context of a real world child abuse hotline screening use case. Specifically, we focus on heterogeneity related to worker experience. We find that senior workers are far more likely to screen in referrals for invest
16

Doshi, Paras. "From Insight to Impact: The Evolution of Data-Driven Decision Making in the Age of AI." International Journal of Artificial Intelligence & Applications 16, no. 3 (2025): 83–92. https://doi.org/10.5121/ijaia.2025.16306.

Annotation:
This paper presents a comprehensive critical review of contemporary technical solutions and approaches to artificial intelligence-based decision making systems in executive strategy scenarios. Drawing on systematic review of deployed technical solutions, algorithmic approaches, and empirical studies, this survey classifies and delineates the current decision support technology landscape and outlines future directions. Drawing on extensive review of current research and business application, the paper explains how AI technologies are redefining strategic decision frameworks in various industrie
17

Yáñez, J., J. Montero, and D. Gómez. "An Algorithmic Approach to Preference Representation." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 16, supp02 (2008): 1–18. http://dx.doi.org/10.1142/s0218488508005455.

Annotation:
In a previous paper, the authors proposed an alternative approach to classical dimension theory, based upon a general representation of strict preferences not being restricted to partial order sets. Without any relevant restriction, the proposed approach was conceived as a potential powerful tool for decision making problems where basic information has been modeled by means of valued binary preference relations. In fact, assuming that each decision maker is able to consistently manage intensity values for preferences is a strong assumption even when there are few alternatives being involved (i
18

Erlei, Alexander, Franck Nekdem, Lukas Meub, Avishek Anand, and Ujwal Gadiraju. "Impact of Algorithmic Decision Making on Human Behavior: Evidence from Ultimatum Bargaining." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 8 (October 1, 2020): 43–52. http://dx.doi.org/10.1609/hcomp.v8i1.7462.

Annotation:
Recent advances in machine learning have led to the widespread adoption of ML models for decision support systems. However, little is known about how the introduction of such systems affects the behavior of human stakeholders. This pertains both to the people using the system, as well as those who are affected by its decisions. To address this knowledge gap, we present a series of ultimatum bargaining game experiments comprising 1178 participants. We find that users are willing to use a black-box decision support system and thereby make better decisions. This translates into higher levels of c
19

Sharma, Shefali, Priyanka Sharma, and Rohit Sharma. "The Influence of Algorithmic Management on Employees Perceptions of Organisational Justice – A Conceptual Paper." Journal of Technology Management for Growing Economies 15, no. 1 (2024): 31–40. https://doi.org/10.15415/jtmge/2024.151004.

Annotation:
Background: The use of algorithms in organizations has become increasingly common, with many companies transitioning managerial control from humans to algorithms. While algorithmic management promises significant efficiency gains, it overlooks an important factor: employees’ perception of fairness towards their workplace. Employees pay close attention to how they perceive justice in the workplace; their behavior is shaped by these perceptions. Purpose: This study aims to explore how algorithmic management systems can be designed to balance efficiency with transparency and fairness. So, to
20

Raja Kumar, Jambi Ratna, Aarti Kalnawat, Avinash M. Pawar, Varsha D. Jadhav, P. Srilatha, and Vinit Khetani. "Transparency in Algorithmic Decision-making: Interpretable Models for Ethical Accountability." E3S Web of Conferences 491 (2024): 02041. http://dx.doi.org/10.1051/e3sconf/202449102041.

Annotation:
The spread of algorithmic decision-making systems across a variety of fields has raised concerns regarding their opacity and potential ethical ramifications. By promoting the use of interpretable machine learning models, this research addresses the critical requirement for openness and moral responsibility in these systems. Interpretable models provide a transparent and intelligible depiction of how decisions are made, as opposed to complicated black-box algorithms. Users and stakeholders need this openness in order to understand, verify, and hold accountable the decisions made by thes
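As a minimal sketch of what the annotation above means by an interpretable model (not code from the paper), a shallow decision tree exposes its full decision logic as human-readable rules, in contrast to a black-box model; the data and feature names here are synthetic:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data; any tabular decision task would do.
X, y = make_classification(n_samples=300, n_features=4, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The complete decision logic can be printed, inspected, and audited.
print(export_text(tree, feature_names=[f"feature_{i}" for i in range(4)]))
```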
21

Atmaja, Subhanjaya Angga. "Ethical Considerations in Algorithmic Decision-making: Towards Fair and Transparent AI Systems." Riwayat: Educational Journal of History and Humanities 8, no. 1 (2025): 620–27. https://doi.org/10.24815/jr.v8i1.44112.

Annotation:
Artificial intelligence (AI)-based algorithmic decision-making is a major concern in the digital age due to its potential to improve efficiency in various sectors, including healthcare, law, and finance. However, its implementation poses significant ethical challenges, such as bias in data and a lack of transparency that can affect fairness and public trust. This research aims to explore ethical considerations in algorithmic decision-making with a focus on fairness and transparency, identify key challenges, and provide policy recommendations to improve the accountability of AI systems. The res
22

Mazijn, Carmen, Carina Prunkl, Andres Algaba, Jan Danckaert, and Vincent Ginis. "LUCID: Exposing Algorithmic Bias through Inverse Design." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 14391–99. http://dx.doi.org/10.1609/aaai.v37i12.26683.

Annotation:
AI systems can create, propagate, support, and automate bias in decision-making processes. To mitigate biased decisions, we both need to understand the origin of the bias and define what it means for an algorithm to make fair decisions. Most group fairness notions assess a model's equality of outcome by computing statistical metrics on the outputs. We argue that these output metrics encounter intrinsic obstacles and present a complementary approach that aligns with the increasing focus on equality of treatment. By Locating Unfairness through Canonical Inverse Design (LUCID), we generate a cano
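The following is a loose, hedged sketch of the general idea of inverse design hinted at in the annotation, namely optimizing inputs toward a desired model output to reveal which input profiles the model treats as canonical for that outcome; it is not the authors' LUCID implementation, and the two-feature network is a hypothetical stand-in:

```python
import torch

torch.manual_seed(0)
# Hypothetical black-box scorer with two input features.
model = torch.nn.Sequential(torch.nn.Linear(2, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1))

x = torch.randn(16, 2, requires_grad=True)  # candidate inputs to optimize
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    loss = -model(x).mean()  # push inputs toward the "positive decision" output
    loss.backward()
    opt.step()

# Mean optimized input: a rough picture of what the model rewards.
print(x.detach().mean(dim=0))
```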
23

Schneider, Jakob. "Algorithmic Inequity in Justice: Unpacking the Societal Impact of AI in Judicial Decision-Making." International Journal of Advanced Artificial Intelligence Research 2, no. 1 (2025): 7–12. https://doi.org/10.55640/ijaair-v02i01-02.

Annotation:
The integration of Artificial Intelligence (AI) in judicial decision-making processes has introduced both opportunities and significant concerns, particularly regarding fairness and transparency. This paper critically examines the phenomenon of algorithmic inequity within legal systems, focusing on how biased data, opaque algorithms, and lack of accountability can perpetuate or even amplify existing social injustices. Through interdisciplinary analysis, the study explores the structural factors contributing to algorithmic bias, its implications for marginalized communities, and the ethical dil
24

Pasupuleti, Murali Krishna. "Auditing Black-Box AI Systems Using Counterfactual Explanations." International Journal of Academic and Industrial Research Innovations (IJAIRI) 05, no. 05 (2025): 598–608. https://doi.org/10.62311/nesx/rphcr20.

Annotation:
Abstract: The widespread deployment of black-box artificial intelligence (AI) systems in high-stakes domains such as healthcare, finance, and criminal justice has intensified the demand for transparent and accountable decision-making. This study investigates the utility of counterfactual explanations as a method for auditing opaque AI models. By answering the question of what minimal change in input would alter a model’s output, counterfactuals offer a practical and interpretable means of understanding algorithmic decisions. The methodology employs Random Forest and Neural Network classifiers
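To make the idea of a counterfactual explanation concrete, here is a naive, hedged sketch (not the paper's method): perturb one feature at a time until a black-box classifier's decision flips, and keep the smallest such change. The data and model are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

def simple_counterfactual(model, x, step=0.1, max_steps=50):
    """Smallest single-feature perturbation found that flips the prediction."""
    original = model.predict(x.reshape(1, -1))[0]
    best = None
    for j in range(x.size):
        for direction in (+1, -1):
            for k in range(1, max_steps + 1):
                x_cf = x.copy()
                x_cf[j] += direction * step * k
                if model.predict(x_cf.reshape(1, -1))[0] != original:
                    if best is None or step * k < best[1]:
                        best = (x_cf, step * k, j)
                    break
    return best  # (counterfactual input, distance, feature index) or None

print(simple_counterfactual(model, X[0]))
```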
25

Green, Ben, and Yiling Chen. "Algorithm-in-the-Loop Decision Making." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 09 (2020): 13663–64. http://dx.doi.org/10.1609/aaai.v34i09.7115.

Annotation:
We introduce a new framework for conceiving of and studying algorithms that are deployed to aid human decision making: “algorithm-in-the-loop” systems. The algorithm-in-the-loop framework centers human decision making, providing a more precise lens for studying the social impacts of algorithmic decision making aids. We report on two experiments that evaluate algorithm-in-the-loop decision making and find significant limits to these systems.
26

Yella, Sravan. "Algorithmic Campaign Orchestration: A Framework for Automated Multi-Channel Marketing Decisions." Journal of Computer Science and Technology Studies 7, no. 2 (2025): 323–31. https://doi.org/10.32996/jcsts.2025.7.2.33.

Annotation:
This article examines the paradigm shift from traditional rule-based marketing automation to continuous experience optimization enabled by AI-driven decision engines. The article presents an architectural framework for real-time campaign orchestration systems that leverage predictive analytics, reinforcement learning, and natural language processing to dynamically personalize customer interactions across channels. Through multiple case studies across different industry sectors, the article demonstrates how these systems process multi-source data streams to make intelligent decisions in millise
27

Yella, Sravan. "Algorithmic Campaign Orchestration: A Framework for Automated Multi-Channel Marketing Decisions." Journal of Computer Science and Technology Studies 7, no. 2 (2025): 165–73. https://doi.org/10.32996/jcsts.2025.7.2.15.

Annotation:
This article examines the paradigm shift from traditional rule-based marketing automation to continuous experience optimization enabled by AI-driven decision engines. The article presents an architectural framework for real-time campaign orchestration systems that leverage predictive analytics, reinforcement learning, and natural language processing to dynamically personalize customer interactions across channels. Through multiple case studies across different industry sectors, the article demonstrates how these systems process multi-source data streams to make intelligent decisions in millise
28

Rubel, Alan, Clinton Castro, and Adam Pham. "Algorithms, Agency, and Respect for Persons." Social Theory and Practice 46, no. 3 (2020): 547–72. http://dx.doi.org/10.5840/soctheorpract202062497.

Annotation:
Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that in understanding the moral salience of algorithmic systems it is essential to understand the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline four key ways in which issues of agency, autonomy, and respect for persons can conflict with algorithmi
29

Nizar, Qashta, Abdalla Kheiri Murtada, and Issa Aljaradat Dorgham. "Algorithmic Bias in Artificial Intelligence Systems and its Legal Dimensions." Journal of Asian American Studies 27, no. 3 (2025): 520–36. https://doi.org/10.5281/zenodo.15074440.

Annotation:
Artificial intelligence algorithms form the foundation of modern intelligent systems, enabling machines to learn, infer, and make decisions independently. These algorithms have achieved significant progress due to the abundance of data and increased computational power, enabling them to solve many complex problems. Despite these benefits, such systems face ethical and legal challenges related to algorithmic bias and its impact on marginalized groups. This necessitates strict legislative interventions. For these reasons and others, this research focuses on the legal and ethical challenges associated wi
30

Engelmann, Alana. "Algorithmic transparency as a fundamental right in the democratic rule of law." Brazilian Journal of Law, Technology and Innovation 1, no. 2 (2023): 169–88. http://dx.doi.org/10.59224/bjlti.v1i2.169-188.

Annotation:
This article scrutinizes the escalating apprehensions surrounding algorithmic transparency, positing it as a pivotal facet for ethics and accountability in the development and deployment of artificial intelligence (AI) systems. By delving into legislative and regulatory initiatives across various jurisdictions, the article discerns how different countries and regions endeavor to institute guidelines fostering ethical and responsible AI systems. Within the United States, both the US Algorithmic Accountability Act of 2022 and The European Artificial Intelligence Act share a common objective of e
31

Neyyila, Saibabu, Chaitanya Neyyala, and Saumendra Das. "Leveraging AI and behavioral economics to enhance decision-making." World Journal of Advanced Research and Reviews 26, no. 2 (2025): 1721–30. https://doi.org/10.30574/wjarr.2025.26.2.1581.

Annotation:
This research examines the integration of artificial intelligence (AI) within behavioral economics, specifically its impact on decision-making processes. Behavioral economics explores the psychological, cognitive, and emotional factors that influence economic choices, and AI offers innovative tools for analyzing, predicting, and shaping these decisions. The paper highlights recent advancements in AI technologies, such as machine learning, natural language processing, and predictive analytics, and their role in deepening our understanding of human economic behavior. It also investigates how AI-
32

Sankaran, Ganesh, Marco A. Palomino, Martin Knahl, and Guido Siestrup. "A Modeling Approach for Measuring the Performance of a Human-AI Collaborative Process." Applied Sciences 12, no. 22 (2022): 11642. http://dx.doi.org/10.3390/app122211642.

Annotation:
Despite the unabated growth of algorithmic decision-making in organizations, there is a growing consensus that numerous situations will continue to require humans in the loop. However, the blending of a formal machine and bounded human rationality also amplifies the risk of what is known as local rationality. Therefore, it is crucial, especially in a data-abundant environment that characterizes algorithmic decision-making, to devise means to assess performance holistically. In this paper, we propose a simulation-based model to address the current lack of research on quantifying algorithmic int
33

Zweig, Katharina A., Georg Wenzelburger, and Tobias D. Krafft. "On Chances and Risks of Security Related Algorithmic Decision Making Systems." European Journal for Security Research 3, no. 2 (2018): 181–203. http://dx.doi.org/10.1007/s41125-018-0031-2.

34

Cundiff, W. E. "Database organisation: An algorithmic framework for decision support systems in APL." Decision Support Systems 5, no. 1 (1989): 57–64. http://dx.doi.org/10.1016/0167-9236(89)90028-6.

35

Cibaca, Khandelwal. "Bias Mitigation Strategies in AI Models for Financial Data." INTERNATIONAL JOURNAL OF INNOVATIVE RESEARCH AND CREATIVE TECHNOLOGY 11, no. 2 (2025): 1–9. https://doi.org/10.5281/zenodo.15318708.

Annotation:
Artificial intelligence (AI) has become integral to financial systems, enabling automation in credit scoring, fraud detection, and investment management. However, the presence of bias in AI models can propagate systemic inequities, leading to ethical, operational, and regulatory challenges. This paper examines strategies to mitigate bias in AI systems applied to financial data. It discusses challenges associated with biased datasets, feature selection, and algorithmic decisions, alongside practical mitigation approaches such as data balancing, algorithmic fairness techniques, and post-processi
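As one concrete, hedged example of the data-balancing step mentioned in the annotation (not taken from the paper), the sketch below oversamples the under-represented group so that each group contributes equally to training; the DataFrame and column names are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({
    "income":   [30, 45, 52, 61, 38, 70, 41, 55],
    "approved": [0, 1, 1, 1, 0, 1, 0, 1],
    "group":    ["a", "a", "a", "a", "a", "a", "b", "b"],
})

def balance_groups(df, group_col="group", random_state=0):
    """Oversample every group up to the size of the largest one."""
    target = df[group_col].value_counts().max()
    parts = [
        g.sample(n=target, replace=True, random_state=random_state)
        for _, g in df.groupby(group_col)
    ]
    return pd.concat(parts, ignore_index=True)

balanced = balance_groups(df)
print(balanced["group"].value_counts())  # both groups now equally represented
```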
36

Veldurthi, Anil Kumar. "Ethical Considerations in AI-Driven Financial Decision-Making." European Journal of Computer Science and Information Technology 13, no. 31 (2025): 49–64. https://doi.org/10.37745/ejcsit.2013/vol13n314964.

Annotation:
This article examines the ethical dimensions of artificial intelligence in financial decision-making systems. As AI increasingly permeates critical functions across the financial services industry—from credit underwriting and fraud detection to algorithmic trading and personalized financial advice—it introduces profound ethical challenges that demand careful examination. It explores how algorithmic bias manifests through training data, feature selection, and algorithmic design, creating disparate outcomes for marginalized communities despite the absence of explicit discriminatory intent. The a
37

Boateng, Obed, and Bright Boateng. "Algorithmic bias in educational systems: Examining the impact of AI-driven decision making in modern education." World Journal of Advanced Research and Reviews 25, no. 1 (2025): 2012–17. https://doi.org/10.30574/wjarr.2025.25.1.0253.

Annotation:
The increasing integration of artificial intelligence and algorithmic systems in educational settings has raised critical concerns about their impact on educational equity. This paper examines the manifestation and implications of algorithmic bias across various educational domains, including admissions processes, assessment systems, and learning management platforms. Through analysis of current research and studies, we investigate how these biases can perpetuate or exacerbate existing educational disparities, particularly affecting students from marginalized communities. The study reveals tha
38

Collins, Robert, Johan Redström, and Marco Rozendaal. "The Right to Contestation: Towards Repairing Our Interactions with Algorithmic Decision Systems." International Journal of Design 18, no. 1 (2024): 95–106. https://doi.org/10.57698/v18i1.06.

Annotation:
This paper looks at how contestation in the context of algorithmic decision systems is essentially the progeny of repair for our more decentralised and abstracted digital world. The act of repair has often been a way for users to contest with bad design, substandard products, and disappointing outcomes - not to mention often being a necessary aspect of ensuring effective use over time. As algorithmic systems continue to make more decisions about our lives and futures, we need to look for new ways to contest their outcomes and repair potentially broken systems. Through looking at examples of co
39

Pääkkönen, Juho. "Bureaucratic routines and error management in algorithmic systems." Hallinnon Tutkimus 40, no. 4 (2021): 243–53. http://dx.doi.org/10.37450/ht.107880.

Annotation:
This article discusses how an analogy between algorithms and bureaucratic decision-making could help conceptualize error management in algorithmic systems. It argues that a view of algorithms as irreflexive bureaucratic processes is insufficient as an account of errors in complex public sector contexts, where algorithms operate jointly with other organizational work practices. To conceptualize such contexts, the article proposes that algorithms could be viewed as analogous to more traditional work routines in bureaucratic organizations. Doing so helps clarify that algorithmic irreflexivity bec
40

Chandra, Rushil, Karun Sanjaya, AR Aravind, Ahmed Radie Abbas, Ruzieva Gulrukh, and T. S. Senthil kumar. "Algorithmic Fairness and Bias in Machine Learning Systems." E3S Web of Conferences 399 (2023): 04036. http://dx.doi.org/10.1051/e3sconf/202339904036.

Annotation:
In recent years, research into and concern over algorithmic fairness and bias in machine learning systems has grown significantly. It is vital to make sure that these systems are fair, impartial, and do not support discrimination or social injustices since machine learning algorithms are becoming more and more prevalent in decision-making processes across a variety of disciplines. This abstract gives a general explanation of the idea of algorithmic fairness, the difficulties posed by bias in machine learning systems, and different solutions to these problems. Algorithmic bias and fairness in m
41

K, Suraj, Yogesh K, Malavya Manivarnnan, and Mahesh Kumar Sarva. "Exploring Youth Perspectives on Algorithmic Trading: Knowledge, Trust, and Adoption." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem43789.

Annotation:
The increasing adoption of algorithmic trading has significantly transformed financial markets by enabling automated decision-making and high-speed trade execution. While institutional investors and hedge funds have widely embraced this technology, its understanding and acceptance among young retail investors, particularly those aged 18 to 25, remain relatively unexplored. As digital trading platforms and fintech innovations continue to gain popularity, assessing the awareness, perception, and preferences of youth regarding algorithmic trading is crucial. This study aims to examine the extent
42

K, B. Lakshminarayana. "Ethical AI in the IT Industry: Addressing Bias, Transparency, and Accountability in Algorithmic Decision-Making." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 03 (2025): 1–9. https://doi.org/10.55041/ijsrem42652.

Annotation:
The rapid integration of artificial intelligence (AI) into the IT industry has raised significant ethical concerns, particularly regarding bias, transparency, and accountability in algorithmic decision-making. While AI systems offer transformative potential, their deployment often perpetuates existing biases, lacks transparency, and fails to ensure accountability, leading to unintended societal consequences. This study examines the ethical challenges posed by AI in the IT sector, focusing on the mechanisms through which bias is embedded in algorithms, the opacity of decision-making processes,
43

Sinclair, Sean R. "Adaptivity, Structure, and Objectives in Sequential Decision-Making." ACM SIGMETRICS Performance Evaluation Review 51, no. 3 (2024): 38–41. http://dx.doi.org/10.1145/3639830.3639846.

Annotation:
Sequential decision-making algorithms are ubiquitous in the design and optimization of large-scale systems due to their practical impact. The typical algorithmic paradigm ignores the sequential notion of these problems: use a historical dataset to predict future uncertainty and solve the resulting offline planning problem.
44

Hou, Yoyo Tsung-Yu, and Malte F. Jung. "Who is the Expert? Reconciling Algorithm Aversion and Algorithm Appreciation in AI-Supported Decision Making." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (2021): 1–25. http://dx.doi.org/10.1145/3479864.

Annotation:
The increased use of algorithms to support decision making raises questions about whether people prefer algorithmic or human input when making decisions. Two streams of research on algorithm aversion and algorithm appreciation have yielded contradicting results. Our work attempts to reconcile these contradictory findings by focusing on the framings of humans and algorithms as a mechanism. In three decision making experiments, we created an algorithm appreciation result (Experiment 1) as well as an algorithm aversion result (Experiment 2) by manipulating only the description of the human agent
45

Phillips, Camryn, and Scott Ransom. "Algorithmic Pulsar Timing." Astronomical Journal 163, no. 2 (2022): 84. http://dx.doi.org/10.3847/1538-3881/ac403e.

Annotation:
Abstract Pulsar timing is a process of iteratively fitting pulse arrival times to constrain the spindown, astrometric, and possibly binary parameters of a pulsar, by enforcing integer numbers of pulsar rotations between the arrival times. Phase connection is the process of unambiguously determining those rotation numbers between the times of arrival while determining a pulsar timing solution. Pulsar timing currently requires a manual process of step-by-step phase connection performed by individuals. In an effort to quantify and streamline this process, we created the Algorithmic Pulsar Timer (
46

Pasupuleti, Murali Krishna. "Human-in-the-Loop AI: Enhancing Transparency and Accountability." International Journal of Academic and Industrial Research Innovations (IJAIRI) 05, no. 05 (2025): 574–85. https://doi.org/10.62311/nesx/rphcr18.

Annotation:
Abstract: As artificial intelligence (AI) increasingly informs decisions in critical sectors such as healthcare, finance, and governance, concerns regarding algorithmic opacity and fairness have intensified. This research investigates the integration of Human-in-the-Loop (HITL) mechanisms as a strategy to enhance transparency, interpretability, and accountability in AI systems. Using real-world datasets from financial fraud detection and healthcare triage, we evaluate the comparative performance of machine learning models with and without human intervention. The study employs fairness metrics—
47

Monachou, Faidra, and Ana-Andreea Stoica. "Fairness and equity in resource allocation and decision-making." ACM SIGecom Exchanges 20, no. 1 (2022): 64–66. http://dx.doi.org/10.1145/3572885.3572891.

Annotation:
Fairness and equity considerations in the allocation of social goods and the development of algorithmic systems pose new challenges for decision-makers and interesting questions for the EC community. We overview a list of papers that point towards emerging directions in this research area.
48

Rigopoulos, Georgios. "Weighted OWA (Ordered Weighted Averaging Operator) Preference Aggregation for Group Multicriteria Decisions." International Journal of Computational and Applied Mathematics & Computer Science 3 (May 8, 2023): 10–17. http://dx.doi.org/10.37394/232028.2023.3.2.

Annotation:
Group decision making is an integral part of operations and management functions in almost every business domain with substantial applications in finance and economics. In parallel to human decision makers, software agents operate in business systems and environments, collaborate, compete and perform algorithmic decision-making tasks as well. In both settings, information aggregation of decision problem parameters and agent preferences is a necessary step to generate the group decision outcome. Although plenty of information aggregation approaches exist, overcomplexity of the underlying aggregating o
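For readers unfamiliar with the operator in the title, here is a minimal sketch of the standard OWA aggregation (Yager's operator), which the paper extends with importance weights for group settings; the scores and weights below are hypothetical and this is not the authors' implementation:

```python
def owa(scores, weights):
    """Ordered weighted average: weights are applied to the sorted scores,
    so they express the decision maker's attitude (e.g. optimism) rather
    than the importance of particular criteria."""
    if len(scores) != len(weights) or abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must match the scores and sum to 1")
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

# Three group members rate one alternative; weights favour the top opinions.
print(owa([0.6, 0.9, 0.4], [0.5, 0.3, 0.2]))  # 0.71
```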
49

Talapina, Elvira V. "Administrative Algorithmic Solutions." Administrative law and procedure, October 10, 2024, 39–42. http://dx.doi.org/10.18572/2071-1166-2024-10-39-42.

Annotation:
Examples of algorithms being involved in administrative practice are multiplying. However, the process of their use remains unsettled. For the legal evaluation of the results of algorithms’ decisions and subsequent legal regulation, it is important to distinguish between automated decision-making systems and machine learning systems. But if the prospects for the use of algorithms by both courts and the executive branch seem obvious, such use should certainly take into account possible risks and disadvantages. This is especially true for machine learning algorithms. First, human decision-making
50

Turek, Matt. "DARPA’s In the Moment (ITM) Program: Human-Aligned Algorithms for Making Difficult Battlefield Triage Decisions." Disaster Medicine and Public Health Preparedness 18 (2024). http://dx.doi.org/10.1017/dmp.2024.196.

Annotation:
Abstract DARPA’s In the Moment (ITM) program seeks to develop algorithmic decision makers for battlefield triage that are aligned with key decision-making attributes of trusted humans. ITM also seeks to develop a quantitative alignment score (based on the decision-making attributes) as a method for establishing appropriate trust in algorithmic decision-making systems. ITM is interested in a specific notion of trust, specifically the willingness of a human to delegate difficult decision-making to an algorithmic system. While the AI community often identifies technical performance characteristic