Academic literature on the topic 'Algorithmic decision systems'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Algorithmic decision systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Algorithmic decision systems"

1

Waldman, Ari, and Kirsten Martin. "Governing algorithmic decisions: The role of decision importance and governance on perceived legitimacy of algorithmic decisions." Big Data & Society 9, no. 1 (2022): 205395172211004. http://dx.doi.org/10.1177/20539517221100449.

Full text
Abstract:
The algorithmic accountability literature to date has primarily focused on procedural tools to govern automated decision-making systems. That prescriptive literature elides a fundamentally empirical question: whether and under what circumstances, if any, is the use of algorithmic systems to make public policy decisions perceived as legitimate? The present study begins to answer this question. Using factorial vignette survey methodology, we explore the relative importance of the type of decision, the procedural governance, the input data used, and outcome errors on perceptions of the legitimacy of algorithmic public policy decisions as compared to similar human decisions. Among other findings, we find that the type of decision—low importance versus high importance—impacts the perceived legitimacy of automated decisions. We find that human governance of algorithmic systems (aka human-in-the-loop) increases perceptions of the legitimacy of algorithmic decision-making systems, even when those decisions are likely to result in significant errors. Notably, we also find the penalty to perceived legitimacy is greater when human decision-makers make mistakes than when algorithmic systems make the same errors. The positive impact on perceived legitimacy from governance—such as human-in-the-loop—is greatest for highly pivotal decisions such as parole, policing, and healthcare. After discussing the study’s limitations, we outline avenues for future research.
APA, Harvard, Vancouver, ISO, and other styles
2

Dawson, April. "Algorithmic Adjudication and Constitutional AI—The Promise of A Better AI Decision Making Future?" SMU Science and Technology Law Review 27, no. 1 (2024): 11. http://dx.doi.org/10.25172/smustlr.27.1.3.

Full text
Abstract:
Algorithmic governance is when algorithms, often in the form of AI, make decisions, predict outcomes, and manage resources in various aspects of governance. This approach can be applied in areas like public administration, legal systems, policy-making, and urban planning. Algorithmic adjudication involves using AI to assist in or decide legal disputes. This often includes the analysis of legal documents, case precedents, and relevant laws to provide recommendations or even final decisions. The AI models typically used in these emerging decision-making systems use traditionally trained AI systems on large data sets so the system can render a decision or prediction based on past practices. However, the decisions often perpetuate existing biases and can be difficult to explain. Algorithmic decision-making models using a constitutional AI framework (like Anthropic's LLM Claude) may produce results that are more explainable and aligned with societal values. The constitutional AI framework integrates core legal and ethical standards directly into the algorithm’s design and operation, ensuring decisions are made with considerations for fairness, equality, and justice. This article will discuss society’s movement toward algorithmic governance and adjudication, the challenges associated with using traditionally trained AI in these decision-making models, and the potential for better outcomes with constitutional AI models.
APA, Harvard, Vancouver, ISO, and other styles
3

Herzog, Lisa. "Algorithmisches Entscheiden, Ambiguitätstoleranz und die Frage nach dem Sinn." Deutsche Zeitschrift für Philosophie 69, no. 2 (2021): 197–213. http://dx.doi.org/10.1515/dzph-2021-0016.

Full text
Abstract:
In more and more contexts, human decision-making is replaced by algorithmic decision-making. While promising to deliver efficient and objective decisions, algorithmic decision systems have specific weaknesses, some of which are particularly dangerous if data are collected and processed by profit-oriented companies. In this paper, I focus on two problems that are at the root of the logic of algorithmic decision-making: (1) (in)tolerance for ambiguity, and (2) instantiations of Campbell’s law, i.e. of indicators that are used for “social decision-making” being subject to “corruption pressures” and tending to “distort and corrupt” the underlying social processes. As a result, algorithmic decision-making can risk missing the point of the social practice in question. These problems are intertwined with problems of structural injustice; hence, if algorithms are to deliver on their promises of efficiency and objectivity, accountability and critical scrutiny are needed.
APA, Harvard, Vancouver, ISO, and other styles
4

Kleinberg, Jon, and Manish Raghavan. "Algorithmic monoculture and social welfare." Proceedings of the National Academy of Sciences 118, no. 22 (2021): e2018340118. http://dx.doi.org/10.1073/pnas.2018340118.

Full text
Abstract:
As algorithms are increasingly applied to screen applicants for high-stakes decisions in employment, lending, and other domains, concerns have been raised about the effects of algorithmic monoculture, in which many decision-makers all rely on the same algorithm. This concern invokes analogies to agriculture, where a monocultural system runs the risk of severe harm from unexpected shocks. Here, we show that the dangers of algorithmic monoculture run much deeper, in that monocultural convergence on a single algorithm by a group of decision-making agents, even when the algorithm is more accurate for any one agent in isolation, can reduce the overall quality of the decisions being made by the full collection of agents. Unexpected shocks are therefore not needed to expose the risks of monoculture; it can hurt accuracy even under “normal” operations and even for algorithms that are more accurate when used by only a single decision-maker. Our results rely on minimal assumptions and involve the development of a probabilistic framework for analyzing systems that use multiple noisy estimates of a set of alternatives.
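To make the monoculture effect concrete, the toy simulation below compares the total true quality of two sequential hires when both firms rank candidates by one shared noisy estimate versus when each firm forms its own independent noisy estimate. This is a minimal sketch loosely inspired by the paper's setup, not the authors' model; the pool size, Gaussian noise, and sequential hiring rule are illustrative assumptions, and how far the two regimes diverge depends on those choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_candidates=50, noise_sd=1.0, n_trials=20_000):
    """Compare average total true quality of two sequential hires under a
    shared noisy ranking (monoculture) vs. independent noisy rankings."""
    mono_total = poly_total = 0.0
    for _ in range(n_trials):
        quality = rng.normal(size=n_candidates)            # true candidate quality
        # Monoculture: both firms rank candidates by the same noisy estimate.
        shared = quality + rng.normal(scale=noise_sd, size=n_candidates)
        order = np.argsort(shared)[::-1]
        mono_total += quality[order[0]] + quality[order[1]]
        # Polyculture: each firm relies on its own independent noisy estimate.
        est1 = quality + rng.normal(scale=noise_sd, size=n_candidates)
        est2 = quality + rng.normal(scale=noise_sd, size=n_candidates)
        first = int(np.argmax(est1))
        est2[first] = -np.inf                              # candidate already hired
        second = int(np.argmax(est2))
        poly_total += quality[first] + quality[second]
    return mono_total / n_trials, poly_total / n_trials

mono, poly = simulate()
print(f"avg total hire quality, shared algorithm:      {mono:.3f}")
print(f"avg total hire quality, independent estimates: {poly:.3f}")
```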
APA, Harvard, Vancouver, ISO, and other styles
5

Duran, Sergi Gálvez. "Opening the Black-Box in Private-Law Employment Relationships: A Critical Review of the Newly Implemented Spanish Workers’ Council’s Right to Access Algorithms." Global Privacy Law Review 4, no. 1 (2023): 17–30. http://dx.doi.org/10.54648/gplr2023003.

Full text
Abstract:
Article 22 of the General Data Protection Regulation (GDPR) provides individuals with the right not to be subject to automated decisions. In this article, the author questions the extent to which the legal framework for automated decision-making in the GDPR is attuned to the employment context. More specifically, the author argues that an individual’s right may not be the most appropriate approach to contesting artificial intelligence (AI) based decisions in situations involving dependency contracts, such as employment relationships. Furthermore, Article 22 GDPR derogations rarely apply in the employment context, which puts organizations on the wrong track when deploying AI systems to make decisions about hiring, performance, and termination. In this scenario, emerging initiatives are calling for a shift from an individual rights perspective to a collective governance approach over data as a way to leverage collective bargaining power. Taking inspiration from these different initiatives, I propose ‘algorithmic co-governance’ to address the lack of accountability and transparency in AI-based employment decisions. Algorithmic co-governance implies giving third parties (ideally, the workforce’s legal representatives) the power to negotiate, correct, and overturn AI-based employment decision tools. In this context, Spain has implemented a law reform requiring that Workers’ Councils are informed about the ‘parameters, rules, and instructions’ on which algorithmic decision-making is based, becoming the first law in the European Union requiring employers to share information about AI-based decisions with Workers’ Councils. I use this reform to evaluate a potential algorithmic co-governance model in the workplace, highlighting some shortcomings that may undermine its quality and effectiveness. Keywords: Algorithms, Artificial Intelligence, AI Systems, Automated Decision-Making, Algorithmic Co-governance, Algorithmic Management, Data Protection, Privacy, GDPR, Employment Decisions, Right To Access Algorithms, Workers’ Council.
APA, Harvard, Vancouver, ISO, and other styles
6

Morchhale, Yogesh. "Ethical Considerations in Artificial Intelligence: Addressing Bias and Fairness in Algorithmic Decision-Making." International Journal of Scientific Research in Engineering and Management 8, no. 4 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem31693.

Full text
Abstract:
The expanding use of artificial intelligence (AI) in decision-making across a range of industries has given rise to serious ethical questions about prejudice and justice. This study looks at the moral ramifications of using AI algorithms in decision-making and examines methods to combat prejudice and advance justice. The study investigates the underlying causes of prejudice in AI systems, the effects of biased algorithms on people and society, and the moral obligations of stakeholders in reducing bias, drawing on prior research and real-world examples. The study also addresses new frameworks and strategies for advancing justice in algorithmic decision-making, emphasizing the value of openness, responsibility, and diversity in dataset gathering and algorithm development. The study concludes with suggestions for further investigation and legislative actions to guarantee that AI systems respect moral standards and advance justice and equity in the processes of making decisions. Keywords: Ethical considerations, Artificial intelligence, Bias, Fairness, Algorithmic decision-making, Ethical implications, Ethical responsibilities, Stakeholders, Bias in AI systems, Impact of biased algorithms, Strategies for addressing bias, Promoting fairness, Algorithmic transparency.
APA, Harvard, Vancouver, ISO, and other styles
7

Starke, Christopher, Janine Baleis, Birte Keller, and Frank Marcinkowski. "Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature." Big Data & Society 9, no. 2 (2022): 205395172211151. http://dx.doi.org/10.1177/20539517221115189.

Full text
Abstract:
Algorithmic decision-making increasingly shapes people's daily lives. Given that such autonomous systems can cause severe harm to individuals and social groups, fairness concerns have arisen. A human-centric approach demanded by scholars and policymakers requires considering people's fairness perceptions when designing and implementing algorithmic decision-making. We provide a comprehensive, systematic literature review synthesizing the existing empirical insights on perceptions of algorithmic fairness from 58 empirical studies spanning multiple domains and scientific disciplines. Through thorough coding, we systemize the current empirical literature along four dimensions: (1) algorithmic predictors, (2) human predictors, (3) comparative effects (human decision-making vs. algorithmic decision-making), and (4) consequences of algorithmic decision-making. While we identify much heterogeneity around the theoretical concepts and empirical measurements of algorithmic fairness, the insights come almost exclusively from Western-democratic contexts. By advocating for more interdisciplinary research adopting a society-in-the-loop framework, we hope our work will contribute to fairer and more responsible algorithmic decision-making.
APA, Harvard, Vancouver, ISO, and other styles
8

Hoeppner, Sven, and Martin Samek. "Procedural Fairness as Stepping Stone for Successful Implementation of Algorithmic Decision-Making in Public Administration: Review and Outlook." AUC IURIDICA 70, no. 2 (2024): 85–99. http://dx.doi.org/10.14712/23366478.2024.24.

Full text
Abstract:
Algorithmic decision-making (ADM) is becoming more and more prevalent in everyday life. Due to their promise of producing faster, better, and less biased decisions, automated and data-driven processes also receive increasing attention in many different administrative settings. However, as a result of human mistakes, ADM also poses the threat of producing unfair outcomes. Looming algorithmic discrimination can undermine the legitimacy of administrative decision-making. While lawyers and lawmakers face the age-old question of regulation, many decision-makers tasked with designing ADM for and implementing ADM in public administration wrestle with harnessing its advantages and limiting its disadvantages. “Algorithmic fairness” has evolved as a key concept in developing algorithmic systems to counter detrimental outcomes. We provide a review of the vast literature on algorithmic fairness and show how key dimensions alter people’s perception of whether an algorithm is fair. In doing so, we provide an entry point into this literature for anybody who is required to think about algorithmic fairness, particularly in a public administration context. We also pinpoint critical concerns about algorithmic fairness that public officials and researchers should note.
APA, Harvard, Vancouver, ISO, and other styles
9

Saxena, Devansh, Karla Badillo-Urquiola, Pamela J. Wisniewski, and Shion Guha. "A Framework of High-Stakes Algorithmic Decision-Making for the Public Sector Developed through a Case Study of Child-Welfare." Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (2021): 1–41. http://dx.doi.org/10.1145/3476089.

Full text
Abstract:
Algorithms have permeated throughout civil government and society, where they are being used to make high-stakes decisions about human lives. In this paper, we first develop a cohesive framework of algorithmic decision-making adapted for the public sector (ADMAPS) that reflects the complex socio-technical interactions between human discretion, bureaucratic processes, and algorithmic decision-making by synthesizing disparate bodies of work in the fields of Human-Computer Interaction (HCI), Science and Technology Studies (STS), and Public Administration (PA). We then applied the ADMAPS framework to conduct a qualitative analysis of an in-depth, eight-month ethnographic case study of algorithms in daily use within a child-welfare agency that serves approximately 900 families and 1300 children in the mid-western United States. Overall, we found that there is a need to focus on strength-based algorithmic outcomes centered in social ecological frameworks. In addition, algorithmic systems need to support existing bureaucratic processes and augment human discretion, rather than replace it. Finally, collective buy-in in algorithmic systems requires trust in the target outcomes at both the practitioner and bureaucratic levels. As a result of our study, we propose guidelines for the design of high-stakes algorithmic decision-making tools in the child-welfare system, and more generally, in the public sector. We empirically validate the theoretically derived ADMAPS framework to demonstrate how it can be useful for systematically making pragmatic decisions about the design of algorithms for the public sector.
APA, Harvard, Vancouver, ISO, and other styles
10

Zerilli, John, Alistair Knott, James Maclaurin, and Colin Gavaghan. "Algorithmic Decision-Making and the Control Problem." Minds and Machines 29, no. 4 (2019): 555–78. http://dx.doi.org/10.1007/s11023-019-09513-7.

Full text
Abstract:
The danger of human operators devolving responsibility to machines and failing to detect cases where they fail has been recognised for many years by industrial psychologists and engineers studying the human operators of complex machines. We call it “the control problem”, understood as the tendency of the human within a human–machine control loop to become complacent, over-reliant or unduly diffident when faced with the outputs of a reliable autonomous system. While the control problem has been investigated for some time, up to this point its manifestation in machine learning contexts has not received serious attention. This paper aims to fill that gap. We argue that, except in certain special circumstances, algorithmic decision tools should not be used in high-stakes or safety-critical decisions unless the systems concerned are significantly “better than human” in the relevant domain or subdomain of decision-making. More concretely, we recommend three strategies to address the control problem, the most promising of which involves a complementary (and potentially dynamic) coupling between highly proficient algorithmic tools and human agents working alongside one another. We also identify six key principles which all such human–machine systems should reflect in their design. These can serve as a framework both for assessing the viability of any such human–machine system as well as guiding the design and implementation of such systems generally.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Algorithmic decision systems"

1

Shcherban, V. Yu. "Algorithmic and programmatic providing decision systems of linear equalizations." Thesis, Київський національний університет технологій та дизайну, 2019. https://er.knutd.edu.ua/handle/123456789/14586.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Böhnlein, Toni [Verfasser]. "Algorithmic Decision-Making in Multi-Agent Systems: Votes and Prices / Toni Böhnlein." München : Verlag Dr. Hut, 2018. http://d-nb.info/1164294113/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Björklund, Pernilla. "The curious case of artificial intelligence : An analysis of the relationship between the EU medical device regulations and algorithmic decision systems used within the medical domain." Thesis, Uppsala universitet, Juridiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-442122.

Full text
Abstract:
The healthcare sector has become a key area for the development and application of new technology and, not least, Artificial Intelligence (AI). New reports are constantly being published about how this algorithm-based technology supports or performs various medical tasks. These illustrate the rapid development of AI that is taking place within healthcare and how algorithms are increasingly involved in systems and medical devices designed to support medical decision-making. The digital revolution and the advancement of AI technologies represent a step change in the way healthcare may be delivered, medical services coordinated and well-being supported. It could allow for easier and faster communication, earlier and more accurate diagnosing and better healthcare at lower costs. However, systems and devices relying on AI differ significantly from other, traditional, medical devices. AI algorithms are – by nature – complex and partly unpredictable. Additionally, varying levels of opacity have made it hard, sometimes impossible, to interpret and explain recommendations or decisions made by or with support from algorithmic decision systems. These characteristics of AI technology raise important technological, practical, ethical and regulatory issues. The objective of this thesis is to analyse the relationship between the EU regulation on medical devices (MDR) and algorithmic decision systems (ADS) used within the medical domain. The principal question is whether the MDR is enough to guarantee safe and robust ADS within the European healthcare sector or if complementary (or completely different) regulation is necessary. In essence, it will be argued that (i) while ADS are heavily reliant on the quality and representativeness of underlying datasets, there are no requirements with regard to the quality or composition of these datasets in the MDR, (ii) while it is believed that ADS will lead to historically unprecedented changes in healthcare, the regulation lacks guidance on how to manage novel risks and hazards, unique to ADS, and that (iii) as increasingly autonomous systems continue to challenge the existing perceptions of how safety and performance is best maintained, new mechanisms (for transparency, human control and accountability) must be incorporated in the systems. It will also be found that the ability of ADS to change after market certification will eventually necessitate radical changes in the current regulation, and a new regulatory paradigm might be needed.
APA, Harvard, Vancouver, ISO, and other styles
4

Fairley, Andrew. "Information systems for tactical decision making." Thesis, University of Liverpool, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241479.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Weingartner, Stephan G. "System development : an algorithmic approach." Virtual Press, 1987. http://liblink.bsu.edu/uhtbin/catkey/483077.

Full text
Abstract:
The subject chosen for this thesis project is the development of an algorithm, or methodology, for system selection. The specific problem studied involves a procedure to determine which computer system alternative is the best choice for a given user situation. The general problem addressed is the need to choose computing hardware, software, systems, or services in a logical approach from a user perspective, considering cost, performance and human factors. Most existing methods consider only cost and performance factors, combining these factors in ad hoc, subjective fashions to reach a selection decision. By not considering factors that measure the effectiveness and functionality of computer services for a user, existing methods ignore some of the most important measures of value to the user. In this work, a systematic and comprehensive approach to computer system selection has been developed. Methods for selecting and organizing various criteria were also developed, and ways to assess the importance and value of different service attributes to an end-user are discussed. Finally, the feasibility of a systematic approach to computer system selection has been demonstrated by establishing a general methodology and proving it through a specific application.
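A selection methodology of the kind described here is often operationalized as a weighted multi-criteria scoring matrix. The sketch below is a generic illustration under assumed criteria, weights, and ratings; none of these values come from the thesis.

```python
# Hypothetical criteria weights and 1-10 ratings for three candidate systems.
weights = {"cost": 0.30, "performance": 0.40, "usability": 0.20, "support": 0.10}
ratings = {
    "System A": {"cost": 6, "performance": 9, "usability": 5, "support": 7},
    "System B": {"cost": 8, "performance": 7, "usability": 8, "support": 6},
    "System C": {"cost": 9, "performance": 5, "usability": 7, "support": 8},
}

scores = {
    name: sum(weights[c] * r[c] for c in weights)   # weighted sum across criteria
    for name, r in ratings.items()
}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.2f}")
print("Recommended:", max(scores, key=scores.get))
```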
APA, Harvard, Vancouver, ISO, and other styles
6

Manongga, D. H. F. "Using genetic algorithm-based methods for financial analysis." Thesis, University of East Anglia, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bacak, Hikmet Ozge. "Decision Making System Algorithm On Menopause Data Set." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12612471/index.pdf.

Full text
Abstract:
A multiple-centered clustering method and a decision-making system algorithm for a menopause data set, based on multiple-centered clustering, are described in this study. The method consists of two stages. In the first stage, the fuzzy C-means (FCM) clustering algorithm is applied to the data set under consideration with a high number of cluster centers. As the output of FCM, cluster centers and membership function values for each data member are calculated. In the second stage, the original cluster centers obtained in the first stage are merged until the new number of clusters is reached. The merging process relies upon a “similarity measure” between clusters defined in the thesis. During the merging process, the cluster center coordinates do not change, but the data members in these clusters are merged into a new cluster. As the output of this method, therefore, one obtains clusters which include many cluster centers. In the final part of this study, as an application of the clustering algorithms, including the multiple-centered clustering method, a decision making system is constructed using special data on menopause treatment. The decisions are based on the clusterings created by the algorithms discussed in the previous chapters of the thesis. A verification of the decision making system / decision aid system was carried out by a team of experts from the Department of Obstetrics and Gynecology of Hacettepe University under the guidance of Prof. Sinan Beksaç.
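The two-stage procedure summarized above, fuzzy C-means with deliberately many centers followed by a merging pass, can be sketched roughly as follows. This is a generic illustration: the plain Euclidean distance used as the merge criterion is only a stand-in for the thesis's own similarity measure, and the synthetic data and parameters are assumptions.

```python
import numpy as np

def fuzzy_c_means(X, n_centers, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means: returns cluster centers and the membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((X.shape[0], n_centers))
    u /= u.sum(axis=1, keepdims=True)                  # memberships sum to 1 per point
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

def merge_centers(centers, u, target_k):
    """Stage two: greedily merge the two closest groups of centers until
    target_k groups remain (Euclidean distance stands in for the thesis's
    own similarity measure)."""
    groups = [[i] for i in range(len(centers))]
    while len(groups) > target_k:
        best = None
        for a in range(len(groups)):
            for b in range(a + 1, len(groups)):
                d = min(np.linalg.norm(centers[i] - centers[j])
                        for i in groups[a] for j in groups[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        groups[a] += groups.pop(b)
    # Each data point follows its highest-membership center into the merged group.
    top = np.argmax(u, axis=1)
    labels = np.array([next(g for g, idx in enumerate(groups) if c in idx) for c in top])
    return groups, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, size=(40, 2)) for c in ([0, 0], [5, 5], [0, 5])])
centers, u = fuzzy_c_means(X, n_centers=9)             # deliberately many centers
groups, labels = merge_centers(centers, u, target_k=3)
print("merged groups of original centers:", groups)
```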
APA, Harvard, Vancouver, ISO, and other styles
8

Wan, Min. "Decision diagram algorithms for logic and timed verification." Diss., [Riverside, Calif.] : University of California, Riverside, 2008. http://proquest.umi.com/pqdweb?index=0&did=1663077981&SrchMode=2&sid=1&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268242250&clientId=48051.

Full text
Abstract:
Thesis (Ph.D.)--University of California, Riverside, 2008. Includes abstract. Title from first page of PDF file (viewed March 10, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 166-170). Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
9

Raboun, Oussama. "Multiple Criteria Spatial Risk Rating." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED066.

Full text
Abstract:
The thesis is motivated by an interesting case study related to environmental risk assessment. The case study consists of assessing the impact of a nuclear accident taking place in the marine environment. This problem is characterized by spatial characteristics, different economic and environmental assets characterizing the spatial area, incomplete knowledge about the possible stakeholders, and a high number of possible accident scenarios. A first solution to the case study problem was proposed in which different decision analysis techniques were used, such as lottery comparison and MCDA (Multiple Criteria Decision Analysis) tools. A new MCDA rating method, named Dynamic-R, emerged from this thesis, aiming to provide a complete and convincing rating. The developed method provided interesting results for the case study, as well as very interesting theoretical properties, which are presented in Chapters 6 and 7 of the manuscript.
APA, Harvard, Vancouver, ISO, and other styles
10

Schroeder, Pascal. "Performance guaranteeing algorithms for solving online decision problems in financial systems." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0143.

Full text
Abstract:
This thesis contains several online financial decision problems and their solutions. The problems are formulated as online problems (OP) and online algorithms (OA) are created to solve them. Because there can be various OAs for the same OP, there must be criteria with which one can make statements about the quality of an OA. In this thesis these criteria are the competitive ratio (c), the competitive difference (cd) and the numerical performance. An OA with a lower c is preferable to another one with a higher value. An OA that has the lowest c is called optimal. We consider the following OPs: the online conversion problem (OCP), the online portfolio selection problem (PSP) and the cash management problem (CMP). After the introductory chapter, the OPs, the notation and the state of the art in the field of OPs are presented. In the third chapter, three variants of the OCP with interrelated prices are solved. In the fourth chapter the time series search with interrelated prices is revisited and new algorithms are created. At the end of the chapter, the optimal OA k-DIV for the general k-max search with interrelated prices is developed. In Chapter 5 the PSP with interrelated prices is solved. The created OA OPIP is optimal. Using the idea of OPIP, an optimal OA for two-way trading is created (OCIP). Having OCIP, an optimal OA for the bi-directional search knowing the values of θ_1 and θ_2 is created (BUND). For unknown θ_1 and θ_2, the optimal OA RUN is created. The chapter ends with empirical (for OPIP) and experimental (for OCIP, BUND and RUN) testing. Chapters 6 and 7 deal with the CMP. In both of them, numerical testing is done in order to compare the numerical performance of the new OAs with that of the already established ones. In Chapter 6 an optimal OA is constructed; in Chapter 7, OAs are designed which minimize cd. The OA BCSID solves the CMP with interrelated demands to optimality. The OA aBBCSID solves the CMP when the values of θ_1, θ_2, m and M are known; however, this OA is not optimal. In Chapter 7 the CMP is also solved, knowing m and M and minimizing cd (OA MRBD). For interrelated demands, a heuristic OA (HMRID) and a cd-minimizing OA (MRID) are presented. HMRID is a good compromise between numerical performance and the minimization of cd. The thesis concludes with a short discussion of shortcomings of the considered OPs and the created OAs, followed by some remarks about future research possibilities in this field.
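As a concrete point of reference for the competitive-ratio analyses mentioned above, the classical reservation-price policy for one-way time-series search accepts the first price at or above sqrt(m*M) when prices are known to lie in [m, M], and this guarantees a competitive ratio of sqrt(M/m). The sketch below shows only this textbook baseline, not the thesis's algorithms for interrelated prices; the price bounds and random prices are illustrative assumptions.

```python
import math
import random

def reservation_price_search(prices, m, M):
    """Classical online one-way search: accept the first price >= sqrt(m*M).
    If no price ever qualifies, the last offered price must be taken."""
    threshold = math.sqrt(m * M)
    for p in prices:
        if p >= threshold:
            return p
    return prices[-1]

m, M = 10.0, 160.0                      # assumed known lower/upper price bounds
random.seed(3)
prices = [random.uniform(m, M) for _ in range(20)]
chosen = reservation_price_search(prices, m, M)
print(f"accepted {chosen:.2f}, offline optimum {max(prices):.2f}, "
      f"worst-case competitive ratio sqrt(M/m) = {math.sqrt(M / m):.2f}")
```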
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Algorithmic decision systems"

1

Tijms, H. C. Stochastic models: An algorithmic approach. Wiley, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Szapiro, Tomasz, and Janusz Kacprzyk, eds. Collective Decisions: Theory, Algorithms And Decision Support Systems. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-84997-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chang, Hyeong Soo. Simulation-Based Algorithms for Markov Decision Processes. 2nd ed. Springer London, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Koukoudakis, Alexandros. Visualisation decision algorithm for temporal database management system. UMIST, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Grundel, Don A., Robert Murphey, and P. M. Pardalos, eds. Theory and algorithms for cooperative systems. World Scientific, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zilio, Daniel C. Physical database design decision algorithms and concurrent reorganization for parallel database systems. University of Toronto, Dept. of Computer Science, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Langley Research Center, ed. An algorithm for integrated subsystem embodiment and system synthesis. National Aeronautics and Space Administration, Langley Research Center, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Langley Research Center, ed. An algorithm for integrated subsystem embodiment and system synthesis. National Aeronautics and Space Administration, Langley Research Center, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gdanskiy, Nikolay. Fundamentals of the theory and algorithms on graphs. INFRA-M Academic Publishing LLC., 2020. http://dx.doi.org/10.12737/978686.

Full text
Abstract:
The textbook describes the main theoretical principles of graph theory, the main tasks to be solved using graph structures, and general methods of their solution, together with specific algorithms and estimates of their complexity. Many examples are covered, along with questions to test knowledge and tasks for independent work. In addition to control tasks that verify theoretical understanding, practical assignments to develop programs on topics of graph theory are provided. The book meets the requirements of the latest generation of Federal state educational standards of higher education. It is designed for undergraduate and graduate programs in information technology, for in-depth training in the analysis and design of systems with complex structure. The guide can also be useful to IT specialists studying the algorithmic aspects of graph theory.
APA, Harvard, Vancouver, ISO, and other styles
10

Topolsky, Nikolay, and Valeriy Vilisov. Methods, models and algorithms in security systems: machine learning, robotics, insurance, risks, control. Publishing Center RIOR, 2021. http://dx.doi.org/10.29039/02072-2.

Full text
Abstract:
The monograph examines topical issues of decision support and management in safety systems for fire and emergency situations through the use of innovative approaches and tools for operations research, artificial intelligence, robotics and management methods in organizational systems. The monograph is intended for faculty, researchers, graduate students (adjuncts) and doctoral students, as well as for undergraduates, students and listeners of educational organizations, and all those who are interested in the problems of decision support and management in security systems.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Algorithmic decision systems"

1

Prestwich, Steve D., S. Armagan Tarim, Roberto Rossi, and Brahim Hnich. "Neuroevolutionary Inventory Control in Multi-Echelon Systems." In Algorithmic Decision Theory. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04428-1_35.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kumova, Bora İ., and Hüseyin Çakır. "Algorithmic Decision of Syllogisms." In Trends in Applied Intelligent Systems. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13025-0_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rubel, Alan, Adam Pham, and Clinton Castro. "Agency Laundering and Algorithmic Decision Systems." In Information in Contemporary Society. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15742-5_56.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Seeger, Tessa. "Axiomatic and Algorithmic Study on Different Areas of Collective Decision Making." In Multi-Agent Systems. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-20614-6_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zoomer, Thijmen, Dolf van der Beek, Coen van Gulijk, and Jan Harmen Kwantes. "Algorithmic Management and Occupational Safety: The End Does not Justify the Means." In Studies in Systems, Decision and Control. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40997-4_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Acharya, Shubhashree, Harshal Anil Salunkhe, and M. Mathiyarasan. "Navigating Financial Waters: Exploring the Intersection of Algorithmic Trading and Market Liquidity Dynamics." In Studies in Systems, Decision and Control. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-67890-5_48.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

De Stefano, Valerio. "Algorithmic Bosses and What to Do About Them: Automation, Artificial Intelligence and Labour Protection." In Studies in Systems, Decision and Control. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45340-4_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Miller, Gloria J. "Artificial Intelligence Project Success Factors—Beyond the Ethical Principles." In Lecture Notes in Business Information Processing. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98997-2_4.

Full text
Abstract:
The algorithms implemented through artificial intelligence (AI) and big data projects are used in life-and-death situations. Despite research that addresses varying aspects of moral decision-making based upon algorithms, the definition of project success is less clear. Nevertheless, researchers place the burden of responsibility for ethical decisions on the developers of AI systems. This study used a systematic literature review to identify five categories of AI project success factors in 17 groups related to moral decision-making with algorithms. It translates AI ethical principles into practical project deliverables and actions that underpin the success of AI projects. It considers success over time by investigating the development, usage, and consequences of moral decision-making by algorithmic systems. Moreover, the review reveals and defines AI success factors within the project management literature. Project managers and sponsors can use the results during project planning and execution.
APA, Harvard, Vancouver, ISO, and other styles
9

Baumeister, Jan, Bernd Finkbeiner, Frederik Scheerer, Julian Siber, and Tobias Wagenpfeil. "Stream-Based Monitoring of Algorithmic Fairness." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-90643-5_4.

Full text
Abstract:
Automatic decision and prediction systems are increasingly deployed in applications where they significantly impact the livelihood of people, such as for predicting the creditworthiness of loan applicants or the recidivism risk of defendants. These applications have given rise to a new class of algorithmic-fairness specifications that require the systems to decide and predict without bias against social groups. Verifying these specifications statically is often out of reach for realistic systems, since the systems may, e.g., employ complex learning components, and reason over a large input space. In this paper, we therefore propose stream-based monitoring as a solution for verifying the algorithmic fairness of decision and prediction systems at runtime. Concretely, we present a principled way to formalize algorithmic fairness over temporal data streams in the specification language RTLola and demonstrate the efficacy of this approach on a number of benchmarks. Besides synthetic scenarios that particularly highlight its efficiency on streams with a scaling amount of data, we notably evaluate the monitor on real-world data from the recidivism prediction tool COMPAS.
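As a rough, informal analogue of the stream-based fairness monitoring described above (the chapter itself specifies such properties in RTLola), the sketch below tracks the gap in positive-decision rates between two groups over a sliding window of decisions and raises an alert when the gap exceeds a threshold. The window size, tolerance, group labels, and event format are illustrative assumptions.

```python
import random
from collections import deque

class DemographicParityMonitor:
    """Stream monitor: alert when the positive-decision rates of two groups
    diverge by more than `tolerance` over the last `window` decisions."""
    def __init__(self, window=200, tolerance=0.2):
        self.events = deque(maxlen=window)
        self.tolerance = tolerance

    def observe(self, group, decision):
        self.events.append((group, decision))
        rates = {}
        for g in ("A", "B"):
            obs = [d for grp, d in self.events if grp == g]
            if obs:
                rates[g] = sum(obs) / len(obs)
        if len(rates) == 2 and abs(rates["A"] - rates["B"]) > self.tolerance:
            return f"ALERT: parity gap {abs(rates['A'] - rates['B']):.2f}"
        return None

random.seed(0)
monitor = DemographicParityMonitor()
for _ in range(500):
    group = random.choice("AB")
    decision = int(random.random() < (0.7 if group == "A" else 0.4))  # biased stream
    alert = monitor.observe(group, decision)
    if alert:
        print(alert)
        break
```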
APA, Harvard, Vancouver, ISO, and other styles
10

dos Anjos, Lucas Costa. "Rethinking Algorithmic Explainability Through the Lenses of Intellectual Property and Competition." In Information Technology and Law Series. T.M.C. Asser Press, 2024. https://doi.org/10.1007/978-94-6265-639-0_13.

Full text
Abstract:
Algorithmic decision-making is integral to digital platforms, influencing user experiences and societal dynamics. This chapter scrutinizes algorithmic opacity, highlighting the inherent biases, the anti-competitive strategies that may result from dominant market power, and the potential for discrimination within these systems. Despite the promise of objectivity, algorithms often operate under a veil of opacity, shaping content and information access, with significant implications for individual perspectives and societal functioning. The chapter explores the legal challenges posed by the protection of algorithms through the lenses of intellectual property rights and competition law. It calls for a multifaceted regulatory approach to ensure transparency. The analysis emphasises the need to balance innovation with competition and societal well-being, advocating for a right to explanation in the face of automated decisions within the European Union.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Algorithmic decision systems"

1

Aminou, Loubna, Abdelaziz Daaif, Maha Soulami, Abderrahim Chalfaouat, and Mohamed Youssfi. "Converging human and algorithmic biases in the hiring decision-making process." In 2024 International Conference on Intelligent Systems and Computer Vision (ISCV). IEEE, 2024. http://dx.doi.org/10.1109/iscv60512.2024.10620077.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tajima, Keito, Naoki Ichijo, Yuta Nakahara, Koshi Shimada, and Toshiyasu Mathushima. "An Algorithmic Framework for Constructing Multiple Decision Trees by Evaluating Their Combination Performance Throughout the Construction Process." In 2024 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2024. https://doi.org/10.1109/smc54092.2024.10830978.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sherman, Jason A. F., Natalie M. Isenberg, John D. Siirola, and Chrysanthos E. Gounaris. "Recent Advances of PyROS: A Pyomo Solver for Nonconvex Two-Stage Robust Optimization in Process Systems Engineering." In Foundations of Computer-Aided Process Design. PSE Press, 2024. http://dx.doi.org/10.69997/sct.142058.

Full text
Abstract:
In this work, we present recent algorithmic and implementation advances of the nonconvex two-stage robust optimization solver PyROS. Our advances include extensions of the scope of PyROS to models with uncertain variable bounds, improvements to the formulations and/or initializations of the various subproblems used by the underlying cutting set algorithm, and extensions to the pre-implemented uncertainty set interfaces. The effectiveness of PyROS is demonstrated through the results of an original benchmarking study on a library of over 8,500 small-scale instances, with variations in the nonlinearities, degree-of-freedom partitioning, uncertainty sets, and polynomial decision rule approximations. To demonstrate the utility of PyROS for large-scale process models, we present the results of a carbon capture case study. Overall, our results highlight the effectiveness of PyROS for obtaining robust solutions to optimization problems with uncertain equality constraints.
APA, Harvard, Vancouver, ISO, and other styles
4

Roberge, P. R., and K. R. Trethewey. "An Indexing System of Corrosion Failures for Case-Based Reasoning." In CORROSION 1996. NACE International, 1996. https://doi.org/10.5006/c1996-96359.

Full text
Abstract:
The difficulties associated with the committal of human expert knowledge to computers for the development of more effective knowledge-based systems have raised the possibility of mimicking the process of reasoning from previous experiences. Experts are known to make proficient decisions based more upon analogy with similar events and situations than the kind of sequential mechanisms used in many algorithmic approaches. For many years, both law and business schools have used cases as the foundation for knowledge in their respective disciplines. Computer reasoning by analogy, a technique known as Case-Based Reasoning (CBR), has met with tangible success in such diverse human decision-making applications as banking, autoclave loading, tactical decision-making, and foreign trade negotiations. Failure analysts and corrosion engineers also reason by analogy when faced with new situations or problems. The CBR approach is particularly valuable in cases containing ill-structured problems, uncertainty, ambiguity, and missing data. It can also handle dynamic environments and situations with shifting, ill-defined and competing objectives, as well as cases where there are action feedback loops, multiple human involvement, and multiple and potentially changing organizational goals and norms. This paper describes a method of indexing case histories for use in a case-based reasoning system in support of failure analysis.
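In its simplest software form, the case-based reasoning described above reduces to similarity-based retrieval over indexed attributes. The sketch below is a toy illustration with invented attributes, weights, and cases; it does not reproduce the paper's indexing system.

```python
# Toy case base indexed by a few categorical attributes (invented examples).
CASE_BASE = [
    {"id": 1, "alloy": "carbon steel", "environment": "seawater",
     "mechanism": "pitting", "remedy": "cathodic protection"},
    {"id": 2, "alloy": "stainless 316", "environment": "chloride brine",
     "mechanism": "crevice corrosion", "remedy": "gasket redesign"},
    {"id": 3, "alloy": "carbon steel", "environment": "sour gas",
     "mechanism": "sulfide stress cracking", "remedy": "material upgrade"},
]
WEIGHTS = {"alloy": 0.4, "environment": 0.4, "mechanism": 0.2}

def similarity(query, case):
    """Weighted count of matching index attributes."""
    return sum(w for attr, w in WEIGHTS.items() if query.get(attr) == case.get(attr))

query = {"alloy": "carbon steel", "environment": "seawater", "mechanism": "uniform attack"}
best = max(CASE_BASE, key=lambda c: similarity(query, c))
print(f"most similar past case: #{best['id']} (suggested remedy: {best['remedy']})")
```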
APA, Harvard, Vancouver, ISO, and other styles
5

Nusser, Jeffrey K., Darryl J. Stimson, Eric Herzberg, and Charles A. Babish. "Data-Driven Corrosion Prevention and Control Decisions for the USAF." In SSPC 2017 Greencoat. SSPC, 2017. https://doi.org/10.5006/s2017-00041.

Full text
Abstract:
Corrosion significantly impacts safety, availability and sustainment costs of U.S. Air Force (AF) systems and equipment. System downtime due to corrosion maintenance decreases the availability of systems to perform their national defense mission and drives the need for more aircraft and associated logistics tail. In addition, the AF spends about $5.5 billion per year, about 21 percent of the annual AF maintenance budget, on corrosion maintenance. This cost exceeds the annual Pentagon budget for the campaign against the Islamic State. Because of these significant impacts, AF leaders need reliable maintenance data and analytical tools to make decisions to reduce the impact of corrosion maintenance. This paper proposes that AF maintenance leaders adopt a decision-making model that is built upon metrics developed by LMI to prioritize opportunities for data-driven corrosion maintenance decisions. The LMI metrics methodology uses top-down and bottom-up approaches to converge on an accurate estimate for corrosion-related availability and maintenance cost. The top-down approach starts with DoD-wide data systems and then uses a process of elimination to yield AF corrosion maintenance costs. The bottom-up approach aggregates labor and material cost data from maintenance records, using an algorithm or “recipe” developed jointly with AF maintenance experts, to yield availability and cost data. LMI bridges gaps between the top-down and bottom-up totals by applying statistically valid scaling factors. The resulting metrics feed a corrosion decision-making model that includes performance monitoring, corrosion problem identification, analysis of options, and selecting and launching solutions. The proposed decision-making model and metrics will enable stakeholders to make data-driven assessments of which subsystems and maintenance activities to investigate for potential corrosion maintenance improvements.
APA, Harvard, Vancouver, ISO, and other styles
6

Plambeck, Swantje, Gorschwin Fey, Jakob Schyga, Johannes Hinckeldeyn, and Jochen Kreutzfeldt. "Explaining Cyber-Physical Systems Using Decision Trees." In 2022 2nd International Workshop on Computation-Aware Algorithmic Design for Cyber-Physical Systems (CAADCPS). IEEE, 2022. http://dx.doi.org/10.1109/caadcps56132.2022.00006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Schecter, Aaron, Eric Bogert, and Nina Lauharatanahirun. "Algorithmic appreciation or aversion? The moderating effects of uncertainty on algorithmic decision making." In CHI '23: CHI Conference on Human Factors in Computing Systems. ACM, 2023. http://dx.doi.org/10.1145/3544549.3585908.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Haid, Charlotte, Alicia Lang, and Johannes Fottner. "Explaining algorithmic decisions: design guidelines for explanations in User Interfaces." In 14th International Conference on Applied Human Factors and Ergonomics (AHFE 2023). AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1003764.

Full text
Abstract:
Artificial Intelligence (AI)-based decision support is becoming a growing issue in manufacturing and logistics. Users of AI-based systems expect to understand the decisions made by the systems. In addition, users like workers or managers, but also works councils in companies, demand transparency in the use of AI. Given this background, AI research faces the challenge of making the decisions of algorithmic systems explainable. Algorithms, especially in the field of AI but also classical ones, do not provide an explanation for their decisions. To generate such explanations, new algorithms have been designed to explain the decisions of the other algorithms post hoc. This subfield is called explainable artificial intelligence (XAI). Methods like local interpretable model-agnostic explanations (LIME), Shapley additive explanations (SHAP) or layer-wise relevance propagation (LRP) can be applied. LIME is an algorithm that can explain the predictions of any classifier by learning an interpretable model around the prediction locally. In the case of image recognition, for example, a LIME algorithm can highlight the image areas based on which the algorithm arrived at its decision. Its authors even show that the algorithm can also come to a result based on the image caption. SHAP, a game theoretic approach that can be applied to the output of any machine learning model, connects optimal credit allocation with local explanations. It uses Shapley values as in game theory for the allocation. In XAI research, explanatory user interfaces and user interactions have hardly been studied. One of the most crucial factors to make a model understandable through explanations is the involvement of users in XAI. Human-computer interaction skills are needed in addition to technical expertise. According to Miller and Molnar, good explanations should be designed contrastively to explain why event A happened instead of another event B, rather than just emphasizing why event A occurred. In addition, it is important that explanations are limited to only one or two causes and are thus formulated selectively. The literature formulates four guidelines for explanations: use natural language, use various methods to explain, adapt to the mental models of users, and be responsive so that a user can ask follow-up questions. Existing explanations are often very mathematical, and deep knowledge of details is needed to understand them. In this paper, we present design guidelines to help make explanations of algorithms understandable and user-friendly. We use the example of AI-based algorithmic scheduling in logistics and show the importance of a comprehensive user interface in explaining decisions. In our use case, AI-based shift scheduling in logistics, where workers are assigned to workplaces based on their preferences, we designed a user interface to support transparency as well as explainability of the underlying algorithm and then evaluated it with various users and two different user interfaces. We show excerpts from the user interface and our explanations for the users and give recommendations for the creation of explanations in user interfaces.
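Since the paper relies on post hoc explainers such as LIME, the sketch below shows how a local LIME explanation might be generated for a tabular classifier before being rendered in a user interface. It assumes the third-party lime and scikit-learn packages and an invented shift-assignment task with made-up feature names; it is not the authors' system, and the calls reflect the commonly documented lime interface rather than anything specific to the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["preference_score", "qualification", "distance_km", "past_shifts"]
X = rng.random((500, 4))
# Synthetic "assign to workplace" labels driven mostly by the first two features.
y = (0.6 * X[:, 0] + 0.4 * X[:, 1] - 0.2 * X[:, 2] + 0.1 * rng.random(500) > 0.5).astype(int)

clf = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["not assigned", "assigned"], mode="classification")
explanation = explainer.explain_instance(X[0], clf.predict_proba, num_features=3)
# Feature/weight pairs that a UI could rephrase as a natural-language explanation.
print(explanation.as_list())
```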
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Ruotong, F. Maxwell Harper, and Haiyi Zhu. "Factors Influencing Perceived Fairness in Algorithmic Decision-Making." In CHI '20: CHI Conference on Human Factors in Computing Systems. ACM, 2020. http://dx.doi.org/10.1145/3313831.3376813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Blair, Kathryn, Pil Hansen, and Lora Oehlberg. "Participatory Art for Public Exploration of Algorithmic Decision-Making." In DIS '21: Designing Interactive Systems Conference 2021. ACM, 2021. http://dx.doi.org/10.1145/3468002.3468235.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Algorithmic decision systems"

1

Lewis, Dustin, Naz Modirzadeh, and Gabriella Blum. War-Algorithm Accountability. Harvard Law School Program on International Law and Armed Conflict, 2016. http://dx.doi.org/10.54813/fltl8789.

Full text
Abstract:
In War-Algorithm Accountability (August 2016), we introduce a new concept—war algorithms—that elevates algorithmically-derived “choices” and “decisions” to a, and perhaps the, central concern regarding technical autonomy in war. We thereby aim to shed light on and recast the discussion regarding “autonomous weapon systems” (AWS). We define “war algorithm” as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict. In introducing this concept, our foundational technological concern is the capability of a constructed system, without further human intervention, to help make and effectuate a “decision” or “choice” of a war algorithm. Distilled, the two core ingredients are an algorithm expressed in computer code and a suitably capable constructed system. Through that lens, we link international law and related accountability architectures to relevant technologies. We sketch a three-part (non-exhaustive) approach that highlights traditional and unconventional accountability avenues. We focus largely on international law because it is the only normative regime that purports—in key respects but with important caveats—to be both universal and uniform. In this way, international law is different from the myriad domestic legal systems, administrative rules, or industry codes that govern the development and use of technology in all other spheres. By not limiting our inquiry only to weapon systems, we take an expansive view, showing how the broad concept of war algorithms might be susceptible to regulation—and how those algorithms might already fit within the existing regulatory system established by international law.
APA, Harvard, Vancouver, ISO, and other styles
2

Tipton, Kelley, Brian F. Leas, Emilia Flores, et al. Impact of Healthcare Algorithms on Racial and Ethnic Disparities in Health and Healthcare. Agency for Healthcare Research and Quality (AHRQ), 2023. http://dx.doi.org/10.23970/ahrqepccer268.

Full text
Abstract:
Objectives. To examine the evidence on whether and how healthcare algorithms (including algorithm-informed decision tools) exacerbate, perpetuate, or reduce racial and ethnic disparities in access to healthcare, quality of care, and health outcomes, and examine strategies that mitigate racial and ethnic bias in the development and use of algorithms. Data sources. We searched published and grey literature for relevant studies published between January 2011 and February 2023. Based on expert guidance, we determined that earlier articles are unlikely to reflect current algorithms. We also hand-searched reference lists of relevant studies and reviewed suggestions from experts and stakeholders. Review methods. Searches identified 11,500 unique records. Using predefined criteria and dual review, we screened and selected studies to assess one or both Key Questions (KQs): (1) the effect of algorithms on racial and ethnic disparities in health and healthcare outcomes and (2) the effect of strategies or approaches to mitigate racial and ethnic bias in the development, validation, dissemination, and implementation of algorithms. Outcomes of interest included access to healthcare, quality of care, and health outcomes. We assessed studies’ methodologic risk of bias (ROB) using the ROBINS-I tool and piloted an appraisal supplement to assess racial and ethnic equity-related ROB. We completed a narrative synthesis and cataloged study characteristics and outcome data. We also examined four Contextual Questions (CQs) designed to explore the context and capture insights on practical aspects of potential algorithmic bias. CQ 1 examines the problem’s scope within healthcare. CQ 2 describes recently emerging standards and guidance on how racial and ethnic bias can be prevented or mitigated during algorithm development and deployment. CQ 3 explores stakeholder awareness and perspectives about the interaction of algorithms and racial and ethnic disparities in health and healthcare. We addressed these CQs through supplemental literature reviews and conversations with experts and key stakeholders. For CQ 4, we conducted an in-depth analysis of a sample of six algorithms that have not been widely evaluated before in the published literature to better understand how their design and implementation might contribute to disparities. Results. Fifty-eight studies met inclusion criteria, of which three were included for both KQs. One study was a randomized controlled trial, and all others used cohort, pre-post, or modeling approaches. The studies included numerous types of clinical assessments: need for intensive care or high-risk care management; measurement of kidney or lung function; suitability for kidney or lung transplant; risk of cardiovascular disease, stroke, lung cancer, prostate cancer, postpartum depression, or opioid misuse; and warfarin dosing. We found evidence suggesting that algorithms may: (a) reduce disparities (i.e., revised Kidney Allocation System, prostate cancer screening tools); (b) perpetuate or exacerbate disparities (e.g., estimated glomerular filtration rate [eGFR] for kidney function measurement, cardiovascular disease risk assessments); and/or (c) have no effect on racial or ethnic disparities. Algorithms for which mitigation strategies were identified are included in KQ 2. 
We identified six types of strategies often used to mitigate the potential of algorithms to contribute to disparities: removing an input variable; replacing a variable; adding one or more variables; changing or diversifying the racial and ethnic composition of the patient population used to train or validate a model; creating separate algorithms or thresholds for different populations; and modifying the statistical or analytic techniques used by an algorithm. Most mitigation efforts improved proximal outcomes (e.g., algorithmic calibration) for targeted populations, but it is more challenging to infer or extrapolate effects on longer term outcomes, such as racial and ethnic disparities. The scope of racial and ethnic bias related to algorithms and their application is difficult to quantify, but it clearly extends across the spectrum of medicine. Regulatory, professional, and corporate stakeholders are undertaking numerous efforts to develop standards for algorithms, often emphasizing the need for transparency, accountability, and representativeness. Conclusions. Algorithms have been shown to potentially perpetuate, exacerbate, and sometimes reduce racial and ethnic disparities. Disparities were reduced when race and ethnicity were incorporated into an algorithm to intentionally tackle known racial and ethnic disparities in resource allocation (e.g., kidney transplant allocation) or disparities in care (e.g., prostate cancer screening that historically led to Black men receiving more low-yield biopsies). It is important to note that in such cases the rationale for using race and ethnicity was clearly delineated and did not conflate race and ethnicity with ancestry and/or genetic predisposition. However, when algorithms include race and ethnicity without clear rationale, they may perpetuate the incorrect notion that race is a biologic construct and contribute to disparities. Finally, some algorithms may reduce or perpetuate disparities without containing race and ethnicity as an input. Several modeling studies showed that applying algorithms out of context of original development (e.g., illness severity scores used for crisis standards of care) could perpetuate or exacerbate disparities. On the other hand, algorithms may also reduce disparities by standardizing care and reducing opportunities for implicit bias (e.g., Lung Allocation Score for lung transplantation). Several mitigation strategies have been shown to potentially reduce the contribution of algorithms to racial and ethnic disparities. Results of mitigation efforts are highly context specific, relating to unique combinations of algorithm, clinical condition, population, setting, and outcomes. Important future steps include increasing transparency in algorithm development and implementation, increasing diversity of research and leadership teams, engaging diverse patient and community groups in the development to implementation lifecycle, promoting stakeholder awareness (including patients) of potential algorithmic risk, and investing in further research to assess the real-world effect of algorithms on racial and ethnic disparities before widespread implementation.
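To make the first of the mitigation strategies listed above concrete, the following Python sketch (not taken from the report; the dataset, column names, and model choice are all hypothetical) fits a risk model without a race/ethnicity input variable and then checks calibration-in-the-large for each subgroup, one of the proximal outcomes the review mentions:

    # Hypothetical sketch of one mitigation strategy from the review:
    # fit the model without the race/ethnicity column, then compare
    # mean predicted risk against observed event rates per subgroup.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def subgroup_calibration(df, features, label_col="outcome", group_col="race_ethnicity"):
        """Fit a model on `features` only and report calibration-in-the-large per subgroup."""
        X, y = df[features], df[label_col]
        model = LogisticRegression(max_iter=1000).fit(X, y)
        df = df.assign(pred=model.predict_proba(X)[:, 1])
        return df.groupby(group_col).agg(mean_pred=("pred", "mean"),
                                         event_rate=(label_col, "mean"))

    # Example usage with a synthetic cohort (all names and values are placeholders):
    rng = np.random.default_rng(0)
    n = 1000
    cohort = pd.DataFrame({
        "age": rng.normal(60, 10, n),
        "creatinine": rng.normal(1.1, 0.3, n),
        "race_ethnicity": rng.choice(["group_a", "group_b"], n),
    })
    cohort["outcome"] = (rng.random(n) < 1 / (1 + np.exp(-(cohort["age"] - 60) / 10))).astype(int)
    print(subgroup_calibration(cohort, features=["age", "creatinine"]))
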
APA, Harvard, Vancouver, ISO, and other styles
3

Buchanan, Ben. The AI Triad and What It Means for National Security Strategy. Center for Security and Emerging Technology, 2020. http://dx.doi.org/10.51593/20200021.

Full text
Abstract:
One sentence summarizes the complexities of modern artificial intelligence: Machine learning systems use computing power to execute algorithms that learn from data. This AI triad of computing power, algorithms, and data offers a framework for decision-making in national security policy.
APA, Harvard, Vancouver, ISO, and other styles
4

Ludwig, Jens, and Sendhil Mullainathan. Fragile Algorithms and Fallible Decision-Makers: Lessons from the Justice System. National Bureau of Economic Research, 2021. http://dx.doi.org/10.3386/w29267.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kulhandjian, Hovannes. Smart Robot Design and Implementation to Assist Pedestrian Road Crossing. Mineta Transportation Institute, 2024. http://dx.doi.org/10.31979/mti.2024.2353.

Full text
Abstract:
This research focuses on designing and developing a smart robot to assist pedestrians with road crossings. Pedestrian safety is a major concern, as highlighted by the high annual rates of fatalities and injuries. In 2020, the United States recorded 6,516 pedestrian fatalities and approximately 55,000 injuries, with children under 16 being especially vulnerable. This project aims to address this need by offering an innovative solution that prioritizes real-time detection and intelligent decision-making at intersections. Unlike existing studies that rely on traffic light infrastructure, our approach accurately identifies both vehicles and pedestrians at intersections, creating a comprehensive safety system. Our strategy involves implementing advanced Machine Learning (ML) algorithms for real-time detection of vehicles, pedestrians, and cyclists. These algorithms, executed in Python, leverage data from LiDAR and video cameras to assess road conditions and guide pedestrians and cyclists safely through intersections. The smart robot, powered by ML insights, will make intelligent decisions to ensure a safer and more secure road crossing experience for pedestrians and cyclists. This project is a pioneering effort in holistic pedestrian safety, ensuring robust detection capabilities and intelligent decision-making.
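As a rough illustration of the kind of intersection decision logic the abstract describes (the class names, thresholds, and safety margins below are invented for the sketch and are not the study's actual implementation), a minimal Python rule might fuse camera and LiDAR detections and only signal a crossing when no vehicle is approaching too quickly:

    # Hypothetical sketch of a crossing-decision rule: fuse detections and
    # signal "safe to cross" only when no vehicle threatens the crosswalk.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str         # e.g., "vehicle", "pedestrian", "cyclist"
        distance_m: float  # range to the object, e.g., from LiDAR
        speed_mps: float   # estimated approach speed

    def safe_to_cross(detections, min_gap_s=8.0, stop_distance_m=3.0):
        """Return True only if no approaching vehicle would reach the crosswalk
        within min_gap_s seconds (the margins here are illustrative)."""
        for d in detections:
            if d.label != "vehicle":
                continue
            if d.speed_mps <= 0.1:                   # effectively stopped
                if d.distance_m < stop_distance_m:   # stopped on the crosswalk
                    return False
                continue
            if d.distance_m / d.speed_mps < min_gap_s:
                return False
        return True

    # Example: a car 40 m away at 15 m/s arrives in ~2.7 s, so crossing is unsafe.
    print(safe_to_cross([Detection("vehicle", 40.0, 15.0)]))  # False
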
APA, Harvard, Vancouver, ISO, and other styles
6

Lee, W. S., Victor Alchanatis, and Asher Levi. Innovative yield mapping system using hyperspectral and thermal imaging for precision tree crop management. United States Department of Agriculture, 2014. http://dx.doi.org/10.32747/2014.7598158.bard.

Full text
Abstract:
Original objectives and revisions – The original overall objective was to develop, test and validate a prototype yield mapping system for unit area to increase yield and profit for tree crops. Specific objectives were: (1) to develop a yield mapping system for a static situation, using hyperspectral and thermal imaging independently, (2) to integrate hyperspectral and thermal imaging for improved yield estimation by combining thermal images with hyperspectral images to improve fruit detection, and (3) to expand the system to a mobile platform for a stop-measure-and-go situation. There were no major revisions to the overall objective; however, several revisions were made to the specific objectives. The revised specific objectives were: (1) to develop a yield mapping system for a static situation, using color and thermal imaging independently, (2) to integrate color and thermal imaging for improved yield estimation by combining thermal images with color images to improve fruit detection, and (3) to expand the system to an autonomous mobile platform for a continuous-measure situation. Background, major conclusions, solutions and achievements – Yield mapping is considered an initial step for applying precision agriculture technologies. Although many yield mapping systems have been developed for agronomic crops, mapping the yield of tree crops remains a difficult task. In this project, an autonomous immature fruit yield mapping system was developed. The system could detect and count the number of fruit at early growth stages of citrus fruit so that farmers could apply site-specific management based on the maps. There were two sub-systems: a navigation system and an imaging system. Robot Operating System (ROS) was the backbone for developing the navigation system using an unmanned ground vehicle (UGV). An inertial measurement unit (IMU), wheel encoders and a GPS were integrated using an extended Kalman filter to provide reliable and accurate localization information. A LiDAR was added to support simultaneous localization and mapping (SLAM) algorithms. The color camera on a Microsoft Kinect was used to detect citrus trees, and a new machine vision algorithm was developed to enable autonomous navigation in the citrus grove. A multimodal imaging system, which consisted of two color cameras and a thermal camera, was carried by the vehicle for video acquisition. A novel image registration method was developed for combining color and thermal images and matching fruit in both images, which achieved pixel-level accuracy. A new Color-Thermal Combined Probability (CTCP) algorithm was created to effectively fuse information from the color and thermal images to classify potential image regions into fruit and non-fruit classes. Algorithms were also developed to integrate image registration, information fusion and fruit classification and detection into a single step for real-time processing. The imaging system achieved a precision rate of 95.5% and a recall rate of 90.4% on immature green citrus fruit detection, which was a great improvement compared to previous studies. Implications – The development of the immature green fruit yield mapping system will help farmers make early decisions for planning operations and marketing so that high yield and profit can be achieved.
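The abstract does not spell out how the CTCP algorithm weights the two modalities, so the following Python sketch is only an assumed illustration of color-thermal probability fusion for fruit/non-fruit classification, not the authors' method:

    # Illustrative color-thermal probability fusion for fruit/non-fruit
    # classification. The weighting scheme is an assumption for the sketch,
    # not the CTCP algorithm described in the report.
    import numpy as np

    def fuse_fruit_probability(p_color, p_thermal, w_color=0.6, w_thermal=0.4):
        """Combine per-region fruit probabilities from registered color and thermal
        images with a simple weighted average (weights are illustrative)."""
        p_color = np.asarray(p_color, dtype=float)
        p_thermal = np.asarray(p_thermal, dtype=float)
        return w_color * p_color + w_thermal * p_thermal

    def classify_regions(p_color, p_thermal, threshold=0.5):
        """Label each candidate region as fruit (True) or non-fruit (False)."""
        return fuse_fruit_probability(p_color, p_thermal) >= threshold

    # Three candidate regions: the second and third score as likely fruit.
    print(classify_regions([0.2, 0.9, 0.4], [0.3, 0.8, 0.7]))  # [False  True  True]
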
APA, Harvard, Vancouver, ISO, and other styles
7

Pasupuleti, Murali Krishna. Optimal Control and Reinforcement Learning: Theory, Algorithms, and Robotics Applications. National Education Services, 2025. https://doi.org/10.62311/nesx/rriv225.

Full text
Abstract:
Optimal control and reinforcement learning (RL) are foundational techniques for intelligent decision-making in robotics, automation, and AI-driven control systems. This research explores the theoretical principles, computational algorithms, and real-world applications of optimal control and reinforcement learning, emphasizing their convergence for scalable and adaptive robotic automation. Key topics include dynamic programming, Hamilton-Jacobi-Bellman (HJB) equations, policy optimization, model-based RL, actor-critic methods, and deep RL architectures. The study also examines trajectory optimization, model predictive control (MPC), Lyapunov stability, and hierarchical RL for ensuring safe and robust control in complex environments. Through case studies in self-driving vehicles, autonomous drones, robotic manipulation, healthcare robotics, and multi-agent systems, this research highlights the trade-offs between model-based and model-free approaches, as well as the challenges of scalability, sample efficiency, hardware acceleration, and ethical AI deployment. The findings underscore the importance of hybrid RL-control frameworks, real-world RL training, and policy optimization techniques in advancing robotic intelligence and autonomous decision-making. Keywords: Optimal control, reinforcement learning, model-based RL, model-free RL, dynamic programming, policy optimization, Hamilton-Jacobi-Bellman equations, actor-critic methods, deep reinforcement learning, trajectory optimization, model predictive control, Lyapunov stability, hierarchical RL, multi-agent RL, robotics, self-driving cars, autonomous drones, robotic manipulation, AI-driven automation, safety in RL, hardware acceleration, sample efficiency, hybrid RL-control frameworks, scalable AI.
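As a generic illustration of the dynamic-programming foundation mentioned above (the tabular toy problem is invented for the sketch, not an example from the monograph), value iteration in Python applies the Bellman backup V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ] until the value function converges:

    # Minimal value-iteration sketch on an invented 2-state, 2-action MDP.
    import numpy as np

    n_states, gamma = 2, 0.9
    P = np.array([                      # P[a, s, s'] transition probabilities
        [[0.8, 0.2], [0.1, 0.9]],       # action 0
        [[0.5, 0.5], [0.6, 0.4]],       # action 1
    ])
    R = np.array([                      # R[s, a] immediate rewards
        [1.0, 0.0],
        [0.0, 2.0],
    ])

    V = np.zeros(n_states)
    for _ in range(500):
        Q = R + gamma * np.einsum("asn,n->sa", P, V)   # expected backed-up values
        V_new = Q.max(axis=1)                          # greedy Bellman backup
        if np.max(np.abs(V_new - V)) < 1e-8:
            V = V_new
            break
        V = V_new

    policy = Q.argmax(axis=1)
    print("Optimal values:", np.round(V, 3), "Greedy policy:", policy)
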
APA, Harvard, Vancouver, ISO, and other styles
8

Lagutin, Andrey, and Tatyana Sidorina. SYSTEM OF FORMATION OF PROFESSIONAL AND PERSONAL SELF-GOVERNMENT AMONG CADETS OF MILITARY INSTITUTES. Science and Innovation Center Publishing House, 2020. http://dx.doi.org/10.12731/self-government.

Full text
Abstract:
In the course of their professional duties, officers of the VNG of the Russian Federation often find themselves in difficult, emotionally stressful situations involving the use of weapons as a particularly dangerous means of destruction. An officer's right to use a weapon makes him responsible for its use, and it therefore requires a balanced, optimal decision, taken amid risk and rapidly unfolding events, in which no mistake is permissible, since the price of a mistake can be someone's life. At such a moment it is essential that the officer has stable decision-making skills regarding the use of weapons, and this demands the ability to manage not only subordinates and the situation, but also himself. The growing complexity of military-professional activity, reflected in the need to make command decisions quickly and accurately, heightens the social responsibility of an officer in command of a unit and leads him to recognize his singular personal and professional responsibility, since the ability to govern oneself makes it possible to achieve a positive result for the unit in the DBA. This underscores the commander's need to manage himself as a "system" that manages others. By forming skills of self-control, patience, and compassion, and by mastering algorithms for making managerial decisions, the cycle of managerial functions, and so on, a person comes to the conviction that before effectively managing others, it is necessary to learn how to manage oneself. The required level of personal and professional maturity can be formed through purposeful self-management, which determines the special role of professional and personal self-management in the training of future officers.
APA, Harvard, Vancouver, ISO, and other styles
9

Hashemi, Sara, Hengameh Ferdosian, and Hadi Zamanian. Accuracy of artificial intelligence in CT interpretation in covid-19: a systematic review protocol for systematic review and meta-analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, 2021. http://dx.doi.org/10.37766/inplasy2021.10.0048.

Full text
Abstract:
Review question / Objective: The aim of this systematic review is to compare the accuracy of artificial intelligence algorithms with that of radiologist panels in CT interpretation in COVID-19. Condition being studied: COVID-19 was reported as the cause of an outbreak of pneumonia at the end of 2019. One of its main complications is pulmonary involvement, which is diagnosed predominantly by CT scan. Because of the growing number of these patients, and the need to serve patients in remote areas, CT interpretation places a heavy burden on radiologists. Artificial intelligence algorithms have therefore become critical, time-saving decision-support systems for these patients.
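For orientation, diagnostic-accuracy reviews of this kind typically compare sensitivity and specificity computed from 2x2 confusion matrices; the Python sketch below shows that calculation on invented counts (the numbers are placeholders, not results from any included study):

    # Illustrative computation of the accuracy metrics such a review pools.
    def diagnostic_metrics(tp, fp, fn, tn):
        """Compute basic test-accuracy metrics from a 2x2 confusion matrix."""
        sensitivity = tp / (tp + fn)   # true positive rate
        specificity = tn / (tn + fp)   # true negative rate
        accuracy = (tp + tn) / (tp + fp + fn + tn)
        return {"sensitivity": sensitivity, "specificity": specificity, "accuracy": accuracy}

    # Hypothetical AI reader vs. radiologist panel on the same 200 CT scans:
    print("AI model:    ", diagnostic_metrics(tp=85, fp=12, fn=15, tn=88))
    print("Radiologists:", diagnostic_metrics(tp=90, fp=8, fn=10, tn=92))
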
APA, Harvard, Vancouver, ISO, and other styles
10

Baader, Franz, and Ralf Küsters. Matching Concept Descriptions with Existential Restrictions Revisited. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.98.

Full text
Abstract:
An abridged version of this technical report has been submitted to KR 2000. Matching of concepts against patterns is a new inference task in Description Logics, originally motivated by applications of the CLASSIC system. Consequently, work on this problem has until now mostly been concerned with sublanguages of the CLASSIC language, which does not allow for existential restrictions. Motivated by an application in chemical process engineering that requires a description language with existential restrictions, this paper investigates the matching problem in Description Logics with existential restrictions. It turns out that existential restrictions make matching more complex in two respects. First, whereas matching in sublanguages of CLASSIC is polynomial, deciding the existence of matchers is an NP-complete problem in the presence of existential restrictions. Second, whereas in sublanguages of CLASSIC solvable matching problems have a unique least matcher, this is not the case for languages with existential restrictions. Thus, it is not a priori clear which of the (possibly infinitely many) matchers should be returned by a matching algorithm. After determining the complexity of the decision problem, the present paper first investigates the question of what constitutes an 'interesting' set of matchers, and then describes algorithms for computing these sets for the languages EL (which allows for conjunction and existential restrictions) and ALE (which additionally allows for value restrictions, primitive negation, and the bottom concept).
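As a hedged illustration of what such a matching problem looks like (the concrete concepts below are invented, not taken from the report), one standard formulation, matching modulo subsumption, asks for a substitution $\sigma$ of the pattern's variables such that $C \sqsubseteq \sigma(D)$. For instance, in EL one might have

    C = \exists \mathit{hasPart}.(\mathit{Valve} \sqcap \mathit{Metallic}), \qquad D = \exists \mathit{hasPart}.X,

where both $\sigma_1(X) = \mathit{Valve}$ and $\sigma_2(X) = \mathit{Valve} \sqcap \mathit{Metallic}$ satisfy $C \sqsubseteq \sigma_i(D)$, which is precisely why a matching algorithm must decide which of the many possible matchers to return.
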
APA, Harvard, Vancouver, ISO, and other styles