Journal articles on the topic 'Trustworthiness of AI'


Consult the top 50 journal articles for your research on the topic 'Trustworthiness of AI.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Bisconti, Piercosma, Letizia Aquilino, Antonella Marchetti, and Daniele Nardi. "A Formal Account of Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 131–40. http://dx.doi.org/10.1609/aies.v7i1.31624.

Abstract:
This paper proposes a formal account of AI trustworthiness, connecting both intrinsic and perceived trustworthiness in an operational schematization. We argue that trustworthiness extends beyond the inherent capabilities of an AI system to include significant influences from observers' perceptions, such as perceived transparency, agency locus, and human oversight. While the concept of perceived trustworthiness is discussed in the literature, few attempts have been made to connect it with the intrinsic trustworthiness of AI systems. Our analysis introduces a novel schematization to quantify tru
2

Sen, Rishika, Shrihari Vasudevan, Ricardo Britto, and Mj Prasath. "Ascertaining trustworthiness of AI systems in telecommunications." ITU Journal on Future and Evolving Technologies 5, no. 4 (2024): 503–14. https://doi.org/10.52953/wibx7049.

Abstract:
With the rapid uptake of Artificial Intelligence (AI) in the Telecommunications (Telco) industry and the pivotal role AI is expected to play in future generation technologies (e.g., 5G, 5G Advanced and 6G), establishing the trustworthiness of AI used in Telco becomes critical. Trustworthy Artificial Intelligence (TWAI) guidelines need to be implemented to establish trust in AI-powered products and services through compliance with these guidelines. This paper focuses on measuring compliance with such guidelines. This paper proposes a Large Language Model (LLM)-driven approach to measure TWAI compli
3

Schmitz, Anna, Maram Akila, Dirk Hecker, Maximilian Poretschkin, and Stefan Wrobel. "The why and how of trustworthy AI." at - Automatisierungstechnik 70, no. 9 (2022): 793–804. http://dx.doi.org/10.1515/auto-2022-0012.

Abstract:
Artificial intelligence is increasingly penetrating industrial applications as well as areas that affect our daily lives. As a consequence, there is a need for criteria to validate whether the quality of AI applications is sufficient for their intended use. Both in the academic community and societal debate, an agreement has emerged under the term “trustworthiness” as the set of essential quality requirements that should be placed on an AI application. At the same time, the question of how these quality requirements can be operationalized is to a large extent still open. In this paper
4

Vashistha, Ritwik, and Arya Farahi. "U-trustworthy Models. Reliability, Competence, and Confidence in Decision-Making." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (2024): 19956–64. http://dx.doi.org/10.1609/aaai.v38i18.29972.

Abstract:
With growing concerns regarding bias and discrimination in predictive models, the AI community has increasingly focused on assessing AI system trustworthiness. Conventionally, trustworthy AI literature relies on the probabilistic framework and calibration as prerequisites for trustworthiness. In this work, we depart from this viewpoint by proposing a novel trust framework inspired by the philosophy literature on trust. We present a precise mathematical definition of trustworthiness, termed U-trustworthiness, specifically tailored for a subset of tasks aimed at maximizing a utility function. We
5

Bradshaw, Jeffrey M., Larry Bunch, Michael Prietula, Edward Queen, Andrzej Uszok, and Kristen Brent Venable. "From Bench to Bedside: Implementing AI Ethics as Policies for AI Trustworthiness." Proceedings of the AAAI Symposium Series 4, no. 1 (2024): 102–5. http://dx.doi.org/10.1609/aaaiss.v4i1.31778.

Abstract:
It is well known that successful human-AI collaboration depends on the perceived trustworthiness of the AI. We argue that a key to securing trust in such collaborations is ensuring that the AI competently addresses ethics' foundational role in engagements. Specifically, developers need to identify, address, and implement mechanisms for accommodating ethical components of AI choices. We propose an approach that instantiates ethics semantically as ontology-based moral policies. To accommodate the wide variation and interpretation of ethics, we capture such variations into ethics sets, which are
6

Alzubaidi, Laith, Aiman Al-Sabaawi, Jinshuai Bai, et al. "Towards Risk-Free Trustworthy Artificial Intelligence: Significance and Requirements." International Journal of Intelligent Systems 2023 (October 26, 2023): 1–41. http://dx.doi.org/10.1155/2023/4459198.

Abstract:
Given the tremendous potential and influence of artificial intelligence (AI) and algorithmic decision-making (DM), these systems have found wide-ranging applications across diverse fields, including education, business, healthcare industries, government, and justice sectors. While AI and DM offer significant benefits, they also carry the risk of unfavourable outcomes for users and society. As a result, ensuring the safety, reliability, and trustworthiness of these systems becomes crucial. This article aims to provide a comprehensive review of the synergy between AI and DM, focussing on the imp
7

Kafali, Efi, Davy Preuveneers, Theodoros Semertzidis, and Petros Daras. "Defending Against AI Threats with a User-Centric Trustworthiness Assessment Framework." Big Data and Cognitive Computing 8, no. 11 (2024): 142. http://dx.doi.org/10.3390/bdcc8110142.

Abstract:
This study critically examines the trustworthiness of widely used AI applications, focusing on their integration into daily life, often without users fully understanding the risks or how these threats might affect them. As AI apps become more accessible, users tend to trust them due to their convenience and usability, frequently overlooking critical issues such as security, privacy, and ethics. To address this gap, we introduce a user-centric framework that enables individuals to assess the trustworthiness of AI applications based on their own experiences and perceptions. The framework evaluat
8

Mentzas, Gregoris, Mattheos Fikardos, Katerina Lepenioti, and Dimitris Apostolou. "Exploring the landscape of trustworthy artificial intelligence: Status and challenges." Intelligent Decision Technologies 18, no. 2 (2024): 837–54. http://dx.doi.org/10.3233/idt-240366.

Abstract:
Artificial Intelligence (AI) has pervaded everyday life, reshaping the landscape of business, economy, and society through the alteration of interactions and connections among stakeholders and citizens. Nevertheless, the widespread adoption of AI presents significant risks and hurdles, sparking apprehension regarding the trustworthiness of AI systems by humans. Lately, numerous governmental entities have introduced regulations and principles aimed at fostering trustworthy AI systems, while companies, research institutions, and public sector organizations have released their own sets of princip
9

Vadlamudi, Siddhartha. "Enabling Trustworthiness in Artificial Intelligence - A Detailed Discussion." Engineering International 3, no. 2 (2015): 105–14. http://dx.doi.org/10.18034/ei.v3i2.519.

Abstract:
Artificial intelligence (AI) offers numerous opportunities to contribute to the prosperity of people and the stability of economies and society, yet it also raises a variety of novel moral, legal, social, and technological challenges. Trustworthy AI (TAI) rests on the idea that trust forms the foundation of societies, economies, and sustainable development, and that people, organizations, and societies can therefore only realize the full potential of AI if trust can be established in its development, deployment, and use. The risks of unintended and negative
10

Ajayi, Wumi, Adekoya Damola Felix, Ojarikre Oghenenerowho Princewill, and Fajuyigbe Gbenga Joseph. "Software Engineering’s Key Role in AI Content Trustworthiness." International Journal of Research and Scientific Innovation XI, no. IV (2024): 183–201. http://dx.doi.org/10.51244/ijrsi.2024.1104014.

Abstract:
Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. It can also be defined as the science and engineering of making intelligent machines, especially intelligent computer programs. In recent decades, there has been a discernible surge in the focus of the scientific and government sectors on reliable AI. The International Organization for Standardization, which focuses on technical, industrial, and commercial standardization, has devised several strategies to promote trust in AI systems, with an emphasis on fairness, transparen
11

Abbott, Ryan, and Brinson S. Elliott. "Putting the Artificial Intelligence in Alternative Dispute Resolution." Amicus Curiae 4, no. 3 (2023): 685–706. http://dx.doi.org/10.14296/ac.v4i3.5627.

Abstract:
This article argues that the evolving regulatory and governance environment for artificial intelligence (AI) will significantly impact alternative dispute resolution (ADR). Very recently, AI regulation has emerged as a pressing international policy issue, with jurisdictions engaging in a sort of regulatory arms race. In the same way that existing ADR regulations impact the use of AI in ADR, so too will new AI regulations impact ADR, among other reasons, because ADR is already utilizing AI and will increasingly utilize AI in the future. Appropriate AI regulations should thus benefit ADR, as the
12

Kuipers, Benjamin. "AI and Society: Ethics, Trust, and Cooperation." Communications of the ACM 66, no. 8 (2023): 39–42. http://dx.doi.org/10.1145/3583134.

13

Avin, Shahar, Haydn Belfield, Miles Brundage, et al. "Filling gaps in trustworthy development of AI." Science 374, no. 6573 (2021): 1327–29. http://dx.doi.org/10.1126/science.abi7176.

14

Xin, Ruyue, Jingye Wang, Peng Chen, and Zhiming Zhao. "Trustworthy AI-based Performance Diagnosis Systems for Cloud Applications: A Review." ACM Computing Surveys 57, no. 5 (2025): 1–37. https://doi.org/10.1145/3701740.

Abstract:
Performance diagnosis systems are defined as detecting abnormal performance phenomena and play a crucial role in cloud applications. An effective performance diagnosis system is often developed based on artificial intelligence (AI) approaches, which can be summarized into a general framework from data to models. However, the AI-based framework has potential hazards that could degrade the user experience and trust. For example, a lack of data privacy may compromise the security of AI models, and low robustness can be hard to apply in complex cloud environments. Therefore, defining the requireme
15

Paulsen, Jens Erik. "AI, Trustworthiness, and the Digital Dirty Harry Problem." Nordic Journal of Studies in Policing 8, no. 02 (2021): 1–19. http://dx.doi.org/10.18261/issn.2703-7045-2021-02-02.

16

Kundu, Shinjini. "Measuring trustworthiness is crucial for medical AI tools." Nature Human Behaviour 7, no. 11 (2023): 1812–13. http://dx.doi.org/10.1038/s41562-023-01711-9.

17

Kshirsagar, Meghana, Krishn Kumar Gupt, Gauri Vaidya, Conor Ryan, Joseph P. Sullivan, and Vivek Kshirsagar. "Insights Into Incorporating Trustworthiness and Ethics in AI Systems With Explainable AI." International Journal of Natural Computing Research 11, no. 1 (2022): 1–23. http://dx.doi.org/10.4018/ijncr.310006.

Abstract:
Over the past seven decades since the advent of artificial intelligence (AI) technology, researchers have demonstrated and deployed systems incorporating AI in various domains. The absence of model explainability in critical systems such as medical AI and credit risk assessment among others has led to neglect of key ethical and professional principles which can cause considerable harm. With explainability methods, developers can check their models beyond mere performance and identify errors. This leads to increased efficiency in time and reduces development costs. The article summarizes that s
18

Serafimova, Silviya. "Questioning the Role of Moral AI as an Adviser within the Framework of Trustworthiness Ethics." Filosofiya-Philosophy 30, no. 4 (2021): 402–12. http://dx.doi.org/10.53656/phil2021-04-07.

Abstract:
The main objective of this article is to demonstrate why, despite the growing interest in justifying AI’s trustworthiness, one can argue for AI’s reliability. By analyzing why trustworthiness ethics in Nickel’s sense provides some well-grounded hints for rethinking the rational, affective and normative accounts of trust with respect to AI, I examine some concerns about the trustworthiness of Savulescu and Maslen’s model of moral AI as an adviser. Specifically, I tackle one of its exemplifications regarding Klincewicz’s hypothetical scenario of John, which is refracted through the lens of the HLEG’s
19

Hohma, Ellen, and Christoph Lütge. "From Trustworthy Principles to a Trustworthy Development Process: The Need and Elements of Trusted Development of AI Systems." AI 4, no. 4 (2023): 904–26. http://dx.doi.org/10.3390/ai4040046.

Abstract:
The current endeavor of moving AI ethics from theory to practice can frequently be observed in academia and industry and indicates a major achievement in the theoretical understanding of responsible AI. Its practical application, however, currently poses challenges, as mechanisms for translating the proposed principles into easily feasible actions are often considered unclear and not ready for practice. In particular, a lack of uniform, standardized approaches that are aligned with regulatory provisions is often highlighted by practitioners as a major drawback to the practical realization of A
20

Wang, Bijun, Onur Asan, and Mo Mansouri. "What May Impact Trustworthiness of AI in Digital Healthcare: Discussion from Patients’ Viewpoint." Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care 12, no. 1 (2023): 5–10. http://dx.doi.org/10.1177/2327857923121001.

Abstract:
The healthcare industry is undergoing a transformation of traditional medical relationships from human-physician interactions to digital healthcare focusing on physician-AI-patient interactions. Patients’ trustworthiness is the cornerstone of adopting new technologies expounding the reliability, integrity, and ability of AI-based systems and devices to provide an accurate and safe healthcare environment. The main objective of this study is to investigate the various factors that influence patients’ trustworthiness in AI-based systems and devices, taking into account differences in patients’ ex
21

Jansaenroj, Krit. "Attitude of Millennials and Generation Z towards Artificial Intelligence in Surgery." International Journal of Advanced Research 10, no. 7 (2022): 921–26. http://dx.doi.org/10.21474/ijar01/15114.

Abstract:
Because of its increasing ability to turn ambiguity and complexity in data into actionable, though imperfect, clinical choices or suggestions, artificial intelligence (AI) has the potential to change health care practices. Trust is the only mechanism that influences physicians' use and adoption of AI in the growing interaction between humans and AI. Trust is a psychological process that enables people to deal with ambiguity in what they know and do not know. The purpose of this online survey was to determine the relationship between age groups, familiarity, and trustworthiness present towards AI
22

Mattioli, Juliette, Martin Gonzalez, Lucas Mattioli, Karla Quintero, and Henri Sohier. "Leveraging Tropical Algebra to Assess Trustworthy AI." Proceedings of the AAAI Symposium Series 4, no. 1 (2024): 81–88. http://dx.doi.org/10.1609/aaaiss.v4i1.31775.

Abstract:
Given the complexity of the application domain, the qualitative and quantifiable nature of the concepts involved, the wide heterogeneity and granularity of trustworthy attributes, and in some cases the non-comparability of the latter, assessing the trustworthiness of AI-based systems is a challenging process. In order to overcome these challenges, the Confiance.ai program proposes an innovative solution based on a Multi-Criteria Decision Aiding (MCDA) methodology. This approach involves several stages: framing trustworthiness as a set of well-defined attributes, exploring attributes to determi
23

Nayak, Bhabani Sankar. "Robustness and Trustworthiness in AI Systems: A Technical Perspective." International Journal of Research in Computer Applications and Information Technology 8, no. 1 (2025): 1849–62. https://doi.org/10.34218/ijrcait_08_01_135.

24

Ucar, Aysegul, Mehmet Karakose, and Necim Kırımça. "Artificial Intelligence for Predictive Maintenance Applications: Key Components, Trustworthiness, and Future Trends." Applied Sciences 14, no. 2 (2024): 898. http://dx.doi.org/10.3390/app14020898.

Abstract:
Predictive maintenance (PdM) is a policy applying data and analytics to predict when one of the components in a real system has been destroyed, and some anomalies appear so that maintenance can be performed before a breakdown takes place. Using cutting-edge technologies like data analytics and artificial intelligence (AI) enhances the performance and accuracy of predictive maintenance systems and increases their autonomy and adaptability in complex and dynamic working environments. This paper reviews the recent developments in AI-based PdM, focusing on key components, trustworthiness, and futu
25

Azzam, Tarek. "Artificial intelligence and validity." New Directions for Evaluation 2023, no. 178-179 (2023): 85–95. http://dx.doi.org/10.1002/ev.20565.

Abstract:
This article explores the interaction between artificial intelligence (AI) and validity and identifies areas where AI can help build validity arguments, and where AI might not be ready to contribute to our work in establishing validity. The validity of claims made in an evaluation is critical to the field, since it highlights the strengths and limitations of findings and can contribute to the utilization of the evaluation. Within this article, validity will be discussed within two broad categories: quantitative validity and qualitative trustworthiness. Within these categories, there ar
26

Fehr, Jana, Giovanna Jaramillo-Gutierrez, Luis Oala, et al. "Piloting A Survey-Based Assessment of Transparency and Trustworthiness with Three Medical AI Tools." Healthcare 10, no. 10 (2022): 1923. http://dx.doi.org/10.3390/healthcare10101923.

Abstract:
Artificial intelligence (AI) offers the potential to support healthcare delivery, but poorly trained or validated algorithms bear risks of harm. Ethical guidelines stated transparency about model development and validation as a requirement for trustworthy AI. Abundant guidance exists to provide transparency through reporting, but poorly reported medical AI tools are common. To close this transparency gap, we developed and piloted a framework to quantify the transparency of medical AI tools with three use cases. Our framework comprises a survey to report on the intended use, training and valida
27

Slosser, Jacob Livingston, Birgit Aasa, and Henrik Palmer Olsen. "Trustworthy AI." Technology and Regulation 2023 (October 27, 2023): 58–68. https://doi.org/10.71265/pztsvw73.

Abstract:
The EU has proposed harmonized rules on artificial intelligence (AI Act) and a directive on adapting non-contractual civil liability rules to AI (AI liability directive) due to increased demand for trustworthy AI. However, the concept of trustworthy AI is unspecific, covering various desired characteristics such as safety, transparency, and accountability. Trustworthiness requires a specific contextual setting that involves human interaction with AI technology, and simply involving humans in decision processes does not guarantee trustworthy outcomes. In this paper, the authors argue for an inf
28

Long, Qiyu. "Effect of Racial Homophily on AI Anthropomorphism and News Anchor Credibility." Journal of Education, Humanities and Social Sciences 45 (December 26, 2024): 528–38. https://doi.org/10.54097/bd1tkw72.

Abstract:
This study explores the influence of anthropomorphism and racial homophily on audience trust in Artificial Intelligence News Anchors (AINAs) in the context of contemporary journalism. Utilizing a comprehensive between-groups experiment, participants were recruited online and presented with audiovisual news clips featuring AINAs. The research investigates the relationships among anthropomorphic cues, viewers’ perceptions of racial homogeneity, and the trustworthiness of news conveyed by these AI entities. Findings indicate a significant positive correlation between visual cues and news trustwor
29

Ganguly, Shantanu, and Nivedita Pandey. "Deployment of AI Tools and Technologies on Academic Integrity and Research." Bangladesh Journal of Bioethics 15, no. 2 (2024): 28–32. http://dx.doi.org/10.62865/bjbio.v15i2.122.

Abstract:
Academic integrity is a set of ethical ideals and values that guide the behavior of individuals in academic and educational settings. It encompasses honesty, trustworthiness, fairness, and a commitment to upholding the highest standards of ethical conduct in the quest for knowledge, learning, and research. Academic integrity is essential in maintaining the trustworthiness, reputation, and effectiveness of educational institutions and scholarly communities. AI, or Artificial Intelligence, is a broad field of computer science that focuses on creating frameworks, software, or machines th
30

Deshpande, R. S., and P. V. Ambatkar. "Interpretable Deep Learning Models: Enhancing Transparency and Trustworthiness in Explainable AI." Proceeding International Conference on Science and Engineering 11, no. 1 (2023): 1352–63. http://dx.doi.org/10.52783/cienceng.v11i1.286.

Abstract:
Explainable AI (XAI) aims to address the opacity of deep learning models, which can limit their adoption in critical decision-making applications. This paper presents a novel framework that integrates interpretable components and visualization techniques to enhance the transparency and trustworthiness of deep learning models. We propose a hybrid explanation method combining saliency maps, feature attribution, and local interpretable model-agnostic explanations (LIME) to provide comprehensive insights into the model's decision-making process. Our experiments with convolutional neural netw
31

Psaltis, Athanasios, Kassiani Zafeirouli, Peter Leškovský, et al. "Fostering Trustworthiness of Federated Learning Ecosystem through Realistic Scenarios." Information 14, no. 6 (2023): 342. http://dx.doi.org/10.3390/info14060342.

Abstract:
The present study thoroughly evaluates the most common blocking challenges faced by the federated learning (FL) ecosystem and analyzes existing state-of-the-art solutions. A system adaptation pipeline is designed to enable the integration of different AI-based tools in the FL system, while FL training is conducted under realistic conditions using a distributed hardware infrastructure. The suggested pipeline and FL system’s robustness are tested against challenges related to tool deployment, data heterogeneity, and privacy attacks for multiple tasks and data types. A representative set of AI-ba
32

Farayola, Michael Mayowa, Irina Tal, Regina Connolly, Takfarinas Saber, and Malika Bendechache. "Ethics and Trustworthiness of AI for Predicting the Risk of Recidivism: A Systematic Literature Review." Information 14, no. 8 (2023): 426. http://dx.doi.org/10.3390/info14080426.

Abstract:
Artificial Intelligence (AI) can be very beneficial in the criminal justice system for predicting the risk of recidivism. AI provides unrivalled high computing power, speed, and accuracy; all harnessed to strengthen the efficiency in predicting convicted individuals who may be on the verge of recommitting a crime. The application of AI models for predicting recidivism has brought positive effects by minimizing the possible re-occurrence of crime. However, the question remains of whether criminal justice system stakeholders can trust AI systems regarding fairness, transparency, privacy and data
33

Capelli, Giulia, Daunia Verdi, Isabella Frigerio, et al. "White paper: ethics and trustworthiness of artificial intelligence in clinical surgery." Artificial Intelligence Surgery 3, no. 2 (2023): 111–22. http://dx.doi.org/10.20517/ais.2023.04.

Abstract:
This white paper documents the consensus opinion of the Artificial Intelligence Surgery (AIS) task force on Artificial Intelligence (AI) Ethics and the AIS Editorial Board Study Group on Ethics on the ethical considerations and current trustworthiness of artificial intelligence and autonomous actions in surgery. The ethics were divided into 6 topics defined by the Task Force: Reliability of robotic and AI systems; Respect for privacy and sensitive data; Use of complete and representative (i.e., unbiased) data; Transparencies and uncertainties in AI; Fairness: are we exacerbating inequalities i
34

Ali Khan, Umair, Janne Kauttonen, Lili Aunimo, and Ari V. Alamäki. "A System to Ensure Information Trustworthiness in Artificial Intelligence Enhanced Higher Education." Journal of Information Technology Education: Research 23 (2024): 013. http://dx.doi.org/10.28945/5295.

Abstract:
Aim/Purpose: The purpose of this paper is to address the challenges posed by disinformation in an educational context. The paper aims to review existing information assessment techniques, highlight their limitations, and propose a conceptual design for a multimodal, explainable information assessment system for higher education. The ultimate goal is to provide a roadmap for researchers that meets current requirements of information assessment in education. Background: The background of this paper is rooted in the growing concern over disinformation, especially in higher education, where it can
35

Purves, Duncan, Schuyler Sturm, and John Madock. "What to Trust When We Trust Artificial Intelligence (Extended Abstract)." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 1166. http://dx.doi.org/10.1609/aies.v7i1.31713.

Abstract:
So-called “trustworthy AI” has emerged as a guiding aim of industry leaders, computer and data science researchers, and policy makers in the US and Europe. Often, trustworthy AI is characterized in terms of a list of criteria. These lists usually include at least fairness, accountability, and transparency. Fairness, accountability, and transparency are valuable objectives, and they have begun to receive attention from philosophers and legal scholars. However, those who put forth criteria for trustworthy AI have failed to explain why
36

Song, Yao, and Yan Luximon. "Trust in AI Agent: A Systematic Review of Facial Anthropomorphic Trustworthiness for Social Robot Design." Sensors 20, no. 18 (2020): 5087. http://dx.doi.org/10.3390/s20185087.

Abstract:
As an emerging artificial intelligence system, social robot could socially communicate and interact with human beings. Although this area is attracting more and more attention, limited research has tried to systematically summarize potential features that could improve facial anthropomorphic trustworthiness for social robot. Based on the literature from human facial perception, product, and robot face evaluation, this paper systematically reviews, evaluates, and summarizes static facial features, dynamic features, their combinations, and related emotional expressions, shedding light on further
37

Paolanti, Marina, Simona Tiribelli, Benedetta Giovanola, Adriano Mancini, Emanuele Frontoni, and Roberto Pierdicca. "Ethical Framework to Assess and Quantify the Trustworthiness of Artificial Intelligence Techniques: Application Case in Remote Sensing." Remote Sensing 16, no. 23 (2024): 4529. https://doi.org/10.3390/rs16234529.

Abstract:
In the rapidly evolving field of remote sensing, Deep Learning (DL) techniques have become pivotal in interpreting and processing complex datasets. However, the increasing reliance on these algorithms necessitates a robust ethical framework to evaluate their trustworthiness. This paper introduces a comprehensive ethical framework designed to assess and quantify the trustworthiness of DL techniques in the context of remote sensing. We first define trustworthiness in DL as a multidimensional construct encompassing accuracy, reliability, transparency and explainability, fairness, and accountabili
38

Kim, Min-Ji, and DoHoon Lee. "A Study on the Trustworthiness Evaluation of AI Model for Discrimination of Fireblight." Journal of Korea Multimedia Society 26, no. 2 (2023): 420–28. http://dx.doi.org/10.9717/kmms.2023.26.2.420.

39

Zhou, Zhiyu. "Technological Control Tool of Everyday life? Six Questions on the Design Ethics of Artificial Intelligence." Journal of Design Service and Social Innovation 1, no. 1 (2023): 36–43. http://dx.doi.org/10.59528/ms.jdssi2023.0614a5.

Full text
Abstract:
Artificial intelligence (AI) continues to expand into different areas of social life, posing challenges to design ethics and public rights. Instead of halting research and application of AI, it is better to urgently study the practical problems that AI technology may bring about and promptly formulate corresponding laws to regulate them. This paper discusses the following issues: 1. Security and privacy of face recognition; 2. Political and economic applications of AI; 3. Emotional learning of AI; 4. Human-computer development of brain-computer interfaces; 5. Ethical supervision of …
APA, Harvard, Vancouver, ISO, and other styles
40

Šekrst, Kristina. "Chinese Chat Room: AI Hallucinations, Epistemology and Cognition." Studies in Logic, Grammar and Rhetoric 69, no. 1 (2024): 365–81. https://doi.org/10.2478/slgr-2024-0029.

Full text
Abstract:
The purpose of this paper is to show that understanding AI hallucination requires an interdisciplinary approach that combines insights from epistemology and cognitive science to address the nature of AI-generated knowledge, with a terminological worry that concepts we often use might carry unnecessary presuppositions. Along with terminological issues, it is demonstrated that AI systems, comparable to human cognition, are susceptible to errors in judgement and reasoning, and proposes that epistemological frameworks, such as reliabilism, can be similarly applied to enhance the trustworthiness …
APA, Harvard, Vancouver, ISO, and other styles
41

Mattioli, Juliette, and Bertrand Braunschweig. "AITA: AI trustworthiness assessment." AI Magazine, June 13, 2023. http://dx.doi.org/10.1002/aaai.12096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Braunschweig, Bertrand, Stefan Buijsman, Faïcel Chamroukhi, et al. "AITA: AI trustworthiness assessment." AI and Ethics, January 3, 2024. http://dx.doi.org/10.1007/s43681-023-00397-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ferrario, Andrea. "Justifying Our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach." Science and Engineering Ethics 30, no. 6 (2024). http://dx.doi.org/10.1007/s11948-024-00522-z.

Full text
Abstract:
We address an open problem in the philosophy of artificial intelligence (AI): how to justify the epistemic attitudes we have towards the trustworthiness of AI systems. The problem is important, as providing reasons to believe that AI systems are worthy of trust is key to appropriately rely on these systems in human-AI interactions. In our approach, we consider the trustworthiness of an AI as a time-relative, composite property of the system with two distinct facets. One is the actual trustworthiness of the AI and the other is the perceived trustworthiness of the system as assessed by its …
APA, Harvard, Vancouver, ISO, and other styles
44

Lahusen, Christian, Martino Maggetti, and Marija Slavkovik. "Trust, trustworthiness and AI governance." Scientific Reports 14, no. 1 (2024). http://dx.doi.org/10.1038/s41598-024-71761-0.

Full text
Abstract:
An emerging issue in AI alignment is the use of artificial intelligence (AI) by public authorities, and specifically the integration of algorithmic decision-making (ADM) into core state functions. In this context, the alignment of AI with the values related to the notions of trust and trustworthiness constitutes a particularly sensitive problem from a theoretical, empirical, and normative perspective. In this paper, we offer an interdisciplinary overview of the scholarship on trust in sociology, political science, and computer science anchored in artificial intelligence. On this basis, …
APA, Harvard, Vancouver, ISO, and other styles
45

Durán, Juan Manuel, and Giorgia Pozzi. "Trust and Trustworthiness in AI." Philosophy & Technology 38, no. 1 (2025). https://doi.org/10.1007/s13347-025-00843-2.

Full text
Abstract:
Achieving trustworthy AI is increasingly considered an essential desideratum to integrate AI systems into sensitive societal fields, such as criminal justice, finance, medicine, and healthcare, among others. For this reason, it is important to spell out clearly its characteristics, merits, and shortcomings. This article is the first survey in the specialized literature that maps out the philosophical landscape surrounding trust and trustworthiness in AI. To achieve our goals, we proceed as follows. We start by discussing philosophical positions on trust and trustworthiness, focusing on …
APA, Harvard, Vancouver, ISO, and other styles
46

Pink, Sarah, Emma Quilty, John Grundy, and Rashina Hoda. "Trust, artificial intelligence and software practitioners: an interdisciplinary agenda." AI & SOCIETY, March 7, 2024. http://dx.doi.org/10.1007/s00146-024-01882-7.

Full text
Abstract:
Trust and trustworthiness are central concepts in contemporary discussions about the ethics of and qualities associated with artificial intelligence (AI) and the relationships between people, organisations and AI. In this article we develop an interdisciplinary approach, using socio-technical software engineering and design anthropological approaches, to investigate how trust and trustworthiness concepts are articulated and performed by AI software practitioners. We examine how trust and trustworthiness are defined in relation to AI across these disciplines, and investigate how AI, trust …
APA, Harvard, Vancouver, ISO, and other styles
47

Alelyani, Turki. "Establishing trust in artificial intelligence-driven autonomous healthcare systems: an expert-guided framework." Frontiers in Digital Health 6 (November 27, 2024). http://dx.doi.org/10.3389/fdgth.2024.1474692.

Full text
Abstract:
The increasing prevalence of Autonomous Systems (AS) powered by Artificial Intelligence (AI) in society and their expanding role in ensuring safety necessitate the assessment of their trustworthiness. The verification and development community faces the challenge of evaluating the trustworthiness of AI-powered AS in a comprehensive and objective manner. To address this challenge, this study conducts a semi-structured interview with experts to gather their insights and perspectives on the trustworthiness of AI-powered autonomous systems in healthcare. By integrating the expert insights, a comprehensive …
APA, Harvard, Vancouver, ISO, and other styles
48

Reinhardt, Karoline. "Trust and trustworthiness in AI ethics." AI and Ethics, September 26, 2022. http://dx.doi.org/10.1007/s43681-022-00200-5.

Full text
Abstract:
Due to the extensive progress of research in artificial intelligence (AI) as well as its deployment and application, the public debate on AI systems has also gained momentum in recent years. With the publication of the Ethics Guidelines for Trustworthy AI (2019), notions of trust and trustworthiness gained particular attention within AI ethics-debates; despite an apparent consensus that AI should be trustworthy, it is less clear what trust and trustworthiness entail in the field of AI. In this paper, I give a detailed overview on the notion of trust employed in AI Ethics Guidelines thus …
APA, Harvard, Vancouver, ISO, and other styles
49

Bostrom, Ann, Julie L. Demuth, Christopher D. Wirz, et al. "Trust and trustworthy artificial intelligence: A research agenda for AI in the environmental sciences." Risk Analysis, November 8, 2023. http://dx.doi.org/10.1111/risa.14245.

Full text
Abstract:
Demands to manage the risks of artificial intelligence (AI) are growing. These demands and the government standards arising from them both call for trustworthy AI. In response, we adopt a convergent approach to review, evaluate, and synthesize research on the trust and trustworthiness of AI in the environmental sciences and propose a research agenda. Evidential and conceptual histories of research on trust and trustworthiness reveal persisting ambiguities and measurement shortcomings related to inconsistent attention to the contextual and social dependencies and dynamics of trust. Potential …
APA, Harvard, Vancouver, ISO, and other styles
50

Erengin, Türkü, Roman Briker, and Simon B. de Jong. "You, Me, and the AI: The Role of Third‐Party Human Teammates for Trust Formation Toward AI Teammates." Journal of Organizational Behavior, January 2025. https://doi.org/10.1002/job.2857.

Full text
Abstract:
As artificial intelligence (AI) becomes increasingly integrated in teams, understanding the factors that drive trust formation between human and AI teammates becomes crucial. Yet, the emergent literature has overlooked the impact of third parties on human-AI teaming. Drawing from social cognitive theory and human-AI teams research, we suggest that how much a human teammate perceives an AI teammate as trustworthy, and engages in trust behaviors toward the AI, determines a focal employee's trust perceptions and behavior toward this AI teammate. Additionally, we propose these effects hinge …
APA, Harvard, Vancouver, ISO, and other styles