Selected scientific literature on the topic "Generative AI Risks"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles


Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Generative AI Risks".

Next to each source in the reference list there is an "Add to bibliography" button. Press it and we will automatically generate a bibliographic citation for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, when one is available in the metadata.

Journal articles on the topic "Generative AI Risks"

1

El-Hadi, Mohamed. "Generative AI Poses Security Risks." مجلة الجمعية المصرية لنظم المعلومات وتکنولوجيا الحاسبات 34, no. 34 (2024): 72–73. http://dx.doi.org/10.21608/jstc.2024.338476.

2

Hu, Fangfei, Shiyuan Liu, Xinrui Cheng, Pengyou Guo, and Mengqi Yu. "Risks of Generative Artificial Intelligence and Multi-Tool Governance." Academic Journal of Management and Social Sciences 9, no. 2 (2024): 88–93. http://dx.doi.org/10.54097/stvem930.

Abstract:
While generative AI, represented by ChatGPT, brings a technological revolution and convenience to daily life, it may also raise a series of social and legal risks, chiefly violations of personal privacy and data security, infringement of intellectual property rights, generation of misleading and false content, and exacerbation of discrimination and prejudice. However, the traditional governance paradigm, oriented towards conventional AI, may not be well adapted to generative AI, which is built on large models and has general-purpose potential. To encourage the innovative development of generative AI technology while regulating its risks, this paper explores the construction of a governance paradigm that combines legal regulation, technological regulation, and the ethical governance of science and technology, promoting the healthy development of generative AI along a track of safety, order, fairness, and co-governance.
3

Moon, Su-Ji. "Effects of Perception of Potential Risk in Generative AI on Attitudes and Intention to Use." International Journal on Advanced Science, Engineering and Information Technology 14, no. 5 (2024): 1748–55. http://dx.doi.org/10.18517/ijaseit.14.5.20445.

Abstract:
Generative artificial intelligence (AI) is rapidly advancing, offering numerous benefits to society while presenting unforeseen potential risks. This study aims to identify these potential risks through a comprehensive literature review and to investigate how users' perceptions of risk factors influence their attitudes towards and intentions to use generative AI technologies. Specifically, we examined the impact of four key risk factors: fake news generation, trust, bias, and privacy concerns. Our analysis of data collected from experienced generative AI users yielded several significant findings. First, users' perceptions of fake news generation by generative AI had a significant negative impact on their attitudes towards these technologies. Second, user trust in generative AI positively influenced both attitudes towards and intentions to use these technologies. Third, users' awareness of potential biases in generative AI systems negatively affected their attitudes towards these technologies. Fourth, while users' privacy concerns regarding generative AI did not significantly affect their usage intentions directly, these concerns negatively influenced their overall attitudes towards the technology. Fifth, users' attitudes towards generative AI positively influenced their intentions to use these technologies. Based on these results, increasing the intention to use generative AI calls for legal, institutional, and technical countermeasures against fake news generation, trust issues, bias, and privacy concerns, alongside literacy education on generative AI that counters users' negative perceptions and promotes desirable, efficient use.
4

Campbell, Mark, and Mlađan Jovanović. "Disinfecting AI: Mitigating Generative AI’s Top Risks." Computer 57, no. 5 (2024): 111–16. http://dx.doi.org/10.1109/mc.2024.3374433.

5

Siyed, Zahm. "Generative AI Increases Cybersecurity Risks for Seniors." Computer and Information Science 17, no. 2 (2024): 39. http://dx.doi.org/10.5539/cis.v17n2p39.

Abstract:
We evaluate how generative AI exacerbates the cyber risks faced by senior citizens. We assess the risk that powerful LLMs can easily be misconfigured to serve a malicious purpose, and that platforms such as HackGPT or WormGPT can enable low-skilled script kiddies to replicate the effectiveness of high-skilled threat actors. We surveyed 85 seniors and found that the combination of loneliness and low cyber literacy places 87% of them at high risk of being hacked. Our survey further revealed that 67% of seniors have already been exposed to potentially exploitable digital intrusions, and only 22% of seniors have sufficient awareness of the risks to ask the techno-literate for remedial assistance. Our risk analysis suggests that existing attack vectors can be augmented with AI to create highly personalized and believable digital exploits that are extremely difficult for seniors to distinguish from legitimate interactions. Technological advances allow the replication of familiar voices, live digital reconstruction of faces, personalized targeting, and falsification of records. Once an attack vector is identified, certain generative polymorphic capabilities allow rapid mutation and obfuscation to deliver unique payloads. Both inbound and outbound risks exist. In addition to inbound attempts by individual threat actors, seniors are vulnerable to outbound attacks through poisoned LLMs, such as ThreatGPT or PoisonGPT. Generative AI can maliciously alter databases to provide incorrect information or compromised instructions to gullible seniors seeking outbound digital guidance. By analyzing the extent to which senior citizens are at risk of exploitation through new developments in AI, the paper contributes to the development of effective strategies to safeguard this vulnerable population.
6

Sun, Hanpu. "Risks and Legal Governance of Generative Artificial Intelligence." International Journal of Social Sciences and Public Administration 4, no. 1 (2024): 306–14. http://dx.doi.org/10.62051/ijsspa.v4n1.35.

Abstract:
Generative Artificial Intelligence (AI) represents a significant advancement in the field of artificial intelligence, characterized by its ability to autonomously generate original content by learning from existing data. Unlike traditional decision-based AI, which primarily aids in decision-making by analyzing data, generative AI can create new texts, images, music, and more, showcasing its immense potential across various domains. However, this technology also presents substantial risks, including data security threats, privacy violations, algorithmic biases, and the dissemination of false information. Addressing these challenges requires a multi-faceted approach involving technical measures, ethical considerations, and robust legal frameworks. This paper explores the evolution and capabilities of generative AI, outlines the associated risks, and discusses the regulatory and legal mechanisms needed to mitigate these risks. By emphasizing transparency, accountability, and ethical responsibility, we aim to ensure that generative AI contributes positively to society while safeguarding against its potential harms.
7

Mortensen, Søren F. "Generative AI in Securities Services." Journal of Securities Operations & Custody 16, no. 4 (2024): 302. http://dx.doi.org/10.69554/zlcg3875.

Abstract:
Generative AI (GenAI) is a technology that, since the launch of ChatGPT in November 2022, has taken the world by storm. While much of the conversation around GenAI is hype, there are real applications of this technology that can bring genuine value to businesses. There are, however, risks in applying this technology blindly that can sometimes outweigh the value it brings. This paper discusses the potential applicability of GenAI to post-trade processes and what impact it could have on financial institutions and their ability to meet challenges in the market, such as T+1 settlement. We also discuss the risks of implementing this technology, how they can be mitigated, and how to ensure that all objectives are met not only from a business perspective but also from technology and compliance perspectives.
8

Wang, Mingzheng. "Generative AI: A New Challenge for Cybersecurity." Journal of Computer Science and Technology Studies 6, no. 2 (2024): 13–18. http://dx.doi.org/10.32996/jcsts.2024.6.2.3.

Abstract:
The rapid development of Generative Artificial Intelligence (GAI) technology has shown tremendous potential in fields such as image, text, and video generation, and it has been widely applied across industries. However, GAI also brings new risks and challenges to cybersecurity. This paper analyzes the current application of GAI technology in the field of cybersecurity and discusses the risks and challenges it brings, including data security risks, scientific and technological ethics and moral challenges, AI-enabled fraud, and threats from cyberattacks. On this basis, the paper proposes countermeasures to maintain cybersecurity and address the threats posed by GAI: establishing and improving standards and specifications for AI technology to ensure its security and reliability; developing AI-based cybersecurity defense technologies to enhance defense capabilities; and improving the AI literacy of society as a whole so that the public can understand and use AI technology correctly. By systematically analyzing the impact of GAI on cybersecurity from the perspective of its technological background and proposing targeted countermeasures, the paper has both theoretical and practical significance.
9

Gabriel, Sonja. "Generative AI in Writing Workshops: A Path to AI Literacy." International Conference on AI Research 4, no. 1 (2024): 126–32. https://doi.org/10.34190/icair.4.1.3022.

Abstract:
The widespread use of generative AI tools, which can support or even take over several parts of the writing process, has sparked many discussions about integrity, AI literacy, and changes to academic writing. This paper explores the impact of generative artificial intelligence (AI) tools on the academic writing process, drawing on data from a writing workshop and interviews with students at a university teacher college in Austria. Despite the widespread assumption that generative AI such as ChatGPT is widely used by students to support their academic tasks, initial findings suggest a notable gap in participants' experience and understanding of these technologies. This discrepancy highlights the critical need for AI literacy and underscores the importance of familiarity with the potential, challenges, and risks associated with generative AI to ensure its ethical and effective use. Through reflective discussions and feedback from workshop participants, this study offers a differentiated perspective on the role of generative AI in academic writing, illustrating its value in generating ideas and overcoming writer's block, as well as its limitations given the indispensable nature of human involvement in critical writing tasks.
10

Tong, Yifeng. "Research on Criminal Risk Analysis and Governance Mechanism of Generative Artificial Intelligence such as ChatGPT." Studies in Law and Justice 2, no. 2 (2023): 85–94. http://dx.doi.org/10.56397/slj.2023.06.13.

Abstract:
The launch of ChatGPT marks a breakthrough in the development of generative artificial intelligence (generative AI), a technology whose core operating mechanism is the collection of and learning from big data. Generative AI is now highly intelligent and highly generalizable, and has thus given rise to various criminal risks. Current criminal law norms are inadequate in their attribution system and offense provisions for the criminal risks posed by generative AI technologies. We should therefore clarify the types of risks and governance challenges of generative AI, establish data as the object of risk governance, and on this basis build a pluralistic liability allocation mechanism and a mixed legal governance framework spanning criminal, civil, and administrative law.