Follow this link to see other types of publications on the topic: Criminal liability of artificial intelligence.

Journal articles on the topic "Criminal liability of artificial intelligence"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Criminal liability of artificial intelligence".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organize your bibliography correctly.

1

Seongjo Ahn. "Artificial Intelligence and Criminal Liability". Korean Journal of Legal Philosophy 20, no. 2 (August 2017): 77–122. http://dx.doi.org/10.22286/kjlp.2017.20.2.003.

2

Kirpichnikov, Danila, Albert Pavlyuk, Yulia Grebneva and Hilary Okagbue. "Criminal Liability of the Artificial Intelligence". E3S Web of Conferences 159 (2020): 04025. http://dx.doi.org/10.1051/e3sconf/202015904025.

Abstract
Today, artificial intelligence (hereinafter, AI) is becoming an integral part of almost all branches of science. The capacity of AI for self-learning and self-development allows this new formation to compete with human intelligence and to perform actions that put it on a par with humans. In this regard, the author aims to determine whether criminal liability can be applied to AI, since the latter is likely to be recognized as a subject of legal relations in the future. Based on a number of examinations and practical examples, the author draws the following conclusion: AI is fundamentally capable of bearing criminal liability; in addition, it is capable of correcting its own behavior under the influence of coercive measures.
3

Радутний, Олександр Едуардович. "Criminal liability of the artificial intelligence". Problems of Legality, no. 138 (September 27, 2017): 132–41. http://dx.doi.org/10.21564/2414-990x.138.105661.

4

황만성. "A Study about Criminal Liability of Artificial Intelligence". 법과정책 24, no. 1 (March 2018): 361–84. http://dx.doi.org/10.36727/jjlpr.24.1.201803.012.

5

Khisamova, Zarina and Ildar Begishev. "Criminal Liability and Artificial Intelligence: Theoretical and Applied Aspects". Russian Journal of Criminology 13, no. 4 (August 23, 2019): 564–74. http://dx.doi.org/10.17150/2500-4255.2019.13(4).564-574.

Abstract
Humanity is now at the threshold of a new era, when the widening use of artificial intelligence (AI) will start a new industrial revolution. Its use inevitably leads to the problem of ethical choice and gives rise to new legal issues that require urgent action. The authors analyze the criminal law assessment of the actions of AI, primarily the still open issue of liability for the actions of AI that is capable of self-learning and decides to act or not to act in a way that is qualified as a crime. As a result, there is a need to form a system of criminal law measures for counteracting crimes committed with the use of AI. It is shown that the application of AI could lead to four scenarios requiring criminal law regulation. It is stressed that there is a need for a clear, strict and effective definition of the ethical boundaries of the design, development, production, use and modification of AI. The authors argue that AI should be recognized as a source of high risk. They specifically state that although the Criminal Code of the Russian Federation contains norms that determine liability for cybercrimes, this does not eliminate the possibility of prosecution for infringements committed with the use of AI under the general norms of punishment for various crimes. The authors also consider it possible to establish a system to standardize and certify the activities of designing AI and putting it into operation. Meanwhile, an autonomous AI that is capable of self-learning is considerably different from other phenomena and objects, and the situation with the liability of AI that independently decides to undertake an action qualified as a crime is much more complicated. The authors analyze the resolution of the European Parliament on the possibility of granting AI legal status and discuss its key principles and meaning. They pay special attention to the issue of recognizing AI as a legal personality.
It is suggested that a legal fiction should be used as a technique whereby a special legal personality of AI can be perceived as an unusual legal situation that is different from reality. It is believed that such a solution can eliminate a number of existing legal limitations that prevent the active involvement of AI in the legal space.
6

Rahman, Rofi Aulia and Rizki Habibulah. "THE CRIMINAL LIABILITY OF ARTIFICIAL INTELLIGENCE: IS IT PLAUSIBLE TO HITHERTO INDONESIAN CRIMINAL SYSTEM?" Legality: Jurnal Ilmiah Hukum 27, no. 2 (November 6, 2019): 147. http://dx.doi.org/10.22219/jihl.v27i2.10153.

Abstract
The pace of technological evolution is very fast. Technology has brought us into a limitless world and has become our ally in everyday life. It has created visionary autonomous agents that can surpass human capability with little or no human intervention, called Artificial Intelligence (AI). AI is being implemented in many areas, such as industry, health, agriculture and the arts. Consequently, AI can damage individual or collective interests that are protected by criminal law. The current Indonesian criminal system acknowledges only the natural person and the legal person (recht persoon) as subjects of law on whom criminal sanctions can be imposed. Now and in the near foreseeable future, AI will play a notable role in every aspect of life, which also raises criminal law questions about the damage that results. AI has no sufficient legal status in the Indonesian criminal system. In this paper, the author assesses whether the current criminal system of Indonesia can address the criminal liability of artificial intelligence, and clarifies to whom the possible criminal liability of artificial intelligence should be charged.
7

Shestak, Victor and Aleksander Volevodz. "Modern Requirements of the Legal Support of Artificial Intelligence: a View from Russia". Russian Journal of Criminology 13, no. 2 (April 26, 2019): 197–206. http://dx.doi.org/10.17150/2500-4255.2019.13(2).197-206.

Abstract
At the present stage of society's development, artificial intelligence is quickly widening its possibilities. These changes raise the issue of applying norms, including the norms of international law, to solve problems connected with the essence and technical protocol of using artificial intelligence. The article is devoted to the problems of legal regulation of the creation and use of artificial intelligence and to the development of the conceptual framework and the definition of artificial intelligence according to widely recognized scientific theories; the analysis of doctrinal approaches to the understanding of the place of artificial intelligence in legal relations; the evidence that giving artificial intelligence the status of a person is not legally grounded; and the critical analysis of the idea, put forward by some American researchers, that artificial intelligence should comply with the whole set of laws currently applied to its human producer and operator. The authors study the legislation on the legal regulation of relations between humans and artificial intelligence in such countries as the Republic of Korea, the USA, Japan, the People's Republic of China, the Republic of Estonia, the Federal Republic of Germany and the Russian Federation, as well as the European Union. They present various approaches to the classification of artificial intelligence's features. The authors also examine the problem of defining the legal personality of an «electronic person» and analyze the necessity of making the owner liable for the compensation of moral and material damage inflicted by the «electronic person». The article also discusses key problems of enforcing the legal norms regulating intellectual property and copyright, criminal liability and participation in criminal proceedings within the framework of using artificial intelligence. The authors analyze the key risks and uncertainties connected with artificial intelligence that are crucial for improving relevant legislation.
They work out suggestions for the future discussion of the following issues: the applications of artificial intelligence at the contemporary stage; development prospects in this sector; the legally relevant problems of this sphere and the problems connected with the use of existing and the development of new autonomous intelligence systems; the development of new strategies and legal norms to bridge the gaps in the legal regulation of using artificial intelligence, including using it as a participant in criminal proceedings; and the creation of a concept of liability in the sphere of using artificial intelligence, including criminal liability.
8

Louis, Mark, Angelina Anne Fernandez, Nazura Abdul Manap, Shamini Kandasamy and Sin Yee Lee. "ARTIFICIAL INTELLIGENCE: IS IT A THREAT OR AN OPPORTUNITY BASED ON ITS LEGAL PERSONALITY AND CRIMINAL LIABILITY?" Journal of Information System and Technology Management 6, no. 20 (March 1, 2021): 01–09. http://dx.doi.org/10.35631//jistm.620001.

Abstract
Information technology is taking the world by storm. The technological world is changing rapidly and drastically. Human activities are being taken over by robots and computers, whose usage has increased productivity in various sectors. The emergence of artificial intelligence has stirred up many debates on both its importance and its limitations. Artificial intelligence refers to the use of information technology to conduct tasks that normally require human intelligence. Expectations of artificial intelligence are high; nevertheless, artificial intelligence has its shortcomings, namely its impact on the concept of a legal personality. One problem with artificial intelligence is the debate on whether it has a legal personality; another is under what circumstances the law treats artificial intelligence as an entity with its own rights and obligations. The objective of this article is to examine the various definitions of legal personality and whether artificial intelligence can become a legal person. The article will also examine the criminal liability of artificial intelligence when a crime has been committed. The methodology adopted is qualitative, namely doctrinal legal research, analyzing the relevant legal views from various journals on artificial intelligence. The study found that artificial intelligence faces limitations both in defining its legal personality and in examining criminal liability when a crime has been committed by robots.
9

Kamalova, G. G. "SOME QUESTIONS OF CRIMINAL LEGAL RESPONSIBILITY IN THE FIELD OF APPLICATION OF ARTIFICIAL INTELLIGENCE SYSTEMS AND ROBOTICS". Bulletin of Udmurt University. Series Economics and Law 30, no. 3 (June 26, 2020): 382–88. http://dx.doi.org/10.35634/2412-9593-2020-30-3-382-388.

Abstract
The article discusses the problems of improving criminal law in the context of the development of one type of breakthrough digital technology: artificial intelligence. The author notes that the explosive development in this area has led to the growth of high-tech crime, among which a special place is occupied by crimes committed using artificial intelligence technology. Since the subjects of criminal activity traditionally use advanced technologies, such crimes are at present already represented by fraud, computer information crimes, terrorism, violations in the field of road safety, violation of the right to privacy and a number of others. Although there are no special offenses related to artificial intelligence in criminal law today, this does not mean that existing norms cannot be applied to traditional subjects. Given the current level of development of artificial intelligence technology, it is now necessary to strengthen criminal liability for the offenses provided for by the current Criminal Code of the Russian Federation by introducing an appropriate qualifying attribute. If the legal personality of artificial intelligence systems and robots is recognized, one of the key issues in applying criminal law rules to them will be the question of the subjective side of the committed act. The lack of “strong” artificial intelligence and the current level of development of solutions and devices based on artificial intelligence technology allow us to limit ourselves to classifying the facts of their use as components of the objective side of the crime.
10

Chernyh, Evgeniya. "Artificial intelligence in the Russian healthcare sector: current situation and criminal and legal risks". Vestnik of the St. Petersburg University of the Ministry of Internal Affairs of Russia 2020, no. 4 (December 11, 2020): 127–31. http://dx.doi.org/10.35750/2071-8284-2020-4-127-131.

Abstract
The article discusses the prospects for the development of artificial intelligence systems in healthcare in Russia in the context of the introduction of the digital economy. A brief historical analysis of the use of artificial intelligence in the social sphere is carried out, and the main directions of the modern Russian state concept of the development of artificial intelligence are investigated. The main directions of using intelligent systems are revealed. The author emphasizes the need for legal regulation of digital medicine and, in this regard, analyzes the main criminal legal risks of harm to law-protected interests caused by one of the areas of digital medicine. It is noted that the criminal law problems of the use of artificial intelligence remain undeveloped in Russian criminal law to date; in this regard, the author emphasizes the urgent need for a more rapid development of criminal law rules governing legal relations in this area of activity. At the same time, the justification for a norm on criminal liability depends directly on the nature and degree of the social danger of the act. The author gives a brief analysis of foreign experience in the legal regulation of the use of artificial intelligence in the medical field. In the final part of the article, the author proposes options for qualification in determining the subjects responsible for causing harm.
11

Rhee, Gina. "BIASED ALGORITHM, ARTIFICIAL INTELLIGENCE AND ITS CRIMINAL LIABILITY FROM PERSPECTIVE OF RETRIBUTIVISM". Yonsei Law Review 29, no. 2 (June 30, 2019): 37–66. http://dx.doi.org/10.21717/ylr.29.2.2.

12

НАУМОВА, Юлия Николаевна. "FEATURES OF THE QUALIFICATION AND FACT OF PROOF IN THE VIOLATION OF TRAFFIC RULES AND OPERATION OF VEHICLES USING ARTIFICIAL INTELLIGENCE SYSTEMS". Rule-of-law state: theory and practice 17, no. 1(63) (March 31, 2021): 151–59. http://dx.doi.org/10.33184/pravgos-2021.1.12.

Abstract
The article addresses the problem of criminal liability for harm caused by the use of artificial intelligence in the unmanned driving of a motor vehicle, along with related criminal, criminal procedure and forensic problems, and proposes the author's approaches to their solution. The purpose of the article is to establish the features of the qualification, proof and forensic characterization of offences involving traffic violations and the operation of vehicles under unmanned control using artificial intelligence. Methods: the author uses dialectical and formal methods, as well as specific scientific methods of criminal law, criminal procedure, administrative law and forensic theory, in connection with the specificity of their sectoral content in violations of traffic rules and the operation of vehicles using artificial intelligence systems. Results: it is substantiated that, when collecting the evidence base, the principal element of the nature of the crime, namely the method of committing it (unmanned driving), is central to the forensic process. The characteristics of the crime committed will determine the other elements (components) of the crime.
13

Tripathi, Swapnil and Chandni Ghatak. "Artificial Intelligence and Intellectual Property Law". Christ University Law Journal 7, no. 1 (January 1, 2018): 83–98. http://dx.doi.org/10.12728/culj.12.5.

Abstract
Artificial intelligence systems have been gaining widespread momentum in today's progressing tech-savvy world. With sophisticated technologies being incorporated into them, it is only a matter of time before these systems start to produce marvelous inventions without human intervention of any kind. This brings forth pertinent questions concerning Intellectual Property Rights (IPR), for it challenges not only traditional notions of concepts such as patents and copyrights, but also leads to the emergence of questions related to the regulation of such creations, among others. This paper seeks to provide insight into the expanding scope of IPR laws and artificial intelligence, along with the inevitable challenges they bring, from a worldwide lens on the matter. It also attempts to provide suggestions transcending IPR, and seeks to address questions concerning criminal liability for the content created by such technologies.
14

Shestak, Victor, Aleksander Volevodz and Vera Alizade. "On the Possibility of Doctrinal Perception of Artificial Intelligence as the Subject of Crime in the System of Common Law: Using the Example of the U.S. Criminal Legislation". Russian Journal of Criminology 13, no. 4 (August 23, 2019): 547–54. http://dx.doi.org/10.17150/2500-4255.2019.13(4).547-554.

Abstract
The authors examine the possibility of holding artificial intelligence (AI) criminally liable under the current U.S. criminal legislation and study the opinions of Western lawyers who believe that this possibility for a machine controlled by AI may become reality in the near future. They analyze the requirements for criminal liability as determined by American legislators: a willful unlawful act or omission of an act (actus reus), criminal intent (mens rea), i.e. the person knowingly commits a criminal act or is negligent, as well as three basic models of AI's criminal liability. In the first model, a crime is committed through the actions of another person, i.e. the cases when the subject of crime does not have sufficient cognitive abilities to understand the criminal intent and, moreover, to be guided by it. This category of persons includes minors, persons with limited legal capacity and modern cybernetic systems, who cannot be viewed as capable of cognition that equals human cognition. The latter are considered to be innocent of a criminal act because their actions are controlled by an algorithm or a person who has indirect program control. In the second model, a crime is committed by a being who is objectively guilty of it. A segment of the program code in intellectual systems allows for some illegal act by default, for example, includes a command to unconditionally destroy all objects that the system recognizes as dangerous for the purpose that such AI is working to fulfill. According to this model, the person who gives the unlawful command should be held liable. If such a «collaborator» is not hidden, criminal liability should be imposed on the person who gives an unlawful command to the system, not on the performer, because the algorithmic system that determines the actions of the performer is itself unlawful.
Thus, criminal liability in this case should be imposed on the persons who write or use the program, on the condition that they were aware of the unlawfulness of the orders that guide the actions of the performer. Such crimes include acts that are criminal but cannot be prevented by the performer, the AI system. In the third model, AI is directly liable for acts that contain both a willful action and the unlawful intent of the machine. Such liability is possible if AI is recognized as a subject of criminal law, and also if it independently works out an algorithm to commit an act leading to publicly dangerous consequences, or if such consequences are the result of the system's omission to act according to the initial algorithm, i.e. if its actions are willful and guilty.
15

Osmani, Nora. "The Complexity of Criminal Liability of AI Systems". Masaryk University Journal of Law and Technology 14, no. 1 (June 26, 2020): 53–82. http://dx.doi.org/10.5817/mujlt2020-1-3.

Abstract
Technology is advancing at a rapid pace. As we anticipate a rapid increase in artificial intelligence (AI), we may soon find ourselves dealing with fully autonomous technology with the capacity to cause harm and injuries. What then? Who is going to be held accountable if AI systems harm us? Currently there is no answer to this question, and the existing regulatory framework falls short in addressing the accountability regime of autonomous systems. This paper analyses the criminal liability of AI systems, evaluated under the existing rules of criminal law. It highlights the social and legal implications of the current criminal liability regime as it is applied to the complex nature of industrial robots. Finally, the paper explores whether corporate liability is a viable option and what legal standards are possible for imposing criminal liability on the companies that deploy AI systems. The paper reveals that traditional criminal law and legal theory are not well positioned to answer the questions at hand, as there are many practical problems that require further evaluation. I have demonstrated that with the development of AI, more questions will surface and legal frameworks will inevitably need to adapt. The conclusions of this paper could be the basis for further research.
16

Mosechkin, Ilya N. "Artificial intelligence and criminal liability: problems of becoming a new type of crime subject". Vestnik of Saint Petersburg University. Law 10, no. 3 (2019): 461–76. http://dx.doi.org/10.21638/spbu14.2019.304.

17

Yoon, Young Cheol. "A Study on Criminal Liability of Artificial Intelligence Robot and the ‘Human Character as Personality’ in Criminal Law". Wonkwang University Legal Research Institute 35, no. 1 (March 30, 2019): 95–123. http://dx.doi.org/10.22397/wlri.2019.35.1.95.

18

Dremliuga, Roman and Natalia Prisekina. "The Concept of Culpability in Criminal Law and AI Systems". Journal of Politics and Law 13, no. 3 (August 30, 2020): 256. http://dx.doi.org/10.5539/jpl.v13n3p256.

Abstract
This article focuses on the problems of the application of AI as a tool of crime from the perspective of the norms and principles of criminal law. It discusses the question of how the legal framework in the area of culpability determination could be applied to offenses committed with the use of AI. The article presents an analysis of the current state of criminal law for both intentional and negligent offenses, as well as a comparative analysis of these two forms of culpability. Part of the work is devoted to culpability in intentional crimes. The results of the analysis demonstrate that the law-enforcer and the legislator should reconsider the approach to determining culpability in cases where artificial intelligence systems are applied to commit intentional crimes. As an artificial intelligence system, in some sense, has its own designed cognition and will, courts cannot rely on the traditional concept of culpability in intentional crimes, where the intent is clearly determined in accordance with the actions of the criminal. Criminal negligence is reviewed in the article from the perspective of a developer's criminal liability. The developer is considered a person who may influence and anticipate the harm caused by the AI system that he or she created. If product developers were free from any form of criminal liability for harm caused by their products, it would lead to highly negative social consequences. The situation in which a person developing an AI system has to take into consideration all potential harm caused by the product also has negative social consequences. The authors conclude that a balance between these two extremes should be found, and that the current legal framework does not conform to the goal of culpability determination for crimes where AI is a tool.
19

Nedbálek, Karel. "The future inclusion of criminal liability of the robots and the artificial intelligence in the Czech Republic". Expert: Paradigm of Law and Public Administration 1, no. 2018-1(1) (November 2018): 86–95. http://dx.doi.org/10.32689/2617-9660-2018-1-1-86-95.

21

Hildebrandt, Mireille. "Ambient Intelligence, Criminal Liability and Democracy". Criminal Law and Philosophy 2, no. 2 (October 27, 2007): 163–80. http://dx.doi.org/10.1007/s11572-007-9042-1.

22

정진명. "Civil Liability for Artificial Intelligence". JOURNAL OF PROPERTY LAW 34, no. 4 (February 2018): 137–68. http://dx.doi.org/10.35142/prolaw.34.4.201802.005.

23

Kārkliņš, Jānis. "Artificial Intelligence and Civil Liability". Journal of the University of Latvia. Law 13 (2020): 164–83. http://dx.doi.org/10.22364/jull.13.10.

24

Zhang, Guang-Jun and Zong-Xing Li. "Medical artificial intelligence damage liability". Wonkwang University Legal Research Institute 20 (December 31, 2018): 231–53. http://dx.doi.org/10.22397/bml.2018.20.231.

25

Lie, Han-Young and Seong-Min Cha. "Artificial Intelligence and Product Liability". Asia-pacific Journal of Multimedia services convergent with Art, Humanities, and Sociology 7, no. 3 (March 31, 2017): 321–28. http://dx.doi.org/10.14257/ajmahs.2017.03.32.

26

Ko, Seil. "Artificial Intelligence and Tort Liability Rules". Chungnam Law Review 29, no. 2 (May 31, 2018): 85–117. http://dx.doi.org/10.33982/clr.2018.05.29.2.85.

27

Tyrranen, V. A. "ARTIFICIAL INTELLIGENCE CRIMES". Territory Development, no. 3(17) (2019): 10–13. http://dx.doi.org/10.32324/2412-8945-2019-3-10-13.

Abstract
The article is devoted to current threats to information security associated with the widespread dissemination of computer technology. The author considers one aspect of cybercrime, namely crime using artificial intelligence. The concept of artificial intelligence is analyzed, and a definition sufficient for effective enforcement is proposed. The article discusses the problems of criminalizing such crimes and shows the difficulties of resolving the issues of the legal personality and delinquency of artificial intelligence. The author cites various cases explaining why difficulties arise in determining the person responsible for the crime, and gives an objective assessment of the possibility of criminal prosecution of the creators of software whose errors caused harm to the rights and legitimate interests protected by criminal law.
28

Kartashov, Igor I. and Ivan I. Kartashov. "Artificial intelligence: criminal and procedural aspects". Current Issues of the State and Law, no. 17 (2021): 75–89. http://dx.doi.org/10.20310/2587-9340-2021-5-17-75-89.

Abstract
For millennia, mankind has dreamed of creating an artificial creature capable of thinking and acting “like human beings”. These dreams are gradually starting to come true. The trends in the development of modern society, taking into account the increasing level of its informatization, require the use of new technologies for information processing and assistance in decision-making. Expanding the boundaries of the use of artificial intelligence requires not only the establishment of ethical restrictions, but also gives rise to the need to promptly resolve legal problems, including criminal and procedural ones. This is primarily due to the emergence and spread of legal expert systems that predict the decision on a particular case, based on a variety of parameters. Based on a comprehensive study, we formulate a definition of artificial intelligence suitable for use in law. It is proposed to understand artificial intelligence as systems capable of interpreting the received data, making optimal decisions on their basis using self-learning (adaptation). The main directions of using artificial intelligence in criminal proceedings are: search and generalization of judicial practice; legal advice; preparation of formalized documents or statistical reports; forecasting court decisions; predictive jurisprudence. Despite the promise of using artificial intelligence, there are a number of problems associated with a low level of reliability in predicting rare events, self-excitation of the system, opacity of the algorithms and architecture used, etc.
29

Laptev, Vasiliy. "Artificial Intelligence and Liability for its Work". Law. Journal of the Higher School of Economics, no. 2 (June 10, 2019): 79–102. http://dx.doi.org/10.17323/2072-8166.2019.2.79.102.

30

Price, W. Nicholson, Sara Gerke and I. Glenn Cohen. "Potential Liability for Physicians Using Artificial Intelligence". JAMA 322, no. 18 (November 12, 2019): 1765. http://dx.doi.org/10.1001/jama.2019.15064.

31

Čerka, Paulius, Jurgita Grigienė and Gintarė Sirbikytė. "Liability for damages caused by artificial intelligence". Computer Law & Security Review 31, no. 3 (June 2015): 376–89. http://dx.doi.org/10.1016/j.clsr.2015.03.008.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Наталья Николаевна, Апостолова. "LIABILITY FOR DAMAGE CAUSED BY ARTIFICIAL INTELLIGENCE". NORTH CAUCASUS LEGAL VESTNIK 1, no. 1 (March 2021): 112–19. http://dx.doi.org/10.22394/2074-7306-2021-1-1-112-119.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Duflot, A. "ARTIFICIAL INTELLIGENCE IN FRENCH LAW". Courier of Kutafin Moscow State Law University (MSAL), no. 1 (April 7, 2021): 47–55. http://dx.doi.org/10.17803/2311-5998.2021.77.1.047-055.

Full text
Abstract
The use of artificial intelligence in France is growing and intensifying in many areas, particularly in the field of justice. This revolution creates problems concerning the liability and intellectual property of systems using artificial intelligence.
APA, Harvard, Vancouver, ISO, and other styles
34

Jong-Gu, Jong-Gu and Hye-In Kim. "Criminal Decision-Making based on Artificial Intelligence". Journal of Legal Studies 28, no. 3 (July 31, 2020): 203–30. http://dx.doi.org/10.35223/gnulaw.28.3.9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Plakhotnik, O. "Practical use artificial intelligence in criminal proceeding". Herald of criminal justice, no. 4 (2019): 45–57. http://dx.doi.org/10.17721/2413-5372.2019.4/45-57.

Full text
Abstract
Artificial intelligence is a set of scientific methods, theories and techniques whose aim is to reproduce, by a machine, the cognitive abilities of human beings. An artificial intelligence system is capable of using big data for calculation, evaluation, study, deductive reasoning, abstract analysis and forecasting. The speed with which artificial intelligence processes information, and its efficiency in making procedural decisions, create a model for digital automation of procedural decisions. The purpose of the article is to investigate the use of artificial intelligence in the judicial systems of developed countries and to analyze the prospects for its use in criminal proceedings in Ukraine. Such automation simplifies the process of making similar decisions in similar proceedings, which increases efficiency and simplifies procedural decision-making in terms of procedural cost savings. Modern developments seek to ensure that machines perform complex tasks that were previously performed by humans. In the near future, accompanying organizational measures for the implementation of artificial intelligence, and its regulatory support in public authorities associated with the storage of big data, the processing of information based on mathematical algorithms, and decision-making based on artificial intelligence, will be an integral part of our society. Artificial intelligence technologies are already being implemented in the judicial systems of China, the United States of America, the United Kingdom, France and Argentina. In the near future, the chances of such technologies being used in the courts of general jurisdiction of Ukraine and in Ukrainian criminal proceedings can be assessed as extremely high, and their scope is not limited to the work of artificial intelligence in court: one can also speak of artificial intelligence in the activities of the prosecutor and the police.
The paper examines the use of artificial intelligence in the judicial systems of developed countries and analyzes the prospects of its use in criminal proceedings in Ukraine. The following systems are reviewed: COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) in the United States of America, HART (Harm Assessment Risk Tool) in the United Kingdom, Prometea in Argentina, and the Compulsory Similar Cases Search and Reporting Mechanism in China. The advantages of artificial intelligence systems are analyzed and criticisms of their use are noted.
APA, Harvard, Vancouver, ISO, and other styles
36

Tkacheva, N. V. "ARTIFICIAL INTELLIGENCE AND CRIMINAL IDENTITY: THE RATIO". Issues of Law 20, no. 4 (2020): 56–60. http://dx.doi.org/10.14529/pro-prava200409.

Full text
Abstract
People tend to think about the consequences of the widespread use of artificial intelligence. Since criminology studies the socio-legal aspects of human life, in particular crime, the identity of the criminal, the determinants of crime, and the ways and means of crime prevention, the question arises of the criminological risks of using artificial intelligence. The article examines the types of artificial intelligence and the relation between the moral characteristics of artificial intelligence and those of humans. It considers issues related to the formation and development of the concept of artificial intelligence, and analyzes the possible criminological risks of using artificial intelligence in everyday life. The article analyzes the concept and development of artificial intelligence, the concept and components of criminological risk, and the concept of the criminal's personality as an object of criminological research, in order to draw conclusions about the impact of artificial intelligence on the criminal's personality.
APA, Harvard, Vancouver, ISO, and other styles
37

Maydanyk, Roman А., Nataliia І. Maydanyk and Maryna M. Velykanova. "Liability for damage caused using artificial intelligence technologies". Journal of the National Academy of Legal Sciences of Ukraine 28, no. 2 (June 25, 2021): 150–59. http://dx.doi.org/10.37635/jnalsu.28(2).2021.150-159.

Full text
Abstract
Artificial intelligence technologies, which have recently been developing rapidly, create, along with their indisputable advantages, many dangers whose realization causes harm. Compensation for such damage raises questions regarding the subjects involved, the act that caused the damage, causation, etc. The situation is further complicated by the imperfection of statutory regulation of relations on the use of artificial intelligence technologies and the insufficiency or ambiguity of judicial practice on compensation for damage caused using digital technologies. Therefore, the purpose of this publication is to outline approaches to applying legal liability for damage caused using artificial intelligence technologies. Based on a systematic analysis using dialectical, synergetic, comparative, logical-dogmatic, and other methods, the study analysed the state of legal regulation of liability for damage caused using artificial intelligence technologies and discusses approaches to the application of legal liability for damage caused using these technologies. In particular, it was concluded that despite several resolutions adopted by the European Parliament, relations involving the use of artificial intelligence technologies and the application of legal liability for damage caused by artificial intelligence have not received final statutory regulation. The regulatory framework is merely under development, and rules of conduct in the field of digital technologies are still being created. States, including Ukraine, are faced with the task of bringing legislation in the field of artificial intelligence technologies in line with international regulations in order to protect human and civil rights and freedoms and ensure proper guarantees for the use of such technologies. One of the priority areas of harmonisation of legislation is to address the issue of legal liability regimes for damage caused using artificial intelligence technologies. Such regimes today are strict liability and liability based on the principle of fault. However, the ability of a particular regime to perform the functions of deterring and compensating for damage caused using artificial intelligence technologies remains a matter of scientific discussion.
APA, Harvard, Vancouver, ISO, and other styles
38

Lee, GyeongMi. "Civil liability of errors in Artificial Intelligence software". Gachon Law Review 13, no. 1 (March 31, 2020): 183–210. http://dx.doi.org/10.15335/glr.2020.13.1.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Jongmo Yang. "The Risk of Artificial Intelligence: Liability and Regulation". Journal of hongik law review 17, no. 4 (December 2016): 537–65. http://dx.doi.org/10.16960/jhlr.17.4.201612.537.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Kim, Yong Joo. "Liability due to Patent Infringement by Artificial Intelligence". Inha Law Review: The Institute of Legal Studies Inha University 21, no. 2 (June 30, 2018): 65–99. http://dx.doi.org/10.22789/ihlr.2018.06.21.2.65.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Smith, Helen and Kit Fotheringham. "Artificial intelligence in clinical decision-making: Rethinking liability". Medical Law International 20, no. 2 (June 2020): 131–54. http://dx.doi.org/10.1177/0968533220945766.

Full text
Abstract
This article theorises, within the context of the law of England and Wales, the potential outcomes in negligence claims against clinicians and software development companies (SDCs) by patients injured due to AI system (AIS) use with human clinical supervision. Currently, a clinician will likely shoulder liability via a negligence claim for allowing defects in an AIS’s outputs to reach patients. We question if this is ‘fair, just and reasonable’ to clinical users: we argue that a duty of care to patients ought to be recognised on the part of SDCs as well as clinicians. As an alternative to negligence claims, we propose ‘risk pooling’ which utilises insurance. Here, a fairer construct of shared responsibility for AIS use could be created between the clinician and the SDC, thus allowing a rapid mechanism of compensation to injured patients via insurance.
APA, Harvard, Vancouver, ISO, and other styles
42

Kim, Nam-Wook. "Artificial intelligence legal challenges - Focusing on national liability -". Korean Public Land Law Association 93 (February 28, 2021): 189–216. http://dx.doi.org/10.30933/kpllr.2021.93.189.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Završnik, Aleš. "Criminal justice, artificial intelligence systems, and human rights". ERA Forum 20, no. 4 (February 20, 2020): 567–83. http://dx.doi.org/10.1007/s12027-020-00602-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Divino, Sthéfano Bruno Santos. "Critical considerations on Artificial Intelligence liability: e-personality propositions". Revista Eletrônica Direito e Sociedade - REDES 8, no. 2 (July 1, 2020): 193. http://dx.doi.org/10.18316/redes.v8i2.5614.

Full text
Abstract
This article discusses the notion of liability for acts performed by artificially intelligent entities. The first section analyses and defines the term artificial intelligence, while the second discusses and theorises the guidelines of the liability, in a broad sense, of these entities. Finally, it is proposed that, although such entities possess a certain degree of autonomy, there is no subjectivity in their designs. Therefore, attributing cause and personal responsibility for unlawful acts committed by an AI could render scientific and legal production unfeasible. As an alternative, the adoption of strict liability is proposed for resolving any disputes that may arise in this context. This reasoning is grounded in the deductive and integrated research methods, as well as in legislative hermeneutics.
APA, Harvard, Vancouver, ISO, and other styles
45

Nwafor, Anthony O. "Corporate Criminal Responsibility: A Comparative Analysis". Journal of African Law 57, no. 1 (February 1, 2013): 81–107. http://dx.doi.org/10.1017/s0021855312000162.

Full text
Abstract
This article focuses on the extent of a company's responsibility for the criminal conduct of its employees. It considers the initial reluctance of common law courts to hold corporations criminally responsible for offences requiring mens rea, a mental element not found in artificial persons. The courts overcame this initial difficulty with recourse to the identification doctrine, which seeks to attribute to a company the fault of certain of its officers. However, the restrictiveness and inconsistencies embodied in the various judicial statements of that doctrine precipitated recourse in some jurisdictions to civil law concepts, such as respondeat superior, vicarious liability and even strict liability, to found corporate criminal responsibility. The need to streamline the scope of, if not enhance, corporate criminal liability has engendered statutory reforms in some jurisdictions. The article considers reforms in Australia, the UK, Canada and the USA, in comparison with the situation in South Africa and Lesotho.
APA, Harvard, Vancouver, ISO, and other styles
46

Lapshin, V. F., S. A. Korneev and R. V. Kilimbaev. "The use of artificial intelligence in criminal law and criminal procedure systems". IOP Conference Series: Materials Science and Engineering 1001 (December 31, 2020): 012144. http://dx.doi.org/10.1088/1757-899x/1001/1/012144.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Nemeikšis, Giedrius. "ARTIFICIAL INTELLIGENCE AS LEGAL ENTITY IN THE CIVIL LIABILITY CONTEXT". Acta Prosperitatis 12 (2021): 89–102. http://dx.doi.org/10.37804/1691-6077-2021-12-89-102.

Full text
Abstract
The question of artificial intelligence as a legal entity is no longer purely theoretical, as technological progress and its practical application are increasingly common phenomena, while the artificial intelligence system's ability to learn and to make decisions autonomously gives it a subjective character and makes it increasingly difficult to regard it as merely a complex tool. The purpose of the research is therefore to analyse the legal basis of, and issues related to, the acceptance of artificial intelligence as a separate legal entity, and the peculiarities of civil liability for damage caused by artificial intelligence. The analysis was based on three issues: the concept of artificial intelligence, the possibility of recognizing it as a separate legal entity, and the peculiarities of civil liability for damage caused by it. The research employed the logical, teleological, systematic analysis, linguistic and synthesising methods, as well as the analysis of legal documents. The analysis shows that there is no single concept of artificial intelligence, although the specific elements of its definition identified here would simplify its legal regulation. Only a fully autonomous artificial intelligence has the potential to be recognized as a separate legal entity, in which case there is an objective need to review the framework of civil liability for damage caused by artificial intelligence in order to establish at least joint and several civil liability shared between it and the natural person responsible for it.
APA, Harvard, Vancouver, ISO, and other styles
48

Yastrebov, Oleg A. "Book Review: Begishev, I.R. & Khisamova, Z.I. (2021) Iskusstvennyy Intellekt i Ugolovnyy Zakon [Artificial Intelligence and Criminal Law]. Moscow: Prospekt". Ugolovnaya yustitsiya, no. 16 (2020): 130–32. http://dx.doi.org/10.17223/23088451/16/25.

Full text
Abstract
The article provides a brief review of the monograph Artificial Intelligence and Criminal Law. The book examines the criminal law and criminological aspects of using artificial intelligence for criminal purposes. Special attention is paid to the regulation of the legal personality of artificial intelligence as a fundamental issue in the delineation of responsibility. The book presents the results of a comprehensive theoretical and applied analysis of the main contemporary doctrines on the regulation of artificial intelligence and robotics, as well as of a comparative legal analysis of international legislation aimed at minimizing the criminological risks associated with the use of artificial intelligence.
APA, Harvard, Vancouver, ISO, and other styles
49

Wei, Lei. "Legal risk and criminal imputation of weak artificial intelligence". IOP Conference Series: Materials Science and Engineering 490 (April 12, 2019): 062085. http://dx.doi.org/10.1088/1757-899x/490/6/062085.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Aagaard, Lise. "Artificial intelligence decision support systems and liability for medical injuries". Journal of Research in Pharmacy Practice 9, no. 3 (2020): 125. http://dx.doi.org/10.4103/jrpp.jrpp_20_65.

Full text
APA, Harvard, Vancouver, ISO, and other styles