
Dissertations / Theses on the topic 'Ethics of artificial intelligence'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Ethics of artificial intelligence.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Piloidis, Loukas. "Ethics in Artificial Intelligence : How Relativism is Still Relevant." Thesis, Södertörns högskola, Medie- och kommunikationsvetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-41760.

Full text
Abstract:
This essay attempts to demarcate and analyse Artificial Intelligence ethics. Departing from the traditional division into normative, meta-, and applied ethics, a different split is made, inspired by the three most prominent schools of thought: deontology, consequentialism, and Aristotelian virtue ethics. The reason behind this alternative approach is to connect all three schools back to ancient Greek philosophy. Having shown that the majority of arguments derive from ancient Greek scholars (Socrates, Plato, Aristotle), a new voice is introduced into the discussion: Protagoras the Sophist. A strong advocate of what would later be conceived as humanism and relativism, Protagoras is used as a prism through which to examine a new ethical model based on the personalisation of agents. In other words, even though theories of objectivity have dominated contemporary discussions of finding a robust ethical model for Artificial Intelligence agents, there is potential in a subjective model, personalised to each and every user.
APA, Harvard, Vancouver, ISO, and other styles
2

Vaseigaran, Ajanth, and Gobi Sripathy. "Artificial Intelligence in Healthcare." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296643.

Full text
Abstract:
Healthcare systems play a critical role in ensuring people's health. Establishing accurate diagnoses is a vital element of this process. As sources highlight misdiagnoses and missed diagnoses as a common issue, a solution must be sought. Diagnostic errors are common in emergency departments, which are recognized as stressful work environments. Today's industries are forced to deal with rapidly changing technological advances that reshape systems, products, and services. Artificial Intelligence (AI) is one such technology that can work as a solution to diagnostic issues but comes with technical, ethical and legal challenges. Hence, the thesis investigates how AI can affect the accuracy of diagnosis as well as how its integration in healthcare relates to technical, ethical and legal aspects. The thesis begins with a literature review, which serves as a theoretical foundation and allows a conceptual framework to be formed. The conceptual framework is used to select interviewees, resulting in 12 interviews with professors, researchers, doctors and politicians. In addition, a survey is conducted to obtain the general public's opinion on the matter. The findings show that AI is already mature enough to make more accurate diagnoses than doctors and to relieve medical practitioners of administrative tasks. One obstacle is the incompleteness of available data, since laws hinder the sharing of patient data. Furthermore, AI algorithms must be fit for all social minorities and must not exhibit racial discrimination. The European AI Alliance was established in 2018 with the aim of keeping the technology in check. Similar initiatives can be created at the national and regional levels to maintain some form of control over its proper use.
APA, Harvard, Vancouver, ISO, and other styles
3

Alaieri, Fahad. "Ethics in Social Autonomous Robots: Decision-Making, Transparency, and Trust." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37941.

Full text
Abstract:
Autonomous decision-making machines – ranging from autonomous vehicles to chatbots – are already able to make decisions that have ethical consequences. If these machines are eventually deployed on a large scale, members of society will have to be able to trust the decisions that are made by these machines. For these machines to be trustworthy, their decisions must be overseen by socially accepted ethical principles; moreover, these principles and their role in machine decision-making must be transparent and explainable: it must be possible to explain why machine decisions are made, and such explanations require that the mechanisms involved in making them are transparent. Furthermore, manufacturing companies have a corporate social responsibility to design such robots in ways that make them not only safe but also trustworthy. Members of society will not trust a robot that works in mysterious, ambiguous, or inexplicable ways, particularly if this robot is required to make decisions based on ethical principles. The current literature on embedding ethics in robots is sparse. This thesis aims to partially fill this gap in order to help different stakeholders (including policy makers, the robot industry, robot designers, and the general public) to understand the many dimensions of machine-executable ethics. To this end, I provide a framework for understanding the relationships among different stakeholders who legislate, create, deploy, and use robots, and their reasons for requiring transparency and explanations. This framework aims to provide an account of the relationships between the transparency of the decision-making process in ethical robots, explanations for their behaviour, and the individual and social trust that results. This thesis also presents a model that decomposes the stages of ethical decision-making into their elementary components with a view to enabling stakeholders to allocate the responsibility for such choices. In addition, I propose a model for transparency which demonstrates the importance of and relationships between disclosure, transparency, and explanation, which are needed for societies to accept and trust robots. One of the important stakeholders of robotics is the general public and, in addition to providing an analytical framework with which to conceptualize ethical decision-making, this thesis also performs an analysis of opinions drawn from hundreds of written comments posted on public forums concerning the behaviour of socially autonomous robots. This analysis provides insights into the layperson's responses to machines that make decisions and offers support for policy recommendations that should be considered by regulators in the future. This thesis contributes to the area of ethics and governance of artificial intelligence.
APA, Harvard, Vancouver, ISO, and other styles
4

Samore', Erika <1997>. "Ethical implications of Artificial intelligence: cases from Italy." Master's Degree Thesis, Università Ca' Foscari Venezia, 2021. http://hdl.handle.net/10579/19784.

Full text
Abstract:
Contrary to public belief, Artificial Intelligence technologies are frequently embedded in everyday software and applications. On the one hand, they facilitate aspects of daily life: for example, instant translation tools make it possible to communicate with everyone regardless of language barriers, and it is easier to find a movie on Netflix because titles are rearranged to individual taste by a recommender system. On the other hand, the rise of AI tools is increasing discrimination, and hence inequality, often without people being aware of such problems. For instance, fitting candidates for a job position may not be called for an interview because their resume was not selected by biased software favouring a specific race or gender. After discussing the main findings and ethical implications of AI usage presented in today's literature, the aim of the paper is to analyse whether Italian companies developing such technologies are aware of the negative impacts these technologies have on final users and what steps they take to prevent them.
APA, Harvard, Vancouver, ISO, and other styles
5

Karnouskos, Stamatis. "The Interplay of Law, Robots and Society, in an Artificial Intelligence Era." Thesis, Umeå universitet, Juridiska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-150599.

Full text
Abstract:
The rapid advances in Artificial Intelligence (AI) and Robotics will have a profound impact on society and its laws. Robots, humanoid/anthropomorphic or not, e.g., self-driving cars, will have physical presence, take autonomous decisions and interact with all stakeholders in society. While some AI robots are already available, e.g., self-driving cars, their intelligence is expected to skyrocket in the next decades, and many prominent scientists predict that they will reach (and potentially surpass) human intelligence. The symbiosis with robots may lead to a civilizational change with far-reaching effects. In a future society where robots and humans are in a symbiotic relationship, philosophical, legal and societal questions on consciousness, citizenship, rights and the legal entity of robots are raised, such as: Should robots be recognized as a legal entity? How should criminal and civil law deal with their decisions? What are the societal implications? This work starts from theoretical viewpoints pertaining to law and society, is inspired by intersectional approaches, and utilizes existing literature in order to analyze the interplay of law, robots and society from different angles, such as legal, social, economic, gender and ethical perspectives. In addition, a discussion, especially pertaining to core elements of law and society, provides new insights into the challenges that may emerge. The analysis and discussions presented here make it evident that although robots are inexorably being integrated, legal systems as well as society are not prepared for their prevalence. What is clear is that robots and AI will create a new era for humanity. What that era might be is still under discussion and not well understood. As Stephen Hawking put it, "The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which." Overall, one has to consider that law is meant to be proactive in nature and ideally be in place to prevent problems before they happen, and therefore it is now time to start a detailed discussion, involving all stakeholders, and derive the necessary policies and frameworks for the key issues that pertain to law, robots and society. This work contributes to the multi-faceted aspects that such a discussion entails.
APA, Harvard, Vancouver, ISO, and other styles
6

Xu, Guo Dong. "Thoughts on Design Ethics Applied to Kitchen Management System in the Information and Digitization Age." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1523633653648084.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hugosson, Beatrice, Donna Dinh, and Gabriella Esmerson. "Why you should care: Ethical AI principles in a business setting : A study investigating the relevancy of the Ethical framework for AI in the context of the IT and telecom industry in Sweden." Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Företagsekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-44236.

Full text
Abstract:
Background: The development of artificial intelligence (AI) is ever increasing, especially in the telecom and IT industry, due to its great potential competitive advantage. However, AI is implemented at a fast pace in society with insufficient consideration for the ethical implications. Luckily, different initiatives and organizations are now launching ethical principles to prevent possible negative effects stemming from AI usage. One example is the Ethical Framework for AI by Floridi et al. (2018), who established five ethical principles for sustainable AI with inspiration from bioethics. Moreover, Sweden as a country is taking AI ethics seriously, since the government is on a mission to be the world leader in harnessing artificial intelligence. Problem: Research in the field of ethical artificial intelligence is increasing but is still in its infancy, where the majority of academic articles are conceptual papers. Moreover, the few frameworks that exist for responsible AI are not always action-guiding and applicable to all AI applications and contexts. Purpose: This study aims to contribute empirical evidence within the topic of artificial intelligence ethics and investigate the relevancy of an existing framework, namely the Ethical Framework for AI by Floridi et al. (2018), in the IT and telecom industry in Sweden. Method: A qualitative multiple-case study of ten semi-structured interviews with participants from the companies EVRY and Ericsson. The findings have then been connected to the literature within the field of artificial intelligence and ethics. Results: The most reasonable interpretation of the findings and analysis is that some parts of the framework are relevant, while others are not. Specifically, the principles of autonomy and non-maleficence seem to be applicable, while justice and explicability appear to be only partially supported by the participants, and beneficence is suggested not to be relevant for several reasons.
APA, Harvard, Vancouver, ISO, and other styles
8

Evans, Katherine. "The Implementation of Ethical Decision Procedures in Autonomous Systems : the Case of the Autonomous Vehicle." Thesis, Sorbonne université, 2021. http://www.theses.fr/2021SORUL003.

Full text
Abstract:
The ethics of emerging forms of artificial intelligence has become a prolific subject in both academic and public spheres. A great deal of these concerns flow from the need to ensure that these technologies do not cause harm—physical, emotional or otherwise—to the human agents with which they will interact. In the literature, this challenge has been met with the creation of artificial moral agents: embodied or virtual forms of artificial intelligence whose decision procedures are constrained by explicit normative principles, requiring the implementation of what is commonly called artificial morality into these agents. To date, the types of reasoning structures and principles which inform artificial morality have been of two kinds: first, an ethically maximal vision of artificial morality which relies on the strict implementation of traditional moral theories such as Kantian deontology or Utilitarianism, and second, a more minimalist vision which applies stochastic AI techniques to large data sets of human moral preferences so as to elicit or intuit general principles and preferences for the design of artificial morality. Taken individually, each approach is unable to fully answer the challenge of producing inoffensive behavior in artificial moral agents, most especially since both forms are unable to strike a balance between the ideal set of constraints which morality imposes on one hand, and the types of constraints public acceptability imposes, on the other. We provide an alternative approach to the design of artificial morality, the Ethical Valence Theory, whose purpose is to accommodate this balance, and apply this approach to the case of autonomous vehicles.
APA, Harvard, Vancouver, ISO, and other styles
9

Dinis, Filipa Alexandra dos Santos Pereira. "Principais dilemas éticos das novas tecnologias de informação : survey teórico exploratório." Master's thesis, Instituto Superior de Economia e Gestão, 2019. http://hdl.handle.net/10400.5/19318.

Full text
Abstract:
Master's in Human Resource Management
Over the years we have witnessed a historic technological development, one that opens new paths, allows the sharing of information at the speed of light all around the world, and gives rise to new concepts. The fast pace of technological and scientific progress offers new opportunities for the future while at the same time confronting us with new and challenging ethical dilemmas. The goal of this study is to identify the ethical dilemmas that exist in the new information technologies. To that end, an analysis was made of the definition of ethics and ethical dilemmas, based on the views of several authors, after which the analysis addressed technique, artificial intelligence and Big Data with respect to the ethical challenges they pose to modern-day society. The analysis was carried out as an exploratory theoretical survey through bibliographical research and the identification of articles from the Scopus database between 2016 and 2019. The study concludes by comparing the ethical dilemmas identified in technique, artificial intelligence and Big Data.
APA, Harvard, Vancouver, ISO, and other styles
10

Campano, Erik. "Artificially Intelligent Black Boxes in Emergency Medicine : An Ethical Analysis." Thesis, Umeå universitet, Institutionen för psykologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160696.

Full text
Abstract:
Artificially intelligent black boxes are increasingly being proposed for emergency medicine settings; this paper uses ethical analysis to develop seven practical guidelines for emergency medicine black box creation and use. The analysis is built around seven variations of a thought experiment involving a doctor, a black box, and a patient presenting chest pain in an emergency department. Foundational concepts, including artificial intelligence, black boxes, transparency methods, emergency medicine, and ethical analysis are expanded upon. Three major areas of ethical concern are identified, namely consent; culture, agency, and privacy; and fault. These areas give rise to the seven variations. For each, a key ethical question it illustrates is identified and analyzed. A practical guideline is then stated, and its ethical acceptability tested using consequentialist and deontological approaches. The applicability of the guidelines to medicine more generally, and the urgency of continued ethical analysis of black box artificial intelligence in emergency medicine, are clarified.
APA, Harvard, Vancouver, ISO, and other styles
11

Pistilli, Giada. "Pour une éthique de l'intelligence artificielle conversationnelle." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUL038.

Full text
Abstract:
This research aims to probe the ethical intricacies of conversational Artificial Intelligence (AI), specifically focusing on Large Language Models and conversational agents. This manuscript constructs a framework that melds empirical analysis with philosophical discourse. We aim to urgently advocate for a well-founded ethical structure for conversational AI, highlighting the necessity to involve all stakeholders, from developers to end-users. Firstly, we champion the integration of engineering and other scientific disciplines with philosophy, facilitating a more nuanced understanding of the ethical dimensions underpinning AI. This collaborative approach allows for a richer, more informed ethical discourse. Secondly, we advocate for the dynamic use of applied ethical frameworks as foundational guides for setting the initial objectives of an AI system. These frameworks serve as evolving tools that adapt to the ethical complexities encountered during development and deployment. Lastly, grounded in hands-on, interdisciplinary research, we make an argument for the prioritization of narrow, task-specific AI over Artificial General Intelligence, a stance that is based on the enhanced feasibility of ethical oversight and technical controllability. With this research, we aim to contribute to the literature on AI ethics, enriching the academic discourse in both philosophy and computer science.
APA, Harvard, Vancouver, ISO, and other styles
12

Claudé, Mélanie, and Dorian Combe. "THE ROLES OF ARTIFICIAL INTELLIGENCE AND HUMANS IN DECISION MAKING: TOWARDS AUGMENTED HUMANS? : A focus on knowledge-intensive firms." Thesis, Umeå universitet, Företagsekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-150022.

Full text
Abstract:
With the recent boom in big data and the continuous need for innovation, Artificial Intelligence is carving out a bigger place in our society. Through its computer-based capabilities, it brings new possibilities to tackle many issues within organizations. It also raises new challenges about its use and limits. This thesis aims to provide a better understanding of the roles of humans and Artificial Intelligence in the organizational decision-making process. The research focuses on knowledge-intensive firms. The main research question that guides our study is the following: How can Artificial Intelligence re-design and develop the process of organizational decision making within knowledge-intensive firms? We formulated three more detailed questions to guide us: (1) What are the roles of humans and Artificial Intelligence in the decision-making process? (2) How can organizational design support the decision-making process through the use of Artificial Intelligence? (3) How can Artificial Intelligence help to overcome the challenges experienced by decision makers within knowledge-intensive firms, and what are the new challenges that arise from the use of Artificial Intelligence in the decision-making process? We adopted an interpretivist paradigm together with a qualitative study, as presented in section 3. We investigated our research topic within two big IT firms and two real estate startups that are using AI. We conducted six semi-structured interviews to gain better knowledge and an in-depth understanding of the roles of humans and Artificial Intelligence in the decision-making process within knowledge-intensive firms. Our review led us to the theoretical framework explained in section 2, on which we based our interviews. The results and findings that emerged from the interviews follow the same structure as the theoretical review and provide insightful information with which to answer the research question. To analyze and discuss our empirical findings, which are summarized in chapter 5 and in a chart in appendix 4, we used the general analytical procedure for qualitative studies. The structure of chapter 5 follows the same order as the three sub-questions. The thesis highlights how a deep understanding of Artificial Intelligence and its integration in the process of organizational decision making in knowledge-intensive firms enables humans to be augmented and to make smarter decisions. It appears that Artificial Intelligence is used as a decision-making support rather than an autonomous decision maker, and that organizations adopt smoother and more collaborative designs in order to make the best of it within their decision-making process. Artificial Intelligence is an efficient tool for dealing with complex situations, whereas human capabilities seem to be more relevant in situations of uncertainty and ambiguity. Artificial Intelligence also raises new issues for organizations regarding its responsibility and acceptance by society, as there is a grey area surrounding machines with respect to ethics and law.
APA, Harvard, Vancouver, ISO, and other styles
13

Munyengeterwa, Tariro S. "The Impact of Artificial Intelligence in the Customer Journey: A Case Study of Bosch USA and Defy South Africa." Digital Commons @ East Tennessee State University, 2021. https://dc.etsu.edu/etd/3870.

Full text
Abstract:
Artificial intelligence (AI) continues to gain traction and is increasingly reshaping the media and marketing communications field. While significant research has been conducted on the impact of AI in other fields, there is little empirical evidence on how AI is affecting the customer journey. The present study explored both organizations’ current use of AI tools and how customer perceptions about AI affect AI usage and adoption through the lens of diffusion of innovation theory. The research was conducted using mixed-method qualitative research. In-depth interviews and a case study content analysis were conducted to collect and analyze the data. The results suggest that consumer perceptions about AI impact levels of adoption when AI is recognized, but there appears to be cognitive dissonance regarding what constitutes AI and complicit acceptance of some of its benefits. Companies in different geographical locations have different levels of AI adoption along the diffusion of innovation stages.
APA, Harvard, Vancouver, ISO, and other styles
14

Sharma, Agni. "Assigning Liability in an Autonomous World." Scholarship @ Claremont, 2017. http://scholarship.claremont.edu/cmc_theses/1531.

Full text
Abstract:
Liability laws currently in use rely on a fault-based system that focuses on a causal connection between driver actions and the resulting road accident. The role of the driver is set to reduce with the emergence of autonomous vehicles, so how will liability adapt to meet the needs of an autonomous world? The paper discusses possible frameworks of liability that could be implemented in the future, and accentuates the importance of the causal aspects of the current framework in the new system.
APA, Harvard, Vancouver, ISO, and other styles
15

Liliequist, Erik. "Artificial Intelligence - Are there any social obstacles? : An empirical study of social obstacles." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229506.

Full text
Abstract:
Artificial Intelligence is currently one of the most talked-about topics with regard to technical development. The possibilities are enormous and it might revolutionize how we live our lives. There is talk of robots and AI removing the need for human workers. At the same time, there are also those who view this as deeply troubling, either from an individual perspective, asking what we should do once we no longer need to work, or from an existential perspective, raising issues of what responsibilities we have as humans and what it means to be human. This study does not aim to answer these grand questions, but rather shifts the focus to the near future of three to five years, while retaining a focus on the social aspects of the development of AI. What are the perceived greatest social issues and obstacles for a continued implementation of AI solutions in society? To answer these questions, interviews have been conducted with representatives of Swedish society, ranging from politicians, unions and employers' organizations to philosophers and AI researchers. Further, a literature study has been made of similar studies, comparing and reflecting their findings against the views of the interviewees. In short, the interviewees have a very positive view of AI in the near future, believing that a continued implementation would go relatively smoothly. Yet they pointed to a few key obstacles that might need to be addressed. Mainly, there is a risk of increased polarization of wages and power due to AI, although they stressed that this depends on how we use the technology rather than on the technology itself. Another obstacle was connected to individual uncertainty about the development of AI, causing a fear of what might happen. Further, several different ethical issues were raised. There was agreement that we need to address these as soon as possible, but the interviewees did not view this as an obstacle.
APA, Harvard, Vancouver, ISO, and other styles
16

Liljekvist, Mirella, and Ellen Lindberg. "Ethical views on the influence of interactive systems." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20725.

Full text
Abstract:
Humans have always, in different ways, influenced each other's behaviors, and with today's technological development there are greater opportunities to analyze large amounts of data and find patterns that were previously difficult to find. With artificial intelligence, machine learning and data mining techniques, it is becoming ever easier for companies to tailor individual messages to users - both to make the user experience better in interactive systems, and also to influence users to change their behavior in specific situations. This became clear in 2018 when the Cambridge Analytica scandal became known to the public, where data was extracted to influence users in the 2016 US presidential election. This made it clear that the technical possibilities for collecting, analyzing and using data were greater than previously known. This study was conducted in order to examine users' ethical views on influence through interactive systems. The study is based on previous research on influence, the ethics of IT and today's digital data analysis techniques. Through a survey and interviews, users' views on influence in different situations and through different devices were examined. The study also examined whether there is any difference in views between users of interactive systems and developers of interactive systems. The results of the study show that users do not distinguish between influence through different devices, but that the situations in which the influence occurs determine the users' ethical views on it. Developers generally feel more positive about influence through interactive systems than users do, but the results of the study still show that the situations play a greater role than the technical devices. The study's interviews confirmed the results of the survey. It became clear that users of interactive systems do not want to be influenced, but that many of them also feel aware enough of influence that they can choose not to be influenced.
APA, Harvard, Vancouver, ISO, and other styles
17

Almer, Jasmine, and Julia Ivert. "Artificiell Intelligens framtidsutsikter inom sjukvården : En studie om studerande sjuksköterskors attityder gällande Artificiell Intelligens inom sjukvården." Thesis, Uppsala universitet, Institutionen för informatik och media, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413725.

Full text
Abstract:
Artificial Intelligence is an area that has developed radically in recent years and is constantly evolving across several industries. In this thesis, a qualitative case study is conducted that addresses student nurses' attitudes regarding Artificial Intelligence in Swedish healthcare and its use in the future. Through interviews conducted with student nurses at Uppsala University, the empirical material is analyzed using the Technology Acceptance Model (TAM) to finally produce a result regarding the future use of Artificial Intelligence in healthcare. The analysis resulted in two evident areas of AI usage: decision-making AI and non-decision-making AI, where the participants' attitudes differed between the two. The attitudes towards decision-making AI were rather negative, partly because of the lack of responsibility and accountability, and partly because of the reduced patient contact it would entail. The attitudes towards non-decision-making AI were, in contrast, positive, partly because of the efficiency that using AI technology as an aid would bring and the areas of improvement it would entail in the profession, for example by creating time for more care and attention, something the nursing students regard as the main focus of health and social care. Finally, the results of the analysis are discussed based on various aspects such as ethics and morals, the profession itself and further research.
APA, Harvard, Vancouver, ISO, and other styles
18

Wortham, Robert H. "Using other minds : transparency as a fundamental design consideration for artificial intelligent systems." Thesis, University of Bath, 2018. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.760960.

Full text
Abstract:
The human cognitive biases that result in anthropomorphism, the moral confusion surrounding the status of robots, and wider societal concerns related to the deployment of artificial intelligence at scale all motivate the study of robot transparency --- the design of robots such that they may be fully understood by humans. Based on the hypothesis that robot transparency leads to better (in the sense of more accurate) mental models of robots, I investigate how humans perceive and understand a robot when they encounter it, both in online video and direct physical encounter. I also use Amazon Mechanical Turk as a platform to facilitate online experiments with larger population samples. To improve transparency I use a visual real-time transparency tool providing a graphical representation of the internal processing and state of a robot. I also describe and deploy a vocalisation algorithm for transparency. Finally, I modify the form of the robot with a simple bee-like cover, to investigate the effect of appearance on transparency. I find that the addition of a visual or vocalised representation of the internal processing and state of a robot significantly improves the ability of a naive observer to form an accurate model of a robot's capabilities, intentions and purpose. This is a significant result across a diverse, international population sample and provides a robust result about humans in general, rather than one geographic, ethnic or socio-economic group in particular. However, all the experiments were unable to achieve a Mental Model Accuracy (MMA) of more than 59%, indicating that despite improved transparency of the internal state and processing, naive observers' models remain inaccurate, and there is scope for further work. A vocalising, or 'talking', robot greatly increases the confidence of naive observers to report that they understand a robot's behaviour when observed on video. Perhaps we might be more easily deceived by talking robots than silent ones. A zoomorphic robot is perceived as more intelligent and more likeable than a very similar mechanomorphic robot, even when the robots exhibit almost identical behaviour. A zoomorphic form may attract closer visual attention, and whilst this results in an improved MMA, it also diverts attention away from transparency measures, reducing their efficacy to further increase MMA. The trivial embellishment of a robot to alter its form has significant effects on our understanding and attitude towards it. Based on the concerns that motivate this work, together with the results of the robot transparency experiments, I argue that we have a moral responsibility to make robots transparent, so as to reveal their true machine nature. I recommend the inclusion of transparency as a fundamental design consideration for intelligent systems, particularly for autonomous robots. This research also includes the design and development of the 'Instinct' reactive planner, developed as a controller for a mobile robot of my own design. Instinct provides facilities to generate a real-time 'transparency feed'--- a real-time trace of internal processing and state. Instinct also controls agents within a simulation environment, the 'Instinct Robot World'. Finally, I show how two instances of Instinct can be used to achieve a second order control architecture.
APA, Harvard, Vancouver, ISO, and other styles
19

Boulianne, Gobeil Gabriel. "The Next Arms Race? A Military Ethical Reflection on the Effects of Artificial Superintelligence on Drone Warfare and American Counterterrorism." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32247.

Full text
Abstract:
The trend towards the automation and robotization of warfare, enabling the exercise of violence from a distance, has long been present, dating back to such inventions as the bow and arrow, which allowed an archer to kill from afar. Today's military drones permit an extreme separation between rivals. James Der Derian's concept of virtuous war encapsulates a certain normative view of current and future wars. A rationale of the actors waging virtuous war is that violence is used in a cleaner way, resulting in ever fewer battle deaths. In this thesis, I argue that the next step in the progression of military weaponry is the inclusion of artificial superintelligence (hereinafter ASI) in the American drone and counterterrorism program. While incorporating this technology into the American war machine may represent a moral objective, I conclude that the use of ASI for military purposes is immoral because accountability becomes indeterminable.
APA, Harvard, Vancouver, ISO, and other styles
20

Stenberg, Louise, and Svante Nilsson. "Factors influencing readiness of adopting AI : A qualitative study of how the TOE framework applies to AI adoption in governmental authorities." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279583.

Full text
Abstract:
Interest in artificial intelligence is increasing, and it is creating value for many organizations world-wide. Because of this potential, governmental authorities in Sweden that work with large volumes of text documents are interested in natural language processing models, a subfield of AI, and have started to incorporate them into their organizations. This study explores and discusses factors that are influential for governmental authorities when adopting AI and highlights ethical aspects that are important for the adoption process. This is explored through a literature review, which leads to a frame of reference built on the Technology-Organization-Environment (TOE) framework, which is then tested through interviews with project leaders and AI architects at governmental authorities who are working with language models. The results show that the TOE framework is suitable for analysing AI adoption by governmental authorities. The factors found to be influential are Relative Advantage, Compatibility and Complexity, Management support, Staff capacity, Regulatory environment and Cooperation. Furthermore, the findings suggest that AI Ethics and Data access are influential in all three contexts of technology, organization and environment. The findings of this study confirm results from previous research regarding adoption of new technology, and also contribute to the literature by exploring the adoption process of AI in governmental authorities, which had not been widely explored beforehand.
APA, Harvard, Vancouver, ISO, and other styles
21

Borg, Stina, and Michael Ferlin. "Bibliotekarien som access point : En undersökning av artificiell intelligens inom svenska bibliotek." Thesis, Uppsala universitet, Institutionen för ABM, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-449446.

Full text
Abstract:
Introduction. Artificial intelligence is growing in society at large and within libraries specifically. There are both positive and negative consequences of this development. In this essay, ethical issues concerning bias, transparency and integrity are examined in a Library and Information Science context. Method and theory. Qualitative survey questionnaires, with questions about how the libraries work with AI, the informants' thoughts on ethical problems with it, and how they see the library's future with AI, were created and sent to employees at research libraries in Sweden. Nine answers to the questionnaires and one article formed the data for analysis. Employing Anthony Giddens's structuration theory, the essay uses concepts such as access point, ontological security and reembedding of trust. Analysis. A qualitative content analysis was carried out on the data. The analysis employed a thematic sectioning of the analyzed text, where the themes were developed through content analysis of the data in relation to the previous research presented in the essay. Results. Five different themes were drawn from the data: bias, integrity, transparency, curation, and media and information literacy. The answers were sectioned into these themes and compared to what previous research says about the subject. The results are presented in a thematic overview where each section analyses the answers in the specific theme. Conclusion. When using and developing AI, libraries can use ethical guidelines and curation to be aware of and counteract building bias into the systems. An important part of the libraries' work for the development of a democratic society is media and information literacy and teaching about information technology, of which AI and the way it is developed is a part. This is a two-year master's thesis in Library and Information Science.
APA, Harvard, Vancouver, ISO, and other styles
22

Haviland, Hannah. ""The Machine Made Me Do It!" : An Exploration of Ascribing Agency and Responsibility to Decision Support Systems." Thesis, Linköping University, Centre for Applied Ethics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2922.

Full text
Abstract:

Are agency and responsibility solely ascribable to humans? The advent of artificial intelligence (AI), including the development of so-called “affective computing,” appears to be chipping away at the traditional building blocks of moral agency and responsibility. Spurred by the realization that fully autonomous, self-aware, even rational and emotionally-intelligent computer systems may emerge in the future, professionals in engineering and computer science have historically been the most vocal to warn of the ways in which such systems may alter our understanding of computer ethics. Despite the increasing attention of many philosophers and ethicists to the development of AI, there continues to exist a fair amount of conceptual muddiness on the conditions for assigning agency and responsibility to such systems, from both an ethical and a legal perspective. Moral and legal philosophies may overlap to a high degree, but are neither interchangeable nor identical. This paper attempts to clarify the actual and hypothetical ethical and legal situations governing a very particular type of advanced, or “intelligent,” computer system: medical decision support systems (MDSS) that feature AI in their system design. While it is well-recognized that MDSS can be categorized by type and function, further categorization of their mediating effects on users and patients is needed in order to even begin ascribing some level of moral or legal responsibility. I conclude that various doctrines of Anglo legal systems appear to allow for the possibility of assigning specific types of agency – and thus specific types of legal responsibility – to some types of MDSS. Strong arguments for assigning moral agency and responsibility are still lacking, however.

APA, Harvard, Vancouver, ISO, and other styles
23

Mellberg, Amanda, and Emma Skog. "Artificiell Intelligens inom rekryteringsprocessen : objektivitet med subjektiv data?" Thesis, Högskolan i Borås, Akademin för bibliotek, information, pedagogik och IT, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-15078.

Full text
Abstract:
Artificial Intelligence (AI) has several areas of use, such as robotics, facial recognition and decision-making support. Organizations will use AI more to meet challenges within Human Resources (HR) over the next five years, indicating that AI is likely to become a more common occurrence in the recruitment process. One of the most important assets of a company is its employees, and incorrect recruitments can lead to high costs. With machine learning and AI systems as decision makers, it may be important to think about what data these systems are provided with, since one of the risks of machine learning within AI is that you do not know what the machines learn as they learn by themselves. A larger amount of data does not necessarily lead to more subjective results, but the risk of directly encoding discrimination still exists, because the data the AI system is provided with can itself contain bias. It has also been found that candidates do not want to be judged on political views, relationships or anything else that can be gained through big data and data mining. The purpose of the study is to provide a deeper understanding of what is needed to automate the recruitment process using AI and machine learning, and to design a list of how AI can be a support, for companies to keep in mind during a possible implementation. Three methods of empirical data collection were chosen for the study, all of them qualitative. Interviews and a survey collected the data, which was analyzed in Excel 2016 and Google Docs. The interviews were conducted in several stages and were aimed at two employees working at two different recruiting companies. The survey was aimed primarily at individuals who will graduate, or have graduated, within the coming year. The selection of participants was made according to the purpose of the study and, on some occasions, a convenience sample was used. The result shows that the recruiters spend a lot of time screening candidates and do this manually. The survey shows that future candidates have a neutral stance when it comes to trusting an AI system to perform the screening process. The respondent in the follow-up interview says that automation using AI would facilitate the work and agrees with the survey respondents regarding the pros and cons of AI, but at the same time would not rely on the results. Further, the respondent believes that automation is the direction in which the recruitment process will continue. The result of the study may be used by recruitment companies that are considering introducing AI into their recruitment processes.
APA, Harvard, Vancouver, ISO, and other styles
24

Beltran, Nicole. "Artificial Intelligence in Lethal Automated Weapon Systems - What's the Problem? : Analysing the framing of LAWS in the EU ethics guidelines for trustworthy AI, the European Parliament Resolution on autonomous weapon systems and the CCW GGE guiding principles." Thesis, Uppsala universitet, Teologiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412188.

Full text
Abstract:
Lethal automated weapon systems (LAWS) are developed and deployed by a growing number of state and non-state actors, although no international legally binding framework exists as of yet. As a first attempt to regulate LAWS, the UN appointed a group of governmental experts (GGE) to create guiding principles on the issue of LAWS. A few years later the EU appointed an expert group to create the Ethics guidelines for trustworthy AI, and the European Parliament passed a resolution on the issue of LAWS. This thesis attempts to make visible the underlying norms and discourses that have shaped these guiding principles and guidelines. By scrutinizing the documents through the ‘What’s the problem presented to be’-approach, the discursive practices that enable the framing are illuminated. The obscured problems not spoken of in the EU and UN documents are emphasised, suggesting that both sets of documents oversimplify and downplay the dangers of LAWS, leaving issues such as gender repercussions, human dignity and the dangers of the sophisticated weapon systems themselves largely unproblematised and hidden behind their suggested dichotomised and anthropocentric solutions, which largely result in a simple “add human and stir” kind of solution. The underlying cause of this tendency seems to stem from a general unwillingness of states to regulate, as LAWS are quickly becoming a matter of haves and have-nots and may potentially change warfare as we know it. A case can also be made that AI’s ‘Hollywood problem’ influences the framing of LAWS, where the dystopian Terminator-like depiction in popular culture can be seen reflected in international policy papers and statements.
APA, Harvard, Vancouver, ISO, and other styles
25

Pouget, Jonathan. "La réparation du dommage impliquant une intelligence artificielle." Thesis, Aix-Marseille, 2019. http://theses.univ-amu.fr.lama.univ-amu.fr/191212_POUGET_871qcngtj900zlfid640gfeuf393zytl_TH.pdf.

Full text
Abstract:
Le droit positif de la responsabilité civile ainsi que les solutions actuelles d’indemnisation des victimes permettent-ils une réparation adéquate des dommages causés, directement ou non, par une intelligence artificielle ? Cette thèse dessine les contours d'une réponse négative sans pour autant prôner un bouleversement du droit positif. Elle tend à démontrer à la fois la nécessité d'interprétation du droit de la responsabilité civile et du droit des assurances, et la nécessité d'évolution d'une partie du droit de la responsabilité civile et des contrats d'assurance de responsabilité. La piste d'une personnalité juridique aux fonctions encadrées et attribuable aux intelligences artificielles sera également à ces fins, étudiée
Do the current tort law and insurance law allow adequate compensation for damage caused, directly or indirectly, by an artificial intelligence? Firstly, this thesis demonstrates that tort law and insurance law need to be interpreted. Secondly, it demonstrates that part of tort law and of liability insurance contracts has to evolve. The avenue of a legal personhood with supervised functions, attributable to artificial intelligences, will also be studied for these purposes.
APA, Harvard, Vancouver, ISO, and other styles
26

Olsson, Catrin, and Sara Ekström. "Utmaningar med AI för patientcentrerad och jämlik vård : En studie om varför hälso- och sjukvården möter utmaningar vid implementering av AI i syfte att stödja patientcentrerad och jämlik vård." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44280.

Full text
Abstract:
Efter en kartläggning som Socialstyrelsen utfört över hälso- och sjukvårdens nyttjande av Artificiell Intelligens (AI), har det konstaterats att det finns utrymme och potential för ett mer effektivt användande av tekniken. Men trots att potential och fördelar med AI presenteras, möter branschen motgångar och utmaningar vid införandet av tekniken. I rapporten presenteras några av de utmaningar som kan uppstå vid arbetet med att införa AI-teknik inom hälso- och sjukvården samt varför de förväntade effekterna inte uppnås. Studien har utgått från en kvalitativ forskningsansats i syfte att genomföra empirisk datainsamling genom semistrukturerade intervjuer och det insamlade materialet har bearbetats utifrån en tematisk analys. Därefter presenteras och sammankopplas resultatet med tidigare forskning och litteratur för att avslutningsvis presentera bakomliggande anledningar till utmaningar vid implementering av AI inom hälso- och sjukvården.
The National Board of Health and Welfare (Socialstyrelsen) has mapped the extent to which healthcare is using Artificial Intelligence (AI), and it has been established that there is potential for a more efficient use of the technology. But despite the potential and benefits that AI could entail, the healthcare industry faces setbacks and challenges in implementing the technology. This paper presents some of the challenges that may arise in the process of introducing AI technology to healthcare and also why the expected effects are not achieved. The study is based on a qualitative research approach, with the aim of conducting empirical data collection through semi-structured interviews. The collected material was then processed based on a thematic analysis. The results are then presented and linked to previous research and literature, in order to finally present the underlying reasons for the challenges of implementing AI in healthcare.
APA, Harvard, Vancouver, ISO, and other styles
27

Papineau, Christelle. "Droit et intelligence artificielle : essai pour la reconnaissance du droit computationnel." Thesis, Paris 1, 2019. http://www.theses.fr/2019PA01D049.

Full text
Abstract:
La porosité entre le droit et l’intelligence artificielle a aujourd’hui soixante ans. Cette porosité qui devait, à l’origine, permettre aux informaticiens d’améliorer les performances cognitives de leurs algorithmes, ce, grâce au syllogisme juridique, a très vite suscité, en retour, l’intérêt des juristes pour l’informatique. Ces derniers ont compris qu’ils pouvaient tirer avantage des opportunités offertes par l’automatisation du droit, grâce aux nouvelles technologies. Ainsi, à l’aube de la décennie 2020, le machine learning, l’un des sous champ disciplinaires les plus récents de l’intelligence artificielle, permet autant d’améliorer les modalités d’application des lois et sanctions que de calculer les citoyens (police prédictive, justice prédictive, mais aussi accès au droit et à la justice en ligne). Ce déploiement du machine learning, qui s’opère autant dans les pays common lawyer que dans les pays de tradition civiliste, s’opère surtout dans un contexte d’a-légalité, au préjudice des droits des citoyens calculés. Les juristes nord-américains ont résolu ce défaut d’encadrement législatif en usant de la technique du judge made law. Cette technique, qui présente l’avantage de son adaptabilité, à la fois à chaque situation de vide législatif, mais aussi aux tendances de la société et des époques, sera à nouveau mobilisée, à l’avenir, lorsque les prochains sous champs disciplinaires de l’intelligence artificielle, dont le deep learning, seront déployés. La souplesse du système common lawyer nord américain peut servir de modèle au système français, pour mettre en place un cadre juridique régulant, aujourd’hui, le machine learning et, demain, les prochains sous champs disciplinaires de l’intelligence artificielle. Il convient toutefois d’adapter la dimension casuistique du modèle nord-américain au système français, en proposant d’user des normativités alternatives, et de l’éthique, comme instruments équivalant à la souplesse du judge made law. Cette approche permettrait d’observer les questions de droit qui se posent à chaque progrès technologique et d’envisager des réponses juridiques, dans un contexte enclin à légiférer une fois que l’ensemble des achoppements techniques et des problèmes de droit, que ceux-ci posent, a été saisi. Cette méthode, qui impose d’observer les nouveaux phénomènes technologiques au prisme des normativités alternatives avant de légiférer, devrait permettre d’orienter et d’influencer le travail de légistique du législateur français pour améliorer l’effectivité du droit du numérique quant à la protection des personnes calculées par des algorithmes, déployés autant par les administrations que par les sociétés de type legaltech
The porosity between law and artificial intelligence is now sixty years old. This porosity, which was originally intended to enable computer scientists to improve the cognitive performance of their algorithms thanks to the legal syllogism, quickly aroused, in return, the interest of jurists in computer science. They understood that they could take advantage of the opportunities offered by the automation of the law, thanks to new technologies. Thus, at the dawn of the 2020s, machine learning, one of the most recent disciplinary subfields of artificial intelligence, makes it possible both to improve how laws and sanctions are applied and to score citizens (predictive policing, predictive justice, but also online access to legal information and justice). This deployment of machine learning, which operates as much in common law countries as in countries of civil law tradition, takes place mainly in a context of a-legality, to the detriment of the rights of the scored citizens. North American lawyers have resolved this lack of legislative framework by using the technique of judge-made law. This technique, which has the advantage of adaptability, both to each situation of legislative gap and to the tendencies of society and the times, will be mobilized again, in the future, when the next disciplinary subfields of artificial intelligence, including deep learning, are deployed. The flexibility of the North American common law system can serve as a model for the French system, to set up a legal framework regulating machine learning today and, tomorrow, the next disciplinary subfields of artificial intelligence. However, the casuistic dimension of the North American model should be adapted to the French system, proposing the use of alternative normativities and ethics as instruments equivalent to the flexibility of judge-made law. This approach would make it possible to observe the legal issues that arise with each technological advance and to consider legal responses, in a context inclined to legislate once the whole set of technical obstacles and legal problems they pose has been grasped. This method, which requires observing new technological phenomena through the prism of alternative normativities before legislating, should make it possible to guide and influence the legislative work of the French legislator to improve the effectiveness of digital law in terms of protecting people scored by algorithms, deployed as much by administrations as by legaltech-type companies.
APA, Harvard, Vancouver, ISO, and other styles
28

Umurerwa, Janviere, and Maja Lesjak. "AI IMPLEMENTATION AND USAGE : A qualitative study of managerial challenges in implementation and use of AI solutions from the researchers’ perspective." Thesis, Umeå universitet, Institutionen för informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-187810.

Full text
Abstract:
Artificial intelligence (AI) technologies are developing rapidly and cause radical changes at the organizational, company, societal, and individual levels. Managers are facing new challenges that they might not be prepared for. In this work, we seek to explore, from the researchers’ perspective, the managerial challenges experienced while implementing and using AI technologies. Moreover, we explore how appropriate ethical deliberations should be applied when using big data in connection with AI, and what it means to understand or define it. We describe qualitative research based on triangulation, which includes related literature, in-depth interviews with researchers working on related topics from various fields, and a focus group discussion. Our findings show that AI algorithms are not universal, objective, or neutral, and researchers therefore believe that managers need a solid understanding of the complexity of AI technologies and the nature of big data. These are necessary to develop sufficient purchase capabilities and apply appropriate ethical considerations. Based on our results, we believe researchers are aware that these issues should be handled, but that they have so far received too little attention. Therefore, we suggest further discussion and encourage research in this field.
APA, Harvard, Vancouver, ISO, and other styles
29

Friedman, Cindy. "The Ethics of Artificially Intelligent Sexbots : A Philosophical Investigation into the Ethical Conditions for Human-Sexbot Interaction." Diss., University of Pretoria, 2020. http://hdl.handle.net/2263/78033.

Full text
Abstract:
Discussion surrounding the development of artificial intelligence (AI), and its moral and social impacts, is one of the most important conversations of our time. Research within the ethics of AI is vital in ensuring that AI is developed and utilised in such a way that it remains beneficial to society. AI technology is utilised in various ways. This dissertation, however, focuses upon the use of AI technology in the form of sex robots (‘sexbots’). Although research on sexbots within the context of AI and human-robot interaction (HRI) is gaining momentum, there is space for much more work to be done. Where a lot of focus has been on the moral and social impact of sexbots upon society, there is a gap in the literature as far as putting forward ways in which we may deal with these impacts is concerned – particularly preventing negative impacts, so that sexbot technology may remain beneficial to society. As such, my research enters this relatively new debate by way of not only drawing attention to the moral and social ramifications of sexbots, but also investigating ethical conditions for our interaction with them that may ensure sexbot technology is beneficial to society. As such, this dissertation specifically conducts a philosophical investigation into the ethical conditions for human-sexbot interaction. It does so by way of investigating the moral and social problematics that may arise in relation to our interacting with sexbots in terms of what we may use them for (i.e. the various roles they may fulfill in society), as well as how we may use them (i.e. how we interact with them). Given the moral problematics that are discussed, the dissertation puts forward possible ethical conditions for human-sexbot interaction that we may consider as a way to try and ensure that sexbot technology is mainly beneficial to us as human interactants. Specifically, this dissertation emphasises that when we consider the moral and social impacts of sexbots, as well as possible ethical conditions in relation to our interaction with them so as to prevent potential moral and social harms, we must do so from an anthropocentric perspective i.e. focus on how we use sexbots, as opposed to what we use them for. This is because we should not deem sexbot technology itself to be essentially ‘good’ or ‘bad’. It is never technology itself that is good, bad, or neutral, but rather how we use it that deems it so. As such, we are the masters of our own moral fates in the context of ensuring that sexbot technology is beneficial to society.
Dissertation (MPhil)--University of Pretoria, 2020.
Centre for AI Research (CAIR)
Philosophy
MPhil
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
30

Radosavljevic, Bojan, and Axel Kimblad. "Etik och säkerhet när AI möter IoT." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20613.

Full text
Abstract:
I dagens samhälle går den tekniska utvecklingen fort framåt. Artificiell intelligens och Internet of Things är två tekniker inom utvecklingen vars popularitet har ökat på senare tid. Dessa tekniker i integration har visat sig kunna bidra med stora verksamhetsnyttor, bland annat i form av ökad precision vad gäller analyser, bättre kundvärde och effektivisering av ”downtime”. Med ny teknik kommer även utmaningar. I takt med att teknologierna ständigt växer uppstår frågor kring säkerhet och etik och hur detta ska hanteras. Målet med denna studie var att ta reda på hur experter värderar etiska frågor när artificiell intelligens används i kombination med Internet of Things-enheter. Vi fokuserade på följande forskningsfråga för att nå vårt mål: Hur värderas frågor om etik när artificiell intelligens används i kombination med Internet of Things? Resultatet vi kom fram till visar att både forskare och näringslivet värderar de etiska aspekterna högt. Studien visar även att de ansåg att teknikerna kan vara lösningen till många samhällsproblem men att etiken bör vara ett ämne som löpande bör diskuteras.
In today's society, technological developments are moving fast. Artificial intelligence and the Internet of Things are two technologies within this development whose popularity has increased in recent years. These technologies in integration have proven to be able to contribute major business benefits, including increased precision in analyses, better customer value and more efficient handling of downtime. New technology also presents challenges. As the technologies are constantly growing, issues arise regarding safety and ethics and how these should be managed. The aim of this study is to find out how experts value ethical issues when artificial intelligence is used in combination with Internet of Things devices. We focused on the following research question to reach our goal: How are ethical issues evaluated when artificial intelligence is used in combination with the Internet of Things? The result we found shows that both researchers and the business world value the ethical aspects highly. The study also shows that they considered the techniques to be the solution to many societal problems, but that ethics should be a topic that is discussed on an ongoing basis.
APA, Harvard, Vancouver, ISO, and other styles
31

Shapovalenko, A. "Artificial intelligence." Thesis, Сумський державний університет, 2013. http://essuir.sumdu.edu.ua/handle/123456789/33830.

Full text
Abstract:
Artificial intelligence technology will soon be able to help ensure the safety of your family, protecting it from the threat of international and domestic burglaries. In the USA the Department of Security also implements special software in its monitoring system, which scans phone calls and other messages. These programs can rapidly process large volumes of data and are even able to distinguish a normal conversation from a potential threat. Older security systems simply relied on sensors that reacted to movement. Modern systems rely directly on artificial intelligence, which helps detect the presence of a stranger in the house.
APA, Harvard, Vancouver, ISO, and other styles
32

Башлак, Ірина Анатоліївна, Ирина Анатольевна Башлак, Iryna Anatoliivna Bashlak, and I. Kurinnyy. "Artificial intelligence." Thesis, Видавництво СумДУ, 2011. http://essuir.sumdu.edu.ua/handle/123456789/13517.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Дядечко, Алла Миколаївна, Алла Николаевна Дядечко, Alla Mykolaivna Diadechko, and M. Chernyakova. "Artificial intelligence." Thesis, Вид-во СумДУ, 2009. http://essuir.sumdu.edu.ua/handle/123456789/16890.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Bekeniova. "ARTIFICIAL INTELLIGENCE." Thesis, Київ 2018, 2018. http://er.nau.edu.ua/handle/NAU/33666.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Kessing, Maria. "Fairness in AI : Discussion of a Unified Approach to Ensure Responsible AI Development." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299936.

Full text
Abstract:
Besides entailing various benefits, AI technologies have also led to increased ethical concerns. Due to the growing attention, a large number of frameworks discussing responsible AI development have been released since 2016. This work aims at analyzing some of these proposals to answer the question (1) “Which approaches can be found to ensure responsible AI development?” For this, the theory section of this paper looks at various approaches, including (inter-)governmental regulations, research organizations and private companies. Further, expert interviews have been conducted to answer the second research question (2) “How can a unified solution be reached to ensure responsible AI development?” The results of the study identify governments as the main driver of this process. Overall, a detailed plan is necessary that brings together the public and private sectors as well as research organizations. The paper also points out the importance of education with regard to making AI explainable and comprehensible for everyone.
Utöver de fördelar som AI-teknologier har bidragit med, så har även etiska dilemman och problem uppstått. På grund av ökat fokus, har ett stort antal förslag till system och regelverk som diskuterar ansvarstagande AI-utveckling publicerats sedan 2016. Denna rapport kommer analysera ett urval av dessa förslag med avsikt att besvara frågan (1) “Vilka tillvägagångssätt kan försäkra oss om en ansvarsfull AI-utveckling?” För att utforska denna fråga kommer denna rapport analysera olika metoder och tillvägagångssätt, på bland annat mellanstatliga- och statliga regelverk, forskningsgrupper samt privata företag. Dessutom har expertintervjuer genomförts för att besvara den andra problemformuleringen (2) “Hur kan vi nå en övergripande, gemensam, lösning för att försäkra oss om ansvarsfull AI-utveckling?” Denna rapport redogör för att statliga organisationer och myndigheter är den främsta drivkraften för att detta ska ske. Vidare krävs en detaljerad plan som knyter ihop forskningsgrupper med den offentliga- och privata sektorn. Slutligen anser rapporten även att det är av stor vikt för vidare utbildning när det kommer till att göra AI förklarbart och tydligt för alla.
APA, Harvard, Vancouver, ISO, and other styles
36

Klee, Christopher. "AI in Marketing – Curse or Blessing? : Impacts of Programmatic Advertising and Personalized Content on Society." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299572.

Full text
Abstract:
With the help of Programmatic Advertising and the resulting personalized content, consumers can be targeted precisely with the help of Artificial Intelligence. The associated use of customer data creates ethical conflicts. Therefore, the research question is asked: How does Programmatic Advertising influence consumers' data security and diversity of opinion, and what effect does this have on the further development of the technology? For the purpose of elaborating the research problem, a literature analysis and expert interviews are carried out. The analysis shows that Programmatic Advertising already accounts for the majority of digital advertising activities. This results in advantages for advertisers, since consumers can be addressed in a targeted manner. Nevertheless, it provokes legal issues around data use and the demand for more data security for the individual customer, which, among other things, is receiving more attention from big tech companies. Due to the constant change within this trend, new possibilities arise, such as contextual targeting, in order to keep displaying advertising efficiently. Nonetheless, this work calls for more regulation, to give customers a better overview of and control over the data used about them and to avoid a restricted diversity of opinion, which can be promoted through microtargeting and thus the targeted display of content. The prospects for Programmatic Advertising, however, point to a steady increase, because other areas of the media landscape will be pervaded by this technology in the future.
Med hjälp av programmatisk reklam och artificiell intelligens får kunder och individer reklam som är specifikt utformade för just dem. Användningen av underliggande användardata ger upphov till etiska dilemman. Således har vi följande problemformulering: Hur påverkar programmatisk reklam användarnas data och åsikter, och vilken effekt har detta på den fortsatta utvecklingen av området? Med avsikt att vidareutveckla problemformuleringen utfördes expertintervjuer samt en litteraturstudie. Analysen visar att programmatisk reklam utgör majoriteten av all digital marknadsföring. Detta har resulterat i fördelar för marknadsförare, ty användarna kan bli bemötta mer precist. Icke desto mindre, medför detta legala problem relaterat till användardata och ett ökat krav på dataskydd för användaren, vilket är något som får stort fokus av stora tech företag. På grund av den konstanta utvecklingen av denna metod, föds nya möjligheter, exempelvis "kontextuell riktad marknadsföring", för att fortsätta vara en effektiv marknadsföringsmetod. Därmed, redogör denna rapport för en mer reglerad spelplan där användarna får en bättre överblick och kontroll över hur deras användardata utnyttjas, samt en mindre inskränkt åsiktspåverkan, vilket är något som skulle kunna ske genom knappnålsfin riktad marknadsföring. Dock är utvecklingen för programmatisk reklam förutspådd ljus, då andra områden inom medielandskapet kommer att genomsyras av denna teknologi i framtiden
APA, Harvard, Vancouver, ISO, and other styles
37

Yakushchenko, I. V., and L. M. Chuchilina. "The artificial intelligence." Thesis, Видавництво СумДУ, 2008. http://essuir.sumdu.edu.ua/handle/123456789/16003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Malmborn, Albin, and Linus Sjöberg. "Implementing Artificial intelligence." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20942.

Full text
Abstract:
Den här uppsatsen har som syfte att undersöka huruvida det är möjligt att ta fram riktlinjer för vad privata verksamheter behöver ta i beaktande inför en planerad implementering av artificiell intelligens. Studien kommer belysa faktorer som hjälper företag att förstå vad som krävs inför en sådan omställning, men även de hinder som måste övervinnas för att lyckas. Studiens datainsamling har genomförts med två metoder, först en litteraturstudie sedan kvalitativa, semistrukturerade forskningsintervjuer. Dessa har sedan analyserats med vars en analysmetod som kompletterar varandra och därefter tolkats för att se mönster som kan besvara studiens frågeställning: Vad måste svenska organisationer inom den privata sektorn beakta för att lyckas implementera Artificiell intelligens i sin verksamhet? Resultatet har tagits fram genom att jämföra vetenskapliga texter och intervjuer, för att undersöka om den akademiska och praktiska synen skiljer sig åt. Studien resulterade i åtta faktorer som företag borde ta i beaktning inför en implementering av artificiell intelligens. Författarna hoppas att med den här studien kunna främja svensk utveckling inom artificiell intelligens och på så vis generera ett större nationellt mervärde och en starkare konkurrenskraft internationell.
The purpose of this paper is to investigate whether it is possible to develop guidelines for what private businesses need to take into account before an implementation of artificial intelligence. The study will highlight factors that help companies understand what is required for such a digital transition, and also the obstacles companies have to overcome in order to succeed. The data collection was conducted in two parts: first a literature study and then qualitative, semi-structured interviews. These were then analyzed, each with its own complementary analysis method, and interpreted to identify patterns that could answer the study's main question: What must Swedish organizations in the private sector consider in order to successfully implement Artificial Intelligence in their operations? The result of the study has been produced by comparing scientific texts and interviews, to investigate whether the academic and practical views differ. The study resulted in eight factors that companies should consider before implementing artificial intelligence. The authors hope that the study will promote Swedish development in artificial intelligence and thus generate greater national value and stronger international competitiveness.
APA, Harvard, Vancouver, ISO, and other styles
39

Gray, David Michael Ph D. Massachusetts Institute of Technology. "Ethics for artificial agents." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120677.

Full text
Abstract:
Thesis: Ph. D. in Philosophy, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 121-125).
Machine ethics is a nascent subfield of computer ethics that focuses on the ethical issues involved in the design of autonomous software agents ("artificial agents"). Chapter 1 of this thesis considers how best to understand the central projects of this new subfield, and reconstructs a prominent theory of how artificial agents ought to be designed. This theory, which I call the "agential theory" of machine ethics, says that artificial agents morally ought to be designed to behave only in ways that would be permissible for a human agent to behave, and that only artificial agents that have been designed in this way are morally permissible for human beings to use. Chapter 2 critically assesses two versions of the agential theory-one that assumes that artificial agents are moral agents, and another that does not. After considering arguments for both versions of the theory, I argue that both versions should be rejected. Chapter 3 sets out and analyzes a case study in machine ethics, focusing on the development of an artificial agent to assist with the planning of a public health social work intervention.
by David Michael Gray.
Ph. D. in Philosophy
APA, Harvard, Vancouver, ISO, and other styles
40

Tennenbaum, Christopher D. "Intentionality in Artificial Intelligence." Scholarship @ Claremont, 2011. http://scholarship.claremont.edu/cmc_theses/269.

Full text
Abstract:
This paper addresses the question of whether Artificial Intelligence can have intentionality. This question is part of a larger discussion of whether or not Artificial Intelligence can ever be 'conscious'. Ultimately, I come to the conclusion that while we can see how intentionality can be transferred, it has yet to be shown that intentionality can be created within Artificial Intelligence. To begin, I define intentionality. I then discuss the Turing Test (Alan Turing, "Computing Machinery and Intelligence") and the Chinese Room (John R. Searle, "Minds, Brains, and Programs"). I conclude by expressing my own opinions and where I believe Artificial Intelligence will be in the near future.
APA, Harvard, Vancouver, ISO, and other styles
41

Chaus, Oleksandr. "Artificial intelligence and robotics." Thesis, Дніпровський національний університет залізничного транспорту імені академіка В. Лазаряна, 2019. https://er.knutd.edu.ua/handle/123456789/14698.

Full text
Abstract:
The work deals with artificial intelligence as one of the most exciting fields in robotics. AI has penetrated almost every industry, from construction, transport and manufacturing to business intelligence, education and healthcare.
Робота стосується штучного інтелекту як однієї з найбільш захоплюючих галузей робототехніки. Штучний інтелект проник майже в усі галузі, від будівництва, транспорту та виробництва до бізнес-розвідки, освіти та охорони здоров'я.
Работа посвящена искусственному интеллекту как одной из самых захватывающих областей робототехники. Искусственный интеллект проник почти во все отрасли, от строительства, транспорта и производства до бизнес-аналитики, образования и здравоохранения.
APA, Harvard, Vancouver, ISO, and other styles
42

Matorina. "ARTIFICIAL INTELLIGENCE IN MEDICINE." Thesis, Київ 2018, 2018. http://er.nau.edu.ua/handle/NAU/33813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Maksimov, K. R. "ARTIFICIAL INTELLIGENCE: MORAL ASPECT." Thesis, Національний авіаційний університет, 2017. http://er.nau.edu.ua/handle/NAU/28076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Larchenko. "ARTIFICIAL INTELLIGENCE IN ROBOTICS." Thesis, Київ 2018, 2018. http://er.nau.edu.ua/handle/NAU/33789.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Napoli, Christian. "A-I: Artificial intelligence." Doctoral thesis, Università di Catania, 2016. http://hdl.handle.net/10761/3974.

Full text
Abstract:
In this thesis we proposed new neural architectures and information-theory approaches. By means of wavelet analysis, neural networks, and the results of our own creations, namely the wavelet recurrent neural networks and the radial basis probabilistic neural networks, we tried to better understand, model and cope with human behavior itself. The first idea was to model the workers of a crowdsourcing project as nodes on a cloud-computing system; we also hope to have exceeded the limits of such a definition. We hope to have opened a door on new possibilities to model the behavior of socially interconnected groups of people cooperating in the execution of a common task. We showed how it is possible to use the wavelet recurrent neural networks to model something as complex as the availability of resources on an online service or a computational cloud; we then showed that, similarly, the availability of crowd workers can be modeled, as well as the execution time of tasks performed by crowd workers. In doing that we created a tool to tamper with the timeline, allowing us to obtain predictions regarding the status of the crowd in terms of available workers and executed workflows. Moreover, with our inanimate reasoner based on the developed radial basis probabilistic neural networks, first applied to social networks and then to living companies, we also understood how to model and manage cooperative networks in terms of workgroup creation and optimization. We have done that by automatically interpreting worker profiles, then automatically extrapolating and interpreting the relevant information among hundreds of features for each worker in order to create workgroups based on their skills, professional attitudes, experience, etc. Finally, also thanks to the suggestions of Prof. Michael Bernstein of Stanford University, we proposed to connect the developed automata. We made use of artificial intelligence to model the availability of human resources; we then had to use a second level of artificial intelligence to model human workgroups and skills; finally, we used a third level of artificial intelligence to model workflows executed by those human resources once organized in groups and levels according to their experience. It is our hope that such a three-level artificial intelligence could address the limits that, until now, have kept crowds from growing up into companies with a well-recognizable pyramidal structure that rewards the experience, skill and professionalism of their workers. We cannot frankly say whether our work will really contribute to the so-called "crowdsourcing revolution", but we hope at least to have shed some light on the agreeable possibilities that are yet to come.
APA, Harvard, Vancouver, ISO, and other styles
46

NAPOLI, CHRISTIAN. "A-I: Artificial intelligence." Doctoral thesis, Università degli studi di Catania, 2016. http://hdl.handle.net/20.500.11769/490996.

Full text
Abstract:
In this thesis we proposed new neural architectures and information-theory approaches. By means of wavelet analysis, neural networks, and the results of our own creations, namely the wavelet recurrent neural networks and the radial basis probabilistic neural networks, we tried to better understand, model and cope with human behavior itself. The first idea was to model the workers of a crowdsourcing project as nodes on a cloud-computing system; we also hope to have exceeded the limits of such a definition. We hope to have opened a door on new possibilities to model the behavior of socially interconnected groups of people cooperating in the execution of a common task. We showed how it is possible to use the wavelet recurrent neural networks to model something as complex as the availability of resources on an online service or a computational cloud; we then showed that, similarly, the availability of crowd workers can be modeled, as well as the execution time of tasks performed by crowd workers. In doing that we created a tool to tamper with the timeline, allowing us to obtain predictions regarding the status of the crowd in terms of available workers and executed workflows. Moreover, with our inanimate reasoner based on the developed radial basis probabilistic neural networks, first applied to social networks and then to living companies, we also understood how to model and manage cooperative networks in terms of workgroup creation and optimization. We have done that by automatically interpreting worker profiles, then automatically extrapolating and interpreting the relevant information among hundreds of features for each worker in order to create workgroups based on their skills, professional attitudes, experience, etc. Finally, also thanks to the suggestions of Prof. Michael Bernstein of Stanford University, we proposed to connect the developed automata. We made use of artificial intelligence to model the availability of human resources; we then had to use a second level of artificial intelligence to model human workgroups and skills; finally, we used a third level of artificial intelligence to model workflows executed by those human resources once organized in groups and levels according to their experience. It is our hope that such a three-level artificial intelligence could address the limits that, until now, have kept crowds from growing up into companies with a well-recognizable pyramidal structure that rewards the experience, skill and professionalism of their workers. We cannot frankly say whether our work will really contribute to the so-called "crowdsourcing revolution", but we hope at least to have shed some light on the agreeable possibilities that are yet to come.
APA, Harvard, Vancouver, ISO, and other styles
47

Lila, Serxho <1995>. "Applications of Artificial Intelligence." Master's Degree Thesis, Università Ca' Foscari Venezia, 2021. http://hdl.handle.net/10579/18837.

Full text
Abstract:
The world is changing rapidly. Developments in information technology are affecting every aspect of people’s lives, as well as economies and politics all over the world. The fourth industrial revolution started at the beginning of the twenty-first century; it is manifested by emerging technological breakthroughs in many fields, which include artificial intelligence (AI), biotechnology, 3D printing, the Internet of Things, and blockchain. In the last decade, the technology sector has driven the worldwide economy, and big tech companies such as Amazon, Apple, Microsoft, Tencent, Baidu, and Alibaba are dominating the market. Many other companies are riding the wave, while others are staying behind. AI is impacting many industries, from transportation to agriculture, passing through healthcare, communication, media, retail, entertainment, and others. But one of the major sectors involved is finance. This qualitative research focuses on applications of AI, especially in the financial field. The theoretical part aims to introduce the concept of AI, the different types of AI, examples of AI technologies, and the countries and companies that are leading the AI race. The second part of the research focuses on AI applications in different sectors, examining the influence, benefits, and challenges of this breakthrough innovation.
APA, Harvard, Vancouver, ISO, and other styles
48

Boyce, Gavin John. "Artificial intelligence : thought and content." Thesis, University of Sheffield, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265914.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Gkiokas, Alexandros. "Imitation learning in artificial intelligence." Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/94683/.

Full text
Abstract:
Acquiring new knowledge often requires an agent or a system to explore, search and discover. Yet we humans build upon the knowledge of our forefathers, as did they, using previous knowledge; there does exist a mechanism which allows the transfer of knowledge without searching, exploration or discovery. That mechanism is known as imitation, and it exists everywhere in nature: in animals, insects, primates, and humans. Enabling artificial, cognitive and software agents to learn by imitation could potentially be crucial to the emergence of the fields of autonomous systems, robotics, cyber-physical systems and software agents. Imitation in AI implies that agents can learn from their human users or from other AI agents, through observation or, in robotics, through physical interaction, and therefore learn much faster and more easily. Describing an imitation learning framework in AI which uses the Internet as the source of knowledge requires a rather unconventional approach: the procedure is a temporal-sequential process which uses reinforcement based on behaviouristic psychology, deep learning and a plethora of other algorithms. Ergo an agent using a hybrid simulating-emulating strategy is formulated, implemented and experimented with. That agent learns from RSS feeds using examples provided by the user; it adheres to previous research work and theoretical foundations and demonstrates that not only is imitation learning in AI possible, but that it compares with, and in some cases outperforms, traditional approaches.
APA, Harvard, Vancouver, ISO, and other styles
50

Polova, M. V., L. M. Mahas, М. В. Польова, and Л. М. Магас. "Major advances of artificial intelligence." Thesis, Вінницький національний аграрний університет, 2015. http://ir.lib.vntu.edu.ua/handle/123456789/5009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
