
Journal articles on the topic 'Artificial autonomous agent'


Consult the top 50 journal articles for your research on the topic 'Artificial autonomous agent.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Dong, Daqi. "The Observable Mind: Enabling an Autonomous Agent Sharing Its Conscious Contents Using a Cognitive Architecture." Proceedings of the AAAI Symposium Series 2, no. 1 (2024): 172–76. http://dx.doi.org/10.1609/aaaiss.v2i1.27666.

Abstract:
We enable an autonomous agent to share its artificial mind with its audience, as humans do. This supports autonomous human-robot interaction, relying on a cognitive architecture, LIDA, which explains and predicts how minds work and is used as the controller of intelligent autonomous agents. We argue that LIDA’s cognitive representations and processes may serve as the source of the mind content its agent shares, autonomously. We propose a new description (sub)model for LIDA, letting its agent describe its conscious contents. Through this description, the agent’s mind becomes more observable, so we can understand the agent’s identity and intelligence more directly. This also helps the agent explain its behaviors to its audience and thus engage better with the society it lives in. We built an initial LIDA agent embedding this description model. The agent shares its conscious content autonomously, reasonably explaining its behaviors.
2

Dodig-Crnkovic, Gordana, and Mark Burgin. "A Systematic Approach to Autonomous Agents." Philosophies 9, no. 2 (2024): 44. http://dx.doi.org/10.3390/philosophies9020044.

Abstract:
Agents and agent-based systems are becoming essential in the development of various fields, such as artificial intelligence, ubiquitous computing, ambient intelligence, autonomous computing, and intelligent robotics. The concept of autonomous agents, inspired by the observed agency in living systems, is also central to current theories on the origin, development, and evolution of life. Therefore, it is crucial to develop an accurate understanding of agents and the concept of agency. This paper begins by discussing the role of agency in natural systems as an inspiration and motivation for agential technologies and then introduces the idea of artificial agents. A systematic approach is presented for the classification of artificial agents. This classification aids in understanding the existing state of artificial agents and projects their potential future roles in addressing specific types of problems with dedicated agent types.
3

Pooja, Dr. Manish Varshney. "The Study of Fundamental Concepts of Agent and Multi-agent Systems." Tuijin Jishu/Journal of Propulsion Technology 44, no. 3 (2023): 3237–38. http://dx.doi.org/10.52783/tjjpt.v44.i3.1592.

Abstract:
The concept of an intelligent agent was born in the area of artificial intelligence; in fact, a commonly accepted definition relates the discipline of artificial intelligence to the analysis and design of autonomous entities capable of exhibiting intelligent behavior. From that perspective, it is assumed that an intelligent agent must be able to perceive its environment, reason about how to achieve its objectives, act towards achieving them through the application of some principle of rationality, and interact with other intelligent agents, whether artificial or human [1]. Multi-agent systems are a particular case of a distributed system, and their particularity lies in the fact that the components of the system are autonomous and selfish, seeking to satisfy their own objectives. In addition, these systems also stand out for being open systems without a centralized design [2]. One main reason for the great interest and attention that multi-agent systems have received is that they are seen as an enabling technology for complex applications that require distributed and parallel processing of data and operate autonomously in complex and dynamic domains.
4

Rao, Swaneet D. "Multi-Agent Autonomous Cleaning." International Journal for Research in Applied Science and Engineering Technology 9, no. 10 (2021): 1872–75. http://dx.doi.org/10.22214/ijraset.2021.38714.

Abstract:
In today’s world, robots are taking over tasks which used to be done by humans. Robots are continuously evolving into better and more efficient autonomous agents, driving substantial growth in fields like adaptive artificial intelligence. The main objective of this paper is to create an efficient multi-agent autonomous environment for cleaning robots. Keywords: Gradient Descent, Centralized controller, autonomous agents, LiDAR
5

Schaub Jr., Gary. "Controlling the Autonomous Warrior." Journal of International Humanitarian Legal Studies 10, no. 1 (2019): 184–202. http://dx.doi.org/10.1163/18781527-01001007.

Abstract:
The challenges posed by weapons with autonomous functions are not a tabula rasa. The capabilities of both State principals and military agents to control and channel violence for political purposes have improved across the centuries as technology has increased the range and lethality of weapons as well as the scope of warfare. The institutional relations between principals and agents have been adapted to account for, and take advantage of, these developments. Air forces encompass one realm where distance, speed, and lethality have been subjected to substantial and effective control. Air forces are also where systems with autonomous functionality will likely drive the most visible adaptation to command and control arrangements. This process will spread across other domains as States pursue institution-centric and agent-centric strategies to secure meaningful human control over artificial agents as they become increasingly capable of replacing human agents in military (and other) functions. Agent-centric approaches that consider emergent behaviour as akin to human judgment and institutional approaches that improve the ability to understand, interrogate, monitor, and audit the decisions and behaviour of artificial agents can together drive improvements in meaningful human control over warfare, just as previous adaptations have.
6

Such, Jose M., Agustín Espinosa, and Ana García-Fornes. "A survey of privacy in multi-agent systems." Knowledge Engineering Review 29, no. 3 (2013): 314–44. http://dx.doi.org/10.1017/s0269888913000180.

Abstract:
Privacy has been a concern for humans long before the explosive growth of the Internet. The advances in information technologies have further increased these concerns. This is because the increasing power and sophistication of computer applications offers tremendous opportunities for individuals, but also poses significant threats to personal privacy. Autonomous agents and multi-agent systems are examples of the level of sophistication of computer applications. Autonomous agents usually encapsulate personal information describing their principals, and therefore they play a crucial role in preserving privacy. Moreover, autonomous agents themselves can be used to increase the privacy of computer applications by taking advantage of the intrinsic features they provide, such as artificial intelligence, pro-activeness, autonomy, and the like. This article introduces the problem of preserving privacy in computer applications and its relation to autonomous agents and multi-agent systems. It also surveys privacy-related studies in the field of multi-agent systems and identifies open challenges to be addressed by future research.
7

Wang, Jiasheng, and Aziz Nazha. "Autonomous Analysis of CIBMTR Datasets Using Artificial Intelligence Agents." Blood 144, Supplement 1 (2024): 7489. https://doi.org/10.1182/blood-2024-207380.

Abstract:
Background: Analyzing complex medical data requires specialized knowledge and expertise, making it both time-consuming and resource-intensive. Large language models (LLMs), such as GPT-4, excel in tasks like coding and medical statistics. However, analyzing datasets is more intricate than interacting with a chatbot. It involves several critical steps: planning, tracking information, locating data, and developing and refining the right statistical analyses. Artificial intelligence (AI) agents represent a new trend, where each AI agent can perform specific tasks based on prior defined instructions. We have developed a framework in which multiple AI agents collaborate to accomplish a specific task. In this framework, each agent is assigned a distinct role in the data analysis process. They communicate by sending and receiving messages to coordinate their efforts, ensuring the task is completed in a systematic yet collaborative manner. Methods: We included the latest 20 studies from the Center for International Blood and Marrow Transplant Research (CIBMTR) with publicly available data. The primary objective was to evaluate the accuracy of AI agents in replicating the primary outcomes of these studies. Using the AutoGen platform, a six-party AI agent framework was constructed. This framework included a user proxy, planner, data retriever, data cleaner, coder, and results reviewer, with GPT-4o serving as the underlying LLM. It was given simple instructions to replicate the primary outcomes, such as “compare overall survival based on different reduced-intensity conditioning regimens.” The results were then compared to the original studies. Each experiment was repeated three times to assure accuracy. The included studies, instructions, and results can be viewed at https://github.com/jwang-580/CIBMTR_data. 
Results: The 20 included studies were published between 2021 and 2023, with topics spanning chronic leukemias (20%), health disparities (15%), immunobiology (15%), acute leukemias (10%), lymphomas (10%), graft-versus-host disease (10%), infection (10%), and survivorship (10%). The primary study objectives were either related to survival outcomes (75%) or specific complications (25%), such as the incidence of bacterial and viral infections (5%), pulmonary toxicities (5%), primary graft failure (5%), and secondary malignancies (5%). The statistical methods used for generating primary outcomes were multivariable regression analysis (45%), descriptive statistics (40%), and univariate regression analysis (15%). The AI agents successfully adhered to their designated roles by automatically downloading datasets and data dictionaries, drafting data analysis plans, selecting relevant variables, cleaning datasets, generating and debugging computational analysis codes, and interpreting the results. The multi-AI agent framework accurately replicated 53% of the primary outcomes (95% confidence interval [CI] 41-66%). This rate was significantly higher than that achieved using ChatGPT alone without the multi-AI agent framework, which replicated 35% of the results (95% CI 24-47%; p=0.04, t-test). Specifically, the multi-AI agent framework correctly replicated 58% of primary outcomes related to survival and 40% related to complications. It also successfully replicated 44%, 71%, and 67% of results from studies using multivariable regression, descriptive statistics, and univariate regression, respectively. The most common cause of failure to achieve accurate results was issues related to data transformation, such as converting time units or selecting data subsets. Notably, hallucination of data or results was not observed due to framework optimizations. 
The average cost of each analysis, which includes the expenses for processing input and output, was $1.2, and each analysis was completed in < 1 minute. Conclusion: We developed a multi-AI agent framework capable of collaboratively extracting, cleaning, organizing, and analyzing CIBMTR datasets. The agents successfully replicated published results efficiently and cost-effectively. Our approach significantly improves accuracy over the state-of-the-art, non-agent-based AI methods. It has the potential to transform complex data analysis, facilitating major advancements in medical research.
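The role-based pipeline this abstract describes (a planner, data retriever, coder, and reviewer exchanging messages in turn) can be sketched in miniature. The sketch below is a hypothetical stand-in: simple string-transforming handlers replace the LLM-backed agents, and the role names and `Agent`/`Message` classes are invented for illustration, not the authors' AutoGen code.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    content: str

class Agent:
    """A role-bound agent: records incoming messages, applies its handler."""
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler          # role-specific behaviour
        self.inbox = []

    def receive(self, msg):
        self.inbox.append(msg)
        return Message(self.name, self.handler(msg.content))

# Hypothetical string-transforming handlers standing in for LLM-backed roles.
pipeline = [
    Agent("planner",   lambda task: f"plan({task})"),
    Agent("retriever", lambda plan: f"data[{plan}]"),
    Agent("coder",     lambda data: f"analysis({data})"),
    Agent("reviewer",  lambda res:  f"reviewed:{res}"),
]

msg = Message("user_proxy", "compare overall survival")
for agent in pipeline:                  # each role consumes the previous output
    msg = agent.receive(msg)
```

The point of the pattern is that each role sees only the previous role's output, so responsibilities stay separated while the task flows through the pipeline.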
8

Contreras, Daniel Silva, and Salvador Godoy-Calderon. "Autonomous Agent Navigation Model Based on Artificial Potential Fields Assisted by Heuristics." Applied Sciences 14, no. 8 (2024): 3303. http://dx.doi.org/10.3390/app14083303.

Abstract:
When autonomous agents are deployed in an unknown environment, obstacle-avoiding movement and navigation are required basic skills, all the more so when agents are limited by partial-observability constraints. This paper addresses the problem of autonomous agent navigation under partial-observability constraints by using a novel approach: Artificial Potential Fields (APF) assisted by heuristics. The well-known problem of local minima is addressed by providing the agents with the ability to make individual choices that can be exploited in a swarm. We propose a new potential function, which provides precise control of the potential field’s reach and intensity, and the use of auxiliary heuristics provides temporary target points while the agent explores, in search of the position of the real intended target. Artificial Potential Fields, together with auxiliary search heuristics, are integrated into a novel navigation model for autonomous agents who have limited or no knowledge of their environment. Experimental results are shown in 2D scenarios that pose challenging situations with multiple obstacles, local minima conditions and partial-observability constraints, clearly showing that an agent driven using the proposed model is capable of completing the navigation task, even under the partial-observability constraints.
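The navigation scheme this abstract builds on is the classical attractive/repulsive potential construction. As a rough illustration only (this is not the authors' proposed potential function; the gains, reach, step size, and obstacle layout below are arbitrary assumptions), a minimal gradient-descent sketch:

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, reach=2.0, step=0.05):
    """One normalized gradient-descent step on an artificial potential field."""
    # Attractive term: negative gradient of 0.5 * k_att * ||pos - goal||^2
    force = -k_att * (pos - goal)
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < reach:
            # Repulsive term: negative gradient of 0.5 * k_rep * (1/d - 1/reach)^2,
            # active only within the field's reach
            force += k_rep * (1.0 / d - 1.0 / reach) * diff / d**3
    return pos + step * force / max(np.linalg.norm(force), 1e-9)

# Drive an agent from the origin toward (5, 5) past a single off-path obstacle.
pos, goal = np.zeros(2), np.array([5.0, 5.0])
obstacles = [np.array([2.5, 2.0])]
for _ in range(5000):
    pos = apf_step(pos, goal, obstacles)
    if np.linalg.norm(pos - goal) < 0.05:   # close enough to the target
        break
```

Note that an obstacle placed exactly on the start-goal line would produce the local-minimum equilibrium the abstract discusses, which is what motivates the authors' heuristic assistance.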
9

Shinde, Dr Pravin, Himali Paradkar, Poojan Vig, Sanchay Thalnerkar, and Vinay Jain. "HELIX: Autonomous AI Agent." International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (2024): 4647–54. http://dx.doi.org/10.22214/ijraset.2024.61080.

Abstract:
Artificial Intelligence has transformed the way we interact with technology, introducing us to agents that can think and make choices like humans. At the heart of this evolution is our project, 'HELIX'. Through 'HELIX', we've developed AI agents specialized in a variety of tasks: from digging deep into the web for research, streamlining email communication through automation, efficiently sending out emails in bulk, to strategically identifying and generating potential business leads. By weaving together cutting-edge machine learning algorithms and advanced language models, our system stands as a testament to the proficiency and versatility of AI. What makes 'HELIX' especially groundbreaking is its user-friendly approach, ensuring anyone, regardless of their technical background, can harness its power. As we embrace the dawn of this technological era, 'HELIX' not only showcases the potential of today's AI capabilities but also paves the way for future innovations, promising an expansive horizon for AI-driven solutions across myriad domains.
10

Montagna, Sara, Daniel Castro Silva, Pedro Henriques Abreu, Marcia Ito, Michael Ignaz Schumacher, and Eloisa Vargiu. "Autonomous agents and multi-agent systems applied in healthcare." Artificial Intelligence in Medicine 96 (May 2019): 142–44. http://dx.doi.org/10.1016/j.artmed.2019.02.007.

11

Fernandez, Domingos Elias, Inês Terrucha, Rémi Suchon, Rial Juan Carlos Burguillo, and Tom Lenaerts. "Delegation to artificial agents fosters prosocial behaviors in the collective risk dilemma." Scientific Reports 12, no. 1 (2022): 8492. https://doi.org/10.1038/s41598-022-11518-9.

Abstract:
Home assistant chat-bots, self-driving cars, drones or automated negotiation systems are some of the several examples of autonomous (artificial) agents that have pervaded our society. These agents enable the automation of multiple tasks, saving time and (human) effort. However, their presence in social settings raises the need for a better understanding of their effect on social interactions and how they may be used to enhance cooperation towards the public good, instead of hindering it. To this end, we present an experimental study of human delegation to autonomous agents and hybrid human-agent interactions centered on a non-linear public goods dilemma with uncertain returns in which participants face a collective risk. Our aim is to understand experimentally whether the presence of autonomous agents has a positive or negative impact on social behaviour, equality and cooperation in such a dilemma. Our results show that cooperation and group success increases when participants delegate their actions to an artificial agent that plays on their behalf. Yet, this positive effect is less pronounced when humans interact in hybrid human-agent groups, where we mostly observe that humans in successful hybrid groups make higher contributions earlier in the game. Also, we show that participants wrongly believe that artificial agents will contribute less to the collective effort. In general, our results suggest that delegation to autonomous agents has the potential to work as commitment devices, which prevent both the temptation to deviate to an alternate (less collectively good) course of action, as well as limiting responses based on betrayal aversion.
12

Shin, Sangkyu. "Can Artificial Intelligence be an Autonomous Moral Agent?" Korean Journal of Philosophy 132 (August 31, 2017): 265–92. http://dx.doi.org/10.18694/kjp.2017.08.132.265.

13

Yunchyk, Valentyna, Natalia Kunanets, Volodymyr Pasichnyk, and Anatolii Fedoniuk. "Analysis of Artificial Intellectual Agents for E-Learning Systems." Vìsnik Nacìonalʹnogo unìversitetu "Lʹvìvsʹka polìtehnìka". Serìâ Ìnformacìjnì sistemi ta merežì 10 (December 2021): 41–57. http://dx.doi.org/10.23939/sisn2021.10.041.

Abstract:
The key terms and basic concepts of the agent are analyzed. A structured general classification of agents according to the representation of the model of the external environment, the type of information processing, and the functions performed is given. The classification of artificial agents (intellectual, reflex, impulsive, trophic) is also analyzed. The necessary conditions for the implementation of a certain behavior by the agent are given, as well as the scheme of functioning of the intelligent agent. The levels of knowledge that play a key role in the architecture of the agent are indicated. The functional diagram of a learning agent that works relatively independently, demonstrating flexible behavior, is presented. It is discussed that the functional scheme of the reactive agent determines its dependence on the environment. The properties of the intelligent agent are described in detail and the block diagram is indicated. Various variants of agent architectures, in particular neural network agent architectures, are considered. The organization of level interaction in the multilevel agent architecture is proposed. Considerable attention is paid to the Will-architecture and InteRRaP-architecture of agents. A multilevel architecture for an autonomous agent of a Turing machine is considered.
14

Brandonisio, Andrea, Michèle Lavagna, and Davide Guzzetti. "Reinforcement Learning for Uncooperative Space Objects Smart Imaging Path-Planning." Journal of the Astronautical Sciences 68, no. 4 (2021): 1145–69. http://dx.doi.org/10.1007/s40295-021-00288-7.

Abstract:
Leading space agencies are increasingly investing in the gradual automation of space missions. In fact, autonomous flight operations may be a key enabler for on-orbit servicing, assembly and manufacturing (OSAM) missions, carrying inherent benefits such as cost and risk reduction. Within the spectrum of proximity operations, this work focuses on autonomous path-planning for the reconstruction of geometry properties of an uncooperative target. The autonomous navigation problem is called the active Simultaneous Localization and Mapping (SLAM) problem, and it has been largely studied within the field of robotics. The active SLAM problem may be formulated as a Partially Observable Markov Decision Process (POMDP). Previous works in astrodynamics have demonstrated that it is possible to use Reinforcement Learning (RL) techniques to teach an agent that is moving along a pre-determined orbit when to collect measurements to optimize a given mapping goal. In this work, different RL methods are explored to develop an artificial intelligence agent capable of planning sub-optimal paths for autonomous shape reconstruction of an unknown and uncooperative object via imaging. Proximity orbit dynamics are linearized and include orbit eccentricity. The geometry of the target object is rendered by a polyhedron shaped with a triangular mesh. Artificial intelligence agents are created using both the Deep Q-Network (DQN) and the Advantage Actor Critic (A2C) method. State-action value functions are approximated using Artificial Neural Networks (ANN) and trained according to RL principles. Training of the RL agent architecture occurs under fixed or random initial environment conditions. A large database of training tests has been collected. Trained agents show promising performance in achieving extended coverage of the target. Policy learning is demonstrated by displaying that RL agents, at minimum, have higher mapping performance than agents that behave randomly. Furthermore, the RL agent may learn to maneuver the spacecraft to control target lighting conditions as a function of the Sun location. This work, therefore, preliminarily demonstrates the applicability of RL to autonomous imaging of an uncooperative space object, thus setting a baseline for future works.
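The RL formulation this abstract describes can be illustrated at a much smaller scale. The sketch below substitutes a tabular Q-learning agent on a hypothetical 4x4 grid for the paper's DQN/A2C agents with neural function approximation; the reward shaping, transitions, and hyperparameters are invented for illustration only.

```python
import numpy as np

# Toy stand-in for the paper's RL setting: a 4x4 grid world where the
# agent must reach the goal cell (state 15).
rng = np.random.default_rng(0)
n_states, n_actions = 16, 4            # actions: 0=up, 1=down, 2=left, 3=right
Q = np.zeros((n_states, n_actions))    # tabular stand-in for the paper's ANN
alpha, gamma, eps = 0.1, 0.95, 0.1

def env_step(s, a):
    """Deterministic grid transitions with a small per-step penalty."""
    x, y = s % 4, s // 4
    dx, dy = [(0, -1), (0, 1), (-1, 0), (1, 0)][a]
    nx, ny = min(max(x + dx, 0), 3), min(max(y + dy, 0), 3)
    ns = ny * 4 + nx
    return ns, (1.0 if ns == 15 else -0.01), ns == 15

for _ in range(2000):                  # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        ns, r, done = env_step(s, a)
        # Q-learning temporal-difference update
        Q[s, a] += alpha * (r + gamma * np.max(Q[ns]) * (not done) - Q[s, a])
        s = ns
```

After training, the greedy policy read off `Q` reaches the goal in the minimum six steps, which is the tabular analogue of the trained agents outperforming random behaviour in the paper.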
15

Kota, Sunil Karthik. "The Evolution of AI Agents: From Rule‐Based Systems to Autonomous Intelligence – A Comprehensive Review." Journal of Artificial Intelligence & Cloud Computing 4, no. 2 (2025): 1–5. https://doi.org/10.47363/jaicc/2025(4)433.

Abstract:
Artificial Intelligence (AI) agents have evolved from early rule‐based systems to today’s sophisticated autonomous systems. This comprehensive review examines the historical development, technical advancements, and emerging trends in AI agent research.
16

Seel, Nigel. "The Second European Workshop on Modelling Autonomous Agents and Multi-Agent Worlds." AI Communications 3, no. 4 (1990): 193–96. http://dx.doi.org/10.3233/aic-1990-3405.

17

Dignum, Virginia, Nigel Gilbert, and Michael P. Wellman. "Introduction to the special issue on autonomous agents for agent-based modeling." Autonomous Agents and Multi-Agent Systems 30, no. 6 (2016): 1021–22. http://dx.doi.org/10.1007/s10458-016-9345-5.

18

Deepa.T.P. "DEVELOPMENT OF AUTONOMOUS GAME AGENT WITH LEARNING AND REACTIVE BEHAVIORS." International Journal of Research - Granthaalayah 5, no. 4 (2017): 91–96. https://doi.org/10.5281/zenodo.572998.

Abstract:
The main goal of this paper is to develop a software agent that is autonomous, with reactive behavior and learning abilities. One application of such agents is in gaming. Gaming characters are expected to work in unpredictable environments with decision-making capabilities such as weapon selection for different targets and wall following. This can be achieved using artificial intelligence (AI) techniques and methods. In this paper, the agent is designed to exhibit capabilities such as moving abilities, steering behavior and obstacle avoidance, synthesis of movement, weapon selection, adapting defense strategies, and strategic decision making.
19

YAMAGUCHI, Toru, Shunsuke MASUDA, and Hiroki MURAKAMI. "Human Centered Autonomous Agent System by Using Artificial Ontology." Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 16, no. 2 (2004): 160–70. http://dx.doi.org/10.3156/jsoft.16.160.

20

Hu, Yue. "The System Construction of Moral Artificial Intelligence: From the Perspective of Normative Ethical Theories." Scientific and Social Research 7, no. 1 (2025): 112–18. https://doi.org/10.26689/ssr.v7i1.9343.

Abstract:
With the rapid progress of technology, the decision-making and behaviors of artificial intelligence have begun to shift from external settings to internal development. Intelligent agents gradually possess varying degrees of adaptive, decision-making, and behavioral abilities, and their autonomous capabilities are continuously enhanced. For moral considerations, artificial intelligence with autonomous decision-making and behaviors has begun to be regarded as a moral agent. Therefore, how traditional morality can play a role in autonomous intelligent technologies has become a problem that must be faced. The three main theories of normative ethics (consequentialism, deontology, and virtue ethics) all have the potential to solve this problem. This article aims to use normative ethical theories to construct an artificial intelligence system capable of making moral decisions, while ensuring that the autonomous reasoning of artificial intelligence is constrained by human social morality and values, remains consistent with human values, and assumes the “responsibility” of decision-making.
21

Dennis, Louise A. "Computational Goals, Values and Decision-Making." Science and Engineering Ethics 26, no. 5 (2020): 2487–95. http://dx.doi.org/10.1007/s11948-020-00244-y.

Abstract:
Considering the popular framing of an artificial intelligence as a rational agent that always seeks to maximise its expected utility, referred to as its goal, one of the features attributed to such rational agents is that they will never select an action which will change their goal. Therefore, if such an agent is to be friendly towards humanity, one argument goes, we must understand how to specify this friendliness in terms of a utility function. Wolfhart Totschnig (Fully Autonomous AI, Science and Engineering Ethics, 2020), argues in contrast that a fully autonomous agent will have the ability to change its utility function and will do so guided by its values. This commentary examines computational accounts of goals, values and decision-making. It rejects the idea that a rational agent will never select an action that changes its goal but also argues that an artificial intelligence is unlikely to be purely rational in terms of always acting to maximise a utility function. It nevertheless also challenges the idea that an agent which does not change its goal cannot be considered fully autonomous. It does agree that values are an important component of decision-making and explores a number of reasons why.
22

DOVIER, AGOSTINO, ANDREA FORMISANO, and ENRICO PONTELLI. "Autonomous agents coordination: Action languages meet CLP() and Linda." Theory and Practice of Logic Programming 13, no. 2 (2012): 149–73. http://dx.doi.org/10.1017/s1471068411000615.

Abstract:
The paper presents a knowledge representation formalism, in the form of a high-level Action Description Language (ADL) for multi-agent systems, where autonomous agents reason and act in a shared environment. Agents are autonomously pursuing individual goals, but are capable of interacting through a shared knowledge repository. In their interactions through shared portions of the world, the agents deal with problems of synchronization and concurrency; the action language allows the description of strategies to ensure a consistent global execution of the agents’ autonomously derived plans. A distributed planning problem is formalized by providing the declarative specifications of the portion of the problem pertaining to a single agent. Each of these specifications is executable by a stand-alone CLP-based planner. The coordination among agents exploits a Linda infrastructure. The proposal is validated in a prototype implementation developed in SICStus Prolog.
23

Casadei, Roberto, Gianluca Aguzzi, and Mirko Viroli. "A Programming Approach to Collective Autonomy." Journal of Sensor and Actuator Networks 10, no. 2 (2021): 27. http://dx.doi.org/10.3390/jsan10020027.

Abstract:
Research and technology developments on autonomous agents and autonomic computing promote a vision of artificial systems that are able to resiliently manage themselves and autonomously deal with issues at runtime in dynamic environments. Indeed, autonomy can be leveraged to unburden humans from mundane tasks (cf. driving and autonomous vehicles), from the risk of operating in unknown or perilous environments (cf. rescue scenarios), or to support timely decision-making in complex settings (cf. data-centre operations). Beyond the results that individual autonomous agents can carry out, a further opportunity lies in the collaboration of multiple agents or robots. Emerging macro-paradigms provide an approach to programming whole collectives towards global goals. Aggregate computing is one such paradigm, formally grounded in a calculus of computational fields enabling functional composition of collective behaviours that could be proved, under certain technical conditions, to be self-stabilising. In this work, we address the concept of collective autonomy, i.e., the form of autonomy that applies at the level of a group of individuals. As a contribution, we define an agent control architecture for aggregate multi-agent systems, discuss how the aggregate computing framework relates to both individual and collective autonomy, and show how it can be used to program collective autonomous behaviour. We exemplify the concepts through a simulated case study, and outline a research roadmap towards reliable aggregate autonomy.
24

Kaminka, Gal A. "I Have a Robot, and I’m Not Afraid to Use It!" AI Magazine 33, no. 3 (2012): 66. http://dx.doi.org/10.1609/aimag.v33i3.2422.

Abstract:
Robots (and roboticists) increasingly appear at the Autonomous Agents and Multi-Agent Systems (AAMAS) conferences because the community uses robots both to inspire AAMAS research as well as to conduct it. In this article, I submit that the growing success of robotics at AAMAS is due not only to the nurturing efforts of the AAMAS community, but mainly to the increasing recognition of an important, deeper, truth: it is scientifically useful to roboticists and agent researchers to think of robots as agents.
APA, Harvard, Vancouver, ISO, and other styles
25

Hooker, John, and Tae Wan Kim. "Truly Autonomous Machines Are Ethical." AI Magazine 40, no. 4 (2019): 66–73. http://dx.doi.org/10.1609/aimag.v40i4.2863.

Full text
Abstract:
There is widespread concern that as machines move toward greater autonomy, they may become a law unto themselves and turn against us. Yet the threat lies more in how we conceive of an autonomous machine rather than the machine itself. We tend to see an autonomous agent as one that sets its own agenda, free from external constraints, including ethical constraints. A deeper and more adequate understanding of autonomy has evolved in the philosophical literature, specifically in deontological ethics. It teaches that ethics is an internal, not an external, constraint on autonomy, and that a truly autonomous agent must be ethical. It tells us how we can protect ourselves from smart machines by making sure they are truly autonomous rather than simply beyond human control.
APA, Harvard, Vancouver, ISO, and other styles
26

Cardoso, Rafael C., and Angelo Ferrando. "A Review of Agent-Based Programming for Multi-Agent Systems." Computers 10, no. 2 (2021): 16. http://dx.doi.org/10.3390/computers10020016.

Full text
Abstract:
Intelligent and autonomous agents are a subarea of symbolic artificial intelligence in which agents decide, either reactively or proactively, upon a course of action by reasoning about the information that is available about the world (including the environment, the agent itself, and other agents). It encompasses a multitude of techniques, such as negotiation protocols, agent simulation, multi-agent argumentation, multi-agent planning, and many others. In this paper, we focus on agent programming and we provide a systematic review of the literature in agent-based programming for multi-agent systems. In particular, we discuss both veteran (still maintained) and novel agent programming languages, their extensions, work on comparing some of these languages, and applications found in the literature that make use of agent programming.
APA, Harvard, Vancouver, ISO, and other styles
27

Santara, Anirban, Sohan Rudra, Sree Aditya Buridi, et al. "MADRaS : Multi Agent Driving Simulator." Journal of Artificial Intelligence Research 70 (April 30, 2021): 1517–55. http://dx.doi.org/10.1613/jair.1.12531.

Full text
Abstract:
Autonomous driving has emerged as one of the most active areas of research as it has the promise of making transportation safer and more efficient than ever before. Most real-world autonomous driving pipelines perform perception, motion planning and action in a loop. In this work we present MADRaS, an open-source multi-agent driving simulator for use in the design and evaluation of motion planning algorithms for autonomous driving. Given a start and a goal state, the task of motion planning is to solve for a sequence of position, orientation and speed values in order to navigate between the states while adhering to safety constraints. These constraints often involve the behaviors of other agents in the environment. MADRaS provides a platform for constructing a wide variety of highway and track driving scenarios where multiple driving agents can be trained for motion planning tasks using reinforcement learning and other machine learning algorithms. MADRaS is built on TORCS, an open-source car-racing simulator. TORCS offers a variety of cars with different dynamic properties and driving tracks with different geometries and surfaces. MADRaS inherits these functionalities from TORCS and introduces support for multi-agent training, inter-vehicular communication, noisy observations, stochastic actions, and custom traffic cars whose behaviors can be programmed to simulate challenging traffic conditions encountered in the real world. MADRaS can be used to create driving tasks whose complexities can be tuned along eight axes in well-defined steps. This makes it particularly suited for curriculum and continual learning. MADRaS is lightweight and it provides a convenient OpenAI Gym interface for independent control of each car. Apart from the primitive steering-acceleration-brake control mode of TORCS, MADRaS offers a hierarchical track-position – speed control mode that can potentially be used to achieve better generalization.
MADRaS uses a UDP-based client-server model where the simulation engine is the server and each client is a driving agent. MADRaS uses multiprocessing to run each agent as a parallel process for efficiency and integrates well with popular reinforcement learning libraries like RLlib. We show experiments on single and multi-agent reinforcement learning with and without curriculum.
APA, Harvard, Vancouver, ISO, and other styles
28

Seel, Nigel. "The First European Workshop on Modelling Autonomous Agents in a Multi-Agent World." AI Communications 2, no. 3-4 (1989): 164–67. http://dx.doi.org/10.3233/aic-1989-23-406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

T.P., Deepa. "DEVELOPMENT OF AUTONOMOUS GAME AGENT WITH LEARNING AND REACTIVE BEHAVIORS." International Journal of Research -GRANTHAALAYAH 5, no. 4RACSIT (2017): 91–96. http://dx.doi.org/10.29121/granthaalayah.v5.i4racsit.2017.3360.

Full text
Abstract:
The main goal of this paper is to develop a software agent that is autonomous, with reactive behavior and learning abilities. One application of such agents is in gaming. Gaming characters are expected to work in unpredictable environments with decision-making capabilities such as weapon selection for different targets and wall following. This can be achieved using artificial intelligence (AI) techniques and methods. In this paper, the agent is designed to exhibit capabilities such as moving abilities, steering behavior and obstacle avoidance, synthesis and enhancement of movement, weapon selection, adapting defense strategies, and strategic decision making.
APA, Harvard, Vancouver, ISO, and other styles
30

Fioretto, Ferdinando, Enrico Pontelli, and William Yeoh. "Distributed Constraint Optimization Problems and Applications: A Survey." Journal of Artificial Intelligence Research 61 (March 29, 2018): 623–98. http://dx.doi.org/10.1613/jair.5565.

Full text
Abstract:
The field of multi-agent system (MAS) is an active area of research within artificial intelligence, with an increasingly important impact in industrial and other real-world applications. In a MAS, autonomous agents interact to pursue personal interests and/or to achieve common objectives. Distributed Constraint Optimization Problems (DCOPs) have emerged as a prominent agent model to govern the agents' autonomous behavior, where both algorithms and communication models are driven by the structure of the specific problem. During the last decade, several extensions to the DCOP model have been proposed to enable support of MAS in complex, real-time, and uncertain environments. This survey provides an overview of the DCOP model, offering a classification of its multiple extensions and addressing both resolution methods and applications that find a natural mapping within each class of DCOPs. The proposed classification suggests several future perspectives for DCOP extensions and identifies challenges in the design of efficient resolution algorithms, possibly through the adaptation of strategies from different areas.
APA, Harvard, Vancouver, ISO, and other styles
31

Weber, Karsten. "What is it like to encounter an autonomous artificial agent?" AI & SOCIETY 28, no. 4 (2013): 483–89. http://dx.doi.org/10.1007/s00146-013-0453-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Urovi, Visara, Alex C. Olivieri, Albert Brugués de la Torre, Stefano Bromuri, Nicoletta Fornara, and Michael Schumacher. "Secure P2P Cross-Community Health Record Exchange in IHE Compatible Systems." International Journal on Artificial Intelligence Tools 23, no. 01 (2014): 1440006. http://dx.doi.org/10.1142/s0218213014400065.

Full text
Abstract:
This paper investigates a secure mechanism for Electronic Health Records (EHR) exchange over a Peer to Peer (P2P) agent based coordination framework. Our study is based on the SemHealthCoord framework, a platform for the exchange of EHR between autonomous health organisations that extends the existing interoperability standards as proposed by the Integrating Healthcare Enterprise (IHE). Every health organisation in SemHealthCoord represents a community within a P2P network. Communities use a set of autonomous agents and a set of distributed coordination rules to coordinate the agents in the search of specific health records. To enable secure interactions among communities, we propose the use of asymmetric keys and digital certificates. We specify the interaction protocols to provide integrity and authenticity between the communities, and, to illustrate the scalability of our approach, we evaluate the proposed solution in distributed settings by comparing the performance between secured and unsecured data exchange. The contribution of this work is that it enables IHE-based health communities to dynamically exchange patients' EHR using a secure P2P agent coordination framework.
APA, Harvard, Vancouver, ISO, and other styles
33

Greenwald, A., S. Lee, and V. Naroditskiy. "RoxyBot-06: Stochastic Prediction and Optimization in TAC Travel." Journal of Artificial Intelligence Research 36 (December 29, 2009): 513–46. http://dx.doi.org/10.1613/jair.2904.

Full text
Abstract:
In this paper, we describe our autonomous bidding agent, RoxyBot, who emerged victorious in the travel division of the 2006 Trading Agent Competition in a photo finish. At a high level, the design of many successful trading agents can be summarized as follows: (i) price prediction: build a model of market prices; and (ii) optimization: solve for an approximately optimal set of bids, given this model. To predict, RoxyBot builds a stochastic model of market prices by simulating simultaneous ascending auctions. To optimize, RoxyBot relies on the sample average approximation method, a stochastic optimization technique.
APA, Harvard, Vancouver, ISO, and other styles
34

ALONSO, EDUARDO, MARK D'INVERNO, DANIEL KUDENKO, MICHAEL LUCK, and JASON NOBLE. "Learning in multi-agent systems." Knowledge Engineering Review 16, no. 3 (2001): 277–84. http://dx.doi.org/10.1017/s0269888901000170.

Full text
Abstract:
In recent years, multi-agent systems (MASs) have received increasing attention in the artificial intelligence community. Research in multi-agent systems involves the investigation of autonomous, rational and flexible behaviour of entities such as software programs or robots, and their interaction and coordination in such diverse areas as robotics (Kitano et al., 1997), information retrieval and management (Klusch, 1999), and simulation (Gilbert & Conte, 1995). When designing agent systems, it is impossible to foresee all the potential situations an agent may encounter and specify an agent behaviour optimally in advance. Agents therefore have to learn from, and adapt to, their environment, especially in a multi-agent setting.
APA, Harvard, Vancouver, ISO, and other styles
35

Saisubramanian, Sandhya, Ece Kamar, and Shlomo Zilberstein. "Avoiding Negative Side Effects of Autonomous Systems in the Open World." Journal of Artificial Intelligence Research 74 (May 10, 2022): 143–77. http://dx.doi.org/10.1613/jair.1.13581.

Full text
Abstract:
Autonomous systems that operate in the open world often use incomplete models of their environment. Model incompleteness is inevitable due to the practical limitations in precise model specification and data collection about open-world environments. Due to the limited fidelity of the model, agent actions may produce negative side effects (NSEs) when deployed. Negative side effects are undesirable, unmodeled effects of agent actions on the environment. NSEs are inherently challenging to identify at design time and may affect the reliability, usability and safety of the system. We present two complementary approaches to mitigate the NSE via: (1) learning from feedback, and (2) environment shaping. The solution approaches target settings with different assumptions and agent responsibilities. In learning from feedback, the agent learns a penalty function associated with a NSE. We investigate the efficiency of different feedback mechanisms, including human feedback and autonomous exploration. The problem is formulated as a multi-objective Markov decision process such that optimizing the agent’s assigned task is prioritized over mitigating NSE. A slack parameter denotes the maximum allowed deviation from the optimal expected reward for the agent’s task in order to mitigate NSE. In environment shaping, we examine how a human can assist an agent, beyond providing feedback, and utilize their broader scope of knowledge to mitigate the impacts of NSE. We formulate the problem as a human-agent collaboration with decoupled objectives. The agent optimizes its assigned task and may produce NSE during its operation. The human assists the agent by performing modest reconfigurations of the environment so as to mitigate the impacts of NSE, without affecting the agent’s ability to complete its assigned task. We present an algorithm for shaping and analyze its properties. 
Empirical evaluations demonstrate the trade-offs in the performance of different approaches in mitigating NSE in different settings.
APA, Harvard, Vancouver, ISO, and other styles
36

STRONGER, DANIEL, and PETER STONE. "POLYNOMIAL REGRESSION WITH AUTOMATED DEGREE: A FUNCTION APPROXIMATOR FOR AUTONOMOUS AGENTS." International Journal on Artificial Intelligence Tools 17, no. 01 (2008): 159–74. http://dx.doi.org/10.1142/s0218213008003820.

Full text
Abstract:
In order for an autonomous agent to behave robustly in a variety of environments, it must have the ability to learn approximations to many different functions. The function approximator used by such an agent is subject to a number of constraints that may not apply in a traditional supervised learning setting. Many different function approximators exist and are appropriate for different problems. This paper proposes a set of criteria for function approximators for autonomous agents. Additionally, for those problems on which polynomial regression is a candidate technique, the paper presents an enhancement that meets these criteria. In particular, using polynomial regression typically requires a manual choice of the polynomial's degree, trading off between function accuracy and computational and memory efficiency. Polynomial Regression with Automated Degree (PRAD) is a novel function approximation method that uses training data to automatically identify an appropriate degree for the polynomial. PRAD is fully implemented. Empirical tests demonstrate its ability to efficiently and accurately approximate both a wide variety of synthetic functions and real-world data gathered by a mobile robot.
APA, Harvard, Vancouver, ISO, and other styles
37

Radice, G., and C. R. McInnes. "Autonomous behavioural algorithm for space applications." Aeronautical Journal 107, no. 1074 (2003): 521–27. http://dx.doi.org/10.1017/s0001924000013403.

Full text
Abstract:
The purpose of this paper is to present a new approach in the concept and implementation of autonomy for autonomous spacecraft. The one true ‘artificial agent’ approach to autonomy requires the spacecraft to interact in a direct manner with the environment through the use of sensors and actuators. Rather than using complex world models, the spacecraft is allowed to exploit the dynamics of its environment for cues as to appropriate actions to take to achieve its mission goals. The particular artificial agent implementation used here has been inspired by studies of biological systems. The so-called ‘cue-deficit’ action selection algorithm considers the spacecraft to be a non-linear dynamical system with a number of observable states. Using optimal control theory a set of rules is derived which determine which of a finite repertoire of behaviours the spacecraft will perform. A simple model of a single imaging spacecraft in low polar Earth orbit is used to demonstrate the algorithm.
APA, Harvard, Vancouver, ISO, and other styles
38

Minello, Murilo. "Criação de proposta nacional de coordenação de respostas a eventos climáticos extremos: uma abordagem baseada em agentes autônomos e integração de instituições de pesquisa." Boletim do Observatório Ambiental Alberto Ribeiro Lamego 19, no. 1 (2025): 29–43. https://doi.org/10.19180/2177-4560.v19n12025p29-43.

Full text
Abstract:
Climate change intensifies extreme events such as storms and droughts, requiring efficient solutions. This study proposes the use of autonomous agents in managing these events, with three objectives: identifying management phases, training coordinators of teams supported by autonomous agents, and developing a national plan. The proposed structure includes six agents: one general coordinator and five agents responsible for diagnosis, preparation, monitoring, training, and recovery. Each agent operates in an integrated manner to ensure an effective response to climate challenges. Additionally, a national network of connected municipal agents is proposed, integrating education, scientific research, and artificial intelligence to optimize resources and strengthen community resilience against this urgent issue.
APA, Harvard, Vancouver, ISO, and other styles
39

Kong, Guojie, Jie Cai, Jianwei Gong, Zheming Tian, Lu Huang, and Yuan Yang. "Cooperative Following of Multiple Autonomous Robots Based on Consensus Estimation." Electronics 11, no. 20 (2022): 3319. http://dx.doi.org/10.3390/electronics11203319.

Full text
Abstract:
When performing a specific task, a Multi-Agent System (MAS) not only needs to coordinate the whole formation but also needs to coordinate the dynamic relationship among all the agents, which means judging and adjusting their positions in the formation according to their location, velocity, surrounding obstacles and other information to accomplish specific tasks. This paper devises an integral separation feedback method for single-agent control with a developed robot motion model; then, an enhanced strategy incorporating the dynamic information of the leader robot is proposed for further improvement. On this basis, a method combining second-order formation control with path planning is proposed for multi-agent following control, which uses the system dynamics of one agent and the Laplacian matrix to generate the consensus protocol. Under this second-order consensus protocol, the agents exchange information according to a pre-specified communication digraph and keep a certain following formation. Moreover, an improved path planning method using an artificial potential field is developed to guide the MAS to reach the destination and avoid collisions. The effectiveness of the proposed approach is verified with simulation results in different scenarios.
APA, Harvard, Vancouver, ISO, and other styles
40

Hao, Qi, Weiming Shen, and Zhan Zhang. "An autonomous agent development environment for engineering applications." Advanced Engineering Informatics 19, no. 2 (2005): 123–34. http://dx.doi.org/10.1016/j.aei.2005.05.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Ohradzansky, Michael, Eugene Rush, Danny Riley, et al. "Multi-Agent Autonomy: Advancements and Challenges in Subterranean Exploration." Field Robotics 2, no. 1 (2022): 1068–104. http://dx.doi.org/10.55417/fr.2022035.

Full text
Abstract:
Artificial intelligence has undergone immense growth and maturation in recent years, though autonomous systems have traditionally struggled when fielded in diverse and previously unknown environments. DARPA is seeking to change that with the Subterranean Challenge, by providing roboticists the opportunity to support civilian and military first responders in complex and high-risk underground scenarios. The subterranean domain presents a handful of challenges, such as limited communication, diverse topology and terrain, and degraded sensing. Team MARBLE proposes a solution for autonomous exploration of unknown subterranean environments in which coordinated agents search for artifacts of interest. The team presents two navigation algorithms in the form of a metric-topological graph-based planner and a continuous frontier-based planner. To facilitate multi-agent coordination, agents share and merge new map information and candidate goal points. Agents deploy communication beacons at different points in the environment, extending the range at which maps and other information can be shared. Onboard autonomy reduces the load on human supervisors, allowing agents to detect and localize artifacts and explore autonomously outside established communication networks. Given the scale, complexity, and tempo of this challenge, a range of lessons was learned, most importantly, that frequent and comprehensive field testing in representative environments is key to rapidly refining system performance.
APA, Harvard, Vancouver, ISO, and other styles
42

Bryson, Joanna. "Cross-paradigm analysis of autonomous agent architecture." Journal of Experimental & Theoretical Artificial Intelligence 12, no. 2 (2000): 165–89. http://dx.doi.org/10.1080/095281300409829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ekenberg, Love, and Mats Danielson. "Automatized Decision Making for Autonomous Agents." International Journal of Intelligent Mechatronics and Robotics 3, no. 3 (2013): 22–28. http://dx.doi.org/10.4018/ijimr.2013070102.

Full text
Abstract:
Utility theory and the principle of maximising the expected utility have, within the multi-agent community, had a great influence on multi-agent based decision making. Even though this principle is often useful when evaluating a decision situation, it is virtually impossible, except in very artificial situations, to use the more basic decision rules with their unrealistically strong requirements for the input data, and other candidate methods must be considered instead. This article provides an overview and brings attention to some of the possibilities for utilizing more elaborate decision methods, while still keeping the computational issues at a tractable level.
APA, Harvard, Vancouver, ISO, and other styles
44

Xia, Zheng You, and Chen Ling Gu. "The Role of Belief in the Emergence of Social Conventions in Artificial Social System." Advanced Materials Research 159 (December 2010): 210–15. http://dx.doi.org/10.4028/www.scientific.net/amr.159.210.

Full text
Abstract:
The emergence of social conventions in multi-agent systems has been analyzed mainly by considering a group of homogeneous autonomous agents that can reach a global agreement using locally available information. We adopt a novel viewpoint: the process through which agents coordinate their behaviors to reduce conflict is also the process agents use to evaluate trust relations with their neighbors during local interactions. In this paper, we propose using the belief update rule called Instances of Satisfying and Dissatisfying (ISD) to study the evolution of agents' beliefs during local interactions. We also define an action selection rule called “highest cumulative belief” (HCB) to coordinate agents' behavior and reduce conflicts in MAS (multi-agent systems). We find that the HCB rule can cause a group of agents to achieve the emergence of social conventions. Furthermore, we discover that if a group of agents can achieve the emergence of social conventions through the ISD and HCB rules in an artificial social system, after a number of iterations this group of agents can enter a harmony state wherein each agent fully believes its neighbors.
APA, Harvard, Vancouver, ISO, and other styles
45

Suresh, Shreyas Bangalore, Tarun Raghunandan Kaushik, and Varun Tarikere Shankarappa. "Impact of Hofstede’s Cultural Dimensions on Intelligent Ethical Agent." International Journal of Innovative Research in Computer and Communication Engineering 10, no. 08 (2022): 7532–37. http://dx.doi.org/10.15680/ijircce.2022.1008011.

Full text
Abstract:
Since the advent of Artificial Intelligence, many autonomous machines are making their way into society. With the burgeoning development of autonomous systems such as self-driving cars have come concerns about how machines will make moral decisions, and thus a new field called Machine Ethics has emerged. Machine ethics deals with moral dilemmas faced by machines while interacting with humans, or possibly other machines as well, and ensures the decisions taken by an algorithm are morally acceptable. This is in contrast to computer ethics, which solely focuses on ethical problems and protocol surrounding humans' use of technology. In this article, we have explored the moral dilemmas faced by autonomous vehicles and have trained an artificial intelligence model that makes ethically acceptable decisions based on the data collected by the famous Moral Machine experiment. Here, we describe the results obtained from the model. First, we summarize the accuracies obtained upon training multiple models with different techniques. Later, we document the variation of accuracies in the model upon using the Hofstede model of six dimensions of national culture as a factor when pre-processing the data.
APA, Harvard, Vancouver, ISO, and other styles
46

EL-GHAMRAWY, SALLY M., and ALI I. ELDESOUKY. "AN AGENT DECISION SUPPORT MODULE BASED ON GRANULAR ROUGH MODEL." International Journal of Information Technology & Decision Making 11, no. 04 (2012): 793–820. http://dx.doi.org/10.1142/s0219622012500216.

Full text
Abstract:
A multi-agent system (MAS) is a branch of distributed artificial intelligence, composed of a number of distributed and autonomous agents. In a MAS, effective coordination is essential for autonomous agents to achieve their goals. Any decision based on a foundation of knowledge and reasoning can lead agents into successful cooperation; to achieve the necessary degree of flexibility in coordination, an agent must decide when to coordinate and which coordination mechanism to use. The performance of any MAS depends directly on the decisions made by the agents. The agents must therefore be able to make correct decisions. This paper proposes a decision support module in a distributed MAS that is concerned with two main decisions: the decision needed to allocate a task to specific agent/s and the decision needed to select the appropriate coordination mechanism when agents must coordinate with other agent/s to accomplish a specific task. An algorithm for the task allocation decision maker (TADM) and the coordination mechanism selection decision maker (CMSDM) algorithm are proposed that are based on the granular rough model (GRM). Furthermore, a number of experiments were performed to validate the effectiveness of the proposed algorithms; the efficiency of the proposed algorithms is compared with recent works. The preliminary results demonstrate the efficiency of our algorithms.
APA, Harvard, Vancouver, ISO, and other styles
47

Gath, Max, Stefan Edelkamp, and Otthein Herzog. "Agent-Based Dispatching Enables Autonomous Groupage Traffic." Journal of Artificial Intelligence and Soft Computing Research 3, no. 1 (2013): 27–40. http://dx.doi.org/10.2478/jaiscr-2014-0003.

Full text
Abstract:
The complexity and dynamics of groupage traffic require flexible, efficient, and adaptive planning and control processes. The general problem of allocating orders to vehicles can be mapped to the Vehicle Routing Problem (VRP). However, in practical applications additional requirements complicate the dispatching processes and require proactive and reactive system behavior. To enable automated dispatching processes, this article presents a multiagent system in which the decision making is shifted to autonomous, interacting, intelligent agents. Besides the communication protocols and the agent architecture, the focus is on the individual decision making of the agents, which meets the specific requirements of groupage traffic. To evaluate the approach we apply multiagent-based simulation and model several scenarios of real-world infrastructures with orders provided by our industrial partner. Moreover, a case study is conducted which covers the autonomous groupage traffic in the current processes of our industrial partner. The results reveal that agent-based dispatching meets the sophisticated requirements of groupage traffic. Furthermore, the decision making supports the efficient combination of pickup and delivery tours while satisfying logistic request priorities, time windows, and capacity constraints.
APA, Harvard, Vancouver, ISO, and other styles
48

Rogers, Alex, Sarvapali Ramchurn, and Nicholas Jennings. "Delivering the Smart Grid: Challenges for Autonomous Agents and Multi-Agent Systems Research." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2021): 2166–72. http://dx.doi.org/10.1609/aaai.v26i1.8445.

Full text
Abstract:
Restructuring electricity grids to meet the increased demand caused by the electrification of transport and heating, while making greater use of intermittent renewable energy sources, represents one of the greatest engineering challenges of our day. This modern electricity grid, in which both electricity and information flow in two directions between large numbers of widely distributed suppliers and generators — commonly termed the ‘smart grid’ — represents a radical reengineering of infrastructure which has changed little over the last hundred years. However, the autonomous behaviour expected of the smart grid, its distributed nature, and the existence of multiple stakeholders each with their own incentives and interests, challenges existing engineering approaches. In this challenge paper, we describe why we believe that artificial intelligence, and particularly, the fields of autonomous agents and multi-agent systems are essential for delivering the smart grid as it is envisioned. We present some recent work in this area and describe many of the challenges that still remain.
APA, Harvard, Vancouver, ISO, and other styles
49

Praneeth, Vadlapati. "Agent-Supervisor: Supervising Actions of Autonomous AI Agents to Ensure Ethical Compliance." International Journal on Science and Technology 14, no. 4 (2023): 1–9. https://doi.org/10.5281/zenodo.14288330.

Full text
Abstract:
The rapid adoption of Artificial Intelligence (AI) agents in decision-making involves the autonomous selection of tools and execution of actions. User interactions with agents create concerns regarding the autonomous selection of inappropriate tools and the oversharing of unnecessary or sensitive user data with APIs, which raises privacy concerns. The selection of malicious tools causes further concerns related to user safety. This paper proposes a comprehensive framework to evaluate actions performed by AI agents through a Large Language Model (LLM), which acts as a supervisory model designed to detect unexpected behavior of agents, such as unsafe, biased, inappropriate, or malicious behavior. The supervisory model also serves as an explainer to enhance the transparency of the decision-making process of agents. The method detects privacy risks, unauthorized actions, and misuse of AI by tool providers, which are critical concerns for the trustability of AI. The experiment demonstrates the effectiveness of this approach through examples illustrating both safe and unsafe agent behaviors. The results of the experiment demonstrate a successful implementation of the framework, which generates warnings based on a set of criteria regarding unexpected behavior by the agent. The source code is available at github.com/Pro-GenAI/Agent-Supervisor.
APA, Harvard, Vancouver, ISO, and other styles
50

Kim, Tae Woo, and Adam Duhachek. "Artificial Intelligence and Persuasion: A Construal-Level Account." Psychological Science 31, no. 4 (2020): 363–80. http://dx.doi.org/10.1177/0956797620904985.

Full text
Abstract:
Although more individuals are relying on information provided by nonhuman agents, such as artificial intelligence and robots, little research has examined how persuasion attempts made by nonhuman agents might differ from persuasion attempts made by human agents. Drawing on construal-level theory, we posited that individuals would perceive artificial agents at a low level of construal because of the agents’ lack of autonomous goals and intentions, which directs individuals’ focus toward how these agents implement actions to serve humans rather than why they do so. Across multiple studies (total N = 1,668), we showed that these construal-based differences affect compliance with persuasive messages made by artificial agents. These messages are more appropriate and effective when the message represents low-level as opposed to high-level construal features. These effects were moderated by the extent to which an artificial agent could independently learn from its environment, given that learning defies people’s lay theories about artificial agents.
APA, Harvard, Vancouver, ISO, and other styles