Academic literature on the topic 'Artificial intelligence in games'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Artificial intelligence in games.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Artificial intelligence in games"

1

Lucas, Simon. "Artificial Intelligence and Games." KI - Künstliche Intelligenz 34, no. 1 (February 17, 2020): 87–88. http://dx.doi.org/10.1007/s13218-020-00646-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Schaeffer, Jonathan, and H. Jaap van den Herik. "Games, computers, and artificial intelligence." Artificial Intelligence 134, no. 1-2 (January 2002): 1–7. http://dx.doi.org/10.1016/s0004-3702(01)00165-5.

Full text
3

El Rhalibi, Abdennour, Kok Wai Wong, and Marc Price. "Artificial Intelligence for Computer Games." International Journal of Computer Games Technology 2009 (2009): 1–3. http://dx.doi.org/10.1155/2009/251652.

Full text
4

Barash, Guy, Mauricio Castillo-Effen, Niyati Chhaya, Peter Clark, Huáscar Espinoza, Eitan Farchi, Christopher Geib, et al. "Reports of the Workshops Held at the 2019 AAAI Conference on Artificial Intelligence." AI Magazine 40, no. 3 (September 30, 2019): 67–78. http://dx.doi.org/10.1609/aimag.v40i3.4981.

Full text
Abstract:
The workshop program of the Association for the Advancement of Artificial Intelligence’s 33rd Conference on Artificial Intelligence (AAAI-19) was held in Honolulu, Hawaii, on Sunday and Monday, January 27–28, 2019. There were sixteen workshops in the program: Affective Content Analysis: Modeling Affect-in-Action, Agile Robotics for Industrial Automation Competition, Artificial Intelligence for Cyber Security, Artificial Intelligence Safety, Dialog System Technology Challenge, Engineering Dependable and Secure Machine Learning Systems, Games and Simulations for Artificial Intelligence, Health Intelligence, Knowledge Extraction from Games, Network Interpretability for Deep Learning, Plan, Activity, and Intent Recognition, Reasoning and Learning for Human-Machine Dialogues, Reasoning for Complex Question Answering, Recommender Systems Meet Natural Language Processing, Reinforcement Learning in Games, and Reproducible AI. This report contains brief summaries of all the workshops that were held.
5

Ganguly, Rajjeshwar, Dubba Rithvik Reddy, Revathi Venkataraman, and Sharanya S. "Review on foreground artificial intelligence in games." International Journal of Engineering & Technology 7, no. 2.8 (March 19, 2018): 453. http://dx.doi.org/10.14419/ijet.v7i2.8.10482.

Full text
Abstract:
Artificial Intelligence (AI) is applied in almost every field existing in today's world, and video games prove to be an excellent ground due to their responsive and intelligent behaviour. Games can be put to use to model human-level AI, machine learning, and scripted behaviour. This work deals with AI used in games to create more complicated and human-like behaviour in non-player characters. Unlike most commercial games, games involving AI don't use AI in the background; rather, it is used in the foreground to enhance the player experience. An analysis of the use of AI in a number of existing games is made to identify patterns for AI in games, which include decision trees, scripted behaviour, and learning agents.
6

Rana, Priya, Parthik Bhardwaj, and Jyotsna Singh. "Artificial Intelligence (AI) in Video Games." International Journal of Computer Applications 181, no. 19 (September 18, 2018): 1–3. http://dx.doi.org/10.5120/ijca2018917818.

Full text
7

Rodin, E. Y., Y. Lirov, S. Mittnik, B. G. McElhaney, and L. Wilbur. "Artificial intelligence in air combat games." Computers & Mathematics with Applications 13, no. 1-3 (1987): 261–74. http://dx.doi.org/10.1016/0898-1221(87)90109-x.

Full text
8

Hanley, John T. "GAMES, game theory and artificial intelligence." Journal of Defense Analytics and Logistics 5, no. 2 (December 7, 2021): 114–30. http://dx.doi.org/10.1108/jdal-10-2021-0011.

Full text
Abstract:
Purpose: The purpose of this paper is to illustrate how game theoretic solution concepts inform what classes of problems will be amenable to artificial intelligence and machine learning (AI/ML), and how to evolve the interaction between human and artificial intelligence. Design/methodology/approach: The approach addresses the development of operational gaming to support planning and decision making. It then provides a succinct summary of game theory for those designing and using games, with an emphasis on information conditions and solution concepts. It addresses how experimentation demonstrates where human decisions differ from game theoretic solution concepts and how games have been used to develop AI/ML. It concludes by suggesting what classes of problems will be amenable to AI/ML, and which will not. It goes on to propose a method for evolving human/artificial intelligence. Findings: Game theoretic solution concepts inform classes of problems where AI/ML 'solutions' will be suspect. The complexity of the subject requires a campaign of learning. Originality/value: Though games have been essential to the development of AI/ML, practitioners have yet to employ game theory to understand its limitations.
9

Bátfai, Norbert. "A játékok és a mesterséges intelligencia mint a kultúra jövője – egy kísérlet a szubjektivitás elméletének kialakítására." Információs Társadalom 18, no. 2 (July 31, 2018): 28. http://dx.doi.org/10.22503/inftars.xviii.2018.2.2.

Full text
Abstract:
Games and artificial intelligence as the future of culture: an attempt to develop a theory of subjectivity. The goal of this paper is to use artificial intelligence research to acquire more extensive knowledge of ourselves. On the one hand, we provide a philosophical background to facilitate this, and on the other hand, we try to improve the social acceptance of artificial intelligence. We argue that the way to maintain and further develop human culture is through gaming and artificial intelligence. In support of this thesis we make an attempt to create a theory of subjectivity. Keywords: artificial intelligence, complexity, entropy, meme, computer games, esport
10

Naumov, Pavel, and Yuan Yuan. "Intelligence in Strategic Games." Journal of Artificial Intelligence Research 71 (July 24, 2021): 521–56. http://dx.doi.org/10.1613/jair.1.12883.

Full text
Abstract:
If an agent, or a coalition of agents, has a strategy, knows that she has a strategy, and knows what the strategy is, then she has a know-how strategy. Several modal logics of coalition power for know-how strategies have been studied before. The contribution of the article is three-fold. First, it proposes a new class of know-how strategies that depend on the intelligence information about the opponents’ actions. Second, it shows that the coalition power modality for the proposed new class of strategies cannot be expressed through the standard know-how modality. Third, it gives a sound and complete logical system that describes the interplay between the coalition power modality with intelligence and the distributed knowledge modality in games with imperfect information.
More sources

Dissertations / Theses on the topic "Artificial intelligence in games"

1

Zhadan, Anastasiia. "Artificial Intelligence Adaptation in Video Games." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-79131.

Full text
Abstract:
One of the most important features of a (computer) game that makes it memorable is an ability to bring a sense of engagement. This can be achieved in numerous ways, but the major part is a challenge, often provided by in-game enemies and their ability to adapt towards the human player. However, adaptability is not very common in games. Throughout this thesis work, aspects of the game control systems that can be improved in order to be adaptable were studied. Based on the results gained from the study of the literature related to artificial intelligence in games, a classification of games was developed for grouping the games by the complexity of the control systems and their ability to adapt different aspects of enemies' behavior, including individual and group behavior. It appeared that only 33% of the games cannot be considered adaptable. This classification was then used to analyze the popularity of games regarding their challenge complexity. Analysis revealed that simple, familiar behavior is more welcomed by players. However, highly adaptable games have got competitively high scores and excellent reviews from game critics and reviewers, proving that adaptability in games deserves further research.
2

Karlsson, Borje Felipe Fernandes. "An Artificial Intelligence Middleware for Digital Games." Pontifícia Universidade Católica do Rio de Janeiro, 2005. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=7861@1.

Full text
Abstract:
The usage of artificial intelligence (AI) techniques in digital games is currently facing a steady need of improvements, so it can cater to players' higher and higher expectations that require realism and believability in the game environment and in its characters' behaviours. In order to ease the fulfillment of these goals, software engineering techniques and methodologies have started to be used during game development. However, the use of such techniques and the creation of AI middleware are still far from producing tools generic and flexible enough for developing this kind of application. Another important factor to be mentioned in this discussion is the lack of available literature related to studies in this field. This dissertation discusses the research effort in developing a flexible architecture that can be applied to different game styles, provides support for several game AI functionalities and serves as basis for the introduction of more powerful techniques that can improve gameplay and user experience. This work presents: design issues of such a system and its integration with games; a study on AI middleware architecture for games; an analysis of the state of the art in the field; and a survey of the available relevant literature. Taking this research as starting point, the design and implementation of the proposed AI middleware architecture was conducted and is also described here. Besides the implementation itself, a study on the use of design patterns in the context of the development and evolution of an AI framework for digital games is also presented.
3

Edlund, Mattias. "Artificial Intelligence in Games : Faking Human Behavior." Thesis, Uppsala universitet, Institutionen för speldesign, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-258222.

Full text
Abstract:
This paper examines the possibilities of faking human behavior with artificial intelligence in computer games, by using efficient methods that save valuable development time and also creates a more rich experience for the players of a game. The specific implementation of artificial intelligence created and discussed is a neural network controlling a finite-state machine. The objective was to mimic human behavior rather than simulating true intelligence. A 2D shooter game is developed and used for experiments performed with human and artificial intelligence controlled players. The game sessions played were recorded in order for other humans to replay. Both players and spectators of the game sessions left feedbacks and reports that could later be analyzed. The data collected from these experiments was then analyzed, and reflections were made on the entire project. Tips and ideas are proposed to developers of shooter games who are interested in making human-like artificial intelligence. Conclusions are made and extra information is provided in order to further iterate on this research.
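The implementation Edlund describes, a neural network steering a finite-state machine, can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the thesis's code: the state names, input features, and hand-fixed weights (standing in for a trained network) are all hypothetical.

```python
import math

# Hypothetical FSM states and input features: (enemy_visible, low_health, idle_time).
STATES = ["patrol", "attack", "flee"]

# Hand-fixed weights standing in for a trained network; one row of feature
# weights per FSM state.
WEIGHTS = {
    "patrol": [-1.0, -1.0, 0.5],
    "attack": [2.0, -2.0, 0.0],
    "flee":   [1.0, 3.0, -0.5],
}

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def next_state(features):
    """One forward pass: score every FSM state and switch to the best one."""
    scores = [sum(w * f for w, f in zip(WEIGHTS[s], features)) for s in STATES]
    probs = softmax(scores)
    return STATES[max(range(len(STATES)), key=lambda i: probs[i])]

state_a = next_state([1.0, 0.0, 0.0])  # enemy visible, healthy
state_b = next_state([1.0, 1.0, 0.0])  # enemy visible, badly hurt
```

The network only chooses between designer-authored states, which is why the result can look human-like without simulating true intelligence.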
4

Hedberg, Charlie Forsberg, and Alexander Pedersen. "Artificial Intelligence : Memory-driven decisions in games." Thesis, Blekinge Tekniska Högskola, Institutionen för teknik och estetik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3640.

Full text
Abstract:
Developing AI (Artificial Intelligence) for games can be a hard and challenging task. It is sometimes desired to create behaviors that follow some sort of logical pattern. In order to do this, information must be gathered and processed. This bachelor thesis presents an algorithm that could assist current AI technologies to collect and memorize environmental data. The thesis also covers practical implementation guidelines, established through research and testing.
This is the reflective component of a digital media production.
5

Allis, Louis Victor. Searching for solutions in games and artificial intelligence. Maastricht: Rijksuniversiteit Limburg, 1994. http://arno.unimaas.nl/show.cgi?fid=6868.

Full text
6

Saini, Simardeep S. "Mimicking human player strategies in fighting games using game artificial intelligence techniques." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/16380.

Full text
Abstract:
Fighting videogames (also known as fighting games) are ever growing in popularity and accessibility. The isolated console experiences of 20th-century gaming have been replaced by online gaming services that allow gamers to play with one another from almost anywhere in the world. This gives rise to competitive gaming on a global scale, enabling players to experience fresh play styles and challenges by playing someone new. Fighting games can typically be played either as a single-player experience or against another human player, whether via a network or a traditional multiplayer experience. However, there are two issues with these approaches. First, the single-player offering in many fighting games is regarded as simplistic in design, making the moves by the computer predictable. Secondly, while playing against other human players can be more varied and challenging, this may not always be achievable due to the logistics involved in setting up such a bout. Game Artificial Intelligence could provide a solution to both of these issues, allowing a human player's strategy to be learned and then mimicked by the AI fighter. In this thesis, game AI techniques have been researched to provide a means of mimicking human player strategies in strategic fighting games with multiple parameters. Various techniques and their current usages are surveyed, informing the design of two separate solutions to this problem. The first solution relies solely on leveraging k-nearest neighbour classification to identify which move should be executed based on the in-game parameters, resulting in decisions being made at the operational level and fed from the bottom up to the strategic level. 
The second solution utilises a number of existing Artificial Intelligence techniques, including data driven finite state machines, hierarchical clustering and k nearest neighbour classification, in an architecture that makes decisions at the strategic level and feeds them from the top-down to the operational level, resulting in the execution of moves. This design is underpinned by a novel algorithm to aid the mimicking process, which is used to identify patterns and strategies within data collated during bouts between two human players. Both solutions are evaluated quantitatively and qualitatively. A conclusion summarising the findings, as well as future work, is provided. The conclusions highlight the fact that both solutions are proficient in mimicking human strategies, but each has its own strengths depending on the type of strategy played out by the human. More structured, methodical strategies are better mimicked by the data driven finite state machine hybrid architecture, whereas the k nearest neighbour approach is better suited to tactical approaches, or even random button bashing that does not always conform to a pre-defined strategy.
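The core of the first solution, k-nearest-neighbour classification from in-game parameters to a move, can be sketched as follows. The feature layout, example data, and value of k are illustrative assumptions, not the thesis's recorded data.

```python
from collections import Counter

# Hypothetical recorded examples: (distance_to_opponent, opponent_attacking)
# mapped to the move a human player chose in that situation.
TRAINING = [
    ((0.9, 0), "approach"),
    ((0.8, 0), "approach"),
    ((0.2, 0), "punch"),
    ((0.3, 0), "punch"),
    ((0.2, 1), "block"),
    ((0.3, 1), "block"),
]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def knn_move(features, k=3):
    """Pick the move by majority vote among the k closest recorded situations."""
    nearest = sorted(TRAINING, key=lambda ex: sq_dist(ex[0], features))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

move_far = knn_move((0.85, 0))    # far from the opponent
move_close = knn_move((0.25, 1))  # close, opponent attacking
```

Because the classifier simply replays the human's recorded choices, it naturally handles the unstructured, tactical play the thesis says k-NN mimics best.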
7

Nilsson, Joakim, and Andreas Jonasson. "Using Artificial Intelligence for Gameplay Testing On Turn-Based Games." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16716.

Full text
Abstract:
Background. Game development is a constantly evolving multi-billion-dollar industry, and the need for quality products is very high. Testing the games, however, is a very time-consuming and tedious task, often coming down to repeating sequences until a requirement has been met. But what if some parts of it could be automated, handled by an artificial intelligence that can play the game day and night, giving statistics about the gameplay as well as reports about errors that occurred during the session? Objectives. This thesis is done in cooperation with Fall Damage Studio AB, and aims to find and implement a suitable artificial intelligence agent to perform automated tests on a game Fall Damage Studio AB are currently developing, Project Freedom. The objective is to identify potential problems, benefits, and use cases of using a technique such as this. A secondary objective is to also identify what is needed by the game for this kind of technique to be useful. Methods. To test the technique, a Monte-Carlo Tree Search algorithm was identified as the most suitable algorithm and implemented for use in two different types of experiments. The first was to evaluate how varying limitations in terms of the number of iterations and depth affected the results of the algorithm. This was done to see if it was possible to change these factors and find a point where acceptable levels of play were achieved and further increases to these factors gave limited enhancements but increased the time. The second experiment aimed to evaluate what useful data can be extracted from a game, both in terms of gameplay-related data as well as error information from crashes. Project Freedom was only used for the second test due to constraints that were out of scope for this thesis to try and repair. Results. The thesis has identified several requirements that are needed for a game to use a technique such as this in a useful way. 
For Monte-Carlo Tree Search specifically, the game is required to have a gamestate that is quick to create a copy of and a game simulation that can be run in a short time. The game must also be tested for the depth and iteration point where the value of increasing these factors diminishes. More generally, the algorithm of choice must be a part of the design process, and different games might require different kinds of algorithms. Adding this type of algorithm at a late stage in development, as was done for this thesis, might be possible if precautions are taken. Conclusions. This thesis shows that using artificial intelligence agents for gameplay testing is definitely possible, but it needs to be considered in the early part of the development process, as no one-size-fits-all approach is likely to exist. Different games will have their own requirements, some potentially more general for that type of game, and some unique to that specific game. Thus different algorithms will work better on certain types of games compared to others, and they will need to be tweaked to perform optimally on a specific game.
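The two requirements the thesis identifies for Monte-Carlo methods, a cheaply copyable game state and a fast simulation, can be illustrated with a toy sketch. The Nim-like game, the playout budget, and every name here are illustrative assumptions, not Project Freedom's code.

```python
import random

class NimState:
    """Toy game: players alternately take 1-3 tokens; whoever takes the last wins."""
    def __init__(self, tokens=10, player=1):
        self.tokens, self.player = tokens, player

    def copy(self):
        # A cheap state copy: one of the requirements the thesis identifies.
        return NimState(self.tokens, self.player)

    def moves(self):
        return [m for m in (1, 2, 3) if m <= self.tokens]

    def play(self, m):
        self.tokens -= m
        self.player = 3 - self.player

def rollout_value(state, move, playouts=200, seed=0):
    """Estimated win rate for the player to move after `move`, from fast,
    uniformly random playouts (the simulation step of MCTS)."""
    rng = random.Random(seed)
    mover, wins = state.player, 0
    for _ in range(playouts):
        s = state.copy()
        s.play(move)
        while s.moves():
            s.play(rng.choice(s.moves()))
        if 3 - s.player == mover:  # the player who took the last token wins
            wins += 1
    return wins / playouts

# Win-rate estimates for each opening move from a 10-token game.
values = {m: rollout_value(NimState(10), m) for m in (1, 2, 3)}
```

If `copy()` or the playout loop were expensive, the iteration budget the thesis tunes would quickly dominate testing time, which is why these two properties gate the whole approach.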
8

Wei, Ermo. "Learning to Play Cooperative Games via Reinforcement Learning." Thesis, George Mason University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13420351.

Full text
Abstract:

Being able to accomplish tasks with multiple learners through learning has long been a goal of the multiagent systems and machine learning communities. One of the main approaches people have taken is reinforcement learning, but due to certain conditions and restrictions, applying reinforcement learning in a multiagent setting has not achieved the same level of success when compared to its single agent counterparts.

This thesis aims to make coordination better for agents in cooperative games by improving on reinforcement learning algorithms in several ways. I begin by examining certain pathologies that can lead to the failure of reinforcement learning in cooperative games, and in particular the pathology of relative overgeneralization. In relative overgeneralization, agents do not learn to optimally collaborate because during the learning process each agent instead converges to behaviors which are robust in conjunction with the other agent's exploratory (and thus random), rather than optimal, choices. One solution to this is so-called lenient learning, where agents are forgiving of the poor choices of their teammates early in the learning cycle. In the first part of the thesis, I develop a lenient learning method to deal with relative overgeneralization in independent learner settings with small stochastic games and discrete actions.

I then examine certain issues in a more complex multiagent domain involving parameterized action Markov decision processes, motivated by the RoboCup 2D simulation league. I propose two methods, one batch method and one actor-critic method, based on state of the art reinforcement learning algorithms, and show experimentally that the proposed algorithms can train the agents in a significantly more sample-efficient way than more common methods.

I then broaden the parameterized-action scenario to consider both repeated and stochastic games with continuous actions. I show how relative overgeneralization prevents the multiagent actor-critic model from learning optimal behaviors and demonstrate how to use Soft Q-Learning to solve this problem in repeated games.

Finally, I extend imitation learning to the multiagent setting to solve related issues in stochastic games, and prove that given the demonstration from an expert, multiagent Imitation Learning is exactly the multiagent actor-critic model in Maximum Entropy Reinforcement Learning framework. I further show that when demonstration samples meet certain conditions the relative overgeneralization problem can be avoided during the learning process.

9

Liu, Siming. "Evolving effective micro behaviors for real-time strategy games." Thesis, University of Nevada, Reno, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3707862.

Full text
Abstract:

Real-Time Strategy games have become a new frontier of artificial intelligence research. Advances in real-time strategy game AI, like with chess and checkers before, will significantly advance the state of the art in AI research. This thesis aims to investigate using heuristic search algorithms to generate effective micro behaviors in combat scenarios for real-time strategy games. Macro and micro management are two key aspects of real-time strategy games. While good macro helps a player collect more resources and build more units, good micro helps a player win skirmishes against equal numbers of opponent units or win even when outnumbered. In this research, we use influence maps and potential fields as a basis representation to evolve micro behaviors. We first compare genetic algorithms against two types of hill climbers for generating competitive unit micro management. Second, we investigated the use of case-injected genetic algorithms to quickly and reliably generate high quality micro behaviors. Then we compactly encoded micro behaviors including influence maps, potential fields, and reactive control into fourteen parameters and used genetic algorithms to search for a complete micro bot, ECSLBot. We compare the performance of our ECSLBot with two state of the art bots, UAlbertaBot and Nova, on several skirmish scenarios in a popular real-time strategy game StarCraft. The results show that the ECSLBot tuned by genetic algorithms outperforms UAlbertaBot and Nova in kiting efficiency, target selection, and fleeing. In addition, the same approach works to create competitive micro behaviors in another game SeaCraft. Using parallelized genetic algorithms to evolve parameters in SeaCraft we are able to speed up the evolutionary process from twenty one hours to nine minutes. 
We believe this work provides evidence that genetic algorithms and our representation may be a viable approach to creating effective micro behaviors for winning skirmishes in real-time strategy games.
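The encode-and-evolve approach described above can be sketched as follows. The fitness function is a toy stand-in for a simulated skirmish score, and the genome length and GA settings are assumptions rather than ECSLBot's actual fourteen-parameter setup.

```python
import random

TARGET = [0.7, 0.2, 0.9, 0.4]  # pretend-optimal micro parameters (hypothetical)

def fitness(genome):
    """Toy stand-in for the score of a simulated skirmish with these parameters."""
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def evolve(pop_size=30, genes=4, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genes)
            child = a[:cut] + b[cut:]           # one-point crossover
            i = rng.randrange(genes)            # point mutation, clamped to [0, 1]
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In the thesis, each fitness evaluation is a real skirmish, which is why parallelizing the GA (twenty-one hours down to nine minutes in SeaCraft) matters so much.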

10

Stene, Sindre Berg. "Artificial Intelligence Techniques in Real-Time Strategy Games - Architecture and Combat Behavior." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9497.

Full text
Abstract:

The general purpose of this research is to investigate the possibilities offered for the use of Artificial Intelligence theory and methods in advanced game environments. The real-time strategy (RTS) game genre is investigated in detail, and an architecture and solutions to some common issues are presented. An RTS AI-controlled opponent named “KAI” is implemented for the “TA Spring” game engine in order to advance the state of the art in using AI techniques in games and to gain some insight into the strengths and weaknesses of AI Controlled Player (AI CP) architectures. A goal was to create an AI with behavior that gave the impression of intelligence to the human player, by taking on certain aspects of the style in which human players play the game. Another goal, for the benefit of the TA Spring development community, was to create an AI which played with sufficient skill to provide experienced players with resistance, without using obvious means of cheating such as getting free resources or military assets. Several common techniques were used, among others rule-based decision making, path planning and path replanning, and influence maps, and a variant of the A* search algorithm was used for searches of various kinds. The AI also has an approach to micromanagement of fighting units in combination with influence maps. The AI CP program was repeatedly tested against human players and other AI CP programs in various settings throughout development. Testing by the community was readily available, but the sometimes sketchy feedback led to the production of consistent behavior for tester and developer alike in order to progress. One obstacle that was met was that the rule-based approach to combat behavior resulted in high complexity. The architecture of the RTS AI CP is designed so that a strategy emerges from separate, situation-aware agents. Both the actions of the enemy and the properties of the environment are taken into account. 
The overall approach is to strengthen the AI CP through better economic and military decisions. Micromanagement and frequent updates for moving units is an important part of improving military decisions in this architecture. This thesis goes into the topics of RTS strategies, tactics, economic decisions and military decisions and how they may be made by AI in an informed way. Direct attempts at calculation and prediction rather than having the AI learn from experience resulted in behavior that was superior to most AI CPs and many human players without a learning period. However, having support for all of the game types for TA Spring resulted in extra development time. Keywords: computer science information technology RTS real time strategy game artificial intelligence architecture emergent strategy emergence humanlike behavior situation situational aware awareness combat behavior micro micromanagement pathfinder pathfinding path planning replanning influence maps threat DPS iterative algorithm algorithms defense placement terrain analysis attack defense military control artificial intelligence controlled player computer opponent game games gaming environmental awareness autonomous action actions agent hierarchy KAI TA Spring Total Annihilation
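One of the techniques the abstract leans on, influence maps, can be sketched briefly: each unit projects decaying influence onto nearby grid cells, and the AI reads threat and territorial control from the summed field. The grid size, decay factor, and unit strengths below are illustrative assumptions, not values from the thesis.

```python
def influence_map(width, height, units, decay=0.5):
    """Sum each unit's influence over the grid.

    units: list of (x, y, strength) tuples; friendly units use positive
    strength, enemies negative. Influence falls off geometrically with
    Chebyshev distance from the unit.
    """
    grid = [[0.0] * width for _ in range(height)]
    for ux, uy, strength in units:
        for y in range(height):
            for x in range(width):
                d = max(abs(x - ux), abs(y - uy))
                grid[y][x] += strength * (decay ** d)
    return grid

# Friendly tank (positive) vs enemy tank (negative) on a 5x5 map:
# the sign of each cell tells the AI which side controls that area.
field = influence_map(5, 5, [(0, 0, 4.0), (4, 4, -4.0)])
```

Combat micromanagement can then, for instance, steer wounded units toward cells with high friendly influence and target attacks where enemy influence is weakest.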

APA, Harvard, Vancouver, ISO, and other styles
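The thesis abstract above names influence maps and an A* variant as core techniques of the KAI player. As a minimal sketch of how the two can combine (illustrative Python only; the function names, grid encoding, and parameters here are invented, not taken from the thesis), a threat map computed from enemy positions can be folded into the A* step cost so paths bend around dangerous areas:

```python
import heapq
import itertools

def influence_map(width, height, units, decay=0.7, radius=4):
    """Spread each unit's strength over nearby cells, decaying with distance."""
    grid = [[0.0] * width for _ in range(height)]
    for ux, uy, strength in units:
        for y in range(max(0, uy - radius), min(height, uy + radius + 1)):
            for x in range(max(0, ux - radius), min(width, ux + radius + 1)):
                d = abs(x - ux) + abs(y - uy)  # Manhattan distance
                if d <= radius:
                    grid[y][x] += strength * decay ** d
    return grid

def a_star(start, goal, width, height, threat, threat_weight=5.0):
    """A* on a grid where each step costs 1 plus a penalty for entering
    threatened cells, so routes avoid enemy influence when possible."""
    def h(p):  # Manhattan heuristic; admissible because threat only adds cost
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = itertools.count()  # tie-breaker so the heap never compares nodes
    frontier = [(h(start), 0.0, next(tie), start, None)]
    parent, best_g = {}, {start: 0.0}
    while frontier:
        _, g, _, cur, prev = heapq.heappop(frontier)
        if cur in parent:
            continue  # already expanded via a cheaper route
        parent[cur] = prev
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height:
                ng = g + 1.0 + threat_weight * threat[nxt[1]][nxt[0]]
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, next(tie), nxt, cur))
    return None  # goal unreachable
```

Raising `threat_weight` trades path length for safety, one simple way to obtain the situation-aware movement the abstract describes.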
More sources

Books on the topic "Artificial intelligence in games"

1

Millington, Ian, and John Funge. Artificial intelligence for games. 2nd ed. Burlington, MA: Elsevier Morgan Kaufmann, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yannakakis, Georgios N., and Julian Togelius. Artificial Intelligence and Games. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63519-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Artificial intelligence and computer games. London: Century Communications, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

González Calero, Pedro A. Artificial Intelligence for Computer Games. New York, NY: Springer Science+Business Media, LLC, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

González-Calero, Pedro Antonio, and Marco Antonio Gómez-Martín, eds. Artificial Intelligence for Computer Games. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-8188-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Baba, Norio. Computational Intelligence in Games. Heidelberg: Physica-Verlag HD, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Schaeffer, Jonathan, and H. Jaap van den Herik, eds. Chips challenging champions: Games, computers and artificial intelligence. Amsterdam: Elsevier, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Danielson, Peter. Artificial morality: Virtuous robots for virtual games. London: Routledge, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ahlquist, John. Game development essentials: Game artificial intelligence. Clifton Park, NY: Thomson/Delmar Learning, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Danielson, Peter. Artificial morality: Virtuous robots for virtual games. London: Routledge, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Artificial intelligence in games"

1

Satoi, Daiki, and Yuta Mizuno. "Meta Artificial Intelligence and Artificial Intelligence Director." In Encyclopedia of Computer Graphics and Games, 1–8. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-08234-9_309-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bouzy, Bruno, Tristan Cazenave, Vincent Corruble, and Olivier Teytaud. "Artificial Intelligence for Games." In A Guided Tour of Artificial Intelligence Research, 313–37. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-06167-8_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Joshi, Abhisht, Moolchand Sharma, and Jafar Al Zubi. "Artificial Intelligence in Games." In Deep Learning in Gaming and Animations, 103–22. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003231530-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gravot, Fabien. "Navigation Artificial Intelligence." In Encyclopedia of Computer Graphics and Games, 1–10. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-08234-9_310-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yannakakis, Georgios N., and Julian Togelius. "Playing Games." In Artificial Intelligence and Games, 91–150. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-63519-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Halpern, Jared. "Artificial Intelligence and Slingshots." In Developing 2D Games with Unity, 277–372. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3772-4_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hogg, Chad, Stephen Lee-Urban, Héctor Muñoz-Avila, Bryan Auslander, and Megan Smith. "Game AI for Domination Games." In Artificial Intelligence for Computer Games, 83–101. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-8188-2_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hildmann, Hanno, and Benjamin Hirsch. "Overview of Artificial Intelligence." In Encyclopedia of Computer Graphics and Games, 1–9. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-08234-9_228-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Finzi, Alberto, and Thomas Lukasiewicz. "Relational Markov Games." In Logics in Artificial Intelligence, 320–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30227-8_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hildmann, Hanno. "Computer Games and Artificial Intelligence." In Encyclopedia of Computer Graphics and Games, 1–11. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-08234-9_234-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Artificial intelligence in games"

1

Monteiro, Juarez, Roger Granada, Rafael C. Pinto, and Rodrigo C. Barros. "Beating Bomberman with Artificial Intelligence." In XV Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/eniac.2018.4430.

Full text
Abstract:
Artificial Intelligence (AI) seeks to bring intelligent behavior to machines by using specific techniques. These techniques can be employed in order to solve tasks, such as planning paths or controlling intelligent agents. Some tasks that use AI techniques are not trivially testable, since they can involve a high number of variables, depending on their complexity. As digital games can provide a wide range of variables, they become an efficient and economical means for testing artificial intelligence techniques. In this paper, we propose a combination of a behavior tree and a pathfinding algorithm to solve a maze-based problem using the digital game Bomberman of the Nintendo Entertainment System (NES) platform. We perform an analysis of the AI techniques in order to verify the feasibility of future experiments in similar complex environments. Our experiments show that our intelligent agent can be successfully implemented using the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
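The behavior-tree half of the approach described in this abstract can be sketched as follows (a toy Python skeleton with invented conditions and actions for a Bomberman-like agent; it is not the authors' implementation):

```python
# Minimal behavior-tree skeleton: selectors try children until one succeeds,
# sequences require every child to succeed.
SUCCESS, FAILURE = "success", "failure"

class Selector:
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

class Sequence:
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Leaf:
    def __init__(self, fn):
        self.fn = fn
    def tick(self, state):
        return self.fn(state)

# Hypothetical conditions and actions operating on a shared state dict.
def in_blast_radius(s): return SUCCESS if s.get("in_danger") else FAILURE
def flee(s): s["action"] = "flee"; return SUCCESS
def enemy_nearby(s): return SUCCESS if s.get("enemy_close") else FAILURE
def drop_bomb(s): s["action"] = "drop_bomb"; return SUCCESS
def walk_to_target(s): s["action"] = "walk"; return SUCCESS

tree = Selector(
    Sequence(Leaf(in_blast_radius), Leaf(flee)),    # safety first
    Sequence(Leaf(enemy_nearby), Leaf(drop_bomb)),  # then attack
    Leaf(walk_to_target),                           # otherwise keep moving
)
```

Priority falls out of child order: the safety branch is tried before attacking, the usual way behavior trees encode "flee if endangered, else fight, else explore."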
2

Lecchi, Stefano. "Artificial intelligence in racing games." In 2009 IEEE Symposium on Computational Intelligence and Games (CIG). IEEE, 2009. http://dx.doi.org/10.1109/cig.2009.5286512.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Simon, Sunil, and Dominik Wojtczak. "Synchronisation Games on Hypergraphs." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/57.

Full text
Abstract:
We study a strategic game model on hypergraphs where players, modelled by nodes, try to coordinate or anti-coordinate their choices within certain groups of players, modelled by hyperedges. We show this model to be a strict generalisation of symmetric additively separable hedonic games to the hypergraph setting and that such games always have a pure Nash equilibrium, which can be computed in pseudo-polynomial time. Moreover, in the pure coordination setting, we show that a strong equilibrium exists and can be computed in polynomial time when the game possesses a certain acyclic structure.
APA, Harvard, Vancouver, ISO, and other styles
4

Harrenstein, Paul, Paolo Turrini, and Michael Wooldridge. "Characterising the Manipulability of Boolean Games." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/150.

Full text
Abstract:
The existence of (Nash) equilibria with undesirable properties is a well-known problem in game theory, which has motivated much research directed at the possibility of mechanisms for modifying games in order to eliminate undesirable equilibria, or induce desirable ones. Taxation schemes are a well-known mechanism for modifying games in this way. In the multi-agent systems community, taxation mechanisms for incentive engineering have been studied in the context of Boolean games with costs. These are games in which each player assigns truth-values to a set of propositional variables she uniquely controls in pursuit of satisfying an individual propositional goal formula; different choices for the player are also associated with different costs. In such a game, each player prefers primarily to see the satisfaction of their goal, and secondarily, to minimise the cost of their choice, thereby giving rise to lexicographic preferences over goal-satisfaction and costs. Within this setting, where taxes operate on costs only, however, it may well happen that the elimination or introduction of equilibria can only be achieved at the cost of simultaneously introducing less desirable equilibria or eliminating more attractive ones. Although this framework has been studied extensively, the problem of precisely characterising the equilibria that may be induced or eliminated has remained open. In this paper we close this problem, giving a complete characterisation of those mechanisms that can induce a set of outcomes of the game to be exactly the set of Nash Equilibrium outcomes.
APA, Harvard, Vancouver, ISO, and other styles
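To make the Boolean-games setting concrete (a hypothetical two-player toy in Python, not the paper's framework; the goals, costs, and names are invented), lexicographic goal-then-cost preferences and pure Nash equilibria can be enumerated directly:

```python
from itertools import product

# Tiny Boolean game with costs: player 0 controls p, player 1 controls q.
goals = [lambda p, q: p == q,          # player 0 wants agreement
         lambda p, q: not (p and q)]   # player 1 wants "not both true"
costs = [{False: 0.0, True: 1.0},      # cost of each truth-value choice
         {False: 0.0, True: 2.0}]

def utility(i, p, q):
    """Lexicographic preference: goal satisfaction first, then lower cost."""
    sat = goals[i](p, q)
    cost = costs[i][(p, q)[i]]
    return (1 if sat else 0, -cost)  # Python tuples compare lexicographically

def pure_nash_equilibria():
    """Enumerate all profiles and keep those with no profitable deviation."""
    eqs = []
    for p, q in product([False, True], repeat=2):
        dev0 = utility(0, not p, q) > utility(0, p, q)  # player 0 flips p?
        dev1 = utility(1, p, not q) > utility(1, p, q)  # player 1 flips q?
        if not dev0 and not dev1:
            eqs.append((p, q))
    return eqs
```

In this toy game only the all-false profile is stable; taxing a choice changes the cost component of the utility tuple and can thereby eliminate or induce equilibria, which is the kind of mechanism the paper characterises.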
5

Chaperot, Benoit, and Colin Fyfe. "Improving Artificial Intelligence In a Motocross Game." In 2006 IEEE Symposium on Computational Intelligence and Games. IEEE, 2006. http://dx.doi.org/10.1109/cig.2006.311698.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Maubert, Bastien, Sophie Pinchinat, Francois Schwarzentruber, and Silvia Stranieri. "Concurrent Games in Dynamic Epistemic Logic." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/260.

Full text
Abstract:
Action models of Dynamic Epistemic Logic (DEL) represent precisely how actions are perceived by agents. DEL has recently been used to define infinite multi-player games, and it was shown that they can be solved in some cases. However, the dynamics being defined by the classic DEL update product for individual actions, only turn-based games have been considered so far. In this work we define a concurrent DEL product, propose a mechanism to resolve conflicts between actions, and define concurrent DEL games. As in the turn-based case, the obtained concurrent infinite game arenas can be finitely represented when all actions are public, or all are propositional. Thus we identify cases where the strategic epistemic logic ATL*K can be model checked on such games.
APA, Harvard, Vancouver, ISO, and other styles
7

Siljebråt, Henrik, Caspar Addyman, and Alan Pickering. "Towards human-like artificial intelligence using StarCraft 2." In FDG '18: Foundations of Digital Games 2018. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3235765.3235811.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Liu, Tianyu, Zijie Zheng, Hongchang Li, Kaigui Bian, and Lingyang Song. "Playing Card-Based RTS Games with Deep Reinforcement Learning." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/631.

Full text
Abstract:
Game AI is of great importance as games are simulations of reality. Recent research on game AI has shown much progress in various kinds of games, such as console games, board games and MOBA games. However, the exploration of RTS games remains a challenge because of their huge state space, imperfect information, sparse rewards and various strategies. Besides, the typical card-based RTS games have complex card features and still lack solutions. We present a deep model SEAT (selection-attention) to play card-based RTS games. The SEAT model includes two parts, a selection part for card choice and an attention part for card usage, and it learns from scratch via deep reinforcement learning. Comprehensive experiments are performed on Clash Royale, a popular mobile card-based RTS game. Empirical results show that the SEAT model agent achieves a high winning rate against rule-based and decision-tree-based agents.
APA, Harvard, Vancouver, ISO, and other styles
9

Cermak, Jiri, Branislav Bošanský, and Viliam Lisý. "An Algorithm for Constructing and Solving Imperfect Recall Abstractions of Large Extensive-Form Games." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/130.

Full text
Abstract:
We solve large two-player zero-sum extensive-form games with perfect recall. We propose a new algorithm based on fictitious play that significantly reduces memory requirements for storing average strategies. The key feature is exploiting imperfect recall abstractions while preserving the convergence rate and guarantees of fictitious play applied directly to the perfect recall game. The algorithm creates a coarse imperfect recall abstraction of the perfect recall game and automatically refines its information set structure only where the imperfect recall might cause problems. Experimental evaluation shows that our novel algorithm is able to solve a simplified poker game with 7×10^5 information sets using an abstracted game with only 1.8% of information sets of the original game. Additional experiments on poker and randomly generated games suggest that the relative size of the abstraction decreases as the size of the solved games increases.
APA, Harvard, Vancouver, ISO, and other styles
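The fictitious-play procedure this paper builds on can be sketched in its basic matrix-game form (plain Python; the paper's abstraction and refinement machinery for extensive-form games is not shown):

```python
import random

def fictitious_play(payoff, rounds=20000, seed=0):
    """Fictitious play for a two-player zero-sum game given as the row
    player's payoff matrix. Each round, both players best-respond to the
    empirical frequency of the opponent's past moves; in zero-sum games
    the empirical mixtures converge to a Nash equilibrium."""
    rng = random.Random(seed)
    m, n = len(payoff), len(payoff[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_counts[rng.randrange(m)] += 1  # arbitrary initial moves
    col_counts[rng.randrange(n)] += 1
    for _ in range(rounds):
        # Row maximises expected payoff against column's empirical mixture.
        row = max(range(m),
                  key=lambda i: sum(payoff[i][j] * col_counts[j] for j in range(n)))
        # Column minimises it against row's empirical mixture.
        col = min(range(n),
                  key=lambda j: sum(payoff[i][j] * row_counts[i] for i in range(m)))
        row_counts[row] += 1
        col_counts[col] += 1
    return ([c / sum(row_counts) for c in row_counts],
            [c / sum(col_counts) for c in col_counts])
```

On matching pennies, `fictitious_play([[1, -1], [-1, 1]])` drives both empirical mixtures toward the (1/2, 1/2) equilibrium; the paper's contribution is making the averaged strategies memory-efficient for large extensive-form games via imperfect-recall abstraction.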
10

Ben Amor, Nahla, Helene Fargier, and Régis Sabbadin. "Equilibria in Ordinal Games: A Framework based on Possibility Theory." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/16.

Full text
Abstract:
The present paper proposes the first definition of mixed equilibrium for ordinal games. This definition naturally extends possibilistic (single agent) decision theory. This allows us to provide a unifying view of single and multi-agent qualitative decision theory. Our first contribution is to show that ordinal games always admit a possibilistic mixed equilibrium, which can be seen as a qualitative counterpart to mixed (probabilistic) equilibrium. Then, we show that a possibilistic mixed equilibrium can be computed in polynomial time (wrt the size of the game), which contrasts with pure Nash or mixed probabilistic equilibrium computation in cardinal game theory. The definition we propose is thus operational in two ways: (i) it tackles the case when no pure Nash equilibrium exists in an ordinal game; and (ii) it allows an efficient computation of a mixed equilibrium.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Artificial intelligence in games"

1

Rodin, Ervin Y. Artificial Intelligence Methods in Pursuit Evasion Differential Games. Fort Belvoir, VA: Defense Technical Information Center, July 1990. http://dx.doi.org/10.21236/ada227366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rodin, Ervin Y. Artificial Intelligence Methodologies in Flight Related Differential Game, Control and Optimization Problems. Fort Belvoir, VA: Defense Technical Information Center, January 1993. http://dx.doi.org/10.21236/ada262405.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Novak, Gordon S., Jr., Robert F. Simmons, Bruce W. Porter, Vipin Kumar, and Robert L. Causey. Artificial Intelligence Project. Fort Belvoir, VA: Defense Technical Information Center, January 1990. http://dx.doi.org/10.21236/ada230793.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lesser, Victor R., Paul Cohen, and Wendy Lehnert. Center for Artificial Intelligence. Fort Belvoir, VA: Defense Technical Information Center, March 1992. http://dx.doi.org/10.21236/ada282272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Boros, E., P. L. Hammer, and F. S. Roberts. Optimization and Artificial Intelligence. Fort Belvoir, VA: Defense Technical Information Center, July 1996. http://dx.doi.org/10.21236/ada311365.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lesser, Victor R., Paul Cohen, and Wendy Lehnert. Center for Artificial Intelligence. Fort Belvoir, VA: Defense Technical Information Center, March 1992. http://dx.doi.org/10.21236/ada275812.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gowens, J. W. Applied Artificial Intelligence Seminar. Fort Belvoir, VA: Defense Technical Information Center, July 1989. http://dx.doi.org/10.21236/ada268571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Stevenson, Charles A. Artificial Intelligence and Expert Systems. Fort Belvoir, VA: Defense Technical Information Center, March 1986. http://dx.doi.org/10.21236/ada436516.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Aghion, Philippe, Benjamin Jones, and Charles Jones. Artificial Intelligence and Economic Growth. Cambridge, MA: National Bureau of Economic Research, October 2017. http://dx.doi.org/10.3386/w23928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Acemoglu, Daron, and Pascual Restrepo. Artificial Intelligence, Automation and Work. Cambridge, MA: National Bureau of Economic Research, January 2018. http://dx.doi.org/10.3386/w24196.

Full text
APA, Harvard, Vancouver, ISO, and other styles
