Academic literature on the topic 'Atari Video Games'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Atari Video Games.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Atari Video Games"

1

Zhang, Ruohan, Calen Walshe, Zhuode Liu, et al. "Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 6811–20. http://dx.doi.org/10.1609/aaai.v34i04.6161.

Full text
Abstract:
Large-scale public datasets have been shown to benefit research in multiple areas of modern artificial intelligence. For decision-making research that requires human data, high-quality datasets serve as important benchmarks to facilitate the development of new methods by providing a common reproducible standard. Many human decision-making tasks require visual attention to obtain high levels of performance. Therefore, measuring eye movements can provide a rich source of information about the strategies that humans use to solve decision-making tasks. Here, we provide a large-scale, high-quality dataset of human actions with simultaneously recorded eye movements while humans play Atari video games. The dataset consists of 117 hours of gameplay data from a diverse set of 20 games, with 8 million action demonstrations and 328 million gaze samples. We introduce a novel form of gameplay, in which the human plays in a semi-frame-by-frame manner. This leads to near-optimal game decisions and game scores that are comparable to or better than known human records. We demonstrate the usefulness of the dataset through two simple applications: predicting human gaze and imitating human demonstrated actions. The quality of the data leads to promising results in both tasks. Moreover, using a learned human gaze model to inform imitation learning leads to a 115% increase in game performance. We interpret these results as highlighting the importance of incorporating human visual attention in models of decision making and demonstrating the value of the current dataset to the research community. We hope that the scale and quality of this dataset can provide more opportunities to researchers in the areas of visual attention, imitation learning, and reinforcement learning.
APA, Harvard, Vancouver, ISO, and other styles
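The gaze-informed imitation learning this abstract describes amounts to weighting a behavioral-cloning loss by gaze evidence. The sketch below is a minimal illustration under assumed names (`gaze_weighted_bc_loss`, the per-frame saliency weighting), not the Atari-HEAD authors' code:

```python
import numpy as np

def gaze_weighted_bc_loss(logits, human_actions, gaze_saliency):
    """Behavioral-cloning cross-entropy, re-weighted per frame by how
    much gaze evidence supports that frame. Hypothetical sketch, not
    the Atari-HEAD reference implementation."""
    # Softmax over action logits, one row per recorded frame.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Negative log-likelihood of the human's demonstrated action.
    nll = -np.log(probs[np.arange(len(human_actions)), human_actions])
    # Frames with stronger gaze saliency contribute more to the loss.
    weights = gaze_saliency / gaze_saliency.sum()
    return float((weights * nll).sum())

logits = np.array([[2.0, 0.1, 0.1],
                   [0.1, 2.0, 0.1]])   # two frames, three actions
actions = np.array([0, 1])             # demonstrated actions
saliency = np.array([1.0, 3.0])        # predicted gaze strength per frame
loss = gaze_weighted_bc_loss(logits, actions, saliency)
```

In this toy setup, frames the gaze model marks as salient dominate the training signal, which is one plausible way a learned gaze model can "inform" imitation learning.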
2

Nyitray, Kristen J. "The Alert Collector: Game On to Game After: Sources for Video Game History." Reference & User Services Quarterly 59, no. 1 (2019): 7. http://dx.doi.org/10.5860/rusq.59.1.7219.

Full text
Abstract:
Kristen Nyitray began her immersion in video games with an Atari 2600, a ColecoVision console, and games checked out from her local public library. Later in life, she had the opportunity to start building a video game studies collection in her professional career as an archivist and special collections librarian. While that project has since ended, you get the benefit of her expansive knowledge of video game sources in “Game On to Game After: Sources for Video Game History.” There is much in this column to help librarians wanting to support research in this important entertainment form. Ready player one?—Editor
APA, Harvard, Vancouver, ISO, and other styles
3

Widendaële, Arnaud. "Michael Z. Newman, Atari Age. The Emergence of Video Games in America." 1895, no. 86 (December 1, 2018): 193–96. http://dx.doi.org/10.4000/1895.7192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Köster, Raphael, and Martin J. Chadwick. "What can classic Atari video games tell us about the human brain?" Neuron 109, no. 4 (2021): 568–70. http://dx.doi.org/10.1016/j.neuron.2021.01.021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Williams, Deborah A., and Jack O. Jenkins. "Role of Competitive Anxiety in the Performance of Black College Basketball Players." Perceptual and Motor Skills 63, no. 2 (1986): 847–53. http://dx.doi.org/10.2466/pms.1986.63.2.847.

Full text
Abstract:
Scores on an Atari video game, Martens's Competitive Anxiety Scale and Life Experience Survey, and coaches' ratings of actual performance over 10 home basketball games were obtained for 8 black players and 7 black controls. Several significant ( p ≤ .01) correlations of anxiety and performance with stress were low to moderate and encourage a full-scale study with a larger sample and standardized measures.
APA, Harvard, Vancouver, ISO, and other styles
6

Lin, Jhang, Lee, Lin, and Young. "Using a Reinforcement Q-Learning-Based Deep Neural Network for Playing Video Games." Electronics 8, no. 10 (2019): 1128. http://dx.doi.org/10.3390/electronics8101128.

Full text
Abstract:
This study proposed a reinforcement Q-learning-based deep neural network (RQDNN) that combined a deep principal component analysis network (DPCANet) and Q-learning to determine a playing strategy for video games. Video game images were used as the inputs. The proposed DPCANet was used to initialize the parameters of the convolution kernel and capture the image features automatically. It performs as a deep neural network and has lower computational complexity than traditional convolutional neural networks. A reinforcement Q-learning method was used to implement a strategy for playing the video game. Both Flappy Bird and Atari Breakout games were implemented to verify the proposed method in this study. Experimental results showed that the scores of our proposed RQDNN were better than those of human players and other methods. In addition, the training time of the proposed RQDNN was also far less than that of other methods.
APA, Harvard, Vancouver, ISO, and other styles
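The Q-learning rule this entry's RQDNN builds on fits in a few lines. Below is a tabular sketch for clarity (the paper approximates Q with a deep network instead of a table; all names are illustrative):

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One step of the classic Q-learning rule:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Tabular for clarity; the RQDNN in the abstract above replaces the
    table with a deep network."""
    best_next = max(Q[s_next].values()) if Q.get(s_next) else 0.0
    td_target = r + gamma * best_next
    Q[s][a] += alpha * (td_target - Q[s][a])
    return Q[s][a]

# Two states, two actions; the agent moves right and collects a reward.
Q = {"s0": {"left": 0.0, "right": 0.0},
     "s1": {"left": 1.0, "right": 0.0}}
updated = q_learning_update(Q, "s0", "right", 1.0, "s1")
```

The update nudges `Q["s0"]["right"]` toward the reward plus the discounted value of the best action available in the next state.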
7

COLLINS, KAREN. "In the Loop: Creativity and Constraint in 8-bit Video Game Audio." Twentieth-Century Music 4, no. 2 (2007): 209–27. http://dx.doi.org/10.1017/s1478572208000510.

Full text
Abstract:
This article explores the sound capabilities of video game consoles of the 8-bit era (c.1975–85) in order to discuss the impact that technological constraints had on shaping aesthetic decisions in the composition of music for the early generation of games. Comparing examples from the Commodore 64 (C64), the Nintendo Entertainment System (NES), the Atari VCS, and the arcade consoles, I examine various approaches and responses (in particular the use of looping) to similar technological problems, and illustrate how these responses are as much a decision made by the composer as a matter of technical necessity.
APA, Harvard, Vancouver, ISO, and other styles
8

Lawlor, Shannon. "Book Review: Atari Age: The Emergence of Video Games in America, by Michael Z. Newman." Television & New Media 20, no. 3 (2018): 311–13. http://dx.doi.org/10.1177/1527476418784657.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hu, Yueyue, Shiliang Sun, Xin Xu, and Jing Zhao. "Multi-View Deep Attention Network for Reinforcement Learning (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (2020): 13811–12. http://dx.doi.org/10.1609/aaai.v34i10.7177.

Full text
Abstract:
The representation approximated by a single deep network is usually limited for reinforcement learning agents. We propose a novel multi-view deep attention network (MvDAN), which introduces multi-view representation learning into the reinforcement learning task for the first time. The proposed model approximates a set of strategies from multiple representations and combines these strategies based on attention mechanisms to provide a comprehensive strategy for a single agent. Experimental results on eight Atari video games show that the MvDAN achieves competitive performance against single-view reinforcement learning methods.
APA, Harvard, Vancouver, ISO, and other styles
10

Barr, Matthew. "The Force Is Strong with This One (but Not That One): What Makes a Successful Star Wars Video Game Adaptation?" Arts 9, no. 4 (2020): 131. http://dx.doi.org/10.3390/arts9040131.

Full text
Abstract:
The Star Wars films have probably spawned more video game adaptations than any other franchise. From the 1982 release of The Empire Strikes Back on the Atari 2600 to 2019’s Jedi: Fallen Order, around one hundred officially licensed Star Wars games have been published to date. Inevitably, the quality of these adaptations has varied, ranging from timeless classics such as Star Wars: Knights of the Old Republic, to such lamentable cash grabs as the Attack of the Clones movie tie-in. But what makes certain ludic adaptations of George Lucas’ space opera more successful than others? To answer this question, the critical response to some of the best-reviewed Star Wars games is analysed here, revealing a number of potential factors to consider, including the audio-visual quality of the games, the attendant story, and aspects of the gameplay. The tension between what constitutes a good game and what makes for a good Star Wars adaptation is also discussed. It is concluded that, while many well-received adaptations share certain characteristics—such as John Williams’ iconic score, a high degree of visual fidelity, and certain mythic story elements—the very best Star Wars games are those which advance the state of the art in video games, while simultaneously evoking something of Lucas’ cinematic saga.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Atari Video Games"

1

Avverahalli, Ravi Darshan. "Identifying and Prioritizing Critical Information in Military IoT: Video Game Demonstration." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/104070.

Full text
Abstract:
Current communication and network systems are not built for delay-sensitive applications. The most obvious fact is that the communication capacity is only achievable in theory with infinitely long codes, which means infinitely long delays. One remedy for this is to use shorter codes. Conceptually, there is a deeper reason for the difficulties in such solutions: in Shannon's original 1948 paper, he started out by stating that the "semantic aspects" of information are "irrelevant" to communications. Hence, in Shannon's communication system, as well as every network built after him, we put all information into a uniform bit-stream, regardless of what meanings the bits carry, and we transmit these bits over the network as a single type of commodity. Consequently, the network system can only provide a uniform level of error protection and latency control to all these bits. We argue that such a single measure of latency, or Age of Information (AoI), is insufficient for military Internet of Things (IoT) applications that inherently connect the communication network with a cyber-physical system. For example, a self-driving military vehicle might send the controller a front-view image. Clearly, not everything in the image is equally important for the purpose of steering the vehicle: an approaching vehicle is a much more urgent piece of information than a tree in the background. Similar examples can be seen for other military IoT devices, such as drones and sensors. In this work, we present a new approach that inherently extracts the most critical information in a military battlefield IoT scenario by using a metric called the H-Score. This ensures that the neural network concentrates only on the most important information and ignores all background information. We then carry out an extensive evaluation of this approach by testing it against various inputs, ranging from a vector of numbers to a 1000x1000 pixel image.
Next, we introduce the concept of Manual Marginalization, which helps us make independent decisions for each object in the image. We also develop a video game that captures the essence of a military battlefield scenario and test our algorithm there. Finally, we apply our approach to a simple Atari Space Invaders video game to shoot down enemies before they fire at us.

Master of Science

The IoT is transforming military and civilian environments into truly integrated cyber-physical systems (CPS), in which the dynamic physical world is tightly embedded with communication capabilities. This CPS nature of the military IoT will enable it to integrate a plethora of devices, ranging from small sensors to autonomous aerial, ground, and naval vehicles. This results in a huge amount of information being transferred between the devices. However, not all of this information is equally important. Broadly, we can categorize information into two types: critical and non-critical. For example, on a military battlefield, information about enemies is critical and information about the background trees is not so important. Therefore, it is essential to isolate the critical information from the non-critical information. This is the focus of our work. We use neural networks and some domain knowledge about the enemies to extract the critical information and use the extracted information to make control decisions. We then evaluate the performance of this approach by testing it against various kinds of synthetic data sets. Finally, we use an Atari Space Invaders video game to demonstrate how the extracted information can be used to make crucial decisions about enemies.
APA, Harvard, Vancouver, ISO, and other styles
2

Naddaf, Yavar. "Game-independent AI agents for playing Atari 2600 console games." Master's thesis, 2010. http://hdl.handle.net/10048/1081.

Full text
Abstract:
Thesis (M.Sc.)--University of Alberta, 2010. Title from PDF file main screen (viewed on July 15, 2010). A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science, Department of Computing Science, University of Alberta. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Atari Video Games"

1

Gaming: From Atari to Xbox. Rosen Educational Services, LLC, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bogost, Ian, ed. Video computer system: The Atari 2600 platform. The MIT Press, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Montfort, Nick. Racing the Beam: The Atari Video Computer System. The MIT Press, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Fox, Matt. The Video Games Guide. Boxtree Ltd, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Weiss, Brett. Classic Home Video Games, 1972-1984: A Complete Reference Guide. McFarland & Company, Inc., Publishers, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Meston, Zach. Atari Jaguar: Official Gamer's Guide. Sandwich Islands Publishing, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Classic Home Video Games, 1985-1988: A Complete Reference Guide. McFarland, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Video Game Bible, 1985-2002. Trafford Publishing, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Baer, Ralph H. Videogames: In the Beginning. Rolenta Press, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gamemaster: The Complete Video Game Guide 1995. St. Martin's Paperbacks, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Atari Video Games"

1

O’Regan, Gerard. "Atari Video Games." In The Innovation in Computing Companion. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-02619-6_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

"Video Games as Computers, Computers as Toys." In Atari Age. The MIT Press, 2017. http://dx.doi.org/10.7551/mitpress/10021.003.0008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

"Atari and Commodore." In The Golden Age of Video Games. A K Peters/CRC Press, 2011. http://dx.doi.org/10.1201/b10818-14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

"Introduction: Early Video Games and New Media History." In Atari Age. The MIT Press, 2017. http://dx.doi.org/10.7551/mitpress/10021.003.0004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

"Atari, Pong and the Jackals." In The Golden Age of Video Games. A K Peters/CRC Press, 2011. http://dx.doi.org/10.1201/b10818-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

"“Don’t Watch TV Tonight. Play It!” Early Video Games and Television." In Atari Age. The MIT Press, 2017. http://dx.doi.org/10.7551/mitpress/10021.003.0006.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Atari Video Games"

1

Kelly, Stephen, and Malcolm I. Heywood. "Multi-task learning in Atari video games with emergent tangled program graphs." In GECCO '17: Genetic and Evolutionary Computation Conference. ACM, 2017. http://dx.doi.org/10.1145/3071178.3071303.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Goyal, Prasoon, Scott Niekum, and Raymond J. Mooney. "Using Natural Language for Reward Shaping in Reinforcement Learning." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/331.

Full text
Abstract:
Recent reinforcement learning (RL) approaches have shown strong performance in complex domains, such as Atari games, but are highly sample inefficient. A common approach to reduce interaction time with the environment is to use reward shaping, which involves carefully designing reward functions that provide the agent intermediate rewards for progress towards the goal. Designing such rewards remains a challenge, though. In this work, we use natural language instructions to perform reward shaping. We propose a framework that maps free-form natural language instructions to intermediate rewards, that can seamlessly be integrated into any standard reinforcement learning algorithm. We experiment with Montezuma's Revenge from the Atari video games domain, a popular benchmark in RL. Our experiments on a diverse set of 15 tasks demonstrate that for the same number of interactions with the environment, using language-based rewards can successfully complete the task 60% more often, averaged across all tasks, compared to learning without language.
APA, Harvard, Vancouver, ISO, and other styles
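The reward-shaping scheme this abstract describes adds a language-derived intermediate reward to the environment's sparse reward. A minimal sketch, with a hand-coded `score_fn` standing in for the learned language-to-reward model (the paper learns this mapping; everything here is hypothetical):

```python
class LanguageShapedEnv:
    """Wraps an environment so each step's reward is augmented with an
    intermediate bonus from a language-grounded scoring function.
    Illustrative only; not the authors' implementation."""

    def __init__(self, env, score_fn, weight=0.5):
        self.env = env
        self.score_fn = score_fn   # maps (observation, action) -> [0, 1]
        self.weight = weight

    def step(self, action):
        obs, reward, done = self.env.step(action)
        # Shaped reward = environment reward + language-based bonus.
        shaped = reward + self.weight * self.score_fn(obs, action)
        return obs, shaped, done


class StubEnv:
    """Stand-in for a sparse-reward game such as Montezuma's Revenge."""
    def step(self, action):
        return "obs", 0.0, False   # no environment reward on this step

# Instruction: "climb the ladder" -- the stub scorer rewards matching actions.
env = LanguageShapedEnv(StubEnv(),
                        score_fn=lambda obs, a: 1.0 if a == "climb" else 0.0)
_, r_match, _ = env.step("climb")
_, r_other, _ = env.step("jump")
```

Because the shaping term is added on top of the original reward, the wrapper plugs into any standard RL algorithm without changing the agent itself, which is the integration property the abstract highlights.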
3

Kelly, Stephen, and Malcolm Heywood. "Emergent Tangled Program Graphs in Multi-Task Learning." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/740.

Full text
Abstract:
We propose a Genetic Programming (GP) framework to address high-dimensional Multi-Task Reinforcement Learning (MTRL) through emergent modularity. A bottom-up process is assumed in which multiple programs self-organize into collective decision-making entities, or teams, which then further develop into multi-team policy graphs, or Tangled Program Graphs (TPG). The framework learns to play three Atari video games simultaneously, producing a single control policy that matches or exceeds leading results from (game-specific) deep reinforcement learning in each game. More importantly, unlike the representation assumed for deep learning, TPG policies start simple and adaptively complexify through interaction with the task environment, resulting in agents that are exceedingly simple, operating in real-time without specialized hardware support such as GPUs.
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Lun, Zaynah Javed, Xian Wu, Wenbo Guo, Xinyu Xing, and Dawn Song. "BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/509.

Full text
Abstract:
Recent research has confirmed the feasibility of backdoor attacks in deep reinforcement learning (RL) systems. However, the existing attacks require the ability to arbitrarily modify an agent's observation, constraining the application scope to simple RL systems such as Atari games. In this paper, we migrate backdoor attacks to more complex RL systems involving multiple agents and explore the possibility of triggering the backdoor without directly manipulating the agent's observation. As a proof of concept, we demonstrate that an adversary agent can trigger the backdoor of the victim agent with its own action in two-player competitive RL systems. We prototype and evaluate BackdooRL in four competitive environments. The results show that when the backdoor is activated, the winning rate of the victim drops by 17% to 37% compared to when not activated. The videos are hosted at https://github.com/wanglun1996/multi_agent_rl_backdoor_videos.
APA, Harvard, Vancouver, ISO, and other styles
5

Yang, Hsuan-Kung, Po-Han Chiang, Min-Fong Hong, and Chun-Yi Lee. "Flow-based Intrinsic Curiosity Module." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/286.

Full text
Abstract:
In this paper, we focus on a prediction-based novelty estimation strategy upon the deep reinforcement learning (DRL) framework, and present a flow-based intrinsic curiosity module (FICM) to exploit the prediction errors from optical flow estimation as exploration bonuses. We propose the concept of leveraging motion features captured between consecutive observations to evaluate the novelty of observations in an environment. FICM encourages a DRL agent to explore observations with unfamiliar motion features, and requires only two consecutive frames to obtain sufficient information when estimating the novelty. We evaluate our method and compare it with a number of existing methods on multiple benchmark environments, including Atari games, Super Mario Bros., and ViZDoom. We demonstrate that FICM is favorable to tasks or environments featuring moving objects, which allow FICM to utilize the motion features between consecutive observations. We further ablatively analyze the encoding efficiency of FICM, and discuss its applicable domains comprehensively. See here for our codes and demo videos.
APA, Harvard, Vancouver, ISO, and other styles
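The prediction-error bonus at the heart of FICM can be sketched in a few lines: the intrinsic reward grows with how poorly the agent's model predicted the next observation (in FICM, the optical flow between consecutive frames). The function name and scale factor below are assumptions, not the authors' code:

```python
import numpy as np

def curiosity_bonus(predicted_next, actual_next, scale=0.1):
    """Exploration bonus proportional to the model's prediction error.
    In FICM the prediction target is optical flow between consecutive
    frames; here a plain feature vector stands in for it."""
    error = np.mean((predicted_next - actual_next) ** 2)
    return scale * float(error)

# A familiar transition is predicted well; a novel one is not.
familiar = curiosity_bonus(np.zeros(4), np.zeros(4))
novel = curiosity_bonus(np.zeros(4), np.ones(4))
```

Familiar transitions yield no bonus while surprising ones do, which is what drives the agent toward observations with unfamiliar motion features.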
6

Lin, Yen-Chen, Zhang-Wei Hong, Yuan-Hong Liao, Meng-Li Shih, Ming-Yu Liu, and Min Sun. "Tactics of Adversarial Attack on Deep Reinforcement Learning Agents." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/525.

Full text
Abstract:
We introduce two tactics, namely the strategically-timed attack and the enchanting attack, to attack reinforcement learning agents trained by deep reinforcement learning algorithms using adversarial examples. In the strategically-timed attack, the adversary aims at minimizing the agent's reward by only attacking the agent at a small subset of time steps in an episode. Limiting the attack activity to this subset helps prevent detection of the attack by the agent. We propose a novel method to determine when an adversarial example should be crafted and applied. In the enchanting attack, the adversary aims at luring the agent to a designated target state. This is achieved by combining a generative model and a planning algorithm: while the generative model predicts the future states, the planning algorithm generates a preferred sequence of actions for luring the agent. A sequence of adversarial examples is then crafted to lure the agent to take the preferred sequence of actions. We apply the proposed tactics to the agents trained by the state-of-the-art deep reinforcement learning algorithm including DQN and A3C. In 5 Atari games, our strategically-timed attack reduces as much reward as the uniform attack (i.e., attacking at every time step) does by attacking the agent 4 times less often. Our enchanting attack lures the agent toward designated target states with a more than 70% success rate. Example videos are available at http://yclin.me/adversarial_attack_RL/.
APA, Harvard, Vancouver, ISO, and other styles