Academic literature on the topic 'Turing machines. Computers. Artificial intelligence'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Turing machines. Computers. Artificial intelligence.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Turing machines. Computers. Artificial intelligence"

1

Grosz, Barbara. "What Question Would Turing Pose Today?" AI Magazine 33, no. 4 (2012): 73. http://dx.doi.org/10.1609/aimag.v33i4.2441.

Full text
Abstract:
In 1950, when Turing proposed to replace the question "Can machines think?" with the question "Are there imaginable digital computers which would do well in the imitation game?" computer science was not yet a field of study, Shannon’s theory of information had just begun to change the way people thought about communication, and psychology was only starting to look beyond behaviorism. It is stunning that so many predictions in Turing’s 1950 Mind paper were right. In the decades since that paper appeared, with its inspiring challenges, research in computer science, neuroscience, and the behavior
APA, Harvard, Vancouver, ISO, and other styles
2

ADAM, RUTH, URI HERSHBERG, YAACOV SCHUL, and SORIN SOLOMON. "TESTING THE TURING TEST — DO MEN PASS IT?" International Journal of Modern Physics C 15, no. 08 (2004): 1041–47. http://dx.doi.org/10.1142/s0129183104006522.

Full text
Abstract:
We are fascinated by the idea of giving life to the inanimate. The fields of Artificial Life and Artificial Intelligence (AI) attempt to use a scientific approach to pursue this desire. The first steps on this approach hark back to Turing and his suggestion of an imitation game as an alternative answer to the question "can machines think?". To test his hypothesis, Turing formulated the Turing test to detect human behavior in computers. But how do humans pass such a test? What would you say if you would learn that they do not pass it well? What would it mean for our understanding of human behav
3

Kulikov, Vadim. "Preferential Engagement and What Can We Learn from Online Chess?" Minds and Machines 30, no. 4 (2020): 617–36. http://dx.doi.org/10.1007/s11023-020-09550-7.

Full text
Abstract:
An online game of chess against a human opponent appears to be indistinguishable from a game against a machine: both happen on the screen. Yet, people prefer to play chess against other people despite the fact that machines surpass people in skill. When the philosophers of the 1970s and 1980s argued that computers will never surpass us in chess, perhaps their intuitions were rather saying “Computers will never be favored as opponents”? In this paper we analyse, through the introduced concepts of psychological affordances and psychological interplay, what are the mechanisms that make a hum
4

Weng, Juyang. "Autonomous Programming for General Purposes: Theory." International Journal of Humanoid Robotics 17, no. 04 (2020): 2050016. http://dx.doi.org/10.1142/s0219843620500164.

Full text
Abstract:
The universal Turing Machine (TM) is a model for Von Neumann computers — general-purpose computers. A human brain, linked with its biological body, can inside-skull-autonomously learn a universal TM so that he acts as a general-purpose computer and writes a computer program for any practical purposes. It is unknown whether a robot can accomplish the same. This theoretical work shows how the Developmental Network (DN), linked with its robot body, can accomplish this. Unlike a traditional TM, the TM learned by DN is a super TM — Grounded, Emergent, Natural, Incremental, Skulled, Attentive, Motiv
5

ITO, AKIRA, KATSUSHI INOUE, ITSUO TAKANAMI, and YUE WANG. "THE EFFECT OF INKDOTS FOR TWO-DIMENSIONAL AUTOMATA." International Journal of Pattern Recognition and Artificial Intelligence 09, no. 05 (1995): 777–96. http://dx.doi.org/10.1142/s0218001495000328.

Full text
Abstract:
Recently, related to the open problem of whether deterministic and nondeterministic space (especially lower-level) complexity classes are separated, the inkdot Turing machine was introduced. An inkdot machine is a conventional Turing machine capable of dropping an inkdot on a given input tape for a landmark, but not to pick it up nor further erase it. In this paper, we introduce a finite state version of the inkdot machine as a weak recognizer of the properties of digital pictures, rather than a Turing machine supplied with a one-dimensional working tape. We first investigate the sufficient sp
6

Harel, David. "Niépce–Bell or Turing: how to test odour reproduction." Journal of The Royal Society Interface 13, no. 125 (2016): 20160587. http://dx.doi.org/10.1098/rsif.2016.0587.

Full text
Abstract:
Decades before the existence of anything resembling an artificial intelligence system, Alan Turing raised the question of how to test whether machines can think, or, in modern terminology, whether a computer claimed to exhibit intelligence indeed does so. This paper raises the analogous issue for olfaction: how to test the validity of a system claimed to reproduce arbitrary odours artificially, in a way recognizable to humans. Although odour reproduction systems are still far from being viable, the question of how to test candidates thereof is claimed to be interesting and non-trivial, and a n
7

Welch, Philip. "Characterisations of variant transfinite computational models: Infinite time Turing, ordinal time Turing, and Blum–Shub–Smale machines." Computability 10, no. 2 (2021): 159–80. http://dx.doi.org/10.3233/com-200301.

Full text
Abstract:
We consider how changes in transfinite machine architecture can sometimes alter substantially their capabilities. We approach the subject by answering three open problems touching on: firstly differing halting time considerations for machines with multiple as opposed to single heads, secondly space requirements, and lastly limit rules. We: 1) use admissibility theory, Σ₂-codes and Π₃-reflection properties in the constructible hierarchy to classify the halting times of ITTMs with multiple independent heads; the same for Ordinal Turing Machines which have On-length tapes; 2) determine which
8

ITO, AKIRA, KATSUSHI INOUE, ITSUO TAKANAMI, and YASUYOSHI INAGAKI. "CONSTANT LEAF-SIZE HIERARCHY OF TWO-DIMENSIONAL ALTERNATING TURING MACHINES." International Journal of Pattern Recognition and Artificial Intelligence 08, no. 02 (1994): 509–24. http://dx.doi.org/10.1142/s0218001494000267.

Full text
Abstract:
“Leaf-size” (or “branching”) is the minimum number of leaves of some accepting computation trees of alternating devices. For example, one leaf corresponds to nondeterministic computation. In this paper, we investigate the effect of constant leaves of two-dimensional alternating Turing machines, and show the following facts: (1) For any function L(m, n), k leaf- and L(m, n) space-bounded two-dimensional alternating Turing machines which have only universal states are equivalent to the same space bounded deterministic Turing machines for any integer k≥1, where m (n) is the number of rows (column
9

Cook, S. D. Noam. "Turing, Searle, and the Wizard of Oz." Techné: Research in Philosophy and Technology 14, no. 2 (2010): 88–102. http://dx.doi.org/10.5840/techne201014212.

Full text
Abstract:
Since the middle of the 20th century there has been a significant debate about the attribution of capacities of living systems, particularly humans, to technological artefacts, especially computers—from Turing’s opening gambit, to subsequent considerations of artificial intelligence, to recent claims about artificial life. Some now argue that the capacities of future technologies will ultimately make it impossible to draw any meaningful distinctions between humans and machines. Such issues center on what sense, if any, it makes to claim that gadgets can actually think, feel, act, live, etc. I
10

Legato, Marianne J., Francoise Simon, James E. Young, Tatsuya Nomura, and Ibis Sánchez-Serrano. "Roundtable Discussion III: The Development and Uses of Artificial Intelligence in Medicine: A Work in Progress." Gender and the Genome 4 (January 1, 2020): 247028971989870. http://dx.doi.org/10.1177/2470289719898701.

Full text
Abstract:
Humans have devised machines to replace computation by individuals since ancient times: The abacus predated the written Hindu–Arabic numeral system by centuries. We owe a quantum leap in the development of machines to help problem solve to the British mathematician Charles Babbage who built what he called the Difference Engine in the mid-19th century. But the Turing formula created in 1936 is the foundation for the modern computer; it produced printed symbols on paper tape that listed a series of logical instructions. Three decades later, Olivetti manufactured the first mass-marketed desktop c
More sources

Dissertations / Theses on the topic "Turing machines. Computers. Artificial intelligence"

1

Krebs, Peter R. History & Philosophy of Science UNSW. "Turing machines, computers and artificial intelligence." Awarded by: University of New South Wales. History & Philosophy of Science, 2002. http://handle.unsw.edu.au/1959.4/19053.

Full text
Abstract:
This work investigates some of the issues and consequences for the field of artificial intelligence and cognitive science, which are related to the perceived limits of computation with current digital equipment. The Church–Turing thesis and the specific properties of Turing machines are examined and some of the philosophical 'in principle' objections, such as the application of Gödel's incompleteness theorem, are discussed. It is argued that the misinterpretation of the Church–Turing thesis has led to unfounded assumptions about the limitations of computing machines in general. Modern digita
2

Zenil, Hector. "Une approche expérimentale à la théorie algorithmique de la complexité." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2011. http://tel.archives-ouvertes.fr/tel-00839374.

Full text
Abstract:
A constraining feature of Kolmogorov-Chaitin complexity (denoted in this chapter by K) is that it is not computable because of the halting problem, which limits its domain of application. Another criticism concerns the dependence of K on a particular language or a particular universal Turing machine, especially for fairly short strings, for example those shorter than the typical lengths of programming-language compilers. In practice, an approximation of K(s) can be obtained using compression methods. But the performance of these met
3

Lagerkvist, Love. "Computation as Strange Material : Excursions into Critical Accidents." Thesis, Malmö universitet, Institutionen för konst, kultur och kommunikation (K3), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43639.

Full text
Abstract:
Waking up in a world where everyone carries a miniature supercomputer, interaction designers find themselves in their forerunners' dreams. Faced with the reality of planetary-scale computation, we have to confront the task of articulating approaches responsive to this accidental ubiquity of computation. This thesis attempts such a formulation by defining computation as a strange material, a plasticity shaped equally by its technical properties and the mode of production by which it is continuously re-produced. The definition is applied through a methodology of excursions — participatory explorations into two
4

Krebs, Peter R. "Turing machines, computers, and artificial intelligence /." 2002. http://www.library.unsw.edu.au/~thesis/adt-NUN/public/adt-NUN20030721.112755/index.html.

Full text
5

Hasti, Veeraraghava Raju. "HIGH-PERFORMANCE COMPUTING MODEL FOR A BIO-FUEL COMBUSTION PREDICTION WITH ARTIFICIAL INTELLIGENCE." Thesis, 2019.

Find full text
Abstract:
The main accomplishments of this research are (1) developed a high fidelity computational methodology based on large eddy simulation to capture lean blowout (LBO) behaviors of different fuels; (2) developed fundamental insights into the combustion processes leading to the flame blowout and fuel composition effects on the lean blowout limits; (3) developed artificial intelligence-based models for early detection of the onset of the lean blowout in a realistic complex combustor. The methodologies are demonstrated by performing the lean blowout (LBO) calcula
6

Anbil, Parthipan Sarath Chandar. "On challenges in training recurrent neural networks." Thèse, 2019. http://hdl.handle.net/1866/23435.

Full text
Abstract:
In a multi-step prediction problem, the prediction at each time step can depend on the input at any point in a distant past. Modelling such a long-term dependency is one of the fundamental problems in machine learning. In theory, Recurrent Neural Networks (RNNs) can model any long-term dependency. In practice, since the magnitude of the gradients can grow or shrink exponentially with the length of the sequence, RNNs can only model short-term dependencies. This thesis explores this problem in the
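The exploding/vanishing-gradient behaviour the abstract describes can be illustrated with a minimal sketch (not taken from the thesis): in a linear one-unit RNN h_t = w·h_{t-1} + x_t, the derivative of the final state with respect to the initial state is the product of T identical factors w, i.e. w**T, which explodes for |w| > 1 and vanishes for |w| < 1.

```python
# Illustrative sketch (not from the thesis): gradient of h_T w.r.t. h_0
# in a linear one-unit RNN is w**T, one Jacobian factor w per time step.

def gradient_wrt_h0(w: float, T: int) -> float:
    grad = 1.0
    for _ in range(T):
        grad *= w  # accumulate one factor of w per unrolled step
    return grad

print(gradient_wrt_h0(1.1, 100))  # |w| > 1: grows exponentially
print(gradient_wrt_h0(0.9, 100))  # |w| < 1: shrinks exponentially
```

Even a small deviation of w from 1 makes the gradient unusable over a hundred steps, which is why plain RNNs struggle with long-term dependencies.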

Books on the topic "Turing machines. Computers. Artificial intelligence"

1

Wells, Andrew. Rethinking cognitive computation: Turing and the science of the mind. Palgrave Macmillan, 2006.

Find full text
2

Rethinking cognitive computation: Turing and the science of the mind. Palgrave Macmillan, 2005.

Find full text
3

Denning, Peter J. Will machines ever think? Research Institute for Advanced Computer Science, NASA Ames Research Center, 1986.

Find full text
4

Kurzweil, Raymond. The age of intelligent machines. MIT Press, 1990.

Find full text
5

The age of intelligent machines. MIT Press, 1990.

Find full text
6

Thinking machines: The evolution of artificial intelligence. B. Blackwell, 1987.

Find full text
7

The age of spiritual machines: How we will live, work and think in the new age of intelligent machines. Orion Business, 1999.

Find full text
8

Kurzweil, Raymond. The age of spiritual machines: When computers exceed human intelligence. Texere Publishing, 2001.

Find full text
9

Johnson, R. Colin. Cognizers: Neural networks and machines that think. Wiley, 1988.

Find full text
10

Nadeau, Robert. Mind, machines, and human consciousness. Contemporary Books, 1991.

Find full text
More sources

Book chapters on the topic "Turing machines. Computers. Artificial intelligence"

1

Mainzer, Klaus. "Computers Learn to Speak." In Artificial intelligence - When do machines take over? Springer Berlin Heidelberg, 2019. http://dx.doi.org/10.1007/978-3-662-59717-0_5.

Full text
2

Oliveira, Arlindo. "The Quest for Intelligent Machines." In The Digital Mind. The MIT Press, 2017. http://dx.doi.org/10.7551/mitpress/9780262036030.003.0005.

Full text
Abstract:
This chapter addresses the question of whether a computer can become intelligent and how to test for that possibility. It introduces the idea of the Turing test, a test developed to determine, in an unbiased way, whether a program running in a computer is, or is not, intelligent. The development of artificial intelligence led, in time, to many applications of computers that are not possible using “non-intelligent” programs. One important area in artificial intelligence is machine learning, the technology that makes it possible for computers to learn, from existing data, in ways similar to the ways humans learn. A number of approaches to machine learning are addressed in this chapter, including neural networks, decision trees, and Bayesian learning. The chapter concludes by arguing that the brain is, in reality, a very sophisticated statistical machine aimed at improving the chances of survival of its owner.
3

Copeland, Jack. "Intelligent machinery." In The Turing Guide. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198747826.003.0035.

Full text
Abstract:
This chapter explains why Turing is regarded as founding father of the field of artificial intelligence (AI), and analyses his famous method for testing whether a computer is capable of thought. In the weeks before his 1948 move from the National Physical Laboratory to Manchester, Turing wrote what was, with hindsight, the first manifesto of artificial intelligence (AI). His provocative title was simply Intelligent Machinery. While the rest of the world was just beginning to wake up to the idea that computers were the new way to do high-speed arithmetic, Turing was talking very seriously about ‘programming a computer to behave like a brain’. Among other shatteringly original proposals, Intelligent Machinery contained a short outline of what we now refer to as ‘genetic’ algorithms—algorithms based on the survival-of-the-fittest principle of Darwinian evolution—as well as describing the striking idea of building a computer out of artificial human nerve cells, an approach now called ‘connectionism’. Turing’s early connectionist architecture is outlined in Chapter 29. Strangely enough, Turing’s 1940 anti-Enigma bombe was the first step on the road to modern AI. As Chapter 12 explains, the bombe worked by searching at high speed for the correct settings of the Enigma machine—and once it had found the right settings, the random-looking letters of the encrypted message turned into plain German. The bombe was a spectacularly successful example of the mechanization of thought processes: Turing’s extraordinary machine performed a job, codebreaking, that requires intelligence when human beings do it. The fundamental idea behind the bombe, and one of Turing’s key discoveries at Bletchley Park, was what modern AI researchers call ‘heuristic search’. The use of heuristics—shortcuts or rules of thumb that cut down the amount of searching required to find the answer—is still a fundamental technique in AI today. 
The difficulty Turing confronted in designing the bombe was that the Enigma machine had far too many possible settings for the bombe just to search blindly through them until it happened to stumble on the right answer—the war might have been over before it produced a result. Turing’s brilliant idea was to use heuristics to narrow, and so to speed up, the search. Turing’s idea of using crib-loops to narrow the search was the principal heuristic employed in the bombe (as Chapter 12 explains).
4

Carpenter, Brian, and Robert Doran. "Turing’s Zeitgeist." In The Turing Guide. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198747826.003.0031.

Full text
Abstract:
This chapter reviews the history of Alan Turing’s design proposal for an Automatic Computing Engine (ACE) and how he came to write it in 1945, and takes a fresh look at the numerous formative ideas it included. All of these ideas resurfaced in the young computing industry over the following fifteen years. We cannot tell to what extent Turing’s unpublished foresights were passed on to other pioneers, or to what extent they were rediscovered independently as their time came. In any case, they all became part of the Zeitgeist of the computing industry. At some universities, such as ours in New Zealand, the main computer in 1975 was a Burroughs B6700, a ‘stack’ machine. In this kind of machine, data, including items such as the return address for a subroutine, are stored on top of one another so that the last one in becomes the first one out. In effect, each new item on the stack ‘buries’ the previous one. Apart from the old English Electric KDF9, and the recently introduced Digital Equipment Corporation PDP-11, stack machines were unusual. Where had this idea come from? It just seemed to be part of computing’s Zeitgeist, the intellectual climate of the discipline, and it remains so to this day. Computer history was largely American in the 1970s—the computer was called the von Neumann machine and everybody knew about the early American machines such as ENIAC and EDVAC. Early British computers were viewed as a footnote; the fact that the first stored program in history ran in Manchester was largely overlooked, which is probably why the word ‘program’ is usually spelt in the American way. There was a tendency to assume that all the main ideas in computing, such as the idea of a stack, had originated in the United States. At that time, Alan Turing was known as a theoretician and for his work on artificial intelligence. 
The world didn’t know that he was a cryptanalyst, didn’t know that he tinkered with electronics, didn’t know that he designed a computer, and didn’t know that he was gay. He was hardly mentioned in the history of practical computing.
5

Turing, Alan. "Chess (1953)." In The Essential Turing. Oxford University Press, 2004. http://dx.doi.org/10.1093/oso/9780198250791.003.0023.

Full text
Abstract:
Chess and some other board games are a test-bed for ideas in Artificial Intelligence. Donald Michie—Turing’s wartime colleague and subsequently founder of the Department of Machine Intelligence and Perception at the University of Edinburgh—explains the relevance of chess to AI: Computer chess has been described as the Drosophila melanogaster of machine intelligence. Just as Thomas Hunt Morgan and his colleagues were able to exploit the special limitations and conveniences of the Drosophila fruit fly to develop a methodology of genetic mapping, so the game of chess holds special interest for the study of the representation of human knowledge in machines. Its chief advantages are: (1) chess constitutes a fully defined and well-formalized domain; (2) the game challenges the highest levels of human intellectual capacity; (3) the challenge extends over the full range of cognitive functions such as logical calculation, rote learning, concept-formation, analogical thinking, imagination, deductive and inductive reasoning; (4) a massive and detailed corpus of chess knowledge has accumulated over the centuries in the form of chess instructional works and commentaries; (5) a generally accepted numerical scale of performance is available in the form of the U.S. Chess Federation and International ELO rating system. In 1945, in his paper ‘Proposed Electronic Calculator’, Turing predicted that computers would probably play ‘very good chess’, an opinion echoed in 1949 by Claude Shannon of Bell Telephone Laboratories, another leading early theoretician of computer chess. By 1958, Herbert Simon and Allen Newell were predicting that within ten years the world chess champion would be a computer, unless barred by the rules. Just under forty years later, on 11 May 1997, IBM’s Deep Blue beat the reigning world champion, Gary Kasparov, in a six-game match. Turing was theorizing about the mechanization of chess as early as 1941. 
Fellow codebreakers at GC & CS remember him experimenting with two heuristics now commonly used in computer chess, minimax and best-first. The minimax heuristic involves assuming that one’s opponent will move in such a way as to maximize their gains; one then makes one’s own move in such a way as to minimize the losses caused by the opponent’s expected move.
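The minimax heuristic described in this abstract fits in a few lines of code. The sketch below is illustrative, not code from any cited work; the game-interface functions (get_moves, apply_move, evaluate) are assumed names to be supplied by the caller for a particular game.

```python
def minimax(state, depth, maximizing, get_moves, apply_move, evaluate):
    """Assume the opponent maximizes their gain; choose our move to
    minimize the loss caused by the opponent's expected reply."""
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # leaf: score the position
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      get_moves, apply_move, evaluate) for m in moves)
    return max(scores) if maximizing else min(scores)
```

For chess, depth-limited search of this kind must be paired with the pruning heuristics the chapter discusses, since the full game tree is far too large to search blindly.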
6

Copeland, Jack, and Diane Proudfoot. "Connectionism: computing with neurons." In The Turing Guide. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198747826.003.0039.

Full text
Abstract:
Modern ‘connectionists’ are exploring the idea of using artificial neurons (artificial brain cells) to compute. Many see connectionist research as the route not only to artificial intelligence (AI) but also to achieving a deep understanding of how the human brain works. It is less well known than it should be that Turing was the first pioneer of connectionism. Digital computers are superb number crunchers. Ask them to predict a rocket’s trajectory or calculate the financial figures for a large multinational corporation and they can churn out the answers in seconds. But seemingly simple actions that people routinely perform, such as recognizing a face or reading handwriting, have been devilishly tricky to program. Perhaps the networks of neurons that make up a brain have a natural facility for these and other tasks that standard computers simply lack (Fig. 29.1). Scientists have therefore been investigating computers modelled more closely on the biological brain. Connectionism is the science of computing with networks of artificial neurons. Currently researchers usually simulate the neurons and their interconnections within an ordinary digital computer, just as engineers create virtual models of aircraft wings and skyscrapers. A training algorithm that runs on the computer adjusts the connections between the neurons, honing the network into a special-purpose machine dedicated to performing some particular function, such as forecasting international currency markets. In a famous demonstration of the potential of connectionism in the 1980s, James McClelland and David Rumelhart trained a network of 920 neurons to form the past tenses of English verbs. Verbs such as ‘come’, ‘look’, and ‘sleep’ were presented (suitably encoded) to the layer of input neurons. 
The automatic training system noted the difference between the actual response at the output neurons and the desired response (such as ‘came’) and then mechanically adjusted the connections throughout the network in such a way as to give the network a slight push in the direction of the correct response. About 400 different verbs were presented to the network one by one, and after each presentation the network’s connections were adjusted. By repeating this whole procedure approximately 200 times, the connections were honed to meet the needs of all the verbs in the training set. The network’s training was now complete, and without further intervention it could form the past tenses of all the verbs in the training set.
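The training loop the abstract describes (compare the actual response with the desired one, then give every connection a slight push toward the correct response) can be sketched as a simple delta rule. This toy single linear unit with two inputs is an illustrative assumption, far smaller than the 920-neuron past-tense network.

```python
def train(patterns, lr=0.1, epochs=200):
    """Delta-rule training of one linear unit: after each presentation,
    nudge each weight in the direction of the desired response."""
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):                # repeat the whole procedure
        for (x0, x1), target in patterns:  # present patterns one by one
            actual = w0 * x0 + w1 * x1 + b
            error = target - actual        # desired minus actual response
            w0 += lr * error * x0          # slight push toward the target
            w1 += lr * error * x1
            b += lr * error
    return w0, w1, b
```

After enough presentations the weights settle so that the unit reproduces every target in the training set, just as the past-tense network eventually handled all 400 verbs.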
7

Proudfoot, Diane. "The Turing test—from every angle." In The Turing Guide. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198747826.003.0037.

Full text
Abstract:
Can machines think? Turing’s famous test is one way of determining the answer. On the sixtieth anniversary of his death, the University of Reading announced that a ‘historic milestone in artificial intelligence’ had been reached at the Royal Society: a computer program had passed the ‘iconic’ Turing test. According to an organizer, this was ‘one of the most exciting’ advances in human understanding. In a frenzy of worldwide publicity, the news was described as a ‘breakthrough’ showing that ‘robot overlords creep closer to assuming control’ of human beings. Yet after only a single day it was claimed that ‘almost everything about the story is bogus’: it was ‘nonsense, complete nonsense’ to say that the Turing test had been passed. The program concerned ‘actually got an F’ on the test. The backlash spread to the test itself; critics said that the ‘whole concept of the Turing Test is kind of a joke . . . a needless distraction’. So, what is the Turing test—and why does it matter? In 1948, in a report entitled ‘Intelligent machinery’, Turing described a ‘little experiment’ that, he said, was ‘a rather idealized form of an experiment I have actually done’. It involved three subjects, all chess players. Player A plays chess as he/she normally would, while player B is proxy for a computer program, following a written set of rules and working out what to do using pencil and paper—this ‘paper machine’ was the only sort of programmable computer freely available in 1948 (see Ch. 31). Both of these players are hidden from the third player, C. Turing said, ‘Two rooms are used with some arrangement for communicating moves, and a game is played between C and either A or the paper machine’. How did the experiment fare? According to Turing, ‘C may find it quite difficult to tell which he is playing’. This is the first version of what has come to be known as ‘Turing’s imitation game’ or the ‘Turing test’.
8

Franco, John, and John Martin. "Chapter 1. A History of Satisfiability." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2021. http://dx.doi.org/10.3233/faia200984.

Full text
Abstract:
This chapter traces the links between the notion of Satisfiability and the attempts by mathematicians, philosophers, engineers, and scientists over the last 2300 years to develop effective processes for emulating human reasoning and scientific discovery, and for assisting in the development of electronic computers and other electronic components. Satisfiability was present implicitly in the development of ancient logics such as Aristotle’s syllogistic logic, its extensions by the Stoics, and Lull’s diagrammatic logic of the medieval period. From the Renaissance to Boole, algebraic approaches to effective process replaced the logics of the ancients and all but enunciated the meaning of Satisfiability for propositional logic. Clarification of the concept is credited to Tarski in working out necessary and sufficient conditions for “p is true” for any formula p in first-order syntax. At about the same time, the study of effective process increased in importance with the resulting development of lambda calculus, recursive function theory, and Turing machines, all of which became the foundations of computer science and are linked to the notion of Satisfiability. Shannon provided the link to the computer age, and Davis and Putnam directly linked Satisfiability to automated reasoning via an algorithm which is the backbone of most modern SAT solvers. These events propelled the study of Satisfiability for the next several decades, reaching “epidemic proportions” in the 1990s and 2000s, and the chapter concludes with a brief history of each of the major Satisfiability-related research tracks that developed during that period.
9

Sprevak, Mark. "Turing’s model of the mind." In The Turing Guide. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198747826.003.0036.

Full text
Abstract:
This chapter examines Alan Turing’s contribution to the field that offers our best understanding of the mind: cognitive science. The idea that the human mind is (in some sense) a computer is central to cognitive science. Turing played a key role in developing this idea. The precise course of Turing’s influence on cognitive science is complex and shows how seemingly abstract work in mathematical logic can spark a revolution in psychology. Alan Turing contributed to a revolutionary idea: that mental activity is computation. Turing’s work helped lay the foundation for what is now known as cognitive science. Today, computation is an essential element for explaining how the mind works. In this chapter, I return to Turing’s early attempts to understand the mind using computation and examine the role that Turing played in the early days of cognitive science. Turing is famous as a founding figure in artificial intelligence (AI) but his contribution to cognitive science is less well known. The aim of AI is to create an intelligent machine. Turing was one of the first people to carry out research in AI, working on machine intelligence as early as 1941 and, as Chapters 29 and 30 explain, he was responsible for, or anticipated, many of the ideas that were later to shape AI. Unlike AI, cognitive science does not aim to create an intelligent machine. It aims instead to understand the mechanisms that are peculiar to human intelligence. On the face of it, human intelligence is miraculous. How do we reason, understand language, remember past events, come up with a joke? It is hard to know how even to begin to explain these phenomena. Yet, like a magic trick that looks like a miracle to the audience but is explained by revealing the pulleys and levers behind the stage, human intelligence could be explained if we knew the mechanisms that lie behind its production.
A first step in this direction is to examine a piece of machinery that is usually hidden from view: the human brain. A challenge is the astonishing complexity of the human brain: it is one of the most complex objects in the universe, containing 100 billion neurons and a web of around 100 trillion connections.
APA, Harvard, Vancouver, ISO, and other styles
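The Turing machine underlying the "mental activity is computation" idea discussed above can be sketched in a few lines. This is a generic illustrative simulator, not any construction from Sprevak's chapter; the transition table and the bit-flipping machine are hypothetical examples.

```python
# Minimal one-tape Turing machine: transitions map
# (state, symbol) -> (new_state, symbol_to_write, head_move)
def run_tm(transitions, tape, state="start", pos=0, max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape; "_" is the blank symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        state, write, move = transitions[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example machine: flip every bit, halt at the first blank
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "0110"))  # 1001
```

Despite this simplicity, such machines are universal in Turing's sense, which is what made them a plausible abstract model of mental mechanism for early cognitive science.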
10

"The Irrelevance of Turing Machines to Artificial Intelligence." In Computationalism. The MIT Press, 2003. http://dx.doi.org/10.7551/mitpress/2030.003.0006.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Turing machines. Computers. Artificial intelligence"

1

Deb, Sankha, and Kalyan Ghosh. "Artificial Intelligence Based Inference Techniques for Automated Process Planning for Machined Parts." In ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/detc2002/cie-34507.

Full text
Abstract:
Many areas of research in manufacturing are increasingly turning to applications of Artificial Intelligence (AI). The problem of developing inference strategies for automated process planning in machining is one such area of successful application of AI based approaches. Given the high complexity of the process planning expertise, development of inference techniques for automated process planning is a big challenge to researchers. The traditional inference methods based on variant and generative approaches using decision trees and decision tables suffer from a number of shortcomings, which hav
APA, Harvard, Vancouver, ISO, and other styles
2

Bozinovski, Stevo, and Adrijan Bozinovski. "Artificial Intelligence and infinity: Infinite series generated by Turing Machines." In SoutheastCon 2017. IEEE, 2017. http://dx.doi.org/10.1109/secon.2017.7925371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mao, Qianren, Jianxin Li, Senzhang Wang, et al. "Aspect-Based Sentiment Classification with Attentive Neural Turing Machines." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/714.

Full text
Abstract:
Aspect-based sentiment classification aims to identify sentiment polarity expressed towards a given opinion target in a sentence. The sentiment polarity of the target is not only highly determined by sentiment semantic context but also correlated with the concerned opinion target. Existing works cannot effectively capture and store the inter-dependence between the opinion target and its context. To solve this issue, we propose a novel model of Attentive Neural Turing Machines (ANTM). Via interactive read-write operations between an external memory storage and a recurrent controller, ANTM can l
APA, Harvard, Vancouver, ISO, and other styles
4

Baltatanu, Adrian, and Marin-Leonard Florea. "Multiphase machines used in electric vehicles propulsion." In 2013 International Conference on Electronics, Computers and Artificial Intelligence (ECAI). IEEE, 2013. http://dx.doi.org/10.1109/ecai.2013.6636204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Iorgulescu, Mariana. "Electrical machines learning and training in One2One project." In 2015 7th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). IEEE, 2015. http://dx.doi.org/10.1109/ecai.2015.7301227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jasinski, Mark, Elzbieta Jasinska, Michal Jasinski, and Lukasz Jasinski. "Computer-aided appliances to underground machines maintenance – Selected issues." In 2018 10th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). IEEE, 2018. http://dx.doi.org/10.1109/ecai.2018.8679066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pusca, Remus, and Raphael Romary. "Advances in diagnosis of electrical machines through external magnetic field." In 2015 7th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). IEEE, 2015. http://dx.doi.org/10.1109/ecai.2015.7301137.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Du, Dongmei, and Qing He. "Orbit Shape Automatic Recognition Based on Artificial Neural Network." In ASME 2005 Power Conference. ASMEDC, 2005. http://dx.doi.org/10.1115/pwr2005-50208.

Full text
Abstract:
The orbit is a significant symptom in the fault diagnosis of rotating machines. The orbit is a 2-D image and can be described by moment invariants, a shape property of 2-D images that is invariant under translation, rotation, and scaling. The descriptive method of the orbit image is investigated, and an automatic orbit shape recognition based on an artificial neural network (ANN) with moment invariants is proposed in this paper. The ANN for orbit shape recognition is trained on training patterns generated by computer simulation for plenty of orbit shapes. It is shown tha
APA, Harvard, Vancouver, ISO, and other styles
9

Morstatter, Fred, Aram Galstyan, Gleb Satyukov, et al. "SAGE: A Hybrid Geopolitical Event Forecasting System." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/955.

Full text
Abstract:
Forecasting of geopolitical events is a notoriously difficult task, with experts failing to significantly outperform a random baseline across many types of forecasting events. One successful way to increase the performance of forecasting tasks is to turn to crowdsourcing: leveraging many forecasts from non-expert users. Simultaneously, advances in machine learning have led to models that can produce reasonable, although not perfect, forecasts for many tasks. Recent efforts have shown that forecasts can be further improved by ``hybridizing'' human forecasters: pairing them with the machine mode
APA, Harvard, Vancouver, ISO, and other styles
10

Bright, Curtis, Kevin K. H. Cheung, Brett Stevens, Ilias Kotsireas, and Vijay Ganesh. "Unsatisfiability Proofs for Weight 16 Codewords in Lam's Problem." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/203.

Full text
Abstract:
In the 1970s and 1980s, searches performed by L. Carter, C. Lam, L. Thiel, and S. Swiercz showed that projective planes of order ten with weight 16 codewords do not exist. These searches required highly specialized and optimized computer programs and required about 2,000 hours of computing time on mainframe and supermini computers. In 2010, these searches were verified by D. Roy using an optimized C program and 16,000 hours on a cluster of desktop machines. We performed a verification of these searches by reducing the problem to the Boolean satisfiability problem (SAT). Our verification uses t
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Turing machines. Computers. Artificial intelligence"

1

Raychev, Nikolay. Can human thoughts be encoded, decoded and manipulated to achieve symbiosis of the brain and the machine. Web of Open Science, 2020. http://dx.doi.org/10.37686/nsrl.v1i2.76.

Full text
Abstract:
This article discusses the current state of neurointerface technologies, not limited to deep electrode approaches. There are new heuristic ideas for creating a fast and broadband channel from the brain to artificial intelligence. One of the ideas is not to decipher the natural codes of nerve cells, but to create conditions for the development of a new language for communication between the human brain and artificial intelligence tools. Theoretically, this is possible if the brain "feels" that by changing the activity of nerve cells that communicate with the computer, it is possible to "achieve
APA, Harvard, Vancouver, ISO, and other styles