Academic literature on the topic 'Quantum computational complexity'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Quantum computational complexity.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Dissertations / Theses on the topic "Quantum computational complexity"

1

Kawachi, Akinori. "Studies on quantum query complexity and quantum computational cryptography." 京都大学 (Kyoto University), 2004. http://hdl.handle.net/2433/145315.

Abstract:
Kyoto University (京都大学). Doctoral thesis, Doctor of Informatics (博士(情報学)), Graduate School of Informatics, Department of Communications and Computer Engineering. Examining committee: Prof. 岩間 一雄, Prof. 福嶋 雅夫, Prof. 北野 正雄.
2

Mehraban, Saeed. "Computational complexity of certain quantum theories in 1+1 dimensions." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101472.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.

While physical theories attempt to break down the observed structure and behavior of possibly large and complex systems into short descriptive axioms, the perspective of a computer scientist is to start with a simple and believable set of rules and discover their large-scale behaviors. Computer science and physics can, however, be combined into a new framework, in which structures can be compared with each other according to scalar observables like mass and temperature, and also according to complexity. For example, just as we say that one object is heavier than another, we can discuss which system is more complex. From this point of view, a more complex system is one that can be programmed to encode and simulate the behavior of the others within its own degrees of freedom. Within this framework, at the most fundamental layer, physical systems are related to languages. In this thesis, I try to exemplify this point of view through an analysis of certain quantum theories in two-dimensional space-time. In simple words, these models are the quantum analogues of the elastic scattering of colored balls moving on a line. The models are closely related to each other in both the relativistic and non-relativistic regimes. Physical examples that motivate this are the factorized scattering matrix of quantum field theory, and the repulsive delta interactions of quantum mechanics, in 1+1 dimensions. In classical mechanics, when two hard balls collide, they bounce off and remain in the same order. In the quantum setting, however, during a collision the balls either bounce off or tunnel through each other and exchange their configurations. Each event occurs with a certain probability. As a result, moving balls are put into a superposition of different color configurations. Thereby, if we consider n distinct balls, the state space is spanned by their n! possible arrangements, and their collisions act as quantum transpositions. Quantum transpositions can then be viewed as local quantum gates. I therefore consider the general Hilbert space of permutations, and study the space of unitary operators that can be generated by the local permuting gates.

I first show that all of the discussed quantum theories can be programmed into an idealized model, the quantum ball-permuting model, and then try to pin down the language of this model within the known complexity classes. The main approach is to consider a series of models, as variations of the ball-scattering problem, and to relate them to each other using the tools of computational complexity and quantum complexity theory. I find that the computational complexity of the ball-permuting model depends on the initial superposition of the balls. More precisely, if the balls start out from the identity permutation, the model can be simulated with one clean qubit, a model believed to be strictly weaker than the standard model of quantum computing. Given this upper bound on the ball-permuting model, ball scatterings that start from the identity permutation can likewise be simulated with one clean qubit. Furthermore, I demonstrate that if special superpositions are allowed in the initial state, then the ball-permuting model can efficiently simulate, and sample from the output distribution of, standard quantum computers. Next, I show how to use intermediate demolition ball detections to simulate the ball-permuting model nondeterministically. By post-selecting on the outcomes of these measurements, one obtains the original ball-permuting model; therefore the post-selected analogue of the ball-scattering model can efficiently simulate standard quantum computers when arbitrary initial superpositions are allowed. Finally, I formalize a quantum computer based on ball collisions and intermediate ball detections, and prove that efficient simulation of this model on a classical computer is ruled out unless the polynomial hierarchy collapses to its third level.
3

Hangleiter, Dominik. "Sampling and the complexity of nature: Assessing, testing, and challenging the computational power of quantum devices." Berlin: Freie Universität Berlin, 2021. http://d-nb.info/1225349451/34.

4

Grilo, Alex Bredariol. "Computação quântica e teoria de computação" [Quantum computing and theory of computation]. Universidade Estadual de Campinas, 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275508.

Abstract:
Advisor: Arnaldo Vieira Moura. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.

Quantum Computing is a relatively new area and is not well known, especially among computer scientists. It emerged as physicists tried to simulate quantum systems efficiently with classical computers, which has been conjectured to be impossible. A new computational model that uses the quantum structure of matter to perform computations was therefore theorized. In this work we study the influence of Quantum Computing on Theoretical Computer Science. To achieve this goal, we start by presenting the basics of Quantum Computing to theoretical computer science readers with no previous knowledge of the area, removing any initial barriers to a clean understanding of the topic. We then present innovations in Theoretical Computer Science introduced by Quantum Computation. We begin with the main quantum algorithms, which exemplify the advantages of the new computational model; among these, we present the famous Shor algorithm, which factors numbers in polynomial time. We follow with more advanced topics in quantum computability and complexity. Regarding quantum automata, we study a model that mixes classical and quantum states, focusing on comparing its computational power with that of deterministic finite automata. In complexity theory, we study whether, for languages in QMA, the quantum analogue of NP, zero error probability can be achieved on yes-instances.
5

Pégny, Maël. "Sur les limites empiriques du calcul : calculabilité, complexité et physique." Thesis, Paris 1, 2013. http://www.theses.fr/2013PA010673/document.

Abstract:
In recent years, the computer science community has shown growing interest in non-standard computational models inspired by physical, biological, or chemical phenomena. The exact properties of some of these models have been a topic of heated discussion: what do they compute, and how fast do they compute it? The stakes of these questions were heightened by the possibility that some of these models might transgress the accepted limits of computation, by violating either the Church-Turing thesis or the extended Church-Turing thesis. In these debates, the physical realizability of some of those models, or the lack thereof, has often been placed at the center of the argument. Empirical considerations thus seem to have been introduced into the very foundations of computability and computational complexity theory, two subjects previously considered purely a priori parts of logic and computer science. Consequently, this dissertation is dedicated to the following question: do the limits of computation rest on empirical foundations, and if so, what are they? We first examine the precise meaning of these limits of computation and articulate an epistemic conception of computation that allows the most varied models to be compared. We answer the first question in the affirmative, through a careful examination of the debates surrounding the feasibility of non-standard models. Finally, we show the uncertainties surrounding the second question in the current state of research, exhibiting the difficulties of translating computational concepts into physical limitations.
6

Montanaro, Ashley. "Structure, randomness and complexity in quantum computation." Thesis, University of Bristol, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.443658.

7

Kashefi, Elham. "Complexity analysis and semantics for quantum computation." Thesis, Imperial College London, 2003. http://hdl.handle.net/10044/1/11786.

8

Dam, Wim van. "Nonlocality and communication complexity." Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325982.

9

Koh, Dax Enshan. "Classical simulation complexity of restricted models of quantum computation." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122164.

Abstract:
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Mathematics, 2019.

Restricted models of quantum computation are mathematical models that describe quantum computers with limited access to certain resources. Well-known examples include the boson sampling model, extended Clifford circuits, and instantaneous quantum polynomial-time circuits. While unlikely to be universal for quantum computation, several of these models appear able to outperform classical computers at certain computational tasks, such as sampling from certain probability distributions. Understanding which of these models are capable of performing such tasks, and characterizing their classical simulation complexity (i.e., how hard it is to simulate these models on a classical computer), are some of the central questions we address in this thesis. Our first contribution is a classification of various extended Clifford circuits according to their classical simulation complexity. Among these circuits are the conjugated Clifford circuits, which we prove cannot be efficiently classically simulated up to multiplicative or additive error, under certain plausible conjectures in computational complexity theory. Our second contribution is an estimate of the number of qubits needed in various restricted models of quantum computation in order for them to demonstrate quantum computational supremacy; the estimate is obtained by fine-graining existing hardness results for these restricted models. Our third contribution is a new alternative proof of the Gottesman-Knill theorem, which states that Clifford circuits can be efficiently simulated by a classical computer. Our proof uses the sum-over-paths technique and establishes a correspondence between quantum circuits and a class of exponential sums. Our final contribution is a theorem characterizing the operations that can be efficiently simulated using a particular rebit simulator; an application of this result is a generalization of the Gottesman-Knill theorem that allows for the efficient classical simulation of certain nonlinear operations.
10

Gheorghiu, Alexandru. "Robust verification of quantum computation." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31542.

Abstract:
Quantum computers promise to offer a considerable speed-up in solving certain problems, compared to the best classical algorithms. In many instances, the gap between quantum and classical running times is conjectured to be exponential. While this is great news for those applications where quantum computers would provide such an advantage, it also raises a significant challenge: how can classical computers verify the correctness of quantum computations? In attempting to answer this question, a number of protocols have been developed in which a classical client (referred to as the verifier) can interact with one or more quantum servers (referred to as provers) in order to certify the correctness of a quantum computation performed by the server(s). These protocols are of one of two types: either there are multiple non-communicating provers sharing entanglement and the verifier is completely classical; or there is a single prover and the classical verifier has a device for preparing or measuring quantum states. The latter type of protocol is, arguably, more relevant to near-term quantum computers, since having multiple quantum computers that share a large amount of entanglement is, from a technological standpoint, extremely challenging. Before practical single-prover protocols can be realised, a number of challenges need to be addressed: how robust are these protocols to noise on the verifier's device? Can the protocols be made fault-tolerant without significantly increasing the requirements of the verifier? How do we know that the verifier's device is operating correctly? Could this device be eliminated completely, yielding a protocol with a fully classical verifier and a single quantum prover? Our work attempts to provide answers to these questions.

First, we consider a single-prover verification protocol developed by Fitzsimons and Kashefi and show that this protocol is indeed robust with respect to deviations on the quantum state prepared by the verifier. We show that this is true even if those deviations are the result of a correlation with the prover's system. We then use this result to give a verification protocol which is device-independent. The protocol consists of a verifier with a measurement device and a single prover. Device-independence means that the verifier need not trust the measurement device (nor the prover), which can be assumed to be fully malicious (though not communicating with the prover). A key element in realising this protocol is a robust technique of Reichardt, Unger and Vazirani for testing, using non-local correlations, that two untrusted devices share a large number of entangled states. This technique is referred to as rigidity of non-local correlations.

Our second result is a rigidity theorem for a type of quantum correlations known as steering correlations. To prove it, we first show that steering correlations can be used to certify maximally entangled states, in a setting in which each test is independent of the previous one. We also show that the fidelity with which we characterise the state, in this specific test, is optimal. We then improve this result by removing the independence assumption, which leads to our desired rigidity result. We use it, in a similar fashion to the device-independent case, to give a verification protocol that is one-sided device-independent. The importance of this application is to show how different trust assumptions affect the efficiency of the protocol.

Next, we describe a protocol for fault-tolerantly verifying quantum computations, with minimal "quantum requirements" for the verifier. Specifically, the verifier only requires a device for measuring single-qubit states. Both this device and the prover's operations are assumed to be prone to errors. We show that, under standard assumptions about the error model, it is possible to achieve verification of quantum computation using fault-tolerant principles. As a proof of principle, and to better illustrate the inner workings of the protocol, we describe a toy implementation of the protocol in a quantum simulator and present the results we obtained when running it for a small computation.

Finally, we explore the possibility of a verification protocol, with a classical verifier and a single prover, in which the prover is blind with respect to the verifier's computation. We give evidence that this is not possible. In fact, our result concerns only blind quantum computation with a classical client, and uses complexity-theoretic results to argue why it is improbable for such a protocol to exist. We then use these complexity-theoretic techniques to show that a client with the ability to prepare and send quantum states to a quantum server would not be able to delegate arbitrary NP problems to that server. In other words, even a client with quantum capabilities cannot exploit those capabilities to delegate the computation of NP problems while keeping the input to that computation private. This again holds provided certain complexity-theoretic conjectures are true.
