Academic literature on the topic 'Computational Advantage'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference papers, reports, and other scholarly sources on the topic 'Computational Advantage.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Computational Advantage"

1

Zhong, Han-Sen, Hui Wang, Yu-Hao Deng, et al. "Quantum computational advantage using photons." Science 370, no. 6523 (2020): 1460–63. http://dx.doi.org/10.1126/science.abe8770.

2

Madsen, Lars S., Fabian Laudenbach, Mohsen Falamarzi Askarani, et al. "Quantum computational advantage with a programmable photonic processor." Nature 606, no. 7912 (2022): 75–81. http://dx.doi.org/10.1038/s41586-022-04725-x.

Abstract:
A quantum computer attains computational advantage when outperforming the best classical computers running the best-known algorithms on well-defined tasks. No photonic machine offering programmability over all its quantum gates has demonstrated quantum computational advantage: previous machines were largely restricted to static gate sequences. Earlier photonic demonstrations were also vulnerable to spoofing, in which classical heuristics produce samples, without direct simulation, lying closer to the ideal distribution than do samples from the quantum hardware. Here we report quantum computational advantage using Borealis, a photonic processor offering dynamic programmability on all gates implemented. We carry out Gaussian boson sampling (GBS) on 216 squeezed modes entangled with three-dimensional connectivity, using a time-multiplexed and photon-number-resolving architecture. On average, it would take more than 9,000 years for the best available algorithms and supercomputers to produce, using exact methods, a single sample from the programmed distribution, whereas Borealis requires only 36 μs. This runtime advantage is over 50 million times as extreme as that reported from earlier photonic machines. Ours constitutes a very large GBS experiment, registering events with up to 219 photons and a mean photon number of 125. This work is a critical milestone on the path to a practical quantum computer, validating key technological features of photonics as a platform for this goal.
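
As a back-of-envelope check on the headline numbers (an editorial illustration, taking one year as roughly 3.15 × 10^7 s), the per-sample runtime ratio claimed above works out to:

```latex
\frac{T_{\text{classical}}}{T_{\text{Borealis}}}
  \approx \frac{9000 \times 3.15 \times 10^{7}\,\mathrm{s}}{36 \times 10^{-6}\,\mathrm{s}}
  \approx 8 \times 10^{15}
```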
3

May, Mike. "Computational Tools Take Advantage of the Data Deluge." Genetic Engineering & Biotechnology News 43, no. 4 (2023): 42–44. http://dx.doi.org/10.1089/gen.43.04.14.

4

Bravyi, Sergey, David Gosset, and Robert König. "Quantum advantage with shallow circuits." Science 362, no. 6412 (2018): 308–11. http://dx.doi.org/10.1126/science.aar3106.

Abstract:
Quantum effects can enhance information-processing capabilities and speed up the solution of certain computational problems. Whether a quantum advantage can be rigorously proven in some setting or demonstrated experimentally using near-term devices is the subject of active debate. We show that parallel quantum algorithms running in a constant time period are strictly more powerful than their classical counterparts; they are provably better at solving certain linear algebra problems associated with binary quadratic forms. Our work gives an unconditional proof of a computational quantum advantage and simultaneously pinpoints its origin: It is a consequence of quantum nonlocality. The proposed quantum algorithm is a suitable candidate for near-future experimental realizations, as it requires only constant-depth quantum circuits with nearest-neighbor gates on a two-dimensional grid of qubits (quantum bits).
5

Kendon, Viv, Angelika Sebald, and Susan Stepney. "Heterotic computing: past, present and future." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 373, no. 2046 (2015): 20140225. http://dx.doi.org/10.1098/rsta.2014.0225.

Abstract:
We introduce and define ‘heterotic computing’ as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This first requires a definition of physical computation. We take the framework of Horsman et al. (2014, Proc. R. Soc. A 470, 20140182; doi:10.1098/rspa.2014.0182), now known as abstract-representation theory, then outline how to compose such computational systems. We use examples to illustrate the ubiquity of heterotic computing, and to discuss the issues raised when one or more of the substrates is not a conventional silicon-based computer. We briefly outline the requirements for a proper theoretical treatment of heterotic computational systems, and the advantages such a theory would provide.
6

Oliveira, Michael de, Luís S. Barbosa, and Ernesto F. Galvão. "Quantum advantage in temporally flat measurement-based quantum computation." Quantum 8 (April 9, 2024): 1312. http://dx.doi.org/10.22331/q-2024-04-09-1312.

Abstract:
Several classes of quantum circuits have been shown to provide a quantum computational advantage under certain assumptions. The study of ever more restricted classes of quantum circuits capable of quantum advantage is motivated by possible simplifications in experimental demonstrations. In this paper we study the efficiency of measurement-based quantum computation with a completely flat temporal ordering of measurements. We propose new constructions for the deterministic computation of arbitrary Boolean functions, drawing on correlations present in multi-qubit Greenberger, Horne, and Zeilinger (GHZ) states. We characterize the necessary measurement complexity using the Clifford hierarchy, and also generally decrease the number of qubits needed with respect to previous constructions. In particular, we identify a family of Boolean functions for which deterministic evaluation using non-adaptive MBQC is possible, featuring quantum advantage in width and number of gates with respect to classical circuits.
7

Yoganathan, Mithuna, Richard Jozsa, and Sergii Strelchuk. "Quantum advantage of unitary Clifford circuits with magic state inputs." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 475, no. 2225 (2019): 20180427. http://dx.doi.org/10.1098/rspa.2018.0427.

Abstract:
We study the computational power of unitary Clifford circuits with solely magic state inputs (CM circuits), supplemented by classical efficient computation. We show that CM circuits are hard to classically simulate up to multiplicative error (assuming polynomial hierarchy non-collapse), and also up to additive error under plausible average-case hardness conjectures. Unlike other such known classes, a broad variety of possible conjectures apply. Along the way, we give an extension of the Gottesman–Knill theorem that applies to universal computation, showing that for Clifford circuits with joint stabilizer and non-stabilizer inputs, the stabilizer part can be eliminated in favour of classical simulation, leaving a Clifford circuit on only the non-stabilizer part. Finally, we discuss implementational advantages of CM circuits.
8

Caha, Libor, Xavier Coiteux-Roy, and Robert Koenig. "Single-qubit gate teleportation provides a quantum advantage." Quantum 8 (December 4, 2024): 1548. https://doi.org/10.22331/q-2024-12-04-1548.

Abstract:
Gate-teleportation circuits are arguably among the most basic examples of computations believed to provide a quantum computational advantage: in seminal work, Terhal and DiVincenzo (2004) showed that these circuits elude simulation by efficient classical algorithms under plausible complexity-theoretic assumptions. Here we consider possibilistic simulation (Wang et al., 2021), a particularly weak form of this task where the goal is to output any string appearing with non-zero probability in the output distribution of the circuit. We show that even for single-qubit Clifford-gate-teleportation circuits this simulation problem cannot be solved by constant-depth classical circuits with bounded fan-in gates. Our results are unconditional and are obtained by a reduction to the problem of computing the parity, a well-studied problem in classical circuit complexity.
9

Ferrara, Roberto, Riccardo Bassoli, Christian Deppe, Frank H. P. Fitzek, and Holger Boche. "The Computational and Latency Advantage of Quantum Communication Networks." IEEE Communications Magazine 59, no. 6 (2021): 132–37. http://dx.doi.org/10.1109/mcom.011.2000863.

10

Sakai, Mikio. "Learning Advantage of Computational Granular Dynamics from the Applications." Japanese Journal of Multiphase Flow 32, no. 3 (2018): 314–20. http://dx.doi.org/10.3811/jjmf.2018.t005.


Dissertations / Theses on the topic "Computational Advantage"

1

Bergdahl, Joakim. "Asynchronous Advantage Actor-Critic with Adam Optimization and a Layer Normalized Recurrent Network." Thesis, KTH, Optimeringslära och systemteori, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-220698.

Abstract:
State-of-the-art deep reinforcement learning models rely on asynchronous training using multiple learner agents and their collective updates to a central neural network. In this thesis, one of the most recent asynchronous policy gradient-based reinforcement learning methods, asynchronous advantage actor-critic (A3C), is examined and improved using prior research from the machine learning community. With the application of the Adam optimization method and the addition of a long short-term memory (LSTM) with layer normalization, it is shown that the performance of A3C is increased.
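
A minimal sketch of the architecture the abstract describes, in PyTorch (an illustrative reconstruction, not the author's code; layer sizes and the single-cell layout are assumptions):

```python
import torch
import torch.nn as nn

class A3CHead(nn.Module):
    """Actor-critic head with a layer-normalized LSTM, as described above."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.lstm = nn.LSTMCell(hidden, hidden)
        self.norm = nn.LayerNorm(hidden)             # layer normalization of the recurrent state
        self.policy = nn.Linear(hidden, n_actions)   # actor: action logits
        self.value = nn.Linear(hidden, 1)            # critic: state-value estimate

    def forward(self, obs, state=None):
        h, c = self.lstm(self.encoder(obs), state)
        h = self.norm(h)
        return self.policy(h), self.value(h), (h, c)

net = A3CHead(obs_dim=4, n_actions=2)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)  # Adam in place of A3C's shared RMSProp
```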
2

Sun, Shupeng. "The clarity of disclosure in patents: An economic analysis using computational linguistics." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/122181/1/Shupeng_Sun_Thesis.pdf.

Abstract:
This thesis aims to explore and demonstrate the use of computational linguistic analysis to measure the "readability" of patent documents. Using readability as a proxy for the extent of disclosure in patent documents, the thesis studies whether patent applicants strategically choose the disclosure level for their patents, and how the disclosure level affects patent acquisition and patent examination. The thesis introduces a new method to the quantitative economic analysis of patents and generates research results with important implications for patent policy and practice.
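
The thesis's exact readability measure is not reproduced here; as an illustrative stand-in, a minimal sketch of one standard formula (Flesch Reading Ease) with a crude vowel-group syllable counter:

```python
import re

def count_syllables(word: str) -> int:
    """Crude estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease; higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

print(flesch_reading_ease("The claimed invention relates to a method of compressing data."))
```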
3

Littlefield, William Joseph II. "Abductive Humanism: Comparative Advantages of Artificial Intelligence and Human Cognition According to Logical Inference." Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1554480107736449.

4

Zhang, Hang. "Distributed Support Vector Machine With Graphics Processing Units." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/991.

Abstract:
Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. Sequential Minimal Optimization (SMO) is a decomposition-based algorithm which breaks this large QP problem into a series of smallest-possible QP problems; however, it still costs O(n²) computation time. In our SVM implementation, training on huge data sets is done in a distributed manner: the dataset is broken into chunks, the Message Passing Interface (MPI) distributes each chunk to a different machine, and SVM training proceeds within each chunk. In addition, we moved the kernel calculation part of SVM classification to a graphics processing unit (GPU), which has zero scheduling overhead for creating concurrent threads. In this thesis, we take advantage of this GPU architecture to improve the classification performance of SVM.
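
A minimal sketch of the chunked kernel-evaluation pattern the abstract describes, in NumPy (illustrative only; the thesis distributes chunks over MPI ranks and evaluates the kernel in CUDA, neither of which is reproduced here):

```python
import numpy as np

def rbf_kernel_block(X, Y, gamma):
    """RBF kernel K(x, y) = exp(-gamma * ||x - y||^2) for one pair of chunks."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def kernel_matrix_chunked(X, gamma, chunk=1024):
    """Assemble the kernel matrix chunk by chunk, as a distributed run would
    do with one chunk per worker (computed serially here)."""
    n = X.shape[0]
    K = np.empty((n, n))
    for i in range(0, n, chunk):
        for j in range(0, n, chunk):
            K[i:i+chunk, j:j+chunk] = rbf_kernel_block(X[i:i+chunk], X[j:j+chunk], gamma)
    return K

K = kernel_matrix_chunked(np.random.randn(2000, 16), gamma=0.1)
```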
5

Gu, Yilan. "Advanced Reasoning about Dynamical Systems." Thesis, 2010. http://hdl.handle.net/1807/26274.

Abstract:
In this thesis, we study advanced reasoning about dynamical systems in a logical framework -- the situation calculus. In particular, we consider promoting the efficiency of reasoning about action in the situation calculus from three different aspects. First, we propose a modified situation calculus based on the two-variable predicate logic with counting quantifiers. We show that solving the projection and executability problems via regression in such a language is decidable. We prove that generally these two problems are co-NExpTime-complete in the modified language. We also consider restricting the format of regressable formulas and basic action theories (BATs) further to gain better computational complexity for reasoning about action via regression. We mention possible applications to the formalization of Semantic Web services. Then, we propose a hierarchical representation of actions based on the situation calculus to facilitate development, maintenance and elaboration of very large taxonomies of actions. We show that our axioms can be more succinct, while still using an extended regression operator to solve the projection problem. Moreover, such a representation has significant computational advantages. For taxonomies of actions that can be represented as finitely branching trees, the regression operator can sometimes work exponentially faster with our theories than it does with the BATs of the current situation calculus. We also propose a general guideline on how a taxonomy of actions can be constructed from a given set of effect axioms. Finally, we extend the current situation calculus with order-sorted logic. In the new formalism, we add sort theories to the usual initial theories to describe taxonomies of objects. We then investigate what well-sortedness means for BATs in such a framework. We consider extending the current regression operator with well-sortedness checking and unification techniques. With the modified regression, we gain computational efficiency by terminating the regression earlier when reasoning tasks are ill-sorted and by reducing the search spaces for well-sorted objects. We also study the connection between the order-sorted situation calculus and the current situation calculus.

Books on the topic "Computational Advantage"

1

Vickery, J. R. Food Science and Technology in Australia. CSIRO Publishing, 1990. http://dx.doi.org/10.1071/9780643105003.

Abstract:
The main purpose of this book is to give food technologists in industry and students in training a comprehensive review of research findings by Australian workers in government, university and industrial laboratories from 1900 to 1990. To further its aims as a reference book, detailed bibliographies of some 1400 research papers have been compiled, particularly for the period prior to access to references through databases. Another aim was to draw attention to the many contributions which brought international recognition to their authors, particularly those who did not have the advantages of modern separation, analytical and computational techniques.
2

Brown, Andrew R., Damián Keller, and Maria Helena de Lima. How Ubiquitous Technologies Support Ubiquitous Music. Edited by Brydie-Leigh Bartleet and Lee Higgins. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780190219505.013.5.

Abstract:
Pervasive computing technologies are providing opportunities and challenges for new musical practices and offering greater access to musical interactions for people at all levels of musical experience. In this chapter we review theoretical insights and practical experiences of taking advantage of these opportunities and meeting these challenges; we describe how to leverage ubiquitous technologies to support ubiquitous music; and we discuss ideas and techniques that can assist in ensuring that social music activities provide an appropriate variety of experiences and strategies to maximize socially positive and musically creative outcomes. Strategies include starting with what is known and available, enhancing human skills with computational automation, and increasing participation through simplification to improve access and promote cultures of open sharing. Three case studies illustrate how these ideas are put into practice, covering experiences from across the world based in varied social contexts and using differing technologies, but sharing the same ambition of enhancing everyday experience through musical interactions mediated by pervasive technologies.
3

Coveney, Peter V., and Shunzhou Wan. Molecular Dynamics: Probability and Uncertainty. Oxford University Press, 2025. https://doi.org/10.1093/9780198893479.001.0001.

Abstract:
This book explores the intersection of molecular dynamics (MD) simulation with advanced probabilistic methodologies to address the inherent uncertainties in the approach. Beginning with a clear and comprehensible introduction to classical mechanics, the book transitions into the probabilistic formulation of MD, highlighting the importance of ergodic theory, kinetic theory, and unstable periodic orbits, concepts which are largely unknown to current practitioners within the domain. It discusses ensemble-based simulations, free energy calculations and the study of polymer nanocomposites using multi-scale modelling, providing detailed guidance on performing reliable and reproducible computations. A thorough discussion of Verification, Validation, and Uncertainty Quantification (VVUQ) lays out a definitive approach to formulating the uncertainty of MD modelling and simulation. Its interaction with artificial intelligence is examined in the light of these issues. While machine learning (ML) methods offer some advantages and less often-noted drawbacks, the integration of ML with physics-based MD simulations may in future enhance our ability to predict new drugs and advanced materials. The final chapter, ‘The End of Certainty’, synthesizes these themes, advocating a systematic and careful approach to computational research and the embrace of uncertainty as an integral part of innovation. This book serves as a highly original, conceptual and comprehensible guide for researchers and practitioners, emphasizing the need for rapid, accurate, precise and actionable techniques in the rapidly evolving field of molecular dynamics.
4

Allen, Colin, Peter M. Todd, and Jonathan M. Weinberg. Reasoning and Rationality. Edited by Eric Margolis, Richard Samuels, and Stephen P. Stich. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780195309799.013.0003.

Abstract:
The article explores five strands of Cartesian thought (individualism, internalism, rationalism, universalism, and human exceptionalism) that underpin philosophical and psychological theories of rationality. Ecological rationality comes about through the coadaptation of minds and their environments. The internal bounds, comprising the capacities of the cognitive system, can be shaped by evolution, learning, or development to take advantage of the structure of the external environment. The external bounds, comprising the structure of information available in the environment, can be shaped by the effects of minds making decisions in the world, including, most notably in humans, the process of cultural evolution. The internal constraints on decision-making, including limited computational power and limited memory in the organism, and the external ones, including limited time, push toward simple cognitive mechanisms for making decisions quickly and without much information. Human exceptionalism is the strand of Residual Cartesianism that puts the greatest focus on language and symbolic reasoning as the basis for human rationality. The invention of symbolic systems shows how humans deliberately and creatively alter their environments to enhance learning and memory and to support reasoning. Nonhuman animals also alter their environments in ways that support adaptive behavior. Stigmergy, an important mechanism for swarm intelligence, is the product of interactions among multiple agents and their environments; it is enhanced through cumulative modification of the environment by individuals.
5

Jemielniak, Dariusz. Thick Big Data. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198839705.001.0001.

Abstract:
The social sciences are becoming datafied. Questions that were once considered the domain of sociologists are now answered by data scientists operating on large datasets and breaking with methodological tradition, for better or worse. The traditional social sciences, such as sociology or anthropology, are thus under the double threat of becoming marginalized or even irrelevant: both because of the new methods of research, which require more computational skills, and because of the increasing competition from the corporate world, which gains an additional advantage based on data access. However, sociologists and anthropologists still have some important assets, too. Unlike data scientists, they have a long history of doing qualitative research. The more quantified datasets we have, the more difficult it is to interpret them without adding layers of qualitative interpretation. Big Data needs Thick Data. This book presents the available arsenal of new tools for studying society quantitatively, but also shows new methods of analysis from the qualitative side and encourages their combination. It shows that Big Data can and should be supplemented and interpreted through thick data, as well as cultural analysis, in the novel approach of Thick Big Data. The book is critically important for students and researchers in the social sciences who want to understand the possibilities of digital analysis, both in the quantitative and qualitative areas, and to successfully build mixed-methods approaches.
6

Patisaul, Heather B., and Scott M. Belcher. The Path Forward. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780199935734.003.0008.

Abstract:
This chapter focuses on the contemporary approaches of research being used to understand the actions of EDCs and emerging high-throughput screening approaches to examine new and existing chemicals for endocrine-disrupting activities. Concepts arising from the 2007 NRC report “Toxicity Testing in the 21st Century: A Vision and a Strategy” are delineated and the ongoing development of predictive computational toxicology approaches are addressed. The screening strategies being developed under the Tox21 and Toxicity Forecaster (ToxCast) programs are described, with a review of advantages, challenges, and progress to date. There is a brief overview of the EPA’s Interactive Chemical Safety for Sustainability (iCSS) Dashboard as a portal for accessing the ToxCast data through ToxCastDB, and the EPA’s Aggregated Computational Toxicology data warehouse (ACToR), which contains all publicly available EPA chemical toxicity data. Additional challenges related to the inability of current screening approaches to address complex physiology involved in neuroendocrine disruption are addressed.
7

Flarend, Alice, and Robert Hilborn. Quantum Computing: From Alice to Bob. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780192857972.001.0001.

Abstract:
Quantum Computing: From Alice to Bob provides a distinctive and accessible introduction to the rapidly growing fields of quantum information science (QIS) and quantum computing (QC). The book is designed for undergraduate students and upper-level secondary school students with little or no background in physics, computer science, or mathematics beyond secondary school algebra and trigonometry. While broadly accessible, the book provides a solid conceptual and formal understanding of quantum states and entanglement—the key ingredients in quantum computing. The authors give detailed treatments of many of the classic quantum algorithms that demonstrate how and when QC has an advantage over classical computers. The book provides a solid explanation of the physics of QC and QIS and then weds that knowledge to the mathematics of QC algorithms and how those algorithms deploy the principles of quantum physics to solve the problem. This book connects the physics concepts, the computer science vocabulary, and the mathematics, providing a complete picture of how QIS and QC work. The authors give multiple representations of the concept—textual, graphical, and symbolic (state vectors, matrices, and Dirac notation)—which are the lingua franca of QIS and QC. Those multiple representations allow the readers to develop a broader and deeper understanding of the fundamental concepts and their applications. In addition, the book provides examples of recent experimental demonstrations of quantum teleportation and the applications of quantum computational chemistry. The last chapter connects to the growing commercial world of QC and QIS and provides recommendations for further study.
8

Anderson, James A. Computing Hardware. Oxford University Press, 2018. http://dx.doi.org/10.1093/acprof:oso/9780199357789.003.0002.

Abstract:
Chapter 2 presents a kind of computation currently unfamiliar to most, the analog computer. Fifty years ago, they were considered viable competitors to the newer digital computer. Analog computers compute by the use of physical analogs, using, for example, voltages, currents, or shaft positions to represent numbers. They compute using the device properties, not logic. Examples include the balance, a simple device known for millennia; the “Antikythera mechanism,” a complex astronomical calculator from the first century BC; the slide rule; the US Navy’s Mark I fire control computer used for much of the 20th century to aim naval gunfire; and electronic analog computers built in large numbers after World War II. Analog computers can have advantages in ruggedness, simplicity, and reliability but lack the flexibility of digital computers.
9

Allwein, Gerard, and Jon Barwise. Logical Reasoning with Diagrams. Oxford University Press, 1996. http://dx.doi.org/10.1093/oso/9780195104271.001.0001.

Abstract:
One effect of information technology is the increasing need to present information visually. The trend raises intriguing questions. What is the logical status of reasoning that employs visualization? What are the cognitive advantages and pitfalls of this reasoning? What kinds of tools can be developed to aid in the use of visual representation? This newest volume in the Studies in Logic and Computation series addresses the logical aspects of the visualization of information. The authors of these specially commissioned papers explore the properties of diagrams, charts, and maps, and their use in problem solving and teaching basic reasoning skills. As computers make visual representations more commonplace, it is important for professionals, researchers and students in computer science, philosophy, and logic to develop an understanding of these tools; this book can clarify the relationship between visuals and information.
10

Green, Peter, Kanti Mardia, Vysaul Nyirongo, and Yann Ruffieux. Bayesian modelling for matching and alignment of biomolecules. Edited by Anthony O'Hagan and Mike West. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198703174.013.2.

Abstract:
This article describes Bayesian modelling for matching and alignment of biomolecules. One particular task where statistical modelling and inference can be useful in scientific understanding of protein structure is that of matching and alignment of two or more proteins. In this regard, statistical shape analysis potentially has something to offer in solving biomolecule matching and alignment problems. The article discusses the use of Bayesian methods for shape analysis to assist with understanding the three-dimensional structure of protein molecules, with a focus on the problem of matching instances of the same structure in the CoMFA (Comparative Molecular Field Analysis) database of steroid molecules. It introduces a Bayesian hierarchical model for pairwise matching and for alignment of multiple configurations before concluding with an overview of some advantages of the Bayesian approach to problems in protein bioinformatics, along with modelling and computation issues, alternative approaches, and directions for future research.

Book chapters on the topic "Computational Advantage"

1

McCloskey, Scott. "Computational Imaging." In Multimedia Forensics. Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7621-5_3.

Abstract:
Since the advent of smartphones, photography is increasingly being done with small, portable, multi-function devices. Relative to the purpose-built cameras that dominated previous eras, smartphone cameras must overcome challenges related to their small form factor. Smartphone cameras have small apertures that produce a wide depth of field, small sensors with rolling shutters that lead to motion artifacts, and small form factors which lead to more camera shake during exposure. Along with these challenges, smartphone cameras have the advantage of tight integration with additional sensors and the availability of significant computational resources. For these reasons, the field of computational imaging has advanced significantly in recent years, with academic groups and researchers from smartphone manufacturers helping these devices become more capable replacements for purpose-built cameras.
2

Lipinski, Piotr, Dariusz Puchala, Agnieszka Wosiak, and Liliana Byczkowska-Lipinska. "Transformer Monitoring System Taking Advantage of Hybrid Wavelet Fourier Transform." In Studies in Computational Intelligence. Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-78490-6_4.

3

Gauffre, Guillaume, and Emmanuel Dubois. "Taking Advantage of Model-Driven Engineering Foundations for Mixed Interaction Design." In Studies in Computational Intelligence. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-14562-9_11.

4

Xu, Jin. "Graphs and Computational Complexity." In Biological Computing. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-3870-3_2.

Abstract:
Chapter 1 pointed out that NP-complete problems are a “stumbling block” hindering the development of today’s technology. Due to the natural advantage of DNA computing’s parallelism in solving NP-complete problems, research on DNA computing over the past decades has mainly focused on solving NP-complete problems. Considering that many NP-complete problems are graph theory problems, this chapter first introduces some basic knowledge in graph theory; then, it gradually reveals the true nature of NP-complete problems and introduces some related theories of NP-complete problems, especially computational complexity theory.
5

Sakas, Damianos P., Dimitrios P. Reklitis, and Panagiotis Trivellas. "Digital Marketing Strategy for Competitive Advantage Acquisition Through Neuromarketing in the Logistics Sector." In Computational and Strategic Business Modelling. Springer International Publishing, 2024. http://dx.doi.org/10.1007/978-3-031-41371-1_10.

6

Margetts, Helen, and Cosmina Dorobantu. "Computational Social Science for Public Policy." In Handbook of Computational Social Science for Policy. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-16624-2_1.

Abstract:
Computational Social Science (CSS), which brings together the power of computational methods and the analytical rigour of the social sciences, has the potential to revolutionise policymaking. This growing field of research can help governments take advantage of large-scale data on human behaviour and provide policymakers with insights into where policy interventions are needed, which interventions are most likely to be effective, and how to avoid unintended consequences. In this chapter, we show how Computational Social Science can improve policymaking by detecting, measuring, predicting, explaining, and simulating human behaviour. We argue that the improvements that CSS can bring to government are conditional on making ethical considerations an integral part of the process of scientific discovery. CSS has an opportunity to reveal bias and inequalities in public administration and a responsibility to tackle them by taking advantage of research advancements in ethics and responsible innovation. Finally, we identify the primary factors that prevented Computational Social Science from realising its full potential during the Covid-19 pandemic and posit that overcoming challenges linked to limited data flows, siloed models, and rigid organisational structures within government can usher in a new era of policymaking.
7

Ad-Dalain, Farah Niaz, Mohammad Subhi Ahmad Abu Al Hayaja’a, and Louai A. Maghrabi. "The Impact of Innovation on Achieving Competitive Advantage: A Case Study of Jordan Telecom Group—Orange." In Studies in Computational Intelligence. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57242-5_28.

8

Rashwan, Abdul Rahman Mohammed Suleiman, and Zainab Abd-Elhafiz Ahmed Kassem. "The Role of Digital Transformation in Increasing the Efficiency of Banks’ Performance to Enhance Competitive Advantage." In Studies in Computational Intelligence. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-73057-4_25.

9

Masadeh, Majed Abdul-Mahdi, Sabha Maria, Nawaf Al-Qaawneh, and Salem Aljazzar. "The Impact of Strategic Thinking on Enhancing Competitive Advantage (A Case Study: Jordanian Pharmaceutical Manufacturing Companies)." In Studies in Computational Intelligence. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43300-9_32.

10

Schatz, Markus, Robert Schweikle, Christian Lausch, Michael Jentsch, and Werner Konrad. "Taking Advantage of 3D Printing So as to Simultaneously Reduce Weight and Mechanical Bonding Stress." In Computational Methods in Applied Sciences. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-89890-2_21.


Conference papers on the topic "Computational Advantage"

1

Bergamaschi, Thiago, Chi-Fang Chen, and Yunchao Liu. "Quantum Computational Advantage with Constant-Temperature Gibbs Sampling." In 2024 IEEE 65th Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2024. http://dx.doi.org/10.1109/focs61266.2024.00071.

2

Varga, János, and Ágnes Csiszárik-Kocsir. "Small Business Digital Solutions for Competitive Advantage in Operations." In 2024 IEEE 18th International Symposium on Applied Computational Intelligence and Informatics (SACI). IEEE, 2024. http://dx.doi.org/10.1109/saci60582.2024.10619761.

3

Ting, Kai-Ting, Benjamin Scheck, Daniel Feldkhun, Josh Combes, and Kelvin Wagner. "Interferometric Multi-beam Photon-Counting Enhanced Lidar." In Digital Holography and Three-Dimensional Imaging. Optica Publishing Group, 2024. http://dx.doi.org/10.1364/dh.2024.m2b.2.

Abstract:
A non-redundant array of frequency-shifted beams is transmitted to overlap and interfere on the target for structured-illumination computational lidar imaging. Time-tagged photons are detected and Fourier analyzed to demonstrate a quantum advantage of multi-beam interferometry.
4

Adey, Robert A., Andres Peratta, and John M. W. Baynham. "Application of Computational Modeling to Predict the Effectiveness of CP on a PCCP Transmission Pipeline." In CORROSION 2011. NACE International, 2011. https://doi.org/10.5006/c2011-11002.

Abstract:
Prestressed Concrete Cylinder Pipe (PCCP) is a rigid pipe designed to take optimum advantage of the tensile strength of steel and of the compressive strength and corrosion-inhibiting properties of concrete, and is frequently used for water transmission. PCCP consists of a steel cylinder embedded in a concrete core, which is helically wrapped with high-strength, hard-drawn wire after curing. The wire is embedded in thick cement slurry and coated with a dense cement mortar. While the cement mortar and additional coatings usually protect the prestressing wires from corrosion, in certain circumstances chlorides can diffuse into the mortar and reach the wires. Therefore, PCCP transmission pipelines can also be protected by CP systems to mitigate the risk of corrosion damage when chlorides are high in the soil. This paper describes a computer modeling study which was designed to determine the protection provided by a CP system, to evaluate different design options, and to optimize the design. Results will be presented showing the model predictions for the different cases considered.
5

Blauch, Nicholas, Marlene Behrmann, and David Plaut. "Visual Expertise and the Familiar Face Advantage." In 2019 Conference on Cognitive Computational Neuroscience. Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1414-0.

6

Zaridis, Apostolos D., George Maroulis, and Theodore E. Simos. "Competitive Advantage and its Sources in an Evolving Market." In COMPUTATIONAL METHODS IN SCIENCE AND ENGINEERING: Advances in Computational Science: Lectures presented at the International Conference on Computational Methods in Sciences and Engineering 2008 (ICCMSE 2008). AIP, 2009. http://dx.doi.org/10.1063/1.3225468.

7

Bavajigari, Sai Kiran Kanchari, and Chanan Singh. "Investigation of Computational Advantage of using Importance sampling in Monte Carlo Simulation." In 2019 North American Power Symposium (NAPS). IEEE, 2019. http://dx.doi.org/10.1109/naps46351.2019.9000323.

8

Liu, Ziqiong, Wei Li, and Ying Huang. "SPEA2-NIA: A Strength Pareto Evolutionary Algorithm with a Neighborhood Interval Advantage Indicator." In 9th International Conference on Computational Intelligence and Security. Research Publishing (S) Pte. Ltd., 2023. http://dx.doi.org/10.3850/978-981-18-8813-7_p144-cd.

9

Liu, Li. "The Effects of Manufacturing Firm's Supply Chain Performance on Competitive Advantage." In 2011 Fourth International Joint Conference on Computational Sciences and Optimization (CSO). IEEE, 2011. http://dx.doi.org/10.1109/cso.2011.264.

10

Rusert, Jonathan. "VertAttack: Taking Advantage of Text Classifiers’ Horizontal Vision." In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.naacl-long.41.


Reports on the topic "Computational Advantage"

1

Zheng, Jinhui, Matteo Ciantia, and Jonathan Knappett. On the efficiency of coupled discrete-continuum modelling analyses of cemented materials. University of Dundee, 2021. http://dx.doi.org/10.20933/100001236.

Abstract:
The computational load of discrete element modelling (DEM) simulations is known to increase with the number of particles. To improve computational efficiency, hybrid methods using continuum elements in the far field have been developed to decrease the number of discrete particles required in the model. In the present work, the performance of such coupling methods is investigated. In particular, the coupled wall method, known as the "wall-zone" method when coupling DEM and the continuum finite difference method (FDM) using the Itasca commercial codes PFC and FLAC respectively, is analysed here. To determine the accuracy and efficiency of this coupling approach, 3-point bending tests of cemented materials are simulated numerically. To validate the coupling accuracy, the elastic response of the beam is considered first. The advantage of employing such a coupling method is then investigated by loading the beam until failure. Finally, comparing the results between the DEM, coupled DEM-FDM and FDM models, the advantages and disadvantages of each method are outlined.
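
For the elastic validation step mentioned above, the standard Euler-Bernoulli benchmark for a 3-point bending test is the midspan deflection of a simply supported beam under a central load (a textbook relation stated for orientation; the report's own validation details are not reproduced):

```latex
\delta_{\max} = \frac{F L^{3}}{48\,E I}
```

where F is the central load, L the span between supports, E the Young's modulus and I the second moment of area of the cross-section.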
2

Pasupuleti, Murali Krishna. Quantum-Enhanced Machine Learning: Harnessing Quantum Computing for Next-Generation AI Systems. National Education Services, 2025. https://doi.org/10.62311/nesx/rrv125.

Abstract:
Quantum-enhanced machine learning (QML) represents a paradigm shift in artificial intelligence by integrating quantum computing principles to solve complex computational problems more efficiently than classical methods. By leveraging quantum superposition, entanglement, and parallelism, QML has the potential to accelerate deep learning training, optimize combinatorial problems, and enhance feature selection in high-dimensional spaces. This research explores foundational quantum computing concepts relevant to AI, including quantum circuits, variational quantum algorithms, and quantum kernel methods, while analyzing their impact on neural networks, generative models, and reinforcement learning. Hybrid quantum-classical AI architectures, which combine quantum subroutines with classical deep learning models, are examined for their ability to provide computational advantages in optimization and large-scale data processing. Despite the promise of quantum AI, challenges such as qubit noise, error correction, and hardware scalability remain barriers to full-scale implementation. This study provides an in-depth evaluation of quantum-enhanced AI, highlighting existing applications, ongoing research, and future directions in quantum deep learning, autonomous systems, and scientific computing. The findings contribute to the development of scalable quantum machine learning frameworks, offering novel solutions for next-generation AI systems across finance, healthcare, cybersecurity, and robotics.

Keywords: Quantum machine learning, quantum computing, artificial intelligence, quantum neural networks, quantum kernel methods, hybrid quantum-classical AI, variational quantum algorithms, quantum generative models, reinforcement learning, quantum optimization, quantum advantage, deep learning, quantum circuits, quantum-enhanced AI, quantum deep learning, error correction, quantum-inspired algorithms, quantum annealing, probabilistic computing.
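
As a concrete illustration of the quantum kernel methods listed above, a minimal statevector sketch (a toy single-qubit feature map chosen for illustration; it is not taken from the report):

```python
import numpy as np

def feature_state(x):
    """Single-qubit feature map |phi(x)> = RY(x)|0> = (cos(x/2), sin(x/2))."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x, y):
    """Fidelity kernel k(x, y) = |<phi(x)|phi(y)>|^2 = cos^2((x - y) / 2)."""
    return float(np.abs(feature_state(x) @ feature_state(y)) ** 2)

# Gram matrix over a few data points, usable by any classical kernel method.
pts = [0.0, 0.5, 1.0]
K = np.array([[quantum_kernel(a, b) for b in pts] for a in pts])
```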
3

Parekh, Ojas, John Kallaugher, Kevin Thompson, Yipu Wang, and Cynthia Phillips. Unconventional Quantum Advantages for Computation (U-QuAC). Office of Scientific and Technical Information (OSTI), 2024. http://dx.doi.org/10.2172/2462899.

4

Hunter, R., S. Ross, and Jing-Ru Cheng. A general-purpose multiplatform GPU-accelerated ray tracing API. Engineer Research and Development Center (U.S.), 2023. http://dx.doi.org/10.21079/11681/47260.

Abstract:
Real-time ray tracing is an important tool in computational research. Among other things, it is used to model sensors for autonomous vehicle simulation, efficiently simulate radiative energy propagation, and create effective data visualizations. However, ray tracing libraries currently offered for GPU platforms have a high level of complexity to facilitate the detailed configuration needed by gaming engines and high-fidelity renderers. A researcher wishing to take advantage of the performance gains offered by the GPU for simple ray casting routines would need to learn how to use these ray tracing libraries. Additionally, they would have to adapt this code to each GPU platform they run on. Therefore, a C++ API has been developed that exposes simple ray casting endpoints that are implemented in GPU-specific code for several contemporary device platforms. This API currently supports the NVIDIA OptiX ray tracing library, Vulkan, AMD Radeon Rays, and even Intel Embree. Benchmarking tests using this API provide insight to help users determine the optimal backend library to select for their ray tracing needs. HPC research will be well served by the ability to perform general-purpose ray tracing on the increasing number of graphics and machine learning nodes offered by the DoD High Performance Computing Modernization Program.
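
A minimal sketch of the kind of ray casting endpoint such an API exposes (an illustrative ray-sphere intersection in Python, not the ERDC C++ API; all names here are hypothetical):

```python
import numpy as np

def cast_ray(origin, direction, center, radius):
    """Distance along the ray to the nearest sphere hit, or None on a miss.
    Solves ||o + t*d - c||^2 = r^2 for the smallest positive t."""
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    b = 2.0 * np.dot(d, oc)
    c = np.dot(oc, oc) - radius**2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

hit = cast_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]),
               np.array([0.0, 0.0, 5.0]), 1.0)   # expect t = 4.0
```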
5

Smith, John, Aaron Hill, Leah Reeder, et al. Neuromorphic scaling advantages for energy-efficient random walk computations. Office of Scientific and Technical Information (OSTI), 2020. http://dx.doi.org/10.2172/1671377.

6

Apesteguia, Jose, Miguel A. Ballester, and Ángelo Gutiérrez-Daza. Random Discounted Expected Utility. Banco de México, 2024. http://dx.doi.org/10.36095/banxico/di.2024.03.

Abstract:
This paper introduces the random discounted expected utility (RDEU) model, which we have developed as a means to deal with heterogeneous risk and time preferences. The RDEU model provides an explicit linkage between preference and choice heterogeneity. We prove it has solid comparative statics, discuss its identification, and demonstrate its computational convenience. Finally, we use two distinct experimental datasets to illustrate the advantages of the RDEU model over common alternatives for estimating heterogeneity in preferences across individuals.
7

Idakwo, Gabriel, Sundar Thangapandian, Joseph Luttrell, Zhaoxian Zhou, Chaoyang Zhang, and Ping Gong. Deep learning-based structure-activity relationship modeling for multi-category toxicity classification : a case study of 10K Tox21 chemicals with high-throughput cell-based androgen receptor bioassay data. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/41302.

Abstract:
Deep learning (DL) has attracted the attention of computational toxicologists as it offers a potentially greater power for in silico predictive toxicology than existing shallow learning algorithms. However, contradicting reports have been documented. To further explore the advantages of DL over shallow learning, we conducted this case study using two cell-based androgen receptor (AR) activity datasets with 10K chemicals generated from the Tox21 program. A nested double-loop cross-validation approach was adopted along with a stratified sampling strategy for partitioning chemicals of multiple AR activity classes (i.e., agonist, antagonist, inactive, and inconclusive) at the same distribution rates amongst the training, validation and test subsets. Deep neural networks (DNN) and random forest (RF), representing deep and shallow learning algorithms, respectively, were chosen to carry out structure-activity relationship-based chemical toxicity prediction. Results suggest that DNN significantly outperformed RF (p < 0.001, ANOVA) by 22–27% for four metrics (precision, recall, F-measure, and AUPRC) and by 11% for another (AUROC). Further in-depth analyses of chemical scaffolding shed insights on structural alerts for AR agonists/antagonists and inactive/inconclusive compounds, which may aid in future drug discovery and improvement of toxicity prediction modeling.
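
A minimal sketch of the nested, stratified cross-validation scheme described above, using scikit-learn (illustrative; the study's own models, descriptors and hyperparameter grids are not reproduced):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=500, n_classes=4, n_informative=8, random_state=0)

inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)  # hyperparameter tuning loop
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # unbiased performance loop

search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid={"n_estimators": [100, 300]},
                      cv=inner, scoring="f1_macro")
scores = cross_val_score(search, X, y, cv=outer, scoring="f1_macro")
print(scores.mean())
```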
8

Young, David, Sean McGill, Ashley Elkins, Marin Kress, Brandan Scully, and Rachel Bain. New metrics for managing waterways : vessel encroachment volume for selected South Atlantic Division ports. Engineer Research and Development Center (U.S.), 2024. http://dx.doi.org/10.21079/11681/49216.

Abstract:
The US Army Corps of Engineers (USACE) uses two metrics to evaluate maintenance for coastal navigation projects: cargo tonnage at the associated port and the controlling depth in the channel relative to the authorized channel depth. These are calculated through normal business practices and describe the relative importance (tonnage) of the port and the operating condition (controlling depth) of the channel. They are incorporated into a risk-based decision framework that directs funds to locations where channel conditions have deteriorated. Using Automatic Identification System (AIS) vessel-position data, USACE is pioneering the computation of metrics related to the space between the hull of transiting vessels and the waterway bed for channels, the underkeel clearance. This and related metrics describe how waterway users take advantage of the service provided directly by USACE (maintained channel depth). This study compares the underkeel clearance metrics among 13 ports in the South Atlantic Division over a span of 3 years by combining marine vessel AIS data, tidal predictions, channel bathymetric surveys, and vessel sailing draft. Comparing these values across ports allows these metrics to be integrated into the decision framework that drives dredge funding allocations.
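
A minimal sketch of the core quantity behind these metrics (illustrative Python; the USACE computation from AIS tracks, tide predictions and surveyed bathymetry is far more involved, and the field names and 1.5 m buffer here are assumptions):

```python
def underkeel_clearance(channel_depth_m, tide_m, draft_m):
    """Water under the keel: available depth minus the vessel's sailing draft."""
    return (channel_depth_m + tide_m) - draft_m

def encroachment(ukc_m, buffer_m=1.5):
    """How far a transit intrudes into a safety buffer of `buffer_m` of
    clearance (0.0 when the vessel stays outside the buffer)."""
    return max(0.0, buffer_m - ukc_m)

ukc = underkeel_clearance(channel_depth_m=12.8, tide_m=0.9, draft_m=12.2)  # 1.5 m
print(ukc, encroachment(ukc))
```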
9

Tan, Peng, and Nicholas Sitar. Parallel Level-Set DEM (LS-DEM) Development and Application to the Study of Deformation and Flow of Granular Media. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, 2023. http://dx.doi.org/10.55461/kmiz5819.

Abstract:
We present a systematic investigation of computational approaches to the modeling of granular materials. Granular materials are ubiquitous in everyday life and in a variety of engineering and industrial applications. Despite the apparent simplicity of the laws governing particle-scale interactions, predicting the continuum mechanical response of granular materials still poses extraordinary challenges. This is largely due to the complex history dependence resulting from continuous rearrangement of the microstructure of granular material, as well as the mechanical interlocking due to grain morphology and surface roughness. X-Ray Computed Tomography (XRCT) is used to characterize the grain morphology and the fabric of the granular media, naturally deposited sand in this study. The Level-Set based Discrete Element Method (LS-DEM) is then used to bridge the granular behavior gap between the micro and macro scale. The LS-DEM establishes a one-to-one correspondence between granular objects and numerical avatars and captures the details of grain morphology and surface roughness. However, the high-fidelity representation significantly increases the demands on computational resources. To this end, a parallel version of LS-DEM is introduced to significantly decrease the computational demands. The code employs a binning algorithm, which reduces the search complexity of contact detection from O(n²) to O(n), and a domain decomposition strategy is used to elicit parallel computing in a memory- and communication-efficient manner. The parallel implementation shows good scalability and efficiency. High-fidelity LS avatars obtained from XRCT images of naturally deposited sand are then used to replicate the results of triaxial tests using the new, parallel LS-DEM code. The results show that both micro- and macro-mechanical behavior of natural material is well captured and is consistent with experimental data, confirming the experimental observation that the primary source of peak strength of sand is the mechanical interlocking between irregularly shaped grains. Specifically, triaxial test simulations with a flexible membrane produce a very good match to experimentally observed relationships between deviatoric stress and mobilized friction angle for naturally deposited sand. We then explore the viability of modeling dynamic problems with a new formulation of an impulse-based LS-DEM. The new formulation is stable, fast, and energy conservative. However, it can be numerically stiff when the assembly has substantial mass differences between particles. We also demonstrate the feasibility of modeling deformable structures in the rigid body framework and propose several enhancements to improve the convergence of collision resolution, including a hybrid time integration scheme to separately handle at-rest contacts and dynamic collisions. Finally, we extend the impulse-based LS-DEM to include arbitrarily shaped topographic surfaces and exploit its algorithmic advantages to demonstrate the feasibility of modeling realistic behavior of granular flows. The novel formulation significantly improves the performance of dynamic simulations by allowing larger time steps, which is advantageous for observing the full development of physical phenomena such as rock avalanches, which we present as an illustrative example.
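
A minimal sketch of the binning idea that reduces contact-detection search from O(n²) to O(n): hash each particle to a grid cell and test only pairs in the same or neighboring cells (illustrative Python on point centers; the cell size is an assumption, and the detailed level-set contact check is omitted):

```python
from collections import defaultdict
from itertools import product
import numpy as np

def candidate_contacts(centers, cell):
    """Yield index pairs whose grid cells touch; only these pairs need the
    expensive narrow-phase (e.g., level-set) contact check."""
    grid = defaultdict(list)
    for i, p in enumerate(centers):
        grid[tuple((p // cell).astype(int))].append(i)
    for key, members in grid.items():
        for offset in product((-1, 0, 1), repeat=3):        # 27 neighboring cells
            other = tuple(k + o for k, o in zip(key, offset))
            for i in members:
                for j in grid.get(other, ()):
                    if i < j:
                        yield i, j

pairs = set(candidate_contacts(np.random.rand(1000, 3), cell=0.1))
```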
10

Stebbing, Nicola, Claire Witham, Frances Beckett, Helen Webster, Lois Huggett, and David Thomson. Can we improve plume dispersal modelling for fire related emergency response operations by utilising short-range dispersion schemes? Met Office, 2024. http://dx.doi.org/10.62998/wnnr5415.

Abstract:
Large fires that produce plumes of smoke and other contaminants can cause harm to both people and the environment. To support UK emergency responders, the Met Office Environmental Monitoring and Response Centre (EMARC) provides dedicated weather advice and forecasts of the plume in the form of CHEmical METeorological (CHEMET) reports. The plume’s expected location, extent and relative air concentrations of pollutants are predicted using the Numerical Atmospheric-dispersion Modelling Environment (NAME), which simulates the transport and dispersion of pollutants using numerical weather prediction data. During major fires, air quality monitoring equipment is deployed to confirm the presence of elevated concentrations of contaminants. We use ground-level air concentration measurements from multiple events to evaluate the operational set-up of NAME. We investigate both the output averaging depth used to calculate air concentrations and the use of three optional NAME schemes that are designed to improve the representation of short-range dispersal dynamics: the near-source scheme, the plume-rise scheme, and the urban scheme. We find that using the current operational output averaging depth of 100 m produces model air concentrations that compare best to point observations at the surface, and that using the near-source and urban schemes further improves the fit. However, using these more computationally expensive schemes has little impact on the modelled location and extent of the plume, suggesting they may offer no advantage over using the current operational set-up to produce CHEMETs. Using the plume-rise scheme strongly influences the predicted plume location, extent and surface concentrations. Further work is needed to understand whether its application is appropriate for simulating plumes from fires. We conclude that the current operational set-up can be maintained while the significance of the impact the optional schemes have on CHEMET plume dispersal forecasts is considered further.