Journal articles on the topic 'Von Neumann, Algoritmo de'

Consult the top 50 journal articles for your research on the topic 'Von Neumann, Algoritmo de.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Urquhart, Alasdair. "Von Neumann, Gödel and Complexity Theory." Bulletin of Symbolic Logic 16, no. 4 (December 2010): 516–30. http://dx.doi.org/10.2178/bsl/1294171130.

Abstract:
Around 1989, a striking letter written in March 1956 from Kurt Gödel to John von Neumann came to light. It poses some problems about the complexity of algorithms; in particular, it asks a question that can be seen as the first formulation of the P = ? NP question. This paper discusses some of the background to this letter, including von Neumann's own ideas on complexity theory. Von Neumann had already raised explicit questions about the complexity of Tarski's decision procedure for elementary algebra and geometry in a letter of 1949 to J. C. C. McKinsey. The paper concludes with a discussion of why theoretical computer science did not emerge as a separate discipline until the 1960s.
2

Braga, Henrique Costa, Gray Farias Moita, and Paulo Eduardo Maciel de Almeida. "Comparação entre os Algoritmos de Busca pela Vizinhança de Von Neumann ou de Moore para Geração do Mapa de Distâncias em um Ambiente Construído." Abakós 4, no. 2 (May 19, 2016): 20. http://dx.doi.org/10.5752/p.2316-9451.2016v4n2p20.

Abstract:
There are several search algorithms in the literature, but the few dedicated to simulations of large buildings generally lack detailed description, as well as a quantitative analysis of the errors introduced by their use in obtaining distance maps. Thus, this work presents, step by step, a pathfinder search algorithm specifically suited to simulations in built environments, considering two variations of it according to the neighborhood searched: von Neumann or Moore. The two variations of the algorithm were computationally implemented, and several experiments were carried out in order to better understand their characteristics, such as the generated distance map and the inherent errors. It was verified that the algorithm presented here has several important characteristics, such as logical simplicity, automatic operation, independence from both the size and the internal or external layout of the building under study, and low computational cost for a non-dynamic application. However, the variation considering the Moore neighborhood provided the best results, with the smallest error in determining distances (mean error of +5.8% and maximum pointwise error of +7.9% in the examples studied).
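As an illustration of the two neighborhoods compared in this abstract, the sketch below builds a distance map by breadth-first flood fill from the exits of a small grid, once with von Neumann (4-connected) and once with Moore (8-connected) offsets. It is a minimal Python sketch, not the authors' algorithm; the grid, exits, and function names are invented for the example.

```python
from collections import deque

# Neighborhood offsets: von Neumann (4-connected) vs. Moore (8-connected).
VON_NEUMANN = [(-1, 0), (1, 0), (0, -1), (0, 1)]
MOORE = VON_NEUMANN + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def distance_map(grid, exits, offsets):
    """Breadth-first flood fill: steps from every free cell to the nearest exit.
    grid[r][c] == 0 means free space, 1 means wall; exits is a list of (r, c)."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    for r, c in exits:
        dist[r][c] = 0
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in offsets:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

room = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(distance_map(room, exits=[(0, 0)], offsets=VON_NEUMANN))
print(distance_map(room, exits=[(0, 0)], offsets=MOORE))
```

Note that counting steps makes von Neumann paths overestimate straight-line distances along diagonals, while Moore counts a diagonal step as one unit; this difference is the kind of error the paper quantifies.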
3

Haney, Matthew M. "Generalization of von Neumann analysis for a model of two discrete half-spaces: The acoustic case." GEOPHYSICS 72, no. 5 (September 2007): SM35–SM46. http://dx.doi.org/10.1190/1.2750639.

Abstract:
Evaluating the performance of finite-difference algorithms typically uses a technique known as von Neumann analysis. For a given algorithm, application of the technique yields both a dispersion relation valid for the discrete time-space grid and a mathematical condition for stability. In practice, a major shortcoming of conventional von Neumann analysis is that it can be applied only to an idealized numerical model — that of an infinite, homogeneous whole space. Experience has shown that numerical instabilities often arise in finite-difference simulations of wave propagation at interfaces with strong material contrasts. These interface instabilities occur even though the conventional von Neumann stability criterion may be satisfied at each point of the numerical model. To address this issue, I generalize von Neumann analysis for a model of two half-spaces. I perform the analysis for the case of acoustic wave propagation using a standard staggered-grid finite-difference numerical scheme. By deriving expressions for the discrete reflection and transmission coefficients, I study under what conditions the discrete reflection and transmission coefficients become unbounded. I find that instabilities encountered in numerical modeling near interfaces with strong material contrasts are linked to these cases and develop a modified stability criterion that takes into account the resulting instabilities. I test and verify the stability criterion by executing a finite-difference algorithm under conditions predicted to be stable and unstable.
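For readers unfamiliar with the conventional (whole-space) von Neumann analysis that this paper generalizes, the snippet below computes the amplification factor of the classic first-order upwind scheme for scalar advection and checks |G| ≤ 1 over all Fourier modes. This is a textbook illustration of the technique under my own choice of scheme, not the staggered-grid acoustic analysis of the paper.

```python
import numpy as np

def upwind_growth_factor(courant, theta):
    """Amplification factor G(theta) of the first-order upwind scheme
    u_j^{n+1} = u_j^n - C (u_j^n - u_{j-1}^n) applied to a Fourier mode e^{i j theta}."""
    return 1.0 - courant * (1.0 - np.exp(-1j * theta))

thetas = np.linspace(0, np.pi, 181)
for C in (0.5, 1.0, 1.5):
    max_growth = np.max(np.abs(upwind_growth_factor(C, thetas)))
    print(f"Courant number {C}: max |G| = {max_growth:.3f}",
          "(stable)" if max_growth <= 1.0 + 1e-12 else "(unstable)")
```

The scheme is stable exactly for Courant numbers 0 ≤ C ≤ 1, which the printed maxima confirm; the paper's point is that such a whole-space criterion can still miss instabilities at material interfaces.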
4

Wu, Xiu Mei, Tao Zi Si, and Lei Jiang. "Stable Computer Control Algorithm of Von Neumann Model." Advanced Materials Research 634-638 (January 2013): 4026–29. http://dx.doi.org/10.4028/www.scientific.net/amr.634-638.4026.

Abstract:
The problem of a computer control algorithm for the singular von Neumann input-output model is studied. A new mathematical method is applied to analyze the singular systems without converting them into general systems. A stability condition under which the singular input-output model is admissible is proved in the form of a linear matrix inequality. Based on this, a new state feedback stability criterion is established. Then the formula for a desired state feedback controller is derived.
5

Franchetti, C., and S. M. Holland. "Two extensions of the alternating algorithm of von Neumann." Annali di Matematica Pura ed Applicata 139, no. 1 (December 1985): i. http://dx.doi.org/10.1007/bf01766864.

6

Franchetti, C., and W. Light. "On the von Neumann alternating algorithm in Hilbert space." Journal of Mathematical Analysis and Applications 114, no. 2 (March 1986): 305–14. http://dx.doi.org/10.1016/0022-247x(86)90085-5.

7

Gonçalves, João P. M., Robert H. Storer, and Jacek Gondzio. "A family of linear programming algorithms based on an algorithm by von Neumann." Optimization Methods and Software 24, no. 3 (June 2009): 461–78. http://dx.doi.org/10.1080/10556780902797236.

8

Li, J., and N. Ansari. "Enhanced Birkhoff–von Neumann decomposition algorithm for input queued switches." IEE Proceedings - Communications 148, no. 6 (2001): 339. http://dx.doi.org/10.1049/ip-com:20010618.

9

Devroye, Luc, and Claude Gravel. "The expected bit complexity of the von Neumann rejection algorithm." Statistics and Computing 27, no. 3 (March 26, 2016): 699–710. http://dx.doi.org/10.1007/s11222-016-9648-z.

10

LOU, YIJUN, FANGYUE CHEN, and JUNBIAO GUAN. "FINGERPRINT FEATURE EXTRACTION VIA CNN WITH VON NEUMANN NEIGHBORHOOD." International Journal of Bifurcation and Chaos 17, no. 11 (November 2007): 4145–51. http://dx.doi.org/10.1142/s0218127407019676.

Abstract:
In this paper, we study fingerprint feature extraction via CNN with Von Neumann neighborhood. The extraction was implemented using a CNN with nine input variables, and we find that the process can also be implemented with only five variables and a simpler algorithm, without compromising the effectiveness. According to the CNN model with five input variables and the corresponding CNN gene bank produced by Chen et al. [2006, Http1], we can determine the CNN gene easily. We also find that some results in one of the references are incorrect.
11

Khan, Sajid, Lansheng Han, Ghulam Mudassir, Bachira Guehguih, and Hidayat Ullah. "3C3R, an Image Encryption Algorithm Based on BBI, 2D-CA, and SM-DNA." Entropy 21, no. 11 (November 2, 2019): 1075. http://dx.doi.org/10.3390/e21111075.

Abstract:
Color image encryption has attracted a lot of attention in recent years. Many authors have proposed chaotic system-based encryption algorithms for that purpose. However, due to the shortcomings of low-dimensional chaotic systems, a similar rule structure for the RGB channels, and a small keyspace, many of those were cryptanalyzed by chosen-plaintext or other well-known attacks. A security vulnerability exists because the same method is applied over the RGB channels. This paper introduces a new three-channel three-rules (3C3R) image encryption algorithm along with two novel mathematical models for the DNA rule generator and bit inversion. A different rule structure is applied in each of the RGB channels. In the R-channel, a novel Block-based Bit Inversion (BBI) is introduced; in the G-channel, a Von-Neumann (VN) and Rotated Von-Neumann (RVN)-based 2D-cellular structure is applied. In the B-channel, a novel bidirectional State Machine-based DNA rule generator (SM-DNA) is introduced. Simulations and results show that the proposed 3C3R encryption algorithm is robust against all well-known attacks, particularly known-plaintext attacks, statistical attacks, brute-force attacks, differential attacks, and occlusion attacks. Also, unlike earlier encryption algorithms, 3C3R has no security vulnerability.
12

Coluccio, Andrea, Marco Vacca, and Giovanna Turvani. "Logic-in-Memory Computation: Is It Worth It? A Binary Neural Network Case Study." Journal of Low Power Electronics and Applications 10, no. 1 (February 22, 2020): 7. http://dx.doi.org/10.3390/jlpea10010007.

Abstract:
Recently, the Logic-in-Memory (LiM) concept has been widely studied in the literature. This paradigm represents one of the most efficient ways to overcome the limitations of a von Neumann architecture: by placing simple logic circuits inside or near a memory element, it is possible to obtain local computation without the need to fetch data from the main memory. Although this concept introduces a lot of advantages from a theoretical point of view, its implementation could introduce an increasing complexity overhead in the memory itself, leading to a more sophisticated design flow. As a case study, Binary Neural Networks (BNNs) have been chosen. BNNs binarize both weights and inputs, transforming multiply-and-accumulate into a simpler bitwise logical operation while maintaining high accuracy, making them well suited for a LiM implementation. In this paper, we present two circuits implementing a BNN model in CMOS technology. The first one, called Out-Of-Memory (OOM) architecture, is implemented following a standard von Neumann structure. The same architecture was then redesigned to adapt the critical part of the algorithm for a modified memory, which is also capable of executing logic calculations. By comparing the OOM and LiM architectures, we aim to evaluate whether the Logic-in-Memory paradigm is worth it. The results highlight that LiM architectures have a clear advantage over von Neumann architectures, allowing a reduction in energy consumption while increasing the overall speed of the circuit.
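The bitwise trick that BNNs exploit can be shown in a few lines: with weights and activations constrained to {-1, +1} and encoded as bit words (1 for +1, 0 for -1), a dot product reduces to an XOR (equivalently, XNOR) followed by a population count. A small self-checking Python sketch with invented example values:

```python
# Binarized dot product: with weights/activations in {-1, +1} encoded as bits
# (1 -> +1, 0 -> -1), sum(w_i * x_i) = n - 2 * popcount(w XOR x).
def binary_dot(w_bits, x_bits, n):
    return n - 2 * bin(w_bits ^ x_bits).count("1")

n = 8
w = 0b10110010
x = 0b10010011
assert binary_dot(w, x, n) == sum(
    (1 if (w >> i) & 1 else -1) * (1 if (x >> i) & 1 else -1) for i in range(n))
print(binary_dot(w, x, n))  # 4
```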
13

Peña, Javier, Daniel Rodríguez, and Negar Soheili. "On the von Neumann and Frank–Wolfe Algorithms with Away Steps." SIAM Journal on Optimization 26, no. 1 (January 2016): 499–512. http://dx.doi.org/10.1137/15m1009937.

14

Suzuki, Hideaki. "Multiple von Neumann Computers: An Evolutionary Approach to Functional Emergence." Artificial Life 3, no. 2 (April 1997): 121–42. http://dx.doi.org/10.1162/artl.1997.3.2.121.

Abstract:
A novel system composed of multiple von Neumann computers and an appropriate problem environment is proposed and simulated. Each computer has a memory to store the machine instruction program, and when a program is executed, a series of machine codes in the memory is sequentially decoded, leading to register operations in the central processing unit (CPU). By means of these operations, the computer not only can handle its generally used registers but also can read and write the environmental database. Simulation is driven by genetic algorithms (GAs) performed on the population of program memories. Mutation and crossover create program diversity in the memory, and selection facilitates the reproduction of appropriate programs. Through these evolutionary operations, advantageous combinations of machine codes are created and fixed in the population one by one, and the higher function, which enables the computer to calculate an appropriate number from the environment, finally emerges in the program memory. In the latter half of the article, the performance of GAs on this system is studied. Under different sets of parameters, the evolutionary speed, which is determined by the time until the domination of the final program, is examined and the conditions for faster evolution are clarified. At an intermediate mutation rate and at an intermediate population size, crossover helps create novel advantageous sets of machine codes and evidently accelerates optimization by GAs.
15

BORRAS, A., M. CASAS, A. R. PLASTINO, and A. PLASTINO. "SOME ENTANGLEMENT FEATURES OF HIGHLY ENTANGLED MULTIQUBIT STATES." International Journal of Quantum Information 06, supp01 (July 2008): 605–11. http://dx.doi.org/10.1142/s0219749908003840.

Abstract:
We explore some basic entanglement features of multiqubit systems that are relevant for the development of algorithms for searching highly entangled states. In particular, we compare the behaviours of multiqubit entanglement measures based (i) on the von Neumann entropy of marginal density matrices and (ii) on the linear entropy of those matrices.
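The two entanglement measures compared in this abstract are straightforward to evaluate for a concrete state. The hedged sketch below computes the von Neumann entropy S(ρ) = -Tr(ρ log₂ ρ) and the linear entropy 1 - Tr(ρ²) of the one-qubit marginal of a Bell state; the helper names and the example state are my own choices for illustration.

```python
import numpy as np

def entropies(rho):
    """Von Neumann entropy S = -Tr(rho log2 rho) and linear entropy 1 - Tr(rho^2)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros before the log
    s_vn = -np.sum(evals * np.log2(evals))
    s_lin = 1.0 - np.trace(rho @ rho).real
    return s_vn, s_lin

# Two-qubit Bell state (|00> + |11>)/sqrt(2); marginal density matrix of qubit A.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_full = np.outer(psi, psi.conj())
rho_a = np.trace(rho_full.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(entropies(rho_a))   # approximately (1.0, 0.5): both maximal for one qubit
```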
16

Balsara, Dinshaw S. "von Neumann stability analysis of smoothed particle hydrodynamics—suggestions for optimal algorithms." Journal of Computational Physics 121, no. 2 (October 1995): 357–72. http://dx.doi.org/10.1016/s0021-9991(95)90221-x.

17

Haddad, Caroline N., and George J. Habetler. "Projective algorithms for solving complementarity problems." International Journal of Mathematics and Mathematical Sciences 29, no. 2 (2002): 99–113. http://dx.doi.org/10.1155/s0161171202007056.

Abstract:
We present robust projective algorithms of the von Neumann type for the linear complementarity problem and for the generalized linear complementarity problem. The methods, an extension of Projections Onto Convex Sets (POCS), are applied to a class of problems consisting of finding the intersection of closed nonconvex sets. We give conditions under which convergence occurs (always in 2 dimensions, and in practice, in higher dimensions) when the matrices are P-matrices (though not necessarily symmetric or positive definite). We provide numerical results with comparisons to Projective Successive Over Relaxation (PSOR).
18

Li, Jun-yi, Yi-ding Zhao, Jian-hua Li, and Xiao-jun Liu. "Artificial Bee Colony Optimizer with Bee-to-Bee Communication and Multipopulation Coevolution for Multilevel Threshold Image Segmentation." Mathematical Problems in Engineering 2015 (2015): 1–23. http://dx.doi.org/10.1155/2015/272947.

Abstract:
This paper proposes a modified artificial bee colony optimizer (MABC) by combining bee-to-bee communication pattern and multipopulation cooperative mechanism. In the bee-to-bee communication model, with the enhanced information exchange strategy, individuals can share more information from the elites through the Von Neumann topology. With the multipopulation cooperative mechanism, the hierarchical colony with different topologies can be structured, which can maintain diversity of the whole community. The experimental results on comparing the MABC to several successful EA and SI algorithms on a set of benchmarks demonstrated the advantage of the MABC algorithm. Furthermore, we employed the MABC algorithm to resolve the multilevel image segmentation problem. Experimental results of the new method on a variety of images demonstrated the performance superiority of the proposed algorithm.
19

Ou, Qiao-Feng, Bang-Shu Xiong, Lei Yu, Jing Wen, Lei Wang, and Yi Tong. "In-Memory Logic Operations and Neuromorphic Computing in Non-Volatile Random Access Memory." Materials 13, no. 16 (August 10, 2020): 3532. http://dx.doi.org/10.3390/ma13163532.

Abstract:
Recent progress in the development of artificial intelligence technologies, aided by deep learning algorithms, has led to an unprecedented revolution in neuromorphic circuits, bringing us ever closer to brain-like computers. However, the vast majority of advanced algorithms still have to run on conventional computers. Thus, their capacities are limited by what is known as the von-Neumann bottleneck, where the central processing unit for data computation and the main memory for data storage are separated. Emerging forms of non-volatile random access memory, such as ferroelectric random access memory, phase-change random access memory, magnetic random access memory, and resistive random access memory, are widely considered to offer the best prospect of circumventing the von-Neumann bottleneck. This is due to their ability to merge storage and computational operations, such as Boolean logic. This paper reviews the most common kinds of non-volatile random access memory and their physical principles, together with their relative pros and cons when compared with conventional CMOS-based circuits (Complementary Metal Oxide Semiconductor). Their potential application to Boolean logic computation is then considered in terms of their working mechanism, circuit design and performance metrics. The paper concludes by envisaging the prospects offered by non-volatile devices for future brain-inspired and neuromorphic computation.
20

Duan, Xuefeng, and Chunmei Li. "A New Iterative Algorithm for Solving a Class of Matrix Nearness Problem." ISRN Computational Mathematics 2012 (December 8, 2012): 1–6. http://dx.doi.org/10.5402/2012/126908.

Abstract:
Based on the alternating projection algorithm, which was proposed by Von Neumann to treat the problem of finding the projection of a given point onto the intersection of two closed subspaces, we propose a new iterative algorithm to solve the matrix nearness problem associated with the matrix equations AXB=E, CXD=F, which arises frequently in experimental design. If we choose the initial iterative matrix X0=0, the least Frobenius norm solution of these matrix equations is obtained. Numerical examples show that the new algorithm is feasible and effective.
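Von Neumann's alternating projection method, which this paper builds on, is easy to demonstrate in the finite-dimensional case: repeatedly projecting onto two subspaces converges to the projection of the starting point onto their intersection. A minimal numpy sketch with invented subspaces (not the paper's matrix-equation setting):

```python
import numpy as np

def project_onto_span(A):
    """Return the orthogonal projector onto the column span of A."""
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

P1 = project_onto_span(np.array([[1., 0.], [0., 1.], [0., 0.]]))  # xy-plane
P2 = project_onto_span(np.array([[1., 0.], [0., 1.], [0., 1.]]))  # span{e1, e2+e3}

x = np.array([3.0, -1.0, 2.0])
for _ in range(50):            # alternate the two projections
    x = P2 @ (P1 @ x)
print(np.round(x, 6))          # -> [3. 0. 0.]: the projection onto the
                               # intersection of the subspaces (the x-axis)
```

The convergence factor is governed by the angle between the subspaces; here fifty iterations are far more than enough.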
21

Larrabee, Allan R. "The P4 Parallel Programming System, the Linda Environment, and Some Experiences with Parallel Computation." Scientific Programming 2, no. 3 (1993): 23–35. http://dx.doi.org/10.1155/1993/817634.

Abstract:
The first digital computers consisted of a single processor acting on a single stream of data. In this so-called "von Neumann" architecture, computation speed is limited mainly by the time required to transfer data between the processor and memory. This limiting factor has been referred to as the "von Neumann bottleneck". The concern that the miniaturization of silicon-based integrated circuits will soon reach theoretical limits of size and gate times has led to increased interest in parallel architectures and also spurred research into alternatives to silicon-based implementations of processors. Meanwhile, sequential processors continue to be produced with increased clock rates, more memory locally available to a processor, and higher rates at which data can be transferred to and from memories, networks, and remote storage. The efficiency of compilers and operating systems is also improving over time. Although such characteristics limit maximum performance, a large improvement in the speed of scientific computations can often be achieved by utilizing more efficient algorithms, particularly those that support parallel computation. This work discusses experiences with two tools for large-grain (or "macro task") parallelism.
22

Bakaev, Maxim, and Olga Razumnikova. "What Makes a UI Simple? Difficulty and Complexity in Tasks Engaging Visual-Spatial Working Memory." Future Internet 13, no. 1 (January 19, 2021): 21. http://dx.doi.org/10.3390/fi13010021.

Abstract:
Tasks that imply engagement of visual-spatial working memory (VSWM) are common in interaction with two-dimensional graphical user interfaces. In our paper, we consider two groups of factors as predictors of user performance in such tasks: (1) metrics based on compression algorithms (RLE and Deflate) plus Hick's law, which are known to be characteristic of visual complexity, and (2) metrics based on the Gestalt grouping principle of proximity, operationalized as von Neumann and Moore range-1 neighborhoods from cellular automata theory. We involved 88 subjects who performed about 5000 VSWM-engaging tasks and 78 participants who assessed the complexity of the tasks' configurations. We found that the Gestalt-based predictors had a notable advantage over the visual complexity-based ones, as the memorized chunks best corresponded to von Neumann neighborhood grouping. The latter was further used in the formulation of an index of difficulty and throughput for VSWM-engaging tasks, which we proposed by analogy with the well-known Fitts' law. In our experimental study, throughput amounted to 3.75 bit/s, and we believe that it can be utilized for comparing and assessing UI designs.
23

Ji, Hao, Michael Mascagni, and Yaohang Li. "Convergence Analysis of Markov Chain Monte Carlo Linear Solvers Using Ulam–von Neumann Algorithm." SIAM Journal on Numerical Analysis 51, no. 4 (January 2013): 2107–22. http://dx.doi.org/10.1137/130904867.

24

Wako, Jun. "A Polynomial-Time Algorithm to Find von Neumann-Morgenstern Stable Matchings in Marriage Games." Algorithmica 58, no. 1 (February 11, 2010): 188–220. http://dx.doi.org/10.1007/s00453-010-9388-y.

25

Pinkus, Allan, and Daniel Wulbert. "The multi-dimensional Von Neumann alternating direction search algorithm in C(B) and L1." Journal of Functional Analysis 104, no. 1 (February 1992): 121–48. http://dx.doi.org/10.1016/0022-1236(92)90093-x.

26

Saxena, Vishal, Xinyu Wu, Ira Srivastava, and Kehan Zhu. "Towards Neuromorphic Learning Machines Using Emerging Memory Devices with Brain-Like Energy Efficiency." Journal of Low Power Electronics and Applications 8, no. 4 (October 2, 2018): 34. http://dx.doi.org/10.3390/jlpea8040034.

Abstract:
The ongoing revolution in Deep Learning is redefining the nature of computing that is driven by the increasing amount of pattern classification and cognitive tasks. Specialized digital hardware for deep learning still holds its predominance due to the flexibility offered by the software implementation and maturity of algorithms. However, it is being increasingly desired that cognitive computing occurs at the edge, i.e., on hand-held devices that are energy constrained, which is energy prohibitive when employing digital von Neumann architectures. Recent explorations in digital neuromorphic hardware have shown promise, but offer low neurosynaptic density needed for scaling to applications such as intelligent cognitive assistants (ICA). Large-scale integration of nanoscale emerging memory devices with Complementary Metal Oxide Semiconductor (CMOS) mixed-signal integrated circuits can herald a new generation of Neuromorphic computers that will transcend the von Neumann bottleneck for cognitive computing tasks. Such hybrid Neuromorphic System-on-a-chip (NeuSoC) architectures promise machine learning capability at chip-scale form factor, and several orders of magnitude improvement in energy efficiency. Practical demonstration of such architectures has been limited as performance of emerging memory devices falls short of the expected behavior from the idealized memristor-based analog synapses, or weights, and novel machine learning algorithms are needed to take advantage of the device behavior. In this article, we review the challenges involved and present a pathway to realize large-scale mixed-signal NeuSoCs, from device arrays and circuits to spike-based deep learning algorithms with ‘brain-like’ energy-efficiency.
27

HERNANDEZ, GONZALO, HANS J. HERRMANN, and ERIC GOLES. "EXTREMAL AUTOMATA FOR IMAGE SHARPENING." International Journal of Modern Physics C 05, no. 06 (December 1994): 923–31. http://dx.doi.org/10.1142/s0129183194001057.

Abstract:
We study numerically the parallel iteration of Extremal Rules. For four Extremal Rules, conceived for sharpening algorithms for image processing, we measured, on the square lattice with Von Neumann neighborhood and free boundary conditions, the typical transient length, the loss of information and the damage spreading response considering random and smoothening random damage. The same qualitative behavior was found for all the rules, with no noticeable finite size effect. They have a fast logarithmic convergence towards the fixed points of the parallel update. The linear damage spreading response has no discontinuity at zero damage, for both kinds of damage. Three of these rules produce similar effects. We propose these rules as sharpening algorithms for image processing.
28

Balu, Radhakrishnan, Dale Shires, and Raju Namburu. "A quantum algorithm for uniform sampling of models of propositional logic based on quantum probability." Journal of Defense Modeling and Simulation: Applications, Methodology, Technology 16, no. 1 (May 17, 2016): 57–65. http://dx.doi.org/10.1177/1548512916648232.

Abstract:
We describe a class of quantum algorithms to generate models of propositional logic with equal probability. We consider quantum stochastic flows that are the quantum analogues of classical Markov chains and establish a relation between fixed points on the two flows. We construct chains inspired by von Neumann algorithms using uniform measures as fixed points to construct the corresponding irreversible quantum stochastic flows. We formulate sampling models of propositions in the framework of adiabatic quantum computing and solve the underlying satisfiability instances. Satisfiability formulation is an important and successful technique in modeling the decision theoretic problems in a classical context. We discuss some features of the proposed algorithms tested on an existing quantum annealer D-Wave II extending the simulation of decision theoretic problems to a quantum context.
29

Zhang, Yudong, Shuihua Wang, and Genlin Ji. "A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications." Mathematical Problems in Engineering 2015 (2015): 1–38. http://dx.doi.org/10.1155/2015/931256.

Abstract:
Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presented a comprehensive investigation of PSO. On one hand, we provided advances with PSO, including its modifications (including quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topology (as fully connected, von Neumann, ring, star, random, etc.), hybridization (with genetic algorithm, simulated annealing, Tabu search, artificial immune system, ant colony algorithm, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we offered a survey on applications of PSO to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey would be beneficial for the researchers studying PSO algorithms.
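Among the population topologies this survey lists, the von Neumann topology arranges particles on a 2D torus so that each particle has four neighbors. A minimal local-best PSO sketch using that topology on the sphere function; all parameter values and names here are illustrative choices, not prescriptions from the survey:

```python
import numpy as np

def von_neumann_neighbors(index, rows, cols):
    """Neighbors of a particle on a 2D torus grid (the von Neumann PSO topology)."""
    r, c = divmod(index, cols)
    return [((r - 1) % rows) * cols + c, ((r + 1) % rows) * cols + c,
            r * cols + (c - 1) % cols, r * cols + (c + 1) % cols]

rng = np.random.default_rng(0)
rows, cols, dim = 4, 5, 10
n = rows * cols
x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), np.sum(x**2, axis=1)   # sphere function values
for _ in range(200):
    for i in range(n):
        hood = von_neumann_neighbors(i, rows, cols) + [i]
        lbest = pbest[hood[np.argmin(pbest_f[hood])]]  # best neighbor memory
        v[i] = 0.72 * v[i] + 1.49 * rng.random(dim) * (pbest[i] - x[i]) \
                           + 1.49 * rng.random(dim) * (lbest - x[i])
        x[i] += v[i]
    f = np.sum(x**2, axis=1)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
print(pbest_f.min())
```

The grid topology slows information spread relative to the fully connected (gbest) variant, which is why the literature surveyed here associates it with better diversity maintenance.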
30

Dong, Shu Xia, and Liang Tang. "Dynamic Neighborhood Particle Swarm Optimization Based on External Archive." Applied Mechanics and Materials 333-335 (July 2013): 1374–78. http://dx.doi.org/10.4028/www.scientific.net/amm.333-335.1374.

Abstract:
To address the defect of basic particle swarm optimization falling into local optima when dealing with multimodal problems, a dynamic neighborhood particle swarm optimization with an external archive (EA-DPSO) is proposed. The ring, all, and von Neumann topologies are adopted; particle historical optimal positions are dynamically refined and then stored in the external archive. Based on the characteristics of the particles in the external archive, an effective extraction mechanism is designed to choose learning samples. Three multimodal benchmark functions are chosen for simulation, and the results show that EA-DPSO can effectively jump out of local optimal solutions. It can therefore be seen as an effective algorithm for solving multimodal problems.
31

PEYRÉ, GABRIEL, LÉNAÏC CHIZAT, FRANÇOIS-XAVIER VIALARD, and JUSTIN SOLOMON. "Quantum entropic regularization of matrix-valued optimal transport." European Journal of Applied Mathematics 30, no. 6 (September 28, 2017): 1079–102. http://dx.doi.org/10.1017/s0956792517000274.

Abstract:
This article introduces a new notion of optimal transport (OT) between tensor fields, which are measures whose values are positive semidefinite (PSD) matrices. This “quantum” formulation of optimal transport (Q-OT) corresponds to a relaxed version of the classical Kantorovich transport problem, where the fidelity between the input PSD-valued measures is captured using the geometry of the Von-Neumann quantum entropy. We propose a quantum-entropic regularization of the resulting convex optimization problem, which can be solved efficiently using an iterative scaling algorithm. This method is a generalization of the celebrated Sinkhorn algorithm to the quantum setting of PSD matrices. We extend this formulation and the quantum Sinkhorn algorithm to compute barycentres within a collection of input tensor fields. We illustrate the usefulness of the proposed approach on applications to procedural noise generation, anisotropic meshing, diffusion tensor imaging and spectral texture synthesis.
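The quantum Sinkhorn method described here generalizes the classical scaling loop, which is worth seeing in its scalar form: alternately rescale the Gibbs kernel so the transport plan matches the two marginals. A short classical sketch (my own toy setup; the quantum, PSD-valued case replaces these elementwise divisions with matrix operations):

```python
import numpy as np

def sinkhorn(mu, nu, cost, eps=0.05, iters=500):
    """Classical Sinkhorn iterations for entropy-regularized optimal transport."""
    K = np.exp(-cost / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)                  # alternately enforce the two
        u = mu / (K @ v)                    # marginal constraints
    return u[:, None] * K * v[None, :]      # transport plan

x = np.linspace(0, 1, 50)
mu = np.exp(-(x - 0.2)**2 / 0.01); mu /= mu.sum()
nu = np.exp(-(x - 0.7)**2 / 0.02); nu /= nu.sum()
plan = sinkhorn(mu, nu, cost=(x[:, None] - x[None, :])**2)
print(plan.sum(axis=1)[:3], mu[:3])         # row sums match the source marginal
```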
32

Braga, Henrique Costa, Gray Farias Moita, and Paulo Eduardo Maciel de Almeida. "Mapas de distâncias para a segurança contra incêndio em edifícios de interesse social." PARC Pesquisa em Arquitetura e Construção 8, no. 1 (March 30, 2017): 32. http://dx.doi.org/10.20396/parc.v8i1.8647977.

Abstract:
Some social-interest housing (Habitações de Interesse Social, HIS), by its nature, may have its fire and panic safety and firefighting process simplified as much as possible in order to help reduce the cost of the building to a minimum. Because of these simplifications, additional safety studies, including performance studies, become especially relevant. The distance map is a technological instrument that can contribute to knowledge of building safety, providing information such as the maximum distance to be traveled to the nearest exit, the pointwise distance from any position in the environment to the exit, and the best movement routes. It also allows the time spent in movement to be estimated. This work defines what distance maps are and how they can be generated. Two variations of them are presented, and the more advantageous one to use is established. It was verified that the distance map generated by the Moore neighborhood (VM) search algorithm with random selection produces more realistic routes than those obtained by the von Neumann neighborhood (VVN) search and other selection criteria. A five-story HIS is then described in detail, and a distance map is generated for it. From this distance map, various data about the building are evaluated, such as the maximum distance to be traveled, the escape route, and the estimated escape time in case of emergency. Several discussions on the fire safety of the building are presented.
33

RABELO, WILSON R. M., ALEXANDRE G. RODRIGUES, and REINALDO O. VIANNA. "AN ALGORITHM TO PERFORM POVMS THROUGH NEUMARK THEOREM: APPLICATION TO THE DISCRIMINATION OF NON-ORTHOGONAL PURE QUANTUM STATES." International Journal of Modern Physics C 17, no. 08 (August 2006): 1203–18. http://dx.doi.org/10.1142/s0129183106008911.

Abstract:
We consider a protocol to perform the optimal quantum state discrimination of N linearly independent non-orthogonal pure quantum states and present a computational code. Through the extension of the original Hilbert space, it is possible to perform a unitary operation yielding a final configuration, which gives the best discrimination without ambiguity by means of von Neumann measurements. Our goal is to introduce a detailed general mathematical procedure to realize this task by means of semidefinite programming and norm minimization. The former is used to determine the best detection probability amplitude for each state of the ensemble. The latter determines the matrix which leads the states to the final configuration. In a final step, we decompose the unitary transformation into a sequence of two-level rotation matrices.
34

Tsotniashvili, Soso, and David Zarnadze. "Selfadjoint Operators and Generalized Central Algorithms in Frechet Spaces." gmj 13, no. 2 (June 2006): 363–82. http://dx.doi.org/10.1515/gmj.2006.363.

Abstract:
The paper gives an extension of the fundamental principles of selfadjoint operators in Fréchet–Hilbert spaces, countable-Hilbert and nuclear Fréchet spaces. Generalizations of the well-known theorems of von Neumann, Hellinger–Toeplitz, Friedrichs and Ritz are obtained. Definitions of generalized central and generalized spline algorithms are given. The restriction 𝐴∞ of a selfadjoint operator 𝐴 defined on a dense set 𝐷(𝐴) of the Hilbert space 𝐻 to the Fréchet space 𝐷(𝐴∞) is substantiated. The extended Ritz method is used for obtaining an approximate solution of the equation 𝐴∞𝑢 = 𝑓 in the Fréchet space 𝐷(𝐴∞). It is proved that approximate solutions of this equation constructed by the extended Ritz method do not depend on the number of norms that generate the topology of the space 𝐷(𝐴∞). Hence this approximate method is both a generalized central and a generalized spline algorithm. Examples of selfadjoint and positive definite elliptic differential operators satisfying the above conditions are given. The validity of the theoretical results in the case of a harmonic oscillator operator is confirmed by numerical calculations.
35

Kim, Choongmin, Jacob A. Abraham, Woochul Kang, and Jaeyong Chung. "A Neural Network Decomposition Algorithm for Mapping on Crossbar-Based Computing Systems." Electronics 9, no. 9 (September 18, 2020): 1526. http://dx.doi.org/10.3390/electronics9091526.

Abstract:
Crossbar-based neuromorphic computing to accelerate neural networks is a popular alternative to conventional von Neumann computing systems. It is also referred to as processing-in-memory and in-situ analog computing. The crossbars have a fixed number of synapses per neuron, and it is necessary to decompose neurons to map networks onto the crossbars. This paper proposes the k-spare decomposition algorithm that can trade off the predictive performance against the neuron usage during the mapping. The proposed algorithm performs a two-level hierarchical decomposition. In the first, global decomposition, it decomposes the neural network such that each crossbar has k spare neurons. These neurons are used to improve the accuracy of the partially mapped network in the subsequent local decomposition. Our experimental results using modern convolutional neural networks show that the proposed method can improve the accuracy substantially within about 10% extra neurons.
36

Daskin, Anmer, Ananth Grama, and Sabre Kais. "Quantum random state generation with predefined entanglement constraint." International Journal of Quantum Information 12, no. 05 (August 2014): 1450030. http://dx.doi.org/10.1142/s0219749914500300.

Abstract:
Entanglement plays an important role in quantum communication, algorithms, and error correction. Schmidt coefficients are correlated to the eigenvalues of the reduced density matrix. These eigenvalues are used in von Neumann entropy to quantify the amount of the bipartite entanglement. In this paper, we map the Schmidt basis and the associated coefficients to quantum circuits to generate random quantum states. We also show that it is possible to adjust the entanglement between subsystems by changing the quantum gates corresponding to the Schmidt coefficients. In this manner, random quantum states with predefined bipartite entanglement amounts can be generated using random Schmidt basis. This provides a technique for generating equivalent quantum states for given weighted graph states, which are very useful in the study of entanglement, quantum computing, and quantum error correction.
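The relation between Schmidt coefficients and the eigenvalues of the reduced density matrix that this abstract relies on can be computed directly: reshape a bipartite state vector into a d_A × d_B matrix; its singular values are the Schmidt coefficients, and their squares are the reduced-state eigenvalues entering the von Neumann entropy. A small numpy illustration with a random state (my own toy example):

```python
import numpy as np

rng = np.random.default_rng(1)
psi = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
psi /= np.linalg.norm(psi)                  # random two-qubit state, d_A = d_B = 2

coeffs = np.linalg.svd(psi, compute_uv=False)   # Schmidt coefficients
eigs = coeffs**2                                # eigenvalues of the reduced state
print("Schmidt coefficients:", coeffs)
print("entanglement entropy:", -np.sum(eigs * np.log2(eigs)))
```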
37

Bieniasz, Lesław K. "The von Neumann stability of finite-difference algorithms for the electrochemical kinetic simulation of diffusion coupled with homogeneous reactions." Journal of Electroanalytical Chemistry 345, no. 1-2 (February 1993): 13–25. http://dx.doi.org/10.1016/0022-0728(93)80466-u.

38

Zhou, Jiarui, Junshan Yang, Ling Lin, Zexuan Zhu, and Zhen Ji. "A Local Best Particle Swarm Optimization Based on Crown Jewel Defense Strategy." International Journal of Swarm Intelligence Research 6, no. 1 (January 2015): 41–63. http://dx.doi.org/10.4018/ijsir.2015010103.

Abstract:
Particle swarm optimization (PSO) is a swarm intelligence algorithm well known for its simplicity and high efficiency on various optimization problems. Conventional PSO suffers from premature convergence due to its rapid convergence speed and lack of population diversity, and it easily gets trapped in local optima, which largely deteriorates its performance. It is natural to detect stagnation during the optimization and reactivate the swarm to search towards the global optimum. In this work the authors impose the reflecting bound-handling scheme and von Neumann topology on PSO to increase the population diversity. A novel Crown Jewel Defense (CJD) strategy is also introduced to restart the swarm when it is trapped in a local optimum. The resultant algorithm, named LCJDPSO-rfl, is tested on a group of unimodal and multimodal benchmark functions with rotation and shifting, and compared with other state-of-the-art PSO variants. The experimental results demonstrate the stability and efficiency of LCJDPSO-rfl on most of the functions.
39

Abbott, Andrew. "The Traditional Future: A Computational Theory of Library Research." College & Research Libraries 69, no. 6 (November 1, 2008): 524–45. http://dx.doi.org/10.5860/crl.69.6.524.

Abstract:
I argue that library-based research should be conceived as a particular kind of research system, in contrast to more familiar systems like standard social scientific research (SSSR). Unlike SSSR, library-based research is based on nonelicited sources, which are recursively used and multiply ordered. It employs the associative algorithms of reading and browsing as opposed to the measurement algorithms of SSSR. Unlike SSSR, it is nonstandardized, nonsequential, and artisanally organized, deriving crucial power from multitasking. Taken together, these facts imply that, as a larger structure, library-based research has a neural net architecture as opposed to the von Neumann architecture of SSSR. This architecture is probably optimal, given library-based research's chief aim, which is less finding truth than filling a space of possible interpretations. From these various considerations it follows that faster is not necessarily better in library-based research, with obvious implications for library technologization. Other implications of this computational theory of library research are also explored.
40

Cantone, Domenico, Andrea De Domenico, Pietro Maugeri, and Eugenio G. Omodeo. "Complexity Assessments for Decidable Fragments of Set Theory. I: A Taxonomy for the Boolean Case*." Fundamenta Informaticae 181, no. 1 (June 30, 2021): 37–69. http://dx.doi.org/10.3233/fi-2021-2050.

Abstract:
We report on an investigation aimed at identifying small fragments of set theory (typically, sublanguages of Multi-Level Syllogistic) endowed with polynomial-time satisfiability decision tests, potentially useful for automated proof verification. Leaving out of consideration the membership relator ∈ for the time being, in this paper we provide a complete taxonomy of the polynomial and the NP-complete fragments involving, besides variables intended to range over the von Neumann set-universe, the Boolean operators ∪, ∩, \, the Boolean relators ⊆, ⊈, =, ≠, and the predicates '• = Ø' and 'Disj(•, •)', meaning 'the argument set is empty' and 'the arguments are disjoint sets', along with their opposites '• ≠ Ø' and '¬Disj(•, •)'. We also examine in detail how to test for satisfiability the formulae of six sample fragments: three sample problems are shown to be NP-complete, two to admit quadratic-time decision algorithms, and one to be solvable in linear time.
41

Kulkarni, Sourabh, Sachin Bhat, and Csaba Andras Moritz. "Architecting for Artificial Intelligence with Emerging Nanotechnology." ACM Journal on Emerging Technologies in Computing Systems 17, no. 3 (July 31, 2021): 1–33. http://dx.doi.org/10.1145/3445977.

Abstract:
Artificial Intelligence is becoming ubiquitous in products and services that we use daily. Although the domain of AI has seen substantial improvements over recent years, its effectiveness is limited by the capabilities of current computing technology. Recently, there have been several architectural innovations for AI using emerging nanotechnology. These architectures implement mathematical computations of AI with circuits that utilize physical behavior of nanodevices purpose-built for such computations. This approach leads to a much greater efficiency vs. software algorithms running on von Neumann processors or CMOS architectures, which emulate the operations with transistor circuits. In this article, we provide a comprehensive survey of these architectural directions and categorize them based on their contributions. Furthermore, we discuss the potential offered by these directions with real-world examples. We also discuss major challenges and opportunities in this field.
42

Wu, Tao, Chang Chun Liu, and Cheng He. "Fault Diagnosis of Bearings Based on KJADE and VNWOA-LSSVM Algorithm." Mathematical Problems in Engineering 2019 (December 4, 2019): 1–19. http://dx.doi.org/10.1155/2019/8784154.

Abstract:
In order to accurately diagnose the faulty parts of the rolling bearing under different operating conditions, the KJADE (Kernel Function Joint Approximate Diagonalization of Eigenmatrices) algorithm is proposed to reduce the dimensionality of the high-dimensional feature data. Then, the VNWOA (Von Neumann Topology Whale Optimization Algorithm) is used to optimize the LSSVM (Least Squares Support Vector Machine) method to diagnose the fault type of the rolling bearing. The VNWOA algorithm is used to optimize the regularization parameters and kernel parameters of LSSVM. The low-dimensional nonlinear features contained in the multidomain feature set are extracted by KJADE and compared with the results of PCA, LDA, KPCA, and JADE methods. Finally, VNWOA-LSSVM is used to identify bearing faults and compare them with LSSVM, GA-LSSVM, PSO-LSSVM, and WOA-LSSVM. The results show that the recognition rate of fault diagnosis is up to 98.67% by using VNWOA-LSSVM. The method based on KJADE and VNWOA-LSSVM can diagnose and identify fault signals more effectively and has higher classification accuracy.
43

Larkin, Eugene, Alexey Bogomolov, and Sergey Feofilov. "Stability of digital feedback control systems." MATEC Web of Conferences 161 (2018): 02004. http://dx.doi.org/10.1051/matecconf/201816102004.

Abstract:
Specific problems arising when a von Neumann-type computer is used as a feedback element are considered. It is shown that, due to the specifics of its operation, this element introduces a pure lag into the control loop, and the lag time depends on the complexity of the control algorithm. A method of evaluating the runtime between reading data from the sensors of the object under control and writing data to the actuator, based on the theory of semi-Markov processes, is proposed. Formulae for estimating the time characteristics are obtained. The lag time characteristics are used to investigate the stability of linear systems. The digital PID controller is divided into a linear part, which is realized in software, and a pure lag unit, which is realized with both hardware and software. Using the notions of amplitude and phase margins, conditions for stable system functioning are obtained. The theoretical results are confirmed by a computer experiment carried out on a third-order system.
44

de Silva, Nadish. "Efficient quantum gate teleportation in higher dimensions." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 477, no. 2251 (July 2021): 20200865. http://dx.doi.org/10.1098/rspa.2020.0865.

Abstract:
The Clifford hierarchy is a nested sequence of sets of quantum gates critical to achieving fault-tolerant quantum computation. Diagonal gates of the Clifford hierarchy and ‘nearly diagonal’ semi-Clifford gates are particularly important: they admit efficient gate teleportation protocols that implement these gates with fewer ancillary quantum resources such as magic states. Despite the practical importance of these sets of gates, many questions about their structure remain open; this is especially true in the higher-dimensional qudit setting. Our contribution is to leverage the discrete Stone–von Neumann theorem and the symplectic formalism of qudit stabilizer theory towards extending the results of Zeng et al. (2008) and Beigi & Shor (2010) to higher dimensions in a uniform manner. We further give a simple algorithm for recursively enumerating all gates of the Clifford hierarchy, a simple algorithm for recognizing and diagonalizing semi-Clifford gates, and a concise proof of the classification of the diagonal Clifford hierarchy gates due to Cui et al. (2016) for the single-qudit case. We generalize the efficient gate teleportation protocols of semi-Clifford gates to the qudit setting and prove that every third-level gate of one qudit (of any prime dimension) and of two qutrits can be implemented efficiently. Numerical evidence gathered via the aforementioned algorithms supports the conjecture that higher-level gates can be implemented efficiently.
45

Hashmi, M. S., Zainab Shehzad, Asifa Ashraf, Zhiyue Zhang, Yu-Pei Lv, Abdul Ghaffar, Mustafa Inc, and Ayman A. Aly. "A New Variant of B-Spline for the Solution of Modified Fractional Anomalous Subdiffusion Equation." Journal of Function Spaces 2021 (July 1, 2021): 1–8. http://dx.doi.org/10.1155/2021/8047727.

Abstract:
The objective of this paper is to present an efficient numerical technique for solving the time fractional modified anomalous subdiffusion equation. The anomalous diffusion equation plays a role in various branches of the biological sciences. A B-spline is a piecewise function used to draw curves and surfaces, which maintains its degree of smoothness at the connecting points and provides an active process of approximation to the limit curve. In the current attempt, a B-spline curve is used to approximate the solution curve of the time fractional modified anomalous subdiffusion equation. The process is kept simple, involving a collocation procedure at the data points. The time fractional derivative is approximated with the discretized form of the Riemann-Liouville derivative. The process results in a system of algebraic equations, which is solved using a variant of the Thomas algorithm. To ensure the convergence of the procedure, von Neumann stability analysis is carried out. The graphical and tabular display of results for the illustrated examples is presented, which confirms the efficiency of the proposed algorithm.
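The abstract mentions solving the resulting algebraic system with a variant of the Thomas algorithm; the standard form of that tridiagonal solver is sketched below on an invented 4×4 test system. This is the textbook algorithm, not necessarily the authors' variant.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d (Thomas algorithm)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# -u'' = 1 on a 4-point grid with zero boundary values (illustrative):
print(thomas_solve(a=[0, -1, -1, -1], b=[2, 2, 2, 2],
                   c=[-1, -1, -1, 0], d=[1, 1, 1, 1]))  # [2.0, 3.0, 3.0, 2.0]
```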
46

Khalid, Roszelinda, and Zuriati Ahmad Zulkarnain. "Enhanced Tight Finite Key Scheme for Quantum Key Distribution (QKD) Protocol to Authenticate Multi-Party System in Cloud Infrastructure." Applied Mechanics and Materials 481 (December 2013): 220–24. http://dx.doi.org/10.4028/www.scientific.net/amm.481.220.

Abstract:
This research introduces an enhanced tight finite key scheme for the quantum key distribution (QKD) protocol to authenticate multi-party systems in cloud infrastructure. The main attraction is to provide a secure channel between cloud clients to establish a connection among them, by applying the theories of Von Neumann and Shannon entropies and also Shor's algorithm. By generalizing these theories, we produce an enhanced tight finite key scheme for the QKD protocol to authenticate multi-party systems in cloud infrastructure. Hence we use a quantum channel and quantum key distribution together with the BB84 protocol, replacing the common channel, to distribute the key. We propose an authentication of multi-party Quantum Key Distribution (MQKD) protocol using an enhanced tight finite key scheme because it involves a number of parties in cloud infrastructure. The significance of this research is to reduce the possibility of losing a private key by producing a highly efficient key rate and attack resilience.
47

KURYSHEV, Nikolai I. "The problem of measuring the quantity of output in the input–output model by W. Leontief in modeling the trends in economic reproduction of nations and regions." Regional Economics: Theory and Practice 19, no. 8 (August 16, 2021): 1568–92. http://dx.doi.org/10.24891/re.19.8.1568.

Abstract:
Subject. This article deals with the problem of constructing a Leontief input–output matrix. Objectives. The article aims to determine the rules for constructing a Leontief input–output matrix on the basis of data on production time and quantity of product output. Methods. For the study, I used the methods of logical and mathematical analyses. Results. The article formulates the rules for constructing a Leontief input–output matrix, taking into account differences in the time of production and quantity of output, as well as the conditions for the reproduction of the resources expended. It generalizes these rules to the J. von Neumann model. Conclusions. The proposed approach to the analysis of the material mechanism of economic reproduction defines the relationship between the quantitative and cost characteristics of the production and consumption of products and resources. This relationship opens up new opportunities for the application of input–output models to create simple and accurate algorithms for identifying and predicting macroeconomic trends.
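The quantity side of the Leontief model discussed here reduces to solving x = Ax + d for gross output x, given the technical coefficient matrix A and final demand d. A tiny numpy illustration with made-up coefficients:

```python
import numpy as np

# Leontief quantity model: gross output x satisfies x = A x + d, so
# x = (I - A)^{-1} d when the spectral radius of A is below 1.
A = np.array([[0.2, 0.3],      # input coefficients: units of good i consumed
              [0.1, 0.4]])     # per unit of good j produced (illustrative numbers)
d = np.array([10.0, 5.0])      # final demand
x = np.linalg.solve(np.eye(2) - A, d)
print(x)                        # gross outputs that also reproduce the inputs used
```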
48

GROGGER, HERWIG A. "OPTIMIZED ARTIFICIAL DISSIPATION TERMS FOR ENHANCED STABILITY LIMITS." Journal of Computational Acoustics 15, no. 02 (June 2007): 235–53. http://dx.doi.org/10.1142/s0218396x07003329.

Abstract:
Finite difference approximations for the convection equation are developed which exhibit enhanced stability limits for explicit Runge–Kutta integration. Stability limits are increased by adding artificial dissipation terms, which are optimized to yield the greatest stable time steps. For the artificial dissipation terms, symmetric finite difference approximations of even-order derivatives are used, with differencing stencils equal to the convective stencils. The spatial discretization, inclusive of the added dissipation term, is shown to be consistent with a first derivative. The formal order of accuracy in space is decreased by one order, while the order of time integration is not affected. As a result, the time step limits of originally stable Runge–Kutta integration are increased, for some combinations of spatial discretization and time integration by a factor of two. Algorithms which are unstable without damping are stabilized. The dispersion properties of the algorithms are not influenced by the proposed damping terms. Spectral analysis of the algorithms shows very low dissipation error for dimensionless wave numbers kΔx < 0.5. Stability conditions based on von Neumann stability analysis are given for the proposed schemes for explicit Runge–Kutta time integration of orders up to ten.
49

Yamazaki, Ichitaro, Jakub Kurzak, Piotr Luszczek, and Jack Dongarra. "Design and Implementation of a Large Scale Tree-Based QR Decomposition Using a 3D Virtual Systolic Array and a Lightweight Runtime." Parallel Processing Letters 24, no. 04 (December 2014): 1442004. http://dx.doi.org/10.1142/s0129626414420043.

Abstract:
A systolic array provides an alternative computing paradigm to the von Neumann architecture. Though its hardware implementation has failed as a paradigm to design integrated circuits in the past, we are now discovering that the systolic array as a software virtualization layer can lead to an extremely scalable execution paradigm. To demonstrate this scalability, in this paper, we design and implement a 3D virtual systolic array to compute a tile QR decomposition of a tall-and-skinny dense matrix. Our implementation is based on a state-of-the-art algorithm that factorizes a panel based on a tree-reduction. Freed from the constraint of a planar layout, we present a three-dimensional virtual systolic array architecture for this algorithm. Using a runtime developed as a part of the Parallel Ultra Light Systolic Array Runtime (PULSAR) project, we demonstrate on a Cray-XT5 machine how our virtual systolic array can be mapped to a large-scale machine and obtain excellent parallel performance. This is an important contribution since such a QR decomposition is used, for example, to compute a least squares solution of an overdetermined system, which arises in many scientific and engineering problems.
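The tree-reduction panel factorization underlying this design can be sketched compactly: factor independent row blocks of a tall-and-skinny matrix, then factor the stacked R factors. The numpy sketch below shows one reduction level; it illustrates the idea only, under my own toy sizes, not the PULSAR runtime or its 3D mapping.

```python
import numpy as np

def tsqr(A, block_rows):
    """One tree level of communication-avoiding QR for a tall-and-skinny matrix:
    factor each row block independently, then QR the stacked R factors."""
    blocks = [A[i:i + block_rows] for i in range(0, A.shape[0], block_rows)]
    Rs = [np.linalg.qr(B)[1] for B in blocks]   # local factorizations (parallel in spirit)
    _, R = np.linalg.qr(np.vstack(Rs))          # reduction step combining the R factors
    return R

A = np.random.default_rng(0).normal(size=(1000, 4))
R_tree = tsqr(A, block_rows=250)
R_flat = np.linalg.qr(A)[1]
print(np.allclose(np.abs(R_tree), np.abs(R_flat)))   # True: equal up to row signs
```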
50

Stilman, Boris. "Mosaic Reasoning for Discoveries." Journal of Artificial Intelligence and Soft Computing Research 3, no. 3 (July 1, 2013): 147–73. http://dx.doi.org/10.2478/jaiscr-2014-0011.

Abstract:
We investigate the structure of the Primary Language of the human brain as introduced by J. von Neumann in 1957. Two components have been investigated: the algorithm optimizing warfighting, Linguistic Geometry (LG), and the algorithm for inventing new algorithms, the Algorithm of Discovery. The latter is based on multiple thought experiments, which manifest themselves via mental visual streams (“mental movies”). There are Observation, Construction and Validation classes of streams. Several visual streams can run concurrently and exchange information with each other. The streams may initiate additional thought experiments, program them, and execute them in due course. The visual streams are focused by employing the algorithm of “a child playing with a construction set”, which includes a visual model, a construction set, and the Ghost. Mosaic reasoning, introduced in this paper, is one of the major means of focusing visual streams in a desired direction. It uses an analogy with the assembly of a picture from various colorful tiles, the components of a construction set. In investigating the role of mosaic reasoning in the Algorithm of Discovery, in this paper I replay a series of four thought experiments related to the discovery of the structure of the molecule of DNA. Only the fourth experiment was successful. This series of experiments reveals how a sequence of failures eventually leads the Algorithm to a discovery, and it exposes the key components of mosaic reasoning: tiles and aggregates, local and global matching rules, and the unstructured environment. In particular, it reveals the aggregates and the rules that played a critical role in the discovery of the structure of DNA. They include the generator and the plug-in aggregates, the transformation and complementarity matching rules, and the type of unstructured environment. For the first time, the Algorithm of Discovery has been applied to replaying discoveries not related to LG or even to mathematics.