Journal articles on the topic 'Computer architecture. Algorithms'

Consult the top 50 journal articles for your research on the topic 'Computer architecture. Algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Mihelič, Jurij, and Uroš Čibej. "EXPERIMENTAL COMPARISON OF MATRIX ALGORITHMS FOR DATAFLOW COMPUTER ARCHITECTURE." Acta Electrotechnica et Informatica 18, no. 3 (2018): 47–56. http://dx.doi.org/10.15546/aeei-2018-0025.

2

Popov, Oleksandr, and Oleksiy Chystiakov. "On the Efficiency of Algorithms with Multi-level Parallelism." Physico-mathematical modelling and informational technologies, no. 33 (September 5, 2021): 133–37. http://dx.doi.org/10.15407/fmmit2021.33.133.

Abstract:
The paper investigates the efficiency of algorithms for solving computational mathematics problems that use a multilevel model of parallel computing on heterogeneous computer systems. A methodology for estimating the speedup of algorithms on computers that use a multilevel model of parallel computing is proposed. As an example, a parallel subspace iteration algorithm for solving the generalized algebraic eigenvalue problem for sparse symmetric positive definite matrices is considered. For the presented algorithms, speedup and efficiency estimates were obtained on hybrid-architecture computers with graphics accelerators, on multi-core shared-memory computers, and on multi-node MIMD-architecture computers.
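For orientation, speedup estimates for multilevel parallel algorithms are often built by composing an Amdahl-style factor per level of the hierarchy. The formula below is an illustrative sketch of such an estimate with assumed notation, not the paper's own methodology:

```latex
% Illustrative multilevel speedup estimate (assumed notation):
% f_l = fraction of work parallelizable at level l (node, core, accelerator),
% p_l = degree of parallelism available at that level.
S_{\text{total}} \;=\; \prod_{l=1}^{L} \frac{1}{(1 - f_l) + f_l / p_l}
```

Each factor reduces to Amdahl's law for a single level (L = 1).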
3

Krommer, Arnold R., and Christoph W. Ueberhuber. "Architecture adaptive algorithms." Parallel Computing 19, no. 4 (1993): 409–35. http://dx.doi.org/10.1016/0167-8191(93)90055-p.

4

Kołata, Joanna, and Piotr Zierke. "The Decline of Architects: Can a Computer Design Fine Architecture without Human Input?" Buildings 11, no. 8 (2021): 338. http://dx.doi.org/10.3390/buildings11080338.

Abstract:
Architects are required to have knowledge of current legislation, ergonomics, and the latest technical solutions. In addition, the design process necessitates an appreciation of the quality of space and a high degree of creativity. However, it is a profession that has undergone significant changes in recent years due to the pressure exerted by the development of information technology. Designs generated by computer algorithms are becoming such a serious part of designers' work that some are beginning to question whether they are more the work of computers than of humans. There are also increasing suggestions that software development will eventually make humans in the profession redundant. This review article presents the computer technologies currently used, implemented, and planned for use in design, and considers how they affect and will affect the work of architects in the future. It includes the opinions of a wide range of experts on the possibility of computer algorithms replacing architects. The ultimate goal of the article is to attempt to answer the question: will computers eliminate the human factor in the design of the future? It also considers the artificial intelligence or communication skills that computer algorithms would require to achieve this goal. The answers to these questions will contribute not only to determining the future of architecture but will also indicate the current condition of the profession. They will also help us to understand the technologies that are making computers capable of increasingly replacing human professions. Despite differing opinions on the possibility of computer algorithms replacing architects, the conclusions indicate that computers do not currently have the capabilities and skills to achieve this goal. Even at the present speed of technological development, especially in such technologies as artificial superintelligence, artificial brains, and quantum computers, the replacement of the architect by machines appears unrealistic in the coming decades.
5

Keyes, D. E., H. Ltaief, and G. Turkiyyah. "Hierarchical algorithms on hierarchical architectures." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 378, no. 2166 (2020): 20190055. http://dx.doi.org/10.1098/rsta.2019.0055.

Abstract:
A traditional goal of algorithmic optimality, squeezing out flops, has been superseded by evolution in architecture. Flops no longer serve as a reasonable proxy for all aspects of complexity. Instead, algorithms must now squeeze memory, data transfers, and synchronizations, while extra flops on locally cached data represent only small costs in time and energy. Hierarchically low-rank matrices realize a rarely achieved combination of optimal storage complexity and high-computational intensity for a wide class of formally dense linear operators that arise in applications for which exascale computers are being constructed. They may be regarded as algebraic generalizations of the fast multipole method. Methods based on these hierarchical data structures and their simpler cousins, tile low-rank matrices, are well proportioned for early exascale computer architectures, which are provisioned for high processing power relative to memory capacity and memory bandwidth. They are ushering in a renaissance of computational linear algebra. A challenge is that emerging hardware architecture possesses hierarchies of its own that do not generally align with those of the algorithm. We describe modules of a software toolkit, hierarchical computations on manycore architectures, that illustrate these features and are intended as building blocks of applications, such as matrix-free higher-order methods in optimization and large-scale spatial statistics. Some modules of this open-source project have been adopted in the software libraries of major vendors. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
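The building block behind tile low-rank storage is the compression of an individual tile to its numerical rank k, so that an n-by-n tile is stored as two n-by-k factors. A minimal sketch using a truncated SVD follows; the kernel, tile size, and tolerance are assumptions for illustration and are not taken from the toolkit described above:

```python
# Illustrative only: compress one dense tile to low rank via truncated SVD,
# the building block behind tile low-rank (TLR) storage. Tolerance and tile
# size are assumed parameters, not values from the paper's software.
import numpy as np

def compress_tile(A, tol=1e-6):
    """Return factors (U, V) with A ~= U @ V, rank chosen by tolerance."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))   # numerical rank of the tile
    return U[:, :k] * s[:k], Vt[:k, :]        # store O(n*k) instead of O(n^2)

rng = np.random.default_rng(0)
# Smooth kernels on well-separated point sets have rapidly decaying
# singular values, hence low numerical rank.
x = rng.uniform(0, 1, 256)
y = rng.uniform(2, 3, 256)
A = 1.0 / np.abs(x[:, None] - y[None, :])     # 1/r kernel, well separated
U, V = compress_tile(A)
print(U.shape, V.shape, np.linalg.norm(A - U @ V) / np.linalg.norm(A))
```

Tiles corresponding to well-separated interactions compress to very small k, which is where the storage and arithmetic savings come from.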
6

Jacobson, Peter, Bo Kågström, and Mikael Rännar. "Algorithm Development for Distributed Memory Multicomputers Using CONLAB." Scientific Programming 1, no. 2 (1992): 185–203. http://dx.doi.org/10.1155/1992/365325.

Abstract:
CONLAB (CONcurrent LABoratory) is an environment for developing algorithms for parallel computer architectures and for simulating different parallel architectures. A user can experimentally verify and obtain a picture of the real performance of a parallel algorithm executing on a simulated target architecture. CONLAB gives high-level support for expressing computations and communications in a distributed memory multicomputer (DMM) environment. A development methodology for DMM algorithms that is based on different levels of abstraction of the problem, the target architecture, and the CONLAB language itself is presented and illustrated with two examples. Simulation results for, and real experiments on, the Intel iPSC/2 hypercube are presented. Because CONLAB is developed to run on uniprocessor UNIX workstations, it is an educational tool that offers interactive (simulated) parallel computing to a wide audience.
7

Coe, James, and Mustafa Atay. "Evaluating Impact of Race in Facial Recognition across Machine Learning and Deep Learning Algorithms." Computers 10, no. 9 (2021): 113. http://dx.doi.org/10.3390/computers10090113.

Abstract:
The research aims to evaluate the impact of race in facial recognition across two types of algorithms. We give a general insight into facial recognition and discuss four problems related to it. We review our system design, development, and architectures, give an in-depth evaluation plan for each type of algorithm and dataset, and look into the software and its architecture. We thoroughly explain the results and findings of our experimentation and provide analysis for the machine learning and deep learning algorithms. Concluding the investigation, we compare the results of the two kinds of algorithms, including their accuracy, metrics, miss rates, and performance, to observe which algorithms mitigate racial bias the most. We evaluate racial bias across five machine learning algorithms and three deep learning algorithms using racially imbalanced and balanced datasets. We evaluate and compare the accuracy and miss rates of all tested algorithms and report that SVC is the superior machine learning algorithm and VGG16 is the best deep learning algorithm based on our experimental study. Our findings conclude that the algorithm that mitigates bias the most is VGG16, and all our deep learning algorithms outperformed their machine learning counterparts.
8

Lakhotia, Arun, Suresh Golconda, Anthony Maida, et al. "CajunBot: Architecture and algorithms." Journal of Field Robotics 23, no. 8 (2006): 555–78. http://dx.doi.org/10.1002/rob.20129.

9

Fabiani, Erwan. "Experiencing a Problem-Based Learning Approach for Teaching Reconfigurable Architecture Design." International Journal of Reconfigurable Computing 2009 (2009): 1–11. http://dx.doi.org/10.1155/2009/923415.

Abstract:
This paper presents the “reconfigurable computing” teaching part of a computer science master course (first year) on parallel architectures. The practical work sessions of this course rely on active pedagogy using problem-based learning, focused on designing a reconfigurable architecture for the implementation of an application class of image processing algorithms. We show how the successive steps of this project permit the student to experiment with several fundamental concepts of reconfigurable computing at different levels. Specific experiments include exploitation of architectural parallelism, dataflow and communicating component-based design, and configurability-specificity tradeoffs.
10

Damrudi, Masumeh, and Kamal Jadidy Aval. "Ranking and Closest Element Algorithms on Centralized Diamond Architecture." International Journal of Engineering & Technology 7, no. 4.1 (2018): 1. http://dx.doi.org/10.14419/ijet.v7i4.1.19480.

Abstract:
Employing appropriate algorithms, hardware, and techniques makes operations easier and faster in today's digital world. Searching data, a fundamental operation in computer science, is an important problem in many areas. With information growing every second, efficient algorithms are needed to search for a data element within huge amounts of data. There are various papers on searching algorithms for finding data elements, while different types of query arise in different areas of work, including position, rank, count, and closest element. Each of these queries may be useful in different computations. This paper proposes algorithms for two of these four query types, ranking and closest element, on the Centralized Diamond architecture; both consume constant execution time.
11

SCHMOLLINGER, MARTIN, and MICHAEL KAUFMANN. "DESIGNING PARALLEL ALGORITHMS FOR HIERARCHICAL SMP CLUSTERS." International Journal of Foundations of Computer Science 14, no. 01 (2003): 59–78. http://dx.doi.org/10.1142/s0129054103001595.

Abstract:
Clusters of symmetric multiprocessor nodes (SMP clusters) are one of the most important parallel architectures at the moment. The architecture consists of shared-memory nodes with multiple processors and a fast interconnection network between the nodes. New programming models try to exploit this architecture by using threads within the nodes and message-passing libraries for inter-node communication. In order to develop efficient algorithms, it is necessary to consider the hybrid nature of the architecture and of the programming models. We present the κNUMA-model and a methodology that together form a good basis for designing efficient algorithms for SMP clusters. The κNUMA-model is a computational model that extends the bulk-synchronous parallel (BSP) model with the characteristics of SMP clusters and new hybrid programming models. The κNUMA-methodology suggests developing efficient overall algorithms by developing efficient algorithms for each level in the hierarchy. We use the problem of personalized one-to-all broadcast and dense matrix-vector multiplication for the presentation. The theoretical results of the analysis of the dense matrix-vector multiplication are verified practically; we show results of experiments made on a Linux cluster of dual Pentium-III nodes.
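For reference, the bulk-synchronous parallel model that the κNUMA-model extends charges each superstep for local work, communication volume, and a barrier. A standard statement follows; the κNUMA-specific per-level parameters are not reproduced here:

```latex
% Standard BSP superstep cost: w_i = local work on processor i,
% h = maximum messages sent or received by any processor,
% g = cost per unit of communication, l = barrier synchronization latency.
T_{\text{superstep}} \;=\; \max_i w_i \;+\; h \cdot g \;+\; l,
\qquad
T_{\text{total}} \;=\; \sum_{s=1}^{S} T_s
```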
12

ESHAGHIAN, MARY M., J. GREG NASH, MUHAMMAD E. SHAABAN, and DAVID B. SHU. "HETEROGENEOUS ALGORITHMS FOR IMAGE UNDERSTANDING ARCHITECTURE." Parallel Algorithms and Applications 1, no. 4 (1993): 273–84. http://dx.doi.org/10.1080/10637199308915447.

13

MÉRIGOT, A., and B. ZAVIDOVIQUE. "IMAGE ANALYSIS ON MASSIVELY PARALLEL COMPUTERS: AN ARCHITECTURAL POINT OF VIEW." International Journal of Pattern Recognition and Artificial Intelligence 06, no. 02n03 (1992): 387–93. http://dx.doi.org/10.1142/s0218001492000230.

Abstract:
Finding a parallel architecture adapted to a given class of algorithms is a central problem for architects. This paper presents a methodology for doing so and provides an illustration using image analysis. First, we present a set of common basic operations that can be used to solve most image analysis problems. These data movements are then translated to fit natural communication patterns in a given architecture. The considered data movements (global operations on connected pixel sets) can express a large class of algorithms. Their implementation on exemplary massively parallel architectures (arrays, hypercubes, pyramids) is discussed.
14

Chistyakov, A. V. "On improving the efficiency of mathematical modeling of the problem of stability of construction." Artificial Intelligence 25, no. 3 (2020): 27–36. http://dx.doi.org/10.15407/jai2020.03.027.

Abstract:
In the mathematical modeling of physical and technical processes there is often a need to solve algebraic eigenvalue problems with large sparse matrices; such problems arise, in particular, in the strength analysis of structures in civil and industrial construction, in aircraft construction, and in electric welding. Solving them means determining the eigenvalues and eigenvectors of sparse matrices of various structures, and the efficiency of the solution largely depends on the effectiveness of the mathematical modeling of the problem as a whole. The continuous growth of task parameters and the computation of ever more complete models of objects and processes demand increased computer performance; high-performance computing requirements are far ahead of traditional parallel computing, even with multicore processors. Today this demand is met by powerful supercomputers of hybrid architecture, such as computers with multicore processors (CPUs) and graphics processors (GPUs), which combine MIMD and SIMD architectures. However, the potential of high-performance computers can be fully exploited only with algorithmic software that takes into account both the properties of the task and the features of the hybrid architecture. The increasing complexity of modern hybrid supercomputers (growing numbers of processors and cores, different types of memory, different programming technologies) significantly complicates the efficient use of these resources when creating parallel algorithms and programs, and raises problems of automating the stages of work associated with the efficient use of computing resources, the storage and processing of sparse matrices, and the analysis of the reliability of computed results. This paper considers algorithmic software for the mathematical modeling of structural stability, which reduces to solving a partial generalized eigenvalue problem for sparse matrices of various structures and large orders, with automatic parallelization of calculations on modern parallel computers with graphics processors. The main methodological principles and implementation features of parallel algorithms for different sparse-matrix structures are presented; they ensure effective use of the multilevel parallelism of a hybrid system and reduce data-exchange time during the computational process. As an example of these approaches, a hybrid subspace iteration algorithm for band and block-diagonal bordered matrices is given, and the peculiarities of data decomposition for matrices of profile structure are considered. The proposed approach automatically determines the required topology of the hybrid computer and the optimal amount of resources for organizing an efficient computational process, which significantly increases the efficiency of mathematical modeling of practical problems on modern high-performance computers and frees users from the problems of parallelizing complex tasks. The developed algorithmic software automatically implements all stages of parallel computation and sparse-matrix processing on a hybrid computer. It was used at the S. P. Tymoshenko Institute of Mechanics of the NAS of Ukraine in modeling strength problems of composite materials, which play an important role in designing processes of deformation and destruction of products in various subject areas. Test results on problems from the University of Florida sparse-matrix collection, and the times for solving the stability problem of composite materials using a three-dimensional "finite size fibers" model on computers of different architectures, show a significant improvement in the time characteristics of mathematical modeling.
15

Huang, Hai, and Liyi Xiao. "Variable Length Reconfigurable Algorithms and Architectures for DCT/IDCT Based on Modified Unfolded Cordic." Open Electrical & Electronic Engineering Journal 7, no. 1 (2013): 71–81. http://dx.doi.org/10.2174/1874129001307010071.

Abstract:
A coordinate rotation digital computer (CORDIC) based variable-length reconfigurable DCT/IDCT algorithm and a corresponding architecture are proposed. The proposed algorithm extends easily to the 2^n-point DCT/IDCT, and an N-point DCT/IDCT can easily be constructed from two N/2-point DCTs/IDCTs based on the proposed algorithm. The architecture based on the proposed algorithm can support several power-of-two transform sizes. To speed up the computation of the DCT/IDCT without losing accuracy, we develop a modified unfolded CORDIC with an efficient carry-save adder (CSA). The CORDIC rotation angles used in the proposed algorithm form an arithmetic sequence. For convenience, we develop the architecture of the N-point IDCT using the orthogonality of the DCT and IDCT transforms. The proposed architectures are modeled in MATLAB and evaluated in a DCT-based JPEG process; the experimental results show that the peak signal-to-noise ratio (PSNR) values of the proposed architectures are higher than those of existing CORDIC-based architectures at different quantization factors and different test images. Furthermore, the proposed architectures have higher regularity, modularity, and computation accuracy, and are suitable for VLSI implementation.
16

Khimich, Alexander, Victor Polyanko, and Tamara Chistyakova. "Parallel Algorithms for Solving Linear Systems on Hybrid Computers." Cybernetics and Computer Technologies, no. 2 (July 24, 2020): 53–66. http://dx.doi.org/10.34229/2707-451x.20.2.6.

Abstract:
Introduction. At present, new computational problems with large volumes of data constantly arise in science and technology, and their solution requires the use of powerful supercomputers. Most of these problems come down to solving systems of linear algebraic equations (SLAE). The main challenge of solving problems on a computer is to obtain reliable solutions with minimal computing resources. However, the problem that is solved on a computer always contains data that are approximate with respect to the original task (due to errors in the initial data, errors when entering numerical data into the computer, etc.). Thus, the mathematical properties of a computer problem can differ significantly from the properties of the original problem. It is necessary to solve problems taking approximate data into account and to analyze the computed results. Despite significant research results in the field of linear algebra, work on overcoming the existing problems of computer solution of problems with approximate data, further complicated by the use of contemporary supercomputers, does not lose its significance and requires further development. Today, the highest-performance supercomputers are parallel ones with graphics processors. The architectural and technological features of these computers make it possible to significantly increase the efficiency of solving large problems at relatively low energy costs. The purpose of the article is to develop new parallel algorithms for solving systems of linear algebraic equations with approximate data on supercomputers with graphics processors; the algorithms automatically tune themselves to the effective computer architecture and to the mathematical properties of the problem identified in the computer, and provide estimates of the reliability of the results. Results. A methodology for creating parallel algorithms for supercomputers with graphics processors is described; the algorithms investigate the mathematical properties of linear systems with approximate data and analyze the reliability of the results. The results of computational experiments on the SKIT-4 supercomputer are presented. Conclusions. Parallel algorithms have been created for investigating and solving linear systems with approximate data on supercomputers with graphics processors. Numerical experiments with the new algorithms showed a significant acceleration of calculations with a guarantee of the reliability of the results. Keywords: systems of linear algebraic equations, hybrid algorithm, approximate data, reliability of the results, GPU computers.
17

CHLEBUS, BOGDAN S. "TWO SELECTION ALGORITHMS ON A MESH-CONNECTED COMPUTER." Parallel Processing Letters 02, no. 04 (1992): 341–46. http://dx.doi.org/10.1142/s0129626492000489.

Abstract:
Two deterministic selection algorithms on an n × n mesh-connected processor array are developed. The model of computation is restricted in the following sense: at every step each processor buffers exactly one of the original keys, and every one of the original keys is buffered by a processor. The first algorithm operates in time 2.5n + o(n). It is a general selection algorithm, that is, its complexity bound does not depend on the rank of the element searched for. The second algorithm has its time bound depending on the rank of the item sought. This bound is [Formula: see text], where the rank is x2n2. This algorithm is superior to the previous one for approximately 10% of the smallest and 10% of the largest keys.
18

BHANDARKAR, SUCHENDRA M., HAMID R. ARABNIA, and JEFFREY W. SMITH. "A RECONFIGURABLE ARCHITECTURE FOR IMAGE PROCESSING AND COMPUTER VISION." International Journal of Pattern Recognition and Artificial Intelligence 09, no. 02 (1995): 201–29. http://dx.doi.org/10.1142/s0218001495000110.

Abstract:
In this paper we describe a reconfigurable architecture for image processing and computer vision based on a multi-ring network which we call a Reconfigurable Multi-Ring System (RMRS). We describe the reconfiguration switch for the RMRS and also describe its VLSI implementation. The RMRS topology is shown to be regular and scalable and hence well-suited for VLSI implementation. We prove some important properties of the RMRS topology and show that a broad class of algorithms for the n-cube can be mapped to the RMRS in a simple and elegant manner. We design and analyze a class of procedural primitives for the SIMD RMRS and show how these primitives can be used as building blocks for more complex parallel operations. We demonstrate the usefulness of the RMRS for problems in image processing and computer vision by considering two important operations—the Fast Fourier Transform (FFT) and the Hough transform for detection of linear features in an image. Parallel algorithms for the FFT and the Hough transform on the SIMD RMRS are designed using the aforementioned procedural primitives. The analysis of the complexity of these algorithms shows that the SIMD RMRS is a viable architecture for problems in computer vision and image processing.
19

Decyk, Viktor K., and Tajendra V. Singh. "Particle-in-Cell algorithms for emerging computer architectures." Computer Physics Communications 185, no. 3 (2014): 708–19. http://dx.doi.org/10.1016/j.cpc.2013.10.013.

20

Figueiredo, Marco A., Clay S. Gloster, Mark Stephens, Corey A. Graves, and Mouna Nakkar. "Implementation of Multispectral Image Classification on a Remote Adaptive Computer." VLSI Design 10, no. 3 (2000): 307–19. http://dx.doi.org/10.1155/2000/31983.

Abstract:
As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays enable the implementation of algorithms at the hardware gate level, leading to orders of magnitude performance increase over microprocessor based systems. The automatic classification of spaceborne multispectral images is an example of a computation intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm (implemented on a typical general-purpose computer).
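A probabilistic neural network is, at its core, a Parzen-window classifier: each training pixel contributes a Gaussian kernel to its class score. A minimal software sketch of that classification rule follows; the toy two-band data and the kernel width are assumptions, and none of the paper's FPGA or client/server machinery appears here:

```python
# Minimal probabilistic neural network (Parzen-window) classifier sketch,
# illustrating the algorithm class the paper maps to hardware. The kernel
# width sigma and the toy data below are assumptions, not the paper's.
import numpy as np

def pnn_classify(train_x, train_y, query, sigma=0.5):
    """Pick the class whose training patterns give the largest kernel sum."""
    classes = np.unique(train_y)
    d2 = np.sum((train_x[:, None, :] - query[None, :, :]) ** 2, axis=2)
    k = np.exp(-d2 / (2.0 * sigma ** 2))       # one Gaussian per pattern
    scores = np.stack([k[train_y == c].mean(axis=0) for c in classes])
    return classes[np.argmax(scores, axis=0)]

# Toy multispectral pixels: 2 bands, 2 land-cover classes.
x = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0, 0, 1, 1])
print(pnn_classify(x, y, np.array([[0.15, 0.15], [0.85, 0.85]])))  # [0 1]
```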
21

Baltus, Vytautas, Laura Jankauskaitė-Jurevičienė, and Tadas Žebrauskas. "Parametric Architecture Today and Tomorrow." Journal of Sustainable Architecture and Civil Engineering 25, no. 2 (2019): 85–89. http://dx.doi.org/10.5755/j01.sace.25.2.21698.

Abstract:
In the past fifteen years, avant-garde architectural practice has been developing in the field of parametric design at phenomenal speed. This success can be attributed to the development of advanced computer software previously used for animation and rendering. The parametric design method uses advanced computer scripts that transform mathematical algorithms relating parameters into creations of complex geometries that were almost impossible to produce, or even imagine, without it. The question remains whether these mathematical equations, input through a scripting language as algorithms, will add value to the design, or whether the traditional, intuitive way of designing can still be relevant enough in the 21st century. There are doubts that, without a deep intellectual background, the parametric design process can amount to play for play's sake with the shapes of objects, buildings, and urban structures.
22

Ren, Pengzhen, Yun Xiao, Xiaojun Chang, et al. "A Comprehensive Survey of Neural Architecture Search." ACM Computing Surveys 54, no. 4 (2021): 1–34. http://dx.doi.org/10.1145/3447582.

Abstract:
Deep learning has made substantial breakthroughs in many fields due to its powerful automatic representation capabilities. It has been proven that neural architecture design is crucial to the feature representation of data and to final performance. However, the design of a neural architecture relies heavily on the researchers' prior knowledge and experience, and due to the limitations of human knowledge it is difficult for people to jump out of their original thinking paradigm and design an optimal model. An intuitive idea, therefore, is to reduce human intervention as much as possible and let an algorithm design the neural architecture automatically. Neural Architecture Search (NAS) is just such a revolutionary approach, and the related research work is complicated and rich, so a comprehensive and systematic survey on NAS is essential. Previous related surveys classify existing work mainly based on the key components of NAS: search space, search strategy, and evaluation strategy. While this classification method is intuitive, it makes it difficult for readers to grasp the challenges and the landmark work involved. Therefore, in this survey we provide a new perspective: beginning with an overview of the characteristics of the earliest NAS algorithms, summarizing the problems in these early algorithms, and then presenting the solutions offered by subsequent related research. In addition, we conduct a detailed and comprehensive analysis, comparison, and summary of these works. Finally, we suggest some possible future research directions.
23

SHAPIRO, LINDA G., ROBERT M. HARALICK, and MICHAEL J. GOULISH. "INSIGHT: A DATAFLOW LANGUAGE FOR PROGRAMMING VISION ALGORITHMS IN A RECONFIGURABLE COMPUTATIONAL NETWORK." International Journal of Pattern Recognition and Artificial Intelligence 01, no. 03n04 (1987): 335–50. http://dx.doi.org/10.1142/s0218001487000230.

Abstract:
Machine vision systems used in industrial applications must execute their algorithms in real time to perform such tasks as inspecting a wire bond or guiding a robot to install a part on a car body moving along a conveyer. The real-time speed is achieved by employing simple-minded algorithms and by designing parallel architectures and parallel algorithms for some tasks. The majority of the work on parallel architectures has been limited to architectures that support image processing, but not mid- or high-level vision. In order for more complex vision algorithms to execute in real time, a more flexible architecture is needed. Our conceptual approach to the problem is a reconfigurable computational network. Each configuration of the network implements an algorithm or class of algorithms. A high-level language expresses the algorithms in a relational form that can be easily translated to the specification for a configuration. The language must be able to encode low-, mid-, and high-level vision algorithms and to efficiently handle not only pixel data, but also higher-level structures. In this paper we describe a dataflow language called INSIGHT, which we have designed to meet these needs, and give several examples of parallel machine vision algorithms expressed in the language.
24

Márk Máder, Patrik, Olivér Rák, and István Ervin Háber. "Contemporary architecture based on algorithms." Pollack Periodica 13, no. 3 (2018): 53–60. http://dx.doi.org/10.1556/606.2018.13.3.6.

25

Felzenszwalb, P. F., and D. McAllester. "The Generalized A* Architecture." Journal of Artificial Intelligence Research 29 (June 21, 2007): 153–90. http://dx.doi.org/10.1613/jair.2187.

Abstract:
We consider the problem of computing a lightest derivation of a global structure using a set of weighted rules. A large variety of inference problems in AI can be formulated in this framework. We generalize A* search and heuristics derived from abstractions to a broad class of lightest derivation problems. We also describe a new algorithm that searches for lightest derivations using a hierarchy of abstractions. Our generalization of A* gives a new algorithm for searching AND/OR graphs in a bottom-up fashion. We discuss how the algorithms described here provide a general architecture for addressing the pipeline problem: the problem of passing information back and forth between various stages of processing in a perceptual system. We consider examples in computer vision and natural language processing. We apply the hierarchical search algorithm to the problem of estimating the boundaries of convex objects in grayscale images and compare it to other search methods. A second set of experiments demonstrates the use of a new compositional model for finding salient curves in images.
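As a baseline for the generalization described above, plain A* over an explicit graph looks as follows; the toy graph and zero heuristic are assumptions, not the paper's lightest-derivation formulation:

```python
# Plain A* on an explicit graph, the baseline that the paper generalizes
# to lightest-derivation problems. Graph and heuristic are toy assumptions.
import heapq

def a_star(graph, h, start, goal):
    """graph: node -> [(neighbor, edge_cost)]; h: admissible heuristic."""
    frontier = [(h(start), 0.0, start, [start])]
    settled = {}                              # best g-value seen per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if settled.get(node, float("inf")) <= g:
            continue                          # stale queue entry, skip
        settled[node] = g
        for nxt, w in graph.get(node, []):
            heapq.heappush(frontier, (g + w + h(nxt), g + w, nxt, path + [nxt]))
    return None

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 1), ("d", 5)], "c": [("d", 1)]}
print(a_star(graph, lambda n: 0, "a", "d"))   # (3.0, ['a', 'b', 'c', 'd'])
```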
26

ROSKA, TAMÁS. "COMPUTATIONAL AND COMPUTER COMPLEXITY OF ANALOGIC CELLULAR WAVE COMPUTERS." Journal of Circuits, Systems and Computers 12, no. 04 (2003): 539–62. http://dx.doi.org/10.1142/s0218126603001021.

Abstract:
The CNN Universal Machine is generalized as the latest step in computational architectures: a Universal Machine on Flows. Computational complexity and computer complexity issues are studied in different architectural settings. Three mathematical machines are considered: the universal machine on integers (UMZ), the universal machine on reals (UMR), and the universal machine on flows (UMF). The three machines induce different kinds of computational difficulties: combinatorial, algebraic, and dynamic, respectively. After a broader overview of computational complexity issues, it is shown, following the reasoning related to the UMR, that in many cases size is not the most important parameter related to computational complexity. Emerging new computing and computer architectures, as well as their physical implementations, suggest a new look at computational and computer complexities. The new analog-and-logic (analogic) cellular array computer paradigm, based on the CNN Universal Machine, and its physical implementation in CMOS and optical technologies, prove experimentally the relevance of accuracy and of the problem parameter in computational complexity. We also introduce a rigorous definition of computational complexity for the UMF and prove an NP class of problems. It is shown that the choice of spatial-temporal elementary instructions, as well as consideration of area and power dissipation, inherently influences computational complexity and computer complexity, respectively. Comments on the relevance of the UMF to biology are presented in relation to complexity theory. It is shown that algorithms using spatial-temporal continuous elementary instructions (α-recursive functions) represent not only a new world in computing but also a more general type of logic inference.
27

Arnaudov, Rumen, and Ivo Dochev. "Functional generator controlled by internet." Facta universitatis - series: Electronics and Energetics 16, no. 1 (2003): 93–102. http://dx.doi.org/10.2298/fuee0301093a.

Abstract:
This paper presents a functional generator controlled via the Internet. We describe the computer-system architecture, a block diagram of the generator, and the working algorithms. The remote control is realized over computer networks using the TCP/IP protocols; for that purpose a client-server architecture is used. The software is based on the Linux operating system, the Apache web server, a MySQL database, and the HTML and PHP languages.
28

Buxton, B. F., D. W. Murray, H. Buxton, and N. S. Williams. "Structure-from-motion algorithms for computer vision on an SIMD architecture." Computer Physics Communications 37, no. 1-3 (1985): 273–80. http://dx.doi.org/10.1016/0010-4655(85)90162-6.

29

Zhang, Lei, and Douglas G. Down. "APEM — Approximate Performance Evaluation for Multi-Core Computers." Journal of Circuits, Systems and Computers 28, no. 01 (2018): 1950004. http://dx.doi.org/10.1142/s021812661950004x.

Abstract:
Mean Value Analysis (MVA) has long been a standard approach for performance analysis of computer systems. While the exact load-dependent MVA algorithm is an efficient technique for computer system performance modeling, it fails to address multi-core computer systems with Dynamic Frequency Scaling (DFS). In addition, the load-dependent MVA algorithm suffers from numerical difficulties under heavy load conditions. The goal of our paper is to find an efficient and robust method which is easy to use in practice and is also accurate for performance prediction for multi-core platforms. The proposed method, called Approximate Performance Evaluation for Multi-core computers (APEM), uses a flow-equivalent performance model designed specifically to address multi-core computer systems and identify the influence on the CPU demand of the effect of DFS. We adopt an approximation technique to estimate resource demands to parametrize MVA algorithms. To validate the application of our method, we investigate three case studies with extended TPC-W benchmark kits, showing that our method achieves better accuracy compared with other commonly used MVA algorithms. We compare the three different performance models, and we also extend our approach to multi-class models.
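The exact MVA recursion that such approaches start from is short enough to state directly. This is the textbook single-class, load-independent version with illustrative service demands; none of the paper's multi-core or DFS extensions appears here:

```python
# Exact single-class Mean Value Analysis for a closed queueing network,
# the classical algorithm the paper builds on. Service demands and the
# customer population below are illustrative assumptions.
def mva(demands, n_customers):
    """demands: mean service demand per queueing station;
    returns system throughput X(N) for N = n_customers."""
    q = [0.0] * len(demands)                 # mean queue lengths at N = 0
    x = 0.0
    for n in range(1, n_customers + 1):
        r = [d * (1.0 + qi) for d, qi in zip(demands, q)]  # residence times
        x = n / sum(r)                       # throughput, by Little's law
        q = [x * ri for ri in r]             # queue lengths at population n
    return x

print(mva([0.05, 0.02, 0.01], n_customers=20))  # bottleneck-limited rate
```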
30

KAVIANPOUR, A., and N. BAGHERZADEH. "PARALLEL ALGORITHMS FOR LINE DETECTION ON A PYRAMID ARCHITECTURE." International Journal of Pattern Recognition and Artificial Intelligence 08, no. 01 (1994): 337–49. http://dx.doi.org/10.1142/s0218001494000164.

Abstract:
This paper considers the problem of detecting lines in images using a pyramid architecture. The approach is based on the Hough Transform calculation. A pyramid architecture of size n is a fine-grain architecture with a mesh base of size [Formula: see text] processors each holding a single pixel of the image. The pyramid operates in an SIMD mode. Two algorithms for computing the Hough Transform are explained. The first algorithm initially uses different angles, θj’s, and its complexity is O(k+log n) with O(m) storage requirement. The second algorithm computes the Hough Transform in a pipeline fashion for each angle θj at a time. This method produces results in O(k * log n) time with O(1) storage, where k is the number of θj angles, m is the number of ρi normal distances from the origin, and n is the number of pixels. A simulation program is also described.
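A sequential reference for the vote-accumulation step that the two pyramid algorithms parallelize may help fix notation: each pixel votes, for every discretized angle θj, into the bin of its normal distance ρ. The image points and discretization below are assumptions:

```python
# Sequential reference for (rho, theta) Hough accumulation; the pyramid
# algorithms in the paper parallelize exactly this voting step. Points
# and bin counts here are illustrative assumptions.
import numpy as np

def hough_lines(points, n_theta=180, n_rho=64, rho_max=64.0):
    """Accumulate votes in the (rho, theta) plane for (x, y) pixels."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)    # one rho per angle
        idx = ((rho + rho_max) * (n_rho - 1) / (2 * rho_max)).astype(int)
        acc[idx, np.arange(n_theta)] += 1                # one vote per angle
    return acc, thetas

pts = [(t, 2 * t + 5) for t in range(20)]     # collinear pixels: y = 2x + 5
acc, thetas = hough_lines(pts)
r, t = np.unravel_index(acc.argmax(), acc.shape)
print(acc.max(), round(np.degrees(thetas[t])))  # all 20 votes land in one bin
```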
31

Eddine, Khamlich Salah, Khamlich Fathallah, Issam Atouf, and Benrabh Mohamed. "Parallel Implementation of Nios Ii Multiprocessors, Cepstral Coefficients of Mel Frequency and MLP Architecture in Fpga: the Application of Speech Recognition." WSEAS TRANSACTIONS ON SIGNAL PROCESSING 16 (January 13, 2021): 146–54. http://dx.doi.org/10.37394/232014.2020.16.16.

Abstract:
Speech processing in real time requires the use of fast, reconfigurable electronic circuits capable of handling the large amounts of information generated by the audio source. This article presents hardware implementations of a multilayer perceptron (MLP) and the MFCC algorithm for speech recognition. These algorithms have been implemented in hardware and tested on an embedded electronic board based on a reconfigurable circuit (FPGA). We also present a comparative study between several MLP architectures, and with the literature, in terms of silicon area, speed, and required computing resources. Following the FPGA circuit modification, we created Nios II processors to physically implement the architecture of MLP-type ANNs and MFCC speech recognition algorithms and to perform real-time speech recognition functions.
32

ZHENG, S. Q., A. GUMASTE, and E. LU. "ALGORITHM-HARDWARE CODESIGN OF A FAST PARALLEL ROUTING ARCHITECTURE FOR CLOS NETWORKS." Journal of Interconnection Networks 11, no. 03n04 (2010): 189–210. http://dx.doi.org/10.1142/s0219265910002805.

Abstract:
Clos networks are an important class of switching networks due to their modular structure and much lower cost compared with crossbars. For routing I/O permutations of Clos networks, sequential routing algorithms are too slow, and all known parallel algorithms are not practical. We present the algorithm-hardware codesign of a unified fast parallel routing architecture called the distributed pipeline routing (DPR) architecture for rearrangeable nonblocking and strictly nonblocking Clos networks. The DPR architecture uses a linear interconnection structure and processing elements that perform only shift and logic AND operations. We show that a DPR architecture can route any permutation in rearrangeable nonblocking and strictly nonblocking Clos networks in [Formula: see text] time. The same architecture can be used to carry out control of any group of connection/disconnection requests for strictly nonblocking Clos networks in [Formula: see text] time. Several speeding-up techniques are also presented. This architecture is designed for Clos-based packet and circuit switches of practical sizes.
33

Eklund, Sven E. "A massively parallel architecture for distributed genetic algorithms." Parallel Computing 30, no. 5-6 (2004): 647–76. http://dx.doi.org/10.1016/j.parco.2003.12.009.

34

Angel, Edward, Steve Cunningham, Peter Shirley, and Kelvin Sung. "Teaching computer graphics without raster-level algorithms." ACM SIGCSE Bulletin 38, no. 1 (2006): 266–67. http://dx.doi.org/10.1145/1124706.1121423.

35

Organick, Elliot I. "Algorithms, concurrent processors, and computer science education." ACM SIGCSE Bulletin 17, no. 1 (1985): 1–5. http://dx.doi.org/10.1145/323275.323276.

36

Maxim, Bruce R., and Bruce S. Elenbogen. "Teaching programming algorithms aided by computer graphics." ACM SIGCSE Bulletin 19, no. 1 (1987): 297–301. http://dx.doi.org/10.1145/31726.31775.

37

Tramacere, Eugenio, Sara Luciani, Stefano Feraco, Angelo Bonfitto, and Nicola Amati. "Processor-in-the-Loop Architecture Design and Experimental Validation for an Autonomous Racing Vehicle." Applied Sciences 11, no. 16 (2021): 7225. http://dx.doi.org/10.3390/app11167225.

Abstract:
Self-driving vehicles have experienced an increase in research interest in the last decades. Nevertheless, fully autonomous vehicles are still far from being a common means of transport. This paper presents the design and experimental validation of a processor-in-the-loop (PIL) architecture for an autonomous sports car. The considered vehicle is an all-wheel-drive, full-electric, single-seater prototype. The retained PIL architecture includes all the modules required for autonomous driving at the system level: environment perception, trajectory planning, and control. Specifically, the perception pipeline exploits obstacle detection algorithms based on Artificial Intelligence (AI), trajectory planning is based on a modified Rapidly-exploring Random Tree (RRT) algorithm using Dubins curves, and the vehicle is controlled via a Model Predictive Control (MPC) strategy. The considered PIL layout is implemented first using a low-cost card-sized computer for fast code verification purposes. Furthermore, the proposed PIL architecture is compared in terms of performance to an alternative PIL using a high-performance real-time target computing machine. Both PIL architectures exploit the User Datagram Protocol (UDP) to communicate properly with a personal computer. The latter PIL architecture is validated in real time using experimental data. Moreover, both are validated against the general autonomous pipeline that runs in parallel on the personal computer during numerical simulation.
38

Lazarov, A., and C. Minchev. "ISAR Image Recognition Algorithm and Neural Network Implementation." Cybernetics and Information Technologies 17, no. 4 (2017): 183–99. http://dx.doi.org/10.1515/cait-2017-0048.

Abstract:
The image recognition and identification procedures are comparatively new in the scope of ISAR (Inverse Synthetic Aperture Radar) applications and, owing to specific defects in ISAR images (e.g., missing pixels and parts of the image induced by the target's aspect angle), require preliminary image processing before identification. The present paper deals with ISAR image enhancement algorithms and a neural network architecture for image recognition and target identification. First, the stages of the image processing algorithms intended for image improvement and contour line extraction are discussed. Second, an algorithm for target recognition is developed based on a neural network architecture. Two Learning Vector Quantization (LVQ) neural networks are constructed in the Matlab programming environment, a supervised training algorithm is applied, and a final identification decision strategy is developed. Results of numerical experiments are presented.
39

Ajani, Taiwo Samuel, Agbotiname Lucky Imoize, and Aderemi A. Atayero. "An Overview of Machine Learning within Embedded and Mobile Devices–Optimizations and Applications." Sensors 21, no. 13 (2021): 4412. http://dx.doi.org/10.3390/s21134412.

Abstract:
Embedded systems technology is undergoing a phase of transformation owing to the novel advancements in computer architecture and the breakthroughs in machine learning applications. The areas of applications of embedded machine learning (EML) include accurate computer vision schemes, reliable speech recognition, innovative healthcare, robotics, and more. However, there exists a critical drawback in the efficient implementation of ML algorithms targeting embedded applications. Machine learning algorithms are generally computationally and memory intensive, making them unsuitable for resource-constrained environments such as embedded and mobile devices. In order to efficiently implement these compute and memory-intensive algorithms within the embedded and mobile computing space, innovative optimization techniques are required at the algorithm and hardware levels. To this end, this survey aims at exploring current research trends within this circumference. First, we present a brief overview of compute intensive machine learning algorithms such as hidden Markov models (HMM), k-nearest neighbors (k-NNs), support vector machines (SVMs), Gaussian mixture models (GMMs), and deep neural networks (DNNs). Furthermore, we consider different optimization techniques currently adopted to squeeze these computational and memory-intensive algorithms within resource-limited embedded and mobile environments. Additionally, we discuss the implementation of these algorithms in microcontroller units, mobile devices, and hardware accelerators. Conclusively, we give a comprehensive overview of key application areas of EML technology, point out key research directions and highlight key take-away lessons for future research exploration in the embedded machine learning domain.
40

MIKHAEL, WASFY B., and FRANK H. WU. "A UNIFIED APPROACH FOR GENERATING OPTIMUM GRADIENT FIR ADAPTIVE ALGORITHMS WITH TIME-VARYING CONVERGENCE FACTORS." Journal of Circuits, Systems and Computers 01, no. 01 (1991): 19–42. http://dx.doi.org/10.1142/s0218126691000203.

Abstract:
In this paper, a unified approach for generating fast block- and sequential-gradient LMS FIR tapped delay line (TDL) adaptive algorithms is presented. These algorithms employ time-varying convergence factors which are tailored to the adaptive filter coefficients and updated at each block or single-data iteration. The convergence factors are chosen to minimize the mean squared error (MSE) and are easily computed from readily available signals. The general formulation leads to three classes of adaptive algorithms, ordered in descending order of computational complexity and performance: the optimum block adaptive algorithm with individual adaptation of parameters (OBAI), the optimum block adaptive (OBA) and OBA shifting (OBAS) algorithms, and the homogeneous adaptive (HA) algorithm. In this paper, it is shown how each class of algorithms is obtained from the previous one by a simple trade-off between adaptation performance and computational complexity. Implementation aspects of the generated algorithms are examined, and their performance is evaluated and compared with several recently proposed algorithms by means of computer simulations under a wide range of adaptation conditions. The evaluation results show that the generated algorithms have attractive features in the comparisons due to the considerable reduction in the number of iterations required for a given adaptation accuracy. The improvement, however, is achieved at the expense of a relatively modest increase in the number of computations per data sample.
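The family of algorithms described here updates an FIR tapped-delay-line filter with a convergence factor recomputed from readily available signals at each iteration. The sketch below uses the simplest such time-varying factor, a normalized-LMS step; it illustrates the idea but is not the paper's optimum derivation:

```python
# Sketch of an FIR adaptive filter with a time-varying convergence factor.
# The normalized-LMS step used here is the simplest such choice; it stands
# in for, but is not identical to, the optimum factors derived in the paper.
import numpy as np

def nlms(x, d, n_taps=8, eps=1e-8):
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1 : n + 1][::-1]   # TDL input: x[n], ..., x[n-7]
        e = d[n] - w @ u                      # a-priori output error
        mu_n = 1.0 / (eps + u @ u)            # convergence factor, per sample
        w += mu_n * e * u                     # gradient (LMS) update
    return w

rng = np.random.default_rng(1)
x = rng.standard_normal(4000)
h = np.array([0.5, -0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])  # unknown FIR system
d = np.convolve(x, h)[: len(x)]               # desired signal
print(np.round(nlms(x, d), 3))                # converges toward h
```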
41

Soltan Agh, Mohammad Reza, Zuriati Ahmad Zukarnain, Ali Mamat, and Hishamuddin Zainuddin. "A Hybrid Architecture Approach for Quantum Algorithms." Journal of Computer Science 5, no. 10 (2009): 725–31. http://dx.doi.org/10.3844/jcssp.2009.725.731.

42

SAHNI, SARTAJ. "DATA MANIPULATION ON THE DISTRIBUTED MEMORY BUS COMPUTER." Parallel Processing Letters 05, no. 01 (1995): 3–14. http://dx.doi.org/10.1142/s0129626495000023.

Abstract:
We consider fundamental data manipulation operations such as broadcasting, prefix sum, data sum, data shift, data accumulation, consecutive sum, adjacent sum, sorting, and random access reads and writes, and show how these may be performed on the distributed memory bus computer (DMBC). In addition, we study two image processing applications: shrinking and expanding, and template matching. The DMBC algorithms are generally simpler than corresponding algorithms of the same time complexity developed for other reconfigurable bus computers.
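Two of the listed operations, stated as sequential references to fix their meaning (the definitions are standard; the DMBC bus algorithms themselves are not reproduced here):

```python
# Sequential references for two of the operations named in the abstract.
# These pin down the definitions only; the DMBC achieves them with
# reconfigurable-bus algorithms not shown here.
def prefix_sum(a):
    """Running totals: out[i] = a[0] + ... + a[i]."""
    out, s = [], 0
    for v in a:
        s += v
        out.append(s)
    return out

def consecutive_sum(a, block):
    """Sum each run of `block` consecutive elements."""
    return [sum(a[i:i + block]) for i in range(0, len(a), block)]

a = [3, 1, 4, 1, 5, 9, 2, 6]
print(prefix_sum(a))            # [3, 4, 8, 9, 14, 23, 25, 31]
print(consecutive_sum(a, 2))    # [4, 5, 14, 8]
```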
43

Vianna, Reinaldo O., Wilson R. M. Rabelo, and C. H. Monken. "The Semi-Quantum Computer." International Journal of Quantum Information 01, no. 02 (2003): 279–88. http://dx.doi.org/10.1142/s021974990300019x.

Abstract:
We discuss the performance of the Search and Fourier Transform algorithms on a hybrid computer constituted of classical and quantum processors working together. We show that this semi-quantum computer would be an improvement over a pure classical architecture, no matter how few qubits are available and, therefore, it suggests an easier implementable technology than a pure quantum computer with arbitrary number of qubits.
44

Moses, C. John, D. Selvathi, and V. M. Anne Sophia. "VLSI Architectures for Image Interpolation: A Survey." VLSI Design 2014 (May 19, 2014): 1–10. http://dx.doi.org/10.1155/2014/872501.

Abstract:
Image interpolation is a method of estimating values at unknown points using known data points. This procedure is used in expanding and contracting digital images. In this survey, different types of interpolation algorithms and their hardware architectures have been analyzed and compared. They are bilinear, winscale, bi-cubic, linear convolution, extended linear, piecewise linear, adaptive bilinear, first-order polynomial, and edge-enhanced interpolation architectures. The algorithms are implemented for different types of field programmable gate arrays (FPGAs) and/or with different complementary metal oxide semiconductor (CMOS) technologies such as TSMC 0.18 and TSMC 0.13. These interpolation algorithms are compared with respect to different types of optimization such as gate count, frequency, power, and memory buffering. The goal of this work is to analyze different very large scale integration (VLSI) parameters, namely area, speed, and power, across various implementations of image interpolation. From the survey and the analysis that follows, it is observed that the performance of image interpolation hardware can be improved by minimising line-buffer memory and removing superfluous arithmetic elements when generating the weighting coefficients.
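As a software reference for the most common of the surveyed kernels, bilinear interpolation weights the four nearest input pixels by their fractional distances; the array layout and scale factors below are assumptions:

```python
# Software reference for the bilinear kernel that several of the surveyed
# VLSI architectures implement. Array layout and scale factors are
# illustrative assumptions.
import numpy as np

def bilinear(img, sy, sx):
    h, w = img.shape
    out = np.zeros((int(h * sy), int(w * sx)))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            y, x = i / sy, j / sx             # source-space coordinates
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0           # fractional offsets
            out[i, j] = (img[y0, x0] * (1 - dy) * (1 - dx) +
                         img[y0, x1] * (1 - dy) * dx +
                         img[y1, x0] * dy * (1 - dx) +
                         img[y1, x1] * dy * dx)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear(img, 2, 2).shape)   # (8, 8): weights from the four neighbors
```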
45

Latif, M., and M. A. Ismail. "Towards Multi-objective Optimization of Automatic Design Space Exploration for Computer Architecture through Hyper-heuristic." Engineering, Technology & Applied Science Research 9, no. 3 (2019): 4292–97. http://dx.doi.org/10.48084/etasr.2738.

Abstract:
Multi-objective optimization is an NP-hard problem. ADSE (automatic design space exploration) using heuristics has been proved to be an appropriate method in resolving this problem. This paper presents a hyper-heuristic technique to solve the DSE issue in computer architecture. Two algorithms are proposed. A hyper-heuristic layer has been added to the FADSE (framework for automatic design space exploration) and relevant algorithms have been implemented. The benefits of already existing multi-objective algorithms have been joined in order to strengthen the proposed algorithms. The proposed algorithms, namely RRSNS (round-robin scheduling NSGA-II and SPEA2) and RSNS (random scheduling NSGA-II and SPEA2) have been evaluated for the ADSE problem. The results have been compared with NSGA-II and SPEA2 algorithms. Results show that the proposed methodologies give competitive outcomes in comparison with NSGA-II and SPEA2.
46

AKL, SELIM G., and Stefan D. Bruda. "PARALLEL REAL-TIME OPTIMIZATION: BEYOND SPEEDUP." Parallel Processing Letters 09, no. 04 (1999): 499–509. http://dx.doi.org/10.1142/s0129626499000463.

Abstract:
Traditionally, interest in parallel computation centered around the speedup provided by parallel algorithms over their sequential counterparts. In this paper, we ask a different type of question: Can parallel computers, due to their speed, do more than simply speed up the solution to a problem? We show that for real-time optimization problems, a parallel computer can obtain a solution that is better than that obtained by a sequential one. Specifically, a sequential and a parallel algorithm are exhibited for the problem of computing the best-possible approximation to the minimum-weight spanning tree of a connected, undirected and weighted graph whose vertices and edges are not all available at the outset, but instead arrive in real time. While the parallel algorithm succeeds in computing the exact minimum-weight spanning tree, the sequential algorithm can only manage to obtain an approximate solution. In the worst case, the ratio of the weight of the solution obtained sequentially to that of the solution computed in parallel can be arbitrarily large.
47

Binder, Eli E., and James H. Herzog. "Distributed Computer Architecture and Fast Parallel Algorithms in Real-Time Robot Control." IEEE Transactions on Systems, Man, and Cybernetics 16, no. 4 (1986): 543–49. http://dx.doi.org/10.1109/tsmc.1986.289257.

48

Graziani, Salvatore, and Maria Gabriella Xibilia. "Innovative Topologies and Algorithms for Neural Networks." Future Internet 12, no. 7 (2020): 117. http://dx.doi.org/10.3390/fi12070117.

Abstract:
The introduction of new topologies and training procedures to deep neural networks has solicited a renewed interest in the field of neural computation. The use of deep structures has significantly improved the state of the art in many applications, such as computer vision, speech and text processing, medical applications, and IoT (Internet of Things). The probability of a successful outcome from a neural network is linked to selection of an appropriate network architecture and training algorithm. Accordingly, much of the recent research on neural networks is devoted to the study and proposal of novel architectures, including solutions tailored to specific problems. The papers of this Special Issue make significant contributions to the above-mentioned fields by merging theoretical aspects and relevant applications. Twelve papers are collected in the issue, addressing many relevant aspects of the topic.
49

Prathiba, A., and V. S. Kanchana Bhaaskaran. "Secured Communication System Architecture Using Light Weight Algorithms." Research Journal of Applied Sciences, Engineering and Technology 11, no. 10 (2015): 1114–23. http://dx.doi.org/10.19026/rjaset.11.2126.

50

Papadimitriou, Christos H., and Mihalis Yannakakis. "Towards an Architecture-Independent Analysis of Parallel Algorithms." SIAM Journal on Computing 19, no. 2 (1990): 322–28. http://dx.doi.org/10.1137/0219021.
