To see the other types of publications on this topic, follow the link: Computers / Computer Architecture.

Journal articles on the topic 'Computers / Computer Architecture'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Computers / Computer Architecture.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Kaiser, Marcus. "Brain architecture: a design for natural computation." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 365, no. 1861 (September 13, 2007): 3033–45. http://dx.doi.org/10.1098/rsta.2007.0007.

Full text
Abstract:
Fifty years ago, John von Neumann compared the architecture of the brain with that of the computers he invented and which are still in use today. In those days, the organization of computers was based on concepts of brain organization. Here, we give an update on current results on the global organization of neural systems. For neural systems, we outline how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing and balanced network activation. Finally, we discuss mechanisms of self-organization for such architectures. After all, the organization of the brain might again inspire computer architecture.
APA, Harvard, Vancouver, ISO, and other styles
2

Odhiambo, M. O., and P. O. Umenne. "NET-COMPUTER: Internet Computer Architecture and its Application in E-Commerce." Engineering, Technology & Applied Science Research 2, no. 6 (December 4, 2012): 302–9. http://dx.doi.org/10.48084/etasr.145.

Full text
Abstract:
Research in Intelligent Agents has yielded interesting results, some of which have been translated into commercial ventures. Intelligent Agents are executable software components that represent the user, perform tasks on behalf of the user and, when the task terminates, send the result to the user. Intelligent Agents are best suited for the Internet: a collection of computers connected together in a world-wide computer network. The Swarm and HYDRA computer architectures for Agents' execution were developed at the University of Surrey, UK in the 90s. The objective of the research was to develop a software-based computer architecture on which Agent execution could be explored. The combination of Intelligent Agents and the HYDRA computer architecture gave rise to a new computer concept: the NET-Computer, in which the computing resources reside on the Internet. The Internet computers form the hardware and software resources, and the user is provided with a simple interface to access the Internet and run user tasks. The Agents autonomously roam the Internet (NET-Computer) executing the tasks. A growing segment of the Internet is E-Commerce for online shopping for products and services. The Internet computing resources provide a marketplace for product suppliers and consumers alike. Consumers are looking for suppliers selling products and services, while suppliers are looking for buyers. Searching the vast amount of information available on the Internet causes a great deal of problems for both consumers and suppliers. Intelligent Agents executing on the NET-Computer can surf through the Internet and select specific information of interest to the user. The simulation results show that Intelligent Agents executing on the HYDRA computer architecture could be applied in E-Commerce.
APA, Harvard, Vancouver, ISO, and other styles
3

ROSKA, TAMÁS. "COMPUTATIONAL AND COMPUTER COMPLEXITY OF ANALOGIC CELLULAR WAVE COMPUTERS." Journal of Circuits, Systems and Computers 12, no. 04 (August 2003): 539–62. http://dx.doi.org/10.1142/s0218126603001021.

Full text
Abstract:
The CNN Universal Machine is generalized as the latest step in computational architectures: a Universal Machine on Flows. Computational complexity and computer complexity issues are studied in different architectural settings. Three mathematical machines are considered: the universal machine on integers (UMZ), the universal machine on reals (UMR) and the universal machine on flows (UMF). The three machines induce different kinds of computational difficulties: combinatorial, algebraic, and dynamic, respectively. After a broader overview of computational complexity issues, it is shown, following the reasoning related to the UMR, that in many cases size is not the most important parameter of computational complexity. Emerging new computing and computer architectures, as well as their physical implementations, suggest a new look at computational and computer complexities. The new analog-and-logic (analogic) cellular array computer paradigm, based on the CNN Universal Machine, and its physical implementation in CMOS and optical technologies, proves experimentally the relevance of accuracy and of problem parameters in computational complexity. We also introduce a rigorous definition of computational complexity for the UMF and prove an NP class of problems. It is also shown that the choice of spatial-temporal elementary instructions, as well as the consideration of area and power dissipation, inherently influences computational complexity and computer complexity, respectively. Comments on the relevance of the UMF to biology are presented in relation to complexity theory. It is shown that algorithms using spatial-temporal continuous elementary instructions (α-recursive functions) represent not only a new world in computing but also a more general type of logic inference.
APA, Harvard, Vancouver, ISO, and other styles
4

Dannenberg, Roger B., Nicolas E. Gold, Dawen Liang, and Guangyu Xia. "Methods and Prospects for Human–Computer Performance of Popular Music." Computer Music Journal 38, no. 2 (June 2014): 36–50. http://dx.doi.org/10.1162/comj_a_00238.

Full text
Abstract:
Computers are often used in performance of popular music, but most often in very restricted ways, such as keyboard synthesizers where musicians are in complete control, or pre-recorded or sequenced music where musicians follow the computer's drums or click track. An interesting and yet little-explored possibility is the computer as highly autonomous performer of popular music, capable of joining a mixed ensemble of computers and humans. Considering the skills and functional requirements of musicians leads to a number of predictions about future human–computer music performance (HCMP) systems for popular music. We describe a general architecture for such systems and describe some early implementations and our experience with them.
APA, Harvard, Vancouver, ISO, and other styles
5

AKL, SELIM G. "THREE COUNTEREXAMPLES TO DISPEL THE MYTH OF THE UNIVERSAL COMPUTER." Parallel Processing Letters 16, no. 03 (September 2006): 381–403. http://dx.doi.org/10.1142/s012962640600271x.

Full text
Abstract:
It is shown that the concept of a Universal Computer cannot be realized. Specifically, instances of a computable function F are exhibited that cannot be computed on any machine U that is capable of only a finite and fixed number of operations per step. This remains true even if the machine U is endowed with an infinite memory and the ability to communicate with the outside world while it is attempting to compute F. It also remains true if, in addition, U is given an indefinite amount of time to compute F. This result applies not only to idealized models of computation, such as the Turing Machine and the like, but also to all known general-purpose computers, including existing conventional computers (both sequential and parallel), as well as contemplated unconventional ones such as biological and quantum computers. Even accelerating machines (that is, machines that increase their speed at every step) cannot be universal.
APA, Harvard, Vancouver, ISO, and other styles
6

Ahmad, Othman. "FPGA BASED INDIVIDUAL COMPUTER ARCHITECTURE LABORATORY EXERCISES." Journal of BIMP-EAGA Regional Development 3, no. 1 (December 15, 2017): 23–31. http://dx.doi.org/10.51200/jbimpeagard.v3i1.1026.

Full text
Abstract:
Computer Architecture is the study of digital computers towards designing, building and operating digital computers. Digital computers are vital to modern living because they provide the intelligence in devices such as self-driving cars and smartphones. Computer Architecture is a core subject of the Electronic (Computer) Engineering course at Universiti Malaysia Sabah, which is compliant with the requirements of the Washington Accord as accredited by the Engineering Accreditation Council of the Board of Engineers Malaysia (EAC). An FPGA (Field Programmable Gate Array) based Computer Architecture Laboratory has been developed to support the curriculum of this course. FPGAs allow a sustainable implementation of laboratory exercises without resorting to the toxic fabrication of microelectronic devices or the installation of integrated circuits: an FPGA is simply a configurable, and therefore reusable, digital design component. The two established organisations promoting computer engineering curricula, ACM and IEEE, encourage the use of FPGAs in digital design in their latest recommendations and, together with the EAC, emphasise each student's grasp of the fundamentals. The laboratory exercises are individual exercises in which each student is given a unique assignment. A laboratory manual is provided as a guide and project specification for each student, but overall the laboratory exercise is student-centred. Each student is allowed to pace their effort through the laboratory sessions, from session one to session ten. A quantitative analysis of the effectiveness of these laboratory sessions is carried out based on the numbers of students completing them. The sessions start with (1) an FPGA tutorial and proceed through implementations of microprocessor features: (2) immediate load, (3) immediate load to multiple registers, (4) addition, (5) operation code, (6) program memory, (7) jump, (8) conditional jump, (9) register to register, and (10) input-output. The results of three batches of students show that, within the time limits of a one-credit-hour course, students managed to complete some aspects of the implementation of a simple microprocessor.
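The session sequence maps naturally onto an incrementally growing instruction set. As a rough illustration (a hypothetical Python model, not the paper's FPGA design), the features of sessions 2 through 9 can be sketched as a fetch-decode-execute loop:

```python
# Minimal sketch of the microprocessor features built up in sessions 2-9,
# modeled in Python rather than on an FPGA; instruction names are hypothetical.
def run(program, max_steps=100):
    regs = [0] * 4          # small register file (session 3: multiple registers)
    pc = 0                  # program counter (session 6: program memory)
    for _ in range(max_steps):
        if pc >= len(program):
            break
        op, *args = program[pc]
        pc += 1
        if op == "LDI":     # session 2: immediate load
            regs[args[0]] = args[1]
        elif op == "ADD":   # session 4: addition (register to register, session 9)
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "JMP":   # session 7: unconditional jump
            pc = args[0]
        elif op == "JNZ":   # session 8: conditional jump
            if regs[args[0]] != 0:
                pc = args[1]
    return regs

# Sum 5 down to 1 into r0: exercises loads, addition, and a loop.
prog = [
    ("LDI", 0, 0), ("LDI", 1, 5), ("LDI", 2, -1),
    ("ADD", 0, 0, 1),   # r0 += r1
    ("ADD", 1, 1, 2),   # r1 -= 1
    ("JNZ", 1, 3),      # loop while r1 != 0
]
print(run(prog))        # [15, 0, -1, 0]
```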
APA, Harvard, Vancouver, ISO, and other styles
7

Kołata, Joanna, and Piotr Zierke. "The Decline of Architects: Can a Computer Design Fine Architecture without Human Input?" Buildings 11, no. 8 (August 6, 2021): 338. http://dx.doi.org/10.3390/buildings11080338.

Full text
Abstract:
Architects are required to have knowledge of current legislation, ergonomics, and the latest technical solutions. In addition, the design process necessitates an appreciation of the quality of space and a high degree of creativity. However, it is a profession that has undergone significant changes in recent years due to the pressure exerted by the development of information technology. The designs generated by computer algorithms are becoming such a serious part of designers' work that some are beginning to question whether they are more the work of computers than of humans. There are also increasing suggestions that software development will eventually lead to a situation where humans in the profession become redundant. This review article aims to present the computer technologies currently used, implemented, and planned for use in design, and to consider how they affect and will affect the work of architects in the future. It includes the opinions of a wide range of experts on the possibility of computer algorithms replacing architects. The ultimate goal of the article is an attempt to answer the question: will computers eliminate the human factor in the design of the future? It also considers the artificial intelligence or communication skills that computer algorithms would require to achieve this goal. The answers to these questions will contribute not only to determining the future of architecture but will also indicate the current condition of the profession. They will also help us to understand the technologies that are making computers capable of increasingly replacing human professions. Despite differing opinions on the possibility of computer algorithms replacing architects, the conclusions indicate that computers currently do not have the capabilities and skills to achieve this goal. Even given the speed of development of technologies such as artificial superintelligence, artificial brains, or quantum computers, the replacement of the architect by machines looks unrealistic in the coming decades.
APA, Harvard, Vancouver, ISO, and other styles
8

Choi, Yongseok, Eunji Lim, Jaekwon Shin, and Cheol-Hoon Lee. "MemBox: Shared Memory Device for Memory-Centric Computing Applicable to Deep Learning Problems." Electronics 10, no. 21 (November 8, 2021): 2720. http://dx.doi.org/10.3390/electronics10212720.

Full text
Abstract:
Large-scale computational problems addressed by modern computers, such as deep learning or big data analysis, cannot be solved on a single computer, but they can be solved with distributed computer systems. Because most distributed computing systems consist of a large number of networked computers that must propagate their computational results to one another, they can suffer from increasing communication overhead, resulting in lower computational efficiency. To solve these problems, we proposed an architecture for a distributed system that uses a shared memory simultaneously accessible by multiple computers. Our architecture is intended to be implemented in an FPGA or ASIC. Using an FPGA board that implements our architecture, we configured an actual distributed system and showed its feasibility. We compared the results of a deep learning application test using our architecture with those using Google TensorFlow's parameter server mechanism. We showed improvements of our architecture over the parameter server mechanism and identified future research directions by analyzing the remaining problems.
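The contrast can be made concrete with a toy sketch (purely conceptual; MemBox itself is FPGA/ASIC hardware): parameter-server-style aggregation passes messages to a central store, while a shared-memory style has workers accumulate into one region directly:

```python
# Toy contrast between parameter-server aggregation and shared-memory updates,
# using Python threads as stand-ins for networked nodes.
import threading

N_WORKERS, DIM = 4, 8

# Parameter-server style: each worker ships its gradient to a central store,
# which costs one message per worker per round.
def ps_round(grads):
    params = [0.0] * DIM
    for g in grads:
        params = [p + x / N_WORKERS for p, x in zip(params, g)]
    return params

# Shared-memory style: workers accumulate into one region under a lock,
# as if every node addressed the same physical memory.
shared = [0.0] * DIM
lock = threading.Lock()

def worker(grad):
    with lock:
        for i, x in enumerate(grad):
            shared[i] += x / N_WORKERS

grads = [[float(w + i) for i in range(DIM)] for w in range(N_WORKERS)]
threads = [threading.Thread(target=worker, args=(g,)) for g in grads]
for t in threads: t.start()
for t in threads: t.join()
print(ps_round(grads))   # both styles average the same gradients
print(shared)
```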
APA, Harvard, Vancouver, ISO, and other styles
9

Chistyakov, A.V. "On improving the efficiency of mathematical modeling of the problem of stability of construction." Artificial Intelligence 25, no. 3 (October 10, 2020): 27–36. http://dx.doi.org/10.15407/jai2020.03.027.

Full text
Abstract:
Algorithmic software for the mathematical modeling of structural stability is considered; the problem reduces to solving a partial generalized eigenvalue problem for sparse matrices of various structures and large orders, with automatic parallelization of calculations on modern parallel computers with graphics processors. In the mathematical modeling of physical and technical processes there is often a need to solve algebraic eigenvalue problems with large sparse matrices; such problems arise, in particular, in the strength analysis of structures in civil and industrial construction, aircraft construction, electric welding, etc. Solving these problems means determining the eigenvalues and eigenvectors of sparse matrices of different structures, and the efficiency of this step largely determines the efficiency of the mathematical modeling as a whole. The continuous growth of task parameters and the computation of more complete models of objects and processes demand ever higher computer performance; these requirements are far ahead of traditional parallel computing, even with multicore processors. Today this problem is addressed by powerful supercomputers of hybrid architecture, such as computers combining multicore processors (CPUs) and graphics processors (GPUs), which unite the MIMD and SIMD architectures. However, the potential of such high-performance computers can be exploited to the fullest only with algorithmic software that takes into account both the properties of the task and the features of the hybrid architecture. The growing complexity of modern hybrid supercomputers (increasing numbers of processors and cores, different types of memory, different programming technologies, etc.) significantly complicates the efficient use of these resources when creating parallel algorithms and programs, and raises the problem of creating algorithmic software that automatically executes the stages of work associated with the efficient use of computing resources, the storage and processing of sparse matrices, and the analysis of the reliability of computed results.
The paper presents the main methodological principles and implementation features of parallel algorithms for different structures of sparse matrices, which ensure effective use of the multilevel parallelism of a hybrid system and reduce data-exchange time during the computational process. As an example of these approaches, a hybrid algorithm of the subspace iteration method for banded and block-diagonal matrices with a border is given, and the peculiarities of data decomposition for matrices of profile structure are considered. The proposed approach automatically determines the required topology of the hybrid computer and the optimal amount of resources for organizing an efficient computational process, and frees users from the problems of parallelizing complex tasks. The developed algorithmic software was tested on problems from the University of Florida sparse matrix collection and was used at the S.P. Timoshenko Institute of Mechanics of the NAS of Ukraine to model the strength of composite materials using a three-dimensional "finite size fibers" model. The solution times obtained on computers of different architectures show a significant improvement in the time characteristics of mathematical modeling.
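As a concrete anchor for the method named above, here is a minimal dense NumPy/SciPy sketch of subspace iteration for the generalized eigenproblem Ax = λBx; it illustrates the textbook skeleton of the algorithm, not the authors' parallel sparse implementation:

```python
# Subspace iteration for the partial generalized eigenproblem A x = lambda B x
# (A, B symmetric, B positive definite), finding the k smallest eigenpairs.
# Dense toy only; the paper targets large sparse matrices on hybrid CPU/GPU systems.
import numpy as np
from scipy.linalg import eigh

def subspace_iteration(A, B, k, iters=50):
    n = A.shape[0]
    X = np.random.default_rng(0).standard_normal((n, k))  # starting subspace
    for _ in range(iters):
        Y = np.linalg.solve(A, B @ X)      # inverse-iteration step
        Ar, Br = Y.T @ A @ Y, Y.T @ B @ Y  # Rayleigh-Ritz projection
        w, V = eigh(Ar, Br)                # small dense generalized problem
        X = Y @ V                          # B-orthonormal Ritz vectors
    return w, X

A = np.diag(np.arange(1.0, 9.0))
B = np.eye(8)
w, _ = subspace_iteration(A, B, k=3)
print(w)   # approximates the three smallest eigenvalues: [1. 2. 3.]
```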
APA, Harvard, Vancouver, ISO, and other styles
10

Falcone, Alberto, Alfredo Garro, Marat S. Mukhametzhanov, and Yaroslav D. Sergeyev. "Representation of grossone-based arithmetic in simulink for scientific computing." Soft Computing 24, no. 23 (August 3, 2020): 17525–39. http://dx.doi.org/10.1007/s00500-020-05221-y.

Full text
Abstract:
Numerical computing is a key part of the traditional computer architecture. Almost all traditional computers implement the IEEE 754-1985 binary floating point standard to represent and work with numbers. The architectural limitations of traditional computers make it impossible to work with infinite and infinitesimal quantities numerically. This paper is dedicated to the Infinity Computer, a new kind of supercomputer that allows one to perform numerical computations with finite, infinite, and infinitesimal numbers. The already available software simulator of the Infinity Computer is used in different research domains for solving important real-world problems where precision represents a key aspect. However, the software simulator is not suitable for solving problems in control theory and dynamics, where visual programming tools like Simulink are used frequently. In this context, the paper presents an innovative solution that allows one to use the Infinity Computer arithmetic within the Simulink environment. It is shown that the proposed solution is user-friendly, general purpose, and domain independent.
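The underlying arithmetic uses a positional numeral system in powers of the infinite unit ① (grossone). A minimal sketch, assuming the standard representation of a number as a finite sum of coefficients times powers of ① (a toy model, not the Infinity Computer simulator or its Simulink integration):

```python
# Toy grossone arithmetic: a number is a dict {power_of_grossone: coefficient},
# i.e. sum(c * grossone**p); finite numbers live at power 0, infinitesimals
# at negative powers, infinite numbers at positive powers.
from collections import defaultdict

def g_add(x, y):
    z = defaultdict(float)
    for num in (x, y):
        for p, c in num.items():
            z[p] += c
    return {p: c for p, c in z.items() if c != 0}

def g_mul(x, y):
    z = defaultdict(float)
    for px, cx in x.items():
        for py, cy in y.items():
            z[px + py] += cx * cy
    return {p: c for p, c in z.items() if c != 0}

one = {0: 1.0}        # the finite number 1
grossone = {1: 1.0}   # the infinite unit itself
eps = {-1: 1.0}       # the infinitesimal grossone**-1

x = g_add(one, eps)           # 1 + grossone**-1
print(g_mul(x, grossone))     # (1 + grossone**-1) * grossone -> {1: 1.0, 0: 1.0}
```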
APA, Harvard, Vancouver, ISO, and other styles
11

al-Rawi, Ossama Mohamed. "Origins of Computational Design in Architecture." Future Engineering Journal 1, no. 1 (March 19, 2020): 1–9. http://dx.doi.org/10.54623/fue.fej.1.1.5.

Full text
Abstract:
The changes that the computer is bringing to architecture are one part of a revolutionary social upheaval. Tools not only change individual patterns and behaviour, but also cause transformations in institutions. Just as other tools have in the past, the computer is in the process of conditioning our understanding of the world and our perception of our place in it. The application of computers to architecture is more than a new sophisticated tool that can be manipulated like a pencil or pen. It is rather, "the culmination of the objectifying mentality of modernity and it is, therefore, inherently perspectival. The tyranny of computer-aided design and its graphic systems can be awesome: because its rigorous mathematical base is unshakable, it rigidly establishes a homogeneous space and is inherently unable to combine different structures of reference." Digital space is quantified by a programmer, who enacts a simplification of reality through a process of abstraction in which empirical data that do not fit the chosen conceptual framework are discarded. The aim of this paper is to investigate and track the origins of the core concepts of geometrics in the history of architecture and to identify basic conceptual applications that probably used the same concepts and processing steps that are used today with computers. The benefits gained from this investigation could help in developing new methodologies for form generation in architectural design. Comparative analysis is the methodology used to reach these theoretical origins, and the royal palace of the Alhambra is the main case study, together with related styles from Islamic architecture.
APA, Harvard, Vancouver, ISO, and other styles
12

Popov, Oleksandr, and Oleksiy Chystiakov. "On the Efficiency of Algorithms with Multi-level Parallelism." Physico-mathematical modelling and informational technologies, no. 33 (September 5, 2021): 133–37. http://dx.doi.org/10.15407/fmmit2021.33.133.

Full text
Abstract:
The paper investigates the efficiency of algorithms for solving computational mathematics problems that use a multilevel model of parallel computing on heterogeneous computer systems. A methodology for estimating the speedup of algorithms on computers using a multilevel model of parallel computing is proposed. As an example, a parallel algorithm of the subspace iteration method for solving the generalized algebraic eigenvalue problem for symmetric positive definite matrices of sparse structure is considered. For the presented algorithms, estimates of speedup coefficients and efficiency were obtained on hybrid-architecture computers with graphics accelerators, on multi-core shared-memory computers, and on multi-node computers of MIMD architecture.
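One simple way to reason about such estimates (an illustrative assumption, not the authors' methodology) is to compose Amdahl's law across the levels of parallelism, one factor per level:

```python
# Illustrative multilevel speedup estimate: Amdahl's law applied per level,
# with hypothetical per-level parallel fractions and worker counts.
def amdahl(parallel_fraction, workers):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / workers)

def multilevel_speedup(levels):
    # levels: [(parallel_fraction, workers), ...] e.g. nodes, cores, GPU lanes
    s = 1.0
    for f, n in levels:
        s *= amdahl(f, n)
    return s

# 8 MIMD nodes, 16 shared-memory cores each, 1024-lane GPU offload:
print(multilevel_speedup([(0.95, 8), (0.90, 16), (0.80, 1024)]))
```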
APA, Harvard, Vancouver, ISO, and other styles
13

Ding, Yongshan, and Frederic T. Chong. "Quantum Computer Systems: Research for Noisy Intermediate-Scale Quantum Computers." Synthesis Lectures on Computer Architecture 15, no. 2 (June 16, 2020): 1–227. http://dx.doi.org/10.2200/s01014ed1v01y202005cac051.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

An, Gang, Yu Li, and Xin Li. "Architecture Design of Aviation Fault-tolerant Computer Based on ARINC659 Bus Technology." MATEC Web of Conferences 179 (2018): 03025. http://dx.doi.org/10.1051/matecconf/201817903025.

Full text
Abstract:
The ARINC659 backplane bus is suitable for the high-safety and high-reliability requirements of aircraft on-board computer communication systems. This paper analyzes the structure of the ARINC659 serial backplane bus and its fault-tolerance mechanism. On the basis of the backplane bus, a fault-tolerant aviation computer with a redundancy degree of four is designed. Moreover, within each computer channel, the computer architecture and the computer systems of the instruction branch and the monitoring branch are designed. Fault-tolerant management of the computer is realized by bus fault tolerance, redundancy voting between computers, and the mutual monitoring of the instruction and monitoring branches.
APA, Harvard, Vancouver, ISO, and other styles
15

Berque, Dave, Terri Bonebright, and Michael Whitesell. "Using pen-based computers across the computer science curriculum." ACM SIGCSE Bulletin 36, no. 1 (March 2004): 61–65. http://dx.doi.org/10.1145/1028174.971324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Owen, G. S. "Teaching introductory and advanced computer graphics using micro-computers." ACM SIGCSE Bulletin 21, no. 1 (February 1989): 283–87. http://dx.doi.org/10.1145/65294.71443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Farsad, Behshld. "Networking Your Computer Lab: Benefits And Pitfalls." Hospitality Education and Research Journal 12, no. 2 (February 1988): 482. http://dx.doi.org/10.1177/109634808801200259.

Full text
Abstract:
Local area networks (LANs) are probably the most flexible and adaptable of communications systems, and the easiest to customize. LANs can fit virtually any location or site requirement. They can be tailored for any number of users, any application type, and any cost/performance ratio. LANs can work with small (microcomputer), medium (minicomputer), and large/complex (mainframe) systems. This great flexibility, which is due to several factors such as distributed architecture design, software standards, and hardware-independent technology, makes LANs easy to use in a computer laboratory environment. Currently, many hospitality institutions are investigating the feasibility of using LANs in their computer laboratories. However, LANs are still costly, and sometimes difficult to install.
APA, Harvard, Vancouver, ISO, and other styles
18

Mishra, Aamlan Saswat. "Social Acceptance Prediction Model for Generative Architectural Spaces in India." Journal of Advanced Research in Construction and Urban Architecture 6, no. 3 (July 23, 2021): 50–57. http://dx.doi.org/10.24321/2456.9925.202109.

Full text
Abstract:
Generative architectural design is an emerging design process that is evolving due to the growing computational power of computers and its ability to provide multiple design solutions in architecture. This process, however, has a few drawbacks, among them the high number of solutions, which take less time for computers to produce than for their human counterparts to interpret and choose from, and the low social acceptance of generative architectural design solutions. These problems persist because the algorithms are unaware of what humans deem acceptable solutions. One way to bridge such a gap is a survey simulation model, which the computer can apply to estimate the acceptance a created solution would receive if it were put through a survey. A mathematical model has been developed through analysis of a survey so that a computer can predict how acceptable a particular iteration of a generative architectural design process would be in a similar survey. Scores obtained in the survey simulation can be used to predict how acceptable a particular design iteration is, thereby culling less acceptable solutions and reducing the number of iterations presented to humans for review after running generative architectural algorithms.
APA, Harvard, Vancouver, ISO, and other styles
19

Kendon, Vivien M., Kae Nemoto, and William J. Munro. "Quantum analogue computing." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 368, no. 1924 (August 13, 2010): 3609–20. http://dx.doi.org/10.1098/rsta.2010.0017.

Full text
Abstract:
We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.
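The "extra bit doubles the size" scaling is easy to see numerically on the classical side: each added qubit doubles the amplitude vector a classical machine must store to represent the register (a small NumPy illustration, not taken from the paper):

```python
# A register of n qubits needs a state vector of 2**n complex amplitudes,
# so one extra qubit doubles the classical memory needed to represent it.
import numpy as np

for n in range(1, 6):
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0    # the basis state |00...0>
    print(n, "qubits ->", state.size, "amplitudes,", state.nbytes, "bytes")
```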
APA, Harvard, Vancouver, ISO, and other styles
20

Wu, Nan, and Yuan Xie. "A Survey of Machine Learning for Computer Architecture and Systems." ACM Computing Surveys 55, no. 3 (April 30, 2023): 1–39. http://dx.doi.org/10.1145/3494523.

Full text
Abstract:
Computer architecture and systems have long been optimized for the efficient execution of machine learning (ML) models. Now, it is time to reconsider the relationship between ML and systems and let ML transform the way computer architecture and systems are designed. This embraces a twofold meaning: improvement of designers' productivity and completion of the virtuous cycle. In this article, we present a comprehensive review of work that applies ML to computer architecture and system design. First, we perform a high-level taxonomy by considering the typical role that ML techniques take in architecture/system design, i.e., either for fast predictive modeling or as the design methodology. Then, we summarize the common problems in computer architecture/system design that can be solved by ML techniques and the typical ML techniques employed to resolve each of them. In addition to an emphasis on computer architecture in the narrow sense, we adopt the view that data centers can be recognized as warehouse-scale computers; sketchy discussions are provided on adjacent computer systems, such as code generation and compilers; we also give attention to how ML techniques can aid and transform design automation. We further provide a vision of future opportunities and potential directions and envision that applying ML to computer architecture and systems will thrive in the community.
APA, Harvard, Vancouver, ISO, and other styles
21

Khimich, A.N., T.V. Chistyakova, V.A. Sydoruk, and P.S. Yershov. "Intellectual computer mathematics system InparSolver." Artificial Intelligence 25, no. 4 (December 25, 2020): 60–71. http://dx.doi.org/10.15407/jai2020.04.060.

Full text
Abstract:
The paper considers the intellectual computer mathematics system InparSolver, which is designed to automatically explore and solve the basic classes of computational mathematics problems on multi-core computers with graphics accelerators. The problems of ensuring the reliability of results when solving problems with approximate input data are outlined. The features of existing computer mathematics systems are analyzed and their weaknesses identified. The functionality of InparSolver and some innovative approaches to the effective solution of problems on a hybrid architecture are described. Examples of the applied use of InparSolver for the mathematical modeling of processes in various subject areas are given. Nowadays, new and more complex objects and phenomena in many subject areas (nuclear energy, mechanics, chemistry, molecular biology, medicine, etc.) are constantly emerging and become subjects of mathematical research on a computer. This encourages the development of new numerical methods and technologies of mathematical modeling, as well as the creation of more powerful computers for their implementation. With the advent and constant development of supercomputers of various architectures, the problems of their effective use must be solved, the range of supported tasks expanded, the reliability of computed results ensured, and the level of intellectual information support for users (specialists in various fields) increased. Today, many specialists in the fields of information technology and parallel programming give special attention to these problems. The world's leading scientists in the field of computer technology see the solution to the problem of efficient use of modern supercomputers in the creation of algorithmic software that easily adapts to different computer architectures with different types of memory and coprocessors, supports efficient parallelism on millions of cores, etc. In addition, the efficiency of high-performance computing on modern supercomputers is improved by their intellectualization, transferring a significant part of the functions to the computer (symbolic languages for problem statement, investigation of the properties of mathematical models, visualization and analysis of results, etc.). The industry of development and usage of intelligent computer technologies is one of the main directions of science and technology development in modern society.
APA, Harvard, Vancouver, ISO, and other styles
22

Liu, Yuan-Ting, Kai Wang, Yuan-Dong Liu, and Dong-Sheng Wang. "A Survey of Universal Quantum von Neumann Architecture." Entropy 25, no. 8 (August 9, 2023): 1187. http://dx.doi.org/10.3390/e25081187.

Full text
Abstract:
The existence of universal quantum computers has been theoretically well established. However, building up a real quantum computer system not only relies on the theory of universality, but also needs methods to satisfy requirements on other features, such as programmability, modularity, scalability, etc. To this end, here we study the recently proposed model of quantum von Neumann architecture by putting it in a practical and broader setting, namely, the hierarchical design of a computer system. We analyze the structures of quantum CPU and quantum control units and draw their connections with computational advantages. We also point out that a recent demonstration of our model would require less than 20 qubits.
APA, Harvard, Vancouver, ISO, and other styles
23

Roudavski, Stanislav. "Towards Morphogenesis in Architecture." International Journal of Architectural Computing 7, no. 3 (September 2009): 345–74. http://dx.doi.org/10.1260/147807709789621266.

Full text
Abstract:
Procedural, parametric and generative computer-supported techniques in combination with mass customization and automated fabrication enable holistic manipulation in silico and the subsequent production of increasingly complex architectural arrangements. By automating parts of the design process, computers make it easier to develop designs through versioning and gradual adjustment. In recent architectural discourse, these approaches to designing have been described as morphogenesis. This paper invites further reflection on the possible meanings of this imported concept in the field of architectural designing. It contributes by comparing computational modelling of morphogenesis in plant science with techniques in architectural designing. Deriving examples from case-studies, the paper suggests potentials for collaboration and opportunities for bi-directional knowledge transfers.
APA, Harvard, Vancouver, ISO, and other styles
24

YEPEZ, JEFFREY. "TYPE-II QUANTUM COMPUTERS." International Journal of Modern Physics C 12, no. 09 (November 2001): 1273–84. http://dx.doi.org/10.1142/s0129183101002668.

Full text
Abstract:
This paper discusses a computing architecture that uses both classical parallelism and quantum parallelism. We consider a large parallel array of small quantum computers, connected together by classical communication channels. This kind of computer is called a type-II quantum computer, to differentiate it from a globally phase-coherent quantum computer, the first type of quantum computer, which has received nearly exclusive attention in the literature. Although a hybrid, a type-II quantum computer retains the crucial advantage allowed by quantum mechanical superposition that its computational power grows exponentially in the number of phase-coherent qubits per node; only short-range and short-time phase coherence is needed, which significantly reduces the level of engineering facility required for its construction. Therefore, the primary factor limiting its computational power is an economic one and not a technological one, since the volume of its computational medium can in principle scale indefinitely.
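A rough conceptual toy of this hybrid (a hypothetical Python sketch in the spirit of the lattice-gas algorithms type-II machines target, not the paper's architecture): each node keeps a single simulated qubit coherent for one short step, and nodes exchange only classical bits:

```python
# Many tiny phase-coherent nodes (single qubits, simulated as 2-amplitude
# vectors) evolve unitarily, are measured, and pass only classical bits to
# their neighbors each step.
import numpy as np

rng = np.random.default_rng(1)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # per-node unitary (Hadamard)

def step(bits):
    out = []
    for b in bits:
        psi = np.zeros(2)
        psi[b] = 1.0                           # re-prepare node from its bit
        psi = H @ psi                          # short coherent evolution
        out.append(int(rng.random() < psi[1] ** 2))   # measure to a bit
    return [out[-1]] + out[:-1]                # classical shift to neighbors

bits = [1] + [0] * 7
for _ in range(3):
    bits = step(bits)
print(bits)
```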
APA, Harvard, Vancouver, ISO, and other styles
25

Ronchi, Jessica, Grazia Butera, Enrico Frascari, and Piero Scaruffi. "A distributed blackboard-based architecture for tele-diagnosis." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 1, no. 2 (May 1987): 103–8. http://dx.doi.org/10.1017/s0890060400000196.

Full text
Abstract:
KANT is a knowledge-based system designed to diagnose Olivetti personal computers connected as remote terminals to a host computer through a SNA link. KANT is a collection of a few knowledge-based units: some of them operate in the field, and some operate back at the home office. They are configured in two blackboard systems which exchange data via the SNA link. The first blackboard runs on a cheap personal computer, only employs shallow knowledge, and performs the diagnoses that can be achieved in the field. The second blackboard runs on corporate mainframes, employs deep knowledge, and supports the more sophisticated analysis that is required from the project team for fixing complex problems.
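The two-blackboard organization follows the classic blackboard pattern: independent knowledge sources watch a shared store and post partial conclusions to it. A generic minimal sketch (hypothetical rules, not KANT's actual knowledge base):

```python
# Blackboard pattern in miniature: knowledge sources read the shared
# blackboard and post conclusions for later sources to build on.
blackboard = {"symptom": "remote terminal drops SNA session"}

def ks_field(bb):   # shallow, field-side knowledge source
    if "SNA" in bb.get("symptom", ""):
        bb["hypothesis"] = "link-level fault"

def ks_home(bb):    # deep, home-office knowledge source
    if bb.get("hypothesis") == "link-level fault":
        bb["diagnosis"] = "replace SNA adapter card"

for source in (ks_field, ks_home):   # control loop: fire applicable sources
    source(blackboard)
print(blackboard["diagnosis"])
```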
APA, Harvard, Vancouver, ISO, and other styles
26

Resch, Salonik, and Ulya R. Karpuzcu. "Benchmarking Quantum Computers and the Impact of Quantum Noise." ACM Computing Surveys 54, no. 7 (July 2021): 1–35. http://dx.doi.org/10.1145/3464420.

Full text
Abstract:
Benchmarking is how the performance of a computing system is determined. Surprisingly, even for classical computers this is not a straightforward process. One must choose the appropriate benchmark and metrics to extract meaningful results. Different benchmarks test the system in different ways, and each individual metric may or may not be of interest. Choosing the appropriate approach is tricky. The situation is even more open ended for quantum computers, where there is a wider range of hardware, fewer established guidelines, and additional complicating factors. Notably, quantum noise significantly impacts performance and is difficult to model accurately. Here, we discuss benchmarking of quantum computers from a computer architecture perspective and provide numerical simulations highlighting challenges that suggest caution.
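To see why noise dominates, consider the simplest possible model (an illustrative assumption, not one of the survey's noise models): if each gate succeeds independently with probability p, whole-circuit success decays exponentially with gate count:

```python
# Toy illustration of exponential fidelity decay under independent gate errors.
p_gate = 0.99          # per-gate success probability (hypothetical)
for gates in (10, 100, 1000):
    print(gates, "gates -> approx. success probability", p_gate ** gates)
```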
APA, Harvard, Vancouver, ISO, and other styles
27

Kawakami, Satoshi. "Research on optical computing system architecture for simple recurrent neural networks." Impact 2024, no. 1 (January 22, 2024): 51–53. http://dx.doi.org/10.21820/23987073.2024.1.51.

Full text
Abstract:
Moore’s Law, relating to the speed and capabilities of computers is becoming less applicable. In this ‘post-Moore’ era, a cross-disciplinary team based in the Constructive Electronics Laboratory, Kyushu University, Japan, is investigating optical computing system infrastructures, with a view to driving computing technology forward in a way that negates the need to comply with Moore’s Law. Associate Professor Satoshi Kawakami is an expert in electric circuits and computer architecture who is part of the team. The team’s expertise covers materials, devices, circuits, architectures and algorithms and is geared towards pioneering new computing technologies in the post-Moore era. Kawakami believes that the continuous improvement of computer systems with higher performance and lower power consumption/energy consumption will be essential to realise a sustainable advanced information society and wants to maximise the advantages of devices and hide their disadvantages at the system level, which will necessitate collaboration with higher system layers. Another important goal is reducing power consumption by improving the efficiency of computers. In one current project, the researchers are exploring optical computing system infrastructure for simple recurrent neural networks. The team is keen to re-examine the ideal state of optical circuits from the perspective of the entire system, including electrical memory and interfaces.
APA, Harvard, Vancouver, ISO, and other styles
28

Bardak Denerel, Simge, and Gaye Anil. "Computer Aided Drawing Programs in Interior Architecture Education." Revista Amazonia Investiga 10, no. 39 (May 5, 2021): 28–39. http://dx.doi.org/10.34069/ai/2021.39.03.3.

Full text
Abstract:
Interior architecture education has displayed much variability from the past to the present day. Additionally, computer-aided drawing systems have become an irreplaceable part of interior architecture education, as in all other design disciplines. The contribution of computers in education to the design process has created a process of, Hand drawing – Design – Design in computer environment – Product – Prototype. Currently, traditional drawing methods are used much less. Computer-aided drawing programs in universities display differences in terms of models and content. Additionally, the year and semester in which these lessons are taught are different in every university. In this context, this study deals with computer-aided drawing lessons in a total of 63 programs in 31 interior architecture departments and 32 interior architecture and environmental design departments in Turkey and the Turkish Republic of Northern Cyprus linked to the Council of Higher Education (YÖK) currently. This research was completed with the screening model. Data collection started in October 2020 and was completed at the end of 15 days. Screening was performed to learn which programs are taught in the programs in interior architecture and interior architecture and environmental design departments in different faculties. The software features of these programs were analyzed. The results of the study revealed the similarities of the different programs to each other.
APA, Harvard, Vancouver, ISO, and other styles
29

Bushur, Jacob, and Chao Chen. "Exploiting Raspberry PI Clusters and Campus Lab Computers for Distributed Computing." International Journal of Computer Science and Information Technology 14, no. 03 (June 30, 2022): 41–54. http://dx.doi.org/10.5121/ijcsit.2022.14304.

Full text
Abstract:
Distributed computing networks harness the power of existing computing resources and grant access to significant computing power while averting the costs of a supercomputer. This work aims to configure distributed computing networks using different computer devices and explore the benefits of the computing power of such networks. First, an HTCondor pool consisting of sixteen Raspberry Pi single-board computers and one laptop is created. The second distributed computing network is set up with Windows computers in university campus labs. With the HTCondor setup, researchers inside the university can utilize the lab computers as computing resources. In addition, the HTCondor pool is configured alongside the BOINC installation on both computer clusters, allowing them to contribute to high-throughput scientific computing projects in the research community when the computers would otherwise sit idle. The scalability of these two distributed computing networks is investigated through a matrix multiplication program and the performance of the HTCondor pool is also quantified using its built-in benchmark tool. With such a setup, the limits of the distributed computing network architecture in computationally intensive problems are explored.
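The scalability experiment described above can be sketched in miniature: split the product across workers, with Python's multiprocessing standing in for the HTCondor pool (dimensions and worker count are arbitrary choices):

```python
# Block matrix multiplication split across worker processes; each worker
# multiplies one block of rows of A against the full matrix B.
import numpy as np
from multiprocessing import Pool

def row_block_product(args):
    block, B = args
    return block @ B

if __name__ == "__main__":
    n, workers = 512, 4
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)
    blocks = np.array_split(A, workers)          # one row block per worker
    with Pool(workers) as pool:
        C = np.vstack(pool.map(row_block_product, [(blk, B) for blk in blocks]))
    print(np.allclose(C, A @ B))                 # True: distributed result matches
```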
APA, Harvard, Vancouver, ISO, and other styles
30

SAMET, REFIK. "CHOOSING BETWEEN DESIGN OPTIONS FOR REAL-TIME COMPUTERS TOLERATING A SINGLE FAULT." Journal of Circuits, Systems and Computers 19, no. 05 (August 2010): 1041–68. http://dx.doi.org/10.1142/s0218126610006591.

Full text
Abstract:
This paper proposes a methodology for supporting the design of fault-tolerant computers for real-time applications. To this end, the paper first presents the steps of fault tolerance and describes mechanisms that can be used to realize them. Then, design options consisting of the described mechanisms are proposed and summarized in a table. From that, the paper proposes a flowchart for choosing among the many design options available for building a redundant computer system. Choosing an optimal design option is performed according to the number of redundant computers, the mode of operation of the redundant computers, the computer failure mode, and the severity of the real-time constraint. Finally, graphical models for sequencing the mechanisms of the design options are proposed. The main merits of the proposed methodology include a spectrum of design options of fault-tolerant mechanisms for real-time computers tolerating a single fault at a time and a guide for choosing among them.
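Redundancy voting, one of the mechanisms such design options are built from, reduces to very little code. A minimal sketch (hypothetical values; a real voter must also meet real-time deadlines):

```python
# 2-of-3 majority voting over the outputs of three redundant computers:
# a single faulty channel is masked by the agreeing pair.
from collections import Counter

def majority_vote(results):
    value, count = Counter(results).most_common(1)[0]
    if count >= 2:
        return value
    raise RuntimeError("no majority: more than a single fault")

print(majority_vote([42, 42, 17]))   # faulty third channel is outvoted -> 42
```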
APA, Harvard, Vancouver, ISO, and other styles
31

Sitanggang, Andri Sahata, R. Fenny Syafariani, Novrini Hasti, Febilita Wulan Sari, and Dhara Pasya. "LAN network architecture design at Nurul Jalal Islamic Boarding School, North Jakarta." International Journal of Advances in Applied Sciences 13, no. 1 (March 1, 2024): 123. http://dx.doi.org/10.11591/ijaas.v13.i1.pp123-133.

Full text
Abstract:
A computer network is a telecommunications network that allows computers to communicate with each other by exchanging data. Nurul Jalal Islamic Boarding School has taken advantage of advances in computer network technology, but its computers have not yet been fully and properly connected. Therefore, in this research, a local area network (LAN) architecture connected to the Speedy internet service is built and developed. The design of this network architecture includes the connection to the Speedy internet network and a computer lab architecture connected to the network at the Nurul Jalal Islamic Boarding School. This study aims to integrate the existing technology with the system and to add technologies and systems that do not yet exist, so that they can be integrated into a computer network connected to the Speedy internet network. It is hoped that this research will help the teachers and students of Nurul Jalal Islamic Boarding School explore information and thus support effective and efficient learning.
APA, Harvard, Vancouver, ISO, and other styles
32

Aisyah, Siti. "Computer Networking Company in Business Area." International Research Journal of Management, IT & Social Sciences 2, no. 7 (July 1, 2015): 1. http://dx.doi.org/10.21744/irjmis.v2i7.67.

Full text
Abstract:
Computer networking is not something new today. Almost every company has a computer network to facilitate the flow of information within the company. The Internet, increasingly popular today, is a giant network of computers that are connected and can interact. This is possible because network technology has developed very rapidly. But being connected to the Internet can also pose a dangerous threat: many attacks can occur from both inside and outside, such as viruses, Trojans, and hackers. In the end, the security of computers and computer networks plays an important role here. A good, optimized firewall configuration can reduce these threats. There are three types of firewall configuration: the screened host firewall system (single-homed bastion), the screened host firewall system (dual-homed bastion), and the screened subnet firewall. The firewall must also be configured to open the right ports for Internet connectivity, because a properly configured firewall filters incoming data packets in accordance with the security policy or policies. This firewall architecture will be used to optimize the firewall on the network.
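Policy-based port filtering of the kind described here can be sketched as a rule table checked against each incoming packet (the rule set below is hypothetical):

```python
# First-match packet filtering with a default-deny policy.
RULES = [
    {"port": 80,  "action": "allow"},   # HTTP
    {"port": 443, "action": "allow"},   # HTTPS
    {"port": 23,  "action": "deny"},    # Telnet: blocked by policy
]

def filter_packet(port):
    for rule in RULES:
        if rule["port"] == port:
            return rule["action"]
    return "deny"                        # anything not listed is dropped

for port in (443, 23, 6667):
    print(port, "->", filter_packet(port))
```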
APA, Harvard, Vancouver, ISO, and other styles
33

Larchenko, L. V., A. V. Parkhomenko, B. D. Larchenko, and V. R. Korniienko. "DESIGN MODELS OF BIT-STREAM ONLINE-COMPUTERS FOR SENSOR COMPONENTS." Radio Electronics, Computer Science, Control, no. 1 (April 2, 2024): 62. http://dx.doi.org/10.15588/1607-3274-2024-1-6.

Full text
Abstract:
Context. Currently, distributed real-time control systems need devices that perform online computing operations close to the sensor. The proposed online-computers of elementary mathematical functions can be used as components for the functional conversion of signals in the form of pulse streams received from measuring sensors with frequency output. Objective. The objective of the study is the development of mathematical, architectural and automata models for the design of bit-stream online-computers of elementary mathematical functions, in order to create a unified approach to their design through which the accuracy of calculating functions can be increased, functional capabilities expanded, hardware costs reduced, and design efficiency increased. Method. Mathematical models of the devices were developed using the method of forming increments of ascending step functions based on inverse functions, with minimization of the calculation error. Automata models of the online-computers based on Moore finite state machines were developed; their graph diagrams ensure the clarity of the function-implementation algorithms and increase the visibility and invariance of implementation in formal programming and hardware-description languages. Results. The paper presents the results of the research, development and practical approbation of design models of bit-stream online-computers for power functions and the root extraction function. A generalized architecture of an online-computer is proposed. Conclusions. The considered functional online-computers are effective from the point of view of calculation accuracy, simplicity of technical implementation, and universality of the architecture.
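The method of forming increments of ascending step functions from an inverse function can be illustrated with a bit-stream square-rooter (a reading of the general technique, not the paper's concrete automaton):

```python
# Bit-stream square root: the input arrives as a stream of pulses; an output
# pulse is emitted whenever the running input count crosses the next step of
# the inverse function y**2, so the output pulse count tracks sqrt(x).
def sqrt_stream(pulses):
    x = y = 0
    out = []
    for _ in range(pulses):
        x += 1                       # one incoming pulse from the sensor
        if x >= (y + 1) ** 2:        # ascending step of the inverse function
            y += 1
            out.append(1)            # emit an output pulse
        else:
            out.append(0)
    return y, out

y, out = sqrt_stream(25)
print(y)   # 5: the output pulse count equals sqrt(25)
```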
APA, Harvard, Vancouver, ISO, and other styles
34

Pyle, I. C. "Computers from Logic to Architecture." Computing & Control Engineering Journal 1, no. 3 (1990): 108. http://dx.doi.org/10.1049/cce:19900028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Parsons, Michael G., and Klaus-Peter Beier. "Microcomputer Software for Computer-Aided Ship Design." Marine Technology and SNAME News 24, no. 03 (July 1, 1987): 246–64. http://dx.doi.org/10.5957/mt1.1987.24.3.246.

Full text
Abstract:
The rapid evolution of the microcomputer has changed the software needs of today's naval architects. The Department of Naval Architecture and Marine Engineering at The University of Michigan has been a leader in the application of computers in ship design education. The computer environment readily available to the department's students has changed dramatically in the past few years with the evolution of the Computer-Aided Marine Design Laboratory within the department and the creation of the Computer Aided Engineering Network (CAEN) within the College of Engineering. The microcomputer facilities available to the students are briefly described. To fully integrate this capability into the department's curriculum, a coordinated suite of computer-aided ship design software has been developed for use on the Macintosh and IBM-PC/XT/AT microcomputers provided for the students. To support the use of this and other software on a wide range of computers, a portable, device-independent computer graphics subprogram package, M-PLOT, has been developed. The educational philosophy behind this design software and its scope, capabilities, and use in ship design education are described. Examples of the use of selected programs are presented to illustrate these capabilities. Plans for further work are outlined. The effort is well along toward the goal of a complete, microcomputer-based ship design software environment.
APA, Harvard, Vancouver, ISO, and other styles
36

VAN METER, RODNEY, THADDEUS D. LADD, AUSTIN G. FOWLER, and YOSHIHISA YAMAMOTO. "DISTRIBUTED QUANTUM COMPUTATION ARCHITECTURE USING SEMICONDUCTOR NANOPHOTONICS." International Journal of Quantum Information 08, no. 01n02 (February 2010): 295–323. http://dx.doi.org/10.1142/s0219749910006435.

Full text
Abstract:
In a large-scale quantum computer, the cost of communications will dominate the performance and resource requirements, place many severe demands on the technology, and constrain the architecture. Unfortunately, fault-tolerant computers based entirely on photons with probabilistic gates, though equipped with "built-in" communication, have very large resource overheads; likewise, computers with reliable probabilistic gates between photons or quantum memories may lack sufficient communication resources in the presence of realistic optical losses. Here, we consider a compromise architecture, in which semiconductor spin qubits are coupled by bright laser pulses through nanophotonic waveguides and cavities using a combination of frequent probabilistic and sparse deterministic entanglement mechanisms. The large photonic resource requirements incurred by the use of probabilistic gates for quantum communication are mitigated in part by the potential high-speed operation of the semiconductor nanophotonic hardware. The system employs topological cluster-state quantum error correction for achieving fault-tolerance. Our results suggest that such an architecture/technology combination has the potential to scale to a system capable of attacking classically intractable computational problems.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhao, Yongwei, Yunji Chen, and Zhiwei Xu. "Fractal Parallel Computing." Intelligent Computing 2022 (September 5, 2022): 1–10. http://dx.doi.org/10.34133/2022/9797623.

Full text
Abstract:
As machine learning (ML) becomes the prominent technology for many emerging problems, dedicated ML computers are being developed at a variety of scales, from clouds to edge devices. However, the heterogeneous, parallel, and multilayer characteristics of conventional ML computers concentrate the cost of development in the software stack, namely ML frameworks, compute libraries, and compilers, which limits the productivity of new ML computers. Fractal von Neumann architecture (FvNA) is proposed to address this programming productivity issue for ML computers. FvNA is scale-invariant to program, thus making the development of a family of scaled ML computers as easy as that of a single node. In this study, we generalize FvNA to the field of general-purpose parallel computing. We model FvNA as an abstract parallel computer, referred to as the fractal parallel machine (FPM), to demonstrate several representative general-purpose tasks that are efficiently programmable. FPM limits the entropy of programming by applying constraints to the control pattern of the parallel computing system. However, FPM is still general-purpose and cost-optimal. We establish some preliminary results showing that FPM is as powerful as many fundamental parallel computing models such as BSP and the alternating Turing machine. Therefore, FvNA is also generally applicable to various fields other than ML.
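Scale-invariance to program is the key property: the same code runs at every scale of the machine. A toy Python analogue (purely conceptual, not the FvNA instruction set):

```python
# A "fractal" program: identical code at every node, recursively splitting
# work across child nodes until a leaf executes it directly.
def fractal_reduce(data, fanout=4, leaf_size=8):
    if len(data) <= leaf_size:            # leaf node: compute directly
        return sum(data)
    chunk = (len(data) + fanout - 1) // fanout
    children = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # each child runs the *same* program, one scale down
    return sum(fractal_reduce(c, fanout, leaf_size) for c in children)

print(fractal_reduce(list(range(100))))   # 4950, the same result at any scale
```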
APA, Harvard, Vancouver, ISO, and other styles
38

Kameyama, Michitaka. "Special Issue on Computer Architecture for Robotics." Journal of Robotics and Mechatronics 2, no. 6 (December 20, 1990): 417. http://dx.doi.org/10.20965/jrm.1990.p0417.

Full text
Abstract:
In the realization of intelligent robots, highly intelligent manipulation and movement techniques are required, such as intelligent man-machine interfaces, intelligent information processing for path planning and problem solving, practical robot vision, and high-speed sensor signal processing. Thus, very high-speed processing to cope with vast amounts of data, as well as the development of various algorithms, has become an important subject. To fulfill such requirements, the development of high-performance computer architecture using advanced microelectronics technology is required. For these purposes, approaches to implementing computer systems for robots can be classified as follows: (a) Use of general-purpose computers. As the performance of workstations and personal computers increases year by year, software development is the major task, requiring no hardware development except for interfaces with peripheral equipment. Since current high-level languages and software can be applied, this approach is excellent for system development, but its processing performance is limited. (b) Use of commercially available (V)LSI chips. This is an approach to designing a computer system from a combination of commercially available LSIs. Since the development of both hardware and software is involved, the development period tends to be longer than in (a). These chips include general-purpose microprocessors, memory chips, digital signal processors (DSPs), and multiply-adder LSIs. Though the kinds of available chips are limited to some degree, this approach can meet considerably higher performance specifications because a number of chips can be used flexibly. (c) Design, development, and system configuration of VLSI chips. This is an approach to developing new special-purpose VLSI chips using ASIC (Application Specific Integrated Circuit) technology, that is, semicustom or full-custom technology. If these attain practical use and are marketed, they will be widely used as high-performance VLSI chips at the level of (b). Since a very high performance specification must be satisfied, the study of very high-performance VLSI computer architecture becomes very important. However, this approach, involving chip development, requires a very long design-development period, from the determination of processor specifications to system configuration using the fabricated chips. Among these three approaches, the order from the viewpoint of ease of development is (a), (b), (c), while that from the viewpoint of performance is (c), (b), (a). The approaches are not mutually exclusive but complementary: for example, new chips developed under (c) can also provide new impetus as components for (a) and (b). A further common point is that performance improvement through highly parallel architectures becomes important in all of them. This special issue introduces, from the above standpoint, the latest information on the present state and future prospects of computer techniques for robotics in Japan. We hope that this issue will contribute to the development of the field.
APA, Harvard, Vancouver, ISO, and other styles
39

Xiong, Lu, and Dean Bruton. "On Procedural Modeling of Urban Form - a Designer’s View and a Research Practice." Advanced Materials Research 374-377 (October 2011): 330–35. http://dx.doi.org/10.4028/www.scientific.net/amr.374-377.330.

Full text
Abstract:
Procedural modeling is a term in computer graphics referring to the creation of digital models from sets of rules. With user-defined rule sets, digital models can be generated automatically by computers rather than modeled manually. Several popular procedural modeling methods are listed and compared in the paper. A new research framework for procedural modeling of urban and architectural form is introduced. We also choose Jørn Utzon's "additive architecture" as a case study and show the possibilities for future urban and architectural design.
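As an illustration of rule-based generation in the sense described above, here is a minimal Python sketch of an L-system, one of the classic procedural modeling methods; the rewrite rule is a textbook example, not one taken from the paper.

# A minimal L-system: the model is generated by repeatedly rewriting a seed
# string with user-defined rules; a turtle-graphics interpreter could then
# turn the result into 2D/3D geometry. Rules and symbols are illustrative.

RULES = {"F": "F[+F]F[-F]F"}  # classic plant-like branching rule

def expand(axiom, rules, generations):
    """Apply the rewrite rules to the axiom for a number of generations."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Two generations already yield a branching structure.
print(expand("F", RULES, 2))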
APA, Harvard, Vancouver, ISO, and other styles
40

Zadiraka, Valerii, Oleksandr Khimich, and Inna Shvidchenko. "Models of Computer Calculations." Cybernetics and Computer Technologies, no. 2 (September 30, 2022): 38–51. http://dx.doi.org/10.34229/2707-451x.22.2.4.

Full text
Abstract:
Introduction. The complexity of computational algorithms for solving typical problems of computational, applied, and discrete mathematics is analyzed from the perspective of the theory of computation, depending on the computer architecture and the computing model used: single-processor, multiprocessor, or quantum. The following classes of problems are considered: systems of linear algebraic equations, the Cauchy problem for systems of ordinary differential equations, numerical integration, boundary value problems for ordinary differential equations, factorization of numbers, finding the discrete logarithm of a number in multiplicative integer groups, searching for a record in an unordered database, etc. The purposes of the paper are: 1. To investigate how computational complexity depends on the computer architecture and the computational model. 2. To show that the construction of the computational process under given conditions of calculation is related to the solution of the following problems: the existence of an ε-solution to the problem; the existence of T-effective computing algorithms; and the possibility of building a real computing process under the given computing conditions. 3. To investigate the effect of rounding of numbers on computational complexity (especially when solving problems of transcomputational complexity). 4. To give complexity estimates and the total error of the computational algorithm for a number of typical problems of computational, applied, and discrete mathematics. The results. Complexity estimates of computational algorithms for the listed classes of problems are given for single-processor, multiprocessor, and quantum computing models. The main focus is on high-performance computing: the use of the principles of parallel data processing and of quantum mechanics. Conclusions. The connection between complexity estimates of computational algorithms, computer architectures, and models of calculation is demonstrated. The characteristics of the first quantum computers (2016–2022) to have gone beyond laboratory research are given. Keywords: computer technologies, rounding error, sequential, parallel and quantum computing models, complexity estimate.
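The unordered-database entry in this list is a clean example of how the complexity estimate depends on the computing model. Below is a small Python sketch using the standard textbook query counts (classical worst case on the order of N versus Grover's algorithm on the order of √N); the numbers are illustrative, not drawn from the paper.

# Query-count comparison for searching an unordered database of N records:
# classically, on the order of N queries in the worst case; with Grover's
# algorithm, on the order of sqrt(N) iterations.

import math

for n in (10**6, 10**9, 10**12):
    classical = n                # O(N) classical queries (worst case)
    quantum = math.isqrt(n)      # O(sqrt(N)) Grover iterations
    print(f"N={n:>14,}: classical ~{classical:,} vs quantum ~{quantum:,}")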
APA, Harvard, Vancouver, ISO, and other styles
41

Da Rosa, Evandro Chagas Ribeiro, and Rafael De Santiago. "Ket Quantum Programming." ACM Journal on Emerging Technologies in Computing Systems 18, no. 1 (January 31, 2022): 1–25. http://dx.doi.org/10.1145/3474224.

Full text
Abstract:
Quantum programming languages (QPLs) fill the gap between quantum mechanics and classical programming constructions, simplifying the development of quantum applications. However, most QPLs address the inherent quantum programming problem while neglecting the implementation constraints of quantum computers. We present a runtime architecture for classical-quantum execution that mitigates the limited interaction between classical and quantum computers arising from the cloud-based model of quantum computation provided by several vendors, which implies that the quantum computer processes jobs in batch. In the proposed runtime architecture, we introduce (i) runtime quantum code generation, to enable generic quantum programming and dynamic quantum execution; and (ii) the concept of futures, to handle dynamic interaction between classical and quantum computers. To support our proposal, we have implemented the Ket Quantum Programming framework, which features a Python-embedded classical-quantum programming language named Ket, the C++ quantum programming library Libket, and the Ket Bitwise (quantum computing) Simulator. The last improves on the bitwise representation, making simulation time depend not on the number of qubits but on the amount of superposition and entanglement in the simulation.
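The futures concept can be sketched with plain Python: a measurement result becomes a handle whose value is resolved only after the batched quantum job completes, so classical code keeps running in the meantime. The sketch below is an analogy built on the standard library; none of these names are Ket's actual API.

# Futures for classical-quantum interaction, by analogy: submitting a batched
# "quantum" job returns a future, and classical code only blocks when the
# result is actually needed. run_quantum_batch is a hypothetical stand-in.

from concurrent.futures import ThreadPoolExecutor
import random
import time

def run_quantum_batch(shots):
    """Stand-in for a cloud quantum backend executing a batched job."""
    time.sleep(0.1)  # network + queue latency
    return sum(random.randint(0, 1) for _ in range(shots))

with ThreadPoolExecutor() as pool:
    future = pool.submit(run_quantum_batch, 1024)   # classical code continues
    # ... other classical work could run here ...
    print("measurement outcome:", future.result())  # blocks only when needed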
APA, Harvard, Vancouver, ISO, and other styles
42

Ракитский, Антон Андреевич, and Борис Яковлевич Рябко. "Information theory as a means of determining the main factors affecting the processors architecture." Вычислительные технологии, no. 6 (January 19, 2021): 104–15. http://dx.doi.org/10.25743/ict.2020.25.6.007.

Full text
Abstract:
In this article we investigate the computer development process of the past decades in order to identify the factors that influence it most. We describe these factors and use them to predict the direction of further development. To solve these problems, we use the concept of Computer Capacity, which allows us to estimate the performance of computers theoretically, relying only on a description of their architecture.
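A hedged sketch of the Computer Capacity idea, following the authors' earlier information-theoretic work (the exact normalization may differ): the capacity of a computer $I$ can be written as

$$C(I) = \lim_{T \to \infty} \frac{\log N(T)}{T},$$

where $N(T)$ is the number of distinct sequences of instructions the computer can execute in time $T$. Performance can then be compared across architectures from instruction sets and timings alone, without running benchmarks.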
APA, Harvard, Vancouver, ISO, and other styles
43

Bouraya, Sara, and Abdessamad Belangour. "Dissecting of the two-stages object detection models architecture and performance." Bulletin of Electrical Engineering and Informatics 13, no. 3 (June 1, 2024): 1694–706. http://dx.doi.org/10.11591/eei.v13i3.6424.

Full text
Abstract:
Artificial intelligence (AI) is the discipline focused on enabling computers to operate autonomously without explicit programming. Within AI, computer vision is an emerging field tasked with endowing machines with the ability to interpret visual data from images and videos. Over recent decades, computer vision has found applications in diverse fields such as autonomous vehicles, information retrieval, surveillance, and understanding human behavior. Object detection, a key aspect of computer vision, employs deep neural networks to continually advance detection accuracy and speed. Its goal is to precisely identify objects within images or videos and assign them to specific classes. Object detection models typically consist of three components: a backbone network for feature extraction, a neck model for feature aggregation, and a head for prediction. The focus of this study lies on two-stage detectors. This study aims to provide a comprehensive review of two-stage detectors in object detection, followed by benchmarking, to offer insights for researchers and scientists. By analyzing and understanding the efficacy of these models, this research seeks to guide future developments in the field of object detection within computer vision.
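A minimal PyTorch sketch of the backbone/neck/head decomposition described above; the layer sizes and the single dense head are illustrative assumptions, not any specific published two-stage detector (a real one would add a region proposal stage between neck and head).

# Backbone extracts features, neck aggregates them, head predicts class
# scores and box offsets. Shapes and layers are illustrative only.

import torch
import torch.nn as nn

backbone = nn.Sequential(                  # feature extraction
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)
neck = nn.Sequential(                      # feature aggregation
    nn.Conv2d(64, 128, 1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(128, 20 + 4)              # 20 class scores + 4 box offsets

x = torch.randn(1, 3, 224, 224)            # a dummy image batch
print(head(neck(backbone(x))).shape)       # torch.Size([1, 24])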
APA, Harvard, Vancouver, ISO, and other styles
44

Palme, Jacob, and Sirkku Männikö. "Use of computer conferencing to teach a course on humans and computers." ACM SIGCSE Bulletin 29, no. 3 (September 1997): 88–90. http://dx.doi.org/10.1145/268809.268847.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Ogozalek, Virginia Z. "A comparison of male and female computer science students' attitudes toward computers." ACM SIGCSE Bulletin 21, no. 2 (June 1989): 8–14. http://dx.doi.org/10.1145/65738.65740.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Cooper, Martin. "When Computers Changed Banking." ITNOW 63, no. 3 (August 16, 2021): 26–27. http://dx.doi.org/10.1093/itnow/bwab074.

Full text
Abstract:
Abstract The story of a rivalry, drive-in banking, a computer called Pegasus and a team with great vision. Martin Cooper MBCS explores the history of Martins Bank and its desire to be first with computers.
APA, Harvard, Vancouver, ISO, and other styles
47

Tribollet, J. "Globally controlled artificial semiconducting molecules as quantum computers." Quantum Information and Computation 5, no. 7 (November 2005): 561–72. http://dx.doi.org/10.26421/qic5.7-4.

Full text
Abstract:
Quantum computers are expected to be considerably more efficient than classical computers in the execution of some specific tasks. The difficulty in the practical implementation of such computers is to build a microscopic quantum system that can be controlled at a larger, macroscopic scale. Here I show that vertical lines of donor atoms embedded in an appropriate Zinc Oxide semiconductor structure can constitute artificial molecules, each of which is a copy of the same quantum computer. In this scalable architecture, each unit of information is encoded onto the electronic spin of a donor. Contrary to most existing practical proposals, the logical operations here require only global control of the spins by electromagnetic pulses. Ensemble measurements simplify the readout. With appropriate improvement of its growth and doping methods, Zinc Oxide could be a good semiconductor for the next generation of computers.
APA, Harvard, Vancouver, ISO, and other styles
48

MOON, YOUNGME, and CLIFFORD NASS. "Are computers scapegoats? Attributions of responsibility in human–computer interaction." International Journal of Human-Computer Studies 49, no. 1 (July 1998): 79–94. http://dx.doi.org/10.1006/ijhc.1998.0199.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

FON-DER-FLAASS, DMITRI, and IVAN RIVAL. "COLLECTING INFORMATION IN GRADED ORDERED SETS." Parallel Processing Letters 03, no. 03 (September 1993): 253–60. http://dx.doi.org/10.1142/s0129626493000290.

Full text
Abstract:
We consider a set of computers with precedence constraints (an ordered set) in which stored information can be passed, serially or in parallel, from one computer to any other which is an immediate successor. When is it possible to organize information transmission so that each computer receives all information from its predecessors, without duplication?
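A toy Python sketch of the setting: the precedence constraints form a DAG, each computer starts with its own item, and items propagate to immediate successors in topological order. The filter that prevents duplication assumes a sender knows what the receiver already holds, which is precisely the organizational question the paper studies; the graph here is illustrative.

# Propagate information along a DAG of precedence constraints so that every
# computer ends up with all of its predecessors' items, each received once.

from graphlib import TopologicalSorter  # Python 3.9+

# Immediate-predecessor relation of a small ordered set:
# a < b, a < c, b < d, c < d.
preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
info = {n: {n} for n in preds}          # each computer starts with its own item

for node in TopologicalSorter(preds).static_order():
    for p in preds[node]:
        info[node] |= info[p] - info[node]  # transmit only items not yet held

print(sorted(info["d"]))                 # ['a', 'b', 'c', 'd'] — no duplicates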
APA, Harvard, Vancouver, ISO, and other styles
50

Ayebeng Botchway, Edward. "Software Application Employed in Architectural Design Education: The Case of KNUST." Review of European Studies 8, no. 2 (March 15, 2016): 30. http://dx.doi.org/10.5539/res.v8n2p30.

Full text
Abstract:
Computer software has come to replace manual drafting in both architectural education and practice. Drawing boards were employed in architectural education and practice for a long time. Since the first half of the twentieth century, computer hardware and the corresponding software have seen dramatic change and development, manufactured and tailored to meet the demands of changing technological and human needs. Architecture has had its fair share since the advent of computers and has seen major milestone changes in their integration into the profession. In the last century, architectural education in Ghana also witnessed this revolution. Since Computer Aided Architectural Design (CAAD) was introduced in the Department of Architecture (DOA) at the Kwame Nkrumah University of Science and Technology (KNUST) in the year 2000, there has been tremendous improvement in the CAAD tools used in architectural design education. There is therefore the need to evaluate the CAAD software used by the students and faculty. This paper looked at the existence and the mode in which CAAD software is applied in the department, the predominant software used by students, and the mode of acquisition of the software. The findings showed that CAAD is taught as part of the curriculum in the DOA and has helped improve architectural design education over the years. However, the full potential and benefit of CAAD use has not been realized, as a result of challenges faced by students and faculty in teaching, learning, and the acquisition of software.
APA, Harvard, Vancouver, ISO, and other styles