Journal articles on the topic 'Microprocessors Operating systems (Computers) Microprocessors'

Consult the top 50 journal articles for your research on the topic 'Microprocessors Operating systems (Computers) Microprocessors.'


1

Gallacher, Joe. "Microprocessors and their operating systems." Microprocessors and Microsystems 14, no. 8 (1990): 550–51. http://dx.doi.org/10.1016/0141-9331(90)90056-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Samoilova, M. E., and A. A. Zubrilin. "AN EXTRACURRICULAR EVENT “HISTORY OF INFORMATICS IN DATES”." Informatics in school, no. 5 (June 23, 2019): 7–13. http://dx.doi.org/10.32517/2221-1993-2019-18-5-7-13.

Abstract:
The article presents an extracurricular event, “History of informatics in dates”, conducted using infographics, and explains why infographics help students memorize historical events associated with the development of informatics. Dates and facts from the history of the development of the Internet, computers, microprocessors and operating systems are given. The work with dates is carried out in game form.
3

Shevelev, S. S. "RECONFIGURABLE COMPUTING MODULAR SYSTEM." Radio Electronics, Computer Science, Control 1, no. 1 (2021): 194–207. http://dx.doi.org/10.15588/1607-3274-2021-1-19.

Abstract:
Context. Modern general-purpose computers are capable of implementing any algorithm, but for certain problems they cannot compete with specialized computing modules in terms of processing speed. Specialized devices have high performance, effectively solve array-processing and artificial intelligence tasks, and are used as control devices. The use of specialized microprocessor modules that process character strings, logical values, and numerical values represented as integers and real numbers makes it possible to increase the speed of arithmetic operations by exploiting parallelism in data processing.
 Objective. To develop principles for constructing microprocessor modules for a modular computing system with a reconfigurable structure: an arithmetic-symbolic processor, specialized computing devices, and switching systems capable of configuring the microprocessors and specialized computing modules into a multi-pipeline structure that increases the speed of arithmetic and logical operations. To develop high-speed algorithms for designing specialized symbol-processing accelerator processors, along with algorithms and structural and functional diagrams of specialized mathematical modules that perform arithmetic operations in direct codes on neural-like elements, and systems for decentralized control of the operation of blocks.
 Method. An information graph of the computational process of a modular system with a reconfigurable structure has been built. Structural and functional diagrams and algorithms have been developed that implement the construction of specialized modules for performing arithmetic and logical operations, search operations, and functions for replacing occurrences in processed words. Software has been developed for simulating the operation of the arithmetic-symbolic processor, the specialized computing modules, and the switching systems.
 Results. A block diagram of a reconfigurable computing modular system has been developed. The system consists of compatible functional modules, is capable of static and dynamic reconfiguration, and has a parallel structure for connecting the processor and computing modules through the use of interface channels. It comprises an arithmetic-symbolic processor, specialized computing modules, and switching systems, and performs specific tasks of symbolic information processing and arithmetic and logical operations.
 Conclusions. The architecture of reconfigurable computing systems can change dynamically during operation. It becomes possible to adapt the architecture of a computing system to the structure of the problem being solved, creating problem-oriented computers whose structure corresponds to that of the problem. The main computing elements in reconfigurable computing systems are not general-purpose microprocessors but programmable logic integrated circuits, which are combined via high-speed interfaces into a single computing field. Reconfigurable multi-pipeline computing systems based on such fields are an effective tool for streaming information processing and control problems.
4

STATSENKO, D., B. ZLOTENKO, S. NATROSHVILI, T. KULIK, and S. DEMISHONKOVA. "COMPUTER SYSTEM FOR CONTROLLING INDOOR LIGHTING." HERALD OF KHMELNYTSKYI NATIONAL UNIVERSITY 295, no. 2 (2021): 40–44. http://dx.doi.org/10.31891/2307-5732-2021-295-2-40-44.

Abstract:
The analysis of modern trends related to “Smart House” technologies is carried out in this article. Programming languages for microcontrollers and microprocessors are considered, and software products used to create mobile applications for smartphones and tablets are presented. A computer system for remote control of room lighting is considered; its design and principle of operation are shown schematically. A prototype of the computer system was built with the following functions: 1) control (switching on/off) of lighting systems according to the needs of the owner of the premises; 2) transfer of information about the illumination level to the user, the owner of the premises; 3) automatic switching on/off of the electric and electroluminescent light sources included in the room lighting control system. A photo of the prototype is shown. The control program is based on the use of a photoresistor: the Arduino microcontroller receives and processes information from the photoresistor and, on that basis, automatically sends signals to the room lighting control system. Formulas for calculating the illumination from the data obtained from the prototype's photoresistor are given. The processed information is sent over wireless networks to the user's interactive devices, so the user can remotely check the illumination value and, if necessary, control it. The visual interface of a mobile application for phones and tablets running the Android operating system is presented. The resulting computer system for controlling the lighting of premises is easy to use and does not require significant financial costs. The methods of modeling, observation and study of computer systems are used in the work. The obtained results yield an effective computer system for remote control of indoor lighting.
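The abstract refers to formulas for calculating illumination from photoresistor readings without reproducing them. A common way such an estimate is made, from a light-dependent resistor (LDR) in a voltage divider read by a 10-bit ADC, is sketched below in Python. All constants (the fixed resistor value, the LDR datasheet parameters, the lux threshold) are illustrative assumptions and are not taken from the paper.

```python
def ldr_resistance(adc_value, r_fixed=10_000, adc_max=1023):
    """Resistance of the LDR in a divider Vcc--LDR--node--R_fixed--GND,
    where a 10-bit ADC samples the node voltage."""
    if adc_value <= 0 or adc_value >= adc_max:
        raise ValueError("ADC reading out of usable range")
    # node fraction: adc_value / adc_max = r_fixed / (r_ldr + r_fixed)
    return r_fixed * (adc_max - adc_value) / adc_value

def lux_estimate(r_ldr, r_at_10lux=20_000, gamma=0.7):
    """Approximate lux from LDR resistance via the usual datasheet
    power law R = R10 * (lux / 10) ** (-gamma)."""
    return 10 * (r_at_10lux / r_ldr) ** (1 / gamma)

def lights_needed(adc_value, threshold_lux=300):
    """Decide whether to switch the lamps on, mimicking the
    automatic on/off mode described in the abstract."""
    return lux_estimate(ldr_resistance(adc_value)) < threshold_lux
```

On an Arduino-style board the `adc_value` would come from a 0–1023 analog read; a mid-scale reading maps to a dim room (lights on), while a high reading maps to a bright one (lights off).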
5

McCarthy, J. J., and J. J. Frief. "EDS and WDS Automation: Past Development and Future Technology." Microscopy and Microanalysis 5, S2 (1999): 556–57. http://dx.doi.org/10.1017/s143192760001610x.

Abstract:
Early Development. Automation of electron probe analysis began to flourish in the early 1970s, spurred on by advances in computer technology and the availability of operating systems and programming languages that the individual researcher could afford to dedicate to a single instrument. By the end of the decade, most researchers and vendors in the microanalysis field had adopted the PDP-11 minicomputer and languages such as FOCAL, FORTRAN and BASIC that ran on these computers. A good summary of these early efforts was given by Hatfield. The first use of the energy dispersive detector on the electron probe in 1968 added the need to control the acquisition, display and processing of EDS spectra. As a result, the 1970s were also a time when much attention was focused on the development of software for on-line data reduction and analysis. These efforts produced a suite of programs to provide matrix corrections and spectral processing, and automation of WDS data collection. The culmination of these development efforts was first reported in 1977 with the analysis of a lunar whitlockite mineral by simultaneous EDS/WDS measurement. This analysis determined the concentration of 23 elements, 8 by EDS, and took a total of 37 minutes for data collection and analysis. In this paper, the authors noted the complementary use of EDS and WDS (WDS for trace elements and severe peak overlaps, EDS for other elements and rapid qualitative analysis) in their automated instrument, a convention that remains common on the electron probe even today. Toward the end of the decade the analytical accuracy and precision achieved by automated analysis of bulk samples approached the limits of the instrumentation, with the exception of the analysis of light element concentrations.
Two Decades of Improvements. The explosive growth in digital electronics and microprocessors for data processing and control functions during the 1980s was rapidly applied to electron probe automation. Second- and third-generation automation systems included direct control of many microscope functions, beam position and imaging conditions. Motor positioning was more precise and far faster. As a result, the data collection and analysis of 23 elements reported in 1977 could be accomplished at least three times faster on a modern instrument.
6

Halim, Fransiscus Ati. "Application Software For Learning CPU Process of Interrupt and I/O Operation." International Journal of New Media Technology 4, no. 2 (2017): 69–74. http://dx.doi.org/10.31937/ijnmt.v4i2.782.

Abstract:
The purpose of this research is to develop simulation software capable of processing interrupt instructions and I/O operations so that, in the future, it can contribute to developing a kernel. Interrupt and I/O operations are necessary in the development of a kernel system: the kernel is the medium through which hardware and software communicate. However, not much application software exists that helps learners understand the interrupt process. In managing the hardware, conditions sometimes arise in the system that need the attention of the processor, or in this case the kernel managing the hardware. In response to such a condition, the system issues an interrupt request to sort the condition out. I/O operations are needed because a computer system consists not just of a CPU and memory but also of other devices, such as I/O devices. This paper elaborates the application software for learning interrupts. With interrupt instructions and I/O operations in the simulation program, the program better represents the processes that happen in a real computer. In this case, the program is able to run interrupt instructions and I/O operations, and other state changes run as expected. With regard to its main purpose, this simulation may lead to the development of a kernel for an operating system. The results of instruction testing show that 90% of instructions run properly. In executing instructions, the simulation program still has a bug following the execution of Jump and conditional Jump instructions.
 Index Terms—Interrupt; I/O; Kernel; Operating System
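The cycle this abstract describes, finishing an instruction, checking for pending interrupt requests, saving context, running a handler, and restoring context, can be illustrated with a minimal Python sketch. The instruction set and every name below are hypothetical and are not taken from the paper's simulator.

```python
class TinyCPU:
    """Minimal fetch-execute loop that services interrupts between
    instructions; all opcodes and names here are illustrative."""

    def __init__(self, program, handlers):
        self.program = program      # list of (opcode, operand) pairs
        self.handlers = handlers    # irq number -> handler instruction list
        self.pc = 0
        self.acc = 0
        self.pending = []           # queue of raised irq numbers
        self.log = []               # stands in for an output port

    def raise_irq(self, n):
        self.pending.append(n)

    def _execute(self, op, arg):
        if op == "LOAD":
            self.acc = arg
        elif op == "ADD":
            self.acc += arg
        elif op == "OUT":
            self.log.append(self.acc)   # I/O: write accumulator to the port

    def run(self):
        while self.pc < len(self.program):
            op, arg = self.program[self.pc]
            self.pc += 1
            self._execute(op, arg)
            # interrupt check after every completed instruction
            while self.pending:
                irq = self.pending.pop(0)
                saved_pc, saved_acc = self.pc, self.acc   # save context
                for h_op, h_arg in self.handlers[irq]:
                    self._execute(h_op, h_arg)
                self.pc, self.acc = saved_pc, saved_acc   # restore context
```

Raising an interrupt before `run()` causes the handler to execute after the first instruction, then the main program resumes with its context intact.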
7

Popentiu, Florin. "Computers and microprocessors components and systems." Microelectronics Reliability 33, no. 1 (1993): 111. http://dx.doi.org/10.1016/0026-2714(93)90054-3.

8

Terrell, T. J. "Book Review: Computers and Microprocessors: Components and Systems." International Journal of Electrical Engineering Education 23, no. 1 (1986): 94. http://dx.doi.org/10.1177/002072098602300126.

9

Ramana Murthy, G., C. Senthilpari, P. Velrajkumar, and Lim Tien Sze. "Monte-Carlo analysis of a new 6-T full-adder cell for power and propagation delay optimizations in 180 nm process." Engineering Computations 31, no. 2 (2014): 149–59. http://dx.doi.org/10.1108/ec-01-2013-0023.

Abstract:
Purpose – Demand for and the popularity of portable electronic devices are driving designers to strive for higher speeds, longer battery life and more reliable designs. Recently, an overwhelming interest has been seen in the problem of designing digital systems with low power at no performance penalty. Most very large-scale integration applications, such as digital signal processing, image processing, video processing and microprocessors, extensively use arithmetic operations. Binary addition is considered the most crucial part of the arithmetic unit because all other arithmetic operations usually involve addition. Building low-power, high-performance adder cells is of great interest these days, and any modification made to the full adder affects the system as a whole. The full adder design has attracted many designers' attention in recent years, and its power reduction is one of their important concerns. This paper presents a 1-bit full adder using as few as six transistors (6-T) per bit in its design. The paper aims to discuss these issues. Design/methodology/approach – The outcome of the proposed adder architectural design is based on a micro-architectural specification. This is a textual description, and the adder's schematic can accurately predict the performance, power, propagation delay and area of the design. It is designed with a combination of multiplexing control input (MCIT) and Boolean identities. The proposed design features lower operating voltage, higher computing speed and lower energy consumption due to the efficient operation of the 6-T adder cell. The design adopts the MCIT technique effectively to alleviate the threshold voltage loss problem commonly encountered in pass transistor logic design. Findings – The simulated results of the proposed adder circuit are used to verify the correctness and timing of each component.
According to the design concepts, the simulated results are compared to existing adders from the literature, and significant improvements in the proposed adder are observed. Some of the drawbacks of the existing adder circuits from the literature are as follows. The Shannon theorem-based adder needs voltage swing restoration in the sum circuit; due to this problem, the Shannon circuit consumes high power and operates at low speed. The MUX-14T adder circuit is designed using the multiplexer concept, which has a complex node in its design paradigm; driving this node from the input consumes high power to transmit the voltage level. The MCIT-7T adder circuit is designed using the MCIT technique in a way that consumes more power, leading to high power consumption in the circuit. The MUX-12T adder circuit is also designed by the MCIT technique; its carry circuit has a buffering restoration unit, and its complement leads to high power dissipation and propagation delay. Originality/value – The new 6-T full adder circuit overcomes the drawbacks of the adders from the literature and successfully reduces area, power dissipation and propagation delay.
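Whatever the transistor-level realization, a 1-bit full adder cell computes the same Boolean functions, and an arithmetic unit builds wider addition by chaining cells. The Python sketch below states those functions and a ripple-carry chain; it models only the logical behavior, not the 6-T circuit itself.

```python
def full_adder(a, b, cin):
    """Boolean functions realized by any 1-bit full adder cell."""
    s = a ^ b ^ cin                     # sum bit
    cout = (a & b) | (cin & (a ^ b))    # carry out (majority of a, b, cin)
    return s, cout

def ripple_add(x, y, bits=8):
    """Chain 1-bit full adders to add two unsigned integers."""
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry    # final carry doubles as the overflow flag
```

An exhaustive check over all eight input combinations confirms that `s + 2*cout` always equals `a + b + cin`, the defining property of the cell.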
10

Thangamuthu, Tamilarasi, Rajasekar Rathanasamy, Saminathan Kulandaivelu, et al. "Experimental investigation on the influence of carbon-based nanoparticle coating on the heat transfer characteristics of the microprocessor." Journal of Composite Materials 54, no. 1 (2019): 61–70. http://dx.doi.org/10.1177/0021998319859926.

Abstract:
In the current scenario, thermal management plays a vital role in electronic system design. The temperature of electronic components should not exceed manufacturer-specified levels in order to maintain the safe operating range and service life. A reduction in heat build-up will certainly enhance component life and the reliability of the system. The aim of this research work is to analyze the effect of multi-walled carbon nanotube and graphene coatings on the heat transfer capacity of a microprocessor used in personal computers. The performance of the coating materials was investigated at three different levels of central processing unit usage. Multi-walled carbon nanotube-coated and graphene-coated microprocessors showed better heat transfer than uncoated microprocessors. Maximum decreases in heat build-up of 7℃ and 9℃ were achieved for multi-walled carbon nanotube-coated and graphene-coated microprocessors, respectively, compared to the pure substrate. From the results, graphene has proven to be a more suitable candidate for effective heat transfer than multi-walled carbon nanotubes, due to the higher thermal conductivity of the former.
11

Benner, G., J. Frey, M. Roß-Meßemer, and W. Probst. "A new computer-powered TEM with unique imaging capabilities." Proceedings, annual meeting, Electron Microscopy Society of America 52 (1994): 490–91. http://dx.doi.org/10.1017/s0424820100170189.

Abstract:
1. Introduction. A modern electron microscope designed for routine operation must provide user-friendly operation and a high degree of automation without compromising its imaging performance. Adequate computerization of the system is the way to achieve this goal. Up to now, even the most advanced computer-controlled microscopes can set important parameters like the magnification, the image brightness or the image orientation only in discrete steps. In the new Zeiss EM 906, continuous adjustment of these parameters has been realised for the first time by means of real-time computer interpolation of the lens excitations between the discrete lens current combinations corresponding to the discrete parameter settings. 2. Computer architecture of the microscope. The computer network comprises a DOS-compatible system (host) computer, four subsystems controlled by separate microprocessors, and a flexible data and program memory. The integrated system computer controls the electron optics, while the gun, the goniometer, the camera and the vacuum system are controlled and monitored by autonomous microprocessor systems, which are connected to the host computer for data transfer via interrupt-controlled parallel interfaces.
12

Irita, Takahiro, Takayuki Ogura, Minoru Fujishima, and Koichiro Hoh. "Microprocessor architecture utilizing redundant-binary operation." Systems and Computers in Japan 30, no. 13 (1999): 106–15. http://dx.doi.org/10.1002/(sici)1520-684x(19991130)30:13<106::aid-scj11>3.0.co;2-4.

13

Bocharov, N. A. "Modeling of algorithms of disaster tolerance of robot groups on hardware and software Elbrus." Radio industry (Russia) 29, no. 3 (2019): 8–14. http://dx.doi.org/10.21778/2413-9599-2019-29-3-8-14.

Abstract:
In the operation of ground-based robotic systems for military purposes, targeted actions of an opposing party can cause numerous failures that suddenly change the state of the system and therefore fall into the category of catastrophic failures. This raises the question of providing disaster tolerance, the ability of a group of robots to continue operations with partially lost efficiency. A significant but still unresolved issue for ground-based robot systems is equipping them with computing hardware built on domestic microprocessors and software. The paper offers techniques and algorithms that serve as a basis for building disaster-tolerant control systems for ground-based robot systems based on domestic computer systems and software. The authors have developed algorithms to ensure the tolerance of on-board control systems against catastrophic failures and present numerical results showing increased operation time of groups of robots in the event of catastrophic failures. The findings improve import substitution options in the field of robotics.
14

Benner, Gerd, Manfred Prinz, Johannes Bihr, and Josef Frey. "A New Computer Control System for the EM 910 Transmission Electron Microscope." Proceedings, annual meeting, Electron Microscopy Society of America 48, no. 1 (1990): 160–61. http://dx.doi.org/10.1017/s0424820100179555.

Abstract:
Modern analytical electron microscopes must provide a multitude of illumination and imaging modes, user-friendly operation and a high degree of flexibility. In addition, a large number of monitoring and control functions must be performed. In the new Zeiss EM 910 this is achieved by complete digitization of the instrument control system. The computer network comprises an AT-compatible system computer, 4 microprocessor-controlled subsystems and a flexible data and program memory. Two control panels and an interactive control monitor are used for operation of the instrument, and a keyboard is integrated for data input. Fig. 1 shows a block diagram of the computer control of the EM 910. The integrated system computer, with an 80286 processor and a clock frequency of 12 MHz, controls and monitors the electron optics (lenses, deflection systems, stigmators). Specially developed interrupt-controlled parallel interfaces ensure rapid communication between the system computer and the 4 autonomous Z80 microprocessor-controlled subsystems. The subsystems are: 1) the gun subsystem, which controls and monitors the high-voltage system; 2) the goniometer subsystem, for control and operation of the motorized 4-axis eucentric goniometer; 3) the camera subsystem, which controls the sheet film camera and the components required for exposure, such as automatic screens and the shutter; and 4) the vacuum subsystem, for control of the vacuum system and monitoring of the compressed air and water supply.
15

Agbedemnab, P. A., M. A. Agebure, and S. Akobre. "A Fault Tolerant Scheme for Detecting Overflow in Residue Number Microprocessors." International Journal Of Engineering And Computer Science 7, no. 02 (2018): 23578–87. http://dx.doi.org/10.18535/ijecs/v7i2.09.

Abstract:
The decomposition of larger numbers into smaller ones, termed residues, is the main operation behind the concept of the Residue Number System (RNS); it possesses inherent features such as parallelism and independent digit arithmetic computations. These features of the RNS have made it desirable for applications that require intensive computations, such as Digital Signal Processing (DSP), digital filtering and convolutions. Overflow detection is one of the major challenges confronting the efficient implementation of RNS in general-purpose computer processors. Overflow occurs in RNS when an illegitimate value is represented within the legitimate range – the Dynamic Range (DR) – as if it were a legitimate value. This misrepresentation of results, which usually arises during addition operations, ultimately affects systems built on this number system. It is therefore imperative that steps are taken not only to detect but also to correct overflow whenever it occurs. In this paper, an additive overflow detection and correction scheme for the moduli set is presented. The scheme uses a redundant modulus to extend the DR of the moduli set. The proposed scheme is demonstrated theoretically to be efficient by comparison with previous similar works.
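The abstract does not reproduce the paper's actual moduli set, so the Python sketch below illustrates the general idea with a hypothetical base set {7, 8, 9} and redundant modulus 5: addition proceeds independently per residue channel, and the extra channel extends the dynamic range enough to reconstruct the true sum and flag overflow of the base range. This is a generic illustration of a redundant-modulus scheme, not the paper's algorithm.

```python
def to_rns(x, moduli):
    """Decompose an integer into residues: the core RNS operation."""
    return [x % m for m in moduli]

def rns_add(a, b, moduli):
    """Digit-wise modular addition; each residue channel is independent,
    which is the parallelism the abstract refers to."""
    return [(ra + rb) % m for ra, rb, m in zip(a, b, moduli)]

def crt_reconstruct(residues, moduli):
    """Chinese Remainder Theorem: recover the integer below prod(moduli)."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # inverse exists: moduli pairwise coprime
    return x % M

def add_with_overflow_check(x, y, base=(7, 8, 9), redundant=5):
    """Add x and y in RNS; flag additive overflow of the base dynamic
    range by also computing in the redundant-extended moduli set."""
    ext = base + (redundant,)
    s_ext = rns_add(to_rns(x, ext), to_rns(y, ext), ext)
    dr = 1
    for m in base:
        dr *= m                       # legitimate range of the base set
    true_sum = crt_reconstruct(s_ext, ext)
    return s_ext[:len(base)], true_sum >= dr
```

With this base set the dynamic range is 504, so 300 + 300 = 600 is flagged as overflow even though its base residues [5, 0, 6] look like a legitimate value, which is exactly the misrepresentation the abstract describes.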
16

Moses, Melanie, George Bezerra, Benjamin Edwards, James Brown, and Stephanie Forrest. "Energy and time determine scaling in biological and computer designs." Philosophical Transactions of the Royal Society B: Biological Sciences 371, no. 1701 (2016): 20150446. http://dx.doi.org/10.1098/rstb.2015.0446.

Abstract:
Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy–time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue ‘The major synthetic evolutionary transitions’.
17

Bennett, P. A. "Safety Critical Systems: So What… ?" Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 206, no. 4 (1992): 197–205. http://dx.doi.org/10.1243/pime_proc_1992_206_335_02.

Abstract:
Safety critical systems contain advanced computer, microprocessor and software technologies to a degree of sophistication that is frequently beyond the understanding of many practising engineers. Many of these systems control the safe operation of everyday things such as anti-lock braking on cars, personnel lifts and trains, as well as industrial processes and fly-by-wire aircraft. This paper describes the nature, processes, standards and assessment methods currently being employed with safety critical systems, and addresses various questions that the practising engineer may ask. It demonstrates that although the technology and methods may be novel, concerns surrounding the evaluation of safety critical systems are yet another instance of the age-old dilemma involved in exercising engineering judgement.
18

Li, Tao, Lizy Kurian John, Anand Sivasubramaniam, N. Vijaykrishnan, and Juan Rubio. "OS-Aware Branch Prediction: Improving Microprocessor Control Flow Prediction for Operating Systems." IEEE Transactions on Computers 56, no. 1 (2007): 2–17. http://dx.doi.org/10.1109/tc.2007.250619.

19

Dyduch, Janusz, and Mieczysław Kornaszewski. "New systems in management of railway traffic in Poland." Transportation Overview - Przeglad Komunikacyjny 2017, no. 10 (2017): 45–53. http://dx.doi.org/10.35117/a_eng_17_10_06.

Abstract:
The new computing solutions and microprocessor technology being implemented for the management of train traffic, including microcomputers and programmable logic controllers (PLCs), contribute to the creation of modern rail traffic control systems. These systems provide high reliability, low power consumption, stability and safety of train movement. One of the most important goals for the railway boards of European countries is the unification of rail transport systems, in particular the unification of signaling and rail traffic control systems. A good solution is to implement, as soon as possible, the European Rail Traffic Management System (ERTMS), which combines the safe train operation system ETCS with the digital Global System for Mobile Communications – Railways (GSM-R).
20

Kameyama, Michitaka. "Special Issue on Integration of Intelligence for Robotics in VLSI Chips." Journal of Robotics and Mechatronics 8, no. 6 (1996): 491. http://dx.doi.org/10.20965/jrm.1996.p0491.

Abstract:
Intelligence is one of the most important subjects in information and electronics systems. In many applications, such as multimedia systems, home electronics systems, factory automation systems, security systems and aerospace systems, more advanced intelligent processing technologies need to be developed, as shown in the Figure. There are two approaches to increasing intelligence, although they are closely related to each other and may not be separable. One is an algorithm-based approach to directly increase the quality of intelligence. The other is a computational-power-based approach to directly increase processing performance. Even if a single operation is very simple, its repeated application often makes the processing intelligent. The problem is how to increase the computational power. It is obvious that software acceleration using general-purpose microprocessors has limitations. Therefore, special acceleration using newly developed chips is one of the most important solutions. In particular, real-world applications need very quick response to dynamically changing real-world environments, so special-purpose processors and special-purpose accelerators or engines are essential to make such applications realistic; in other words, to realize high-speed processing of intelligence. On the other hand, solid-state circuit technology enabling single-chip systems has advanced rapidly, resulting in dramatic improvements in both performance and cost per function. In fact, one-gigabit DRAMs and ten-SPECint95 microprocessors containing ten million transistors are being developed with recent VLSI technology. It is no longer a dream to develop practical special processors using recent VLSI technology. Moreover, new architectures and new-concept circuits have been actively studied for the next generation of integration technology. From the above point of view, this special issue was planned to demonstrate this important area. In particular, the intelligent robot is a typical class of application, so its intelligence technology also makes many other applications promising. Finally, I would like to express my appreciation to the authors for their efforts and contributions to this special issue, and also to the members of the Editorial Board for their useful comments.
21

Buchman, Timothy G. "Computers in the Intensive Care Unit: Promises Yet to Be Fulfilled." Journal of Intensive Care Medicine 10, no. 5 (1995): 234–40. http://dx.doi.org/10.1177/088506669501000505.

Abstract:
Computers, whether disguised as microprocessor-controlled bedside devices or obvious as electronic patient charts, are proliferating in intensive care units. The history of the relationship between computers and intensive care units suggests that their joint development has been characterized by customization of a device or a program to automate each specific task. Failure to develop standard definitions of clinical data, standards for their interpretation, or a comprehensive model of the process of critical care retards development of computer systems beyond device-dedicated microprocessors. An agenda that gives priority to systematic examination of definitions, descriptions, and processes of critical care over additional hardware and software development is recommended.
APA, Harvard, Vancouver, ISO, and other styles
22

Khan, Fatima Hameed, Muhammad Adeel Pasha, and Shahid Masud. "Advancements in Microprocessor Architecture for Ubiquitous AI—An Overview on History, Evolution, and Upcoming Challenges in AI Implementation." Micromachines 12, no. 6 (2021): 665. http://dx.doi.org/10.3390/mi12060665.

Full text
Abstract:
Artificial intelligence (AI) has successfully made its way into contemporary industrial sectors such as automobiles, defense, industrial automation 4.0, healthcare technologies, agriculture, and many other domains because of its ability to act autonomously without continuous human interventions. However, this capability requires processing huge amounts of learning data to extract useful information in real time. The buzz around AI is not new, as this term has been widely known for the past half century. In the 1960s, scientists began to think about machines acting more like humans, which resulted in the development of the first natural language processing computers. It laid the foundation of AI, but there were only a handful of applications until the 1990s due to limitations in processing speed, memory, and computational power available. Since the 1990s, advancements in computer architecture and memory organization have enabled microprocessors to deliver much higher performance. Simultaneously, improvements in the understanding and mathematical representation of AI gave birth to its subset, referred to as machine learning (ML). ML includes different algorithms for independent learning, and the most promising ones are based on brain-inspired techniques classified as artificial neural networks (ANNs). ANNs have subsequently evolved to have deeper and larger structures and are often characterized as deep neural networks (DNN) and convolution neural networks (CNN). In tandem with the emergence of multicore processors, ML techniques started to be embedded in a range of scenarios and applications. Recently, application-specific instruction-set architecture for AI applications has also been supported in different microprocessors. 
Thus, continuous improvement in microprocessor capabilities has reached a stage where it is now possible to implement complex real-time intelligent applications like computer vision, object identification, speech recognition, data security, spectrum sensing, etc. This paper presents an overview on the evolution of AI and how the increasing capabilities of microprocessors have fueled the adoption of AI in a plethora of application domains. The paper also discusses the upcoming trends in microprocessor architectures and how they will further propel the assimilation of AI in our daily lives.
APA, Harvard, Vancouver, ISO, and other styles
23

Zhang, Weini. "Research on Recognition Method of Basketball Goals Based on Image Analysis of Computer Vision." Journal of Sensors 2021 (September 20, 2021): 1–11. http://dx.doi.org/10.1155/2021/5269431.

Full text
Abstract:
Moving-target detection is involved in many engineering applications, but basketball presents particular difficulties because of its time-varying speed and uncertain path. The purpose of this paper is to use computer-vision image analysis to identify the path and speed of a basketball goal, so as to meet the needs of recognition and achieve trajectory prediction. This research mainly discusses a basketball goal recognition method based on computer vision. In the research process, a Kalman filter is used to improve the KCF tracking algorithm to track the basketball path. The algorithm is implemented in MATLAB, which avoids mixed programming of MATLAB and other languages and reduces the difficulty of designing the interface software. For data acquisition, extended EPROM is used to store user programs, and parallel interface chips (such as the 8255A) can be configured in the system to output switch control signals and to drive display and print operations. An automatic basketball bowling counter based on an 8031 microprocessor is used as the host computer. After level conversion by a MAX232, it is connected to the RS232C serial port of a PC, and the collected data are sent to the workstation recording the results. For convenience of operation, the GUI facilities of MATLAB are used to ease the exchange of information between users and computers so that users can see the competition results intuitively. The processing frame rate of the tested video reaches 60 frames/second, above the 25 frames/second needed to meet the real-time requirements of the system. The results show that the basketball goal recognition method used in this study has strong anti-interference ability and stable performance.
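The abstract pairs a Kalman filter with the KCF tracker; the filtering step itself can be sketched as a constant-velocity Kalman filter smoothing a detector's 2D position measurements. This is an illustrative sketch, not the paper's implementation: the model matrices, noise covariances and the synthetic straight-line track are all assumptions.

```python
import numpy as np

def make_cv_kalman(dt=1.0):
    """Constant-velocity model for a 2D point target (frame-time units)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], float)   # state: [x, y, vx, vy]
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)    # only position is measured
    Q = np.eye(4) * 1e-2                   # process noise (assumed)
    R = np.eye(2) * 1e-1                   # measurement noise (assumed)
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle; z is the detector's (x, y) measurement."""
    x, P = F @ x, F @ P @ F.T + Q              # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - H @ x)                    # correct with the innovation
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

F, H, Q, R = make_cv_kalman()
rng = np.random.default_rng(0)
x, P = np.zeros(4), np.eye(4)
for t in range(1, 61):                         # noisy straight-line track
    z = np.array([0.5 * t, 0.3 * t]) + rng.normal(0.0, 0.05, 2)
    x, P = kalman_step(x, P, z, F, H, Q, R)
```

In such a pipeline, `z` would typically come from the tracker's bounding-box centre each frame, and the predicted state bridges frames where the detector loses the ball.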
APA, Harvard, Vancouver, ISO, and other styles
24

Quilici-Gonzalez, J. A., G. Kobayashi, M. C. Broens, and M. E. Q. Gonzalez. "Ubiquitous Computing." International Journal of Technoethics 1, no. 3 (2010): 11–23. http://dx.doi.org/10.4018/jte.2010070102.

Full text
Abstract:
In this article, the authors investigate, from an interdisciplinary perspective, possible ethical implications of the presence of ubiquitous computing systems in human perception/action. The term ubiquitous computing is used to characterize information-processing capacity from computers that are available everywhere and all the time, integrated into everyday objects and activities. The contrast in approach to aspects of ubiquitous computing between traditional considerations of ethical issues and the Ecological Philosophy view concerning its possible consequences in the context of perception/action are the underlying themes of this paper. The focus is on an analysis of how the generalized dissemination of microprocessors in embedded systems, commanded by a ubiquitous computing system, can affect the behaviour of people considered as embodied embedded agents.
APA, Harvard, Vancouver, ISO, and other styles
25

Martins Bezerra, Pedro André, Florian Krismer, Johann Walter Kolar, et al. "Experimental Efficiency Evaluation of Stacked Transistor Half-Bridge Topologies in 14 nm CMOS Technology." Electronics 10, no. 10 (2021): 1150. http://dx.doi.org/10.3390/electronics10101150.

Full text
Abstract:
Different Half-Bridge (HB) converter topologies for an Integrated Voltage Regulator (IVR), which serves as a microprocessor application, were evaluated. The HB circuits were implemented with Stacked Transistors (HBSTs) in a cutting-edge 14 nm CMOS technology node in order to enable the integration on the microprocessor die. Compared to a conventional realization of the HBST, it was found that the Active Neutral-Point Clamped (ANPC) HBST topology with Independent Clamp Switches (ICSs) not only ensured balanced blocking voltages across the series-connected transistors, but also featured a more robust operation and achieved higher efficiencies at high output currents. The IVR achieved a maximum efficiency of 85.3% at an output current of 300 mA and a switching frequency of 50 MHz. At the maximum measured output current of 780 mA, the efficiency was 83.1%. The active part of the IVR (power switches, gate-drivers, and level shifters) realized a high maximum current density of 24.7 A/mm2.
APA, Harvard, Vancouver, ISO, and other styles
26

Dichev, Dimitar, Hristofor Koev, Totka Bakalova, and Petr Louda. "A Gyro-Free System for Measuring the Parameters of Moving Objects." Measurement Science Review 14, no. 5 (2014): 263–69. http://dx.doi.org/10.2478/msr-2014-0036.

Full text
Abstract:
The present paper considers a new measurement concept for modeling measuring instruments for gyro-free determination of the parameters of moving objects. The proposed approach eliminates the disadvantages of existing measuring instruments since it is based, on the one hand, on a considerably simplified mechanical module and, on the other hand, on advanced achievements in nanotechnology, microprocessor and computer equipment. A specific measuring system intended for measuring the trim, heel, roll, and pitch of a ship has been developed in compliance with the basic principles of this concept. The high dynamic accuracy of this measuring system is ensured by an additional measurement channel operating in parallel with the main channel. The operating principle of the additional measurement channel is based on an appropriate correction algorithm using signals from linear MEMS accelerometers. The results from tests carried out on stand equipment in the form of a hexapod with six degrees of freedom prove the effectiveness of the proposed measurement concept.
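The additional channel corrects the main channel using linear MEMS accelerometers. The paper's correction algorithm is not reproduced here, but the standard static tilt computation such channels build on, recovering roll and pitch from a 3-axis accelerometer reading, can be sketched briefly (the axis convention and gravity constant are assumptions):

```python
import math

def roll_pitch_from_accel(ax, ay, az):
    """Static roll/pitch (radians) from a 3-axis accelerometer reading.
    Valid only when the sensor is not accelerating (gravity-only)."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

# level sensor: gravity entirely along +z
r, p = roll_pitch_from_accel(0.0, 0.0, 9.81)
# 30 degrees of roll about x: gravity splits between y and z
r30, _ = roll_pitch_from_accel(0.0, 9.81 * math.sin(math.radians(30)),
                               9.81 * math.cos(math.radians(30)))
```

A dynamic correction algorithm of the kind the paper describes would then blend such accelerometer-derived angles with the main channel's signal.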
APA, Harvard, Vancouver, ISO, and other styles
27

Çaylı, Ali, Adil Akyüz, Abdullah Nafi Baytorun, Sedat Boyacı, Sait Üstün, and Fatma Begüm Kozak. "Sera Çevre Koşullarının Nesnelerin İnterneti Tabanlı İzleme ve Analiz Sistemi ile Denetlenmesi." Turkish Journal of Agriculture - Food Science and Technology 5, no. 11 (2017): 1279. http://dx.doi.org/10.24925/turjaf.v5i11.1279-1289.1282.

Full text
Abstract:
Wireless sensor network applications and machine-to-machine (M2M) communication, together called the Internet of Things, help decision-makers control complex systems thanks to low-data-rate, cost-effective data collection and analysis. These technologies offer new possibilities for monitoring environmental management and agricultural policies and for improving agricultural production, especially in low-income rural areas. In this study, a low-cost, flexible and scalable IoT data collection and analysis system is proposed. For this purpose, data from open-source hardware microprocessor boards and sensors are stored in the greenhouse computer database using the IEEE 802.15.4 Zigbee wireless communication protocol. The data can be analyzed by greenhouse computer analysis software developed in the PHP programming language. Real-time data can be monitored from the greenhouse computer, and alert rules can be defined. The system was tested in greenhouse conditions and was observed to perform operations such as data transfer, sensor measurement and data processing steadily. The proposed system may be useful for monitoring the indoor climate and controlling ventilation, irrigation and heating systems, especially for small enterprises, owing to its modular structure.
APA, Harvard, Vancouver, ISO, and other styles
28

Furness, R. A. "Developments in Pipeline Instrumentation." Measurement and Control 20, no. 1 (1987): 7–17. http://dx.doi.org/10.1177/002029408702000102.

Full text
Abstract:
Pipelines are an integral part of the world's economy and literally billions of pounds worth of fluids are moved each year in pipelines of varying lengths and diameters. As the cost of some of these fluids and the price of moving them has increased, so the need to measure the flows more accurately and control and operate the line more effectively has arisen. Instrumentation and control equipment has developed steadily in the past decade but not as fast as the computers and microprocessors that are now a part of most large scale pipeline systems. It is the interfacing of the new generation of digital and sometimes ‘intelligent’ instrumentation with smaller and more powerful computers that has led to a quiet but rapid revolution in pipeline monitoring and control. This paper looks at the more significant developments from the many that have appeared in the past few years and attempts to project future trends in the industry for the next decade.
APA, Harvard, Vancouver, ISO, and other styles
29

Davies, R. M., R. B. Lawrence, P. E. Routledge, and W. Knox. "The Rapidform process for automated thermoplastic socket production." Prosthetics and Orthotics International 9, no. 1 (1985): 27–30. http://dx.doi.org/10.3109/03093648509164821.

Full text
Abstract:
This paper describes the genesis of the Rapidform process and its pioneering place in the new developments leading to complete control of the processes of manufacture of prostheses. The materials and geometric considerations involved in the development of a double deformation process under microprocessor control are described. Stages in the development of the system show the advance from the initial application to modular below-knee prostheses through extensions to special suspension systems (supracondylar and suprapatellar) to Syme's and above-knee sockets. The clinical and laboratory results are summarized along with an account of the current aspects of the project, i.e., advanced clinical trials, testing and analysis. Setting the scene historically for the other computer-based modules in this high-technology approach to prosthetics, Rapidform has proven to be swift, accurate and economical in its operation. Also, in common with the rest of the suite of equipment, this socket production facility, despite its flexibility and technical sophistication, requires no special services beyond a standard single-phase mains electricity supply.
APA, Harvard, Vancouver, ISO, and other styles
30

Irfan, Mohammad Mujahid, Sushama Malaji, Chandarashekhar Patsa, Shriram S. Rangarajan, Randolph E. Collins, and Tomonobu Senjyu. "Online Learning-Based ANN Controller for a Grid-Interactive Solar PV System." Applied Sciences 11, no. 18 (2021): 8712. http://dx.doi.org/10.3390/app11188712.

Full text
Abstract:
The technological transformation of Industry 4.0 involves computers, power converters such as variable-speed drives, and microprocessors, all of which detract from power quality. The integration of distributed-generation technologies, such as solar photovoltaic (PV) and wind systems, with source grids frequently uses power converters, which increases power quality issues. The DSTATCOM is the FACTS device most proficient at compensating current-related power quality concerns. A model of a DSTATCOM with an ANN controller was developed and implemented using a backpropagation online-learning-based algorithm for balanced non-linear loads. This algorithm minimized the mathematical burden and the complications of control, and it played a dynamic role in improving power quality at the grid. The algorithm was implemented in MATLAB using an ANN model controller, and the results were validated with an experimental set-up using an FPGA controller.
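The abstract describes an online-learning ANN controller; for a single linear neuron, such backpropagation-style training reduces to the least-mean-squares (LMS) update commonly used in DSTATCOM reference-current estimation. The sketch below is a generic online LMS loop, not the authors' controller; the target mapping, step size `mu` and streaming data are invented for illustration.

```python
def lms_step(w, x, d, mu=0.05):
    """One online least-mean-squares update of a single linear neuron."""
    y = sum(wi * xi for wi, xi in zip(w, x))      # neuron output
    e = d - y                                     # instantaneous error
    w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # gradient step
    return w, e

# learn the mapping d = 2*x0 - 1*x1 online from a stream of samples
w = [0.0, 0.0]
stream = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0)] * 200
for x, d in stream:
    w, e = lms_step(w, x, d)
```

The appeal in a controller context is exactly what the abstract claims: each update is a few multiply-adds, so the mathematical burden per sample stays small.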
APA, Harvard, Vancouver, ISO, and other styles
31

Gordienko, R. G., O. G. Fedorenko, A. A. Demidov, and A. V. Fedorov. "Debugging and monitoring of applicationprograms in the BagrOS-4000 real-time operation system based on the Elbrus architecture." Radio industry 29, no. 1 (2019): 8–15. http://dx.doi.org/10.21778/2413-9599-2019-29-1-8-15.

Full text
Abstract:
The article is concerned with the problems of monitoring and debugging operating system processes in a hard real-time operating system, where effective operation does not allow any stopping to analyze the state of software and/or hardware. The paper describes the concept of a debugging and monitoring system developed with this constraint in mind in the Sukhoi design bureau for the BagrOS-4000 hard real-time operating system on the Elbrus architectural platform, together with specialists of MCST JSC. A method of non-stop monitoring and data collection in hard real-time processes in multiprocess, multimodular systems is discussed. An approach to the management of debugging targets in terms of source code using the DWARF debugging information specification is presented. The transition from the instrumental machine to a system server built into the target computer is described. A rationale is given for the use of a client-server architecture in the debugging and monitoring system for BagrOS-4000. A comparative analysis of the key functionality of the debugging and monitoring system against existing debugging systems has been carried out, and the key aspects of the DMS architecture have been considered. The design of the machine-dependent interface required for integrating independent hardware platforms into the BagrOS-4000 system, when implementing the system on an integrated avionics module of the onboard complex, is discussed. The results of testing the debugging and monitoring system are analyzed in terms of efficiency versus the classical method of using debug console prints when debugging a real-time operating system. Most of the above solutions are universal and have been successfully tested on other microprocessor platforms with multi-threaded application programs of real-time operating systems running on multi-core processors, including the MIPS, PowerPC and Intel platforms.
APA, Harvard, Vancouver, ISO, and other styles
32

Laktionov, Ivan S., Oleksandr V. Vovna, Maryna M. Kabanets, Iryna A. Getman, and Oksana V. Zolotarova. "Computer-Integrated Device for Acidity Measurement Monitoring in Greenhouse Conditions with Compensation of Destabilizing Factors." Instrumentation Mesure Métrologie 19, no. 4 (2020): 243–53. http://dx.doi.org/10.18280/i2m.190401.

Full text
Abstract:
The purpose of the article is to improve the procedures of computerized monitoring and control of the technological processes of growing greenhouse crops by substantiating methods of improving the accuracy of computer-integrated devices for measuring irrigation solution acidity. The article solves the topical scientific and applied problem of determining the conversion characteristics of computerized acidity monitoring systems with integral and differential assessment of their metrological parameters. Theoretical and experimental results were obtained based on structural-algorithmic synthesis methods for information-measuring systems, methods of mathematical planning of experiments, regression analysis of experimental data and the concept of uncertainty. The computerized acidity meter was implemented on the basis of an ion-selective pH electrode, an Arduino microprocessor platform, and the ThingSpeak cloud computing service. The relative total boundary uncertainty of acidity measurement is not more than ±1.1%. Methods of compensating for the random component of uncertainty, based on a median filtering algorithm, and for the additional uncertainty from the destabilizing effect of temperature were introduced when implementing the measuring device. Promising areas of priority research to improve the efficiency of the developed computerized acidity meter were identified. The developed device can be used in the complex automation of greenhouse cultivation processes and when planning agricultural operations in greenhouse conditions.
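The random component of measurement uncertainty is suppressed with a median filtering algorithm; a minimal sliding-window version is shown below. The window length of 5 and the sample values are assumptions for illustration, not the device's actual parameters.

```python
from statistics import median

def median_filter(samples, window=5):
    """Sliding-window median; edges use a shrunken, centered window."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        out.append(median(samples[lo:hi]))
    return out

# pH-like readings with two spike outliers (sensor glitches)
raw = [6.1, 6.2, 9.9, 6.2, 6.3, 6.2, 0.1, 6.3]
clean = median_filter(raw)
```

Unlike a moving average, the median rejects isolated spikes entirely instead of smearing them across neighbouring samples, which is why it suits glitch-prone electrode readings.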
APA, Harvard, Vancouver, ISO, and other styles
33

Pasquero, Claudia, and Marco Poletto. "Cities as biological computers." Architectural Research Quarterly 20, no. 1 (2016): 10–19. http://dx.doi.org/10.1017/s135913551600018x.

Full text
Abstract:
In this paper the authors propose a conceptual model and a bio-computational design method to articulate the world's Urbansphere, suggesting new terms for its co-evolution with the Biosphere. The proposed model responds to principles of biological self-organisation, and operates by embedding a numerical/computational engine, a living Physarum polycephalum, onto a spatial/morphogenetic substratum, a satellite-driven informational territory. This integration is embodied in the Physarum Machine, a bio-digital design apparatus conceived by the authors and further developed within the Urban Morphogenesis Lab at UCL in London. The use of specifically designed apparatus of material computation to demonstrate and solve problems of urban morphogenesis is not new, and the authors refer to the work of the German architect Frei Otto and his theory for the occupation and connection of territories. This research leads to a notion of the bio-city of the future, where man-made infrastructures and non-human biological systems will constitute parts of a single biotechnological whole. In this respect it can be read as a manifesto for the extension of biotechnology to the scale of the Biosphere (biosphere geo-engineering) by expanding the scope and material articulation of global informational and energetic infrastructures (the internet of things and the internet of energy). In the tradition of design-based research, the paper also suggests an application of the proposed model to a specific case study, demonstrating its efficacy in the re-conceptualization of the post-industrial and ecologically depleted landscapes of eastern Arizona.
In conclusion, the experiment describes the potential of augmenting materiality through sensors and microprocessors so that it would become possible to harvest the computational power latent in micro-organisms like the slime mould. The dream outlined here is for an era in which descriptive computation will be superseded by our capability to simulate and compute through the world that surrounds us.
APA, Harvard, Vancouver, ISO, and other styles
34

Thornton, Richard, Tracy Clark, and Ken Stevens. "MagneMotion Maglev System." Transportation Research Record: Journal of the Transportation Research Board 1838, no. 1 (2003): 50–57. http://dx.doi.org/10.3141/1838-07.

Full text
Abstract:
The MagneMotion Maglev system, called M3, is an alternative to all conventional guided transportation systems. Advantages include major reductions in capital cost, travel time, operating cost, noise, and energy consumption. Vans or small-bus size vehicles operating automatically with headways of only a few seconds can be moved in platoons to achieve capacities of at least 12,000 passengers per hour per direction. Small vehicles lead to lighter guideways, shorter waiting time for passengers, lower power requirements for wayside inverters, more effective regenerative braking, and reduced station size. The design objectives were achieved by taking advantage of high-energy permanent magnets, improved microprocessor-based power electronics, precise position sensing, lightweight vehicles, a guideway matched to the vehicles, and the ability to use sophisticated computer-aided design tools for analysis, simulation, and optimization. Arrays of permanent magnets on both sides of a vehicle provide suspension, guidance, and a field for linear synchronous motor propulsion. Feedback-controlled current in control coils wound around the magnets stabilizes the suspension. The motor windings are integrated into suspension rails and excited by inverters along the guide-way. M3 is designed to provide speeds up to 45 m/s (101 mph) and acceleration and braking up to 2 m/s2 (4.5 mph/s) without onboard propulsion equipment. Operating speeds and accelerations can be modified by changing only the power system and wayside inverters. Capital cost, travel time, and operating cost are predicted to be less than half that of any competing transit system.
APA, Harvard, Vancouver, ISO, and other styles
35

Bulatov, Yuri, and Andrey Kryukov. "Study of cyber security of predictive control algorithms for distributed generation plants." Analysis and data processing systems, no. 2 (June 18, 2021): 19–34. http://dx.doi.org/10.17212/2782-2001-2021-2-19-34.

Full text
Abstract:
The power industry is currently actively developing the use of distributed generation plants located near the power-receiving devices of consumers. At the same time, the introduction of distributed generation plants raises many engineering problems that need solutions. One of them is the optimization of the settings of the automatic voltage regulators (AVR) and speed regulators (ASR) of synchronous generators in all possible operating modes. This requires the use of complex models of power supply systems, distributed generation plants and their regulators, as well as labor-intensive calculations that take into account a large number of interrelated parameters. However, there is another approach, based on the use of predictive controllers; in this case only one parameter is needed for a linear predictive model. The article describes a method for constructing and tuning the proposed predictive ASR of a synchronous generator, as well as the computer models of distributed generation plants used in the research. The purpose of the research was to determine the cyber security of power supply systems equipped with various distributed generation plants with predictive speed controllers that can be implemented on the basis of microprocessor technology. The studies were carried out in MATLAB using the Simulink and SimPowerSystems simulation packages on computer models of distributed generation plants with one turbine generator operating at a dedicated load, as well as a group of hydrogenerators connected to a high-power electric power system. The simulation results showed the effectiveness of the proposed predictive control algorithms, and also that their cyber security can be increased by introducing hardware restrictions on the range of changes in the time constant of the predictive link.
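The single parameter of a linear predictive model is the prediction horizon (time constant) of the predictive link. A first-order sketch is shown below, including a clamp on that time constant in the spirit of the hardware restriction the abstract proposes for cyber security; the limit value, sampling period and signal figures are all illustrative assumptions.

```python
def linear_predict(y_prev, y_now, dt, Tp, Tp_max=0.5):
    """First-order predictive link: extrapolate the signal Tp seconds
    ahead using a backward-difference derivative estimate.
    Tp is clamped to [0, Tp_max], mimicking a hardware restriction on
    the predictive time constant (the limit value is an assumption)."""
    Tp = max(0.0, min(Tp, Tp_max))
    dydt = (y_now - y_prev) / dt          # finite-difference slope
    return y_now + Tp * dydt              # linear extrapolation

# signal rising at 2 units/s, sampled every 10 ms; predict 0.1 s ahead
pred = linear_predict(y_prev=1.00, y_now=1.02, dt=0.01, Tp=0.1)
```

The clamp means a maliciously or erroneously configured horizon cannot drive the controller with an arbitrarily aggressive prediction, which is the cyber-security argument the abstract makes.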
APA, Harvard, Vancouver, ISO, and other styles
36

Orlowska-Kowalska, Teresa, Mateusz Korzonek, and Grzegorz Tarchala. "Performance Analysis of Speed-Sensorless Induction Motor Drive Using Discrete Current-Error Based MRAS Estimators." Energies 13, no. 10 (2020): 2595. http://dx.doi.org/10.3390/en13102595.

Full text
Abstract:
In the literature on sensorless control of induction motors, many algorithms have been presented for rotor flux and speed estimation. However, all these algorithms have been developed in the continuous-time domain, while the digital realization of control systems requires the implementation of those estimation methods in the discrete-time domain. The main goal of this article is a comparison of the impact of different numerical integration methods, used in analogue emulation under the digital implementation of control systems, on the operation of the classical Model Reference Adaptive System speed estimator based on two current models (MRASCC) and its three modified versions developed to extend the estimator's stability region. In this paper, a generalized mathematical model of the MRASCC estimator is proposed, which takes into account all known methods for extending the stability region of the classical speed estimator of this type. After a short discussion of the discretization methods used for the microprocessor implementation of control algorithms, the impact of different numerical integration methods on the stable operating range of the classical and modified MRASCC estimators is analyzed and validated in simulation and experimental tests. It is proved that the modified Euler discretization method is much more accurate than the forward and backward Euler methods and gives almost as accurate results as the Tustin method, while being much less complicated in practical realization.
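The accuracy ranking reported in the abstract (modified Euler far better than forward or backward Euler, almost matching Tustin) can be reproduced on a scalar test system; the first-order plant, step size and horizon below are illustrative assumptions, not the estimator equations themselves.

```python
import math

def simulate(method, a=5.0, h=0.05, steps=20, x0=1.0):
    """Discretize dx/dt = -a*x with step h and return x after `steps`."""
    x = x0
    for _ in range(steps):
        if method == "forward":        # explicit: x+ = x + h*f(x)
            x = x * (1 - a * h)
        elif method == "backward":     # implicit: x+ = x + h*f(x+)
            x = x / (1 + a * h)
        elif method == "modified":     # Heun: forward-Euler predictor,
            xp = x * (1 - a * h)       # trapezoidal corrector
            x = x + 0.5 * h * (-a * x - a * xp)
        elif method == "tustin":       # bilinear (trapezoidal) transform
            x = x * (1 - a * h / 2) / (1 + a * h / 2)
        else:
            raise ValueError(method)
    return x

exact = math.exp(-5.0 * 0.05 * 20)     # analytic solution e^(-5)
errs = {m: abs(simulate(m) - exact)
        for m in ("forward", "backward", "modified", "tustin")}
```

Both forward and backward Euler are first-order accurate, while modified Euler and Tustin match the exact decay factor to second order, which is why the latter two dominate on this test.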
APA, Harvard, Vancouver, ISO, and other styles
37

Md Naziri, Siti Zarina, Rizalafande Che Ismail, Mohd Nazrin Md Isa, and Razaidi Hussin. "Less memory and high accuracy logarithmic number system architecture for arithmetic operations." Indonesian Journal of Electrical Engineering and Computer Science 23, no. 3 (2021): 1708. http://dx.doi.org/10.11591/ijeecs.v23.i3.pp1708-1717.

Full text
Abstract:
Interpolation is another important procedure for logarithmic number system (LNS) addition and subtraction. As a medium of approximation, the interpolation procedure urgently needs to be enhanced to increase the accuracy of operation results. Previously, most interpolation procedures utilized first-degree interpolators with a special error-correction procedure aimed at eliminating additional embedded multiplications. In this research, however, the interpolation procedure was elevated to second-degree interpolation. Proper design, investigation and analysis were carried out for these interpolation configurations in the positive region by standardizing the same co-transformation procedure, namely the extended-range, second-order co-transformation. Newton divided differences turned out to be the best interpolator for the second-degree implementation of LNS addition and subtraction, with a best-achieved BTFP rate of +0.4514 and a reduction of memory consumption of up to 51% compared to the same arithmetic used in the European Logarithmic Microprocessor (ELM).
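In an LNS, numbers are stored as base-2 logarithms, so addition and subtraction require the nonlinear functions log2(1 ± 2^d), which LNS hardware approximates by table lookup plus interpolation. The sketch below evaluates the identities directly to show what is being interpolated; no table or interpolator is implemented, and the positive-operand restriction is an assumption for brevity.

```python
import math

def lns_add(lx, ly):
    """Add two positive numbers given as base-2 logs:
    log2(X + Y) = hi + log2(1 + 2^(lo - hi))."""
    hi, lo = max(lx, ly), min(lx, ly)
    return hi + math.log2(1.0 + 2.0 ** (lo - hi))

def lns_sub(lx, ly):
    """Subtract (larger minus smaller) in the log domain:
    log2(X - Y) = hi + log2(1 - 2^(lo - hi))."""
    hi, lo = max(lx, ly), min(lx, ly)
    return hi + math.log2(1.0 - 2.0 ** (lo - hi))

lx, ly = math.log2(6.0), math.log2(2.0)
s = 2.0 ** lns_add(lx, ly)      # recovers 6 + 2
dif = 2.0 ** lns_sub(lx, ly)    # recovers 6 - 2
```

The subtraction function has a singularity as the operands approach each other (lo - hi → 0), which is exactly why co-transformation procedures like the one in the abstract are needed before interpolating.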
APA, Harvard, Vancouver, ISO, and other styles
38

Grübl, Andreas, Sebastian Billaudelle, Benjamin Cramer, Vitali Karasenko, and Johannes Schemmel. "Verification and Design Methods for the BrainScaleS Neuromorphic Hardware System." Journal of Signal Processing Systems 92, no. 11 (2020): 1277–92. http://dx.doi.org/10.1007/s11265-020-01558-7.

Full text
Abstract:
Abstract This paper presents verification and implementation methods that have been developed for the design of the BrainScaleS-2 65 nm ASICs. The 2nd generation BrainScaleS chips are mixed-signal devices with tight coupling between full-custom analog neuromorphic circuits and two general purpose microprocessors (PPU) with SIMD extension for on-chip learning and plasticity. Simulation methods for automated analysis and pre-tapeout calibration of the highly parameterizable analog neuron and synapse circuits and for hardware-software co-development of the digital logic and software stack are presented. Accelerated operation of neuromorphic circuits and highly-parallel digital data buses between the full-custom neuromorphic part and the PPU require custom methodologies to close the digital signal timing at the interfaces. Novel extensions to the standard digital physical implementation design flow are highlighted. We present early results from the first full-size BrainScaleS-2 ASIC containing 512 neurons and 130 K synapses, demonstrating the successful application of these methods. An application example illustrates the full functionality of the BrainScaleS-2 hybrid plasticity architecture.
APA, Harvard, Vancouver, ISO, and other styles
39

Faraji, Mustapha. "Cooling Management of Highly Powered Chips Packed in an Insulated Cavity Filled with a Phase Change Material." Journal of Microelectronics and Electronic Packaging 7, no. 2 (2010): 79–89. http://dx.doi.org/10.4071/1551-4897-7.2.79.

Full text
Abstract:
This work describes and analyses a novel thermal management system for computers based on a phase change material (PCM) heat storage reservoir. The proposed heat sink consists of a PCM-filled enclosure heated by substrate-mounted protruding heat sources (microprocessors). PCMs, characterized by high energy storage density and a small transition temperature interval, are able to store a large amount of generated heat, which provides passive cooling of microprocessors. The advantage of this cooling strategy is that the phase change material can absorb a large amount of generated heat without energizing the fan. The proposed strategy is suitable and efficient for situations where cooling by air convection is not practical (thermal control of recent multiprocessor computers, for example). The problem is modelled as a two-dimensional, time-dependent, convection-dominated phenomenon. A finite volume numerical approach is developed and used to simulate the physical details of the problem. This approach is based on the enthalpy method, which is traditionally used to track the motion of the liquid/solid front and obtain the temperature and velocity profiles in the liquid phase. The study gives guidance on the design of a PCM heat sink for the cooling management of recent computers. Numerical investigations have been conducted in order to examine the impact of several parameters on the thermal behaviour and efficiency of the proposed PCM-based heat sink. A correlation for the secured operating time (the time available to the heat sink before reaching the critical temperature, Tcr) is developed.
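The enthalpy method can be illustrated with a minimal explicit 1D conduction-plus-melting step. All material properties, grid values and boundary conditions below are illustrative assumptions; the paper's model is two-dimensional and includes convection in the melt, both omitted here.

```python
def enthalpy_melt_1d(nx=20, nt=400, dx=1e-3, dt=1.0,
                     k=0.2, rho=800.0, cp=2000.0, L=2e5,
                     T_melt=28.0, T_hot=60.0, T0=20.0):
    """Explicit 1D enthalpy-method solver for conduction with melting.
    The left wall is held at T_hot (the heat source); the right wall
    stays at the initial temperature. Returns node temperatures and
    liquid fractions. Property values are illustrative, not the paper's."""
    H = [rho * cp * T0] * nx              # volumetric enthalpy per node
    h_s = rho * cp * T_melt               # enthalpy at which melting starts

    def temp(h):
        # enthalpy -> temperature, with an isothermal melting plateau
        if h < h_s:
            return h / (rho * cp)         # solid
        if h < h_s + rho * L:
            return T_melt                 # mushy zone: absorbing latent heat
        return (h - rho * L) / (rho * cp)  # liquid

    for _ in range(nt):
        T = [temp(h) for h in H]
        T[0] = T_hot                      # Dirichlet condition at the chip
        Hn = H[:]
        for i in range(1, nx - 1):        # explicit conduction update
            lap = (T[i - 1] - 2 * T[i] + T[i + 1]) / dx ** 2
            Hn[i] = H[i] + dt * k * lap
        H = Hn

    T = [temp(h) for h in H]
    T[0] = T_hot
    frac = [min(1.0, max(0.0, (h - h_s) / (rho * L))) for h in H]
    return T, frac

T, frac = enthalpy_melt_1d()
```

The point of updating enthalpy rather than temperature is that the melting front needs no explicit tracking: nodes whose enthalpy sits on the latent-heat plateau are, by construction, at the front.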
APA, Harvard, Vancouver, ISO, and other styles
40

Ometov, A. E., A. A. Vinogradov, and A. S. Vorobiev. "Post-silicon verification of high-speed interconnections in Elbrus-8CB microprocessor." Radio industry (Russia) 29, no. 3 (2019): 33–40. http://dx.doi.org/10.21778/2413-9599-2019-29-3-33-40.

Full text
Abstract:
The article describes the experiments carried out during the post-silicon verification of the Elbrus-8CB microprocessor, one of the important stages of the verification process, which largely determines the possibility of creating high-performance computing systems consisting of several microprocessors of this series. The interprocessor communication channels of the Elbrus-8CB microprocessor were investigated, and hypotheses were put forward about the reasons for their low operating speed. Experiments conducted to validate these hypotheses are described, with intermediate conclusions drawn from their results. The built-in testing mechanism of the CEI-6G and PCIe 2.0 physical layers is described, along with its operating modes and testing algorithm. Several studies were carried out to ensure the correctness of the testing mechanism; these led to modifications of the initial testing method. Final conclusions about the reasons for the incorrect operation of the interprocessor links were made, and recommendations were given to improve the attenuation parameters of the high-speed communication signals and the level of their interference immunity. The relevance of this study for the production of modern high-performance computing systems can be traced not only in the growing interest of designers in this problem, but also in the tightening requirements of the physical layer manufacturers.
APA, Harvard, Vancouver, ISO, and other styles
41

Nnodim, Chiebuka T., Micheal O. Arowolo, Blessing D. Agboola, Roseline O. Ogundokun, and Moses K. Abiodun. "Future trends in mechatronics." IAES International Journal of Robotics and Automation (IJRA) 10, no. 1 (2021): 24. http://dx.doi.org/10.11591/ijra.v10i1.pp24-31.

Full text
Abstract:
Presently, the move towards more complex and multidisciplinary system development is increasingly important in order to understand and strengthen engineering approaches to such systems, leading to their effective and successful management. Scientific developments in computer engineering, simulation and modeling, electromechanical motion tools, power electronics, computers and informatics, micro-electro-mechanical systems (MEMS), microprocessors, and distributed system platforms (DSPs) have brought new challenges to industry and academia. Important aspects of designing advanced mechatronic products include modeling, simulation, analysis, virtual prototyping, and visualization. Competition on a global market requires the adoption of new technology to produce better, cheaper, smarter, scalable, multifunctional goods. Since the application area for such systems is very broad, including, for example, automotive, aeronautics, robotics and consumer products, there is also a need for flexible and adaptable development methods. These dynamic interdisciplinary systems are called mechatronic systems, a term that refers to the synergistic integration of software, electronic, and mechanical systems. To approach the complexity inherent in the discipline, different methods and techniques of development and integration are drawn from the disciplines involved. This paper provides a brief review of the history, current developments, and future trends of mechatronics.
APA, Harvard, Vancouver, ISO, and other styles
42

Raianov, Timur A. "Overview of new types of torque force sensors." Transportation Systems and Technology 6, no. 1 (2020): 5–14. http://dx.doi.org/10.17816/transsyst2020615-14.

Full text
Abstract:
In recent years, modern torque measurement systems have become very popular. They are used in road, rail, aviation and ship transport, as well as in the pulp, paper, and metallurgical industries. These metrological systems provide accurate torque measurement in difficult operating conditions as well as in aggressive environments. Thanks to the introduction of microprocessors in these devices, it became possible to increase their speed, to connect automatic torque tracking systems via a network interface to a single automatic control center, and to perform remote control of torque sensors. Modern software also improves communication within automatic torque measurement systems. Various software models are being developed for these systems that can either partially simulate the system or work as an assistant device, adapting automatic measurement systems to uncertainties such as ambient temperature and the properties of ferromagnetic materials. As a result, the operational safety of transport systems, load-lifting devices and production facilities is increased.
The purpose of this article is to review and analyze new types of torque sensors from well-known manufacturers. The design and composition of modern measuring systems are considered, and their advantages and disadvantages are analyzed. A technical description of each of the torque converters is given.
APA, Harvard, Vancouver, ISO, and other styles
43

Banks, James H., and Patrick A. Powell. "San Diego Field Operational Test of Smart Call Boxes: Technical Aspects." Transportation Research Record: Journal of the Transportation Research Board 1603, no. 1 (1997): 27–33. http://dx.doi.org/10.3141/1603-04.

Full text
Abstract:
Smart call boxes are devices similar to those used as emergency call boxes in California. The basic call box consists of a microprocessor, a cellular transceiver, and a solar power source. The smart call box system also includes data-collection devices, call-box maintenance computers, and data recording systems at a central location. The goal of the smart call box field operational test (FOT) was to demonstrate that smart call boxes are a feasible and cost-effective means of processing and transmitting data for tasks such as traffic census, incident detection, hazardous weather reporting, changeable message sign control, and video surveillance. The objective of the FOT evaluation was to determine the cost-effectiveness of smart call boxes, but because of schedule slippage the evaluation focused only on functional adequacy and capital costs. The concept for the smart call box system was found to be feasible but not necessarily optimal for the tasks involved. System integration was a major problem. Also, the number of external devices that can be attached to a single call box while maintaining the economic advantages of the system is restricted by wiring costs and the limitations of the solar power supply. Test system performance was mixed. One subtest was canceled before the installation of equipment, functional systems were produced for only three of the four remaining subtests, and reliable operation was observed in only one case. In most cases, system costs will be dominated by the expense of installing wiring. Consequently, smart call boxes will be cost-effective compared with hardwire systems at many sites but may not be cost-effective compared with alternative wireless systems.
APA, Harvard, Vancouver, ISO, and other styles
44

ElAzab, Heba-Allah, R. Swief, Hanady Issa, Noha El-Amary, Alsnosy Balbaa, and H. Temraz. "FPGA Eco Unit Commitment Based Gravitational Search Algorithm Integrating Plug-in Electric Vehicles." Energies 11, no. 10 (2018): 2547. http://dx.doi.org/10.3390/en11102547.

Full text
Abstract:
Smart grid architecture is one of the most complex constructions in electrical power systems. It is divided into three layers: the first layer is the power system level and its operation; the second layer comprises the sensors and communication devices, which collect the data; and the third layer is the microprocessor or machine that controls the whole operation. The hierarchy works from the third layer towards the first layer and vice versa. This paper introduces an eco unit commitment study that schedules conventional power plants (three IEEE thermal plants) as dispatchable distributed generators together with renewable energy resources (wind, solar) as stochastic distributed generating units, and plug-in electric vehicles (PEVs), which can act as either loads or generators depending on the charging timetable, in a trustworthy unit commitment. The target of the unit commitment study is to minimize the combined eco costs by integrating more clean and renewable energy resources with the help of a field-programmable gate array (FPGA) layer installation. A meta-heuristic algorithm, the Gravitational Search Algorithm (GSA), proves its accuracy and efficiency in reducing the combined cost function, which includes the cost of CO2 emissions, by optimally integrating and scheduling stochastic resources and the charging and discharging of PEVs alongside conventional power plants. The results obtained with GSA are compared with a conventional numerical technique, the Dynamic Programming (DP) algorithm. The feasibility of implementing GSA on an appropriate hardware platform, such as an FPGA, is also discussed.
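As an illustration of the meta-heuristic named here, the following is a minimal Gravitational Search Algorithm sketch in Python, applied to a generic continuous minimization problem rather than the paper's unit commitment formulation; the parameter values (g0, alpha, agent count, iteration count) are arbitrary illustrative choices:

```python
import math
import random

def gsa_minimize(f, dim, bounds, n_agents=20, iters=100, g0=100.0, alpha=20.0, seed=1):
    """Basic Gravitational Search Algorithm for box-constrained minimization.
    Illustrative sketch only; parameter defaults are not from the paper."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    v = [[0.0] * dim for _ in range(n_agents)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fit = [f(xi) for xi in x]
        best, worst = min(fit), max(fit)
        if best < best_f:
            best_f, best_x = best, list(x[fit.index(best)])
        # Agent masses: better fitness -> larger mass (minimization).
        m = [(worst - fi) / (worst - best + 1e-12) for fi in fit]
        total = sum(m) + 1e-12
        mass = [mi / total for mi in m]
        g = g0 * math.exp(-alpha * t / iters)  # gravitational "constant" decays over time
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                dist = math.dist(x[i], x[j]) + 1e-12
                for d in range(dim):
                    # Dividing force by M_i leaves only the attracting mass M_j.
                    acc[d] += rng.random() * g * mass[j] * (x[j][d] - x[i][d]) / dist
            for d in range(dim):
                v[i][d] = rng.random() * v[i][d] + acc[d]
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
    return best_x, best_f

# Usage on a simple sphere function as a stand-in objective.
best_x, best_f = gsa_minimize(lambda p: sum(c * c for c in p), 2, (-5.0, 5.0))
```

In the paper's setting, the objective would instead be the combined eco cost of a candidate commitment schedule, with the agents encoding unit on/off states and PEV charging decisions.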
APA, Harvard, Vancouver, ISO, and other styles
45

Auken, Esben, Louise Pellerin, Niels B. Christensen, and Kurt Sørensen. "A survey of current trends in near-surface electrical and electromagnetic methods." GEOPHYSICS 71, no. 5 (2006): G249—G260. http://dx.doi.org/10.1190/1.2335575.

Full text
Abstract:
Electrical and electromagnetic (E&EM) methods for near-surface investigations have undergone rapid improvements over the past few decades. Besides the traditional applications in groundwater investigations, natural-resource exploration, and geological mapping, a number of new applications have appeared. These include hazardous-waste characterization studies, precision-agriculture applications, archeological surveys, and geotechnical investigations. The inclusion of microprocessors in survey instruments, the development of new interpretation algorithms, and easy access to powerful computers have supported innovation throughout the geophysical community, and the E&EM community is no exception. Most notable is the development of continuous-measurement systems that generate large, dense data sets efficiently. These have contributed significantly to the usefulness of E&EM methods by allowing measurements over wide areas without sacrificing lateral resolution. The availability of these rich data sets in turn spurred the development of interpretation algorithms, including laterally constrained 1D inversion as well as innovative 2D- and 3D-inversion methods. Taken together, these developments can be expected to improve the resolution and usefulness of E&EM methods and permit them to be applied economically. The trend is clearly toward dense surveying over larger areas, followed by highly automated post-acquisition processing and interpretation to provide improved resolution of the shallow subsurface in a cost-effective manner.
APA, Harvard, Vancouver, ISO, and other styles
46

Gilkey, J. C., and J. D. Powell. "Fuel-Air Ratio Determination From Cylinder Pressure Time Histories." Journal of Dynamic Systems, Measurement, and Control 107, no. 4 (1985): 252–57. http://dx.doi.org/10.1115/1.3140731.

Full text
Abstract:
Determining fuel-air ratio quickly over a wide range of engine operating conditions is desirable for better transient engine control. This paper describes a method based on pattern recognition of cylinder pressure time histories which has the potential to provide such a high-bandwidth measurement. The fact that fuel-air ratio affects the shape of the cylinder pressure trace is well known. It should therefore be possible to obtain the fuel-air ratio of an engine by examining the pressure trace if the engine speed, load, and EGR are known. The difficulty lies in separating the effects of unknown engine load, speed, and EGR from the fuel-air ratio effects. An algorithm was developed using a wide range of steady-state experimental data from a single-cylinder engine. Application of the algorithm requires the calculation of the first, second and third moments of the cylinder pressure time history. Verification of the algorithm showed that the root mean square error in the estimates was about 5 percent for fuel-air ratio and 3 percent for a combination of fuel-air ratio and EGR. These results were obtained using a single pressure trace, which yields a response time of 1.5 engine revolutions. The algorithm was also found to be relatively insensitive to the use of different fuels, errors in spark advance, and variations in relative humidity. Research is continuing to verify the accuracy under transient engine conditions. An operation count shows that this algorithm should be well within the limits of present microprocessor technology.
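The moment calculation the authors rely on can be sketched as follows; the synthetic "pressure trace" and its shape are made up purely for illustration and do not represent real engine data or the paper's exact feature definitions:

```python
import math

def pressure_moments(p):
    """First moment (mean) and second and third central moments of a
    cylinder-pressure sample sequence, as features for pattern recognition."""
    n = len(p)
    mean = sum(p) / n                              # first moment
    var = sum((v - mean) ** 2 for v in p) / n      # second central moment
    third = sum((v - mean) ** 3 for v in p) / n    # third central moment
    return mean, var, third

# Synthetic, roughly bell-shaped pressure trace over crank angle in degrees
# (illustrative only; peak location and magnitude are invented).
trace = [10 + 40 * math.exp(-((a - 10) / 25) ** 2) for a in range(-180, 181)]
m1, m2, m3 = pressure_moments(trace)
```

In the paper's scheme, features like these from a single trace would feed a calibration map that separates fuel-air ratio from load, speed, and EGR effects; the cheap arithmetic here is why an operation count puts the method within microprocessor limits.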
APA, Harvard, Vancouver, ISO, and other styles
47

Farooq, Aqeel, Wadee Alhalabi, and Sara M. Alahmadi. "Traffic systems in smart cities using LabVIEW." Journal of Science and Technology Policy Management 9, no. 2 (2018): 242–55. http://dx.doi.org/10.1108/jstpm-05-2017-0015.

Full text
Abstract:
Purpose: The purpose of this research work is to design and apply LabVIEW in the area of traffic maintenance and flow by introducing improvements in the smart city. The objective is to introduce an automated human-machine interface (HMI), a computer-based graphical user interface (GUI), for measuring traffic flow and detecting faults in poles.
Design/methodology/approach: This research paper is based on the use of LabVIEW for designing the HMI for a traffic system in a smart city. The measures considered are: smart flow of traffic, violation detection at the signal, fault measurement in the traffic pole, locking down of cars for emergencies, and measuring parameters inside the cars.
Findings: The GUIs and the required circuitry for making improvements in the infrastructure of traffic systems have been designed and proposed, with their respective required hardware. Several measured conditions are discussed in detail.
Research limitations/implications: The PJRC Teensy 3.1 has been used because it contains enough general-purpose input-output (GPIO) pins for monitoring the parameters used to maintain the necessary flow of traffic and monitor the proposed study case. A combination of sensors such as infrared, accelerometer, magnetic compass, temperature, current, ultrasonic and fingerprint sensors is used to create a monitoring environment for the application. Using Teensy and LabVIEW, the system costs less and is effective in terms of performance.
Practical implications: Microprocessor board shields for placing actuators and sensors and for attaching the input/output (I/O) to the LED indicators and display have been designed, together with circuitry for scaling voltage, i.e. keeping sensor readings within read limits. A combination of certain sensors at different signals leads to more secure and durable control of traffic. The proposed application, with its hardware and software, costs less, is effective, and can easily be used to make the city's traffic services smart. The desired alarm level for certain conditions can be set from the front panel at the monitoring station; virtual channels can be created to allow the operator to set any value for the limits. If a sensor value crosses the alarm value, the corresponding alarm displays an alert. The system uses efficient decision-making techniques and stores the data, along with the corresponding time of operation, for future decisions.
Originality/value: This study is an advanced piece of research in its category because it combines the fields of electrical engineering, computer science and traffic systems by using LabVIEW.
APA, Harvard, Vancouver, ISO, and other styles
48

Grant, Nicholas, Brian Geiss, Stuart Field, August Demann, and Thomas W. Chen. "Design of a Hand-Held and Battery-Operated Digital Microfluidic Device Using EWOD for Lab-on-a-Chip Applications." Micromachines 12, no. 9 (2021): 1065. http://dx.doi.org/10.3390/mi12091065.

Full text
Abstract:
Microfluidics offers many advantages to Point of Care (POC) devices through lower reagent use and smaller size. Additionally, POC devices offer the unique potential to conduct tests outside of the laboratory. In particular, Electro-wetting on Dielectric (EWOD) microfluidics has been shown to be an effective way to move and mix liquids, enabling many POC devices. However, much of the research surrounding these microfluidic systems is focused on a single aspect of the system's capability, such as droplet control or a specific new application at the device level using the EWOD technology. Often in these experiments the supporting systems required for operation are benchtop equipment such as function generators, power supplies, and personal computers. Although various aspects of how an EWOD device moves and mixes droplets have been demonstrated at various levels, a complete self-contained and portable lab-on-a-chip system based on the EWOD technology has not been well demonstrated. For instance, EWOD systems tend to use high-voltage alternating current (AC) signals to actuate electrodes, but little consideration is given to the circuitry size or power consumption of such components needed to make the entire system portable. This paper demonstrates the feasibility of integrating all supporting hardware and software to correctly operate an EWOD device in a completely self-contained and battery-powered handheld unit. We present results that demonstrate a complete sample preparation flow for deoxyribonucleic acid (DNA) extraction and isolation. The device was designed to be a field-deployable, hand-held platform capable of performing many other sample preparation tasks automatically. Liquids are transported using EWOD and controlled via a programmable microprocessor. The programmable nature of the device allows it to be configured for a variety of tests for different applications. Many considerations were given to power consumption, size, and system complexity, which make it ideal for use in a mobile environment. The results presented in this paper show a promising step forward for the portable capability of microfluidic devices based on the EWOD technology.
APA, Harvard, Vancouver, ISO, and other styles
49

Pucher, Krzysztof, and Dariusz Polok. "Analysis of Timings in Networks that Use TCP/IP or UDP/IP Protocols for Communication with Industrial Controllers in Mechatronic Systems." Solid State Phenomena 144 (September 2008): 94–99. http://dx.doi.org/10.4028/www.scientific.net/ssp.144.94.

Full text
Abstract:
In step with the technical progress in the controllability of mechatronic systems, including machines and industrial equipment, systems of industrial controllers (both PLC and microprocessor-based ones) more and more frequently use Ethernet-based networks for communication with supervising centres and surveillance systems. The Internet offers unsurpassed opportunities for remote programming as well as remote development, debugging and tuning of existing control software. Nowadays, support for remote tools and facilities is an essential requirement when decisions on the purchase and implementation of industrial controllers are made, which is the underlying reason to launch more extensive research in this field. The presented paper describes dedicated software that has been developed to enable communication over the Internet within dispersed control systems. The system makes it possible to transmit and receive short messages to and from the controlled actuators, as well as to perform basic tasks related to the management of data flow in networks that use the TCP and UDP protocols. Special attention was paid to the dynamic phenomena of the data exchange process. This is an issue of crucial importance within dispersed systems of industrial controllers, as efficient operation of the entire system depends on a timely and quick response to fast-changing control signals. Data exchange was carried out with the use of so-called primitives for Berkeley sockets, which serve as the primary structures within the network and perform basic operations such as creation and destruction, assigning network addresses to sockets, establishing connections, transmitting (broadcasting), receiving, etc. To measure the time intervals of communication sessions, the authors took advantage of the functional features of contemporary PC motherboards.
In particular, the API counter function was used, as it allows the fast internal 64-bit counter to be read out, which in turn enabled the measurement of time gaps with an accuracy down to single microseconds. The described software tests communication facilities in terms of their applicability to fast data exchange between field control modules of the control system and the CPU, with the entire communication performed via the Internet. The reaction time of a hypothetical field controller in response to switchovers of the input signals or interrupt events can therefore be measured. The communication and measurements were performed over local and national Internet connections as well as over GPRS networks. The measurement results are presented in compact tables suitable for further analysis. The presented system is able to transmit diagnostic information, so it can also be used for the integrated diagnostics of mechatronic systems as well as for the location and analysis of possible failures within in-field systems.
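A minimal Python sketch of the kind of measurement described: one Berkeley-sockets round trip timed with a high-resolution counter. The paper reads the PC's 64-bit performance counter natively and also covers UDP and GPRS; here `time.perf_counter` stands in for the counter, and the loopback echo endpoint is a made-up stand-in for a field controller:

```python
import socket
import threading
import time

def echo_server(sock):
    """Accept one connection and echo the bytes back (stand-in for a field controller)."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Set up a loopback TCP echo endpoint on an ephemeral port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=echo_server, args=(srv,))
t.start()

# Time one request/response round trip with a high-resolution counter.
with socket.create_connection(("127.0.0.1", port)) as c:
    msg = b"ping"
    t0 = time.perf_counter()   # analogous to reading the 64-bit performance counter
    c.sendall(msg)
    reply = c.recv(1024)
    rtt = time.perf_counter() - t0

t.join()
srv.close()
print(f"round-trip time: {rtt * 1e6:.1f} us")
```

Repeating the round trip many times and tabulating the distribution of `rtt` over local, national, and GPRS links would reproduce, in spirit, the compact result tables the paper presents.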
APA, Harvard, Vancouver, ISO, and other styles
50

Tait, Russel J., Alan M. Bond, Barrie C. Finnin, and Barry L. Reed. "Rapid scanning voltammetry under steady-state conditions in a flow through thin layer cell with a microelectrode." Collection of Czechoslovak Chemical Communications 56, no. 1 (1991): 192–205. http://dx.doi.org/10.1135/cccc19910192.

Full text
Abstract:
A microelectrode-based detector system has been developed for the measurement of steady-state voltammetric curves in flowing solutions. Two microprocessors operating in parallel allow the direct transfer of collected data to a floppy diskette. Long-term experiments can then be performed, with individual voltammograms being rapidly obtained, recorded and stored. The system can be used with scan rates up to 10 V s-1 and with 1 mV resolution over a potential range of 2.5 V. When a 10 μm diameter platinum micro-disk electrode serves as the working electrode, rapid-scan voltammetry (scan rate 1 to 10 V s-1) can be undertaken under steady-state conditions for reversible processes with a flow rate in the range of 1 to 3 ml min-1, as evidenced by the observation of sigmoidal rather than the peak-shaped curves obtained with previously described rapid-scan systems. That is, complete voltammograms can be obtained with minimal distortion due to uncompensated resistance and charging current, which is not the case when conventionally sized electrodes are used or when microelectrodes are used at excessively high scan rates where linear diffusion terms become important. The working microelectrodes were developed to suit a conventional thin-layer cell design and therefore permit ready adaptation to existing flow-through electrochemical detection systems. The detection limit for the determination of ferrocene in methanol at flow rates up to 3 ml min-1 was 10-6 mol dm-3 after background correction, and the response was found to be linear over the concentration range 10-3 to 10-6 mol dm-3. Three-dimensional methods of data treatment and contour plots can be used to interpret results obtained from steady-state or near-steady-state voltammograms of incompletely resolved chromatograms, as demonstrated with a range of biologically important compounds.
APA, Harvard, Vancouver, ISO, and other styles