Academic literature on the topic 'Microprocessors Operating systems (Computers) Microprocessors'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Microprocessors Operating systems (Computers) Microprocessors.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Microprocessors Operating systems (Computers) Microprocessors"

1

Gallacher, Joe. "Microprocessors and their operating systems." Microprocessors and Microsystems 14, no. 8 (1990): 550–51. http://dx.doi.org/10.1016/0141-9331(90)90056-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Samoilova, M. E., and A. A. Zubrilin. "AN EXTRACURRICULAR EVENT “HISTORY OF INFORMATICS IN DATES”." Informatics in School, no. 5 (June 23, 2019): 7–13. http://dx.doi.org/10.32517/2221-1993-2019-18-5-7-13.

Full text
Abstract:
The article presents an extracurricular event, “History of informatics in dates”, conducted using infographics, and justifies why infographics help students memorize historical events associated with the development of informatics. Dates and information from the history of the development of the Internet, computers, microprocessors and operating systems are given. The work with dates is carried out in game form.
APA, Harvard, Vancouver, ISO, and other styles
3

Shevelev, S. S. "RECONFIGURABLE COMPUTING MODULAR SYSTEM." Radio Electronics, Computer Science, Control 1, no. 1 (2021): 194–207. http://dx.doi.org/10.15588/1607-3274-2021-1-19.

Full text
Abstract:
Context. Modern general purpose computers are capable of implementing any algorithm, but when solving certain problems in terms of processing speed they cannot compete with specialized computing modules. Specialized devices have high performance, effectively solve the problems of processing arrays, artificial intelligence tasks, and are used as control devices. The use of specialized microprocessor modules that implement the processing of character strings, logical and numerical values, represented as integers and real numbers, makes it possible to increase the speed of performing arithmetic operations by using parallelism in data processing.
 Objective. To develop principles for constructing microprocessor modules for a modular computing system with a reconfigurable structure: an arithmetic-symbolic processor, specialized computing devices, and switching systems capable of configuring microprocessors and specialized computing modules into a multi-pipeline structure to increase the speed of performing arithmetic and logical operations, together with high-speed algorithms for designing specialized processor-accelerators for symbol processing. To develop algorithms and structural and functional diagrams of specialized mathematical modules that perform arithmetic operations in direct codes on neural-like elements, and systems for decentralized control of the operation of blocks.
 Method. An information graph of the computational process of a modular system with a reconfigurable structure has been built. Structural and functional diagrams and algorithms that implement the construction of specialized modules for performing arithmetic and logical operations, search operations, and functions for replacing occurrences in processed words have been developed. Software has been developed for simulating the operation of an arithmetic-symbolic processor, specialized computing modules, and switching systems.
 Results. A block diagram of a reconfigurable computing modular system has been developed, which consists of compatible functional modules, it is capable of static and dynamic reconfiguration, has a parallel structure for connecting the processor and computing modules through the use of interface channels. The system consists of an arithmetic-symbolic processor, specialized computing modules and switching systems, performs specific tasks of symbolic information processing, arithmetic and logical operations.
 Conclusions. The architecture of reconfigurable computing systems can change dynamically during their operation. It becomes possible to adapt the architecture of a computing system to the structure of the problem being solved, to create problem-oriented computers, the structure of which corresponds to the structure of the problem being solved. As the main computing element in reconfigurable computing systems, not universal microprocessors are used, but programmable logic integrated circuits, which are combined using high-speed interfaces into a single computing field. Reconfigurable multipipeline computing systems based on fields are an effective tool for solving streaming information processing and control problems.
APA, Harvard, Vancouver, ISO, and other styles
4

STATSENKO, D., B. ZLOTENKO, S. NATROSHVILI, T. KULIK, and S. DEMISHONKOVA. "COMPUTER SYSTEM FOR CONTROLLING INDOOR LIGHTING." HERALD OF KHMELNYTSKYI NATIONAL UNIVERSITY 295, no. 2 (2021): 40–44. http://dx.doi.org/10.31891/2307-5732-2021-295-2-40-44.

Full text
Abstract:
The analysis of modern tendencies related to “Smart House” technologies is carried out in this article. The questions of programming languages for microcontrollers and microprocessors are considered. Software products used to create mobile applications for smartphones or tablets are presented. A computer system for remote control of room lighting is considered, and its design and principle of operation are shown schematically. A prototype of a computer system was built with the following functions: 1) switching the lighting system on and off according to the needs of the owner of the premises; 2) transferring information about the level of illumination to the user (the owner of the premises); 3) automatically switching on and off the electric and electroluminescent light sources included in the room lighting control system. A photo of the prototype is shown. The principle of operation of the system control program, based on the use of a photoresistor, is presented. The Arduino microcontroller receives and processes information from the photoresistor, on the basis of which it automatically sends signals to the room lighting control system. Formulas for calculating the illumination from the data obtained from the prototype's photoresistor are given. The processed information travels over wireless networks to the interactive devices of the user, who can remotely check the illumination value and, if necessary, control it. The visual interface of a mobile application for phones and tablets running the Android operating system is presented. The proposed computer system for controlling the lighting of premises is easy to use and does not require significant financial costs. The methods of modeling, observation and research of computer systems are used in the work. The results obtained yield an effective computer system for remote control of indoor lighting.
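The abstract mentions formulas for computing illumination from the photoresistor reading. As a rough illustration of how such a conversion is commonly done, the sketch below models a light-dependent resistor (LDR) in a voltage divider read by a 10-bit ADC; this is a generic model, not the paper's actual formulas, and the constants R_FIXED, RL10, GAMMA, and the 300-lux threshold are invented for the example.

```python
# Hypothetical sketch of an illumination calculation from a photoresistor:
# an LDR in a voltage divider read by a 10-bit ADC (Arduino-style).
# All constants below are illustrative assumptions, not values from the paper.

R_FIXED = 10_000.0   # fixed divider resistor, ohms (assumed)
RL10 = 50_000.0      # assumed LDR resistance at 10 lux
GAMMA = 0.7          # assumed LDR gamma (slope of log R vs. log lux)

def ldr_resistance(adc_count: int, adc_max: int = 1023) -> float:
    """Recover the LDR resistance from a divider reading (LDR on the Vcc side)."""
    v_ratio = adc_count / adc_max          # V_out / V_cc
    return R_FIXED * (1.0 - v_ratio) / v_ratio

def lux_from_adc(adc_count: int) -> float:
    """Convert an ADC reading to an approximate illuminance in lux."""
    r = ldr_resistance(adc_count)
    return 10.0 * (RL10 / r) ** (1.0 / GAMMA)

def lights_needed(adc_count: int, threshold_lux: float = 300.0) -> bool:
    """Decide whether the lighting should be switched on, as the prototype does."""
    return lux_from_adc(adc_count) < threshold_lux
```

A brighter room drives the LDR resistance down, the divider voltage up, and the computed lux up, so the controller switches the lights off above the threshold.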
APA, Harvard, Vancouver, ISO, and other styles
5

McCarthy, J. J., and J. J. Frief. "EDS and WDS Automation: Past Development and Future Technology." Microscopy and Microanalysis 5, S2 (1999): 556–57. http://dx.doi.org/10.1017/s143192760001610x.

Full text
Abstract:
Early Development. Automation of electron probe analysis began to flourish in the early 1970s, spurred on by advances in computer technology and the availability of operating systems and programming languages that the individual researcher could afford to dedicate to a single instrument. By the end of the decade, most researchers and vendors in the microanalysis field had adopted the PDP-11 minicomputer and languages such as FOCAL, FORTRAN and BASIC that ran on these computers. A good summary of these early efforts was given by Hatfield. The first use of the energy dispersive detector on the electron probe in 1968 added the need to control the acquisition, display and processing of EDS spectra. As a result, the 70’s were also a time when much attention was focused on development of software for on-line data reduction and analysis. These efforts produced a suite of programs to provide matrix corrections and spectral processing, and automation of WDS data collection. The culmination of these development efforts was first reported in 1977 with the analysis of a lunar whitlockite mineral by simultaneous EDS/WDS measurement. This analysis determined the concentration of 23 elements, 8 by EDS, and took a total of 37 minutes for data collection and analysis. In this paper, the authors noted the complementary use of EDS and WDS (WDS for trace elements and severe peak overlaps, EDS for other elements and rapid qualitative analysis) in their automated instrument, a convention that remains common on the electron probe even today. Toward the end of the decade, the analytical accuracy and precision achieved by automated analysis of bulk samples approached the limits of the instrumentation, with the exception of analysis of light element concentrations. Two Decades of Improvements. The explosive growth in digital electronics and microprocessors for data processing and control functions during the 80’s was rapidly applied to electron probe automation.
Second and third generation automation systems included direct control of many microscope functions, beam position and imaging conditions. Motor positioning was more precise and far faster. As a result, the data collection and analysis of 23 elements reported in 1977 could be accomplished at least three times faster on a modern instrument.
APA, Harvard, Vancouver, ISO, and other styles
6

Halim, Fransiscus Ati. "Application Software For Learning CPU Process of Interrupt and I/O Operation." International Journal of New Media Technology 4, no. 2 (2017): 69–74. http://dx.doi.org/10.31937/ijnmt.v4i2.782.

Full text
Abstract:
The purpose of this research is to build simulation software capable of processing interrupt instructions and I/O operations so that, in the future, it can contribute to developing a kernel. Interrupts and I/O operations are necessary in the development of a kernel system. The kernel is the medium through which hardware and software communicate. However, little application software exists to help learners understand the interrupt process. In managing the hardware, there are times when a condition arises in the system that needs the attention of the processor, or in this case the kernel managing the hardware. In response to that condition, the system issues an interrupt request to handle it. I/O operations are needed because a computer system consists not just of a CPU and memory but also of other devices, such as I/O devices. This paper elaborates application software for learning about interrupts. With interrupt instructions and I/O operations in the simulation program, the program better represents the processes that happen in a real computer. In this case, the program is able to run interrupt instructions and I/O operations, and other changes run as expected. In keeping with its main purpose, this simulation may lead to the development of a kernel for an operating system. The results of instruction testing show that 90% of instructions run properly. In executing instructions, the simulation program still has a bug following the execution of Jump and conditional Jump.
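The core mechanism such a teaching simulator models is the check for a pending interrupt at the end of each instruction cycle: save the processor context, vector to a handler, then restore and resume. The sketch below is a minimal invented illustration of that cycle (TinyCPU, its opcodes, and the handler are hypothetical, not the paper's simulator).

```python
# Minimal sketch of the instruction-by-instruction interrupt check a teaching
# simulator models: after each instruction, the CPU polls for a pending
# interrupt, saves its state, runs the handler, and restores the state.

class TinyCPU:
    def __init__(self, program, handlers):
        self.program = program        # list of (opcode, operand) tuples
        self.handlers = handlers      # interrupt number -> handler function
        self.pc = 0                   # program counter
        self.acc = 0                  # single accumulator register
        self.pending = []             # queue of raised interrupt numbers
        self.log = []                 # output written by the OUT "device"

    def raise_interrupt(self, number):
        self.pending.append(number)

    def step(self):
        op, arg = self.program[self.pc]
        self.pc += 1
        if op == "LOAD":
            self.acc = arg
        elif op == "ADD":
            self.acc += arg
        elif op == "OUT":             # simplistic I/O: write acc to the log
            self.log.append(self.acc)
        # After every instruction, check for a pending interrupt.
        if self.pending:
            number = self.pending.pop(0)
            saved = (self.pc, self.acc)   # save context
            self.handlers[number](self)   # vector to the handler
            self.pc, self.acc = saved     # restore context and resume

    def run(self):
        while self.pc < len(self.program):
            self.step()

prog = [("LOAD", 5), ("ADD", 3), ("OUT", None)]
cpu = TinyCPU(prog, {0: lambda c: c.log.append("IRQ0")})
cpu.raise_interrupt(0)   # pending before the first instruction completes
cpu.run()
# log now holds "IRQ0" (handler ran right after LOAD) followed by 8 from OUT
```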
 Index Terms—Interrupt; I/O; Kernel; Operating System
APA, Harvard, Vancouver, ISO, and other styles
7

Popentiu, Florin. "Computers and microprocessors components and systems." Microelectronics Reliability 33, no. 1 (1993): 111. http://dx.doi.org/10.1016/0026-2714(93)90054-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Terrell, T. J. "Book Review: Computers and Microprocessors: Components and Systems." International Journal of Electrical Engineering Education 23, no. 1 (1986): 94. http://dx.doi.org/10.1177/002072098602300126.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ramana Murthy, G., C. Senthilpari, P. Velrajkumar, and Lim Tien Sze. "Monte-Carlo analysis of a new 6-T full-adder cell for power and propagation delay optimizations in 180 nm process." Engineering Computations 31, no. 2 (2014): 149–59. http://dx.doi.org/10.1108/ec-01-2013-0023.

Full text
Abstract:
Purpose – Demand and popularity of portable electronic devices are driving designers to strive for higher speeds, longer battery life and more reliable designs. Recently, an overwhelming interest has been seen in the problems of designing digital systems with low power at no performance penalty. Most very large-scale integration applications, such as digital signal processing, image processing, video processing and microprocessors, extensively use arithmetic operations. Binary addition is considered the most crucial part of the arithmetic unit because all other arithmetic operations usually involve addition. Building low-power and high-performance adder cells is of great interest these days, and any modification made to the full adder affects the system as a whole. The full adder design has attracted many designers' attention in recent years, and its power reduction is one of their important concerns. This paper presents a 1-bit full adder using as few as six transistors (6-T) per bit in its design. Design/methodology/approach – The outcome of the proposed adder architectural design is based on a micro-architectural specification. This is a textual description, and the adder's schematic can accurately predict the performance, power, propagation delay and area of the design. It is designed with a combination of multiplexing control input (MCIT) and Boolean identities. The proposed design features lower operating voltage, higher computing speed and lower energy consumption due to the efficient operation of the 6-T adder cell. The design adopts the MCIT technique effectively to alleviate the threshold voltage loss problem commonly encountered in pass-transistor logic design. Findings – The proposed adder circuit's simulated results are used to verify the correctness and timing of each component.
According to the design concepts, the simulated results are compared to the existing adders from the literature, and the significant improvements in the proposed adder are observed. Some of the drawbacks of the existing adder circuits from the literature are as follows: The Shannon theorem-based adder gives voltage swing restoration in sum circuit. Due to this problem, the Shannon circuit consumes high power and operates at low speed. The MUX-14T adder circuit is designed by using multiplexer concept which has a complex node in its design paradigm. The node drivability of input consumes high power to transmit the voltage level. The MCIT-7T adder circuit is designed by using MCIT technique, which consumes more power and leads to high power consumption in the circuit. The MUX-12T adder circuit is designed by MCIT technique. The carry circuit has buffering restoration unit, and its complement leads to high power dissipation and propagation delay. Originality/value – The new 6-T full adder circuit overcomes the drawbacks of the adders from the literature and successfully reduces area, power dissipation and propagation delay.
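Whatever the transistor count, every 1-bit full adder cell, including the proposed 6-T design, must realize the same Boolean function. The sketch below checks that function exhaustively in software; it models behavior only, not the transistor-level circuit.

```python
# The Boolean function any 1-bit full adder cell must realize:
# sum = a XOR b XOR cin, carry-out = majority(a, b, cin).
def full_adder(a: int, b: int, cin: int) -> tuple:
    """Return (sum, carry-out) for one bit position."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# Exhaustively verify the cell against plain integer addition.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
```

Chaining such cells bit by bit (carry-out feeding the next carry-in) yields a ripple-carry adder, which is why optimizing the single cell matters for the whole arithmetic unit.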
APA, Harvard, Vancouver, ISO, and other styles
10

Thangamuthu, Tamilarasi, Rajasekar Rathanasamy, Saminathan Kulandaivelu, et al. "Experimental investigation on the influence of carbon-based nanoparticle coating on the heat transfer characteristics of the microprocessor." Journal of Composite Materials 54, no. 1 (2019): 61–70. http://dx.doi.org/10.1177/0021998319859926.

Full text
Abstract:
In the current scenario, thermal management plays a vital role in electronic system design. The temperature of the electronic components should not exceed manufacturer-specified temperature levels in order to maintain safe operating range and service life. The reduction in heat build-up will certainly enhance the component life and reliability of the system. The aim of this research work is to analyze the effect of multi-walled carbon nanotube and graphene coating on the heat transfer capacity of a microprocessor used in personal computers. The performance of coating materials was investigated at three different usages of central processing unit. Multi-walled carbon nanotube-coated and graphene-coated microprocessors showed better enhancement in heat transfer as compared with uncoated microprocessors. Maximum decrease in heat build-up of 7 and 9℃ was achieved for multi-walled carbon nanotube-coated and graphene-coated microprocessors compared to pure substrate. From the results, graphene has been proven to be a suitable candidate for effective heat transfer compared to with multi-walled carbon nanotubes due to high thermal conductivity characteristics of the former compared to the latter.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Microprocessors Operating systems (Computers) Microprocessors"

1

Haag, Roger. "Programming the INTEL 8086 microprocessor for GRADS : a graphic real-time animation display system." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65929.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Michael, Michael Nasri. "Dynamic voltage and frequency scaling with multi-clock distribution systems on SPARC core /." Online version of thesis, 2009. http://hdl.handle.net/1850/10750.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Inman, Oliver Lane. "Technology Forecasting Using Data Envelopment Analysis." PDXScholar, 2004. https://pdxscholar.library.pdx.edu/open_access_etds/2682.

Full text
Abstract:
The ability to anticipate future capabilities of technology products has broad implications for organizations. Technological forecasting allows management to improve resource allocation, make better staffing decisions, and more confidently plan facilities and capital expenditures. Technology forecasting may also identify potential new markets and opportunities, such as finding ways to exploit current technology beyond its originally intended purposes. Modern technology forecasters use an array of forecasting methods to predict the future performance of a technology, such as time-series analysis, regression, stochastic methods, and simulation. These quantitative methods rely on the assumption that past behavior will continue. Shortcomings include their lack of emphasis on the best technology available and the fact that they do not effectively address the dynamic nature of ever changing trade-off surfaces. This research proposes a new method to address the shortcomings of common forecasting techniques by extending a well-established management science methodology known as data envelopment analysis (DEA). This new method is referred to as Technology Forecasting with Data Envelopment Analysis (TFDEA). Three case studies are examined to determine the method's validity. The first case study is that of relational database system performance based upon industry benchmarks obtained from the Transaction Processing Performance Council (TPC). The results reveal that TFDEA provides a more accurate picture of the state of the art than basic regression. The second case study expands Moore's law to six dimensions, resulting in a more comprehensive assessment of microprocessor technology. The final case study re-examines hard disk drive data for the years 1994-1999 in order to evaluate the technological progress of multiple technological approaches presented in Christensen's The Innovator's Dilemma.
Major contributions include both a new technology forecasting technique and an important extension of the temporal DEA methodology, which together offer a new and more comprehensive method for evaluating and forecasting technology.
APA, Harvard, Vancouver, ISO, and other styles
4

Siddique, Nafiul Alam. "Spare Block Cache Architecture to Enable Low-Voltage Operation." PDXScholar, 2011. https://pdxscholar.library.pdx.edu/open_access_etds/216.

Full text
Abstract:
Power consumption is a major concern for modern processors. Voltage scaling is one of the most effective mechanisms to reduce power consumption. However, voltage scaling is limited by large memory structures, such as caches, where many cells can fail at low voltage operation. As a result, voltage scaling is limited by a minimum voltage (Vccmin), below which the processor may not operate reliably. Researchers have proposed architectural mechanisms, error detection and correction techniques, and circuit solutions to allow the cache to operate reliably at low voltages. Architectural solutions reduce cache capacity at low voltages at the expense of logic complexity. Circuit solutions change the SRAM cell organization and have the disadvantage of reducing the cache capacity (for the same area) even when the system runs at a high voltage. Error detection and correction mechanisms use Error Correction Codes (ECC) codes to keep the cache operation reliable at low voltage, but have the disadvantage of increasing cache access time. In this thesis, we propose a novel architectural technique that uses spare cache blocks to back up a set-associative cache at low voltage. In our mechanism, we perform memory tests at low voltage to detect errors in all cache lines and tag them as faulty or fault-free. We have designed shifter and adder circuits for our architecture, and evaluated our design using the SimpleScalar simulator. We constructed a fault model for our design to find the cache set failure probability at low voltage. Our evaluation shows that, at 485mV, our designed cache operates with an equivalent bit failure probability to a conventional cache operating at 782mV. We have compared instructions per cycle (IPC), miss rates, and cache accesses of our design with a conventional cache operating at nominal voltage. We have also compared our cache performance with a cache using the previously proposed Bit-Fix mechanism. 
Our results show that our designed spare cache mechanism is 15% more area efficient compared to Bit-Fix. Our proposed approach provides a significant improvement in power and EPI (energy per instruction) over a conventional cache and Bit-Fix, at the expense of lower performance at high voltage.
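The fault model the thesis describes can be sketched with elementary probability: given a per-bit failure probability at low voltage, estimate the chance that a cache line is faulty, and that a set with spare blocks has more faulty lines than spares can cover. The parameters below (512-bit lines, 8 ways, 1 spare) are illustrative assumptions, not the thesis's values.

```python
# Back-of-the-envelope cache fault model: a line fails if any bit fails;
# a set is unrepairable when faulty lines outnumber the spare blocks.
# All parameters are illustrative assumptions, not the thesis's values.
from math import comb

def p_line_faulty(p_bit: float, bits_per_line: int = 512) -> float:
    """Probability that at least one bit in a line fails."""
    return 1.0 - (1.0 - p_bit) ** bits_per_line

def p_set_unrepairable(p_bit: float, ways: int = 8, spares: int = 1,
                       bits_per_line: int = 512) -> float:
    """Probability that more than `spares` of the set's lines are faulty,
    assuming independent line failures (binomial tail)."""
    pf = p_line_faulty(p_bit, bits_per_line)
    total = ways + spares
    return sum(comb(total, k) * pf**k * (1.0 - pf)**(total - k)
               for k in range(spares + 1, total + 1))
```

Even one spare per set cuts the set failure probability well below the raw line failure probability, which is the effect that lets the cache tolerate a lower Vccmin.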
APA, Harvard, Vancouver, ISO, and other styles
5

Head, Michael Reuben. "Analysis and optimization for processing grid-scale XML datasets." Diss., Online access via UMI:, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Tao. "OS-aware architecture for improving microprocessor performance and energy efficiency." 2004. http://wwwlib.umi.com/cr/utexas/fullcit?p3143299.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Tao. "OS-aware architecture for improving microprocessor performance and energy efficiency." Thesis, 2004. http://hdl.handle.net/2152/1237.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mohd, Bassam Jamil. "Switch-based Fast Fourier Transform processor." 2008. http://hdl.handle.net/2152/18192.

Full text
Abstract:
The demand for high-performance and power-scalable DSP processors for telecommunication and portable devices has increased significantly in recent years. The Fast Fourier Transform (FFT) computation is essential to such designs. This work presents a switch-based architecture to design radix-2 FFT processors. The processor employs M processing elements, 2M memory arrays and M Read Only Memories (ROMs). One processing element performs one radix-2 butterfly operation. The memory arrays are designed as single-port memory, where each has a size of N/(2M); N is the number of FFT points. Compared with a single processing element, this approach provides a speedup of M. If not addressed, memory collisions degrade the processor performance. A novel algorithm to detect and resolve the collisions is presented. When a collision is detected, a memory management operation is executed. The performance of the switch architecture can be further enhanced by pipelining the design, where each pipeline stage employs a switch component. The result is a speedup of Mlog2N compared with a single processing element. The utilization of single-port memory reduces design complexity and area. Furthermore, memory arrays significantly reduce power compared with the delay elements used in some FFT processors. The switch-based architecture facilitates deactivating processing elements for power scalability. It also facilitates implementing different FFT sizes. The VLSI implementation of a non-pipelined switch-based processor is presented. Matlab simulations are conducted to analyze the performance. The timing, power and area results from RTL, synthesis and layout simulations are discussed and compared with other processors.
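The radix-2 butterfly each processing element computes, and the iterative FFT built from repeated butterfly stages, can be sketched in plain software. This is a generic illustration of the algorithm, not the thesis's switch-based hardware pipeline; the twiddle factors computed inline stand in for the ROM entries.

```python
# Sketch of the radix-2 decimation-in-time FFT: one butterfly per
# processing element, log2(N) stages of N/2 butterflies each.
import cmath

def butterfly(a, b, w):
    """One radix-2 butterfly: two inputs, one twiddle factor w."""
    t = w * b
    return a + t, a - t

def fft_radix2(x):
    n = len(x)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    bits = n.bit_length() - 1
    # Bit-reverse the input ordering, as hardware FFTs typically do.
    data = [x[int(format(i, f"0{bits}b")[::-1], 2)] for i in range(n)]
    span = 1
    while span < n:
        for start in range(0, n, 2 * span):
            for k in range(span):
                w = cmath.exp(-2j * cmath.pi * k / (2 * span))  # twiddle (ROM entry)
                i, j = start + k, start + k + span
                data[i], data[j] = butterfly(data[i], data[j], w)
        span *= 2
    return data
```

Each stage's butterflies are independent, which is what lets M processing elements work in parallel for a speedup of roughly M, collisions permitting.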
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Microprocessors Operating systems (Computers) Microprocessors"

1

Bronson, Gary J. 32-bit microprocessors: A primer plus. AT&T Customer Information Center, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Holland, R. C. Microprocessors and their operating systems: A comprehensive guide to 8-, 16-, and 32-bit hardware, assembly language, and computer architecture. Pergamon, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

IEEE Computer Society Technical Committee on Microprocessors and Microcomputers, and American National Standards Institute, eds. IEEE trial-use standard specification for microprocessor operating systems interfaces. Institute of Electrical and Electronics Engineers, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Christopher, Ken W. IBM's official OS/2 Warp Connect PowerPC edition: Operating in the new frontier. IDG Books Worldwide, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

See MIPS run. 2nd ed. Morgan Kaufmann Publishers/Elsevier, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

See MIPS run. Morgan Kaufmann Publishers, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yang, Laurence Tianruo, ed. Embedded software and systems: Second international conference, ICESS 2005, Xi'an, China, December 16-18, 2005: proceedings. Springer, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Downton, A. C. Computers and microprocessors: Components and systems. 2nd ed. Van Nostrand Reinhold, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Downton, A. C. Computers and microprocessors: Components and systems. 2nd ed. Chapman and Hall, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Downton, A. C. Computers and microprocessors: Components and systems. 2nd ed. Van Nostrand Reinhold, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Microprocessors Operating systems (Computers) Microprocessors"

1

Fontaine, A. B., and F. Barrand. "Operating Systems." In 80286 and 80386 Microprocessors. Macmillan Education UK, 1989. http://dx.doi.org/10.1007/978-1-349-10764-3_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

HOLLAND, R. C. "32-BIT MICROPROCESSORS." In Microprocessors and their Operating Systems. Elsevier, 1989. http://dx.doi.org/10.1016/b978-0-08-037188-7.50013-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wells, Benjamin. "The PC-User’s Guide to Colossus." In Colossus. Oxford University Press, 2006. http://dx.doi.org/10.1093/oso/9780192840554.003.0018.

Full text
Abstract:
Personal computers (PCs) dominate today’s digital landscape. The two-letter name started with the 1981 IBM PC. Desktop machines based on single-chip microprocessors—and thus called microcomputers—were widely used before the IBM PC, but that is the name that has stuck. Many would consider a Packard-Bell tabletop computer of the early 1960s to be the first personal computer. But as far as single-user/operator commercial machines go, there is also the Bendix G15 from the mid-1950s, described in Chapter 9. PC users are likely to have forgotten or never known the atmosphere of early-generation computers. The ‘operating system’ was a schedule for the human staff who mounted large reels of tape, toggled inputs at the long control panel, pushed the load-and-run switch, and stacked punch cards and fanfold sheets of printed output. The numerous operators wore white lab jackets, worked in large air-conditioned spaces, and appeared to be high priests and acolytes in a vocational order. The users were supplicants. Apart from experimental machines at universities—such as MIT’s famed TX-0 (1955–6), which was controlled by the first computer hackers once it moved to the MIT campus in 1958—the users entreated the operators through written requests heading a card deck. Back then, the users as well as the general public stood behind a velvet rope, even a window wall. The operators continued to rule the machine long after users had electronic connection through time-sharing remote terminals. But those who had hacked the small machines like the TX-0 knew that the goal was the direct, immediate access of personal computers. Colossus already had that personal touch. Designed to be used by a single cryptanalyst assisted by one Wren, and later often run by the Wren solo, Colossus was in that sense a personal computer. But just how close was Colossus to being a PC? This chapter compares and contrasts the architecture of Colossus with that of today’s personal computers. 
An architect designs a building by balancing needs and functions with resources and aesthetics. The availability and cost of components constrain her work. Physical limitations and dynamics of use further impact on it. The building reflects the architect’s imagination and skill.
APA, Harvard, Vancouver, ISO, and other styles
4

HOLLAND, R. C. "THE UNIX OPERATING SYSTEM." In Microprocessors and their Operating Systems. Elsevier, 1989. http://dx.doi.org/10.1016/b978-0-08-037188-7.50017-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

HOLLAND, R. C. "THE CP/M OPERATING SYSTEM." In Microprocessors and their Operating Systems. Elsevier, 1989. http://dx.doi.org/10.1016/b978-0-08-037188-7.50015-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

"APPLIED ELECTRICITY AND ELECTRONICS SERIES." In Microprocessors and their Operating Systems. Elsevier, 1989. http://dx.doi.org/10.1016/b978-0-08-037188-7.50001-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

"Copyright." In Microprocessors and their Operating Systems. Elsevier, 1989. http://dx.doi.org/10.1016/b978-0-08-037188-7.50003-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

"PREFACE." In Microprocessors and their Operating Systems. Elsevier, 1989. http://dx.doi.org/10.1016/b978-0-08-037188-7.50004-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

HOLLAND, R. C. "MICROCOMPUTER PRINCIPLES." In Microprocessors and their Operating Systems. Elsevier, 1989. http://dx.doi.org/10.1016/b978-0-08-037188-7.50005-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

HOLLAND, R. C. "INTEL 8080/8085 FAMILY (8-BIT)." In Microprocessors and their Operating Systems. Elsevier, 1989. http://dx.doi.org/10.1016/b978-0-08-037188-7.50006-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Microprocessors Operating systems (Computers) Microprocessors"

1

Mulay, Veerendra, Dereje Agonafer, and Roger Schmidt. "Liquid Cooling for Thermal Management of Data Centers." In ASME 2008 International Mechanical Engineering Congress and Exposition. ASMEDC, 2008. http://dx.doi.org/10.1115/imece2008-68743.

Full text
Abstract:
The power trend for server systems continues to grow, making thermal management of data centers a very challenging task. Although various configurations exist, the raised-floor plenum with Computer Room Air Conditioners (CRACs) providing cold air is a popular operating strategy. Air cooling of a data center, however, may not address the situation where more energy is expended in the cooling infrastructure than in the thermal load of the data center. Revised power trend projections by ASHRAE TC 9.9 predict heat loads as high as 5000 W per square foot of compute servers' equipment footprint by the year 2010. These trend charts also indicate that heat load per product footprint doubled for storage servers during 2000–2004. For the same period, heat load per product footprint for compute servers tripled. Among the systems that are currently available and being shipped, many racks exceed 20 kW. Such high heat loads have raised concerns over the limits of air cooling of data centers, similar to the air cooling of microprocessors. Thermal management of such dense data center clusters using liquid cooling is presented.
APA, Harvard, Vancouver, ISO, and other styles
2

Mulay, Veerendra, Saket Karajgikar, Dereje Agonafer, Roger Schmidt, and Madhusudan Iyengar. "Parametric Study of Hybrid Cooling Solution for Thermal Management of Data Centers." In ASME 2007 International Mechanical Engineering Congress and Exposition. ASMEDC, 2007. http://dx.doi.org/10.1115/imece2007-43761.

Full text
Abstract:
The power trend for server systems continues to grow, making thermal management of data centers a very challenging task. Although various configurations exist, the raised-floor plenum with Computer Room Air Conditioners (CRACs) providing cold air is a popular operating strategy. Air cooling of a data center, however, may not address the situation where more energy is expended in the cooling infrastructure than in the thermal load of the data center. Revised power trend projections by ASHRAE TC 9.9 predict heat loads as high as 5000 W per square foot of compute servers' equipment footprint by the year 2010. These trend charts also indicate that heat load per product footprint doubled for storage servers during 2000–2004. For the same period, heat load per product footprint for compute servers tripled. Among the systems that are currently available and being shipped, many racks exceed 20 kW. Such high heat loads have raised concerns over the limits of air cooling of data centers, similar to the air cooling of microprocessors. A hybrid cooling strategy that incorporates liquid cooling along with air cooling can be very efficient in these situations. A parametric study of such a solution is presented in this paper. A representative data center with 40 racks is modeled using a commercially available CFD code. The variation in rack inlet temperature due to tile openings and underfloor plenum depths is reported.
APA, Harvard, Vancouver, ISO, and other styles
3

Lu, Jiachuan, Longtao Liao, Bo Feng, et al. "A Development of Human-System Interaction in Digital NPPs." In 2017 25th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/icone25-67212.

Full text
Abstract:
Nuclear Power Plant (NPP) design is now moving toward heavy dependence on digital computers, especially microprocessors, in many complex systems. As the medium through which operators and NPPs exchange information and interact, and on which ultimate operational decisions still rely, the Human-System Interface has drawn wide attention and become one of the focuses of NPP design. To take full advantage of operating experience, human cognitive processing abilities, and advancing technologies, it is critical to plan, design, implement, operate, and maintain reliable HSIs. The project, funded by the Nuclear Power Institute of China (NPIC), designs and develops a typical and comparatively complete technical solution for Human-System Interaction based on the instrumentation and control system of an actual NPP. Using the design process, modules, and templates provided by this solution, which take HFE into account, realistic simulation of Human-System Interaction for digital NPPs can be achieved with iFIX software, and the Human-System Interaction system can be used to design interfaces for different kinds of NPPs. This paper introduces the realization of the Human-System Interaction and reviews the current research status and main challenges in the field. At this stage, cross-platform data acquisition and monitoring, along with the processing and display of small instrumentation and control systems, have been implemented.
APA, Harvard, Vancouver, ISO, and other styles
4

Mulay, Veerendra, Saket Karajgikar, Dereje Agonafer, Roger Schmidt, Madhusudan Iyengar, and Jay Nigen. "Computational Study of Hybrid Cooling Solution for Thermal Management of Data Centers." In ASME 2007 InterPACK Conference collocated with the ASME/JSME 2007 Thermal Engineering Heat Transfer Summer Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/ipack2007-33000.

Full text
Abstract:
The power trend for server systems continues to grow, making thermal management of data centers a very challenging task. Although various configurations exist, the raised-floor plenum with Computer Room Air Conditioners (CRACs) providing cold air is a popular operating strategy. In prior work, numerous data center layouts employing a raised-floor plenum, and the impact of design parameters such as plenum depth, ceiling height, cold aisle location, tile openings, and others on the thermal performance of the data center, were presented. Air cooling of a data center, however, may not address the situation where more energy is expended in the cooling infrastructure than in the thermal load of the data center. Revised power trend projections by ASHRAE TC 9.9 predict heat loads as high as 5000 W per square foot of compute servers' equipment footprint by the year 2010. These trend charts also indicate that heat load per product footprint doubled for storage servers during 2000–2004. For the same period, heat load per product footprint for compute servers tripled. Among the systems that are currently available and being shipped, many racks exceed 20 kW. Such high heat loads have raised concerns over air cooling limits of data centers similar to those of microprocessors. A hybrid cooling strategy that incorporates liquid cooling along with air cooling can be very efficient in such situations. The impact of such an operating strategy on the thermal management of the data center is discussed in this paper. A representative data center is modeled using a commercially available CFD code. The change in rack temperature gradients, recirculation cells, and CRAC demand due to the use of hybrid cooling is presented in a detailed parametric study. It is shown that the hybrid cooling strategy improves the cooling of the data center, which may enable full population of racks and better management of the system infrastructure.
APA, Harvard, Vancouver, ISO, and other styles
5

Viguera, José Manuel, Alfonso Jiménez, and Juan Antonio Burillo. "An Integrated Approach to Human System Interface Design in New Nuclear Power Plants." In 17th International Conference on Nuclear Engineering. ASMEDC, 2009. http://dx.doi.org/10.1115/icone17-75280.

Full text
Abstract:
Nuclear power plant design is moving toward a wider use of digital computers, especially microprocessors, in information and control systems. The amount of automation and the role of the operator are under discussion in many countries. Views of the operator's role presently vary; the main opinions can be summarized as follows: 1. Move toward a high degree of automation, fostering the machine's role. 2. Use computer-generated procedures to provide information to skilled operators, who make the final decision. 3. Use digital systems to help the operator identify problems, decide on the appropriate corrective actions, and aid in the execution of those actions. Tecnatom, S.A. has developed an integrated Human Factors Engineering (HFE) methodology, based on international regulations and experience gained from several national and international projects, combining technology, organization, and human elements to generate a Human-Centered Design. Human Factors Engineering (HFE) is the application of knowledge of human capabilities and characteristics to the development of equipment, facilities, and systems. With the application of this knowledge, human performance, and therefore system performance, can be dramatically improved. Man/machine systems designed with the human as a key element are inherently safer and more reliable than those that are not. Until recently, design of these human-equipment interfaces has been secondary to "pure hardware" design; that is, equipment and facilities were designed without formal consideration of the implications for operators. Our approach is to systematically apply an HFE methodology that will produce: a) Human-System Interfaces that are easy to use, friendly, and consistent for the operators. b) Simulator-Assisted Engineering platforms for validation activities in the logic, control, and human-system interface areas. c) Training Programs based on the systematic analysis of job and task requirements.
d) Procedures derived from the same design process and analyses as the Human-System Interface and Training. Application of good HFE methodology during system development, implementation and operation is, from our point of view, vital for optimal system performance regarding operation activities. “A disciplined approach to HFE helps ensure that humans are considered integral system components, requiring careful consideration of how they will interact with their equipment.”
APA, Harvard, Vancouver, ISO, and other styles
6

McNelles, Phillip, and Lixuan Lu. "Lab-Scale Design, Demonstration and Safety Assessment of an FPGA-Based Post Accident Monitoring System for Westinghouse AP1000 Nuclear Power Plants." In 2014 22nd International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/icone22-30457.

Full text
Abstract:
A Field Programmable Gate Array (FPGA) is a type of integrated circuit (IC) that is programmed after it is manufactured. FPGAs are referred to as a form of programmable hardware, as there is typically no software or operating system running on the FPGA itself. A significant amount of design work has been performed regarding the application of FPGAs in the nuclear field in recent years, with much of that work centered on safety-related Instrumentation and Control (I&C) systems and safety systems. These new FPGA-based systems are considered viable alternatives to replace many old I&C systems that are commonly used in Nuclear Power Plants (NPPs). Many of these older analog and digital systems are obsolete, and it has become increasingly difficult to maintain and repair them. FPGAs possess certain advantages over traditional analog circuits, PLCs, and microprocessors for nuclear I&C and safety system applications. This paper describes how FPGA technology has been used to construct a lab-scale implementation of a Post-Accident Monitoring System (PAMS) for a Westinghouse AP1000 Nuclear Power Plant, using a National Instruments "cRIO" chassis and I/O modules. This system performs the major functions of the existing PAMS, including monitoring vital values such as temperature, water level, pressure, flow rate, radiation levels, and neutron flux in the event of a serious reactor accident. These values are required by standards from bodies such as the United States Nuclear Regulatory Commission (NRC), the Canadian Nuclear Safety Commission (CNSC), the International Electrotechnical Commission (IEC), and the Institute of Electrical and Electronics Engineers (IEEE). All of the input signals are read and processed using the FPGA, which raises alarms if the values go beyond the specified range or change rapidly.
The values were then output to the computer through the FPGA interface to provide information to the operator, as well as being sent through analog and digital output modules for further processing. The system was tested using both simulated and real inputs from sensors. Furthermore, the reliability of the new system has also been analyzed, using the Dynamic Flowgraph Methodology (DFM). DFM has been successfully applied in both the nuclear and aerospace fields, and has been described as one of the best methodologies for modeling software/hardware interactions, by the scientific literature as well as in NRC reports. DFM was applied to fine-tune the design parameters by determining the potential causes of faults in the design, as well as to highlight the effectiveness of DFM in nuclear and in FPGA applications.
APA, Harvard, Vancouver, ISO, and other styles
7

Mawasha, P., S. P. Rooke, and R. Gross. "Examination of a “Map-Based” Approach to Modeling Steady State Heat Pump Performance With Variable Air Flow Rates." In ASME 1994 International Computers in Engineering Conference and Exhibition and the ASME 1994 8th Annual Database Symposium collocated with the ASME 1994 Design Technical Conferences. American Society of Mechanical Engineers, 1994. http://dx.doi.org/10.1115/cie1994-0467.

Full text
Abstract:
Abstract Performance map-based modeling has been commonly employed for the rapid prediction of air conditioning and heat pump steady state performance. This approach requires that performance characteristics of the compressor and heat exchangers can be expressed in the form of simple algebraic functions. The growth of microprocessor based control, particularly in residential systems, has led to more frequent off-design operation. This study explores the ability of the map-based modeling approach to predict off-design operating performance of a single air-source heat pump operating in the cooling mode. Off-design conditions were simulated by varying the evaporator air flow rate over a wide range. Performance predictions are compared with the predictions of a more rigorous heat pump simulation program which utilizes heat transfer correlations for the heat exchangers rather than performance maps. A PC math package was used to generate results for the map-based model, while a Fortran compiler was used to generate results for the more rigorous model. The benefits of the map-based approach in predicting system performance are presented and discussed.
APA, Harvard, Vancouver, ISO, and other styles
8

Gramatikas, George F., and Daniel L. Davis. "Expanding Expertise With State-of-the-Art Monitoring." In ASME 1989 International Gas Turbine and Aeroengine Congress and Exposition. American Society of Mechanical Engineers, 1989. http://dx.doi.org/10.1115/89-gt-95.

Full text
Abstract:
This paper describes a program that groups gas turbines from one or more sites for the purpose of efficient monitoring and performance evaluation. Cost-improved gas turbine and power plant operation is achieved by a new, unified-yet-flexible service approach which combines state-of-the-art microprocessor-based monitoring with routine and emergency evaluation by a core of highly skilled personnel many miles from the operating site. This unique approach delivers expertise which supplements the gas turbine owner’s in-house resources. It is based on a modular concept of condition health monitoring and performance evaluation, including scheduled as well as on-line services. Portable condition health monitoring equipment provides the capability for scheduled plant performance evaluation by service engineers without investment in additional equipment. On-line monitoring includes a PC-based software system and a computer link to the service engineer’s headquarters. Both scheduled and on-line monitoring services include trend evaluation, projected maintenance requirements, maintenance planning assistance and suggestions for performance enhancement.
APA, Harvard, Vancouver, ISO, and other styles
9

Nagothu, Kranthimanoj, Brian Kelley, Jeff Prevost, and Mo Jamshidi. "On prediction to dynamically assign heterogeneous microprocessors to the minimum joint power state to achieve Ultra Low Power Cloud Computing." In 2010 44th Asilomar Conference on Signals, Systems and Computers. IEEE, 2010. http://dx.doi.org/10.1109/acssc.2010.5757735.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Diaz, Jhon, Mehdi Karzar Jeddi, Nejat Olgac, Tai-Hsi Fan, and Ali Fuat Ergenc. "On the Geometric Characteristics of Cell Membrane Using Rotationally Oscillating Drill (Ros-Drill©)." In ASME 2009 Dynamic Systems and Control Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/dscc2009-2633.

Full text
Abstract:
ICSI (intracytoplasmic sperm injection) is a broadly utilized assisted reproductive technology. A number of new versions of the procedure have evolved lately, such as the piezo-assisted ICSI technique. An important problem with this technique, however, is that it requires small amounts of mercury to stabilize the pipette tip. A completely different, mercury-free technology, called the "Ros-Drill©" (rotationally oscillating drill), was developed by the authors' group. It uses microprocessor-controlled rotational oscillations of a spiked micropipette for piercing. One of the key issues is determining when to start the oscillations, which is based on the cell deformation prior to membrane piercing. In-situ measurements are needed for this protocol. The contribution of this paper is the utilization of computer vision for these measurements. The triggering logic is correlated to the cell membrane curvature variations along the vision-detected membrane line segment. Such a tool becomes very helpful for automating the Ros-Drill operation.
APA, Harvard, Vancouver, ISO, and other styles