To see the other types of publications on this topic, follow the link: UNIX System V (Computer operating system).

Journal articles on the topic 'UNIX System V (Computer operating system)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'UNIX System V (Computer operating system).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Bagnoli, Michele, Bruno Belvedere, Michele Bianchi, Alberto Borghetti, Andrea De Pascale, and Mario Paolone. "A Feasibility Study of an Auxiliary Power Unit Based on a PEM Fuel Cell for On-Board Applications." Journal of Fuel Cell Science and Technology 3, no. 4 (March 27, 2006): 445–51. http://dx.doi.org/10.1115/1.2349527.

Full text
Abstract:
Proton exchange membrane (PEM) fuel cells show characteristics of high power density, low operating temperature, and fast start-up capability, which make them potentially suitable to replace conventional power sources (e.g., internal combustion engines) as auxiliary power units (APU) for on-board applications. This paper presents a methodology for a preliminary investigation on either sizing and operating management of the main components of an on-board power system composed by: (i) PEM fuel cell, (ii) hydrogen storage subsystem, (iii) battery, (iv) grid interface for the connection to an external electrical power source when available, and (v) electrical appliances and auxiliaries installed on the vehicle. A model able to reproduce the typical profiles of electric power requests of on-board appliances and auxiliaries has been implemented in a computer program. The proposed methodology helps also to define the sizing of the various system components and to identify the fuel cell operating sequence, on the basis of the above mentioned load profiles.
APA, Harvard, Vancouver, ISO, and other styles
2

Furber, D. J. "The Unix operating system." Information and Software Technology 32, no. 3 (April 1990): 229. http://dx.doi.org/10.1016/0950-5849(90)90184-s.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Menendez, Doug. "Understanding the Unix Operating System." EDPACS 19, no. 2 (August 1991): 9–16. http://dx.doi.org/10.1080/07366989109451261.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Huan, Shi Yu. "A Unix-Based Telecom Billing System." Applied Mechanics and Materials 321-324 (June 2013): 2923–26. http://dx.doi.org/10.4028/www.scientific.net/amm.321-324.2923.

Full text
Abstract:
With the development of computer technology, people are no longer satisfied with a single operating system. More and more users choose UNIX because of its high reliability, good openness, and powerful networking features. This telecommunications billing system is designed on the Browser/Server model and implemented in the Java language for a UNIX-based laboratory environment. When a user rents a server, the system times the session accurately and charges automatically according to the business type, and it also manages information about the servers and users. The system is simple and has a friendly interface; it reduces the operating costs of telecommunications operators and improves their productivity.
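As a rough illustration of the time-and-charge logic this abstract describes, the sketch below computes a rental charge from a session's start and end times and a per-business-type hourly rate table. The rate values and type names are hypothetical placeholders, not figures from the paper.

```python
from datetime import datetime
from math import ceil

# Hypothetical hourly rates per business type (not taken from the paper).
HOURLY_RATES = {"standard": 2.0, "premium": 5.0}

def rental_charge(start: datetime, end: datetime, business_type: str) -> float:
    """Charge for a server rental, billed per started hour."""
    seconds = (end - start).total_seconds()
    hours = ceil(seconds / 3600)            # each started hour is billed in full
    return hours * HOURLY_RATES[business_type]

# Example: a rental from 09:00 to 11:30 under the "standard" type is billed as 3 hours.
print(rental_charge(datetime(2013, 6, 1, 9, 0),
                    datetime(2013, 6, 1, 11, 30), "standard"))  # 6.0
```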
APA, Harvard, Vancouver, ISO, and other styles
5

ZHANG, DU, and RAUL VELEZ. "BMS: A KNOWLEDGE-BASED TOOL FOR UNIX PERFORMANCE TUNING." International Journal on Artificial Intelligence Tools 05, no. 03 (September 1996): 323–45. http://dx.doi.org/10.1142/s0218213096000225.

Full text
Abstract:
This paper presents the design and implementation of a knowledge-based tool for performance tuning of the UNIX operating system. The tool, called BMS, provides intelligent support for identifying performance bottlenecks in UNIX and recommending solutions to the problems. Currently, it handles problems in UNIX resource management, such as memory utilization, disk utilization, CPU scheduling and I/O devices. BMS has been implemented in the EXSYS environment and tested on UNIX V.3. Preliminary results indicate that such a knowledge-based approach to operating system performance tuning (1) is viable; (2) increases the productivity of system maintenance personnel and reduces the cost of training; and (3) offers a better service to operating system users by providing prompt recommendations for solving their system performance problems.
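To make the knowledge-based idea concrete, here is a minimal rule-engine sketch in the spirit of BMS: it checks a few resource metrics against thresholds and emits recommendations. The metric names, thresholds, and advice strings are illustrative assumptions, not the actual EXSYS rule base described in the paper.

```python
# Minimal rule-based tuning advisor sketch (illustrative thresholds, not BMS's rules).
RULES = [
    # (metric, threshold, recommendation fired when value exceeds threshold)
    ("page_scan_rate", 200, "High page scan rate: memory may be overcommitted; consider adding RAM."),
    ("run_queue_len",    4, "Long run queue: CPU is saturated; rebalance or reschedule batch jobs."),
    ("disk_busy_pct",   80, "Disk is a bottleneck: spread busy filesystems across spindles."),
]

def advise(metrics: dict) -> list[str]:
    """Return a recommendation for every rule whose condition fires."""
    out = []
    for metric, threshold, advice in RULES:
        value = metrics.get(metric)
        if value is not None and value > threshold:
            out.append(advice)
    return out

# Metrics sampled from tools such as sar or vmstat would be fed in here.
print(advise({"page_scan_rate": 350, "run_queue_len": 2, "disk_busy_pct": 91}))
```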
APA, Harvard, Vancouver, ISO, and other styles
6

Menendez, Doug. "Introducing Information Systems Auditors to the Unix Operating System." EDPACS 19, no. 1 (July 1991): 8–15. http://dx.doi.org/10.1080/07366989109451255.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Nikulenkov, A. G., D. V. Samoilenko, and T. V. Nikulenkova. "Study of the impact of NPP rated thermal power uprate on process behavior at different transient conditions." Nuclear and Radiation Safety, no. 4(80) (December 3, 2018): 9–13. http://dx.doi.org/10.32918/nrs.2018.4(80).02.

Full text
Abstract:
Today, objective preconditions have formed for finding ways to increase the cost-effectiveness of NPP operation while providing the required safety level. One such way is to increase the nominal thermal power of a power unit. The paper presents the results of a reactor behavior analysis at thermal power above nominal, obtained using the one-dimensional system computer code RELAP5/MOD3.2 and a relevant model of a VVER-1000 (V-320) power unit. Calculation analyses are performed for quasi-static reactor operating conditions and transients using a realistic approach in terms of the initial performance parameters of the reactor installation. Representative initiating events for the transients were selected according to the following principle. For abnormal operation, an event was selected based on its high frequency and on consequences that require decreasing reactor power down to 50% of nominal thermal power. For emergency conditions, an event was selected that is caused by external extreme impacts typical for Ukrainian NPP sites and that results in the worst consequences. Thus, the transients are represented by events associated with the failure of a single turbine-driven feed water pump and a total station blackout of the unit. To analyze emergency conditions caused by long-term blackout, they were additionally accompanied by a leakage through the reactor coolant pump seals. Given that the increased steam flow in the turbine at thermal power above nominal requires additional studies on the residual service life of its critical components, a 3-D model of the high-pressure rotor of a full-speed turbine is proposed for further studies. Based on the calculations, a comparative analysis of the major reactor parameters at rated and increased thermal power is performed, with an assessment of the significant factors to be considered in further studies on increasing the installed thermal output of an NPP unit.
APA, Harvard, Vancouver, ISO, and other styles
8

Luta, Doudou, and Atanda Raji. "Fuzzy Rule-Based and Particle Swarm Optimisation MPPT Techniques for a Fuel Cell Stack." Energies 12, no. 5 (March 11, 2019): 936. http://dx.doi.org/10.3390/en12050936.

Full text
Abstract:
The negative environmental impact and the rapidly declining reserve of fossil fuel-based energy sources for electricity generation is a big challenge to finding sustainable alternatives. This scenario is complicated by the ever-increasing world population growth demanding a higher standard of living. A fuel cell system is able to generate electricity and water with higher energy efficiency while producing near-zero emissions. A common fuel cell stack displays a nonlinear power characteristic as a result of internal limitations and operating parameters such as temperature, hydrogen and oxygen partial pressures and humidity levels, leading to a reduced overall system performance. It is therefore important to extract as much power as possible from the stack, thus hindering excessive fuel use. This study considers and compares two Maximum Power Point Tracking (MPPT) approaches; one based on the Mamdani Fuzzy Inference System and the other on the Particle Swarm Optimisation (PSO) algorithm to maintain the output power of a fuel cell stack extremely close to its maximum. To ensure that, the power converter interfaced to the fuel cell unit must be able to continuously self-modify its parameters, hence changing its voltage and current depending upon the Maximum Power Point position. While various methods exist for Maximum Power Point tracker design, this paper analyses the response characteristics of a Mamdani Fuzzy Inference Engine and the Particle Swarm Optimisation technique. The investigation was conducted on a 53 kW Proton Exchange Membrane Fuel Cell interfaced to a DC-to-DC boost converter supplying 1.2 kV from a 625 V input DC voltage. The modelling was accomplished using a Matlab/Simulink environment. The results showed that the MPPT controller based on the PSO algorithm presented better tracking efficiency as compared to the Mamdani controller. Furthermore, the rise time of the PSO controller was slightly shorter than the Mamdani controller and the overshoot of the PSO controller was 2% lower than that of the Mamdani controller.
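The comparison hinges on PSO's ability to locate the maximum of the stack's power curve. Below is a minimal particle swarm sketch that maximizes a made-up unimodal power-versus-current curve; the curve, swarm size, and coefficients are illustrative assumptions, not the authors' 53 kW stack model or controller.

```python
import random

def power(i):
    """Toy fuel-cell power curve P(I) with a single maximum (illustrative only)."""
    return -0.02 * (i - 120.0) ** 2 + 300.0

def pso_max(f, lo, hi, n=15, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Plain particle swarm optimisation maximizing f on [lo, hi]."""
    x = [random.uniform(lo, hi) for _ in range(n)]
    v = [0.0] * n
    pbest = x[:]                                  # personal best positions
    gbest = max(pbest, key=f)                     # global best position
    for _ in range(iters):
        for k in range(n):
            v[k] = (w * v[k]
                    + c1 * random.random() * (pbest[k] - x[k])
                    + c2 * random.random() * (gbest - x[k]))
            x[k] = min(max(x[k] + v[k], lo), hi)  # keep particles inside the bounds
            if f(x[k]) > f(pbest[k]):
                pbest[k] = x[k]
        gbest = max(pbest, key=f)
    return gbest, f(gbest)

print(pso_max(power, 0.0, 250.0))   # converges near I = 120, P = 300
```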
APA, Harvard, Vancouver, ISO, and other styles
9

Liu, Hai-Tao, Zhi-Yu Wen, Yi Xu, Zheng-Guo Shang, Jin-Lan Peng, and Peng Tian. "An integrated microfluidic analysis microsystems with bacterial capture enrichment and in-situ impedance detection." Modern Physics Letters B 31, no. 25 (September 6, 2017): 1750233. http://dx.doi.org/10.1142/s0217984917502335.

Full text
Abstract:
In this paper, an integrated microfluidic analysis microsystem with bacterial capture enrichment and in-situ impedance detection is proposed, based on the microfluidic-chip dielectrophoresis technique and the electrochemical impedance detection principle. The microsystem includes a microfluidic chip, a main control module, a drive and control module, a signal detection and processing module, and a result display unit. The main control module produces the work sequence of the impedance detection system parts and provides the data communication functions; the drive and control circuit generates an AC signal with adjustable amplitude and frequency, which is applied to the foodborne-pathogen impedance analysis microsystem to realize capture enrichment and impedance detection. The signal detection and processing circuit translates the current signal into the impedance of the bacteria and transfers it to a computer, where the final detection result is displayed. The experimental sample was prepared by adding an Escherichia coli standard sample into a chicken sample solution, and the samples were tested on the dielectrophoresis-chip capture enrichment and in-situ impedance detection microsystem with micro-array electrode microfluidic chips. The experiments show that the Escherichia coli detection limit of the microsystem is [Formula: see text] CFU/mL and the detection time is within 6 min under the optimized operating conditions of 10 V detection voltage and 500 kHz detection frequency. The integrated microfluidic analysis microsystem lays a solid foundation for rapid, real-time, in-situ detection of bacteria.
APA, Harvard, Vancouver, ISO, and other styles
10

Djordjevic-Kajan, S., Dragan Stojanovic, and Aleksandar Stanimirovic. "Advanced System Software curricula." Facta universitatis - series: Electronics and Energetics 18, no. 2 (2005): 309–17. http://dx.doi.org/10.2298/fuee0502309d.

Full text
Abstract:
An advanced System Software curriculum at the Faculty of Electronic Engineering in Nis is presented in this paper. The system software track consists of two important themes of Computer Science and Computing in general, now organized as two separate courses: an Operating Systems course and a System Software Development and System Programming course. Both courses offer extensive teaching of the foundational concepts and principles of operating systems and system programming, along with the design and implementation of the presented topics in real operating systems and system software such as Unix, Linux and Windows 2000/XP. The laboratory environments and exercises for both courses offer examination of the main algorithms and structures within operating systems and system software through simulation and, more importantly, hands-on experience with operating system internals and code.
APA, Harvard, Vancouver, ISO, and other styles
11

Koppány, J. "A Dual-Computer Based Data Acquisition and Control System Using Xenix/Unix System V." IFAC Proceedings Volumes 19, no. 7 (May 1986): 123–26. http://dx.doi.org/10.1016/b978-0-08-034347-1.50019-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Bateson, Bill, and Geraint Davies. "System V interface definition — a last chance for Unix?" Microprocessors and Microsystems 9, no. 7 (September 1985): 337–39. http://dx.doi.org/10.1016/0141-9331(85)90318-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

VALLÉE, GEOFFROY, RENAUD LOTTIAUX, LOUIS RILLING, JEAN-YVES BERTHOU, IVAN DUTKA MALHEN, and CHRISTINE MORIN. "A CASE FOR SINGLE SYSTEM IMAGE CLUSTER OPERATING SYSTEMS: THE KERRIGHED APPROACH." Parallel Processing Letters 13, no. 02 (June 2003): 95–122. http://dx.doi.org/10.1142/s0129626403001185.

Full text
Abstract:
In this paper, we present fundamental mechanisms for global process and memory management in an efficient single system image cluster operating system designed to execute workloads composed of high performance sequential and parallel applications. Their implementation in Kerrighed, our proposed distributed operating system, is composed of a set of Linux modules and a patch of less than 200 lines of code to the Linux kernel. Kerrighed is a unique single system image cluster operating system providing the standard Unix interface as well as distributed OS mechanisms such as load balancing on all cluster nodes. Our support for standard Unix interface includes support for multi-threaded applications and a checkpointing facility for both sequential and shared memory parallel applications. We present an experimental evaluation of the Kerrighed system and demonstrate the feasibility of the single system image approach at the kernel level.
APA, Harvard, Vancouver, ISO, and other styles
14

Hughes, Larry. "Chat: an N-party talk facility for the Unix 4.2 operating system." Computer Communications 11, no. 1 (February 1988): 20–23. http://dx.doi.org/10.1016/0140-3664(88)90004-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

JURTELA, JURIJ. "SISTEMI UPRAVLJANJA OGNJENE PODPORE V SODOBNIH OBOROŽENIH SILAH." PROFESIONALIZACIJA SLOVENSKE VOJSKE / PROFESSIONALIZATION OF THE SLOVENIAN ARMED FORCES, VOLUME 2012/ ISSUE 14/1 (May 30, 2012): 89–107. http://dx.doi.org/10.33179/bsv.99.svi.11.cmc.14.1.6.

Full text
Abstract:
The effort of the countries to provide global peace has become a priority. The trend of combined unit operations is universally present and established. A large gap between the potential and actual execution of combined operations has led to the integration of national command and control (C2) systems at the operational level. The integration of systems primarily requires the standardization of procedures and equipment. The standardization is thus no longer limited solely to the national level, but it should be global. Modern C2 systems are directed towards optimal completion of tasks. Built as networks, they allow for the integration of the existing and future modules and for their communication without additional interfaces. With a proper coordination and allocation of resources we also substantially reduce the material and financial resources. Fire support plays an important role in providing security during peace tasks. At the same time, great fire power ensures battlefield superiority, since it includes joint and coordinated use of fire from land, navy and air engagement systems, and offensive operation of electronic warfare systems and non-lethal means against land and sea targets. Such operations require an appropriate computer system which links all the components into a fire support management system. The awareness of the importance of fire support has led to the development of a computer interface, which connects fire support management systems of individual countries into a whole and thus enables joint operations. The interface was made in a way to preserve national work processes. Further education and training are therefore not necessary. The Slovenian Armed Forces (SAF) follows the globalization trends. To this end, it has acquired an operational and tactical system capable of international connections. Unfortunately, some SAF systems, namely the fire support management system, do not include this feature. Although the system is a modern one, it loses a great deal of benefits due to the lack of appropriate national and international links. Therefore, new and more appropriate solutions for connections, capable of fulfilling contemporary and future requirements, are sought-after. The application of the solution can also be applied to other autonomous systems, such as the intelligence system or the logistics management system. The main principle shall be the awareness that new standardization and international cooperation do not incur increased costs, but rather a quality improvement of the operations and a quantitative reduction of the required resources.
APA, Harvard, Vancouver, ISO, and other styles
16

Lim, Seung-Ho, WoonSik William Suh, Jin-Young Kim, and Sang-Young Cho. "RISC-V Virtual Platform-Based Convolutional Neural Network Accelerator Implemented in SystemC." Electronics 10, no. 13 (June 23, 2021): 1514. http://dx.doi.org/10.3390/electronics10131514.

Full text
Abstract:
Optimizing hardware processors and systems to perform deep learning operations, such as Convolutional Neural Networks (CNNs), on resource-limited embedded devices is an active area of recent research. In order to run an optimized deep neural network model using the limited computational units and memory of an embedded device, it is necessary to quickly apply various configurations of hardware modules to various deep neural network models and find the optimal combination. An Electronic System Level (ESL) simulator based on SystemC is very useful for rapid hardware modeling and verification. In this paper, we designed and implemented a Deep Learning Accelerator (DLA) that performs Deep Neural Network (DNN) operations based on a RISC-V Virtual Platform implemented in SystemC, in order to enable rapid and diverse analysis of deep learning operations in an embedded device based on the RISC-V processor, a recently emerging embedded processor. The developed RISC-V based DLA prototype can analyze the hardware requirements for a given CNN data set through the configuration of the CNN DLA architecture, can run RISC-V compiled software on the platform, and can execute a real neural network model such as Darknet. We ran the Darknet CNN model on the developed DLA prototype and confirmed that computational overhead and inference errors can be analyzed with the prototype by examining the DLA architecture for various data sets.
APA, Harvard, Vancouver, ISO, and other styles
17

Oyarzun, Francisco J. "On the 30th Anniversary of UNIX, Are We Finally Going to Enjoy a "Modern" Operating System?" SIMULATION 70, no. 4 (April 1998): 266–72. http://dx.doi.org/10.1177/003754979807000408.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

GALLARD, PASCAL, and CHRISTINE MORIN. "DYNAMIC STREAMS FOR EFFICIENT COMMUNICATIONS BETWEEN MIGRATING PROCESSES IN A CLUSTER." Parallel Processing Letters 13, no. 04 (December 2003): 601–14. http://dx.doi.org/10.1142/s0129626403001549.

Full text
Abstract:
This paper presents a communication system designed to allow efficient process migration in a cluster. The proposed system is generic enough to allow the migration of any kind of stream: socket, pipe, char devices. Communicating processes using IP or Unix sockets are transparently migrated with our mechanisms and they can still efficiently communicate after migration. The designed communication system is implemented as part of Kerrighed, a single system image operating system for a cluster based on Linux. Preliminary performance results are presented.
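For readers unfamiliar with the streams the system must preserve across migration, the snippet below creates an ordinary AF_UNIX stream socket pair and exchanges a message. It only illustrates the standard socket interface whose semantics must survive migration, not Kerrighed's dynamic-stream mechanism itself.

```python
import socket

# A connected pair of Unix-domain stream sockets, as two local processes would share.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

parent.sendall(b"hello from the parent end")
print(child.recv(1024))          # b'hello from the parent end'

# Migrating either endpoint to another node requires the OS to keep this stream's
# semantics intact, which is what the paper's dynamic streams are designed to do.
parent.close()
child.close()
```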
APA, Harvard, Vancouver, ISO, and other styles
19

Rusjan, Borut. "Computer system validation." Management 25, no. 2 (December 21, 2020): 1–23. http://dx.doi.org/10.30924/mjcmi.25.2.1.

Full text
Abstract:
The purpose of this paper is to present a Quality Management System (QMS) for computer systems validation and to identify and demonstrate the validation process on a practical case of a pharmaceutical company. Based on the European and the US legal requirements, we define QMS for computer system validation elements. Validation process example based on the use of a general V-model provides a thorough understanding of the actual validation implementation in practice. Computer system validation in a concrete organization can be implemented, based on general and specific standard operating procedures which form the QMS. Planning, Specifying, Development/Building, Verification and Report validation activities are presented through process diagrams based on a practical Supervisory Control and Data Acquisition (SCADA) manufacturing computer-aided system validation example. Empirical part employed two research strategies: a single case study and action research. Presented computer system validation QMS and process can provide a guideline for all companies where computer systems are important. Although the presented QMS and process for the computer system validation are related to a specific pharmaceutical company case and its legal requirements, the experience from this highly regulated industry can be appropriately used in other less regulated industries. For verification of the proposed model, they need to be further tested within the pharmaceutical and other less regulated industries.
APA, Harvard, Vancouver, ISO, and other styles
20

Ohata, Toru, Hiroyuki Konishi, Hiroaki Kimura, Yukito Furukawa, Kenji Tamasaku, Takeshi Nakatani, Toshiya Tanabe, Norimasa Matsumoto, Miho Ishii, and Tetsuya Ishikawa. "SPring-8 beamline control system." Journal of Synchrotron Radiation 5, no. 3 (May 1, 1998): 590–92. http://dx.doi.org/10.1107/s0909049597016038.

Full text
Abstract:
The SPring-8 beamline control system is now taking part in the control of the insertion device (ID), front end, beam transportation channel and all interlock systems of the beamline: it will supply a highly standardized environment of apparatus control for collaborative researchers. In particular, ID operation is very important in a third-generation synchrotron light source facility. It is also very important to consider the security system because the ID is part of the storage ring and is therefore governed by the synchrotron ring control system. The progress of computer networking systems and the technology of security control require the development of a highly flexible control system. An interlock system that is independent of the control system has increased the reliability. For the beamline control system the so-called standard model concept has been adopted. VME-bus (VME) is used as the front-end control system and a UNIX workstation as the operator console. CPU boards of the VME-bus are RISC processor-based board computers operated by a LynxOS-based HP-RT real-time operating system. The workstation and the VME are linked to each other by a network, and form the distributed system. The HP 9000/700 series with HP-UX and the HP 9000/743rt series with HP-RT are used. All the controllable apparatus may be operated from any workstation.
APA, Harvard, Vancouver, ISO, and other styles
21

HUNTER, T. R., R. W. WILSON, R. KIMBERK, P. S. LEIKER, N. A. PATEL, R. BLUNDELL, R. D. CHRISTENSEN, et al. "THE DIGITAL MOTION CONTROL SYSTEM FOR THE SUBMILLIMETER ARRAY ANTENNAS." Journal of Astronomical Instrumentation 02, no. 01 (September 2013): 1350002. http://dx.doi.org/10.1142/s2251171713500025.

Full text
Abstract:
We describe the design and performance of the digital servo and motion control system for the 6-meter parabolic antennas of the Submillimeter Array (SMA) on Mauna Kea, Hawaii. The system is divided into three nested layers operating at a different, appropriate bandwidth. (1) A rack-mounted, real-time Unix system runs the position loop which reads the high resolution azimuth and elevation encoders and sends velocity and acceleration commands at 100 Hz to a custom-designed servo control board (SCB). (2) The microcontroller-based SCB reads the motor axis tachometers and implements the velocity loop by sending torque commands to the motor amplifiers at 558 Hz. (3) The motor amplifiers implement the torque loop by monitoring and sending current to the three-phase brushless drive motors at 20 kHz. The velocity loop uses a traditional proportional-integral-derivative (PID) control algorithm, while the position loop uses only a proportional term and implements a command shaper based on the Gauss error function. Calibration factors and software filters are applied to the tachometer feedback prior to the application of the servo gains in the torque computations. All of these parameters are remotely adjustable in the software. The three layers of the control system monitor each other and are capable of shutting down the system safely if a failure or anomaly occurs. The Unix system continuously relays the antenna status to the central observatory computer via reflective memory. In each antenna, a Palm Vx hand controller displays the complete system status and allows full local control of the drives in an intuitive touchscreen user interface. The hand controller can also be connected outside the cabin, a major convenience during the frequent reconfigurations of the interferometer. Excellent tracking performance (~ 0.3′′ rms) is achieved with this system. It has been in reliable operation on 8 antennas for over 10 years and has required minimal maintenance.
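The abstract names two concrete control ingredients: a PID velocity loop and a position-command shaper based on the Gauss error function. The sketch below shows generic discrete-time versions of both; the gains, time constants, and shaping width are placeholders, not the SMA's tuned values.

```python
import math

class PID:
    """Discrete PID controller (generic form, not the SMA's tuned loop)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def erf_shaped_command(target, t, t0=0.0, tau=1.0):
    """Smooth 0 -> target position command shaped with the Gauss error function."""
    return target * 0.5 * (1.0 + math.erf((t - t0) / tau))

vel_loop = PID(kp=2.0, ki=0.5, kd=0.05, dt=1.0 / 558)   # 558 Hz velocity-loop rate from the abstract
print(erf_shaped_command(90.0, t=0.5), vel_loop.step(10.0, 9.2))
```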
APA, Harvard, Vancouver, ISO, and other styles
22

Craig, Iain. "The Unix Operating System (Second Edition) by Kaare Christian John Wiley and Sons, New York, 1988, 455pp. (incl. index) (£17.50)." Robotica 7, no. 1 (January 1989): 87. http://dx.doi.org/10.1017/s0263574700005312.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Yussif, Neama, Omar H. Sabry, Ayman S. Abdel-Khalik, Shehab Ahmed, and Abdelfatah M. Mohamed. "Enhanced Quadratic V/f-Based Induction Motor Control of Solar Water Pumping System." Energies 14, no. 1 (December 28, 2020): 104. http://dx.doi.org/10.3390/en14010104.

Full text
Abstract:
In rural and remote areas, solar photovoltaic energy (PV) water pumping systems (SPWPSs) are being favored over diesel-powered water pumping due to environmental and economic considerations. PV is a clean source of electric energy offering low operational and maintenance cost. However, the direct-coupled SPWPS requires inventive solutions to improve the system’s efficiency under solar power variations while producing the required amount of pumped water concurrently. This paper introduces a new quadratic V/f (Q V/f) control method to drive an induction motor powered directly from a solar PV source using a two-stage power converter without storage batteries. Conventional controllers usually employ linear V/f control, where the reference motor speed is derived from the PV input power and the dc-link voltage error using a simple proportional–integral (PI) controller. The proposed Q V/f-based system is compared with the conventional linear V/f control using a simulation case study under different operating conditions. The proposed controller expectedly enhances the system output power and efficiency, particularly under low levels of solar irradiance. Some alternative controllers rather than the simple PI controller are also investigated in an attempt to improve the system dynamics as well as the water flow output. An experimental prototype system is used to validate the proposed Q V/f under diverse operating conditions.
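Assuming "quadratic V/f" denotes the usual square-law voltage-frequency profile (my reading of the abstract, not a formula quoted from it), the reference voltage can be contrasted with the conventional linear law as follows:

```latex
% Conventional linear V/f law
V^{*}_{\text{lin}}(f) = V_{\text{rated}}\,\frac{f}{f_{\text{rated}}},
\qquad
% Quadratic V/f law, suited to pump-type loads
V^{*}_{\text{quad}}(f) = V_{\text{rated}}\left(\frac{f}{f_{\text{rated}}}\right)^{2}.
```

A square-law profile matches centrifugal pump loads, whose torque demand grows roughly with the square of speed, which is consistent with the reported efficiency gain at low irradiance when the motor runs slowly.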
APA, Harvard, Vancouver, ISO, and other styles
24

CHENG, CHING-CHING, MING-SHING YOUNG, SHANG-WEN YOUNG, and CHANG-LIN CHUANG. "AN INTELLIGENT DIGITAL VOLTAMMETRIC SYSTEM WITH MULTIPLE FUNCTIONS EXECUTED THROUGH STAND-ALONE OPERATION OR PC-CONTROL." Biomedical Engineering: Applications, Basis and Communications 14, no. 05 (October 25, 2002): 218–36. http://dx.doi.org/10.4015/s1016237202000322.

Full text
Abstract:
This study presents an intelligent voltammetric system consisting of a personal computer and a digital voltammeter with VXIbus architecture of system control board, voltammetric measurement board and electrode evaluation board. System is designed to provide superior, comprehensive, versatile and convenient storage, analysis and display of electrochemical and voltammetric waveforms. Voltammeter is capable of stand-alone operation or direct PC control through a Labview program and serial communication interface. Stand-alone offers several general voltammetric functions such as electrochemical treatment and evaluation of electrodes and experimental voltammetry. PC connection gives additional functions such as automatic scanning of oxidation potential, expanded storage and processing of experimental data, arbitrary voltammetric waveform parameters, etc. Standalone uses microcontroller and three-bus structure, with EEPROM storing waveform parameters, experimental data and machine code program downloaded from PC. Electrode evaluation board tests electrode quality by measuring electrode equivalent resistance and capacitance, requiring only one button to perform the entire procedure. Minimum potential unit is 1 mV, at which setting the voltage range is −2.05 to +2.05 V. At a minimum unit of 4.9 mV, the voltage range is −10 to +10V. Experimental results are presented using carbon fiber electrode to measure the dopamine concentration in PBS solution, showing minimum oxidation current can be measured to less than 10 pA, with a minimum detectable bulk concentration of less than 10 ppb. The combination of PC with stand-alone voltammeter offers high-speed, precision, automation, versatility and portability, while the VXIbus architecture allows easy expansion capability.
APA, Harvard, Vancouver, ISO, and other styles
25

Cáceres, Manuel, Andrés Firman, Jesús Montes-Romero, Alexis Raúl González Mayans, Luis Horacio Vera, Eduardo F. Fernández, and Juan de la Casa Higueras. "Low-Cost I–V Tracer for PV Modules under Real Operating Conditions." Energies 13, no. 17 (August 20, 2020): 4320. http://dx.doi.org/10.3390/en13174320.

Full text
Abstract:
Solar photovoltaic technologies have undergone significant scientific development. To ensure the transfer of knowledge through the training of qualified personnel, didactic tools that can be acquired or built at a reasonable price are needed. Most training and research centres have restrictions on acquiring specific equipment due to its high cost. With this in mind, this article presents the development and transfer of a low-cost I–V curve tracer acquisition system. The device is made up of embedded systems with all the necessary hardware and software for its operation. The hardware and software presented are open source and have a low cost, i.e., the estimated material cost of the system is less than 200 euros. For its development, four institutions from three different countries participated in the project. Three photovoltaic technologies were used to measure the uncertainties related to the equipment developed. In addition, the system can be transferred for use as an academic or research tool, as long as the measurement does not need to be certified. Two accredited laboratories have certified the low uncertainties in the measurement of the proposed system.
APA, Harvard, Vancouver, ISO, and other styles
26

Ham, Seok-Hyeong, Yoon-Geol Choi, Hyeon-Seok Lee, Sang-Won Lee, Su-Chang Lee, and Bongkoo Kang. "High-efficiency Bidirectional Buck–Boost Converter for Residential Energy Storage System." Energies 12, no. 19 (October 6, 2019): 3786. http://dx.doi.org/10.3390/en12193786.

Full text
Abstract:
This paper proposes a bidirectional dc–dc converter for residential micro-grid applications. The proposed converter can operate over an input voltage range that overlaps the output voltage range. This converter uses two snubber capacitors to reduce the switch turn-off losses, a dc-blocking capacitor to reduce the input/output filter size, and a 1:1 transformer to reduce core loss. The windings of the transformer are connected in parallel and in reverse-coupled configuration to suppress magnetic flux swing in the core. Zero-voltage turn-on of the switch is achieved by operating the converter in discontinuous conduction mode. The experimental converter was designed to operate at a switching frequency of 40–210 kHz, an input voltage of 48 V, an output voltage of 36–60 V, and an output power of 50–500 W. The power conversion efficiency for boost conversion to 60 V was ≥98.3% in the entire power range. The efficiency for buck conversion to 36 V was ≥98.4% in the entire power range. The output voltage ripple at full load was <3.59 Vp.p for boost conversion (60 V) and 1.35 Vp.p for buck conversion (36 V) with the reduced input/output filter. The experimental results indicate that the proposed converter is well-suited to smart-grid energy storage systems that require high efficiency, small size, and overlapping input and output voltage ranges.
APA, Harvard, Vancouver, ISO, and other styles
27

Kusumaningrum, Anggraini. "PENGUJIAN KINERJA JARINGAN SISTEM AKSES FILE BERBASIS CLIENT SERVER MENGGUNAKAN SAMBA SERVER." Conference SENATIK STT Adisutjipto Yogyakarta 2 (November 15, 2016): 129. http://dx.doi.org/10.28989/senatik.v2i0.31.

Full text
Abstract:
Data communication is the process of exchanging data between two or more devices through a transmission medium such as a cable. For data communication to occur, the devices must be connected so that they can communicate with each other, or must be part of a communication system consisting of hardware and software. A Samba server is software that bridges two operating systems running within a computer network. Samba is able to share files with computers that use the Linux, Unix and Windows operating systems in a peer-to-peer fashion. On a LAN-based client-server file access system using a Samba server, with file sizes from a minimum of 3 MB to a maximum of 1 GB, the time needed for the download process is 04 minutes 54 seconds, while the upload takes 09 minutes 24 seconds. The resulting transfer rate for the same file sizes is 4.762 Kbps for the download process and 1.896 Kbps for the upload. QoS testing conducted with 5, 10, 15 and 20 client PCs shows that the more client PCs access a single file, the worse the success rate becomes, because of a bottleneck on the network. Keywords: Network, client server, Samba Server
APA, Harvard, Vancouver, ISO, and other styles
28

Axelrod, Robert, and D. Scott Bennett. "A Landscape Theory of Aggregation." British Journal of Political Science 23, no. 2 (April 1993): 211–33. http://dx.doi.org/10.1017/s000712340000973x.

Full text
Abstract:
Aggregation means the organization of elements of a system into patterns that tend to put highly compatible elements together and less compatible elements apart. Landscape theory predicts how aggregation will lead to alignments among actors (such as nations), whose leaders are myopic in their assessments and incremental in their actions. The predicted configurations are based upon the attempts of actors to minimize their frustration based upon their pairwise propensities to align with some actors and oppose others. These attempts lead to a local minimum in the energy landscape of the entire system. The theory is supported by the results of two cases: the alignment of seventeen European nations in the Second World War and membership in competing alliances of nine computer companies to set standards for Unix computer operating systems. The theory has potential for application to coalitions of political parties in parliaments, social networks, social cleavages in democracies and organizational structures.
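To make the energy-minimization idea concrete, here is a small brute-force sketch of one common reading of landscape theory: actors with sizes and pairwise propensities are split into two alignments, separated pairs contribute size-weighted propensity to the energy, and the lowest-energy split is reported. The sizes, propensities, and the exact functional form are illustrative assumptions from my reading of the abstract, not Axelrod and Bennett's data or definitions.

```python
from itertools import product

# Illustrative actors with sizes; propensities: +1 friendly, -1 hostile (made-up data).
sizes = {"A": 3, "B": 2, "C": 2, "D": 1}
prop = {("A", "B"): 1, ("A", "C"): -1, ("A", "D"): -1,
        ("B", "C"): -1, ("B", "D"): 1, ("C", "D"): 1}

def energy(side):
    """E(X) = sum over pairs of s_i * s_j * p_ij * d_ij(X), with d = 1 when the pair is split."""
    e = 0.0
    for (i, j), p in prop.items():
        d = 0 if side[i] == side[j] else 1
        e += sizes[i] * sizes[j] * p * d
    return e

# Brute-force search over all two-bloc configurations (2^n, fine for a handful of actors).
actors = list(sizes)
best = min((dict(zip(actors, bits)) for bits in product([0, 1], repeat=len(actors))),
           key=energy)
print(best, energy(best))
```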
APA, Harvard, Vancouver, ISO, and other styles
29

Pediaditis, Panagiotis, Katja Sirviö, Charalampos Ziras, Kimmo Kauhaniemi, Hannu Laaksonen, and Nikos Hatziargyriou. "Compliance of Distribution System Reactive Flows with Transmission System Requirements." Applied Sciences 11, no. 16 (August 22, 2021): 7719. http://dx.doi.org/10.3390/app11167719.

Full text
Abstract:
Transmission system operators (TSOs) often set requirements to distribution system operators (DSOs) regarding the exchange of reactive power on the interface between the two parts of the system they operate, typically High Voltage and Medium Voltage. The presence of increasing amounts of Distributed Energy Resources (DERs) at the distribution networks complicates the problem, but provides control opportunities in order to keep the exchange within the prescribed limits. Typical DER control methods, such as constant cosϕ or Q/V functions, cannot adequately address these limits, while power electronics interfaced DERs provide to DSOs reactive power control capabilities for complying more effectively with TSO requirements. This paper proposes an optimisation method to provide power set-points to DERs in order to control the hourly reactive power exchanges with the transmission network. The method is tested via simulations using real data from the distribution substation at the Sundom Smart Grid, in Finland, using the operating guidelines imposed by the Finnish TSO. Results show the advantages of the proposed method compared to traditional methods for reactive power compensation from DERs. The application of more advanced Model Predictive Control techniques is further explored.
APA, Harvard, Vancouver, ISO, and other styles
30

Wu, Wenjuan, Dongchu Su, Bo Yuan, and Yong Li. "Intelligent Security Monitoring System Based on RISC-V SoC." Electronics 10, no. 11 (June 7, 2021): 1366. http://dx.doi.org/10.3390/electronics10111366.

Full text
Abstract:
With the development of the economy and society, the demand for social security and stability increases. However, traditional security systems rely too much on human resources and are affected by uncontrollable community security factors. An intelligent security monitoring system can overcome the limitations of traditional systems and save human resources, contributing to public security. To build this system, a RISC-V SoC is first designed in this paper and implemented on the Nexys-Video Artix-7 FPGA. Then, the Linux operating system is transplanted and successfully run. Meanwhile, the driver of related hardware devices is designed independently. After that, three OpenCV-based object detection models including YOLO (You Only Look Once), Haar (Haar-like features), and LBP (Local Binary Pattern) are compared, and the LBP model is chosen to design applications. Finally, the processing speed of 1.25 s per frame is realized to detect and track moving objects. To sum up, we build an intelligent security monitoring system with real-time detection, tracking, and identification functions through hardware and software collaborative design. This paper also proposes a video downsampling technique. Based on this technique, the BRAM resource usage on the hardware side is reduced by 50% and the amount of pixel data that needs to be processed on the software side is reduced by 75%. A video downsampling technology is also proposed in this paper to achieve better video display effects under limited hardware resources. It provides conditions for future function expansion and improves the models’ processing speed. Additionally, it reduces the run time of the application and improves the system performance.
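The reported savings are consistent with a simple 2x decimation reading (an assumption on my part, not a figure-by-figure account of the paper's design): discarding every other pixel in one dimension halves each buffered line, and doing it in both dimensions leaves only a quarter of the pixels for the software side. The snippet below illustrates that arithmetic as a plain NumPy sketch, not the paper's FPGA implementation.

```python
import numpy as np

frame = np.zeros((720, 1280), dtype=np.uint8)    # one 720p grayscale frame

# Horizontal 2x decimation: each buffered line is half as wide (about 50% less line-buffer storage).
half_line = frame[0, ::2]
print(half_line.size / frame.shape[1])           # 0.5

# 2x decimation in both dimensions: only 25% of the pixels reach the software side.
small = frame[::2, ::2]
print(small.size / frame.size)                   # 0.25, i.e. a 75% reduction
```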
APA, Harvard, Vancouver, ISO, and other styles
31

Tanmay Ghosh, Khan Alamgir, Yang Xingyao, and Muhammad Fayaz. "Computer-Aided Diagnostic System for Digital Mammography." INFORMATION TECHNOLOGY IN INDUSTRY 9, no. 2 (April 12, 2021): 989–95. http://dx.doi.org/10.17762/itii.v9i2.443.

Full text
Abstract:
In this work, Computer-Aided Detection (CADe) and Computer-Aided Diagnosis (CADx) systems are developed and tested using the public and freely available mammographic databases named MIAS and DDSM, respectively. The CADe system is used to differentiate between normal and abnormal tissues, and it assists radiologists to avoid missing a breast abnormality. The CADx system is developed to distinguish between normal, benign and malignant breast tissues, and it helps radiologists decide whether or not a biopsy is needed when reading a diagnostic mammogram. Any CAD system is composed of typical stages including preprocessing and segmentation of mammogram images, extraction of regions of interest (ROI), feature extraction, feature selection and classification. In both proposed CAD systems, ROIs are selected using a window size of 32×32 pixels, then a total of 543 features from four different feature categories are extracted from each ROI and normalized. After that, the selection of the most relevant features is performed using four different selection methods from the MATLAB Pattern Recognition Toolbox v.5 (PRtool5), named Sequential Backward Selection (SBS), Sequential Forward Selection (SFS), Sequential Floating Forward Selection (SFFS) and Branch and Bound Selection (BBS). We also utilized Principal Component Analysis (PCA) as a fifth method to reduce the dimensions of the feature set. After that, we used different classifiers such as Support Vector Machines (SVM), K-Nearest Neighbor (K-NN), Quadratic Discriminant Analysis (QDA) and Artificial Neural Networks (ANN) for the classification. Both CAD systems have the same implementation stages but different output: CADe systems are designed to detect breast abnormalities, while the CADx system indicates the likelihood of malignancy of lesions. Finally, we independently compared the performance of all classifiers with each selection method in both modes. The evaluation of the proposed CAD systems is done using performance indices such as sensitivity, specificity, the area under the curve (AUC) of the Receiver Operating Characteristic (ROC) curves, the overall accuracy and the Cohen-k factor. Both CAD systems provided encouraging results, which differed depending on the selection method and classifier.
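The pipeline this abstract describes (ROI features, normalization, dimensionality reduction or selection, classification, ROC evaluation) maps directly onto standard tooling. The sketch below reproduces that shape with scikit-learn on synthetic data; it is an analogue of the workflow, not the authors' MATLAB PRtool5 implementation, and the 543-feature setting is only echoed for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for 543 ROI features with normal vs. abnormal tissue labels.
X, y = make_classification(n_samples=600, n_features=543, n_informative=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Normalize, reduce dimensions with PCA, then classify with an SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=30), SVC(probability=True))
clf.fit(X_tr, y_tr)

scores = clf.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, scores))       # area under the ROC curve
```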
APA, Harvard, Vancouver, ISO, and other styles
32

Montaser, Ali, Ibrahim Bakry, Adel Alshibani, and Osama Moselhi. "Estimating productivity of earthmoving operations using spatial technologies1This paper is one of a selection of papers in this Special Issue on Construction Engineering and Management." Canadian Journal of Civil Engineering 39, no. 9 (September 2012): 1072–82. http://dx.doi.org/10.1139/l2012-059.

Full text
Abstract:
This paper presents an automated method for estimating productivity of earthmoving operations in near-real-time. The developed method utilizes Global Positioning System (GPS) and Google Earth to extract the data needed to perform the estimation process. A GPS device is mounted on a hauling unit to capture the spatial data along designated hauling roads for the project. The variations in the captured cycle times were used to model the uncertainty associated with the operation involved. This was carried out by automated classification, data fitting, and computer simulation. The automated classification is applied through a spreadsheet application that classifies GPS data and identifies, accordingly, durations of different activities in each cycle using spatial coordinates and directions captured by GPS and recorded on its receiver. The data fitting was carried out using commercially available software to generate the probability distribution functions used in the simulation software “Extend V.6”. The simulation was utilized to balance the production of an excavator with that of the hauling units. A spreadsheet application was developed to perform the calculations. An example of an actual project was analyzed to demonstrate the use of the developed method and illustrates its essential features. The analyzed case study demonstrates how the proposed method can assist project managers in taking corrective actions based on the near-real-time actual data captured and processed to estimate productivity of the operations involved.
APA, Harvard, Vancouver, ISO, and other styles
33

Luo, Jia Bin, Jian Qun Liu, Wei Qiang Gao, and Xin Du Chen. "Design and Implementation of Keyboard Module in V-Groove CNC System Based on LPC1343." Key Engineering Materials 620 (August 2014): 550–55. http://dx.doi.org/10.4028/www.scientific.net/kem.620.550.

Full text
Abstract:
The V-groove component which needs to be machined by ultra-precision CNC (Computer numerical control) machine tool is a core part in optical fiber connector. A design of keyboard module for V-groove ultra-precision CNC system based on LPC1343 is presented. This paper introduced the functions and features of LPC1343 and described the overall hardware structure of the keyboard module at first. Then the basic design, such as scan and decoding, design principle of module circuit and communication process between keyboard module and upper computer, were emphasized in detail. Afterwards, the functional design including data editing, LED indicator driving, override switch adjusting and motion control operating were also finished. A series of test have been conducted and the results have shown that this kind of keyboard module is reliable and stable.
APA, Harvard, Vancouver, ISO, and other styles
34

Wisse, E., A. Geerts, and R. B. De Zanger. "Routine Digital Processing of Microscope Images." Proceedings, annual meeting, Electron Microscopy Society of America 48, no. 1 (August 12, 1990): 550–51. http://dx.doi.org/10.1017/s0424820100181506.

Full text
Abstract:
The slowscan and TV signal of the Philips SEM 505 and the signal of a TV camera attached to a Leitz fluorescent microscope, were digitized by the data acquisition processor of a Masscomp 5520S computer, which is based on a 16.7 MHz 68020 CPU with 10 Mb RAM memory, a graphics processor with two frame buffers for images with 8 bit / 256 grey values, a high definition (HD) monitor (910 × 1150), two hard disks (70 and 663 Mb) and a 60 Mb tape drive. The system is equipped with Imaging Technology video digitizing boards: analog I/O, an ALU, and two memory mapped frame buffers for TV images of the IP 512 series. The Masscomp computer has an ethernet connection to other computers, such as a Vax PDP 11/785, and a Sun 368i with a 327 Mb hard disk and a SCSI interface to an Exabyte 2.3 Gb helical scan tape drive. The operating system for these computers is based on different versions of Unix, such as RTU 4.1 (including NFS) on the acquisition computer, bsd 4.3 for the Vax, and Sun OS 4.0.1 for the Sun (with NFS).
APA, Harvard, Vancouver, ISO, and other styles
35

Gogolou, Vasiliki, Konstantinos Kozalakis, Eftichios Koutroulis, Gregory Doumenis, and Stylianos Siskos. "An Ultra-Low-Power CMOS Supercapacitor Storage Unit for Energy Harvesting Applications." Electronics 10, no. 17 (August 29, 2021): 2097. http://dx.doi.org/10.3390/electronics10172097.

Full text
Abstract:
This work presents an ultra-low-power CMOS supercapacitor storage unit suitable for a plethora of low-power autonomous applications. The proposed unit exploits the unregulated voltage output of harvesting circuits (i.e., DC-DC converters) and redirects the power to the storage elements and the working loads. Being able to adapt to the input energy conditions and the connected loads’ supply demands offers extended survival to the system with the self-startup operation and voltage regulation. A low-complexity control unit is implemented which is composed of power switches, comparators and logic gates and is able to supervise two supercapacitors, a small and a larger one, as well as a backup battery. Two separate power outputs are offered for external load connection which can be controlled by a separate unit (e.g., microcontroller). Furthermore, user-controlled parameters such as charging and discharging supercapacitor voltage thresholds, provide increased versatility to the system. The storage unit was designed and fabricated in a 0.18 um standard CMOS process and operates with ultra-low current consumption of 432 nA at 2.3 V. The experimental results validate the proper operation of the overall structure.
APA, Harvard, Vancouver, ISO, and other styles
36

Xing, Fei, Yi Ping Yao, Zhi Wen Jiang, and Bing Wang. "Fine-Grained Parallel and Distributed Spatial Stochastic Simulation of Biological Reactions." Advanced Materials Research 345 (September 2011): 104–12. http://dx.doi.org/10.4028/www.scientific.net/amr.345.104.

Full text
Abstract:
To date, discrete event stochastic simulations of large scale biological reaction systems are extremely compute-intensive and time-consuming. Besides, it has been widely accepted that spatial factor plays a critical role in the dynamics of most biological reaction systems. The NSM (the Next Sub-Volume Method), a spatial variation of the Gillespie’s stochastic simulation algorithm (SSA), has been proposed for spatially stochastic simulation of those systems. While being able to explore high degree of parallelism in systems, NSM is inherently sequential, which still suffers from the problem of low simulation speed. Fine-grained parallel execution is an elegant way to speed up sequential simulations. Thus, based on the discrete event simulation framework JAMES II, we design and implement a PDES (Parallel Discrete Event Simulation) TW (time warp) simulator to enable the fine-grained parallel execution of spatial stochastic simulations of biological reaction systems using the ANSM (the Abstract NSM), a parallel variation of the NSM. The simulation results of classical Lotka-Volterra biological reaction system show that our time warp simulator obtains remarkable parallel speed-up against sequential execution of the NSM.I.IntroductionThe goal of Systems biology is to obtain system-level investigations of the structure and behavior of biological reaction systems by integrating biology with system theory, mathematics and computer science [1][3], since the isolated knowledge of parts can not explain the dynamics of a whole system. As the complement of “wet-lab” experiments, stochastic simulation, being called the “dry-computational” experiment, plays a more and more important role in computing systems biology [2]. Among many methods explored in systems biology, discrete event stochastic simulation is of greatly importance [4][5][6], since a great number of researches have present that stochasticity or “noise” have a crucial effect on the dynamics of small population biological reaction systems [4][7]. Furthermore, recent research shows that the stochasticity is not only important in biological reaction systems with small population but also in some moderate/large population systems [7].To date, Gillespie’s SSA [8] is widely considered to be the most accurate way to capture the dynamics of biological reaction systems instead of traditional mathematical method [5][9]. However, SSA-based stochastic simulation is confronted with two main challenges: Firstly, this type of simulation is extremely time-consuming, since when the types of species and the number of reactions in the biological system are large, SSA requires a huge amount of steps to sample these reactions; Secondly, the assumption that the systems are spatially homogeneous or well-stirred is hardly met in most real biological systems and spatial factors play a key role in the behaviors of most real biological systems [19][20][21][22][23][24]. The next sub-volume method (NSM) [18], presents us an elegant way to access the special problem via domain partition. To our disappointment, sequential stochastic simulation with the NSM is still very time-consuming, and additionally introduced diffusion among neighbor sub-volumes makes things worse. Whereas, the NSM explores a very high degree of parallelism among sub-volumes, and parallelization has been widely accepted as the most meaningful way to tackle the performance bottleneck of sequential simulations [26][27]. 
Thus, adapting parallel discrete event simulation (PDES) techniques to discrete event stochastic simulation would be particularly promising. Although there are a few attempts have been conducted [29][30][31], research in this filed is still in its infancy and many issues are in need of further discussion. The next section of the paper presents the background and related work in this domain. In section III, we give the details of design and implementation of model interfaces of LP paradigm and the time warp simulator based on the discrete event simulation framework JAMES II; the benchmark model and experiment results are shown in Section IV; in the last section, we conclude the paper with some future work.II. Background and Related WorkA. Parallel Discrete Event Simulation (PDES)The notion Logical Process (LP) is introduced to PDES as the abstract of the physical process [26], where a system consisting of many physical processes is usually modeled by a set of LP. LP is regarded as the smallest unit that can be executed in PDES and each LP holds a sub-partition of the whole system’s state variables as its private ones. When a LP processes an event, it can only modify the state variables of its own. If one LP needs to modify one of its neighbors’ state variables, it has to schedule an event to the target neighbor. That is to say event message exchanging is the only way that LPs interact with each other. Because of the data dependences or interactions among LPs, synchronization protocols have to be introduced to PDES to guarantee the so-called local causality constraint (LCC) [26]. By now, there are a larger number of synchronization algorithms have been proposed, e.g. the null-message [26], the time warp (TW) [32], breath time warp (BTW) [33] and etc. According to whether can events of LPs be processed optimistically, they are generally divided into two types: conservative algorithms and optimistic algorithms. However, Dematté and Mazza have theoretically pointed out the disadvantages of pure conservative parallel simulation for biochemical reaction systems [31]. B. NSM and ANSM The NSM is a spatial variation of Gillespie’ SSA, which integrates the direct method (DM) [8] with the next reaction method (NRM) [25]. The NSM presents us a pretty good way to tackle the aspect of space in biological systems by partitioning a spatially inhomogeneous system into many much more smaller “homogeneous” ones, which can be simulated by SSA separately. However, the NSM is inherently combined with the sequential semantics, and all sub-volumes share one common data structure for events or messages. Thus, directly parallelization of the NSM may be confronted with the so-called boundary problem and high costs of synchronously accessing the common data structure [29]. In order to obtain higher efficiency of parallel simulation, parallelization of NSM has to firstly free the NSM from the sequential semantics and secondly partition the shared data structure into many “parallel” ones. One of these is the abstract next sub-volume method (ANSM) [30]. In the ANSM, each sub-volume is modeled by a logical process (LP) based on the LP paradigm of PDES, where each LP held its own event queue and state variables (see Fig. 1). In addition, the so-called retraction mechanism was introduced in the ANSM too (see algorithm 1). Besides, based on the ANSM, Wang etc. [30] have experimentally tested the performance of several PDES algorithms in the platform called YH-SUPE [27]. 
However, the YH-SUPE platform used by Wang et al. is designed for general simulation applications, and it therefore sacrifices some performance because it cannot take the characteristics of biological reaction systems into account. Using ideas similar to the ANSM, Dematté and Mazza designed and realized an optimistic simulator. However, they process events in a time-stepped manner, which loses a certain degree of precision compared with the discrete event manner, and it is very hard to transform a time-stepped simulation into a discrete event one. In addition, Jeschke et al. [29] designed and implemented a dynamic time-window simulator to execute the NSM in parallel in a grid computing environment; however, their main focus was the analysis of communication costs and the determination of a suitable time-window size.

Fig. 1: the variations from SSA to NSM and from NSM to ANSM

C. JAMES II
JAMES II is an open source discrete event simulation and experimentation framework developed at the University of Rostock in Germany. It focuses on high flexibility and scalability [11][13]. Based on a plug-in scheme [12], each function of JAMES II is defined as a specific plug-in type, and all plug-in types and plug-ins are declared in XML files [13]. Combined with the factory method pattern, JAMES II cleanly separates models from simulators, which makes it very flexible to add and reuse both models and simulators. In addition, JAMES II supports various modeling formalisms, e.g. cellular automata, the discrete event system specification (DEVS), SpacePi, stochastic Pi calculus, etc. [14]. Furthermore, a well-defined simulator selection mechanism is provided in JAMES II, which can not only automatically choose a proper simulator according to the modeling formalism but can also pick a specific simulator from a series of simulators supporting the same formalism according to user settings [15].

III. The Model Interface and Simulator
As mentioned in Section II (part C), model and simulator are split into two separate parts. In this section, we therefore introduce the design and implementation of the model interface of the LP paradigm and, more importantly, of the time warp simulator.

A. The Model Interface of the LP Paradigm
JAMES II provides abstract model interfaces for different modeling formalisms, based on which Wang et al. designed and implemented a model interface for the LP paradigm [16]. However, this interface does not scale well for parallel and distributed simulation of larger systems. In our implementation, we adapt the interface to the parallel and distributed setting. First, a neighbor LP's reference is replaced by its name in the LP's neighbor queue, because it is improper, and even dangerous, for a local LP to hold references to other LPs in remote memory spaces. In addition, (pseudo-)random numbers play a crucial role in obtaining valid and meaningful results in stochastic simulations, and finding a good random number generator (RNG) remains a challenging task [34]. Thus, in order to focus on our own problems, we reuse one of the uniform RNGs of JAMES II in this model interface, where each LP holds a private RNG so that the random number streams of different LPs are stochastically independent.
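A hedged illustration of the per-LP random number streams mentioned above; the seeding scheme and names are assumptions made for the sketch and are not the JAMES II RNG plug-in API.

import java.util.Random;
import java.util.SplittableRandom;

/** Illustrative only: every LP draws from its own generator so that the random
 *  number streams of different LPs are statistically independent of each other. */
final class LpRandomStreams {
    static Random[] createStreams(long baseSeed, int lpCount) {
        SplittableRandom seeder = new SplittableRandom(baseSeed);
        Random[] streams = new Random[lpCount];
        for (int i = 0; i < lpCount; i++) {
            streams[i] = new Random(seeder.nextLong());  // one private RNG per LP
        }
        return streams;
    }
}

Seeding each stream from a separate seeder keeps runs reproducible while avoiding contention on a shared generator between LPs.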
B. The Time Warp Simulator
Based on the simulator interface provided by JAMES II, we design and implement the time warp simulator, which consists of a (master-)simulator and (LP-)simulators. The simulator works strictly in a master/worker paradigm for fine-grained parallel and distributed stochastic simulations. Communication costs are crucial to the performance of such a fine-grained parallel and distributed simulation. Based on the Java remote method invocation (RMI) mechanism, P2P (peer-to-peer) communication is implemented among all (master- and LP-)simulators, where each simulator holds proxies of the target simulators running on remote workers. One advantage of this communication approach is that the PDES code can be moved to various hardware environments, such as clusters, grids and other distributed computing environments, with only little modification; another is that the RMI mechanism is easy to realize and independent of any non-Java libraries. Because of the straggler event problem, states have to be saved in order to roll back events that were processed optimistically. Each time it is modified, the state is cloned into a queue using the Java clone mechanism. The drawback of this copy-type state saving approach is that it consumes a large amount of memory; however, this can be compensated for by a well-designed GVT calculation mechanism. The GVT reduction scheme also has a significant impact on the performance of a parallel simulator, since the GVT marks the highest time boundary of events that can be committed, so that the memory of fossils (processed events and states) older than the GVT can be reclaimed. GVT calculation is tricky because of the notorious simultaneous reporting problem and the transient message problem. For our setting, another GVT algorithm, called Twice Notification (TN-GVT) (see Algorithm 2), is contributed to this already rich repository instead of implementing one of the GVT algorithms in references [26] and [28]. This algorithm resembles the synchronous algorithm described in reference [26] (p. 114); however, they are essentially different: our algorithm never stops the simulators from processing events during GVT reduction, while the algorithm in reference [26] blocks all simulators for the GVT calculation. As for the transient message problem, it can be neglected in our implementation, because the RMI-based remote communication is synchronous, meaning that a simulator does not continue its processing until the message has reached its destination. For the same reason, the costly message acknowledgements that are prevalent in many classical asynchronous GVT algorithms are no longer needed, which is beneficial to the overall performance of the time warp simulator.
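The copy-type state saving and rollback described above can be sketched as follows; this is a hedged illustration with hypothetical names, not the authors' implementation, and a complete simulator would additionally retain the newest snapshot below the GVT for later restoration.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;

/** Sketch of copy-type state saving: before an event is processed optimistically,
 *  the LP state is cloned and pushed with its timestamp, so a straggler event can
 *  roll the LP back to an earlier snapshot. */
final class StateLog {
    record Snapshot(double timestamp, HashMap<String, Long> state) {}

    private final Deque<Snapshot> snapshots = new ArrayDeque<>();

    @SuppressWarnings("unchecked")
    void save(double timestamp, HashMap<String, Long> currentState) {
        // shallow clone is enough here because the values are immutable boxed numbers
        snapshots.push(new Snapshot(timestamp, (HashMap<String, Long>) currentState.clone()));
    }

    /** Restore the newest snapshot strictly older than the straggler's timestamp. */
    HashMap<String, Long> rollbackTo(double stragglerTime) {
        while (!snapshots.isEmpty() && snapshots.peek().timestamp() >= stragglerTime) {
            snapshots.pop();                              // discard optimistic snapshots
        }
        return snapshots.isEmpty() ? null : snapshots.peek().state();
    }

    /** Fossil collection: snapshots older than GVT are reclaimed (a full implementation
     *  would keep the most recent pre-GVT snapshot). */
    void fossilCollect(double gvt) {
        snapshots.removeIf(s -> s.timestamp() < gvt);
    }
}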
IV. Benchmark Model and Experiment Results
A. The Lotka-Volterra Predator-Prey System
In our experiments, the spatial version of the Lotka-Volterra predator-prey system is used as the benchmark model (see Fig. 2). We choose this system for two reasons: 1) it is a classical experimental model that has been used in many related studies [8][30][31], so it is credible and the simulation results are comparable; 2) it is simple but rich enough to test the issues we are interested in. The space of the predator-prey system is partitioned into a 2D N×N grid, where N denotes the edge size of the grid. Initially, the populations of Grass, Prey and Predator are set to 1000 in each single sub-volume (LP). In Fig. 2, r1, r2 and r3 stand for the reaction constants of reactions 1, 2 and 3, respectively. We use dGrass, dPrey and dPredator to denote the diffusion rates of Grass, Prey and Predator, respectively. Similar to reference [8], we assume that the population of the grass remains stable, and thus dGrass is set to zero.

R1: Grass + Prey -> 2 Prey (1)
R2: Predator + Prey -> 2 Predator (2)
R3: Predator -> NULL (3)
r1 = 0.01; r2 = 0.01; r3 = 10 (4)
dGrass = 0.0; dPrey = 2.5; dPredator = 5.0 (5)

Fig. 2: predator-prey system

B. Experiment Results
The simulation runs were executed on a Linux cluster with 40 computing nodes. Each computing node is equipped with two 64-bit 2.53 GHz Intel Xeon quad-core processors and 24 GB RAM, and the nodes are interconnected by Gigabit Ethernet. The operating system is Kylin Server 3.5, with kernel 2.6.18. Experiments were conducted on benchmark models of different sizes to investigate the execution time and speedup of the time warp simulator. As shown in Fig. 3, the execution times of simulations on a single processor with 8 cores are compared. The results show that it takes more wall clock time to simulate larger systems for the same simulated time span, which confirms that larger systems lead to more events in the same time interval. More importantly, the blue line shows that the sequential simulation performance declines very fast when the model scale becomes large. The bottleneck of the sequential simulator is the cost of accessing a long event queue to choose the next event. Besides, from the comparison between group 1 and group 2 in this experiment, we can also conclude that a high diffusion rate greatly increases the simulation time in both the sequential and the parallel simulations. This is because the LP paradigm has to split diffusion into two events (a diffusion-out and a diffusion-in event) for the two LPs involved, and a high diffusion rate leads to a high proportion of diffusion events relative to reaction events. In the second step, shown in Fig. 4, the relationship between the speedup of time warp for two different model sizes and the number of worker cores involved is demonstrated. The speedup is calculated against the sequential execution of the spatial reaction-diffusion model with the same model size and parameters using the NSM. Fig. 4 compares the speedup of time warp on a 64×64 grid and on a 100×100 grid. In the case of the 64×64 grid, when only one node is used, the lowest speedup (slightly above 1) is achieved with two cores, and the highest speedup (about 6) is achieved with 8 cores. The influence of the number of cores used in the parallel simulation was investigated: in most cases, a larger number of cores brings considerable improvements in the performance of the parallel simulation. Also, comparing the two results in Fig. 4, the simulation of the larger model achieves a better speedup. Combined with the timing tests (Fig. 3), we find that the sequential simulator's performance declines sharply when the model scale becomes very large, which makes the time warp simulator achieve a correspondingly better speed-up.

Fig. 3: Execution time (wall clock time) of sequential and time warp simulation with respect to different model sizes (N = 32, 64, 100 and 128) and model parameters, based on a single computing node with 8 cores. Results are grouped by diffusion rates (Group 1: Sequential 1 and Time Warp 1, dPrey = 2.5, dPredator = 5.0; Group 2: Sequential 2 and Time Warp 2, dPrey = 0.25, dPredator = 0.5).

Fig. 4: Speedup of time warp with respect to the number of worker cores and the model size (N = 64 and 100). Worker cores are chosen from one computing node. Diffusion rates are dPrey = 2.5, dPredator = 5.0 and dGrass = 0.0.
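To make the benchmark concrete, the per-sub-volume propensities for R1-R3 under the usual mass-action convention can be written as below; the text does not spell out its propensity functions, so this is a hedged sketch with illustrative names that composes directly with the SSA step sketched earlier.

final class PredatorPreyPropensities {
    /** Mass-action propensities for R1-R3 in one sub-volume (illustrative assumption). */
    static double[] of(double r1, double r2, double r3, long grass, long prey, long predator) {
        double a1 = r1 * grass * prey;       // R1: Grass + Prey -> 2 Prey
        double a2 = r2 * predator * prey;    // R2: Predator + Prey -> 2 Predator
        double a3 = r3 * predator;           // R3: Predator -> NULL
        return new double[] { a1, a2, a3 };
    }
}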
V. Conclusion and Future Work
In this paper, a time warp simulator based on the discrete event simulation framework JAMES II is designed and implemented for fine-grained parallel and distributed discrete event spatial stochastic simulation of biological reaction systems. Several challenges have been overcome, such as state saving, rollback and, especially, GVT reduction in the parallel execution of simulations. The Lotka-Volterra predator-prey system is chosen as the benchmark model to test the performance of our time warp simulator, and the best experimental results show that it obtains a speed-up of about 6 over the sequential simulation. The domain this paper is concerned with is still in its infancy, and many interesting issues deserve further investigation; for example, there are many other excellent optimistic PDES synchronization algorithms (e.g. the BTW), and as a next step we would like to integrate some of them into JAMES II. In addition, Gillespie approximation methods (tau-leaping [10], etc.) sacrifice some degree of precision for higher simulation speed, but they still do not address the spatial aspect of biological reaction systems. The combination of spatial modeling and approximation methods would be very interesting and promising; however, the parallel execution of tau-leap methods will have to overcome many obstacles on the road ahead.

Acknowledgment
This work is supported by the National Natural Science Foundation of China (NSF) Grant (No. 60773019) and the Ph.D. Programs Foundation of the Ministry of Education of China (No. 200899980004). The authors would like to express their great gratitude to Dr. Jan Himmelspach and Dr. Roland Ewald at the University of Rostock, Germany, for their invaluable advice and kind help with JAMES II.

References
H. Kitano, "Computational systems biology." Nature, vol. 420, no. 6912, pp. 206-210, November 2002.H. Kitano, "Systems biology: a brief overview." Science (New York, N.Y.), vol. 295, no. 5560, pp. 1662-1664, March 2002.A. Aderem, "Systems biology: Its practice and challenges," Cell, vol. 121, no. 4, pp. 511-513, May 2005. [Online]. Available: http://dx.doi.org/10.1016/j.cell.2005.04.020.H. de Jong, "Modeling and simulation of genetic regulatory systems: A literature review," Journal of Computational Biology, vol. 9, no. 1, pp. 67-103, January 2002.C. W. Gardiner, Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences (Springer Series in Synergetics), 3rd ed. Springer, April 2004.D. T. Gillespie, "Simulation methods in systems biology," in Formal Methods for Computational Systems Biology, ser. Lecture Notes in Computer Science, M. Bernardo, P. Degano, and G. Zavattaro, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, vol. 5016, ch. 5, pp. 125-167.Y. Tao, Y. Jia, and G. T. Dewey, "Stochastic fluctuations in gene expression far from equilibrium: Omega expansion and linear noise approximation," The Journal of Chemical Physics, vol. 122, no. 12, 2005.D. T. Gillespie, "Exact stochastic simulation of coupled chemical reactions," Journal of Physical Chemistry, vol. 81, no. 25, pp. 2340-2361, December 1977.D. T. Gillespie, "Stochastic simulation of chemical kinetics," Annual Review of Physical Chemistry, vol. 58, no. 1, pp. 35-55, 2007.D. T. Gillespie, "Approximate accelerated stochastic simulation of chemically reacting systems," The Journal of Chemical Physics, vol. 115, no. 4, pp. 1716-1733, 2001.J. Himmelspach, R. Ewald, and A. M.
Uhrmacher, "A flexible and scalable experimentation layer," in WSC '08: Proceedings of the 40th Conference on Winter Simulation. Winter Simulation Conference, 2008, pp. 827-835.J. Himmelspach and A. M. Uhrmacher, "Plug'n simulate," in 40th Annual Simulation Symposium (ANSS'07). Washington, DC, USA: IEEE, March 2007, pp. 137-143.R. Ewald, J. Himmelspach, M. Jeschke, S. Leye, and A. M. Uhrmacher, "Flexible experimentation in the modeling and simulation framework james ii-implications for computational systems biology," Brief Bioinform, vol. 11, no. 3, pp. bbp067-300, January 2010.A. Uhrmacher, J. Himmelspach, M. Jeschke, M. John, S. Leye, C. Maus, M. Röhl, and R. Ewald, "One modelling formalism & simulator is not enough! a perspective for computational biology based on james ii," in Formal Methods in Systems Biology, ser. Lecture Notes in Computer Science, J. Fisher, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, vol. 5054, ch. 9, pp. 123-138. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-68413-8_9.R. Ewald, J. Himmelspach, and A. M. Uhrmacher, "An algorithm selection approach for simulation systems," pads, vol. 0, pp. 91-98, 2008.Bing Wang, Jan Himmelspach, Roland Ewald, Yiping Yao, and Adelinde M Uhrmacher. Experimental analysis of logical process simulation algorithms in james ii[C]// In M. D. Rossetti, R. R. Hill, B. Johansson, A. Dunkin, and R. G. Ingalls, editors, Proceedings of the Winter Simulation Conference, IEEE Computer Science, 2009. 1167-1179.Ewald, J. Rössel, J. Himmelspach, and A. M. Uhrmacher, "A plug-in-based architecture for random number generation in simulation systems," in WSC '08: Proceedings of the 40th Conference on Winter Simulation. Winter Simulation Conference, 2008, pp. 836-844.J. Elf and M. Ehrenberg, "Spontaneous separation of bi-stable biochemical systems into spatial domains of opposite phases." Systems biology, vol. 1, no. 2, pp. 230-236, December 2004.K. Takahashi, S. Arjunan, and M. Tomita, "Space in systems biology of signaling pathways? Towards intracellular molecular crowding in silico," FEBS Letters, vol. 579, no. 8, pp. 1783-1788, March 2005.J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (pts) pathway in escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.D. Ridgway, G. Broderick, and M. Ellison, "Accommodating space, time and randomness in network simulation," Current Opinion in Biotechnology, vol. 17, no. 5, pp. 493-498, October 2006.J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (pts) pathway in escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.W. G. Wilson, A. M. Deroos, and E. Mccauley, "Spatial instabilities within the diffusive lotka-volterra system: Individual-based simulation results," Theoretical Population Biology, vol. 43, no. 1, pp. 91-127, February 1993.K. Kruse and J. Elf. Kinetics in spatially extended systems. In Z. Szallasi, J. Stelling, and V. Periwal, editors, System Modeling in Cellular Biology. From Concepts to Nuts and Bolts, pages 177–198. MIT Press, Cambridge, MA, 2006.M. A. Gibson and J. Bruck, "Efficient exact stochastic simulation of chemical systems with many species and many channels," The Journal of Physical Chemistry A, vol. 104, no. 9, pp. 1876-1889, March 2000.R. M. 
Fujimoto, Parallel and Distributed Simulation Systems (Wiley Series on Parallel and Distributed Computing). Wiley-Interscience, January 2000.Y. Yao and Y. Zhang, “Solution for analytic simulation based on parallel processing,” Journal of System Simulation, vol. 20, No.24, pp. 6617–6621, 2008.G. Chen and B. K. Szymanski, "Dsim: scaling time warp to 1,033 processors," in WSC '05: Proceedings of the 37th conference on Winter simulation. Winter Simulation Conference, 2005, pp. 346-355.M. Jeschke, A. Park, R. Ewald, R. Fujimoto, and A. M. Uhrmacher, "Parallel and distributed spatial simulation of chemical reactions," in 2008 22nd Workshop on Principles of Advanced and Distributed Simulation. Washington, DC, USA: IEEE, June 2008, pp. 51-59.B. Wang, Y. Yao, Y. Zhao, B. Hou, and S. Peng, "Experimental analysis of optimistic synchronization algorithms for parallel simulation of reaction-diffusion systems," High Performance Computational Systems Biology, International Workshop on, vol. 0, pp. 91-100, October 2009.L. Dematté and T. Mazza, "On parallel stochastic simulation of diffusive systems," in Computational Methods in Systems Biology, M. Heiner and A. M. Uhrmacher, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, vol. 5307, ch. 16, pp. 191-210.D. R. Jefferson, "Virtual time," ACM Trans. Program. Lang. Syst., vol. 7, no. 3, pp. 404-425, July 1985.J. S. Steinman, "Breathing time warp," SIGSIM Simul. Dig., vol. 23, no. 1, pp. 109-118, July 1993. [Online]. Available: http://dx.doi.org/10.1145/174134.158473 S. K. Park and K. W. Miller, "Random number generators: good ones are hard to find," Commun. ACM, vol. 31, no. 10, pp. 1192-1201, October 1988.
APA, Harvard, Vancouver, ISO, and other styles
37

Chandani, Ashok. "Writing Styles of Abstracts in Occupational Therapy Journals." British Journal of Occupational Therapy 48, no. 8 (August 1985): 244–46. http://dx.doi.org/10.1177/030802268504800807.

Full text
Abstract:
The readability grades of abstracts randomly selected from the American Journal of Occupational Therapy, the Australian Occupational Therapy Journal, the British Journal of Occupational Therapy, and the Canadian Journal of Occupational Therapy were studied using the Style program of the Unix computer operating system. The readability formulae used were the Kincaid formula, the automated readability index, the Coleman-Liau formula, and the Flesch formula. One-way analysis of variance showed a significant difference (p<0.05) between the British and Australian journals for all four formulae. Based on the samples of abstracts, the results indicated that the British journal is the easiest and the Australian journal the most difficult to read of the four journals. A Pearson correlation matrix revealed significant positive and negative relationships between some of the 12 variables in each journal.
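For orientation, the Kincaid grade computed by the UNIX style program is commonly stated in the following form; the abstract does not quote the exact constants used, so this is the standard published version rather than the authors' own expression:

\[
\text{Kincaid grade} \approx 0.39\left(\frac{\text{words}}{\text{sentences}}\right) + 11.8\left(\frac{\text{syllables}}{\text{words}}\right) - 15.59
\]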
APA, Harvard, Vancouver, ISO, and other styles
38

Glaessl, A., R. Schiffner, T. Walther, M. Landthaler, and W. Stolz. "Teledermatology - the requirements of dermatologists in private practice." Journal of Telemedicine and Telecare 6, no. 3 (June 1, 2000): 138–41. http://dx.doi.org/10.1258/1357633001935211.

Full text
Abstract:
Eighty-four dermatologists in private practice in Bavaria were surveyed by postal questionnaire. Of the 45 who responded (a 54% response rate), 96% used a computer in their private practice. Fifty-seven per cent of respondents owned systems with Pentium processors, while 23% were still using 386 or 486 processors. Most of them used the Windows 95, UNIX or Apple operating system. Of the respondents who had a modem, 74% used ISDN; few modems were connected to the ordinary telephone network. Of all respondents, 56% used email regularly. Several possible teledermatology applications were proposed in the survey (i.e. teleconsultation, online and offline videoconferencing, email attachments). Fifty-six per cent of respondents said that they would perform teleconsultations with dermatology clinics, 40% preferred a teleconsultation via telephone and computer, and 42% preferred sending files via email. The survey demonstrated that a high proportion of dermatologists in private practice would use a teledermatology service.
APA, Harvard, Vancouver, ISO, and other styles
39

Rehman, Atiq, Chunyi Guo, and Chengyong Zhao. "Coordinated Control for Operating Characteristics Improvement of UHVDC Transmission Systems under Hierarchical Connection Scheme with STATCOM." Energies 12, no. 5 (March 12, 2019): 945. http://dx.doi.org/10.3390/en12050945.

Full text
Abstract:
Ultra-high voltage direct current (UHVDC) systems under hierarchical connection schemes (HCSs), linked to AC grids at different voltage levels (500 and 1000 kV), have attracted great interest from power utilities for bulk power transfer. They have operating issues such as cascaded commutation failures and long fault recovery times under certain fault conditions. Since a STATCOM can effectively regulate AC busbar voltages, it is considered in this paper to improve the operating characteristics of UHVDC-HCS systems. To further improve these characteristics, a coordinated control between the UHVDC-HCS system and the STATCOM is presented. To validate the effectiveness of the coordinated control, different control modes, such as reactive power control (Q-control) and voltage control (V-control), in the outer control loop of the STATCOM are compared in detail. Indices such as the commutation failure immunity index (CFII) and the commutation failure probability index (CFPI) are also comprehensively evaluated in order to investigate the robustness of the adopted coordinated control. A UHVDC-HCS system with multiple STATCOMs on the inverter side (500 kV bus) is developed in PSCAD/EMTDC. The impact of the coordinated control on commutation failure phenomena and fault recovery time during single- and three-phase AC faults is analyzed. The analysis shows that coordinated control with the V-control mode of the STATCOM performs better in enhancing the operating characteristics of the UHVDC-HCS system by improving the CFII and effectively reducing the CFPI and the fault recovery time under various AC faults.
APA, Harvard, Vancouver, ISO, and other styles
40

Listewnik, Paulina, and Adam Mazikowski. "Automatic system for optical parameters measurements of biological tissues." Photonics Letters of Poland 10, no. 3 (October 1, 2018): 91. http://dx.doi.org/10.4302/plp.v10i3.846.

Full text
Abstract:
In this paper a system allowing the execution of automatic measurements of the optical parameters of scattering materials in an efficient and accurate manner is proposed and described. The system is designed especially for measurements of biological tissues, including phantoms, which closely imitate the optical characteristics of real tissue. The system has a modular construction and is based on an ISEL system, a luminance and color meter, and a computer with dedicated software and a user interface developed for this purpose. Measurements of scattering distribution characteristics for selected materials revealed good accuracy, confirmed by comparative measurements against well-known reference characteristics. Full Text: PDF ReferencesWróbel, M. S., Popov, A. P., Bykov, A. V., Kinnunen, M., Jedrzejewska-Szczerska, M., & Tuchin, V. V. (2015). Measurements of fundamental properties of homogeneous tissue phantoms. Journal of Biomedical Optics CrossRef Wróbel, M. S., Jedrzejewska-Szczerska, M., Galla, S., Piechowski, L., Sawczak, M., Popov, A. P., Cenian, A. (2015). Use of optical skin phantoms for preclinical evaluation of laser efficiency for skin lesion therapy. Journal of Biomedical Optics. CrossRef Jędrzejewska-Szczerska, M., Wróbel, M. S., Galla, S., Popov, A. P., Bykov, A. V., Tuchin, V. V., & Cenian, A. (2015). Investigation of photothermolysis therapy of human skin diseases using optical phantoms. In Proceedings of SPIE - The International Society for Optical Engineering. CrossRef Brown A. M., et al.: Optical material characterization through BSDF measurement and analysis, Proc. of SPIE, Vol. 7792, 2010 CrossRef 4-Axis Controller: iMC-S8. Operating Instruction. ISEL Germany AG, 2012. DirectLink Konica Minolta, Inc. (2005-2013). Chroma meter CS-200. Datasheet. DirectLink Malacara D.: Color Vision and Colorimetry; Theory and Applications, SPIE Press, 2002. DirectLink A. Mazikowski, M. Trojanowski: Measurements of Spectral Spatial Distribution of Scattering Materials for Rear Projection Screens used in Virtual Reality Systems, Metrology and Measurement Systems, 20 (3), pp. 443 - 452, 2013 CrossRef
APA, Harvard, Vancouver, ISO, and other styles
41

Mohammed, Noor Saleh, and Nasir Hussein Selman. "Real-time monitoring of the prototype design of electric system by the ubidots platform." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (December 1, 2021): 5568. http://dx.doi.org/10.11591/ijece.v11i6.pp5568-5577.

Full text
Abstract:
In this paper, a prototype DC electric system was practically designed. The idea of the proposed system was derived from the microgrid concept. The system contains two houses, each with a DC generator and a load that consists of four 12 V DC lamps. Each house is fully controlled by an Arduino UNO microcontroller to work in island mode or to be connected to the second house or to the main electric network. The house operating mode depends on the power generated by its source and on the availability of the main network. Under all operating cases, the minimum price of electricity consumption should be achieved as far as possible. Information about the operating mode and the state of the main network is exchanged wirelessly between the houses with the help of the RF-HC12 module. This information is uploaded to the Ubidots platform by the Wi-Fi ESP8266 included in the NodeMCU microcontroller. This platform has several advantages, such as the capture, visualization, analysis, and management of data. The system was examined for different cases, by varying the load in each building, to verify its operation. All tested states showed that the houses transfer from one mode to another automatically with high reliability and minimum energy cost. The information about the main grid state and the sources of the houses was monitored and stored on the Ubidots platform.
APA, Harvard, Vancouver, ISO, and other styles
42

Tronchin, Lamberto, Kristian Fabbri, and Chiara Bertolli. "Controlled Mechanical Ventilation in Buildings: A Comparison between Energy Use and Primary Energy among Twenty Different Devices." Energies 11, no. 8 (August 14, 2018): 2123. http://dx.doi.org/10.3390/en11082123.

Full text
Abstract:
Indoor air quality (IAQ) of buildings is a problem that affects both comfort for occupants and the energy consumption of the structure. Controlled mechanical ventilation systems (CMVs) make it possible to control the air exchange rate. When using CMV systems, it is interesting to investigate the relationship between the useful thermal energy requirements for ventilation and the energy consumption of these systems. This paper addresses whether there is a correlation between these two parameters. The methodology used in this work involves the application of equations of technical Italian regulations UNI/TS 11300 applied to a case study. The case study is represented by a 54 m3 room, which is assumed to have three CMV systems installed (extraction, insertion, insertion and extraction) for twenty different devices available on the market. Afterwards, simulations of useful thermal energy requirements QH,ve and primary energy EP,V were performed according to the electrical power of each fan W and the ventilation flow. The results show that the two values are not linearly correlated: it is not possible to clearly associate the operating cost for CMV systems according to building requirements. The study also shows that CMV systems are particularly efficient for high-performance buildings, where there is no leakage that can be ascribed to windows infiltrations.
APA, Harvard, Vancouver, ISO, and other styles
43

Dudek, Magdalena, Andrzej Raźniak, Maciej Rosół, Tomasz Siwek, and Piotr Dudek. "Design, Development, and Performance of a 10 kW Polymer Exchange Membrane Fuel Cell Stack as Part of a Hybrid Power Source Designed to Supply a Motor Glider." Energies 13, no. 17 (August 26, 2020): 4393. http://dx.doi.org/10.3390/en13174393.

Full text
Abstract:
A 10 kW PEMFC (polymer exchange membrane fuel cell) stack consisting of two 5 kW modules, (A) and (B), connected in series with a multi-function controller unit was constructed and tested. The electrical performance of the V-shaped PEMFC stack was investigated under constant and variable electrical load. It was found that the PEMFC stack was capable of supplying the required 10 kW of electrical power. An optimised purification process via ‘purge’ or humidification, implemented by means of a short-circuit unit (SCU) control strategy, enabled slightly improved performance. Online monitoring of the utilisation of the hydrogen system was developed and tested during the operation of the stack, especially under variable electrical load. The air-cooling subsystem consisting of a common channel connecting two 5 kW PEMFC modules and two cascade axial fans was designed, manufactured using 3D printing technology, and tested with respect to the electrical performance of the device. The dependence of total partial-pressure drop vs. ratio of air volumetric flow for the integrated PEMFC stack with cooling devices was also determined. An algorithm of stack operation involving thermal, humidity, and energy management was elaborated. The safety operation and fault diagnosis of the PEMFC stack was also tested.
APA, Harvard, Vancouver, ISO, and other styles
44

O'Neill, M. A., and C. C. Hilgetag. "The portable UNIX programming system (PUPS) and CANTOR: a computational environment for dynamical representation and analysis of complex neurobiological data." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 356, no. 1412 (August 29, 2001): 1259–76. http://dx.doi.org/10.1098/rstb.2001.0912.

Full text
Abstract:
Many problems in analytical biology, such as the classification of organisms, the modelling of macromolecules, or the structural analysis of metabolic or neural networks, involve complex relational data. Here, we describe a software environment, the portable UNIX programming system (PUPS), which has been developed to allow efficient computational representation and analysis of such data. The system can also be used as a general development tool for database and classification applications. As the complexity of analytical biology problems may lead to computation times of several days or weeks even on powerful computer hardware, the PUPS environment gives support for persistent computations by providing mechanisms for dynamic interaction and homeostatic protection of processes. Biological objects and their interrelations are also represented in a homeostatic way in PUPS. Object relationships are maintained and updated by the objects themselves, thus providing a flexible, scalable and current data representation. Based on the PUPS environment, we have developed an optimization package, CANTOR, which can be applied to a wide range of relational data and which has been employed in different analyses of neuroanatomical connectivity. The CANTOR package makes use of the PUPS system features by modifying candidate arrangements of objects within the system's database. This restructuring is carried out via optimization algorithms that are based on user–defined cost functions, thus providing flexible and powerful tools for the structural analysis of the database content. The use of stochastic optimization also enables the CANTOR system to deal effectively with incomplete and inconsistent data. Prototypical forms of PUPS and CANTOR have been coded and used successfully in the analysis of anatomical and functional mammalian brain connectivity, involving complex and inconsistent experimental data. In addition, PUPS has been used for solving multivariate engineering optimization problems and to implement the digital identification system (DAISY), a system for the automated classification of biological objects. PUPS is implemented in ANSI–C under the POSIX.1 standard and is to a great extent architecture– and operating–system independent. The software is supported by systems libraries that allow multi–threading (the concurrent processing of several database operations), as well as the distribution of the dynamic data objects and library operations over clusters of computers. These attributes make the system easily scalable, and in principle allow the representation and analysis of arbitrarily large sets of relational data. PUPS and CANTOR are freely distributed (http://www.pups.org.uk) as open–source software under the GNU license agreement.
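The abstract above describes CANTOR's cost-function-driven stochastic restructuring of candidate arrangements but does not name a specific algorithm; the following Java sketch uses simulated annealing purely to illustrate that pattern, and every name in it is hypothetical rather than part of the PUPS/CANTOR API.

import java.util.Random;
import java.util.function.ToDoubleFunction;
import java.util.function.UnaryOperator;

/** Illustrative sketch: stochastic optimization of an arrangement A driven by a
 *  user-defined cost function, here realized as simple simulated annealing. */
final class StochasticRestructurer<A> {
    A optimize(A initial, ToDoubleFunction<A> cost, UnaryOperator<A> perturb,
               double startTemp, double cooling, int steps, Random rng) {
        A current = initial;
        double currentCost = cost.applyAsDouble(current);
        double temperature = startTemp;
        for (int i = 0; i < steps; i++) {
            A candidate = perturb.apply(current);              // propose a modified arrangement
            double candidateCost = cost.applyAsDouble(candidate);
            double delta = candidateCost - currentCost;
            // accept improvements always, worse arrangements with Boltzmann probability
            if (delta <= 0 || rng.nextDouble() < Math.exp(-delta / temperature)) {
                current = candidate;
                currentCost = candidateCost;
            }
            temperature *= cooling;
        }
        return current;
    }
}

A user supplies only the cost function and a perturbation operator over arrangements, which matches the abstract's description of optimization driven by user-defined cost functions over inconsistent or incomplete relational data.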
APA, Harvard, Vancouver, ISO, and other styles
45

Huang, Chien-Chun, Sheng-Li Yao, and Huang-Jen Chiu. "Stability Analysis and Optimal Design for Virtual Impedance of 48 V Server Power System for Data Center Applications." Energies 13, no. 20 (October 10, 2020): 5253. http://dx.doi.org/10.3390/en13205253.

Full text
Abstract:
In the past literature on applying virtual impedance to series systems, most of the discussion focused on stability, without in-depth research on the system design of the series converter and the overall output impedance. Accordingly, this study takes an open-loop resonant LLC converter connected in series with a closed-loop buck converter as an example. First, the conditions required for directly connecting the small-signal models in series, the effect of feedback compensation on the input impedance of the load stage, and the relationship between the operating frequency and passive-component matching of the two-stage converter and the output impedance are discussed in detail. Afterwards, a mathematical model is used to analyze the effect of adding a parallel virtual impedance on the output impedance of the overall series converter, and an optimized virtual impedance design is then derived. Finally, an experimental 48 V to 12 V platform with a maximum power of 96 W is implemented. The output impedance of the series converter is measured with an impedance analyzer to verify the theoretical analysis proposed in this paper.
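As general background for the abstract above (this is not the paper's own derivation), placing a virtual impedance Z_v(s) in parallel with a converter output impedance Z_o(s) gives the standard parallel combination, whose magnitude is lower than either term alone, which is why a suitably shaped virtual impedance can reduce the effective output impedance seen by the load:

\[
Z_o'(s) = \frac{Z_o(s)\,Z_v(s)}{Z_o(s) + Z_v(s)}
\]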
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Jianjun, and Liwei Liu. "On an M/G/1 queue in random environment with Min(N, V) policy." RAIRO - Operations Research 52, no. 1 (January 2018): 61–77. http://dx.doi.org/10.1051/ro/2018006.

Full text
Abstract:
In this paper, we analyze an M∕G∕1 queue operating in multi-phase random environment with Min(N, V) vacation policy. In operative phase i, i = 1, 2, …, n, customers are served according to the discipline of First Come First Served (FCFS). When the system becomes empty, the server takes a vacation under the Min(N, V) policy, causing the system to move to vacation phase 0. At the end of a vacation, if the server finds no customer waiting, another vacation begins. Otherwise, the system jumps from the phase 0 to some operative phase i with probability qi, i = 1, 2, …, n. And whenever the number of the waiting customers in the system reaches N, the server interrupts its vacation immediately and the system jumps from the phase 0 to some operative phase i with probability qi, i = 1, 2, …, n, too. Using the method of supplementary variable, we derive the distribution for the stationary system size at arbitrary epoch. We also obtain mean system size, the results of the cycle analysis and the sojourn time distribution. In addition, some special cases and numerical examples are presented.
APA, Harvard, Vancouver, ISO, and other styles
47

Yang, Zebo, and Tatsuo Nakajima. "Connecting Smart Objects in IoT Architectures by Screen Remote Monitoring and Control." Computers 7, no. 4 (September 24, 2018): 47. http://dx.doi.org/10.3390/computers7040047.

Full text
Abstract:
Electronic visual displays enabled by touchscreen technologies have evolved into one of the universal multimedia output methods and a popular input medium through touch interaction. As a result, we can always gain access to an intelligent machine by obtaining control of its display contents. Since remote screen sharing systems are also increasingly prevalent, we propose a cross-platform middleware infrastructure that supports remote monitoring and control functionalities, based on remote streaming, for networked intelligent devices such as smartphones, computers and smart watches, and for home appliances such as smart refrigerators, smart air-conditioners and smart TVs. We aim to connect all these devices through their display screens, so that a given device can be remotely monitored and controlled from whichever display screen in the network is most convenient (usually the nearest one). The system is a distributed network consisting of multiple modular server and client nodes, and is compatible with prevalent operating systems such as Windows, macOS, Unix-like/Linux and Android.
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Xiaodong, Ru Li, Wenhan Hou, and Hui Zhao. "V-Lattice: A Lightweight Blockchain Architecture Based on DAG-Lattice Structure for Vehicular Ad Hoc Networks." Security and Communication Networks 2021 (May 30, 2021): 1–17. http://dx.doi.org/10.1155/2021/9942632.

Full text
Abstract:
With the development of wireless communication technology and the automobile industry, Vehicular Ad Hoc Networks (VANETs) bring many conveniences in terms of safety and entertainment. In the communication between nodes, security problems are the main concern. Blockchain is a decentralized, distributed technology used in non-secure environments, and using blockchain technology in VANETs can solve these security problems. However, the highly dynamic and resource-constrained nature of VANETs makes traditional chain-structured blockchain systems unsuitable for actual VANET scenarios. Therefore, this paper proposes a lightweight blockchain architecture using a DAG-lattice structure for VANETs, called V-Lattice. In V-Lattice, each node (vehicle or roadside unit) has its own account chain. The transactions they generate can be added to the blockchain asynchronously and in parallel, and resource-constrained vehicles can store a pruned blockchain and still execute blockchain-related operations normally. At the same time, in order to encourage more nodes to participate in the blockchain, a reputation-based incentive mechanism is introduced in V-Lattice. This paper uses Colored Petri Nets to verify the security of the architecture and verifies the feasibility of PoW anti-spam through experiment. The validation results show that the proposed architecture is secure and that it is feasible to prevent nodes from generating malicious behaviors by using PoW anti-spam.
APA, Harvard, Vancouver, ISO, and other styles
49

Ting, Tan Chee, Zulhani Rasin, and Chan Sia Ching. "Design and simulation of cascaded H-bridge multilevel inverter with energy storage." Indonesian Journal of Electrical Engineering and Computer Science 23, no. 3 (September 1, 2021): 1289. http://dx.doi.org/10.11591/ijeecs.v23.i3.pp1289-1298.

Full text
Abstract:
Stand-alone power systems provide a solution for users in rural areas who are disconnected from the utility grid, and they require power electronic devices for power conversion. This work proposes the design of a 5-level cascaded H-bridge inverter with energy storage to realize DC-AC power conversion for such a system. A bidirectional DC-DC converter is designed to control the charging and discharging of the battery during the buck and boost modes of operation. On the DC side, a dual-loop control strategy using PI controllers is designed to control the current and voltage: the inner current loop controls the battery charging/discharging current, while the outer voltage loop regulates the DC link voltage at 200 V for each H-bridge unit. On the AC side, a multiple-feedback-loop control strategy regulates the inverter output voltage at 240 Vrms under various load changes. The modelling and design of the system are implemented in the Matlab Simulink environment. The results show that the battery storage unit works well with the DC link voltage control to achieve balanced power transfer within the system between the PV source, the load and the battery storage under variations of PV power and loading conditions.
APA, Harvard, Vancouver, ISO, and other styles
50

Rowland, Larry, Evelyn Williams, and Hewlett-Packard Williams. "A Computer Aided Method for Assessing Accessibility of Information in Technical Documentation." Proceedings of the Human Factors Society Annual Meeting 33, no. 5 (October 1989): 394–98. http://dx.doi.org/10.1177/154193128903300535.

Full text
Abstract:
A methodology and a computer-based program for testing documentation organization and location aids (tab-dividers, indices, tables of contents and headings) were developed and used to aid the design and evaluation of documentation. The methodology and program allow computer analogs of documents (based on detailed outlines) to be tested before the documents are actually produced. The documentation testing program presents the test subject with a series of goal-oriented user tasks. The subject then selects from a set of books and uses the existing location aids, or paging, to locate the heading that contains the information required to accomplish the task. The program automatically records the use of the table of contents, tab-dividers and index, as well as the heading under which the subject believes the information will be found. The subject is allowed to make changes and additions to the tables of contents, the index, and the main body headings as the test progresses. The program runs in two modes. One mode provides feedback to the test subject on whether the final location is correct and tests how rapidly information can be found. The other mode provides no feedback on the correctness of the locations and is used for developing models of the documentation based on user search paths and the information content assumed to lie under headings. The program has been used to evaluate documentation for a large computer operating system (HP-UX, a variant of UNIX) and the results show promise.
APA, Harvard, Vancouver, ISO, and other styles
