Dissertations / Theses on the topic 'Electric utilities - Data processing'

Consult the top 50 dissertations / theses for your research on the topic 'Electric utilities - Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Poon, Shuk-yan. "A decentralized multi-agent system for restructured power system operation /." Hong Kong : University of Hong Kong, 1997. http://sunzi.lib.hku.hk/hkuto/record.jsp?B19616211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

潘淑欣 and Shuk-yan Poon. "A decentralized multi-agent system for restructured power system operation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31219810.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Deivakkannu, Ganesan. "Data acquisition and data transfer methods for real-time power system optimisation problems solution." Thesis, Cape Peninsula University of Technology, 2014. http://hdl.handle.net/20.500.11838/1178.

Full text
Abstract:
Thesis submitted in fulfilment of the requirements for the degree Master of Technology: Electrical Engineering in the Faculty of Engineering at the Cape Peninsula University of Technology, 2014
The electric power utilities play a vital role in the generation, transmission and distribution of electrical power to the end users. The power utilities face two major issues: i) power grids are expected to operate close to maximum capacity, and ii) there is a need for accurate and better monitoring and control of the power system network using modern technology and the available tools. These two issues are interconnected, as better monitoring allows for better control of the power system. The development of new standard-based power system technologies has contributed to the idea of building a Smart Grid. The challenge is that this process requires the development of new control and operation architectures and of methods for data acquisition, data transfer, and control computation. These methods require data on the full dynamic state of the power system in real time, which leads to the introduction of synchrophasor-based monitoring and control of the power system. The thesis describes the research work and investigations into integrating the existing new power system technologies to build fully automated systems for real-time solution of power system energy management problems, incorporating data measurement and acquisition, data transfer and distribution through a communication network, and data storage and retrieval in one integrated system.
APA, Harvard, Vancouver, ISO, and other styles
4

Staschus, Konstantin. "Renewable energy in electric utility capacity planning: a decomposition approach with application to a Mexican utility." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/53898.

Full text
Abstract:
Many electric utilities have been tapping such energy sources as wind energy or conservation for years. However, the literature shows few attempts to incorporate such non-dispatchable energy sources as decision variables into the long-range planning methodology. In this dissertation, efficient algorithms for electric utility capacity expansion planning with renewable energy are developed. The algorithms include a deterministic phase which quickly finds a near-optimal expansion plan using derating and a linearized approximation to the time-dependent availability of non-dispatchable energy sources. A probabilistic second phase needs comparatively few computer-time-consuming probabilistic simulation iterations to modify this solution towards the optimal expansion plan. For the deterministic first phase, two algorithms, based on a Lagrangian Dual decomposition and a Generalized Benders Decomposition, are developed. The Lagrangian Dual formulation results in a subproblem which can be separated into single-year plant-mix problems that are easily solved using a breakeven analysis. The probabilistic second phase uses a Generalized Benders Decomposition approach. A depth-first Branch and Bound algorithm is superimposed on the two-phase algorithm if conventional equipment types are only available in discrete sizes. In this context, computer time savings accrued through the application of the two-phase method are crucial. Extensive computational tests of the algorithms are reported. Among the deterministic algorithms, the one based on Lagrangian Duality proves fastest. The two-phase approach is shown to save up to 80 percent in computing time as compared to a purely probabilistic algorithm. The algorithms are applied to determine the optimal expansion plan for the Tijuana-Mexicali subsystem of the Mexican electric utility system. A strong recommendation to push conservation programs in the desert city of Mexicali results from this implementation.
Ph. D.
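The single-year plant-mix problems mentioned in the abstract are classically solved by a breakeven (screening-curve) analysis: each plant type has an annualized fixed cost and a variable operating cost, and the utilisation level at which two types cost the same determines which one serves each slice of the load duration curve. A minimal sketch with hypothetical cost figures, not values from the dissertation:

```python
def breakeven_hours(fixed_a, var_a, fixed_b, var_b):
    """Operating hours per year at which plant types A and B have equal
    total cost per kW: fixed_a + var_a * h == fixed_b + var_b * h."""
    return (fixed_b - fixed_a) / (var_a - var_b)

# Hypothetical costs (not from the dissertation): a baseload unit with a
# high fixed but low variable cost vs. a peaking unit with the opposite.
base = {"fixed": 200.0, "var": 0.02}   # $/kW-yr, $/kWh
peak = {"fixed": 50.0, "var": 0.08}

h = breakeven_hours(base["fixed"], base["var"], peak["fixed"], peak["var"])
# Load-duration-curve slices served fewer than h hours/year go to the
# peaker; slices served longer go to the baseload unit.
print(round(h))  # 2500
```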
APA, Harvard, Vancouver, ISO, and other styles
5

Leung, Kwok-wing, and 梁國榮. "The strategic importance of information systems in the electricity supply industry in Hong Kong." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31266691.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Thompson, John Ronald. "Development and Analysis of a Model for Change in the Workplace, Using Quasi-Experimentation with Computer Professionals in Northwestern Investor Owned Utilities." PDXScholar, 1991. https://pdxscholar.library.pdx.edu/open_access_etds/1248.

Full text
Abstract:
Computer professionals have been agents of change in many organizations. In some cases the role inadvertently became theirs, as they were the ones at the vanguard of implementing new information processing technology in organizations; in other cases they were the catalysts for change, forcing new methods and procedures onto lethargic organizations. While introducing change on others in the organization and adapting to new technological changes themselves, the computer professionals have not really had to face a significant change in their own status, power, or importance to the organization. The introduction of the personal computer has brought about significant change in the way the job of the computer professional is perceived by many in the business world. While this change is personally affecting the way they do their job, there has not been a noticeable attempt by those managing computer professionals to deal with the human emotions engendered by such a change. Part of the reason for this lack of attention may be the lack of a model of how computer professionals react to change. Such a model would provide a system whereby it would be possible to recognize where efforts could be made to measure, predict, and modify situations so that a smooth transition can be made to the change. Toward this end, a model was developed which presents a system of how computer professionals react to change. This dissertation presents the model, surveys a population of computer professionals, and analyzes the model using data gathered from the population. The data was gathered in the form of a self-administered survey given to computer professionals working for six investor-owned electric and gas utilities in the Northwestern United States. They answered questions on a scale of one to five about their emotions and perceptions regarding the introduction of personal computers into their organizations.
These questions spanned the timeframe over which the organizations migrated from the early beginnings of personal computer introduction to a situation where the use of personal computers was widespread in the company. In the case of three of the companies, the personal computer had not yet achieved widespread use at the time of the survey. The data gathered from the computer professionals was statistically analyzed to see if relationships exist between the model and the data. Additionally, demographic data was analyzed to see if certain other factors affected the computer professionals' perception of the impact of the personal computer on their quality of worklife.
APA, Harvard, Vancouver, ISO, and other styles
7

Nduku, Nyaniso Prudent. "Development of methods for distribution network power quality variation monitoring." Thesis, Cape Peninsula University of Technology, 2009. http://hdl.handle.net/20.500.11838/1144.

Full text
Abstract:
Thesis (MTech (Electrical Engineering))--Cape Peninsula University of Technology, 2009
The purpose of this project is to develop methods for distribution network power quality variation monitoring. Power quality (PQ) has become a significant issue for both power suppliers and customers, and there have been important changes in power systems regarding power quality requirements. Power quality is the combination of voltage quality and current quality. The main research problem of the project is to investigate the power quality of a distribution network by selecting proper measurements and by applying and developing existing classic and modern signal conditioning methods for extracting and monitoring the parameters of power disturbances. The research objectives are: to study the requirements of the IEC 61000-4-30 standard; to investigate the points of common coupling in the distribution network; to identify the points for measurement; to develop a MySQL database for the data from the measurements; to develop MATLAB software for simulation of the network; to develop methods based on Fourier transforms for estimation of the parameters of the disturbances; and to develop software for implementing these methods. The influence of different loads on power quality disturbances is considered in the distribution network. Points on the network and meters according to the IEC power quality standards are investigated and applied for the CPUT Bellville campus distribution network. The implementation of power quality monitoring for the CPUT Bellville campus helps the quality of the power supply to be improved and the power used to be reduced. MATLAB programs to communicate with the database and calculate the disturbance and power quality parameters are developed.
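A minimal sketch of the Fourier-transform-based parameter estimation mentioned in the objectives: correlate the sampled waveform with the DFT bins of the harmonics of interest and read off their amplitudes. The 50 Hz waveform and harmonic content below are synthetic, not CPUT measurement data:

```python
import math

def harmonic_magnitudes(samples, f0_cycles, harmonics):
    """Estimate the amplitudes of selected harmonics by direct DFT
    correlation. `f0_cycles` is the (integer) number of fundamental
    cycles in the sample window, so each harmonic falls on an exact
    bin and there is no spectral leakage."""
    n = len(samples)
    mags = {}
    for h in harmonics:
        k = h * f0_cycles  # DFT bin of the h-th harmonic
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags[h] = 2.0 * math.hypot(re, im) / n
    return mags

# Synthetic waveform: unit fundamental plus a 20% fifth harmonic.
n, cycles = 1000, 10
wave = [math.sin(2 * math.pi * cycles * i / n)
        + 0.2 * math.sin(2 * math.pi * 5 * cycles * i / n) for i in range(n)]
mags = harmonic_magnitudes(wave, cycles, [1, 5])
print(round(mags[1], 3), round(mags[5], 3))  # 1.0 0.2
```

With an integer number of cycles in the window the bins are orthogonal, so each harmonic's amplitude is recovered exactly; real monitoring must also handle leakage and frequency drift, which IEC 61000-4-30-class instruments address with synchronised windows.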
APA, Harvard, Vancouver, ISO, and other styles
8

Faruqui, Saif Ahmed. "Utility computing: Certification model, costing model, and related architecture development." CSUSB ScholarWorks, 2005. https://scholarworks.lib.csusb.edu/etd-project/2756.

Full text
Abstract:
The purpose of the thesis was to propose one set of solutions to some of the challenges that are delaying the adoption of utility computing on a wider scale. The proposed components enable effective deployment of utility computing, efficient look-up, and comparison of the service offerings of different utility computing resource centers connected to the utility computing network.
APA, Harvard, Vancouver, ISO, and other styles
9

Nemoto, Jiro, and Mika Goto. "Measurement of Dynamic Efficiency in Production : An Application of Data Envelopment Analysis to Japanese Electric Utilities." Springer, 2003. http://hdl.handle.net/2237/7775.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Javanshir, Marjan. "DC distribution system for data center." Thesis, Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/hkuto/record/B39344952.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Qadri, Syed Saadat. "A systematic approach to setting underfrequency relays in electric power systems /." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=116022.

Full text
Abstract:
Generation loss contingencies in electric power systems result in a deviation of system frequency from nominal, a condition which must be corrected promptly in order to prevent further degradation of the power system. Automatic load-shedding using underfrequency relays is one of the techniques used to correct abnormal frequency deviations and prevent the risk of uncontrolled outages. If sufficient load is shed following a contingency to preserve interconnections and keep generators on-line, the system can be restored with relative speed and ease. On the other hand, if a declining frequency condition is not dealt with adequately, a cascading disconnection of generating units may develop, leading to a possible total system blackout.
This thesis develops and tests a new systematic method for setting underfrequency relays that offers a number of advantages over conventional methods. A discretized swing equation model is used to evaluate the system frequency following a contingency, and the operational logic of an underfrequency relay is modeled using mixed integer linear programming (MILP) techniques. The proposed approach computes relay settings with respect to a subset of all plausible contingencies for a given system. A method for selecting the subset of contingencies for inclusion in the MILP is presented. The goal of this thesis is to demonstrate that, given certain degrees of freedom in the relay setting problem, it is possible to obtain a set of relay settings that limits damage or disconnection of generating units for each and every possible generation loss outage in a given system, while attempting to shed the least amount of load for each contingency.
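The discretized swing-equation evaluation described above can be illustrated with a single aggregate-machine frequency response and one underfrequency load-shedding stage. The inertia constant, load damping, generation deficit and relay threshold below are hypothetical, and the thesis's MILP formulation of the relay logic is not reproduced here:

```python
def frequency_trajectory(h_const, d_load, p_deficit, f0=60.0, dt=0.01,
                         steps=300, shed_freq=59.3, shed_amount=0.05):
    """Euler-discretized aggregate swing equation:
    df/dt = f0 * dP / (2H), with dP the p.u. power imbalance including
    frequency-dependent load damping. One shedding stage trips when the
    frequency crosses `shed_freq` (hypothetical relay setting)."""
    f = f0
    deficit = p_deficit
    shed_done = False
    traj = [f]
    for _ in range(steps):
        dp = -deficit - d_load * (f - f0) / f0  # imbalance + load damping
        f += dt * f0 * dp / (2 * h_const)
        if not shed_done and f <= shed_freq:
            deficit -= shed_amount  # relay trips: shed load, shrink deficit
            shed_done = True
        traj.append(f)
    return traj

traj = frequency_trajectory(h_const=4.0, d_load=1.0, p_deficit=0.1)
print(f"minimum {min(traj):.2f} Hz, final {traj[-1]:.2f} Hz")
```

Comparing runs with and without shedding shows the relay arresting the decline, which is the behaviour the MILP-chosen settings are meant to guarantee across a whole contingency set.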
APA, Harvard, Vancouver, ISO, and other styles
12

Beran, Edward W. "An electromagnetic interference analysis of uninterruptible power supply systems in a data processing environment." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02Dec%5FBeran.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, December 2002.
Thesis advisor(s): Richard W. Adler, Wilbur R. Vincent. Includes bibliographical references (p. 103-104). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
13

湯世傑 and Sai-kit Tong. "A computer-aided measurement system for monopolar high-voltage direct-current coronating lines." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1986. http://hub.hku.hk/bib/B31207467.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Erabelli, Prasad Rao 1962. "EXPERT SYSTEM FOR DESIGN OF ARC WELDING (ARTIFICIAL INTELLIGENCE)." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/291579.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Langkilde, Maria. "Positioning Electric Field Sensors in the Marine Environment Using Passage Data." Thesis, Uppsala universitet, Fasta tillståndets fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-435114.

Full text
Abstract:
When underwater sensors are deployed, there is always some uncertainty about the actual position of the sensors. The most common way of determining sensor positions is the use of hydro-acoustic methods; for electric field sensors, however, the most favourable approach would be to use the sensor system itself. The first question answered in this report is whether it is possible to position electric field sensors with the sensor system itself, and the answer is yes. An algorithm has been developed which calculates the relative positions of the sensors based on data measured by the sensors when a dipole passes the sensor group. The algorithm extracts zero crossings of the z-component of the electric field measured by each sensor, which are converted to moments in time, multiplied by the speed and course of the vessel, and finally turned into relative position vectors between the sensors using vector algebra. The predicted relative position is within 0.2 m of the sensors' actual position, which answers the second question, about how accurate the method is. However, the error estimation is within a couple of centimetres, indicating that there are sources of error other than speed and course. The third question answered is whether the method is better than acoustic methods, and the answer is no; nonetheless, the methods are within the same order of magnitude. In conclusion, the method has acceptable performance, especially considering that it can determine the position of the sensors with the sensor system itself, which could be significant.
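The positioning chain described in the abstract — zero crossings of the measured Ez converted to times, scaled by the vessel's speed and course, then combined by vector algebra — can be sketched as follows. The synthetic signals, speed and course are invented for illustration:

```python
import math

def zero_crossing_time(times, values):
    """Linearly interpolated time of the first sign change in `values`."""
    for i in range(len(values) - 1):
        v0, v1 = values[i], values[i + 1]
        if v0 == 0.0:
            return times[i]
        if v0 * v1 < 0.0:
            t0, t1 = times[i], times[i + 1]
            return t0 + (t1 - t0) * (-v0) / (v1 - v0)
    raise ValueError("no zero crossing found")

def relative_position(t_a, t_b, speed, course_deg):
    """Relative position vector (east, north) from sensor A to sensor B:
    the crossing-time difference multiplied by the vessel's (assumed
    straight, constant-speed) track velocity."""
    dt = t_b - t_a
    c = math.radians(course_deg)
    return dt * speed * math.sin(c), dt * speed * math.cos(c)

# Invented passage: vessel at 2 m/s heading due north; each sensor's Ez
# crosses zero as the dipole passes overhead.
t = [0.1 * i for i in range(100)]
ez_a = [math.sin(0.5 * (ti - 3.0)) for ti in t]  # crossing near t = 3.0 s
ez_b = [math.sin(0.5 * (ti - 5.5)) for ti in t]  # crossing near t = 5.5 s
ta, tb = zero_crossing_time(t, ez_a), zero_crossing_time(t, ez_b)
east, north = relative_position(ta, tb, speed=2.0, course_deg=0.0)
print(round(east, 2), round(north, 2))  # 0.0 5.0 -> B is ~5 m north of A
```

This only recovers along-track separation from one passage; the thesis combines passages and vector algebra across the whole sensor group to fix the full relative geometry.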
APA, Harvard, Vancouver, ISO, and other styles
16

劉心雄 and Sum-hung Lau. "Adaptive FEM preprocessing for electro magnetic field analysis of electric machines." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31212451.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Muthuswamy, Sunil. "System implementation of a real-time, content based application router for a managed publish-subscribe system." Online access for everyone, 2008. http://www.dissertations.wsu.edu/Thesis/Summer2008/S_Muthuswamy_080408.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Hansen, Charles William. "Model enhancements for state estimation in electric power systems." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/15611.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Hall, David Eric. "Transient thermal models for overhead current-carrying hardware." Thesis, Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/17133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Julie, Ferdie Gavin. "Development of an IEC 61850 standard-based automation system for a distribution power network." Thesis, Cape Peninsula University of Technology, 2014. http://hdl.handle.net/20.500.11838/1183.

Full text
Abstract:
Thesis submitted in fulfillment of the requirements for the degree Master of Technology: Electrical Engineering in the Faculty of Engineering at the Cape Peninsula University of Technology
The electric power distribution network, an essential section of the electric power system, supplies electrical power to the customer. Automating the distribution network allows for better efficiency, reliability, and quality of service through the installation of distribution control systems. Presently, research and development efforts are focused on communication technologies and on application of the IEC 61850 protocol to make distribution automation more comprehensive, efficient and affordable. The aim of the thesis is to evaluate the relevance of IEC 61850 standard-based technology in the development and investigation of distribution automation for a typical underground distribution network, through the development of a distribution automation algorithm for fault detection, location, isolation and service restoration, and the building of a lab-scale test bench. Distribution Automation (DA) has been around for many decades, and each utility applies its developments for different reasons. Nowadays, due to advancements in communication technology, a dependable, automatically reconfigurable power system that responds swiftly to instantaneous events is possible. Distribution automation functions do not only supersede legacy devices; they allow the distribution network to function on another level. The primary function of a DA system is to enable the devices on the distribution network to be operated and controlled remotely, to automatically locate and isolate faults and reconnect supply during fault conditions. Utilities have become increasingly interested in DA due to the numerous benefits it offers. Operations, maintenance and efficiency within substations and out on the feeders can be improved by the development of new additional capabilities of DA. Furthermore, the new standard-based technology has advanced further than a traditional Distribution Supervisory Control and Data Acquisition (DSCADA) system.
These days the most important components of a DA system include Intelligent Electronic Devices (IEDs). IEDs have evolved through the years to execute various protection-related actions, monitoring and control functions, and are very promising for improving the operation of DA systems. The thesis develops an algorithm for automatic fault detection, location, isolation and system supply restoration using the functions of IEC 61850 standard-based technology. A lab-scale system that meets existing and future requirements for the control and automation of a typical underground distribution system is designed and constructed. The requirement for the lab-scale distribution system is the ability to clear faults through reliable and fast protection operation, isolate the faulted section(s) of the network, and restore power to the unaffected parts of the network through the automation control operation functions of the IEC 61850 standard. Various tests and simulations have been done on the lab-scale test bench to prove that the objective of the thesis is achieved. Keywords: IEC 61850 standard, Distribution automation, Distribution automation system, IEDs, Lab-scale test bench, Protection, Algorithm for automatic control
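The fault location, isolation and service restoration (FLISR) sequence that such an algorithm automates can be caricatured on a radial feeder. The section names, fault flags and tie-switch rule below are assumptions for illustration, not the thesis's IEC 61850 IED-based implementation:

```python
def flisr(sections, fault_flags, tie_available=True):
    """Toy FLISR pass over a radial feeder. `sections` are ordered from the
    source breaker outward; `fault_flags[i]` is True if the protective
    device upstream of section i reported fault current. The faulted
    section is the furthest one whose upstream device saw fault current;
    it is isolated, the upstream sections are re-energised from the
    source, and the downstream sections are restored via a tie switch."""
    if not any(fault_flags):
        return {"faulted": None, "isolated": [], "restored": list(sections)}
    faulted = max(i for i, seen in enumerate(fault_flags) if seen)
    upstream = sections[:faulted]                       # fed from the source
    downstream = sections[faulted + 1:] if tie_available else []
    return {"faulted": sections[faulted],
            "isolated": [sections[faulted]],
            "restored": upstream + downstream}          # downstream via tie

feeder = ["S1", "S2", "S3", "S4"]
# Devices at S1 and S2 saw fault current; S3 and S4 did not -> fault in S2.
plan = flisr(feeder, [True, True, False, False])
print(plan)
```

In the thesis this decision logic is distributed across IEDs exchanging IEC 61850 messages rather than computed centrally; the point here is only the location-by-last-flag and isolate-then-restore structure.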
APA, Harvard, Vancouver, ISO, and other styles
21

Ogidan, Olugbenga Kayode. "Design of nonlinear networked control for wastewater distributed systems." Thesis, Cape Peninsula University of Technology, 2014. http://hdl.handle.net/20.500.11838/1201.

Full text
Abstract:
Thesis submitted in fulfilment of the requirements for the degree Doctor of Technology: Electrical Engineering in the Faculty of Engineering at the Cape Peninsula University of Technology, 2014
This thesis focuses on the design, development and real-time simulation of a robust nonlinear networked control for the dissolved oxygen (DO) concentration as part of wastewater distributed systems. This concept differs from previous methods of wastewater control in that the controller and the wastewater treatment plants are separated by a wide geographical distance and exchange data through a communication medium. The communication network introduced between the controller and the DO process creates imperfections during its operation, such as time delays, which are an object of investigation in the thesis. Due to the communication network imperfections, new control strategies that take cognisance of the network imperfections in the process of controller design are needed to provide adequate robustness for the DO process control system. This thesis first investigates the effects of constant and random network-induced time delays, and the effects of controller parameters, on the DO process behaviour, with a view to using the obtained information to design an appropriate controller for the networked closed-loop system. On the basis of this information, a Smith predictor delay compensation controller is developed in the thesis to eliminate the deadtime, provide robustness and improve the performance of the DO process. Two approaches are adopted in the design of the Smith predictor compensation scheme. The first is a transfer function approach that allows a linearized model of the DO process to be described in the frequency domain. The second is a nonlinear linearising approach in the time domain. Simulation results reveal that the developed Smith predictor controllers out-performed the nonlinear linearising controller designed for the DO process without time delays, by compensating for the network imperfections and maintaining the DO concentration within a desired acceptable level.
The transfer function approach to designing the Smith predictor is found to perform better under small time delays, but its performance deteriorates under large time delays and disturbances. It is also found to respond faster than the nonlinear approach. The nonlinear feedback linearising approach is slower in response time but out-performs the transfer function approach in providing robustness and performance for the DO process under large time delays and disturbances. The developed Smith predictor compensation schemes were later simulated on a real-time platform using LabVIEW. The Smith predictor controllers developed in this thesis can be applied to process control plants other than wastewater plants, wherever distributed control is required. They can also be applied in nuclear reactor plants where remote control is required in hazardous conditions. The developed LabVIEW real-time simulation environment would be a valuable tool for researchers and students in the field of control system engineering. Lastly, this thesis forms the basis for further research in the field of distributed wastewater control.
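The Smith predictor structure the abstract describes — run a delay-free internal model alongside the plant so the controller effectively sees an undelayed output — can be sketched in discrete time. The first-order plant, PI gains and network delay below are illustrative stand-ins, not the thesis's DO-process model, and the internal model is assumed perfect (the thesis studies robustness when it is not):

```python
from collections import deque

def simulate(delay_steps, use_smith, steps=400, dt=0.1,
             a=0.5, b=0.5, kp=2.0, ki=0.4, setpoint=1.0):
    """PI control of a first-order plant y' = -a*y + b*u whose measurement
    arrives over a network with `delay_steps` samples of delay. With
    `use_smith`, the delay-free model output plus the (measured - modelled)
    delayed mismatch reconstructs an undelayed feedback signal: the
    classical Smith predictor."""
    y = ym = integ = 0.0
    buf = deque([0.0] * delay_steps, maxlen=delay_steps)   # delayed plant output
    mbuf = deque([0.0] * delay_steps, maxlen=delay_steps)  # delayed model output
    out = []
    for _ in range(steps):
        if use_smith:
            feedback = ym + (buf[0] - mbuf[0])  # predicted undelayed output
        else:
            feedback = buf[0]                   # raw delayed measurement
        e = setpoint - feedback
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-a * y + b * u)    # true plant (Euler step)
        ym += dt * (-a * ym + b * u)  # internal model
        buf.append(y)
        mbuf.append(ym)
        out.append(y)
    return out

plain = simulate(delay_steps=20, use_smith=False)  # 2 s delay in the loop
smith = simulate(delay_steps=20, use_smith=True)
print(f"plain peak {max(plain):.2f}, smith final {smith[-1]:.3f}")
```

With these gains the delayed loop is close to its stability limit and oscillates heavily, while the Smith-compensated loop behaves like the delay-free design and settles smoothly, which is the deadtime-elimination effect the abstract refers to.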
APA, Harvard, Vancouver, ISO, and other styles
22

Ray, Subhasis. "Multi-objective optimization of an interior permanent magnet motor." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=116021.

Full text
Abstract:
In recent years, due to growing environmental awareness regarding global warming, green cars, such as hybrid electric vehicles, have gained a lot of importance. With the decreasing cost of rare earth magnets, brushless permanent magnet motors, such as the Interior Permanent Magnet Motor, have found usage as part of the traction drive system in these types of vehicles. Building a motor with a performance curve that suits both city and highway driving has been treated in this thesis as a multi-objective problem: matching specific points of the torque-speed curve to the desired performance output. Conventionally, this has been treated as separate problems or as a combination of several individual problems, but doing so gives little information about the trade-offs involved. As a means of identifying the compromise solutions, we have developed a stochastic optimizer for tackling electromagnetic device optimization and have also demonstrated a new innovative way of studying how different design parameters affect performance.
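Treating the torque-speed matching as a multi-objective problem means the optimizer must report non-dominated trade-offs rather than a single best design. A minimal Pareto-dominance filter over hypothetical two-objective scores (both minimized; the numbers are invented, not motor results from the thesis):

```python
def dominates(a, b):
    """True if candidate `a` is at least as good as `b` in every objective
    (all minimized) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated candidates: the trade-off set from
    which a designer picks a compromise."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical objective pairs: (error at a city-driving torque-speed
# point, error at a highway point) for five candidate motor designs.
designs = [(0.10, 0.50), (0.20, 0.20), (0.50, 0.10), (0.40, 0.40), (0.15, 0.60)]
print(pareto_front(designs))  # [(0.1, 0.5), (0.2, 0.2), (0.5, 0.1)]
```

The dominated designs (0.40, 0.40) and (0.15, 0.60) drop out; the remaining three expose exactly the city-versus-highway trade-off the thesis is interested in.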
APA, Harvard, Vancouver, ISO, and other styles
23

Fahey, Mark. "Assessment of the suitability of CFD for product design by analysing complex flows around a domestic oven." University of Otago. Department of Design Studies, 2007. http://adt.otago.ac.nz./public/adt-NZDU20070417.111809.

Full text
Abstract:
Competitive global markets are increasing the commercial pressure on manufacturing companies to develop better products in less time. To meet these demands, the appliance manufacturer Fisher & Paykel has considered the use of computer simulation of fluid flows to assist in product design. This technology, known as Computational Fluid Dynamics (CFD), has the potential to provide rewarding insight into the behaviour of designs involving fluids. However, the investment in CFD is not without risk. This thesis investigates the use of CFD in oven design expressly to evaluate the numerical accuracy and suitability of CFD in the context of oven product development. CFD was applied to four cases related to oven design, along with detailed experimental investigations, and resulted in a number of relevant findings. In a study of an impinging jet, the SST turbulence model was found to produce better results than the k-ε turbulence model. Measurements indicated that the flow was unsteady, but CFD struggled to reproduce this behaviour. The synergy between experimental and numerical techniques was highlighted in the simulation of a two-pane oven door, with temperatures on the outer surface of the door predicted by CFD to within 2% of measured values. In the third study, a CFD simulation of a tangential fan failed to deliver acceptable steady-state results; however, a transient simulation showed promise. The final case examined the flows through the door and cooling circuit of the Titan oven. Velocities predicted by CFD compared well against measurements in some regions, such as the potential core of the jet at the outlet vent, but poorly in others, such as regions of entrained air. Temperatures were predicted to within an average of 2% of measured values. It is found that limited accuracy does not necessarily prevent CFD from delivering engineering value to the product development process.
The engineering value delivered by CFD is instead more likely to be limited by the abilities of the user. Incompatibilities between CFD and the product development process can reduce the potential value of CFD, but the effects can be minimised by appropriate management action. The benefits of CFD are therefore found to be sufficient to merit its use in the product development process, provided its integration into the organisation is managed effectively and the tool is used with discernment. Recommendations for achieving this are provided.
APA, Harvard, Vancouver, ISO, and other styles
24

Shaaban, Mohamed Mohamed Abdel Moneim. "Calculation of available transfer capability of transmission networks including static and dynamic security." Thesis, Click to view the E-thesis via HKUTO, 2002. http://sunzi.lib.hku.hk/hkuto/record/B42576817.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Ghosh, Sushmita. "Real time data acquisition for load management." Thesis, Virginia Tech, 1985. http://hdl.handle.net/10919/45726.

Full text
Abstract:
Demand for data transfer between computers has increased ever since the introduction of the personal computer (PC). Data communication on the personal computer is much more productive, as it is an intelligent terminal that can connect to various hosts on the same I/O hardware circuit as well as execute processes on its own as an isolated system. Yet the PC on its own is useless for data communication: it requires a hardware interface circuit and software for controlling the handshaking signals and setting up communication parameters. Often the data is distorted by noise in the line; such transmission errors are embedded in the data and require careful filtering. The thesis deals with the development of a data acquisition system that collects real-time load and weather data and stores them as a historical database for use in a load forecast algorithm in a load management system. A filtering technique has been developed that checks for transmission errors in the raw data. The microcomputers used in this development are the IBM PC/XT and the AT&T 3B2 supermicro computer.
Master of Science
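The transmission-error filtering idea can be illustrated with a simple range-and-step plausibility check over incoming samples; the thresholds and hold-last-good-value rule are hypothetical, not the thesis's actual filter for its load and weather data:

```python
def filter_readings(readings, lo, hi, max_step):
    """Flag transmission-corrupted samples: values outside the physically
    plausible range [lo, hi], or jumping more than `max_step` from the
    last accepted sample, are replaced by the previous good value
    (hypothetical rule for illustration)."""
    cleaned, last = [], None
    for r in readings:
        bad = not (lo <= r <= hi) or (last is not None and abs(r - last) > max_step)
        if bad:
            cleaned.append(last if last is not None else lo)
        else:
            cleaned.append(r)
            last = r
    return cleaned

# A line-noise spike (9999) and an impossible negative load get held over.
print(filter_readings([100, 103, 9999, 104, -5, 106], 0, 500, 50))
# [100, 103, 103, 104, 104, 106]
```

Note the filter compares each sample against the last *accepted* value, so a single corrupted sample does not poison the step check for its successors.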
APA, Harvard, Vancouver, ISO, and other styles
26

Matalgah, Mustafa M. "Geometric theory for designing optical binary amplitude and binary phase-only filters /." free to MU campus, to others for purchase, 1996. http://wwwlib.umi.com/cr/mo/fullcit?p9717158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Shaalan, Hesham Ezzat. "An interval mathematics approach to economic evaluation of power distribution systems." Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/40081.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Yang, Hsien-Min 1957. "PRINCIPAL COMPONENTS AND TEXTURE ANALYSIS OF THE NS-001 THEMATIC MAPPER SIMULATOR DATA IN THE ROSEMONT MINING DISTRICT, ARIZONA (GEOLOGIC, DIGITAL IMAGE PROCESSING, TEXTURE EXTRACTION)." Thesis, The University of Arizona, 1985. http://hdl.handle.net/10150/275436.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Mehar, Sara. "The vehicle as a source and consumer of information : collection, dissemination and data processing for sustainable mobility." Thesis, Dijon, 2014. http://www.theses.fr/2014DIJOS069/document.

Full text
Abstract:
Aujourd'hui, les véhicules sont devenus de plus en plus sophistiqués, intelligents et connectés. En effet, ils sont équipés de capteurs, radars, GPS, interfaces de communication et capacités de traitement et de stockage élevés. Ils peuvent collecter, traiter et communiquer les informations relatives à leurs conditions de travail et leur environnement formant un réseau véhiculaire. L'intégration des technologies de communication sur les véhicules fait l'objet d'une immense attention de l'industrie, des autorités gouvernementales et des organisations de standardisations; elle a ouvert la voie à des applications innovantes qui vont révolutionner le marché de l'automobile avec les principaux objectifs d'assurer la sécurité sur les routes, augmenter l'efficacité des transports et offrir un confort aux conducteurs et passagers. En outre, le transport est un secteur en évolution active. Des moyens de transport plus durables comme les véhicules électriques s'introduisent progressivement sur le marché de l'automobile tout en créant de nouveaux défis liés à la contrainte énergétique et la protection de l'environnement qui restent à résoudre.De nombreux projets et études ont été initiés exploitant les avantages des technologies de l'information et de communication (TIC) afin de répondre aux différents défis des systèmes de transport. Cependant, avoir des véhicules connectés et coopératifs crée un réseau hautement dynamique caractérisé par des ruptures de lien et de pertes de messages très fréquentes. Pour résoudre ces problèmes de communication, cette thèse se concentre sur deux axes majeurs: (i) le véhicule connecté (ou mobilité connectée) et (ii) la mobilité durable. Dans la première partie de cette thèse, la diffusion, la collecte et l'acheminement de données dans un réseau de véhicule sont adressés. Ainsi, un nouveau protocole de diffusion est proposé afin de faire face à la fragmentation et la connectivité intermittente dans ces réseaux. 
Ensuite, une nouvelle stratégie de déploiement d'infrastructure de communication est conçue afin d'améliorer la connectivité réseau et l'utilisation des ressources. Enfin, un nouveau protocole de routage, pour applications sensibles au délai, utilisant cette nouvelle infrastructure de communication est proposé. La deuxième partie se concentre sur la mobilité durable avec un focus sur les véhicules électriques et avec un objectif de réduire les problèmes de pollution et d'utiliser efficacement l'énergie. Une nouvelle architecture de gestion de flottes de véhicules électriques est proposée. Cette dernière utilise les protocoles implémentés dans la première partie de cette thèse afin de collecter, traiter et diffuser les données. Elle permet de surmonter les limitations liées à la courte autonomie des batteries des véhicules électriques. Ensuite, pour répondre aux besoins et défis d'équilibre énergétique, un nouveau schéma de déploiement des stations de recharge pour véhicules électriques est proposé. Cette solution permet de satisfaire les demandes des conducteurs en terme d'énergie, tout en tenant compte les capacités énergétiques disponibles
Today, vehicles have become more sophisticated, intelligent and connected. Indeed, they are equipped with sensors, radars, GPS, communication interfaces, and high processing and storage capacities. They can collect, process and communicate information related to their working conditions and their environment, forming a vehicular network. The incorporation of communication technologies into vehicles has garnered huge attention from industry, government authorities and standardization organizations, and has opened the way for innovative applications that are revolutionizing the automotive market, with the main goals of ensuring safety on roads, increasing transport efficiency and providing comfort to drivers and passengers. In addition, transportation is still an actively evolving sector. More sustainable means of transportation, such as electric vehicles, are being introduced progressively to the automotive market, with new challenges related to energy consumption and environmental preservation that remain to be solved. Many research investigations and industrial projects have been undertaken to exploit the advantages of information and communication technologies (ICT) to address transportation challenges. However, having connected and cooperative vehicles creates a highly dynamic network characterized by frequent link breaks and message losses. To cope with these communication limitations, this thesis focuses on two major axes: (i) the connected vehicle, or connected mobility, and (ii) sustainable mobility. In the first part of this thesis, data dissemination, collection and routing in vehicular networks are addressed. Thus, a new dissemination protocol is proposed to deal with frequent network fragmentation and intermittent connectivity in these networks. Then, a new deployment strategy for communication infrastructure is developed in order to increase network connectivity and enhance the utilization of network resources. 
Finally, a new routing protocol for delay-sensitive applications that uses the optimized infrastructure deployment is proposed. The second part focuses on sustainable mobility, with an emphasis on electric vehicles, and with the main objective of reducing pollution and making better use of energy. A new architecture for electric vehicle fleet management is proposed. The latter uses the protocols implemented in the first part of this thesis to collect, process and disseminate data, and helps overcome the limitations related to the short autonomy of electric vehicle batteries. Then, to meet energy-balance challenges, a new deployment scheme for electric vehicle charging stations is developed. This solution helps satisfy drivers' demands in terms of energy while taking available resources into account.
APA, Harvard, Vancouver, ISO, and other styles
30

Weaver, Michael B. "Performance comparison between three different bit allocation algorithms inside a critically decimated cascading filter bank." Diss., Online access via UMI, 2009.

Find full text
Abstract:
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Electrical and Computer Engineering, 2009.
Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
31

Pajic, Slobodan. "Sequential quadratic programming-based contingency constrained optimal power flow." Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0430103-152758.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Hagerty, David Joseph. "Designing and Simulating a Multistage Sampling Rate Conversion System Using a Set of PC Programs." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4697.

Full text
Abstract:
The thesis covers a series of PC programs that we have written that will enable users to easily design FIR linear phase lowpass digital filters and multistage sampling rate conversion systems. The first program is a rewrite of the McClellan-Parks computer program with some slight modifications. The second program uses an algorithm proposed by Rabiner that determines the length of a lowpass digital filter. Rabiner used a formula proposed by Herrmann et al. to initially estimate the filter length in his algorithm. The formula, however, assumes unity gain. We present a modification to the formula so that the gain of the filter is normalized to accommodate filters that have a gain greater than one (as in the case of a lowpass filter used in an interpolator). We have also changed the input specifications from digital to analog. Thus, the user supplies the sampling rate, passband frequency, stopband frequency, gain, and the respective maximum band errors. The program converts the specifications to digital. Then, the program iteratively estimates the filter length and interacts with the McClellan-Parks Program to determine the actual filter length that minimizes the maximum band errors. Once the actual length is known, the filter is designed and the filter coefficients may be saved to a file. Another new finding that we present is the condition that determines when to add a lowpass filter to a multistage decimator in order to reduce the total number of filter taps required to implement the system. In a typical example, we achieved a 34% reduction in the total required number of filter taps. The third program is a new program that optimizes the design of a multistage sampling rate conversion system based upon the sum of weighted computational rates and storage requirements. It determines the optimum number of stages and the corresponding upsampling and downsampling factors of each stage of the design. 
It also determines the length of the required lowpass digital filters using the second program. Quantization of the filter coefficients may have a significant impact on the frequency response. Consequently, we have included a routine within our program that determines the effects of such quantization on the allowable error margins within the passband and stopband. Once the filter coefficients are calculated, they can be saved to files and used in an appropriate implementation. The only requirements of the user are the initial sampling rate, final sampling rate, passband frequency, stopband frequency, corresponding maximum errors for each band, and the weighting factors to determine the optimization factor. We also present another new program that implements a sampling rate conversion from CD (44.1 kHz) to DAT (48 kHz) for digital audio. Using the third program to design the filter coefficients, the fourth program converts an input sequence (either samples of a sine wave or a unit sample sequence) sampled at the lower rate to an output sequence sampled at the higher rate. The frequency response is then plotted and the output block may be saved to a file.
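As background to the CD-to-DAT conversion mentioned above: resampling 44.1 kHz to 48 kHz requires a rational ratio L/M (upsample by L, lowpass filter, downsample by M), obtained by reducing the two rates by their greatest common divisor. A minimal sketch (the function name is ours, not from the thesis):

```python
from math import gcd

def rate_conversion_factors(fs_in, fs_out):
    """Smallest integers (L, M) such that fs_out = fs_in * L / M:
    upsample by L, lowpass filter, then downsample by M."""
    g = gcd(fs_in, fs_out)
    return fs_out // g, fs_in // g

L, M = rate_conversion_factors(44100, 48000)   # CD -> DAT
print(L, M)   # the well-known 160/147 ratio
```

A multistage design, as the thesis describes, would then factor L and M into per-stage factors to minimize the weighted cost of computation and storage.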
APA, Harvard, Vancouver, ISO, and other styles
33

Shields, Shawn. "Dynamic thermal response of the data center to cooling loss during facility power failure." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29725.

Full text
Abstract:
Thesis (M. S.)--Mechanical Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Yogendra K. Joshi; Committee Member: Mostafa Ghiaasiaan; Committee Member: Sheldon Jeter. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
34

Hagerty, David Joseph. "Designing and Simulating a Multistage Sampling Rate Conversion System Using a Set of PC Programs." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4762.

Full text
Abstract:
The thesis covers a series of PC programs that we have written that will enable users to easily design FIR linear phase lowpass digital filters and multistage sampling rate conversion systems. The first program is a rewrite of the McClellan-Parks computer program with some slight modifications. The second program uses an algorithm proposed by Rabiner that determines the length of a lowpass digital filter. Rabiner used a formula proposed by Herrmann et al. to initially estimate the filter length in his algorithm. The formula, however, assumes unity gain. We present a modification to the formula so that the gain of the filter is normalized to accommodate filters that have a gain greater than one (as in the case of a lowpass filter used in an interpolator). We have also changed the input specifications from digital to analog. Thus, the user supplies the sampling rate, passband frequency, stopband frequency, gain, and the respective maximum band errors. The program converts the specifications to digital. Then, the program iteratively estimates the filter length and interacts with the McClellan-Parks Program to determine the actual filter length that minimizes the maximum band errors. Once the actual length is known, the filter is designed and the filter coefficients may be saved to a file. Another new finding that we present is the condition that determines when to add a lowpass filter to a multistage decimator in order to reduce the total number of filter taps required to implement the system. In a typical example, we achieved a 34% reduction in the total required number of filter taps. The third program is a new program that optimizes the design of a multistage sampling rate conversion system based upon the sum of weighted computational rates and storage requirements. It determines the optimum number of stages and the corresponding upsampling and downsampling factors of each stage of the design. 
It also determines the length of the required lowpass digital filters using the second program. Quantization of the filter coefficients may have a significant impact on the frequency response. Consequently, we have included a routine within our program that determines the effects of such quantization on the allowable error margins within the passband and stopband. Once the filter coefficients are calculated, they can be saved to files and used in an appropriate implementation. The only requirements of the user are the initial sampling rate, final sampling rate, passband frequency, stopband frequency, corresponding maximum errors for each band, and the weighting factors to determine the optimization factor. We also present another new program that implements a sampling rate conversion from CD (44.1 kHz) to DAT (48 kHz) for digital audio. Using the third program to design the filter coefficients, the fourth program converts an input sequence (either samples of a sine wave or a unit sample sequence) sampled at the lower rate to an output sequence sampled at the higher rate. The frequency response is then plotted and the output block may be saved to a file.
APA, Harvard, Vancouver, ISO, and other styles
35

Rosa, Luiz Henrique Leite. "Sistema de apoio à gestão de utilidades e energia: aplicação de conceitos de sistemas de informação e de apoio à tomada de decisão." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/3/3143/tde-03082007-165825/.

Full text
Abstract:
Este trabalho trata da especificação, desenvolvimento e utilização do Sistema de Apoio à Gestão de Utilidades e Energia - SAGUE, um sistema concebido para auxiliar na análise de dados coletados de sistemas de utilidades como ar comprimido, vapor, sistemas de bombeamento, sistemas para condicionamento ambiental e outros, integrados com medições de energia e variáveis climáticas. O SAGUE foi desenvolvido segundo conceitos presentes em sistemas de apoio à decisão como Data Warehouse e OLAP - Online Analytical Processing - com o intuito de transformar os dados oriundos de medições em informações que orientem diretamente as ações de conservação e uso racional de energia. As principais características destes sistemas, que influenciaram na especificação e desenvolvimento do SAGUE, são tratadas neste trabalho. Além disso, este texto aborda a gestão energética e os sistemas de gerenciamento de energia visando apresentar o ambiente que motivou o desenvolvimento do SAGUE. Neste contexto, é apresentado o Sistema de Gerenciamento de Energia Elétrica - SISGEN, um sistema de informação para suporte à gestão de energia elétrica e de contratos de fornecimento, cujos dados coletados podem ser analisados através do SAGUE. A aplicação do SAGUE é tratada na forma de um estudo de caso no qual se analisa a correlação existente entre o consumo de energia elétrica da CUASO - Cidade Universitária Armando de Sales Oliveira, obtido através do SISGEN, e as medições de temperatura ambiente, fornecidas pelo IAG - Instituto de Astronomia, Geofísica e Ciências Atmosféricas da USP.
This work deals with the specification, development and use of the Support System for Utility and Energy Management - SAGUE, a system created to assist in the analysis of data collected from utility systems such as compressed air, steam, water pumping and environmental conditioning systems, among others, integrated with energy consumption and climatic measurements. The development of SAGUE was based on concepts and methodologies from decision support systems, such as Data Warehouse and OLAP - Online Analytical Processing - in order to transform measurement data into information that guides actions for energy conservation and rational use. The main characteristics of Data Warehouse and OLAP tools that influenced the specification and development of SAGUE are described in this work. In addition, this text deals with energy management and energy management systems in order to present the environment that motivated the development of SAGUE. Within this context, the Electrical Energy Management System - SISGEN is presented, a system to support the management of electrical energy and supply contracts, whose collected measurements can be analyzed with SAGUE. The use of SAGUE is presented in a case study that discusses the correlation between the electrical energy consumption of CUASO - Cidade Universitária Armando de Sales Oliveira, obtained through SISGEN, and the local ambient temperature measurements supplied by IAG - the Institute of Astronomy, Geophysics and Atmospheric Sciences of USP.
APA, Harvard, Vancouver, ISO, and other styles
36

Meghnefi, Fethi. "Étude temporelle et fréquentielle du courant de fuite des isolateurs de poste recouverts de glace en vue du développement d'un système de surveillance et de prédiction en temps réel du contournement électrique /." Thèse, Chicoutimi : Université du Québec à Chicoutimi, 2007. http://theses.uqac.ca.

Full text
Abstract:
Thesis (D.Eng.) -- Université du Québec à Chicoutimi, 2007.
The title page also reads: thesis presented to the Université du Québec à Chicoutimi in partial fulfilment of the requirements for the doctorate in engineering. Bibliography: leaves 235-244. Electronic document also available in PDF format.
APA, Harvard, Vancouver, ISO, and other styles
37

Saldanha, Carlos M. "An algebraic constraint system for computer-aided design in magnetics /." Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=64003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Yen, Wen-Tsung. "Comparison of SPICE and Network C simulation models using the CAM system." PDXScholar, 1991. https://pdxscholar.library.pdx.edu/open_access_etds/4243.

Full text
Abstract:
The performance of the SPICE and Network C (NC) circuit simulators when simulating MOS transistor circuits has been investigated and compared. The SPICE analog model, the NC analog model and the NC MOS_PWL model are the three MOS transistor models used. The comparison between SPICE and NC covers five areas: the MOS transistor model, circuit analysis and computational methods, limitations on the ability to simulate circuits containing the MOS transistor diode configuration, run time, and the ability to build new circuit component models from derived equations.
APA, Harvard, Vancouver, ISO, and other styles
39

Aluru, Gunasekhar. "Exploring Analog and Digital Design Using the Open-Source Electric VLSI Design System." Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc849770/.

Full text
Abstract:
The design of VLSI electronic circuits can be carried out at many different abstraction levels, from system behavior down to the most detailed physical layout. As the number of transistors in VLSI circuits increases, the complexity of the design also increases and is now beyond human ability to manage manually. Hence CAD (Computer-Aided Design) or EDA (Electronic Design Automation) tools are involved in the design: they automate the design, verification and testing of these VLSI circuits. Many EDA tools are available on today's market, but they are expensive and require high-performance platforms. One of the key challenges today is therefore to select appropriate open-source CAD or EDA tools for academic purposes. This thesis provides a detailed examination of an open-source EDA tool called the Electric VLSI Design System. An excellent and efficient CAD tool, Electric allows students and teachers to implement their ideas by modifying its source code. The thesis' primary objective is to explain Electric's features and architecture and to present various digital and analog designs implemented with this software for educational purposes. Since the choice of an EDA tool is based on the efficiency and functions it provides, the thesis also explains the analysis and synthesis tools that Electric offers and how efficient they are. It is therefore of benefit to students and teachers who choose Electric as their open-source EDA tool.
APA, Harvard, Vancouver, ISO, and other styles
40

Rahman, Md Raqibur. "Online testing in ternary reversible logic." Thesis, Lethbridge, Alta. : University of Lethbridge, c2011, 2011. http://hdl.handle.net/10133/3208.

Full text
Abstract:
In recent years ternary reversible logic has caught the attention of researchers because of its enormous potential in different fields, in particular quantum computing. It is desirable that any future reversible technology be fault tolerant and have low power consumption; hence developing testing techniques in this area is of great importance. In this work we propose a design for an online-testable ternary reversible circuit. The proposed design can implement almost all of the ternary logic operations and is also capable of testing the reversible ternary network in real time (online). The error detection unit is also constructed in a reversible manner, resulting in an overall circuit that meets the requirements of reversible computing. We also propose an upgrade of the initial design to make it more optimized. Several ternary benchmark circuits have been implemented using the proposed approaches, and the number of gates required to implement the benchmarks under each approach has been compared. To our knowledge this is the first such circuit in ternary logic with an integrated online testability feature.
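For readers unfamiliar with ternary reversible gates: a standard example from the literature is the generalized ternary Feynman (controlled-increment) gate, which maps (a, b) to (a, (a + b) mod 3). The specific gate library used in the thesis is not given here; this is a generic illustration of what "reversible" means, namely that the gate permutes its input space:

```python
from itertools import product

def ternary_feynman(a, b):
    """Generalized ternary Feynman gate over trits {0, 1, 2}:
    (a, b) -> (a, (a + b) mod 3). The first trit is the control."""
    return a, (a + b) % 3

# Reversibility check: the 9 input pairs must map to 9 distinct outputs,
# i.e. the gate is a bijection (a permutation of its input space).
outputs = {ternary_feynman(a, b) for a, b in product(range(3), repeat=2)}
assert len(outputs) == 9
```

Online testability, as the abstract describes, additionally requires circuitry that checks such gates during normal operation rather than in a separate test mode.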
xii, 92 leaves : ill. ; 29 cm
APA, Harvard, Vancouver, ISO, and other styles
41

Eter, Walid. "Système de suivi des tempêtes de verglas en temps réel = Analysis of real time icing events /." Thèse, Chicoutimi : Université du Québec à Chicoutimi, 2003. http://theses.uqac.ca.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Grando, Flavio Lori. "Arquitetura para o desenvolvimento de unidades de medição fasorial sincronizada no monitoramento a nível de distribuição." Universidade Tecnológica Federal do Paraná, 2016. http://repositorio.utfpr.edu.br/jspui/handle/1/1762.

Full text
Abstract:
CAPES
Este trabalho tem por objetivo o desenvolvimento de uma arquitetura de baixo custo para construção de unidades de medição fasorial sincronizada (PMU). O dispositivo prevê conexão com a baixa tensão da rede elétrica, de forma que, instalada neste ponto do sistema permita o monitoramento da rede de transmissão e distribuição. Os desenvolvimentos deste projeto contemplam uma arquitetura completa, com módulo de instrumentação para uso na baixa tensão da rede, módulo GPS para fornecer o sinal de sincronismo e etiqueta de tempo das medidas, unidade de processamento com sistema de aquisição, estimação de fasores e formatação dos dados de acordo com a norma e, por fim, módulo de comunicação para transmissão dos dados. Para o desenvolvimento e avaliação do desempenho da arquitetura, desenvolveu-se um conjunto de aplicativos em ambiente LabVIEW com funcionalidades específicas que permitem analisar o comportamento das medidas e identificar as fontes de erro da PMU, além de aplicar todos os testes previstos pela norma IEEE C37.118.1. O primeiro aplicativo, útil para o desenvolvimento da instrumentação, consiste em um gerador de funções integrado com osciloscópio, que permite a geração e aquisição de sinais de forma sincronizada, além da manipulação das amostras. O segundo e principal deles, é a plataforma de testes capaz de gerar todos os ensaios previstos pela norma, permitindo também armazenar os dados ou fazer a análise das medidas em tempo real. Por fim, um terceiro aplicativo foi desenvolvido para avaliar os resultados dos testes e gerar curvas de ajuste para calibração da PMU. Os resultados contemplam todos os testes previstos pela norma e um teste adicional que avalia o impacto de ruído. 
Além disso, através de dois protótipos conectados à instalação elétrica de consumidores de um mesmo circuito de distribuição, obteve-se registros de monitoramento que permitiram a identificação das cargas no consumidor, análise de qualidade de energia, além da detecção de eventos a nível de distribuição e transmissão.
This work presents a low-cost architecture for the development of synchronized phasor measurement units (PMUs). The device is intended to be connected to the low-voltage grid, which allows the monitoring of transmission and distribution networks. The project comprises a complete PMU, with an instrumentation module for use on the low-voltage network, a GPS module to provide the sync signal and time stamps for the measurements, a processing unit with the acquisition system, phasor estimation and data formatting according to the standard and, finally, a communication module for data transmission. To develop and evaluate the performance of this PMU, a set of applications with specific features was developed in the LabVIEW environment, allowing the behavior of the measurements to be analyzed, the PMU's error sources to be identified, and all the tests specified by the standard to be applied. The first application, useful for the development of the instrumentation, is a function generator integrated with an oscilloscope, which allows signals to be generated and acquired synchronously and the samples to be manipulated. The second and main one is the test platform, capable of generating all the tests specified by the synchronized phasor measurement standard IEEE C37.118.1, and also allows the data to be stored or the measurements to be analyzed in real time. Finally, a third application was developed to evaluate the test results and generate calibration curves to adjust the PMU. The results cover all the tests specified by the synchrophasor standard plus an additional test that evaluates the impact of noise. Moreover, two prototypes connected to the electrical installations of consumers on the same distribution circuit produced monitoring records that allowed the identification of consumer loads and power quality analysis, as well as the detection of events at the distribution and transmission levels.
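As background to the phasor estimation mentioned above: a synchrophasor is commonly estimated with a one-cycle DFT over the most recent cycle of samples. A minimal sketch follows; the sampling rate, windowing and phase convention here are our illustrative choices, not the thesis' implementation:

```python
import cmath
import math

def estimate_phasor(samples, fs, f0=60.0):
    """One-cycle DFT phasor estimate at the nominal frequency f0.
    Returns (rms_magnitude, angle_degrees) of the fundamental,
    using the most recent cycle of samples."""
    n = int(round(fs / f0))              # samples per nominal cycle
    window = samples[-n:]                # latest full cycle
    acc = sum(x * cmath.exp(-2j * math.pi * k / n)
              for k, x in enumerate(window))
    phasor = (math.sqrt(2) / n) * acc    # scale so |phasor| is the RMS value
    return abs(phasor), math.degrees(cmath.phase(phasor))

# A unit-amplitude 60 Hz cosine at 30 degrees, sampled at 960 Hz (16 samples/cycle),
# should yield RMS magnitude 1/sqrt(2) and angle 30 degrees.
fs, f0 = 960.0, 60.0
samples = [math.cos(2 * math.pi * f0 * k / fs + math.radians(30)) for k in range(16)]
mag, ang = estimate_phasor(samples, fs, f0)
```

IEEE C37.118.1 then judges such an estimator by its total vector error against a GPS-synchronized reference, which is what the thesis' test platform automates.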
APA, Harvard, Vancouver, ISO, and other styles
43

Piga, Leonardo de Paula Rosa 1985. "Modeling, characterization, and optimization of web server power in data centers = Modelagem, caracterização e otimização de potência em centro de dados." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275608.

Full text
Abstract:
Advisors: Sandro Rigo, Reinaldo Alvarenga Bergamaschi
Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Computação
Resumo: Para acompanhar uma demanda crescente pelos recursos computacionais, empresas de TI precisaram construir instalações que comportam centenas de milhares de computadores chamadas centro de dados. Este ambiente é altamente dependente de energia elétrica, um recurso que é cada vez mais caro e escasso. Neste contexto, esta tese apresenta uma abordagem para otimizar potência e desempenho em centro de dados Web. Para isto, apresentamos uma infraestrutura para medir a potência dissipada por computadores de prateleiras, desenvolvemos modelos empíricos que estimam a potência de servidores Web e, por fim, implementamos uma de nossas heurísticas de otimização de potência global em um aglomerado de nós de processamento chamado AMD SeaMicro SM15k. A infraestrutura de medição de potência é composta por: uma placa personalizada, que é capaz de medir potência e é instalada em computadores de prateleira; um conversor de dados analógico/digital que amostra os valores de potência; e um software controlador. Mostramos uma nova metodologia para o desenvolvimento de modelos de potência para servidores Web que diminuem a quantidade de parâmetros dos modelos e reduzem as relações não lineares entre medidas de desempenho e potência do sistema. Avaliamos a nossa metodologia em dois servidores Web, um constituído por um processador AMD Opteron e outro por processador Intel i7. Nossos melhores modelos tem erro médio absoluto de 1,92% e noventa percentil para o erro absoluto de 2,66% para o sistema com processador Intel i7. O erro médio para o sistema composto pelo processador AMD Opteron é de 1,46% e o noventa percentil para o erro absoluto é igual a 2,08%. A implantação do sistema de otimização de potência global foi feita em um aglomerado de nós de processamento SeaMicro SM15k. 
A implementação se baseia no conceito de Virtual Power States, uma combinação de taxa de utilização de CPU com os estados de potência P e C disponíveis em processadores modernos, e no nosso algoritmo de otimização chamado Slack Recovery. Propomos e implementamos também um novo mecanismo capaz de controlar a utilização da CPU. Nossos resultados experimentais mostram que o nosso sistema de otimização pode reduzir o consumo de potência em até 16% quando comparado com o governador de potência do Linux chamado performance e em até 6,7% quando comparado com outro governador de potência do Linux chamado ondemand
Abstract: To keep up with an increasing demand for computational resources, IT companies need to build facilities that host hundreds of thousands of computers, the data centers. This environment is highly dependent on electrical energy, a resource that is becoming expensive and limited. In this context, this thesis develops a global, data center-level power and performance optimization approach for Web server data centers. It presents a power measurement framework for commodity servers, develops empirical models for estimating the power consumed by Web servers, and implements one of the global power optimization heuristics on a state-of-the-art, high-density SeaMicro SM15k cluster by AMD. The power measuring framework is composed of a custom-made board, which is able to capture the power consumption; a data acquisition device that samples the measured values; and a piece of software that manages the framework. We show a novel method for developing full-system Web server power models that prunes model parameters and reduces non-linear relationships among performance measurements and system power. The Web server power models take as parameters performance indicators read from the machine's internal performance counters. We evaluate our approach on an AMD Opteron-based Web server and on an Intel i7-based Web server. Compared to actual measurements, our best models display an average absolute error of 1.92% for the Intel i7 server and 1.46% for the AMD Opteron server, with 90th percentiles of the absolute percent error equal to 2.66% for the Intel i7 and 2.08% for the AMD Opteron. We deploy the global power management system in a state-of-the-art SeaMicro SM15k cluster. The implementation relies on the concept of Virtual Power States, a combination of the CPU utilization rate with the P/C power states available in modern processors, and on our global optimization algorithm called Slack Recovery. 
We also propose and implement a novel mechanism to control utilization rates in each server, a key aspect of our power/performance optimization system. Experimental results show that our Slack Recovery-based system can reduce power consumption by up to 16% compared to the Linux performance governor and by 6.7% compared to the Linux ondemand governor.
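As background to the empirical power models described above: a full-system power model driven by performance counters is often fitted by least squares, with the intercept capturing idle power. A minimal sketch with synthetic data (the counter choice and all numbers are invented for illustration; this is not the thesis' model):

```python
import numpy as np

# Synthetic counter readings (rows: samples; columns: two hypothetical
# normalized performance counters, e.g. core activity and memory accesses).
X = np.array([[0.2, 0.1],
              [0.5, 0.3],
              [0.8, 0.4],
              [1.0, 0.9]])
# Measured wall power in watts, generated here as exactly
# 40 + 25*x1 + 30*x2 so the fit is fully recoverable.
y = 40.0 + 25.0 * X[:, 0] + 30.0 * X[:, 1]

A = np.column_stack([np.ones(len(X)), X])     # intercept column models idle power
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # coef = [idle_watts, w1, w2]
mape = np.mean(np.abs(A @ coef - y) / y) * 100
```

The thesis' contribution lies in pruning which counters enter X and in reducing the non-linear counter/power relationships before fitting; the fit itself is the standard step sketched here.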
Doctorate
Computer Science
Doctor of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
44

Merugu, Shashidhar. "Network Design and Routing in Peer-to-Peer and Mobile Ad Hoc Networks." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7219.

Full text
Abstract:
Peer-to-peer networks and mobile ad hoc networks are emerging distributed networks that share several similarities. Fundamental among these similarities is the decentralized role of each participating node in routing messages on behalf of other nodes, thereby collectively realizing communication between any pair of nodes. Messages are routed on a topology graph that is determined by the peer relationship between nodes. Although routing is fairly straightforward when the topology graph is static, dynamic variations in the peer relationship that often occur in peer-to-peer and mobile ad hoc networks present challenges to routing. In this thesis, we examine the interplay between message routing and network topology design in two classes of these networks -- unstructured peer-to-peer networks and sparsely-connected mobile ad hoc networks. In unstructured peer-to-peer networks, we add structure to overlay topologies to support file sharing. Specifically, we investigate the advantages of designing overlay topologies with small-world properties to improve (a) search protocol performance and (b) network utilization. We show, using simulation, that "small-world-like" overlay topologies, where every node has many close neighbors and few random neighbors, exhibit a high chance of locating files close to the source of the file search query. This improvement in search protocol performance is achieved while decreasing the traffic load on the links in the underlying network. In the context of sparsely-connected mobile ad hoc networks where nodes provide connectivity via mobility, we present a protocol for routing in space and time, where the message forwarding decision involves not only where to forward (space), but also when to forward (time).
We introduce space-time routing tables and develop methods to compute these routing tables for those instances of ad hoc networks where node mobility is predictable over either a finite horizon or indefinitely due to periodicity in node motion. Furthermore, when the node mobility is unpredictable, we investigate several forwarding heuristics to address the scarcity in transmission opportunities in these sparsely-connected ad hoc networks. In particular, we present the advantages of fragmenting messages and augmenting them with erasure codes to improve the end-to-end message delivery performance.
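When node mobility is predictable, the space-time routing tables mentioned above can be derived from a known contact schedule. The sketch below (with a hypothetical schedule, not one from the thesis) computes the earliest time a message can reach each node by forwarding along contacts in time order, which is the core sub-problem behind such tables.

```python
# Hypothetical contact schedule for a sparsely-connected ad hoc network:
# (time, node_a, node_b) means a and b can exchange messages at that time.
contacts = [
    (1, "A", "B"),
    (2, "B", "C"),
    (3, "A", "C"),
    (4, "C", "D"),
]

def earliest_delivery(contacts, src, dst):
    """Earliest time a message originating at src can reach dst, assuming a
    copy is forwarded whenever a contact links a reached node to another."""
    reached = {src: 0}          # node -> earliest time it holds the message
    for t, a, b in sorted(contacts):   # process contacts in time order
        if a in reached and reached[a] <= t:
            reached[b] = min(reached.get(b, float("inf")), t)
        if b in reached and reached[b] <= t:
            reached[a] = min(reached.get(a, float("inf")), t)
    return reached.get(dst)
```

For example, a message from A reaches D at time 4 via the B-C-D chain, while no path from D back to A exists within this schedule.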
APA, Harvard, Vancouver, ISO, and other styles
45

Meira, Paulo César Magalhães 1985. "Análise da filosofia de eliminação de defeitos em sistemas de distribuição considerando aspectos de confiabilidade e de qualidade de energia." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261202.

Full text
Abstract:
Advisor: Walmir de Freitas Filho
Thesis (doctorate) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: The fault elimination strategy employed by an electric energy distribution utility has a great impact on the reliability and power quality of the system. For example, the policy of employing automatic reclosers typically has a beneficial impact on reliability indices based on the frequency and duration of sustained interruptions but, on the other hand, has a negative impact on power quality indices based on the frequency of temporary interruptions. This is confirmed by the number of utilities around the world that are reviewing their strategies of employing automatic reclosers in a generalized fashion as consumer concern with power quality grows. In addition, the system is being modernized with the use of more monitoring and automation equipment, such as automatic sectionalizing switches, digital relays, etc., within the context conventionally called smart grids. Therefore, the strategies for fault elimination and for improving reliability and power quality indices in distribution systems are currently undergoing changes and have attracted the interest of the scientific and technological community. The objective of this work is to develop methods to assist decision making on the fault elimination strategy in distribution systems via an integrated evaluation of reliability and power quality indices. The methods employed are based on the use of the utility's historical records and measurements, on the computation of reliability and power quality indices, and on optimization techniques and statistical treatments. To allow the methods to be applied to real systems, classic algorithms for reliability and power quality analysis are revisited and reformulated so as to allow their application to large-scale systems in feasible running time.
Ways of allowing the parallel and distributed execution of the main algorithms employed in the proposed methods are also investigated.
Abstract: The fault elimination policy used by an electric energy distribution utility has a great impact on the reliability and power quality of the system. For example, the policy of using automatic reclosers typically has a positive impact on the reliability indices based on the frequency and duration of sustained interruptions but, on the other hand, has a negative impact on the power quality indices based on the frequency of temporary interruptions. This can be verified by the number of utilities around the world that are reevaluating their generalized use of automatic reclosers as customers demand better power quality. At the same time, systems are being modernized with more monitoring and automation equipment, such as automatic sectionalizing switches, digital relays, etc., in a context usually called smart grids. Therefore, the policies regarding fault elimination and the improvement of reliability and power quality indices in distribution systems are currently being reformulated and have attracted the interest of the academic and technology communities. The objective of this thesis is to develop methods to assist the decision-making process on fault elimination policies in distribution systems using an integrated evaluation of reliability and power quality indices. The methods are based on the use of historical records and utility measurements, on the computation of reliability and power quality indices, and on optimization techniques and statistical analysis. To enable the implementation of the methods in actual systems, the classic algorithms used to analyze reliability and power quality are revisited and reformulated in order to allow their application to large-scale systems in feasible running time. Alternatives to allow the parallel and distributed execution of the main algorithms of the proposed methods are also explored.
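The reliability indices based on the frequency and duration of sustained interruptions that this kind of evaluation relies on are conventionally SAIFI, SAIDI, and CAIDI. A minimal sketch of their computation from interruption records follows; all record values are hypothetical.

```python
# Hypothetical sustained-interruption records for one feeder and one year:
# (customers_affected, duration_in_hours)
interruptions = [(120, 1.5), (300, 0.75), (45, 4.0)]
customers_served = 1000

# SAIFI: average number of sustained interruptions per customer served
saifi = sum(c for c, _ in interruptions) / customers_served
# SAIDI: average interruption duration (hours) per customer served
saidi = sum(c * d for c, d in interruptions) / customers_served
# CAIDI: average duration of an interruption as experienced by customers
caidi = saidi / saifi
```

A momentary-interruption index such as MAIFI is computed like SAIFI, but over the temporary-interruption records that recloser operations generate.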
Doctorate
Electrical Energy
Doctor of Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
46

Pacheco, Edson José. "MorphoMap: mapeamento automático de narrativas clínicas para uma terminologia médica." Universidade Tecnológica Federal do Paraná, 2009. http://repositorio.utfpr.edu.br/jspui/handle/1/124.

Full text
Abstract:
Clinical documentation requires the representation of complex situations such as clinical opinions, images and examination results, and treatment plans, among others. Among health professionals, natural language is the main means of documentation. In this type of language, characterized by high syntactic and lexical flexibility, ambiguities in sentences and terms are common. The objective of the present work is to map information encoded in clinical narratives to a domain ontology (SNOMED CT). To achieve this, natural language processing (NLP) tools were applied, and heuristics were adopted for mapping texts to ontologies. For the development of the research, a sample of discharge summaries was obtained from the Hospital das Clínicas de Porto Alegre, RS, Brazil. Part of the summaries was manually annotated, applying an Active Learning strategy, to prepare a corpus for training the NLP tools. In parallel, algorithms were developed for pre-processing the 'dirty' texts (containing a large number of errors, acronyms, abbreviations, etc.). After identifying the noun phrases output by the NLP tools, several heuristics (acronym identification, spelling correction, suppression of numeric values, and conceptual distance) were applied for the mapping to SNOMED CT. The current version of SNOMED CT is not available in Portuguese, requiring the use of multilingual processing tools. To this end, the current research is part of the MorphoSaurus project initiative, which develops and makes available a multilingual thesaurus (Portuguese, German, English, Spanish, Swedish, French), as well as software components that allow cross-lingual processing.
For the research, 80% of the summary base was analyzed and manually annotated, resulting in a domain corpus (medical texts in Portuguese) that allowed the specialization of the OpenNLP software (based on the statistical model for NLP and selected after evaluating other available solutions). The tagger's accuracy reached 93.67%. The MorphoSaurus multilingual thesaurus was extended, restructured, and evaluated (automatically, by comparison over comparable texts - 'translations of the same text into different languages') and underwent interventions aimed at correcting existing imperfections, improving linguistic coverage by 2% for Portuguese and 50% for Spanish, as measured by precision and recall curves over the OHSUMED base. Finally, encoding information from clinical narratives into a domain ontology is an area of high scientific and clinical interest, since a large part of the data produced during medical care is stored as free text rather than in structured fields. To this end, SNOMED CT was adopted. The feasibility of the mapping methodology was demonstrated by evaluating the automatic mapping results against a manually developed gold standard, indicating a precision of 83.9%.
Clinical documentation requires the representation of fine-grained descriptions of patients' history, evolution, and treatment. These descriptions are materialized in findings reports, medical orders, and evolution and discharge summaries. In most clinical environments, natural language is the main carrier of documentation. Written clinical jargon is commonly characterized by idiosyncratic terminology and a high frequency of highly context-dependent ambiguous expressions (especially acronyms and abbreviations); violations of spelling and grammar rules are common. The purpose of this work is to map free text from clinical narratives to a domain ontology (SNOMED CT). To this end, natural language processing (NLP) tools are combined with a semantic mapping heuristic. The study uses discharge summaries from the Hospital das Clínicas de Porto Alegre, RS, Brazil. Part of these texts is used for creating a training corpus through manual annotation supported by active learning technology; the corpus is used to train NLP tools that identify parts of speech and cleanse "dirty" text passages. This made it possible to obtain relatively well-formed and unambiguous noun phrases. A heuristic was implemented for the semantic mapping between these noun phrases (in Portuguese) and the terms describing the SNOMED CT concepts (English and Spanish); it uses morphosemantic indexing based on a multilingual subword thesaurus provided by the MorphoSaurus system, the resolution of acronyms, and the identification of named entities (e.g., numbers). In this study, 80 percent of the summaries were analyzed and manually annotated, resulting in a domain corpus that supported the specialization of the OpenNLP system, which mainly follows the paradigm of statistical natural language processing (the tagger reached an accuracy of 93.67%). Simultaneously, several techniques were used to validate and improve the subword thesaurus.
To this end, the semantic representations of comparable test corpora from the medical domain in English, Spanish, and Portuguese were compared with regard to the relative frequency of semantic identifiers, improving the corpus coverage (by 2% for Portuguese and 50% for Spanish). The result was used as input by a team of lexicon curators, who continuously fixed errors and filled gaps in the trilingual thesaurus underlying the MorphoSaurus system. The progress of this work could be objectified using OHSUMED, a standard medical information retrieval benchmark. The mapping of text-encoded clinical information to a domain ontology constitutes an area of high scientific and practical interest due to the need for structured data analysis, whereas clinical information is routinely recorded in a largely unstructured way. In this work, the ontology used was SNOMED CT. The evaluation of the mapping methodology indicates an accuracy of 83.9%.
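The acronym-resolution and term-lookup steps of such a mapping heuristic can be caricatured as follows. The acronym table, term index, and concept codes below are illustrative stand-ins, not the actual MorphoSaurus or SNOMED CT machinery.

```python
# Illustrative acronym table and term index standing in for SNOMED CT lookup
acronyms = {"mi": "myocardial infarction", "htn": "hypertension"}
term_index = {
    "myocardial infarction": "SCT:22298006",
    "hypertension": "SCT:38341003",
}

def map_phrase(phrase):
    """Expand acronyms, normalize case, then look up the concept code
    (returns None when no exact match exists)."""
    words = [acronyms.get(w.lower(), w.lower()) for w in phrase.split()]
    normalized = " ".join(words)
    return term_index.get(normalized)
```

The real pipeline matches on language-independent subword identifiers rather than literal strings, which is what lets a Portuguese noun phrase hit an English or Spanish SNOMED CT term.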
APA, Harvard, Vancouver, ISO, and other styles
47

Simão, Daniel Hayashida. "Análise do consumo energético em redes subaquáticas utilizando códigos fontanais." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2774.

Full text
Abstract:
The present work addresses the application of fountain codes in underwater networks. Such networks transmit data underwater using acoustic signals and have several applications. However, it is known that this type of network is characterized by a low propagation speed and a smaller bandwidth than networks operating over better-known transmission media, such as wireless transmission via radio-frequency waves, resulting in larger packet-delivery delays. To minimize these delays and increase the energy efficiency of underwater networks, this work optimized the transmission system by inserting a fountain error-correcting code at the message transmitter. In this context, it was necessary to model the energy consumption required for the correct transmission of data packets in underwater networks using fountain codes. Among the results, the most relevant concludes that the use of fountain codes can reduce energy consumption by up to 30% when the transmission distance is 20 km for a target frame error rate (FER) of Po = 10^-5, and by up to 25% for a target FER of Po = 10^-3.
The present work employs fountain codes in an underwater network, in which data is transmitted using acoustic signals; such networks have many applications. However, underwater networks are usually characterized by a lower propagation speed and smaller bandwidth than networks that use radio-frequency signals, resulting in larger transmission delays. Aiming at minimizing these delays and increasing the energy efficiency of underwater networks, the present work employs fountain error-correcting codes at the transmitter. To that end, it was first necessary to model the energy consumption of a successful data packet transmission in an underwater network using fountain codes. Our results show that the use of fountain codes is able to reduce energy consumption by up to 30% when the transmission distance is 20 km for a target frame error rate (FER) of Po = 10^-5, and by 25% for the same distance with a target FER of Po = 10^-3.
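The kind of energy comparison described above can be illustrated with a toy model: per-packet stop-and-wait ARQ pays a feedback cost for every packet, while a fountain code sends a small encoding overhead and needs only one final acknowledgment. All constants below (energy units, overhead, erasure probability) are hypothetical and far simpler than the thesis's underwater acoustic channel model.

```python
# Toy comparison: expected energy to deliver a K-packet message over a link
# with packet-erasure probability p.
def arq_energy(K, p, e_data=1.0, e_ack=0.5):
    """Stop-and-wait ARQ: every packet is retried until received, and each
    delivered packet costs one acknowledgment (simplified)."""
    expected_tx = K / (1 - p)
    return expected_tx * e_data + K * e_ack

def fountain_energy(K, p, e_data=1.0, e_ack=0.5, overhead=0.05):
    """Fountain code: stream encoded packets until K*(1+overhead) arrive;
    only a single end-of-message acknowledgment is needed."""
    expected_tx = K * (1 + overhead) / (1 - p)
    return expected_tx * e_data + e_ack

# Relative saving for a 100-packet message at 10% erasure probability
saving = 1 - fountain_energy(100, 0.1) / arq_energy(100, 0.1)
```

Even this crude model shows savings of the same order as the abstract reports, because the small coding overhead is cheap compared with per-packet feedback over a slow acoustic channel.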
APA, Harvard, Vancouver, ISO, and other styles
48

Menezes, Ramon Maciel. "Desenvolvimento de um sistema distribuído de identificação em tempo real de parâmetros de qualidade de energia elétrica." Universidade Tecnológica Federal do Paraná, 2012. http://repositorio.utfpr.edu.br/jspui/handle/1/508.

Full text
Abstract:
CNPq, CAPES
The present work includes a review of power quality standards, in order to keep the development of the project aligned with national and international standards; and the simulation of algorithms such as CFA and FFT, in order to verify the feasibility of their use as well as the limitations associated with processing strongly distorted waveforms. It also includes the proposal and verification of an algorithm capable of calculating the indices (selected during the standards review) that can assess power quality from voltage and current signals. For the prototype, reliable voltage and current sensors were selected for the acquisition system, together with a DSP that executes the previously simulated algorithms, processing in real time the signals acquired by the sensors in order to report the state of the electrical grid and/or events occurring in the grid through a ZigBee module responsible for transmitting these data securely. The class of short-duration voltage variation events was included in the real-time processing performed by the DSP. Due to the unpredictability and speed of occurrence of these events, a tool capable of generating this class of events was developed: the VTCD (short-duration voltage variation) generator. Real-time power quality analysis proved feasible even with low-cost devices, allowing, albeit with some limitations, the collection of power quality information on the conditions to which known loads were subjected.
The present document includes a comprehensive literature review on power quality issues, to keep the development of this project aligned with related national and international standards, and the simulation of algorithms such as FFT and CFA in order to verify the feasibility of their use, as well as the limitations associated with processing strongly distorted waveforms. It also includes the proposal and verification of an algorithm able to calculate the indices (selected during the standards review) that can assess power quality through voltage and current signals. For the prototype, reliable voltage and current sensors were selected for the acquisition system, together with a DSP that runs the previously simulated algorithms to process in real time the voltage and current signals provided by the sensors and report the status of the mains grid and/or the occurrence of events on the network through a ZigBee module responsible for secure data transmission. The short-term voltage change event class was also included in the real-time processing performed by the DSP. Due to the unpredictability and short duration of these events, a tool capable of generating this class of events was developed: the STVC generator. Real-time PQ analysis proved feasible even with low-cost devices, allowing, although with some limitations, the survey of the PQ conditions to which known loads were subjected.
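Among the indices such an algorithm computes from voltage and current signals, total harmonic distortion (THD) is a representative example. The sketch below synthesizes one period of a distorted waveform and evaluates THD directly from DFT bins; the signal is synthetic, not data from the prototype.

```python
import math

# One period of a distorted waveform: fundamental of amplitude 1.0 plus
# 3rd harmonic (0.1) and 5th harmonic (0.05), sampled at N points
N = 256
x = [math.sin(2 * math.pi * n / N)
     + 0.10 * math.sin(2 * math.pi * 3 * n / N)
     + 0.05 * math.sin(2 * math.pi * 5 * n / N) for n in range(N)]

def harmonic_magnitude(x, k):
    """Amplitude of the k-th harmonic (window spans one signal period)."""
    N = len(x)
    re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
    im = -sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
    return 2 * math.sqrt(re * re + im * im) / N

fund = harmonic_magnitude(x, 1)
# THD: RMS of the harmonic amplitudes relative to the fundamental
thd = math.sqrt(sum(harmonic_magnitude(x, k) ** 2 for k in range(2, 8))) / fund
```

A DSP implementation would use an FFT over a synchronized window instead of these direct bin sums, but the index definition is the same.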
APA, Harvard, Vancouver, ISO, and other styles
49

Gomes, Eduardo Luis. "Arquitetura RF-Miner: uma solução para localização em ambientes internos." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2898.

Full text
Abstract:
The use of passive UHF RFID tags for indoor localization has been widely studied due to its low cost. However, there is still great difficulty in obtaining good results, mainly due to radio-frequency variation in environments containing reflective materials such as metals and glass. This research proposes a localization architecture for indoor environments using passive UHF RFID tags and data mining techniques. By applying the architecture in a real environment, it was possible to identify the exact position of objects in real time with a precision of approximately five centimeters. The architecture proved to be an efficient alternative for deploying indoor localization systems, in addition to presenting a direct-attribute derivation technique that contributes effectively to the final results.
The use of passive UHF RFID tags for indoor localization has been widely studied due to its low cost. However, there is still great difficulty in reaching good results, mainly due to radio-frequency variation in environments that contain materials with reflective surfaces, such as metal and glass. This research proposes a localization architecture for indoor environments using passive UHF RFID tags and data mining techniques. With the application of the architecture in a real environment, it was possible to identify the exact position of objects in real time with a precision of approximately five centimeters. The architecture proved to be an efficient alternative for the deployment of indoor localization systems, besides presenting a direct-attribute derivation technique that contributes effectively to the final results.
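A common data-driven approach to RFID indoor localization, and a much-simplified stand-in for the architecture's mining step, is nearest-neighbor matching of RSSI fingerprints. All positions and readings below are hypothetical.

```python
# Hypothetical RSSI fingerprint database:
# known position (x, y) -> signal-strength vector from three reader antennas
fingerprints = {
    (0, 0): [-40, -62, -71],
    (0, 1): [-48, -55, -69],
    (1, 0): [-45, -64, -60],
    (1, 1): [-52, -57, -58],
}

def locate(reading, k=1):
    """Return the position(s) of the k fingerprints closest to the reading
    (squared Euclidean distance in RSSI space)."""
    dist = lambda f: sum((a - b) ** 2 for a, b in zip(f, reading))
    return sorted(fingerprints, key=lambda pos: dist(fingerprints[pos]))[:k]
```

Fingerprinting is attractive in reflective environments precisely because it learns the distorted RF field instead of assuming a propagation model; the thesis's derived attributes play a similar role as extra features.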
APA, Harvard, Vancouver, ISO, and other styles
50

Romani, Eduardo. "Avaliação de qualidade de vídeo utilizando modelo de atenção visual baseado em saliência." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1169.

Full text
Abstract:
Video quality assessment plays a fundamental role in video processing and communication applications. An ideal video quality metric must guarantee a high correlation between the predicted video distortion and the quality perception of the Human Visual System. This work proposes the use of bottom-up visual attention models based on saliency for video quality assessment. Three objective evaluation metrics are proposed. The first method is a full-reference metric based on structural similarity. The second model is a no-reference metric based on sigmoidal modeling with a least-squares solution that uses the Levenberg-Marquardt algorithm and the extraction of spatio-temporal features. The third metric is analogous to the second, but uses the Blockiness feature to detect blocking distortions in the video. The bottom-up approach is used to obtain the saliency maps, which are extracted by a multiscale background model based on motion detection. The experimental results show an increase in video quality prediction efficiency for the metrics that use the saliency model compared with the corresponding metrics that do not, notably the proposed no-reference metrics, which produced better results than full-reference metrics for some categories of videos.
Video quality assessment plays a key role in video processing and communications applications. An ideal video quality metric shall ensure high correlation between the video distortion prediction and the perception of the Human Visual System. This work proposes the use of visual attention models with a bottom-up approach based on saliency for video quality assessment. Three objective metrics are proposed. The first method is a full-reference metric based on structural similarity. The second is a no-reference metric based on a sigmoidal model with a least-squares solution using the Levenberg-Marquardt algorithm and the extraction of spatial and temporal features. The third is analogous to the second, but uses the Blockiness characteristic for detecting blocking distortions in the video. The bottom-up approach is used to obtain the saliency maps, which are extracted using a multiscale background model based on motion detection. The experimental results show an increase in the quality-prediction efficiency of the proposed metrics using the saliency model in comparison with the same metrics without it, highlighting the proposed no-reference metrics, which obtained better results than full-reference metrics for some categories of videos.
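The core idea of saliency-based quality assessment, weighting distortions by how visually salient their location is, can be sketched with toy frames and a toy saliency map. The thesis's actual metrics build on structural similarity and motion-based saliency, not the plain squared error used here.

```python
# Toy 2x3 luminance "frames" and a normalized saliency map (weights sum to 1).
reference = [
    [0.8, 0.8, 0.2],
    [0.7, 0.9, 0.1],
]
distorted = [
    [0.6, 0.8, 0.2],   # distortion in a salient region
    [0.7, 0.5, 0.1],   # distortion in another salient region
]
saliency = [
    [0.30, 0.20, 0.05],
    [0.10, 0.30, 0.05],
]

def weighted_mse(ref, dist, sal):
    """Squared error weighted by per-pixel saliency."""
    return sum(s * (a - b) ** 2
               for r_row, d_row, s_row in zip(ref, dist, sal)
               for a, b, s in zip(r_row, d_row, s_row))

def plain_mse(ref, dist):
    """Uniform (unweighted) mean squared error, for comparison."""
    n = sum(len(r) for r in ref)
    return sum((a - b) ** 2
               for r_row, d_row in zip(ref, dist)
               for a, b in zip(r_row, d_row)) / n
```

Because both distortions above fall on salient pixels, the weighted score penalizes them more heavily than the uniform average does, which is exactly the behavior a perceptual metric wants.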
APA, Harvard, Vancouver, ISO, and other styles
