Dissertations / Theses on the topic 'Electronic computer unit'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 48 dissertations / theses for your research on the topic 'Electronic computer unit.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Boettcher, Matthias. "Memory and functional unit design for vector microprocessors." Thesis, University of Southampton, 2014. https://eprints.soton.ac.uk/365071/.

Full text
Abstract:
Modern mobile devices employ SIMD datapaths to exploit small scale data-level parallelism to achieve the performance required to process a continuously growing number of computation intensive applications within a severely energy constrained environment. The introduction of advanced SIMD features expands the applicability of vector ISA extensions from media and signal processing algorithms to general purpose code. Considering the high memory bandwidth demands and the complexity of execution units associated with those features, this dissertation focuses on two main areas of investigation, the efficient handling of parallel memory accesses and the optimization of vector functional units. A key observation, obtained from simulation based analysis on the type and frequency of memory access patterns exhibited by general purpose workloads, is the tendency of consecutive memory references to access the same page. Exploiting this and further observations, Page-Based Memory Access Grouping enables a level one data cache interface to utilize single-ported TLBs and cache banks to achieve performance similar to multi-ported components, while consuming significantly less energy. Page-Based Way Determination extends the proposed scheme with TLB-coupled structures holding way information on recently accessed lines. These structures improve the energy efficiency of the vast majority of memory references by enabling them to bypass tag-arrays and directly target individual cache ways. A vector benchmarking environment - comprised of a flexible ISA extension, a parameterizable simulation framework and a corresponding benchmark suite - is developed and utilized in the second part of this thesis to facilitate investigations into the design aspects and potential performance benefits of advanced SIMD features. Based on it, a set of microarchitecture optimizations is introduced, including techniques to compute hardware interpretable masks for segmented operations, partition scans to allow specific energy - performance trade-offs, re-use existing multiplexers to process predicated and segmented vectors, accelerate scans on incomplete vectors, efficiently handle micro-ops fully comprised of predicated elements, and reference multiple physical registers within individual operands to improve the utilization of the vector register file.
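As a rough illustration of the observation that drives Page-Based Memory Access Grouping, the short sketch below (not code from the thesis) groups a stream of element addresses by their page and compares the number of address translations needed by a per-element interface with a grouped one; the 4 KB page size and the address pattern are assumptions chosen for illustration.

```python
# Minimal sketch of page-based grouping of parallel memory accesses.
# Page size and the address stream are illustrative assumptions.
from itertools import groupby

PAGE_SIZE = 4096  # assumed 4 KB pages

def page_of(addr: int) -> int:
    """Return the virtual page number containing addr."""
    return addr // PAGE_SIZE

def tlb_lookups(addresses):
    """Per-element interface: one TLB lookup per access."""
    return len(addresses)

def grouped_tlb_lookups(addresses):
    """Grouped interface: consecutive accesses to the same page share one lookup."""
    return sum(1 for _page, _group in groupby(addresses, key=page_of))

if __name__ == "__main__":
    # A strided vector access pattern that mostly stays within one page per burst.
    addrs = [0x1000 + 8 * i for i in range(64)] + [0x5000 + 8 * i for i in range(64)]
    print("per-element lookups:", tlb_lookups(addrs))        # 128
    print("grouped lookups:    ", grouped_tlb_lookups(addrs))  # 2
```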
APA, Harvard, Vancouver, ISO, and other styles
2

Han, Yiding. "Graphics Processing Unit-Based Computer-Aided Design Algorithms for Electronic Design Automation." DigitalCommons@USU, 2014. https://digitalcommons.usu.edu/etd/3868.

Full text
Abstract:
Electronic design automation (EDA) tools are a specific set of software programs that play important roles in modern integrated circuit (IC) design. This software automates the IC design process across various stages. Among these stages, two important EDA design tools are the focus of this research: floorplanning and global routing. Specifically, the goal of this study is to parallelize these two tools such that their execution time can be significantly shortened on modern multi-core and graphics processing unit (GPU) architectures. The GPU hardware is a massively parallel architecture, enabling thousands of independent threads to execute concurrently. Although a small set of EDA tools can benefit from using the GPU to accelerate their speed, most algorithms in this field are designed with the single-core paradigm in mind. The floorplanning and global routing algorithms are among the latter, and it is difficult to obtain any speedup for them on the GPU due to their inherently sequential nature. This work parallelizes the floorplanning and global routing algorithms through a novel approach and achieves significant speedups for both tools implemented on GPU hardware. Specifically, with a complete overhaul of solution space and design space exploration, the GPU-based floorplanning algorithm achieves a 4-166X speedup, while producing similar or improved solutions compared with the sequential algorithm. The GPU-based global routing algorithm is shown to achieve significant speedup against existing state-of-the-art routers, while delivering competitive solution quality. Importantly, this parallel model for global routing produces a stable solution that is independent of the level of parallelism. In summary, this research has shown that, through a design paradigm overhaul, sequential algorithms can also benefit from a massively parallel architecture. The findings of this study have a positive impact on the efficiency and design quality of the modern EDA design flow.
APA, Harvard, Vancouver, ISO, and other styles
3

Buthker, Gregory S. "Automated Vehicle Electronic Control Unit (ECU) Sensor Location Using Feature-Vector Based Comparisons." Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1558613387729083.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Davidson, Conda. "Comparative analysis of teaching methods and learning styles in a high school computer spreadsheet unit /." free to MU campus, to others for purchase, 2000. http://wwwlib.umi.com/cr/mo/fullcit?p9974620.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bao, Rui He. "Case-based reasoning for automotive engine electronic control unit calibration." Thesis, University of Macau, 2009. http://umaclib3.umac.mo/record=b2099648.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Horsburgh, Ian J. "The development of a mass memory unit for a micro-satellite using NAND flash memory." Thesis, Stellenbosch : Stellenbosch University, 2005. http://hdl.handle.net/10019.1/50474.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2005.
ENGLISH ABSTRACT: This thesis investigates the possible use of NAND flash memory for a mass memory unit on a micro-satellite. The investigation begins with an analysis of NAND flash memory devices, including the complexity of the internal circuitry and the occurrence of bad memory sections (bad blocks). Design specifications are produced and various design architectures are discussed and evaluated. Subsequently, a four-bus serial access architecture using 16-bit NAND flash devices was chosen to be developed further. A VHDL design was created in order to realise the intended system functionality. The main functions of the design include a sustained write data rate of 24 MB/s, bad block management, multiple image storing, error checking and correction, defective device handling and reading while writing. The design was simulated extensively using NAND flash simulation models. Finally, a demonstration test board was designed and produced. This board includes an FPGA and an array of 16 8-bit NAND flash devices. The board was tested successfully and a write data rate of 12 MB/s was achieved along with all the other main functions.
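As a loose illustration of the bad block management mentioned in the abstract, the sketch below lays out image data sequentially while skipping blocks recorded in a bad-block table; the block size, device model and write interface are assumptions for illustration, not the VHDL design developed in the thesis.

```python
# Illustrative bad-block-aware sequential writer (block size and device API assumed).
BLOCK_SIZE = 128 * 1024  # assumed erase-block size in bytes

class NandDevice:
    """Toy stand-in for a NAND flash device with a known bad-block table."""
    def __init__(self, num_blocks, bad_blocks):
        self.num_blocks = num_blocks
        self.bad_blocks = set(bad_blocks)
        self.blocks = {}

    def program_block(self, block_index, data):
        if block_index in self.bad_blocks:
            raise IOError(f"block {block_index} is marked bad")
        self.blocks[block_index] = data

def write_image(device, image: bytes):
    """Write an image sequentially, skipping blocks in the bad-block table.
    Returns the list of physical blocks used (a simple logical-to-physical map)."""
    used = []
    block = 0
    for offset in range(0, len(image), BLOCK_SIZE):
        while block in device.bad_blocks:
            block += 1                     # skip factory or runtime bad blocks
        if block >= device.num_blocks:
            raise IOError("device full")
        device.program_block(block, image[offset:offset + BLOCK_SIZE])
        used.append(block)
        block += 1
    return used

if __name__ == "__main__":
    dev = NandDevice(num_blocks=16, bad_blocks=[1, 4])
    print(write_image(dev, b"\x00" * (3 * BLOCK_SIZE)))   # [0, 2, 3]
```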
APA, Harvard, Vancouver, ISO, and other styles
7

Bryer, Bevan. "Protection unit for radiation induced errors in flash memory systems." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/50070.

Full text
Abstract:
Thesis (MScEng)--University of Stellenbosch, 2004.
ENGLISH ABSTRACT: Flash memory and the errors induced in it by radiation were studied. A test board was then designed and developed, as well as a radiation test program. The system was irradiated. This gave successful results, which confirmed aspects of the study and gave valuable insight into flash memory behaviour. To date, the board is still being used to test various flash devices for radiation-harsh environments. A memory protection unit (MPU) was conceptually designed and developed to monitor flash devices, increasing their reliability in radiation-harsh environments. This unit was designed for intended use onboard a micro-satellite. The chosen flash device for this study was the K9F1208XOA model from SAMSUNG. The MPU was designed to detect, maintain, mitigate and report radiation induced errors in this flash device. Most of the design was implemented in field programmable gate arrays and was realised using VHDL. Simulations were performed to verify the functionality of the design subsystems. These simulations showed that the various emulated errors were handled successfully by the MPU. A modular design methodology was followed, therefore allowing the chosen flash device to be replaced with any flash device, following a small reconfiguration. This also allows parts of the system to be duplicated to protect more than one device.
APA, Harvard, Vancouver, ISO, and other styles
8

Alm, Therese. "Design av en användarvänlig Androidapplikation för trådlös kommunikation med Electronic Control Unit för bil eller testmiljö." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119859.

Full text
Abstract:
This thesis has been carried out within the Bachelor of Computer Science and Engineering programme at Linköping University in the spring of 2015, performed at the request of ArcCore in Linköping. The aim is to create and design a user-friendly Android application which can communicate wirelessly with the electronic control units in a car or test environment. The Android application consists of five main screens, four of which have the task of printing the information travelling on the CAN bus. The five screens are start, fault codes, sensors, ECU extract and overview. Start takes you to the other screens, fault codes prints all the fault codes, sensors prints all sensor values, ECU extract prints all information and overview displays a virtual dashboard. User evaluations have been conducted to develop both the design and layout of the application. The development process was executed using extreme programming, and the evaluations have been carried out with the help of traditional usability tests and binary success. The evaluations' feedback has been used to develop both the design and the user-friendliness of the application. The application has been developed in Android Studio and communicates with the ECUs using a PEAK PCAN Wireless Gateway which is connected to a Hercules Development Kit TMS570 MCU. The result is that we can clearly see that the ease of use has increased during the development process and that we now, by using evaluations, have a nice and easy-to-use Android application that can be used by all who want to access the information available on the CAN bus.
APA, Harvard, Vancouver, ISO, and other styles
9

Österberg, Martin. "Design av en användarvänlig Androidapplikation för trådlös kommunikation med Electronic Control Unit för bil eller testmiljö." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-127389.

Full text
Abstract:
This is a study about software usability and information presentation in an Android application. The application is meant to present the information being sent on the CAN bus of a car, or to listen to the messages being sent by just a few ECUs connected via a CAN bus. The study aims to evaluate the usability of the application based on an exploratory research method. The study was conducted using an iterative process, where we first created a prototype. The prototype was then presented to a number of users and they were asked to do a number of simple tasks within the application. We then used the feedback from this examination to improve the usability of the application. After this we did a second presentation of the application and compared the results to the results from the previous tests to see if we had succeeded in increasing the usability of the application. The first study tests showed that there were several weaknesses in the application that we ourselves did not see. It showed that our background was too prominent and that the text became hard to read, along with several other small things that we corrected. We then saw in our second tests that most parts of the application had improved. There were still some parts of the application that could use further development, and all people want different things in an application.
APA, Harvard, Vancouver, ISO, and other styles
10

Lööf, Sam. "Evaluation of Protocols for Transfer of Automotive Data from an Electronic Control Unit." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-176080.

Full text
Abstract:
Nowadays almost all motorized vehicles use electronic control units (ECUs) to control parts of a vehicle’s function. A good way to understand a vehicle’s behaviour is to analyse logging data containing ECU internal variables. Data must then be transferred from the ECU to a computer in order to study such data. Today, Keyword Protocol (KWP) requests are used to read data from the ECUs at Scania. The method is not suitable if many signals should be logged with a higher transfer rate than the one used today. In this thesis, communication protocols, that allow an ECU to communicate with a computer, are studied. The purpose of this master’s thesis is to examine how the transfer rate of variables from Scania’s ECUs to a computer can become faster compared to the method used today in order to get a more frequent logging of the variables. The method that was chosen was implemented, evaluated and also compared to the method used today. The busload, total CPU load and CPU load for the frequency used during the experiments, 100 Hz, was also examined and evaluated. The experiments performed show that the method chosen, data acquisition (DAQ) with CAN Calibration Protocol (CCP), increased the transfer rate of the internal ECU variables significantly compared to the method using KWP requests. The results also show that the number of signals have a major impact on the busload for DAQ. The busload is the parameter that limits the number of signals that can be logged. The total CPU load and the CPU load for 100 Hz are not affected significantly compared to when no transmissions are performed. Even though the busload can become high if many variables are used in DAQ, DAQ with CCP is preferable over KWP requests. This is due to the great increase in transfer rate of the ECU internal variables and thus a great increase in the logging frequency.
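The abstract identifies busload as the factor limiting how many signals can be logged with DAQ. The back-of-the-envelope sketch below estimates classical CAN busload for periodic logging at 100 Hz; the 500 kbit/s bitrate, the roughly 135-bit worst-case frame length and the 4-byte signal size are generic assumptions, not Scania's figures.

```python
# Back-of-the-envelope classical CAN busload for periodic DAQ logging.
# Bitrate, frame length and signal size are generic assumptions, not thesis values.

BITRATE_BPS = 500_000      # assumed 500 kbit/s CAN bus
BITS_PER_FRAME = 135       # rough worst case for an 8-byte classical CAN frame with stuffing
BYTES_PER_FRAME = 8

def busload(num_signals, bytes_per_signal=4, rate_hz=100):
    """Fraction of bus bandwidth consumed by logging num_signals at rate_hz."""
    signals_per_frame = max(1, BYTES_PER_FRAME // bytes_per_signal)
    frames_per_cycle = -(-num_signals // signals_per_frame)   # ceiling division
    bits_per_second = frames_per_cycle * BITS_PER_FRAME * rate_hz
    return bits_per_second / BITRATE_BPS

if __name__ == "__main__":
    for n in (10, 50, 100, 200):
        print(f"{n:4d} signals -> ~{busload(n):6.1%} busload")
```

The example shows how quickly periodic logging saturates the bus as the signal count grows, which is consistent with the abstract's observation that busload, rather than CPU load, limits the number of signals that can be logged.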
APA, Harvard, Vancouver, ISO, and other styles
11

Civelek, Utku. "A Software Tool For Vehicle Calibration, Diagnosis And Test Viacontroller Area Network." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614836/index.pdf.

Full text
Abstract:
Controller Area Networks (CANs) in vehicles need highly sophisticated software tools to be designed and tested in development and production phases. These tools consume a lot of computer resources and usually have complex user interfaces. Therefore, they are not feasible for vehicle service stations, where low-performance computers are used and the workers employed are not very familiar with software. In this thesis, we develop a measurement, calibration, test and diagnosis program - diaCAN - that is suitable for service stations. diaCAN can transmit and receive messages over 3 CAN bus channels. It can display and plot the data received from the bus, import network message and Electronic Control Unit (ECU) configurations, and record bus traffic with standard file formats. Moreover, diaCAN can calibrate ECU values, acquire fault records and test vehicle components with CAN Calibration Protocol functions. All of these capabilities are verified and evaluated on a test bed with a real CAN bus and ECUs.
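For readers unfamiliar with this kind of tooling, the minimal sketch below shows the receive-and-record core of a CAN capture loop using the python-can library; the SocketCAN channel name, frame count and log format are assumptions for illustration and are unrelated to diaCAN's actual implementation.

```python
# Minimal CAN capture loop (illustrative; channel name and log format are assumptions).
import can

def record(channel="can0", count=100, outfile="trace.log"):
    """Receive `count` frames and write them as timestamped text lines."""
    bus = can.interface.Bus(channel=channel, bustype="socketcan")
    with open(outfile, "w") as log:
        for _ in range(count):
            msg = bus.recv(timeout=1.0)        # returns None on timeout
            if msg is None:
                continue
            log.write(f"{msg.timestamp:.6f} 0x{msg.arbitration_id:X} {msg.data.hex()}\n")
    bus.shutdown()

if __name__ == "__main__":
    record()
```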
APA, Harvard, Vancouver, ISO, and other styles
12

Andersson, Gustav. "Translation of CAN Bus XML Messages to C Source Code." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-96424.

Full text
Abstract:
The concept of translating source code into other target programming languages is extensively used in a wide range of applications. Danfoss Power Solutions AB, a company located in Älmhult, strives to streamline its way of developing software for microcontrollers by implementing this idea. Their proprietary software tool PLUS+1 GUIDE is based on the CAN bus communication network, which allows electronic control units to share data represented in the XML format. Due to compatibility problems, the application in the electronic control units requires this data to be translated into source code in the low-level C programming language. This thesis project proposes an approach for facilitating this task by implementing a source-to-source compiler that performs the translation with a reduced level of manual user involvement. A literature review was conducted in order to find existing solutions relevant to our project task. An analysis of the provided XML input files was thereafter performed to clarify a software design suitable for the problem. By using a general XML parser, a solution was then constructed. The implementation resulted in a fully functional source-to-source compiler, producing the generated C code within a time range of 73–85 milliseconds for input test files of typical size. The feedback received from the domain experts at Danfoss confirms the usability of the proposed solution.
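A minimal sketch of the general source-to-source idea is shown below: parse an XML message description with a general XML parser and emit C declarations. The XML layout, element names and generated struct style are invented for illustration and do not reflect Danfoss's PLUS+1 formats.

```python
# Toy XML -> C code generator (the XML schema below is a made-up illustration).
import xml.etree.ElementTree as ET

EXAMPLE_XML = """
<message name="EngineStatus" id="0x18F00400">
  <signal name="engine_speed" type="uint16_t"/>
  <signal name="coolant_temp" type="int8_t"/>
</message>
"""

C_TYPE_DEFAULT = "uint8_t"

def generate_c(xml_text: str) -> str:
    """Translate one <message> element into a C struct and an ID macro."""
    msg = ET.fromstring(xml_text)
    name = msg.get("name")
    lines = [f"#define {name.upper()}_ID {msg.get('id')}", "", "typedef struct {"]
    for sig in msg.findall("signal"):
        ctype = sig.get("type", C_TYPE_DEFAULT)
        lines.append(f"    {ctype} {sig.get('name')};")
    lines.append(f"}} {name}_t;")
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate_c(EXAMPLE_XML))
```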
APA, Harvard, Vancouver, ISO, and other styles
13

Straathof, Bas Theodoor. "A Deep Learning Approach to Predicting the Length of Stay of Newborns in the Neonatal Intensive Care Unit." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-282873.

Full text
Abstract:
Recent advancements in machine learning and the widespread adoption of electronic health records have enabled breakthroughs for several predictive modelling tasks in health care. One such task that has seen considerable improvements brought by deep neural networks is length of stay (LOS) prediction, in which research has mainly focused on adult patients in the intensive care unit. This thesis uses multivariate time series extracted from the publicly available Medical Information Mart for Intensive Care III database to explore the potential of deep learning for classifying the remaining LOS of newborns in the neonatal intensive care unit (NICU) at each hour of the stay. To investigate this, this thesis describes experiments conducted with various deep learning models, including long short-term memory cells, gated recurrent units, fully-convolutional networks and several composite networks. This work demonstrates that modelling the remaining LOS of newborns in the NICU as a multivariate time series classification problem naturally facilitates repeated predictions over time as the stay progresses and enables advanced deep learning models to outperform a multinomial logistic regression baseline trained on hand-crafted features. Moreover, it shows the importance of the newborn’s gestational age and binary masks indicating missing values as variables for predicting the remaining LOS.
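As a hedged illustration of the kind of model the abstract describes, the sketch below defines a small PyTorch LSTM that emits a remaining-LOS class at every hour of a stay; the feature count, number of classes and hidden size are placeholders rather than the thesis's configuration.

```python
# Per-time-step remaining-LOS classification sketch in PyTorch (sizes are illustrative).
import torch
import torch.nn as nn

class RemainingLOSClassifier(nn.Module):
    def __init__(self, n_features=20, n_classes=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, hours, features); binary masks for missing values can be
        # concatenated to the feature dimension before calling the model.
        out, _ = self.lstm(x)          # (batch, hours, hidden)
        return self.head(out)          # (batch, hours, n_classes): one prediction per hour

if __name__ == "__main__":
    model = RemainingLOSClassifier()
    dummy = torch.randn(4, 48, 20)     # 4 stays, 48 hours, 20 variables
    print(model(dummy).shape)          # torch.Size([4, 48, 10])
```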
APA, Harvard, Vancouver, ISO, and other styles
14

Skoupý, Petr. "Využití diagnostických metod pro hodnocení technického stavu vozidel." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-232616.

Full text
Abstract:
The thesis focuses on the influence of current automotive diagnostic methods that can be used for the evaluation of vehicles' technical condition. The first part of the thesis classifies various automotive diagnostic methods, including the different devices. The second part describes the vehicle valuation procedure according to the methodology of Czech Experts' Norm No. I/2005. The third part introduces valuation procedures carried out by means of diagnostic devices, including practical examples. The fourth part compares and evaluates the results. Lastly, the thesis presents the capabilities of technical diagnostics during the vehicle valuation procedure and a possible view of both methods.
APA, Harvard, Vancouver, ISO, and other styles
15

McManigal, Gerald F. "An electronic bulletin board for UNIX based systems." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9935.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Vandi, Damiano. "ADAS Value Optimization for Rear Park Assist: Improvement and Assessment of Sensor Fusion Strategy." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
The project for this thesis consists in an ADAS Value Optimization activity conducted during an internship in Maserati S.p.A. with the objective of removing the ultrasonic sensors used for the Rear Park Assist (RPA) ADAS feature, obtaining the same functionality and performance in the detection and signaling of obstacles behind the car through a new system based on a sensor fusion strategy between Rear View Camera (RVC) and Blind Spot Radars (BSD). To achieve this goal, a study of the current RPA feature has been conducted, and starting from a previous implementation of the sensor fusion strategy for the new system, multiple updates and improvements have been implemented in order to achieve the functionality and performance required. Both hardware and software components of the system were updated and redesigned in the MATLAB/Simulink environment, and the final system obtained was tested through a standard validation procedure in a virtual simulation environment, obtaining encouraging results compatible with the RPA requirements and demonstrating the technical and economic feasibility of the developed RPA system based on a sensor fusion strategy between RVC and BSD which, after additional tests on the actual vehicle, could go into production.
APA, Harvard, Vancouver, ISO, and other styles
17

Clabough, Douglas M. "An electronic calendar system in a distributed UNIX environment." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9906.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

McCanna, Frank. "The task distribution preprocessor (TDP) /." Online version of thesis, 1989. http://hdl.handle.net/1850/10584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Dietrich, Gregory L. "Adapting a portable SIMULA compiler to Perkin-Elmer computers in a UNIX environment." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9910.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Han, Guodong, and 韩国栋. "Profile-guided loop parallelization and co-scheduling on GPU-based heterogeneous many-core architectures." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B50534257.

Full text
Abstract:
GPU-based heterogeneous architectures (e.g., Tianhe-1A, Nebulae), combining multi-core CPUs and GPUs, have drawn increasing adoption and are becoming the norm of supercomputing as they are cost-effective and power-efficient. However, programming such heterogeneous architectures still requires significant effort from application developers using sophisticated GPU programming languages such as CUDA and OpenCL. Although some automatic parallelization tools utilizing static analysis could ease the programming effort, this approach can only parallelize loops 100% free of inter-iteration dependencies (i.e., determined DO-ALL loops) because of the imprecision of static analysis. To exploit the abundant runtime parallelism and take full advantage of the computing resources in both the CPU and the GPU, in this work we propose a new user-friendly compiler framework and runtime system, which helps Java applications harness the full power of a heterogeneous system. It unveils an all-round system design unifying the programming style and language for transparent use of both CPUs and GPUs, automatically parallelizing all kinds of loops, scheduling workloads efficiently across CPU and GPU resources while ensuring data coherence during highly-threaded execution. By means of simple user annotations, sequential Java source code will be analyzed, translated and compiled into a dual executable consisting of CUDA kernels and multiple Java threads running on GPU and CPU cores respectively. Annotated loops will be automatically split into loop chunks (or tasks) scheduled to execute on all available GPU/CPU cores. To guide the runtime task scheduling, we develop a novel dynamic loop profiler which generates the program dependency graph (PDG) and computes the density of dependencies across iterations through a hybrid checking scheme combining intra-warp and inter-warp analyses. Implementing a GPU-tailored thread-level speculation (TLS) model, our system supports speculative execution of loops with moderate dependency densities and privatization of loops having only false dependencies on the GPU side. Our scheduler also supports task stealing and task sharing algorithms that allow swift load redistribution across GPU and CPU. We have carried out several experiments to evaluate the profiling overhead and up to 11 real-life applications to evaluate our system performance. Testing results show that the overhead is moderate compared with sequential execution and prove that almost all the applications could benefit from our system.
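To make the notion of cross-iteration dependency density concrete, the simplified sketch below records per-iteration read and write sets for a loop with subscripted subscripts and counts the fraction of iteration pairs that conflict; it is a sequential host-side illustration, not the hybrid intra-warp/inter-warp GPU profiler of the thesis.

```python
# Simplified dependency-density profiler for a loop over subscripted subscripts.
from itertools import combinations

def profile_loop(index_array, n_iterations):
    """Record per-iteration read/write sets for: a[idx[i]] = a[idx[i]] + f(i)."""
    reads, writes = [], []
    for i in range(n_iterations):
        target = index_array[i]
        reads.append({target})
        writes.append({target})
    return reads, writes

def dependency_density(reads, writes):
    """Fraction of iteration pairs with a RAW, WAR or WAW conflict."""
    n = len(reads)
    conflicts = sum(
        1 for i, j in combinations(range(n), 2)
        if writes[i] & reads[j] or reads[i] & writes[j] or writes[i] & writes[j]
    )
    pairs = n * (n - 1) // 2
    return conflicts / pairs if pairs else 0.0

if __name__ == "__main__":
    idx = [0, 1, 2, 1, 3, 0, 4, 5]   # subscripted subscripts; repeats create conflicts
    r, w = profile_loop(idx, len(idx))
    print(f"dependency density: {dependency_density(r, w):.2f}")
```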
APA, Harvard, Vancouver, ISO, and other styles
21

Zhang, Chenggang, and 张呈刚. "Run-time loop parallelization with efficient dependency checking on GPU-accelerated platforms." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B47167658.

Full text
Abstract:
General-Purpose computing on Graphics Processing Units (GPGPU) has attracted a lot of attention recently. Exciting results have been reported in using GPUs to accelerate applications in various domains such as scientific simulations, data mining, bio-informatics and computational finance. However, up to now GPUs can only accelerate data-parallel loops with statically analyzable parallelism. Loops with dynamic parallelism (e.g., with array accesses through subscripted subscripts), an important pattern in many general-purpose applications, cannot be parallelized on GPUs using existing technologies. Run-time loop parallelization using Thread Level Speculation (TLS) has been proposed in the literature to parallelize loops with statically un-analyzable dependencies. However, most of the existing TLS systems are designed for multiprocessor/multi-core CPUs. GPUs have fundamental differences from CPUs in both hardware architecture and execution model, making previous TLS designs either not work or work inefficiently when ported to GPUs. This thesis presents GPU-TLS, a runtime system designed to support speculative loop parallelization on GPUs. The design of GPU-TLS addresses several key problems encountered when adapting TLS to GPUs: (1) To reduce the possibility of mis-speculation, a deferred-update memory versioning scheme is adopted to avoid mis-speculations caused by inter-iteration WAR and WAW dependencies. A technique named intra-warp value forwarding is proposed to respect some inter-iteration RAW dependencies, which further reduces the mis-speculation possibility. (2) An incremental speculative execution scheme is designed to exploit partial parallelism within loops. This avoids excessive re-executions and reduces the mis-speculation penalty. (3) The dependency checking among thousands of speculative GPU threads poses large overhead and can easily become the performance bottleneck. To lower the overhead, we design several efficient dependency checking schemes named PRW+BDC, SW, SR, SRW+EDC, and SRW+LDC respectively. (4) We devise a novel parallel commit scheme to avoid the overhead incurred by the serial commit phase in most existing TLS designs. We have carried out extensive experiments on two platforms with different NVIDIA GPUs, using both a synthetic loop that can simulate loops with different characteristics and several loops from real-life applications. Testing results show that the proposed intra-warp value forwarding and eager dependency checking techniques can improve the performance for almost all kinds of loop patterns. We observe that, compared with other dependency checking schemes, SR and SW can achieve better performance in most cases. It is also shown that the proposed parallel commit scheme is especially useful for loops with large write set sizes and small numbers of inter-iteration WAW dependencies. Overall, GPU-TLS can achieve speedups ranging from 5 to 105 for loops with dynamic parallelism.
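The deferred-update idea can be illustrated with a small sequential sketch: each speculative iteration buffers its writes privately, a checking pass detects read-after-write violations across iterations, and only non-violating iterations commit. The GPU-TLS schemes named above perform this checking massively in parallel on the device; the example below is only a toy model of the principle.

```python
# Deferred-update speculative execution of a loop, illustrated sequentially.

def speculative_run(a, idx):
    """Execute a[idx[i]] += 1 speculatively for all iterations, buffering writes."""
    read_sets, write_buffers = [], []
    for i in range(len(idx)):
        addr = idx[i]
        read_sets.append({addr})                   # value read from the master copy
        write_buffers.append({addr: a[addr] + 1})  # write kept private (deferred update)
    return read_sets, write_buffers

def check_and_commit(a, read_sets, write_buffers):
    """Commit iterations in order; an iteration that read a location written by an
    earlier iteration saw a stale value (RAW violation) and must re-execute."""
    committed_writes = set()
    to_reexecute = []
    for i, (reads, buf) in enumerate(zip(read_sets, write_buffers)):
        if reads & committed_writes:
            to_reexecute.append(i)                 # mis-speculation detected
            continue
        for addr, value in buf.items():
            a[addr] = value
        committed_writes |= set(buf)
    return to_reexecute

if __name__ == "__main__":
    a = [0] * 8
    idx = [0, 1, 2, 1, 3]                          # iterations 1 and 3 touch the same element
    reads, bufs = speculative_run(a, idx)
    redo = check_and_commit(a, reads, bufs)
    print("array:", a, "re-execute iterations:", redo)   # iteration 3 must re-run
```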
APA, Harvard, Vancouver, ISO, and other styles
22

Shear, Raymond F. "Implementation of a Modula 2 subset compiler supporting a "C" language interface using commonly available UNIX tools /." Online version of thesis, 1989. http://hdl.handle.net/1850/10505.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Tatakis, Thomas Jr. "NAMER : a distributed name server for a connected UNIX environment /." Online version of thesis, 1988. http://hdl.handle.net/1850/10448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Grimm, Frank. "Enabling collaborative modelling for a multi-site model-driven software development approach for electronic control units." Thesis, Bournemouth University, 2012. http://eprints.bournemouth.ac.uk/20688/.

Full text
Abstract:
An important aspect of support for distributed work is to enable users at different sites, even in different countries, to work collaboratively even though they may be working on the same artefacts. In the case of software system design, design models need to be accessible by more than one modeller at a time, allowing them to work independently from each other in what can be called a collaborative modelling process supporting parallel evolution. In addition, as such design is a largely creative process, users are free to create layouts which appear to better depict their understanding of certain model elements presented in a diagram. That is, the layout of the model carries meaning which exceeds the simple structural or topological connections. However, tools for merging such models tend to do so from a purely structural perspective, thus losing an important aspect of the meaning which was intended to be conveyed by the modeller. This thesis presents a novel approach to model merging which allows the preservation of such layout meaning when merging. It first presents evidence from an industrial study which demonstrates how modellers use layout to convey meaning. An important finding of the study is that diagram layout conveys domain-specific meaning and is important for modellers. This thesis therefore demonstrates the importance of diagram layout in model-based software engineering. It then introduces an approach to merging which allows for the preservation of domain-specific meaning in diagrams of models, and finally describes a prototype tool and core aspects of its implementation.
APA, Harvard, Vancouver, ISO, and other styles
25

Cooley, Daniel Warren. "Data acquisition unit for low-noise, continuous glucose monitoring." Diss., University of Iowa, 2012. https://ir.uiowa.edu/etd/2844.

Full text
Abstract:
As the number of people with diabetes continues to increase, research efforts improving glucose testing methods and devices are under way to improve outcomes and quality of life for diabetic patients. This dissertation describes the design and testing of a Data Acquisition Unit (DAU) providing low noise photocurrent spectra for use in a continuous glucose monitoring system. The goal of this research is to improve the signal to noise ratio (SNR) of photocurrent measurements to increase glucose concentration measurement accuracy. The glucose monitoring system consists of a portable monitoring device and base station. The monitoring device measures near infrared (IR) absorption spectra from interstitial fluid obtained by microdialysis or ultrafiltration probe and transmits the spectra to a base station via USB or a ZigBee radio link. The base station utilizes chemometric calibration methods to calculate glucose concentration from the photocurrent spectra. Future efforts envisage credit card-sized monitoring devices. The glucose monitor system measures the optical absorbance spectrum of an interstitial fluid (ISF) sample pumped through a fluid chamber inside a glucose sensor. Infrared LEDs in the glucose sensor illuminate the ISF sample with IR light covering the 2.2 to 2.4 micron wavelength region where glucose has unique features in its absorption spectrum. Light that passes through the sample propagates through a linearly variable bandpass filter and impinges on a photodiode array. The center frequency of the variable filter is graded along its length such that the filter and photodiode array form a spectrometer. The data acquisition unit (DAU) conditions and samples photocurrent from each photodiode channel and sends the resulting photocurrent spectra to the Main Controller Unit (MCU). The MCU filters photocurrent samples providing low noise photocurrent spectra to a base station via USB or Zigbee radio link. The glucose monitoring system limit of detection (LOD) from a single glucose sensor wavelength is 5.8 mM with a system bandwidth of 0.00108 Hz. Further analysis utilizing multivariate calibration methods such as the net analyte signal method promise to reduce the glucose monitoring system LOD approaching a clinically useful level of approximately 2 mM.
APA, Harvard, Vancouver, ISO, and other styles
26

Del, Rincon Luis A. 1963. "Performance evaluation of microcomputer execution of AHPL combinational logic units." Thesis, The University of Arizona, 1990. http://hdl.handle.net/10150/278438.

Full text
Abstract:
Design automation systems use Computer Hardware Description Languages as the input languages to test and verify the design of digital systems. AHPL is a popular hardware description language used to describe digital systems. This language is supported by a function-level simulator (HPSIM2). This simulator was upgraded (HPSIM2_CL) to support the use of unit description called Combinational Logic Units or CLUNITs. This thesis presents the transition of HPSIM2_CL from the VAX to the Macintosh microcomputer environment. The modifications made to the simulator are explained, and examples to test and analyze execution performance are also presented.
APA, Harvard, Vancouver, ISO, and other styles
27

McCullagh, Paul J. "DistriX : an implementation of UNIX on transputers." Master's thesis, University of Cape Town, 1989. http://hdl.handle.net/11427/15901.

Full text
Abstract:
Bibliography: pages 104-110.
Two technologies, distributed operating systems and UNIX, are very relevant in computing today. Many distributed systems have been produced and many are under development. To a large extent, distributed systems are considered to be the only way to solve the computing needs of the future. UNIX, on the other hand, is becoming widely recognized as the industry standard for operating systems. The transputer, unlike UNIX and distributed systems, is a relatively new innovation. The transputer is a concurrent processing machine based on mathematical principles. Increasingly, the transputer is being used to solve a wide range of problems of a parallel nature. This thesis combines these three aspects in creating a distributed implementation of UNIX on a network of transputers. The design is based on the satellite model. In this model a central controlling processor is surrounded by worker processors, called satellites, in a master/slave relationship.
APA, Harvard, Vancouver, ISO, and other styles
28

Merritt, John W. "Distributed file systems in an authentication system." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9938.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Kaewprag, Pacharmon Fuhry. "Visual Analysis of Bayesian Networks for Electronic Health Records." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1531778349031686.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Hoffman, P. Kuyper. "A file server for the DistriX prototype : a multitransputer UNIX system." Master's thesis, University of Cape Town, 1989. http://hdl.handle.net/11427/17188.

Full text
Abstract:
Bibliography: pages 90-94.
The DISTRIX operating system is a multiprocessor distributed operating system based on UNIX. It consists of a number of satellite processors connected to central servers. The system is derived from the MINIX operating system and is compatible with UNIX Version 7. A remote procedure call interface is used in conjunction with a system-wide, end-to-end communication protocol that connects satellite processors to the central servers. A cached file server provides access to all files and devices at the UNIX system call level. The design of the file server is discussed in depth and its performance evaluated. Additional information is given about the software and hardware used during the development of the project. The MINIX operating system has proved to be a good choice as the software base, but certain of its features have proved to be less suitable. The Inmos transputer emerges as a processor with many useful features that eased the implementation.
APA, Harvard, Vancouver, ISO, and other styles
31

Andung, Muntaha Muhamad. "Non-intrusive Logging and Monitoring System of a Parameterized Hardware-in-the-loop Real-Time Simulator." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254655.

Full text
Abstract:
The Electronic Control Unit (ECU) is a crucial component in today’s vehicles. In a complete vehicle, there are many ECUs installed. Each of these controls a single function of the vehicle. During the development cycle of an ECU, its functionality needs to be validated against the requirement specification. The Hardware-in-the-loop (HIL) method is commonly used to do this by testing the ECU in a virtual representation of its controlled system. One crucial part of the HIL testing method is an intermediary component that acts as a bridge between the simulation computer and the ECU under test. This component runs a parameterized real-time system that translates messages from the simulation computer to the ECU under test and vice versa. It has strict real-time requirements for each of its tasks to complete. A logging and monitoring system is needed to ensure that the intermediary component is functioning correctly. This functionality is implemented in the form of low-priority additional tasks that run concurrently with the high-priority message translation tasks. The implementation of these tasks, along with a distributed system to support the logging and monitoring functionality, is presented in this thesis work. Several execution time measurements are carried out to obtain information on how the parameters of a task affect its execution time. Then, linear regression analysis is used to model the execution time estimation of the parameterized tasks. Finally, time-demand analysis is utilized to guarantee that the system is schedulable.
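The schedulability argument at the end of the abstract can be illustrated with standard time-demand (response-time) analysis for fixed-priority periodic tasks; the task parameters in the sketch below are placeholders, not measured values from the thesis.

```python
# Time-demand analysis for fixed-priority periodic tasks (rate-monotonic order).
import math

def response_time(tasks, i):
    """Worst-case response time of task i given higher-priority tasks 0..i-1.
    Each task is (C, T): worst-case execution time and period (deadline = period)."""
    C_i, T_i = tasks[i]
    r = C_i
    while True:
        demand = C_i + sum(math.ceil(r / T_j) * C_j for C_j, T_j in tasks[:i])
        if demand == r:
            return r                       # fixed point reached
        if demand > T_i:
            return None                    # deadline missed
        r = demand

def schedulable(tasks):
    return all(response_time(tasks, i) is not None for i in range(len(tasks)))

if __name__ == "__main__":
    # (C, T) in milliseconds: two translation tasks plus a low-priority logging task.
    tasks = [(1, 5), (2, 10), (3, 20)]
    for i in range(len(tasks)):
        print(f"task {i}: response time = {response_time(tasks, i)} ms")
    print("schedulable:", schedulable(tasks))
```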
APA, Harvard, Vancouver, ISO, and other styles
32

Lee, Kum-Yu Enid. "Privacy and security of an intelligent office form." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9930.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Kamireddy, Srinath. "Comparison of state estimation algorithms considering phasor measurement units and major and minor data loss." Master's thesis, Mississippi State : Mississippi State University, 2008. http://library.msstate.edu/etd/show.asp?etd=etd-11072008-121521.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Persson, Anders. "Platform development of body area network for gait symmetry analysis using IMU and UWB technology." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-39498.

Full text
Abstract:
Having a device capable of measuring the motions of human gait could be of great importance in medicine and sports. Physicians or researchers could measure and analyse key features of a person's gait for the purposes of rehabilitation or research regarding neurological disabilities. Also in sports, professionals and hobbyists could use such a device for improving their technique or preventing injuries when performing. In this master's thesis, I present research on what technology is capable of today regarding gait analysis devices. The research that was done has then helped the development of a suggested standalone hardware sensor node for a Body Area Network that can support research in gait analysis. Furthermore, several algorithms, for instance UWB Real-Time Location and Dead Reckoning IMU/AHRS algorithms, have been implemented and tested for the purpose of measuring motions while being able to run on the sensor node device. The work in this thesis shows that an IMU sensor has great potential for generating high-rate motion data while performing on a small mobile device. The UWB technology, on the other hand, was disappointing in performance regarding the intended application but can still be useful for wireless communication between sensor nodes. The report also points out the importance of using a high-performance microcontroller for achieving high accuracy in measurements.
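As a small illustration of the IMU side of such a sensor node, the sketch below implements a one-axis complementary filter that fuses gyroscope integration with an accelerometer tilt estimate; the filter coefficient, sample rate and input data are assumptions, and the thesis itself evaluates full AHRS and UWB positioning algorithms.

```python
# One-axis complementary filter fusing gyro rate and accelerometer tilt (illustrative).
import math

ALPHA = 0.98   # assumed coefficient: trust the gyro short-term, the accelerometer long-term

def complementary_filter(gyro_rates, accel_samples, dt=0.01):
    """gyro_rates: angular rate about one axis (rad/s); accel_samples: (ay, az) pairs."""
    angle = 0.0
    history = []
    for rate, (ay, az) in zip(gyro_rates, accel_samples):
        accel_angle = math.atan2(ay, az)                     # tilt seen by the accelerometer
        angle = ALPHA * (angle + rate * dt) + (1 - ALPHA) * accel_angle
        history.append(angle)
    return history

if __name__ == "__main__":
    gyro = [0.1] * 200                                       # slow constant rotation
    accel = [(math.sin(0.001 * i), math.cos(0.001 * i)) for i in range(200)]
    print(f"estimated angle after 2 s: {complementary_filter(gyro, accel)[-1]:.3f} rad")
```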
APA, Harvard, Vancouver, ISO, and other styles
35

Wilson, Brian. "The creation of a functional mailing list server with a graphical user interface." Ohio : Ohio University, 1997. http://www.ohiolink.edu/etd/view.cgi?ohiou1185208875.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Harris, Christopher John. "A parallel model for the heterogeneous computation of radio astronomy signal correlation." University of Western Australia. School of Physics, 2009. http://theses.library.uwa.edu.au/adt-WU2010.0019.

Full text
Abstract:
The computational requirements of scientific research are constantly growing. In the field of radio astronomy, observations have evolved from using single telescopes, to interferometer arrays of many telescopes, and there are currently arrays of massive scale under development. These interferometers use signal and image processing to produce data that is useful to radio astronomy, and the amount of processing required scales quadratically with the scale of the array. Traditional computational approaches are unable to meet this demand in the near future. This thesis explores the use of heterogeneous parallel processing to meet the computational demands of radio astronomy. In heterogeneous computing, multiple hardware architectures are used for processing. In this work, the Graphics Processing Unit (GPU) is used as a co-processor along with the Central Processing Unit (CPU) for the computation of signal processing algorithms. Specifically, the suitability of the GPU to accelerate the correlator algorithms used in radio astronomy is investigated. This work first implemented a FX correlator on the GPU, with a performance increase of one to two orders of magnitude over a serial CPU approach. The FX correlator algorithm combines pairs of telescope signals in the Fourier domain. Given N telescope signals from the interferometer array, N2 conjugate multiplications must be calculated in the algorithm. For extremely large arrays (N >> 30), this is a huge computational requirement. Testing will show that the GPU correlator produces results equivalent to that of a software correlator implemented on the CPU. However, the algorithm itself is adapted in order to take advantage of the processing power of the GPU. Research examined how correlator parameters, in particular the number of telescope signals and the Fast Fourier Transform (FFT) length, affected the results.
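A compact NumPy sketch of the FX correlation described above is given below: each telescope signal is Fourier transformed (the F stage) and every pair of spectra is combined by conjugate multiplication (the X stage); the array size and toy noise inputs are assumptions, and the thesis's contribution is mapping this computation efficiently onto the GPU.

```python
# Toy FX correlator: FFT each antenna signal, then conjugate-multiply every pair.
import numpy as np

def fx_correlate(signals, fft_len=256):
    """signals: array of shape (n_antennas, n_samples). Returns cross-power spectra
    averaged over FFT segments, indexed by (antenna_i, antenna_j, channel)."""
    n_ant, n_samp = signals.shape
    n_seg = n_samp // fft_len
    segs = signals[:, :n_seg * fft_len].reshape(n_ant, n_seg, fft_len)
    spectra = np.fft.rfft(segs, axis=-1)                     # the "F" (Fourier) stage
    n_chan = spectra.shape[-1]
    vis = np.zeros((n_ant, n_ant, n_chan), dtype=complex)
    for i in range(n_ant):
        for j in range(i, n_ant):                            # the "X" (cross-multiply) stage
            vis[i, j] = np.mean(spectra[i] * np.conj(spectra[j]), axis=0)
    return vis

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.standard_normal((4, 4096))                    # 4 antennas, toy noise signals
    print(fx_correlate(data).shape)                          # (4, 4, 129)
```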
APA, Harvard, Vancouver, ISO, and other styles
37

Mazloomzadeh, Ali. "Development of Hardware in the Loop Real-Time Control Techniques for Hybrid Power Systems Involving Distributed Demands and Sustainable Energy Sources." FIU Digital Commons, 2014. http://digitalcommons.fiu.edu/etd/1666.

Full text
Abstract:
The future power grid will effectively utilize renewable energy resources and distributed generation to respond to energy demand while incorporating information technology and communication infrastructure for optimum operation. This dissertation contributes to the development of real-time techniques for wide-area monitoring and secure real-time control and operation of hybrid power systems. To handle the increased level of real-time data exchange, this dissertation develops a supervisory control and data acquisition (SCADA) system that is equipped with a state estimation scheme working from the real-time data. This system is verified on a specially developed laboratory-based test bed facility, used as a hardware and software platform, to emulate the actual scenarios of a real hybrid power system with the highest level of similarity and capability to practical utility systems. It includes phasor measurements at hundreds of measurement points on the system. These measurements were obtained from a specially developed laboratory-based Phasor Measurement Unit (PMU) that is utilized in addition to existing commercial PMUs. The developed PMU was used in conjunction with the interconnected system along with the commercial PMUs. The tested studies included a new technique for detecting partially islanded microgrids, in addition to several real-time techniques for synchronization and parameter identification of hybrid systems. Moreover, due to the numerous integrations of renewable energy resources through DC microgrids, this dissertation examines several practical cases for improving the interoperability of such systems. Furthermore, the increased number of small and dispersed generating stations, and their need to connect quickly and properly to AC grids, led this work to explore the challenges that arise in synchronizing generators to the grid and to introduce a Dynamic Brake system that improves the process of connecting distributed generators to the power grid. Real-time operation and control require data communication security. A research effort in this dissertation was developed based on a Trusted Sensing Base (TSB) process for data communication security. The innovative TSB approach improves the security aspect of the power grid as a cyber-physical system. It is based on available GPS synchronization technology and provides protection against confidentiality attacks in critical power system infrastructures.
APA, Harvard, Vancouver, ISO, and other styles
38

PEREIRA, LILIAN N. "Uso de diodos epitaxiais de Si em dosimetria de fótons." reponame:Repositório Institucional do IPEN, 2013. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10581.

Full text
Abstract:
Master's dissertation (Dissertação de Mestrado)
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
APA, Harvard, Vancouver, ISO, and other styles
39

Marak, Laszlo. "On continuous maximum flow image segmentation algorithm." Phd thesis, Université Paris-Est, 2012. http://tel.archives-ouvertes.fr/tel-00786914.

Full text
Abstract:
In recent years, with the advance of computing equipment and image acquisition techniques, the sizes, dimensions and content of acquired images have increased considerably. Unfortunately, as time passes there is a steadily increasing gap between the classical and parallel programming paradigms and their actual performance on modern computer hardware. In this thesis we consider in depth one particular algorithm, the continuous maximum flow computation. We review in detail why this algorithm is useful and interesting, and we propose efficient and portable implementations on various architectures. We also examine how it performs in terms of segmentation quality on some recent problems of materials science and nano-scale biology.
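As a hedged sketch of the kind of iteration involved (in the spirit of the Appleton-Talbot continuous maximum flow scheme rather than the thesis' tuned, portable implementations), the NumPy fragment below alternates a pressure update, seed clamping, a flow update and a projection onto the capacity constraint; the step size, grid handling, iteration count and threshold are assumptions.

    import numpy as np

    def continuous_max_flow(g, src_mask, snk_mask, tau=0.2, n_iter=2000):
        """Hedged sketch of a continuous maximum flow iteration on a 2-D grid.
        g: capacity (edge-stopping) field; src_mask / snk_mask: boolean seeds."""
        P = np.zeros_like(g)      # pressure / potential field
        Fx = np.zeros_like(g)     # horizontal flow components
        Fy = np.zeros_like(g)     # vertical flow components
        for _ in range(n_iter):
            # pressure update from the divergence of the flow
            div = np.zeros_like(g)
            div[:, :-1] += Fx[:, :-1]
            div[:, 1:] -= Fx[:, :-1]
            div[:-1, :] += Fy[:-1, :]
            div[1:, :] -= Fy[:-1, :]
            P -= tau * div
            P[src_mask], P[snk_mask] = 1.0, 0.0   # clamp the seeds
            # flow update from the pressure gradient, then capacity projection
            Fx[:, :-1] -= tau * (P[:, 1:] - P[:, :-1])
            Fy[:-1, :] -= tau * (P[1:, :] - P[:-1, :])
            mag = np.maximum(np.sqrt(Fx ** 2 + Fy ** 2), 1e-12)
            scale = np.minimum(1.0, g / mag)
            Fx *= scale
            Fy *= scale
        return P > 0.5            # hard segmentation from the potential

    # Invented example: segment a bright disc from a noisy 64x64 image
    rng = np.random.default_rng(0)
    img = rng.normal(0.2, 0.05, (64, 64))
    yy, xx = np.mgrid[:64, :64]
    img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] += 0.6
    g = np.exp(-8.0 * np.hypot(*np.gradient(img)))   # edge-stopping capacities
    src = np.zeros_like(img, bool); src[30:34, 30:34] = True
    snk = np.zeros_like(img, bool); snk[:2, :] = True
    seg = continuous_max_flow(g, src, snk)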
APA, Harvard, Vancouver, ISO, and other styles
40

Scarlato, Michele. "Sicurezza di rete, analisi del traffico e monitoraggio." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3223/.

Full text
Abstract:
The work is divided into three macro-areas. The first concerns a theoretical analysis of how intrusions work, which software is used to carry them out, and how to protect against them (using the devices generically known as firewalls). The second macro-area analyses an intrusion carried out from outside against sensitive servers of a LAN. This analysis is conducted on the files captured by two network interfaces configured in promiscuous mode on a probe located in the LAN. Two interfaces are used so that the probe can attach to two LAN segments with different subnet masks. The attack is analysed with various software tools. This effectively defines a third part of the work, in which the files captured by the two interfaces are analysed: first with software for full-content data, such as Wireshark, then with software for session data, handled with Argus, and finally with statistical data, handled with Ntop. The penultimate chapter, before the conclusions, deals with the installation of Nagios and its configuration to monitor, through plugins, the remaining disk space on a remote agent machine and the MySQL and DNS services. Naturally, Nagios can be configured to monitor any type of service offered on the network.
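As a hedged illustration of the kind of check the Nagios chapter configures (the monitored path and thresholds here are invented, and this is not the actual plugin used in the thesis), a minimal Nagios-style disk-space plugin follows the convention of exit codes 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN:

    #!/usr/bin/env python3
    # Minimal Nagios-style plugin: prints a status line and returns the
    # conventional exit code so the scheduler can classify the service state.
    import shutil
    import sys

    WARN_PCT, CRIT_PCT = 20.0, 10.0   # free-space thresholds (assumed values)

    def main(path="/"):
        try:
            usage = shutil.disk_usage(path)
        except OSError as exc:
            print(f"DISK UNKNOWN - {exc}")
            return 3
        free_pct = 100.0 * usage.free / usage.total
        msg = f"{free_pct:.1f}% free on {path}"
        if free_pct < CRIT_PCT:
            print(f"DISK CRITICAL - {msg}")
            return 2
        if free_pct < WARN_PCT:
            print(f"DISK WARNING - {msg}")
            return 1
        print(f"DISK OK - {msg}")
        return 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "/"))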
APA, Harvard, Vancouver, ISO, and other styles
41

Jaume, Bennasar Andrés. "Las nuevas tecnologías en la administración de justicia. La validez y eficacia del documento electrónico en sede procesal." Doctoral thesis, Universitat de les Illes Balears, 2009. http://hdl.handle.net/10803/9415.

Full text
Abstract:
The thesis analyses, on the one hand, the integration and development of the new technologies in the Administration of Justice and, on the other, the parameters that determine the validity and effectiveness of the electronic document.
The first question centres on the configuration of the information systems of the Judicial Office and the Public Prosecutor's Office, as well as the computerisation of the Civil Registers, where art. 230 LOPJ is the key provision. The study covers their programmes and applications, videoconferencing, judicial files and the telecommunication networks covered by recognised electronic signatures, in which technological collaboration agreements take on great relevance. The digitalisation of hearings is perhaps one of the most far-reaching questions, bearing in mind that the trial is the act that culminates the proceedings, although not all the projects adopted in the field of e-justice have been developed in an integral way or have reached all the judicial bodies. The final objective is a more agile, higher-quality Justice, to which the recently approved Strategic Plan for the Modernisation of Justice 2009-2012 aspires.
With regard to the second perspective, there is no doubt that the legal system and the courts, in the field of substantive justice, grant full validity and effectiveness to the electronic document. Our line of research is justified because more and more proceedings incorporate electronic media of every kind, whether when the action is brought or later as a means of proof (art. 299.2 LEC). Among other topics, we examine the computerised document, the problems surrounding the fax, video-recording systems and the electronic contract.
APA, Harvard, Vancouver, ISO, and other styles
42

Wu, Shih-Ting, and 吳事庭. "The Research of applying Electronic WhiteBoard and 3D computer graphic software - taking Cylinder Volume unit of The sixth grade as an example." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/uub2b2.

Full text
Abstract:
Master's degree
National University of Tainan
Department of Applied Mathematics, Master's Program in Mathematics Teaching
102 (2013)
The purpose of this study was to examine students' error types and problem-solving approaches for cylinder volume when taught with an Interactive Electronic Whiteboard and the 3D computer graphics software Google SketchUp 8. The researcher used questionnaires and semi-structured interview sheets to collect the research data. The sample consisted of fifteen sixth-grade students (11 male, 4 female) from an elementary school in a remote area of Tainan City, divided into high-score and low-score groups according to their fifth-grade results. The research results are as follows. 1. The participants' passing rates on the three different representations: the visual representation was higher than the phrase and contextual ones, and the phrase representation was slightly better than the contextual one. 2. Performance on the representations of cylinder volume showed no significant difference between students from schools in the flat region and in the mountainous region. 3. The students' error types for cylinder volume were: (1) leaving the whole question blank, (2) miscalculation, (3) piecing together an answer, (4) unit errors, and (5) misusing the formula. 4. The students' problem-solving characteristics were as follows: (1) both groups could use the cylinder volume formula; (2) the high-score group's volume conservation (retention) concept was better than the low-score group's; (3) the high-score group's correction ability and operational ability were better than the low-score group's, so its problem-solving accuracy was higher.
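For reference, the formula the unit assesses is V = πr²h; the tiny Python check below (with arbitrarily chosen values) simply illustrates the correct application of the formula that finding 4(1) refers to.

    import math

    def cylinder_volume(radius, height):
        """Volume of a right circular cylinder: V = pi * r^2 * h."""
        return math.pi * radius ** 2 * height

    # Example: radius 5 cm, height 10 cm -> about 785.4 cubic centimetres
    print(round(cylinder_volume(5, 10), 1))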
APA, Harvard, Vancouver, ISO, and other styles
43

Njova, Dion. "Evaluating of DNP3 protocol over serial eastern operating unit substations and improving SCADA performance." Diss., 2021. http://hdl.handle.net/10500/27683.

Full text
Abstract:
A thesis which models the DNP3 and IEC 61850 protocols in OPNET.
Supervisory Control and Data Acquisition (SCADA) is a critical part of monitoring and controlling the electrical substation. The aim of this dissertation is to investigate the performance of the Distributed Network Protocol Version 3.3 (DNP3) and to compare it to that of the International Electrotechnical Commission (IEC) 61850 protocol in an electrical substation communication network environment. Building an electrical substation control room and installing the network equipment would have been expensive and time-consuming. The better option was to build a model of the electrical substation communication network and run simulations. Riverbed Modeler Academic Edition, known as the Optimized Network Engineering Tool (OPNET), was chosen as the software package to model the substation communication network, the DNP3 protocol and the IEC 61850 protocol stack. Modelling the IEC 61850 protocol stack in OPNET involved building the Open Systems Interconnection (OSI) layers used by the IEC 61850 stack onto the application definitions of OPNET. The Transmission Control Protocol/Internet Protocol (TCP/IP) configuration settings of the DNP3 protocol were also modelled in the OPNET application definitions. The aim is to compare the two protocols and determine which performs best in terms of throughput, data delay and latency. The substation communication model consists of 10 Ethernet nodes that simulate protection Intelligent Electronic Devices (IEDs), 13 Ethernet switches and a server that simulates the substation Remote Terminal Unit (RTU), with the DNP3 protocol simulated over TCP/IP. DNP3 is a protocol that can be used in a power utility computer network to provide communication services for grid components. DNP3 is currently used at Eskom as the communication protocol because it is widely used by equipment vendors in the energy sector. The DNP3 protocol is modelled and then compared to the more recent IEC 61850 protocol in the same model to determine which protocol is best for Eskom on the power grid network. The network load and packet delay parameters were sampled with 10%, 50%, 90% and 100% of devices online. The IEC 61850 protocol model has three scenarios: normal operation of a substation, maintenance in a substation and buszone operation at a substation. In these scenarios the packet end-to-end delay of Generic Object Oriented Substation Event (GOOSE), Generic Substation Status Event (GSSE), Sampled Values (SV) and Manufacturing Messaging Specification (MMS) messages is monitored. The throughput from the IED under maintenance and the throughput at the substation RTU end are also monitored in the model. Analysis of the DNP3 simulation results showed that an increase in the number of nodes increased both the packet delay and the network load. The load on the network should be taken into consideration when designing a substation communication network that requires a quick response, such as a smart grid. The GOOSE, GSSE and SV results of the IEC 61850 model met all the requirements of the IEC 61850 standard, while the MMS results did not meet all the requirements of the standard. Designing the substation communication network using IEC 61850 will assist in predicting the behaviour of the network with regard to this specific protocol during maintenance and when there are faults in the communication network or the IEDs.
After simulating the DNP3 and IEC 61850 protocols, the throughput of the DNP3 protocol was determined to be in the range of 20–450 kbps and that of the IEC 61850 protocol in the range of 1.6–16 Mbps.
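As a hedged aside on the protocol being modelled (a sketch only, not the OPNET model or Eskom's configuration), the fixed DNP3 link-layer header can be unpacked as below; CRC verification is omitted and the sample bytes are invented.

    import struct

    def parse_dnp3_link_header(frame: bytes) -> dict:
        """Parse the fixed 10-byte DNP3 link-layer header (CRC not verified)."""
        if len(frame) < 10 or frame[0:2] != b"\x05\x64":
            raise ValueError("not a DNP3 link-layer frame")
        length, control = frame[2], frame[3]
        destination, source = struct.unpack_from("<HH", frame, 4)
        return {
            "length": length,          # link-layer length field (CRC octets not counted)
            "control": control,        # direction, primary/secondary, function code
            "destination": destination,
            "source": source,
        }

    # Invented example header: length 5, control 0xC4, destination 1, source 1024
    sample = bytes([0x05, 0x64, 0x05, 0xC4, 0x01, 0x00, 0x00, 0x04, 0x00, 0x00])
    print(parse_dnp3_link_header(sample))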
College of Engineering, Science and Technology
M. Tech. (Electrical Engineering)
APA, Harvard, Vancouver, ISO, and other styles
44

Pai, Vivek Sadananda. "IO-lite: A copy-free UNIX I/O system." Thesis, 1997. http://hdl.handle.net/1911/17117.

Full text
Abstract:
Memory copy speed is known to be a significant barrier to high-speed communication. We perform an analysis of the requirements for a copy-free buffer system, develop an implementation-independent applications programming interface (API) based on those requirements, and then implement a system that conforms to the API. In addition, we design and implement a fully copy-free filesystem cache. Performance tests indicate that our system dramatically outperforms traditional systems on communications-oriented tasks by a factor of 2 to 10. Application programs that have been modified to utilize our copy-free system have also shown reductions in run time, ranging from 10% to nearly 50%.
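IO-Lite itself is a UNIX/C system; purely as a language-neutral sketch of the copy-free idea (invented names, not the IO-Lite API), the fragment below hands out read-only buffer slices from a cache instead of copying payload bytes between subsystems.

    class BufferCache:
        """Toy illustration of copy-free buffering: data is stored once and
        handed out as read-only memoryview slices instead of copies."""
        def __init__(self):
            self._store = {}

        def put(self, name, payload: bytes):
            self._store[name] = memoryview(payload)   # bytes are immutable

        def get_slice(self, name, offset, size):
            return self._store[name][offset:offset + size]  # no data copied

    def send(sock_buffer: bytearray, chunk: memoryview):
        """Stand-in for a gather write: append a reference-backed slice."""
        sock_buffer += chunk  # only the final device/socket buffer is written

    cache = BufferCache()
    cache.put("page.html", b"<html>hello world</html>")
    out = bytearray()
    send(out, cache.get_slice("page.html", 6, 11))   # "hello world"
    print(out.decode())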
APA, Harvard, Vancouver, ISO, and other styles
45

Davies, Trevor Rowland. "Implementation of a proprietary CAD graphics subsystem using the GKS standard interface." Thesis, 1989. http://hdl.handle.net/10413/6102.

Full text
Abstract:
This project involved porting a Graphical Software Package (GSP) from the proprietary IDS-80 Gerber CAD system onto a more modern computer that would allow student access for further study and development. Because of the popularity of Unix as an "open systems environment", the computer chosen was an HP9000 using the HP-UX operating system. In addition, it was decided to implement a standard Graphical Kernel System (GKS) interface to provide further portability and to cater for the expected growth of the GKS as an international standard. By way of introduction, a brief general overview of computer graphics, some of the essential considerations for the design of a graphics package and a description of the work undertaken are presented. Then follows a detailed presentation of the two systems central to this project: i) the IDS-80 Gerber proprietary CAD system, with particular attention being paid to the Graphical Software Package (GSP) which it uses, and ii) the Graphical Kernel System (GKS), which has become a widely accepted international graphics standard. The major differences between the IDS-80 Gerber GSP system and the GKS system are indicated. Following the theoretical presentation of the GSP and GKS systems, the practical work involved in first implementing a "skeleton" GKS interface on the HP9000 Unix system, incorporating the existing Advanced Graphics Package (AGP), is presented. The establishment of a GKS interface then allows an IDS-80 Gerber GSP interface to be developed and mapped onto this. Detailed description is given of the methods employed for this implementation and the reasons for the data structures chosen. The procedures and considerations for the testing and verification of the total system implemented on the HP9000 then follow. Original IDS-80 Gerber 2-D applications software was used for the purpose of testing. The implementation of the database that this software uses is also presented. Conclusions on system performance are finally presented as well as suggested areas for possible further work.
Thesis (M.Sc.)-University of Natal, Durban, 1989.
APA, Harvard, Vancouver, ISO, and other styles
46

Sitaridi, Evangelia. "GPU-Acceleration of In-Memory Data Analytics." Thesis, 2016. https://doi.org/10.7916/D8FN16BZ.

Full text
Abstract:
Hardware advances strongly influence database system design. The flattening speed of CPU cores makes many-core accelerators, such as GPUs, a vital alternative to explore for processing the ever-increasing amounts of data. GPUs have a significantly higher degree of parallelism than multi-core CPUs, but their cores are simpler. As a result, they do not face the power constraints limiting the parallelism of CPUs. Their trade-off, however, is the increased implementation complexity. This thesis adapts and redesigns data analytics operators to better exploit the GPU's special memory and threading model. Due to the increasing memory capacity and also the user's need for fast interaction with the data, we focus on in-memory analytics. Our techniques span different steps of the data processing pipeline: (1) data preprocessing, (2) query compilation, and (3) algorithmic optimization of the operators. Our data preprocessing techniques adapt the data layout for numeric and string columns to maximize the achieved GPU memory bandwidth. Our query compilation techniques compute the optimal execution plan for conjunctive filters. We formulate memory divergence for string matching algorithms and suggest how to eliminate it. Finally, we parallelize decompression algorithms in our compression framework Gompresso to fit more data into the limited GPU memory. Gompresso achieves high speed-ups on GPUs over state-of-the-art multi-core CPU libraries and is suitable for any massively parallel processor.
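As a hedged, CPU-side illustration of one of the ideas mentioned (ordering the predicates of a conjunctive filter over columnar data so that the most selective one runs first), the NumPy sketch below uses invented column names, predicates and a sampling-based selectivity estimate; it is not the thesis' GPU query compiler.

    import numpy as np

    def conjunctive_filter(columns, predicates):
        """Evaluate a conjunction of per-column predicates, most selective first:
        each step only inspects rows that survived the previous predicates."""
        n = len(next(iter(columns.values())))
        surviving = np.arange(n)
        # estimate selectivity of each predicate on a small sample (assumption)
        sample = np.random.default_rng(0).choice(n, size=min(1024, n), replace=False)
        order = sorted(predicates,
                       key=lambda c: predicates[c](columns[c][sample]).mean())
        for col in order:
            mask = predicates[col](columns[col][surviving])
            surviving = surviving[mask]
        return surviving

    # Invented example: two numeric columns in a struct-of-arrays (columnar) layout
    rng = np.random.default_rng(1)
    cols = {"price": rng.uniform(0, 100, 1_000_000),
            "qty": rng.integers(0, 50, 1_000_000)}
    preds = {"price": lambda x: x < 1.0,          # selective predicate
             "qty": lambda x: x > 10}             # unselective predicate
    rows = conjunctive_filter(cols, preds)
    print(len(rows))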
APA, Harvard, Vancouver, ISO, and other styles
47

Abell, Stephen W. "Parallel acceleration of deadlock detection and avoidance algorithms on GPUs." Thesis, 2013. http://hdl.handle.net/1805/3653.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Current mainstream computing systems have become increasingly complex. Most of these have Central Processing Units (CPUs) that invoke multiple threads for their computing tasks. The growing issue with these systems is resource contention, and with resource contention comes the risk of encountering a deadlock state in the system. Various software and hardware approaches exist that implement deadlock detection/avoidance techniques; however, they lack either the speed or the problem-size capability needed for real-time systems. The research conducted for this thesis aims to resolve issues present in past approaches by converging the two platforms (software and hardware) by means of the Graphics Processing Unit (GPU). Presented in this thesis are two GPU-based deadlock detection algorithms and one GPU-based deadlock avoidance algorithm. These GPU-based algorithms are: (i) GPU-OSDDA: a GPU-based Single Unit Resource Deadlock Detection Algorithm, (ii) GPU-LMDDA: a GPU-based Multi-Unit Resource Deadlock Detection Algorithm, and (iii) GPU-PBA: a GPU-based Deadlock Avoidance Algorithm. Both GPU-OSDDA and GPU-LMDDA utilize the Resource Allocation Graph (RAG) to represent resource allocation status in the system. However, the RAG is represented using integer-length bit-vectors. The advantages brought by this approach are many: (i) less memory is required for the algorithm matrices, (ii) 32 computations are performed per instruction (in most cases), and (iii) the algorithms can handle large numbers of processes and resources. The deadlock detection algorithms also require minimal interaction with the CPU by implementing matrix storage and algorithm computations on the GPU, thus providing an interactive-service type of behavior. As a result of this approach, both algorithms were able to achieve speedups over two orders of magnitude higher than their serial CPU implementations (3.17-317.42x for GPU-OSDDA and 37.17-812.50x for GPU-LMDDA). Lastly, GPU-PBA is the first parallel deadlock avoidance algorithm implemented on the GPU. While it does not achieve a two-orders-of-magnitude speedup over its CPU implementation, it does provide a platform for future deadlock avoidance research on the GPU.
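As a hedged, CPU-side sketch of the bit-vector idea behind these algorithms (not the GPU-OSDDA or GPU-LMDDA kernels themselves), the fragment below packs each row of a single-unit wait-for graph into an integer bit mask, closes it transitively with bitwise ORs, and reports a deadlock when some process can reach itself.

    def detect_deadlock(wait_for):
        """wait_for[i] is an int bit mask: bit j set means process i waits for j.
        Returns True if the wait-for graph contains a cycle (deadlock)."""
        n = len(wait_for)
        reach = list(wait_for)          # reachability bit mask per process
        changed = True
        while changed:                  # iterate to a transitive-closure fixed point
            changed = False
            for i in range(n):
                new = reach[i]
                mask, j = reach[i], 0
                while mask:             # fold in everything reachable via one hop
                    if mask & 1:
                        new |= reach[j]
                    mask >>= 1
                    j += 1
                if new != reach[i]:
                    reach[i] = new
                    changed = True
        return any(reach[i] >> i & 1 for i in range(n))

    # Invented example: P0 -> P1 -> P2 -> P0 is a cycle, so a deadlock is reported
    print(detect_deadlock([0b010, 0b100, 0b001]))   # True
    print(detect_deadlock([0b010, 0b100, 0b000]))   # False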
APA, Harvard, Vancouver, ISO, and other styles
48

Shafer, Brandon Andrew. "Real-time adaptive-optics optical coherence tomography (AOOCT) image reconstruction on a GPU." Thesis, 2014. http://hdl.handle.net/1805/6105.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Adaptive-optics optical coherence tomography (AOOCT) is a technology that has been rapidly advancing in recent years and offers amazing capabilities in scanning the human eye in vivo. In order to bring the ultra-high resolution capabilities to clinical use, however, newer technology needs to be used in the image reconstruction process. General purpose computation on graphics processing units is one such way that this computationally intensive reconstruction can be performed in a desktop computer in real-time. This work shows the process of AOOCT image reconstruction, the basics of how to use NVIDIA's CUDA to write parallel code, and a new AOOCT image reconstruction technology implemented using NVIDIA's CUDA. The results of this work demonstrate that image reconstruction can be done in real-time with high accuracy using a GPU.
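The thesis' CUDA pipeline is specific to the adaptive-optics instrument; purely as a hedged outline of the generic Fourier-domain OCT reconstruction steps it builds on (background subtraction, apodisation, inverse FFT, log magnitude), here is a NumPy sketch with invented array sizes.

    import numpy as np

    def reconstruct_ascans(spectra):
        """Toy Fourier-domain OCT reconstruction: spectra has shape
        (n_ascans, n_samples), assumed already linear in wavenumber."""
        dc = spectra.mean(axis=0)                    # background (DC) estimate
        corrected = spectra - dc
        window = np.hanning(spectra.shape[1])        # apodisation to reduce sidelobes
        depth = np.fft.ifft(corrected * window, axis=1)
        half = depth[:, : spectra.shape[1] // 2]     # keep the non-mirrored half
        return 20 * np.log10(np.abs(half) + 1e-12)   # log-scale intensity

    # Invented example: 512 A-scans of 2048 spectral samples
    rng = np.random.default_rng(2)
    fake = 1.0 + 0.1 * rng.standard_normal((512, 2048))
    image = reconstruct_ascans(fake)
    print(image.shape)   # (512, 1024)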
APA, Harvard, Vancouver, ISO, and other styles