
Dissertations / Theses on the topic 'Real-Time Computing System'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Real-Time Computing System.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Anderson, Keith William John. "A real-time facial expression recognition system for affective computing." Thesis, Queen Mary, University of London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.405823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Okonoboh, Matthias Aifuobhokhan, and Sudhakar Tekkali. "Real-Time Software Vulnerabilities in Cloud Computing : Challenges and Mitigation Techniques." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2645.

Full text
Abstract:
Context: Cloud computing is rapidly emerging in the area of distributed computing. At the same time, many organizations consider the technology to carry several unresolved business risks. These challenges include lack of adequate security, privacy and legal issues, resource allocation, control over data, system integrity, risk assessment, software vulnerabilities and so on, all of which can compromise the cloud environment. Organizations are concerned with how to develop adequate mitigation strategies for effective control measures and how to balance expectations between cloud providers and cloud users. However, much research tends to focus on cloud computing adoption and implementation, with less attention paid to vulnerabilities and attacks in cloud computing. This paper gives an overview of common challenges and mitigation techniques or practices, describes general security issues and identifies future requirements for security research in cloud computing, given current trends and industrial practices. Objectives: We identified common challenges and linked them to the cloud attributes they compromise, as well as mitigation techniques and their impact on cloud practices. We also identified frameworks we consider relevant for identifying threats due to vulnerabilities, based on information from the reviewed literature and our findings. Methods: We conducted a systematic literature review (SLR) to identify empirical studies focused on challenges and mitigation techniques, and to identify mitigation practices addressing software vulnerabilities and attacks in cloud computing. Studies were selected based on the inclusion/exclusion criteria we defined in the SLR process. We searched four databases: IEEE Xplore, the ACM Digital Library, SpringerLink and ScienceDirect. We limited our search to papers published from 2001 to 2010.
In addition, we used the data and knowledge collected from the SLR findings to design a questionnaire, which was used to conduct an industrial survey identifying cloud computing challenges and mitigation practices persistent in industry settings. Results: Based on the SLR, a total of 27 challenges and 20 mitigation techniques were identified. We further identified 7 frameworks we considered relevant for mitigating prevalent real-time software vulnerabilities and attacks in the cloud. The identified challenges and mitigation practices were linked to the compromised cloud attributes and to the way mitigation practices affect cloud computing, respectively. Furthermore, 5 additional challenges and 3 additional suggested mitigation practices were identified in the survey. Conclusion: This study has identified common challenges, mitigation techniques and framework practices relevant for mitigating real-time software vulnerabilities and attacks in cloud computing. We cannot claim an exhaustive identification of the challenges and mitigation practices associated with cloud computing. We acknowledge that our findings might not be sufficient to generalize across the different service models (SaaS, IaaS and PaaS) or across the different deployment models, such as private, public, community and hybrid clouds. However, this study can assist both cloud providers and cloud customers with security, privacy, integrity and other related issues, and is useful for identifying further research areas that can help enhance security, privacy and resource allocation and maintain integrity in the cloud environment.
APA, Harvard, Vancouver, ISO, and other styles
3

Say, Fatih. "A Reconfigurable Computing Platform For Real Time Embedded Applications." Phd thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613628/index.pdf.

Full text
Abstract:
Today's reconfigurable devices successfully combine the 'reconfigurable computing machine' paradigm and a 'high degree of parallelism', and hence reconfigurable computing has emerged as a promising alternative for computing-intensive applications. Despite its superior performance and lower power consumption compared to general-purpose computing using microprocessors, reconfigurable computing comes at a cost of design complexity. This thesis aims to reduce this complexity by providing a flexible and user-friendly development environment to application programmers in the form of a complete reconfigurable computing platform. The proposed computing platform is specially designed for real-time embedded applications and supports true multitasking by using available run-time partially reconfigurable architectures. For this computing platform, we propose a novel hardware task model aiming to minimize the logic resource requirement and the overhead due to reconfiguration of the device. Based on this task model, an optimal 2D surface partitioning strategy for managing the hardware resource is presented. A mesh network-on-chip is designed to be used as the communication environment for the hardware tasks, and a runtime mapping technique is employed to lower the communication overhead. As the requirements of embedded systems are known prior to field operation, an offline design flow is proposed for generating the associated bit-stream for the hardware tasks. Finally, an online real-time operating system scheduler is given to complete the necessary building blocks of a reconfigurable computing platform suitable for real-time computing-intensive embedded applications. In addition to providing a flexible development environment, the proposed computing platform is shown to have better device utilization and reconfiguration time overhead compared to existing studies.
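The 2D surface partitioning idea in this abstract can be illustrated with a toy software model. The grid representation, the `Fabric` class and the greedy first-fit policy below are illustrative assumptions, not the thesis's actual (optimal) partitioning strategy:

```python
# Hypothetical sketch: place rectangular hardware tasks on a reconfigurable
# fabric modelled as a cols x rows grid of logic resources, first-fit greedy.
class Fabric:
    def __init__(self, cols, rows):
        self.cols, self.rows = cols, rows
        self.used = [[False] * rows for _ in range(cols)]

    def _fits(self, x, y, w, h):
        # A w x h task fits at (x, y) only if every covered cell is free.
        return all(not self.used[i][j]
                   for i in range(x, x + w) for j in range(y, y + h))

    def place(self, w, h):
        """Return the first (x, y) slot able to hold a w x h task, or None."""
        for x in range(self.cols - w + 1):
            for y in range(self.rows - h + 1):
                if self._fits(x, y, w, h):
                    for i in range(x, x + w):
                        for j in range(y, y + h):
                            self.used[i][j] = True
                    return (x, y)
        return None

    def free(self, x, y, w, h):
        # Release the region when the hardware task is swapped out.
        for i in range(x, x + w):
            for j in range(y, y + h):
                self.used[i][j] = False
```

A real partitioner must also account for reconfiguration-frame granularity and fragmentation, which is what makes the thesis's optimal strategy non-trivial.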
APA, Harvard, Vancouver, ISO, and other styles
4

Young, Richard. "Real-time distributed system architecture using local area networks." Master's thesis, University of Cape Town, 1992. http://hdl.handle.net/11427/18231.

Full text
Abstract:
Bibliography: pages 61-66.
This dissertation addresses system architecture concepts for the implementation of real-time distributed systems. In particular, it addresses the requirements of a specific mission and real-time critical distributed system application as this exemplifies most of the issues of concern. Of specific significance is the integration of real-time distributed data services into a platform-wide Information Management Infrastructure. The dissertation commences with an overview of the system-level allocated requirements. Derived requirements for an Information Management Infrastructure (IMI) are then determined. A generic system architecture is then presented in terms of the allocated and derived requirements. A specific topology, based on this architecture, as well as available technology, is described. The scalability of the architecture to different platforms, including non-surface platforms, is discussed. As financial considerations are an important design driver and constraint, some anticipated order-of-magnitude system acquisition costs for a range of system complexities and configurations are briefly reviewed. Finally some conclusions and recommendations within the context of the allocated and derived requirements, as well as the RSA's politico-economic environment, are offered.
APA, Harvard, Vancouver, ISO, and other styles
5

Kao, Ming-lai. "A reconfigurable fault-tolerant multiprocessor system for real-time control /." The Ohio State University, 1986. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487266011223248.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Iturbe, Xabier. "Design and implementation of a reliable reconfigurable real-time operating system (R3TOS)." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/9413.

Full text
Abstract:
Twenty-first century Field-Programmable Gate Arrays (FPGAs) are no longer used for implementing simple “glue logic” functions. They have become complex arrays of reconfigurable logic resources and memories as well as highly optimised functional blocks, capable of implementing large systems on a single chip. Moreover, the Dynamic Partial Reconfiguration (DPR) capability makes it possible to adjust some logic resources on the chip at runtime, whilst the rest are still performing active computations. During the last few years, DPR has become a hot research topic, with the objective of building more reliable, efficient and powerful electronic systems. For instance, DPR can be used to mitigate spontaneously occurring bit upsets provoked by radiation, or to move circuits around the FPGA resources which progressively get damaged as the silicon ages. Moreover, DPR is the enabling technology for a new computing paradigm which combines computation in time and space. In Reconfigurable Computing (RC), a battery of computation-specific circuits (“hardware tasks”) is swapped in and out of the FPGA on demand to sustain a continuous stream of input operands, computation and output results. Multitasking, adaptation and specialisation are key properties in RC, as multiple swappable tasks can run concurrently at different positions on chip, each with custom data-paths for efficient execution of specific computations. As a result, considerable computational throughput can be achieved even at low clock frequencies. However, DPR penetration in the commercial market is still marginal, mainly due to the lack of suitable high-level design tools for exploiting this technology. Indeed, special skills are currently required to successfully develop a dynamically reconfigurable application. In light of the above, this thesis aims at bridging the gap between high-level applications and low-level DPR technology.
Its main objective is to develop Operating System (OS)-like support for high-level, software-centric application developers in order to exploit the benefits brought about by DPR technology, without having to deal with complex low-level hardware details. The solution developed in this thesis is named R3TOS, which stands for Reliable Reconfigurable Real-Time Operating System. R3TOS defines a flexible infrastructure for reliably executing reconfigurable hardware-based applications under real-time constraints. In R3TOS, the hardware tasks are scheduled in order to meet their computation deadlines and allocated to non-damaged resources, keeping the system fault-free at all times. In addition, R3TOS envisages a computing framework whereby both hardware and software tasks coexist in a seamless manner, allowing the user to access the advanced computation capabilities of modern reconfigurable hardware from a software “look and feel” environment. This thesis covers all of the design and implementation aspects of R3TOS. The thesis proposes a novel EDF-based scheduling algorithm, two novel task allocation heuristics (EAC and EVC) and a novel task allocation strategy (called Snake), addressing many RC-related particularities as well as technological constraints imposed by current FPGA technology. Empirical results show that these approaches improve on the state of the art. Besides, the thesis describes a novel way to harness the internal reconfiguration mechanism of modern FPGAs to perform inter-task communications and synchronisation regardless of the physical location of tasks on-chip. This paves the way for implementing more sophisticated RC solutions which were only possible in theory in the past. The thesis illustrates R3TOS through a proof-of-concept prototype with two demonstrator applications: (1) dependability-oriented control of the power chain of a railway traction vehicle, and (2) data-streaming-oriented Software Defined Radio (SDR).
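As background for the EDF-based scheduler this abstract mentions, plain earliest-deadline-first dispatch can be sketched in a few lines. This is textbook EDF, not R3TOS's novel variant, and the task tuples are invented examples:

```python
import heapq

def edf_schedule(tasks, now=0):
    """Dispatch tasks in earliest-deadline-first order (textbook EDF sketch).

    tasks: list of (name, exec_time, deadline) tuples.
    Returns (completed_in_order, deadline_misses).
    """
    # A min-heap keyed on deadline yields the most urgent task first.
    heap = [(deadline, name, cost) for name, cost, deadline in tasks]
    heapq.heapify(heap)
    order, missed, t = [], [], now
    while heap:
        deadline, name, cost = heapq.heappop(heap)
        t += cost  # run the task to completion (non-preemptive sketch)
        (order if t <= deadline else missed).append(name)
    return order, missed
```

R3TOS additionally has to account for reconfiguration latency and damaged-resource avoidance, which this sketch ignores.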
APA, Harvard, Vancouver, ISO, and other styles
7

ROSALES, MARCELO V. "MICROPROCESSOR-BASED DIGITAL CONTROLLER FOR THE ADVANCED TELEMETRY TRACKING SYSTEM." International Foundation for Telemetering, 1991. http://hdl.handle.net/10150/613173.

Full text
Abstract:
International Telemetering Conference Proceedings / November 04-07, 1991 / Riviera Hotel and Convention Center, Las Vegas, Nevada
This paper discusses the design and implementation of a microcomputer system that functions as the central processing unit for performing servo system control, tracking mode determination, operator interface, switching, and logic operations. The computer hardware consists of VMEbus compatible boards that include a Motorola 32-bit MC68020 microprocessor-based CPU board, and a variety of interface boards. The computer is connected to the Radio Frequency system, Antenna Control Unit, azimuth and elevation servo systems, and other systems of the Advanced Transportable Telemetry Acquisition System (TTAS-A) through extensive serial, analog, and digital input/output interfacing. The software platform consists of a commercially-acquired real-time multi-tasking operating system, and in-house developed device drivers and tracking system software. The operating system kernel is written in assembly language, while the application software is written using the C programming language. To enhance the operation of the TTAS-A, software was also developed to provide color graphics, CRT menus, printer listings, interactive real-time hardware/software diagnostics, and a GPIB (IEEE-488 bus) interface for Automated Testing System support.
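Servo loops of the kind this controller performs for azimuth and elevation are commonly built around a discrete PID law. The sketch below is a generic, hypothetical illustration (function name, gains, state layout and time step are all invented), not code from the TTAS-A:

```python
def pid_step(state, setpoint, measured, kp, ki, kd, dt):
    """One iteration of a discrete PID controller (generic sketch).

    state holds the running integral ("i") and previous error ("e").
    Returns the control output for this sample period.
    """
    err = setpoint - measured
    integral = state["i"] + err * dt          # accumulate error over time
    deriv = (err - state["e"]) / dt           # rate of change of error
    state["i"], state["e"] = integral, err    # persist for the next sample
    return kp * err + ki * integral + kd * deriv
```

In a tracking pedestal such a loop would run at a fixed rate per axis, with the output driving the servo amplifier.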
APA, Harvard, Vancouver, ISO, and other styles
8

Hong, Chuan. "Towards the development of a reliable reconfigurable real-time operating system on FPGAs." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8948.

Full text
Abstract:
In the last two decades, Field Programmable Gate Arrays (FPGAs) have rapidly developed from simple “glue logic” into a powerful platform capable of implementing a System on Chip (SoC). Modern FPGAs achieve not only high performance compared with General Purpose Processors (GPPs), thanks to hardware parallelism and dedication, but also better programming flexibility in comparison to Application Specific Integrated Circuits (ASICs). Moreover, the hardware programming flexibility of FPGAs is further harnessed for both performance and manipulability, which makes Dynamic Partial Reconfiguration (DPR) possible. DPR allows a part or parts of a circuit to be reconfigured at run-time without interrupting the rest of the chip’s operation. As a result, hardware resources can be exploited more efficiently, since chip resources can be reused by swapping hardware tasks in and out of the chip in a time-multiplexed fashion. In addition, DPR improves fault tolerance against transient errors and permanent damage: Single Event Upsets (SEUs), for example, can be mitigated by reconfiguring the FPGA to avoid error accumulation. Furthermore, power and heat can be reduced by removing finished or idle tasks from the chip. For all these reasons, DPR has significantly promoted Reconfigurable Computing (RC) and has become a very hot topic. However, since hardware integration is increasing at an exponential rate, and applications are becoming more complex with the growth of user demands, high-level application design and low-level hardware implementation are increasingly separated and layered. As a consequence, users can obtain little advantage from DPR without the support of system-level middleware.
To bridge the gap between the high-level application and the low-level hardware implementation, this thesis presents important contributions towards a Reliable, Reconfigurable and Real-Time Operating System (R3TOS), which facilitates user exploitation of DPR from the application level by managing the complex hardware in the background. In R3TOS, hardware tasks behave just like software tasks: they can be created, scheduled, and mapped to different computing resources on the fly. The novel contributions of this work are: 1) a novel implementation of an efficient task scheduler and allocator; 2) implementation of a novel real-time scheduling algorithm (FAEDF) and two efficacious allocation algorithms (EAC and EVC), which schedule tasks in real-time and circumvent emerging faults while maintaining more compact empty areas; 3) design and implementation of a fault-tolerant microprocessor by harnessing existing FPGA resources, such as Error Correction Code (ECC) and configuration primitives; 4) a novel symmetric multiprocessing (SMP)-based architecture that supports a shared-memory programming interface; 5) two demonstrations of the integrated system, including a) the K-Nearest Neighbour classifier, a non-parametric classification algorithm widely used in various fields of data mining, and b) pairwise sequence alignment, namely the Smith-Waterman algorithm, used for identifying similarities between two biological sequences. R3TOS gives considerably higher flexibility to support scalable multi-user, multitasking applications, whereby resources can be dynamically managed with respect to user requirements and hardware availability. Benefiting from this, not only can hardware resources be used more efficiently, but system performance can also be significantly increased. Results show that the scheduling and allocating efficiencies have been improved by up to 2x, and the overall system performance is further improved by ~2.5x.
Future work includes the development of Network on Chip (NoC), which is expected to further increase the communication throughput; as well as the standardization and automation of our system design, which will be carried out in line with the enablement of other high-level synthesis tools, to allow application developers to benefit from the system in a more efficient manner.
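The Smith-Waterman demonstrator mentioned above has a compact software reference form. The sketch below computes the best local-alignment score with invented scoring parameters; it is only a software baseline for the kind of computation the hardware demonstrator accelerates:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score between sequences a and b (score only).

    Classic O(len(a) * len(b)) dynamic programme; scoring values are
    illustrative, not the thesis's parameters.
    """
    rows = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = rows[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores are clamped at zero so alignments can
            # restart anywhere, unlike global (Needleman-Wunsch) alignment.
            rows[i][j] = max(0, diag, rows[i - 1][j] + gap, rows[i][j - 1] + gap)
            best = max(best, rows[i][j])
    return best
```

The anti-diagonal independence of the recurrence is what makes the algorithm attractive for FPGA systolic-array implementations.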
APA, Harvard, Vancouver, ISO, and other styles
9

Koblik, Katerina. "Simulation of rain on a windshield : Creating a real-time effect using GPGPU computing." Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185027.

Full text
Abstract:
Modelling and rendering natural phenomena, such as rain, is an important aspect of creating a realistic driving simulator. Rain is a crucial issue when driving in the real world, as it, for instance, obstructs the driver’s vision. The difficulty is to implement it in a visually appealing way while simultaneously making it look realistic and keeping the computational cost low. In this report, a GPGPU (general-purpose computing on graphics processing units) based approach is presented in which the final product is a rain simulation rendered onto a 2D texture, which can then be applied to a surface. The simulated raindrops interact with gravity, wind, a windshield wiper as well as with each other, and are then used to distort the background behind them in a convincing manner. The simulation takes into account multiple physical properties of raindrops and is shown to be suitable to run in real-time. The result is presented in the form of a visual demonstration. In conclusion, even though the final simulation is still in its first iteration, it clearly highlights what can be accomplished by utilizing the GPU and the benefits of using a texture-based approach. The appropriate simulation approach will, however, always depend on the characteristics of the problem and the limitations of the hardware.
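The per-drop physics described above (gravity and wind accelerating each drop, then integrating position) can be sketched on the CPU. In the actual work this update runs per raindrop on the GPU; the function name, drop representation and constants below are invented for illustration:

```python
def step_drops(drops, dt, gravity=9.8, wind=0.5):
    """Advance raindrop states by one time step (toy CPU sketch).

    Each drop is (x, y, vx, vy); forces use simple Euler integration.
    """
    out = []
    for x, y, vx, vy in drops:
        vx += wind * dt      # wind pushes drops sideways across the glass
        vy += gravity * dt   # gravity pulls drops down the windshield
        out.append((x + vx * dt, y + vy * dt, vx, vy))
    return out
```

On a GPU the same update would run as one thread per drop, since each drop's state advance is independent of the others.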
APA, Harvard, Vancouver, ISO, and other styles
10

Uddin-Al-Hasan, Main. "Real-time Embedded Panoramic Imaging for Spherical Camera System." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2518.

Full text
Abstract:
Panoramas, or stitched images, are used in topographical mapping, panoramic 3D reconstruction, deep space exploration image processing, medical image processing, multimedia broadcasting, system automation, photography and numerous other fields. Generating real-time panoramic images on a small embedded computer is of particular importance, as it yields a lighter, smaller and more mobile imaging system. Moreover, this type of lightweight panoramic imaging system is used for different types of industrial or home inspection. A real-time handheld panorama imaging system is developed using embedded real-time Linux as the software module and Gumstix Overo and PandaBoard ES as the hardware modules. The proposed algorithm takes 62.6602 milliseconds to generate a panorama frame from three images using a homography matrix. Hence, the proposed algorithm is capable of generating panorama video at 15.95909365 frames per second. However, the algorithm could be considerably faster with a more optimal homography matrix. During development, Ångström Linux and Ubuntu Linux were used as the operating systems with the Gumstix Overo and PandaBoard ES respectively. The real-time kernel patch was used to configure the non-real-time Linux distribution for real-time operation. The serial communication software tools C-Kermit and Minicom were used for terminal emulation between the development computer and the small embedded computer. The software framework of the system consists of the UVC driver, V4L/V4L2 API, OpenCV API, FFMPEG API, GStreamer, x264, CMake and Make software packages. It also includes a stitching algorithm adopted from available stitching methods with necessary modifications. Our proposed stitching process automatically finds the motion model of the spherical camera system and saves the matrix in a look file. The extracted homography matrix is then read from the look file and used to generate the real-time panorama image.
The developed system generates a real-time 180° panorama view from a spherical camera system. Besides, a test environment is also developed to experiment with calibration and real-time stitching under different image parameters. It is able to take images with different resolutions as input and produce a high-quality real-time panorama image. The Qt framework is used to develop multifunctional standalone software with functions for displaying the algorithm's performance in real time through data visualization, camera system calibration and other stitching options. The software runs in both Linux and Windows. Moreover, the system has also been realized as a prototype to develop a chimney inspection system for a local company.
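Stitching with a precomputed homography, as described above, boils down to mapping pixel coordinates through a 3x3 matrix. The helper below is a generic sketch of that projective mapping (names and matrices are illustrative), not the thesis's implementation:

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 homography H given as nested lists.

    Homogeneous coordinates: (x, y, 1) is multiplied by H, then the result
    is divided by its third component to return to image coordinates.
    """
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)
```

A stitcher applies this mapping (or its inverse) to every pixel of the overlapping images, which is why caching the matrix in a file, as the thesis does, removes the costly feature-matching step from the real-time path.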
APA, Harvard, Vancouver, ISO, and other styles
11

Charalampidis, Vasileios. "Real-Time Monitoring System of Sedentary Behavior with Android Wear and Cloud Computing : An office case study." Thesis, KTH, Skolan för teknik och hälsa (STH), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210017.

Full text
Abstract:
Nowadays, prolonged sitting among office workers is a widespread problem, which is highly related to several health problems. Many proposals have been reported and evaluated to address this issue. However, motivating and engaging workers to change their health behavior towards a healthier working life is still a challenge. In this project, a specific application has been deployed for real-time monitoring and alerting of office workers during prolonged sitting. The proposed system consists of three distinct parts. The first is an Android smartwatch, which was used to collect sensor data, e.g., accelerometer and gyro data, with a custom Android Wear app. The second is an Android application developed to act as a gateway, receiving the smartwatch's data and sending it to the IBM Bluemix cloud with the MQTT protocol. The final part is a Node-RED cloud application, which was deployed for storing, analyzing and processing the sensor data for activity detection, i.e., sitting or walking/standing. The main purpose of the last part was to return relevant feedback to the user, while combining elements from gaming contexts (gamification methods) to motivate and engage office workers towards healthier behavior. The system was first tested on five participants (control group) to define appropriate accelerometer thresholds, and then evaluated with five different participants (treatment group) in order to analyze its reliability for prolonged-sitting detection. The results showed good precision for the detection; no confusion between sitting and walking/standing was noticed. Communication, storage and analysis of the data were done successfully, while the push notifications to the participants, alerting or rewarding them, were always accurate and delivered on time. All useful information was presented to the user on a web-based dashboard accessed through a smartphone, tablet or PC.
The proposed system can easily be implemented in a real-life scenario with office workers. Certainly, there is a lot of space for improvement, considering mostly the type of data registered in the system, the method for sitting detection, and the user interface for presenting relevant information.
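The threshold-based activity detection described above can be sketched as a simple rule on accelerometer magnitude. The window format, label strings and threshold value below are invented placeholders, since the study calibrated its own thresholds on the control group:

```python
def classify_window(samples, threshold=1.5):
    """Label a window of accelerometer samples as sitting or active.

    samples: list of (x, y, z) accelerometer readings; the mean magnitude
    over the window is compared against a calibrated threshold.
    """
    mean_mag = sum((x * x + y * y + z * z) ** 0.5
                   for x, y, z in samples) / len(samples)
    return "walking/standing" if mean_mag > threshold else "sitting"
```

In the deployed system the classification ran in the cloud layer, which also tracked how long the "sitting" label persisted before pushing an alert.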
APA, Harvard, Vancouver, ISO, and other styles
12

Suthakar, Uthayanath. "A scalable data store and analytic platform for real-time monitoring of data-intensive scientific infrastructure." Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/15788.

Full text
Abstract:
Real-time monitoring of data-intensive scientific infrastructures, covering jobs, data transfers, and hardware failures, is vital for efficient operation. Due to the high volume and velocity of the events that are produced, traditional methods are no longer optimal. Several techniques, as well as enabling architectures, are available to address the Big Data issue. In this respect, this thesis complements existing survey work by contributing an extensive literature review of both traditional and emerging Big Data architectures. Scalability, low latency, fault tolerance, and intelligence are key challenges for the traditional architecture, whereas Big Data technologies and approaches have become increasingly popular for use cases that demand scalable, data-intensive (parallel) processing, fault tolerance (data replication), and support for low-latency computations. In the context of a scalable data store and analytics platform for monitoring data-intensive scientific infrastructure, the Lambda Architecture was adapted and evaluated on the Worldwide LHC Computing Grid, where it proved effective, especially for computationally and data-intensive use cases. In this thesis, an efficient strategy for the collection and storage of large volumes of data for computation is presented. Moving the transformation logic out of the data pipeline and into the analytics layers simplifies the architecture and the overall process: the time used is reduced, untampered raw data are kept at the storage level for fault tolerance, and the required transformation can be done when needed. An optimised Lambda Architecture (OLA) is presented, which models an efficient way of joining the batch layer and the streaming layer with minimal code duplication in order to support scalability, low latency, and fault tolerance. A few models were evaluated: a pure streaming layer, a pure batch layer, and the combination of both batch and streaming layers.
Experimental results demonstrate that the OLA performed better than the traditional architecture as well as the standard Lambda Architecture. The OLA was also enhanced by adding an intelligence layer for predicting data access patterns. The intelligence layer actively adapts and updates the model built by the batch layer, which eliminates the re-training time while providing a high level of accuracy using Deep Learning techniques. The fundamental contribution to knowledge is a scalable, low-latency, fault-tolerant, intelligent, heterogeneous architecture for monitoring a data-intensive scientific infrastructure that can benefit from Big Data technologies and approaches.
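The batch/speed-layer split that the Lambda Architecture (and the OLA) builds on can be illustrated with a toy counter. The class below is a generic sketch of the serving-time merge of a precomputed batch view with a live delta, not the thesis's LHC-scale implementation:

```python
# Toy Lambda-style counter: queries merge a batch view (recomputed
# periodically from all raw data) with a speed-layer delta (events seen
# since the last batch run).
class LambdaCounter:
    def __init__(self, batch_view):
        self.batch = dict(batch_view)  # immutable-until-rebatch batch view
        self.speed = {}                # incremental counts since last batch

    def ingest(self, key):
        # Speed layer: absorb one event with low latency.
        self.speed[key] = self.speed.get(key, 0) + 1

    def query(self, key):
        # Serving layer: batch result plus the recent delta.
        return self.batch.get(key, 0) + self.speed.get(key, 0)

    def rebatch(self, batch_view):
        # Batch layer finished a full recomputation; reset the delta.
        self.batch, self.speed = dict(batch_view), {}
```

The code-duplication problem the OLA attacks is visible even here: the batch recomputation and `ingest` both encode "count events", and keeping the two consistent is the hard part at scale.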
APA, Harvard, Vancouver, ISO, and other styles
13

Yuan, Man. "A SIMD Approach To Large-scale Real-time System Air Traffic Control Using Associative Processor and Consequences For Parallel Computing." Kent State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=kent1345058186.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Baldellon, Olivier. "Supervision en ligne de propriétés temporelles dans les systèmes distribués temps-réel." Phd thesis, Toulouse, INPT, 2014. http://oatao.univ-toulouse.fr/13299/1/baldellon.pdf.

Full text
Abstract:
Today's systems are becoming ever more complex; real-time constraints add to the challenges of distribution. Classical methods for guaranteeing dependability, such as testing, fault injection, or formal methods, are no longer sufficient on their own. In order to handle errors as they appear in a given distributed system, we want to deploy a program that watches the system and raises an alert whenever the system deviates from its specification; such a program is called a monitor (or supervisor). A monitor simply interprets a set of messages coming from the system, which we call events, and derives a diagnosis from them. The goal of this thesis is to build a distributed monitor capable of verifying temporal properties in real time. In particular, we want the monitor to verify as many properties as possible from as little information as possible. Our tool is therefore designed to behave correctly even under imperfect observation, that is, even when some events arrive late or are never received. For obvious reasons of performance and fault tolerance, we pursued this goal in a distributed fashion, proposing a distributable protocol based on the distributed execution of a timed Petri net. To assess the feasibility and efficiency of the approach, we built an implementation called Minotor, which showed very good performance.
Finally, to demonstrate the expressiveness of the formalism used to state the specifications to be verified, we detail a set of properties expressed as Petri nets with the dual semantics introduced in this thesis (the set of transitions is partitioned into two categories, each with its own semantics).
APA, Harvard, Vancouver, ISO, and other styles
15

Owa, Kayode Olayemi. "Non-linear model predictive control strategies for process plants using soft computing approaches." Thesis, University of Plymouth, 2014. http://hdl.handle.net/10026.1/3031.

Full text
Abstract:
The development of advanced non-linear control strategies has attracted considerable research interest over the past decades, especially in process control. Absolute reliance on mathematical models of process plants often introduces discrepancies, notably owing to design errors and equipment degradation. Non-linear models are nevertheless required because they provide improved prediction capabilities, but they are very difficult to derive. In addition, deriving the globally optimal solution becomes more difficult when multivariable and non-linear systems are involved. Hence, this research investigates soft computing techniques for the implementation of a novel real-time constrained non-linear model predictive controller (NMPC). The time-frequency localisation characteristics of the wavelet neural network (WNN) were utilised for non-linear model design via a system identification approach from experimental data, improving upon the conventional artificial neural network (ANN), which is prone to a low convergence rate and to difficulty in locating the global minimum during training. Salient features of particle swarm optimisation and a genetic algorithm (GA) were combined to optimise the network weights. Real-time optimisation occurring at every sampling instant is achieved using a GA, delivering results both in simulations and in real-time implementation on coupled tank systems, with further extension to a complex quadruple-tank process in simulations. The results show the superiority of the novel WNN-NMPC approach in terms of average controller energy and mean squared error over the conventional ANN-NMPC strategies and a PID control strategy for both SISO and MIMO systems.
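The receding-horizon loop at the heart of model predictive control can be sketched on a single-tank stand-in for the coupled-tank rig (illustrative Python; the model coefficients, the grid search over inputs, and the horizon length are assumptions for the sketch, whereas the thesis uses a WNN plant model and a GA optimiser):

```python
import math

def tank_step(h, u, dt=1.0, k_in=0.5, k_out=0.1):
    """One-tank level dynamics: inflow proportional to pump input u,
    outflow proportional to sqrt(level). Coefficients are made up."""
    return max(h + dt * (k_in * u - k_out * math.sqrt(h)), 0.0)

def nmpc_control(h, setpoint, horizon=5, candidates=21):
    """Receding-horizon step: pick the constant input u in [0, 1] that
    minimises the squared tracking error over the prediction horizon
    (a grid search stands in for the thesis's GA optimiser)."""
    best_u, best_cost = 0.0, float("inf")
    for i in range(candidates):
        u = i / (candidates - 1)
        hp, cost = h, 0.0
        for _ in range(horizon):
            hp = tank_step(hp, u)
            cost += (setpoint - hp) ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Closed loop: at each sampling instant, re-optimise and apply the first move.
h = 1.0
for _ in range(50):
    h = tank_step(h, nmpc_control(h, setpoint=4.0))
```

Only the first move of each optimised input sequence is applied before re-optimising, which is what makes the scheme feedback control rather than open-loop planning.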
APA, Harvard, Vancouver, ISO, and other styles
16

Crellin, Kenneth Thomas. "Network time : synchronisation in real time distributed computing systems." Master's thesis, University of Cape Town, 1998. http://hdl.handle.net/11427/17933.

Full text
Abstract:
In the past, network clock synchronization has been sufficient for the needs of traditional distributed systems, for such purposes as maintaining network file systems, enabling Internet mail services, and supporting other applications that require a degree of clock synchronization. Increasingly, however, real-time systems are requiring high degrees of time synchronization. Where this is required, the common approach up until now has been to distribute the clock to each processor by means of hardware (e.g. GPS and cesium clocks) or to distribute time over an additional dedicated timing network. Whilst this has proved successful for real-time systems, the advent of high-speed networks with definable quality of service from the protocol layers has led to the possibility of using the existing data network to distribute the time. This thesis demonstrates that, through system integration and the use of commercial off-the-shelf (COTS) products, it is possible to distribute and coordinate computer time clocks to within the microsecond range, providing synchronization close enough to support real-time systems whilst avoiding the additional time, infrastructure and money needed to build and maintain a specialized timing network.
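The kind of software clock coordination this work builds on can be illustrated with the standard NTP offset and delay calculation from a four-timestamp exchange (a textbook formula, shown here as a sketch rather than the thesis's implementation):

```python
def ntp_offset_and_delay(t0, t1, t2, t3):
    """NTP-style clock offset and round-trip delay from four timestamps:
    t0 = client send, t1 = server receive, t2 = server send,
    t3 = client receive. t0/t3 are on the client clock, t1/t2 on the
    server clock; all values in seconds."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# Example: client clock runs 0.005 s behind the server,
# and the network adds 0.010 s of delay in each direction.
offset, delay = ntp_offset_and_delay(100.000, 100.015, 100.016, 100.021)
```

The formula assumes symmetric network delay; asymmetry between the two directions appears directly as an error in the estimated offset, which is why definable quality of service in the data network matters for tight synchronization.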
APA, Harvard, Vancouver, ISO, and other styles
17

Struhar, Vaclav. "Improving Soft Real-time Performance of Fog Computing." Licentiate thesis, Mälardalens högskola, Inbyggda system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55679.

Full text
Abstract:
Fog computing is a distributed computing paradigm that brings data processing from remote cloud data centers into the vicinity of the edge of the network. The computation is performed closer to the source of the data, and thus it decreases the time unpredictability of cloud computing that stems from (i) computation in shared multi-tenant remote data centers, and (ii) long-distance data transfers between the source of the data and the data centers. Computation in fog computing provides fast response times and enables latency-sensitive applications. However, industrial systems require time-bounded, i.e. real-time (RT), response times. The correctness of such systems depends not only on the logical results of the computations but also on the physical time instant at which these results are produced. Time-bounded responses in fog computing are attributed to two main aspects: computation and communication. In this thesis, we explore both aspects, targeting soft RT applications in fog computing in which the usefulness of the produced computational results degrades with real-time requirement violations. With regard to computation, we provide a systematic literature survey on a novel lightweight RT container-based virtualization that ensures spatial and temporal isolation of co-located applications. Subsequently, we utilize a mechanism enabling RT container-based virtualization and propose a solution for orchestrating RT containers in a distributed environment. Concerning the communication aspect, we propose a solution for dynamic bandwidth distribution in virtualized networks.
APA, Harvard, Vancouver, ISO, and other styles
18

Barnes, Richard Neil. "Global synchronization of asynchronous computing systems." Master's thesis, Mississippi State : Mississippi State University, 2001. http://library.msstate.edu/etd/show.asp?etd=etd-10262001-094922.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Thammawichai, Mason. "Energy-efficient optimal control for real-time computing systems." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/33813.

Full text
Abstract:
Moving toward ubiquitous Cyber-Physical Systems, where computation, control and communication units mutually interact, this thesis aims to provide fundamental frameworks to address the problems arising from such systems, namely the real-time multiprocessor scheduling problem (RTMSP) and the multi-UAV topology control problem (MUTCP). The RTMSP is concerned with how tasks can be scheduled on the available computing resources such that no task misses a deadline. An optimization-based control method was used to solve the problem. Though it is quite natural to formulate the task assignment problem as a mixed-integer nonlinear program, the computation cost is high. By reformulating the scheduling problem as one of first determining a percentage of task execution time and then finding the task execution order, the computational complexity can be reduced. Simulation results illustrate that our methods are both feasibility optimal and energy optimal. The framework is then extended to solve a scheduling problem with uncertainty in task execution times by adopting a feedback approach. The MUTCP is concerned with how a communication network topology can be determined such that the energy cost is minimized. An optimal control framework for constructing a data-aggregation network is proposed to trade off optimally between communication and computation energy. The benefit of our network topology model is that it is a self-organized, multi-hop, hierarchical clustering network, which provides better performance in terms of energy consumption, reliability and network scalability. In addition, our framework can be applied to both homogeneous and heterogeneous mobile sensor networks thanks to the generality of the network model. Two multi-UAV information-gathering applications, target tracking and area mapping, were chosen to test the proposed algorithm.
Based on simulation results, our method can save up to 40% of energy for target tracking and 60% for area mapping compared to the baseline approach.
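The link between feasibility and energy that such scheduling frameworks exploit can be sketched with the classical EDF utilisation test (a textbook simplification of the thesis's optimisation framework; the task set and discrete speed levels below are made up):

```python
def min_feasible_speed(tasks, speeds):
    """Pick the lowest processor speed (relative to nominal) at which the
    EDF utilisation test still passes: sum(C_i / T_i) / s <= 1.
    Dynamic power grows roughly cubically with speed, so the slowest
    feasible speed is, to first order, the energy-optimal choice."""
    u = sum(c / t for c, t in tasks)   # utilisation at nominal speed
    for s in sorted(speeds):
        if u / s <= 1.0:
            return s
    return None                        # infeasible even at top speed

tasks = [(1.0, 4.0), (2.0, 8.0), (1.0, 10.0)]   # (WCET, period) pairs
speed = min_feasible_speed(tasks, speeds=[0.4, 0.6, 0.8, 1.0])
```

With a nominal utilisation of 0.6, the task set remains schedulable down to 60% speed, and running slower than that would cause missed deadlines; the thesis's optimisation-based methods refine this trade-off per task rather than globally.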
APA, Harvard, Vancouver, ISO, and other styles
20

Huang, Huang. "Power and Thermal Aware Scheduling for Real-time Computing Systems." FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/610.

Full text
Abstract:
Over the past few decades, we have been enjoying tremendous benefits thanks to the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly complicated processor architectures. However, the exponentially increased transistor density has directly led to exponentially increased power consumption and dramatically elevated system temperatures, which not only adversely impact the system's cost, performance and reliability, but also increase leakage and thus overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow down the continued evolution of computer technology. Effective power/thermal-aware design techniques are urgently demanded at all design abstraction levels, from the circuit level and the logic level to the architectural and system levels. In this dissertation, we present our research efforts to employ real-time scheduling techniques to solve resource-constrained power/thermal-aware design-optimization problems. In our research, we developed a set of simple yet accurate system-level models to capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling, and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization. The novelty of this work is that we integrate cutting-edge research on power and thermal behavior at the circuit and architectural levels into a set of accurate yet simplified system-level models, and are able to conduct system-level analysis and design based on these models.
The theoretical study in this work serves as a solid foundation to guide the development of power/thermal-aware scheduling algorithms in practical computing systems.
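The interdependency of leakage power and temperature described above can be sketched with a lumped RC thermal model and a linearized leakage term (illustrative Python with made-up coefficients, not the dissertation's models):

```python
def simulate_temperature(p_dyn, v, t_amb, steps, dt=0.1,
                         r_th=1.2, c_th=40.0, a=0.1, b=0.008):
    """Lumped RC thermal model with temperature-dependent leakage:
        P_leak = (a + b * T) * V          (linearized leakage model)
        C_th * dT/dt = P_total - (T - t_amb) / R_th
    Explicit-Euler integration; all coefficients are illustrative."""
    t = t_amb
    for _ in range(steps):
        p_leak = (a + b * t) * v          # leakage rises with temperature
        p_total = p_dyn + p_leak
        t += (dt / c_th) * (p_total - (t - t_amb) / r_th)
    return t

# Steady-state die temperature for 10 W dynamic power at 1.0 V, 25 C ambient.
temp = simulate_temperature(p_dyn=10.0, v=1.0, t_amb=25.0, steps=5000)
```

The positive feedback is visible in the model: higher temperature raises leakage, which raises total power, which raises temperature further; scheduling decisions that lower peak temperature therefore also reduce energy.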
APA, Harvard, Vancouver, ISO, and other styles
21

Hamza-Lup, Georgiana. "SENSOR-BASED COMPUTING TECHNIQUES FOR REAL-TIME TRAFFIC EVACUATION MANAGEMENT." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3477.

Full text
Abstract:
The threat of terrorist incidents is higher than ever before, and devastating acts, such as the terrorist attacks on the World Trade Center and the Pentagon, have left many concerns about the possibility of future incidents and their potential impact. Unlike some natural disasters that can be anticipated, terrorist attacks are sudden and unexpected. Even when we have partial information about a possible attack, it is generally not known exactly where, when, or how an attack will occur. This lack of information poses great challenges to those responsible for security, specifically to their ability to respond quickly whenever necessary, with flexibility and coordination. The surface transportation system plays a critical role in responding to terrorist attacks or other unpredictable human-caused disasters. In particular, existing Intelligent Transportation Systems (ITS) can be enhanced to improve the ability of the surface transportation system to efficiently respond to emergencies and recover from disasters. This research proposes the development of new information technologies to enhance today's ITS with capabilities that improve the crisis-response capacity of the surface transportation system. The objective of this research is to develop a Smart Traffic Evacuation Management System (STEMS) that responds rapidly and effectively to terrorist threats or other unpredictable disasters by creating dynamic evacuation plans adaptable to continuously changing traffic conditions based on real-time information. The intellectual merit of this research is that the proposed STEMS will possess capabilities to support both the unexpected and unpredictable aspects of a terrorist attack and the dynamic aspect of the traffic network environment.
Studies of related work indicate that STEMS is the first system that automatically generates evacuation plans, given the location and scope of an incident and the current traffic network conditions, and dynamically adjusts the plans based on real-time information received from sensors and other surveillance technologies. Refining the plans to keep them consistent with the current conditions significantly improves evacuation effectiveness. The changes that STEMS can handle range from slow, steady variations in traffic conditions, to more sudden variations caused by secondary accidents or other stochastic factors (e.g., high visibility events that determine a sudden increase in the density of the traffic). Being especially designed to handle evacuation in case of terrorist-caused disasters, STEMS can also handle multiple coordinated attacks targeting some strategic area over a short time frame. These are frequently encountered in terrorist acts as they are intended to create panic and terror. Due to the nature of the proposed work, an important component of this project is the development of a simulation environment to support the design and test of STEMS. Developing analytical patterns for modeling traffic dynamics has been explored in the literature at different levels of resolution and realism. Most of the proposed approaches are either too limited in representing reality, or too complex for handling large networks. The contribution of this work consists of investigating and developing traffic models and evacuation algorithms that overcome both of the above limitations. Two of the greatest impacts of this research in terms of science are as follows. First, the new simulation environment developed for this project provides a test bed to facilitate future work on traffic evacuation systems. 
Secondly, although the models and algorithms developed for STEMS are targeted towards traffic environments and evacuation, their applicability can be extended to other environments (e.g., building evacuation) and other traffic related problems (e.g., real-time route diversion in case of accidents). One of the broader impacts of this research would be the deployment of STEMS in a real environment. This research provides a fundamental tool for handling emergency evacuation for a full range of unpredictable incidents, regardless of cause, origin and scope. Wider and swifter deployment of STEMS will support Homeland Security in general, and will also enhance the surface transportation system on which so many Homeland Security stakeholders depend.
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science
APA, Harvard, Vancouver, ISO, and other styles
22

Chaparro-Baquero, Gustavo A. "Memory-Aware Scheduling for Fixed Priority Hard Real-Time Computing Systems." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3712.

Full text
Abstract:
As a major component of a computing system, memory has been a key performance and power-consumption bottleneck in computer system design. While processor speeds have kept rising dramatically, the overall computing performance improvement of the entire system is limited by how fast the memory can feed instructions and data to the processing units (the so-called memory-wall problem). The increasing transistor density and surging access demands from a rapidly growing number of processing cores have also significantly elevated the power consumption of the memory system. In addition, interference between memory accesses from different applications and processing cores significantly degrades computation predictability, which is essential to ensure timing specifications in real-time system design. Recent IC technologies (such as 3D-IC technology) and emerging data-intensive real-time applications (such as Virtual Reality/Augmented Reality, Artificial Intelligence, and the Internet of Things) further amplify these challenges. We believe that it is not simply desirable but necessary to adopt a joint CPU/memory resource-management framework to deal with these grave challenges. In this dissertation, we focus on how to schedule fixed-priority hard real-time tasks with memory impacts taken into consideration. We target the fixed-priority real-time scheduling scheme since it is one of the most commonly used strategies in practical real-time applications. Specifically, we first develop an approach that takes into consideration not only the execution-time variations with cache allocations but also the task period relationship, showing a significant improvement in the feasibility of the system. We further study the problem of how to guarantee timing constraints for hard real-time systems under CPU and memory thermal constraints. We first study the problem under an architecture model with a single core and its main memory individually packaged.
We develop a thermal model that can capture the thermal interaction between the processor and memory, and incorporate the periodic resource server model into our scheduling framework to guarantee both the timing and thermal constraints. We further extend our research to multi-core architectures with processing cores and memory devices integrated into a single 3D platform. To the best of our knowledge, this is the first research that can guarantee hard deadline constraints for real-time tasks under temperature constraints for both processing cores and memory devices. Extensive simulation results demonstrate that our proposed scheduling can significantly improve the feasibility of hard real-time systems under thermal constraints.
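The fixed-priority analysis such work extends can be illustrated by the classical response-time recurrence (a textbook sketch, not the dissertation's algorithm; cache allocations would enter through the worst-case execution times C_i):

```python
import math

def response_time(tasks, i):
    """Classical fixed-priority response-time analysis. `tasks` is sorted
    by priority (index 0 highest), each entry a (WCET, period) pair.
    Iterates R = C_i + sum_{j < i} ceil(R / T_j) * C_j to a fixed point;
    the task is schedulable if the result is at most its deadline."""
    c_i = tasks[i][0]
    r = c_i
    while True:
        interference = sum(math.ceil(r / t_j) * c_j
                           for c_j, t_j in tasks[:i])
        r_next = c_i + interference
        if r_next == r:
            return r
        r = r_next

tasks = [(1, 4), (2, 6), (3, 12)]   # (WCET, period), rate-monotonic order
r2 = response_time(tasks, 2)        # worst-case response of the lowest-priority task
```

Because a larger cache allocation shrinks C_i but steals cache from other tasks, the feasibility question becomes a joint CPU/memory allocation problem, which is the setting the dissertation addresses.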
APA, Harvard, Vancouver, ISO, and other styles
23

Jonsson, Magnus. "High performance fiber-optic interconnection networks for real-time computing systems." Doctoral thesis, Högskolan i Halmstad, Inbyggda system (CERES), 1999. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-46.

Full text
Abstract:
Parallel and distributed computing systems are becoming more and more powerful and hence place increasingly high demands on the networks that interconnect their processors or processing nodes. Many of the applications running on such systems, especially embedded-systems applications, have real-time requirements and, with increasing application demands, high-performance networks are the hearts of these systems. Fiber-optic networks are good candidates for use in such systems in the future. This thesis contributes to the relatively unexplored area of fiber-optic networks for parallel and distributed real-time computer systems and suggests and evaluates several fiber-optic networks and protocols. Two different technologies are used in the networks: WDM (Wavelength Division Multiplexing) and fiber-ribbon point-to-point links. WDM offers multiple channels, each with a capacity of several Gbit/s. A WDM star network has been developed in which protocols and services are efficiently integrated to support different kinds of real-time demands, especially hard ones. A star-of-stars topology can be chosen to offer better network scalability. The WDM star architecture is attractive, but its future success depends on components becoming more commercially mature. Fiber-ribbon links, instead offering an aggregated bandwidth of several Gbit/s, have already reached the market with a promising price/performance ratio. This has motivated the development and investigation of two new ring networks based on fiber-ribbon links. The networks take advantage of spatial bandwidth reuse, which can greatly enhance performance in applications with a significant amount of nearest-downstream-neighbor communication. One of the ring networks is control-channel based and not only has support for real-time services like the WDM star network but also low-level support for, e.g., group communication.
The approach has been to develop network protocols with support for dynamic real-time services out of time-deterministic static TDMA systems. The focus has been on functionality more than pure performance figures: mostly on real-time features, but also on other types of functionality for parallel and distributed systems. Worst-case analyses, some simulations, and case studies are reported for the networks. The focus has been on embedded supercomputer applications, where each node itself can be a parallel computer, and it is shown that the networks are well suited for use in the radar signal processing systems studied. Other application examples in which these kinds of networks are valuable are distributed multimedia systems, satellite imaging, and other image-processing applications.
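The time-deterministic TDMA foundation mentioned above can be sketched as a static slot allocation over one cycle (illustrative Python; proportional apportionment is an assumption for the sketch, and real allocations would also have to honour per-node deadlines):

```python
def build_tdma_schedule(demands, cycle_slots):
    """Allocate the slots of one TDMA cycle to nodes in proportion to
    their bandwidth demands, using largest-remainder apportionment.
    Returns a list mapping slot index -> node for a single cycle;
    the cycle then repeats, giving each node a known worst-case
    access latency of at most one cycle length."""
    total = sum(demands.values())
    shares = {n: d * cycle_slots / total for n, d in demands.items()}
    counts = {n: int(s) for n, s in shares.items()}
    # Hand out any remaining slots to the largest fractional remainders.
    leftovers = sorted(shares, key=lambda n: shares[n] - counts[n],
                       reverse=True)
    for n in leftovers[: cycle_slots - sum(counts.values())]:
        counts[n] += 1
    schedule = []
    for node, k in counts.items():
        schedule += [node] * k
    return schedule

schedule = build_tdma_schedule({"A": 3, "B": 1, "C": 4}, cycle_slots=8)
```

Static tables like this give the worst-case guarantees needed for hard real-time traffic; the protocols developed in the thesis add dynamic services on top of such a deterministic base.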
Technical report / School of Electrical and Computer Engineering, Chalmers University of Technology, Göteborg, Sweden, 0282-5406 ; 379
APA, Harvard, Vancouver, ISO, and other styles
24

Dridi, Mourad. "Vers le support des systèmes à criticité mixte sur des architectures NoC Design and multi-abstraction-level evaluation of a NoC router for mixed-criticality real-time systems, in ACM Journal on Emerging Technologies in Computing Systems 15(1), February 2019 DTFM: a flexible model for schedulability analysis of real-time applications on NoC-based architectures, in ACM SIGBED Review 14(4), January 2018 NORTH : Non-intrusive observation and run time verification of cyber-physical systems, in Ada User Journal 39(4), December 2018." Thesis, Brest, 2019. http://www.theses.fr/2019BRES0051.

Full text
Abstract:
This thesis addresses the challenges associated with implementing mixed-criticality systems over NoC architectures. In such systems, we must ensure the timing constraints of critical applications while limiting the bandwidth reserved for them, and thus the impact of resource sharing on non-critical applications. In order to run mixed-criticality systems on NoC architectures, we propose several contributions in the form of a NoC router, a task and flow model, and a communication model. First, we propose a NoC router called DAS (Double Arbiter and Switching), designed to efficiently run mixed-criticality applications on a network-on-chip. To enforce mixed-criticality requirements, DAS implements automatic mode changes, two levels of preemption, two flow-control techniques and two stages of arbitration. We have implemented DAS in the cycle-accurate SystemC-TLM simulator SHoC and evaluated it at several abstraction levels and against several criteria. Second, we propose DTFM, a Dual Task and Flow Model, to overcome the limitations of existing task and flow models: from the task model, the NoC model and the task mapping, DTFM automatically computes the corresponding flow model. Finally, we propose a new NoC communication model called the Exact Communication Time Model (ECTM), which models communications as a task graph while taking the underlying NoC model into account and leads to an efficient schedulability analysis of periodic tasks exchanging messages over a NoC. We have implemented ECTM and DTFM in a real-time scheduling simulator called Cheddar.
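How a flow model follows from a task mapping can be illustrated with deterministic XY routing on a 2D mesh, a standard NoC baseline (a sketch of the general idea, not DTFM itself):

```python
def xy_route(src, dst):
    """Links traversed by a packet under deterministic XY routing on a
    2D mesh NoC: route fully along the X dimension first, then along Y.
    Cores are (x, y) grid coordinates; each link is a (router, router)
    pair. Knowing the exact links a flow occupies is what lets a
    schedulability analysis account for link contention."""
    x, y = src
    path = []
    dx = 1 if dst[0] > x else -1
    while x != dst[0]:
        path.append(((x, y), (x + dx, y)))
        x += dx
    dy = 1 if dst[1] > y else -1
    while y != dst[1]:
        path.append(((x, y), (x, y + dy)))
        y += dy
    return path

# Tasks t1 -> t2 mapped to cores (0, 0) and (2, 1): the flow occupies 3 links.
links = xy_route((0, 0), (2, 1))
```

Given a task graph and a mapping of tasks to cores, applying such a routing function to every communicating task pair yields the set of flows and the links they share, which is the input a NoC-aware schedulability analysis needs.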
APA, Harvard, Vancouver, ISO, and other styles
25

Pollmeier, Klemens. "Parallel computing for real-time simulation and condition monitoring of fluid power systems." Thesis, University of Bath, 1997. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.388563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Gadea, Cristian. "Architectures and Algorithms for Real-Time Web-Based Collaboration." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/41944.

Full text
Abstract:
Originating in the theory of distributed computing, the optimistic consistency control method known as Operational Transformation (OT) has been studied by researchers since the late 1980s. Algorithms were devised for managing the concurrent nature of user actions and for maintaining the consistency of replicated data as changes are introduced by multiple geographically-distributed users in real-time. Web-Based Collaborative Platforms are now essential components of modern organizations, with real-time protocols and standards such as WebSocket enabling the development of online collaboration tools to facilitate information sharing, content creation, document management, audio and video streaming, and communication among team members. Products such as Google Docs have shown that centralized web-based co-editing is now possible in a reliable way, with benefits in user productivity and efficiency. However, as the demand for effective real-time collaboration between team members continues to increase, web applications require new synchronization algorithms and architectures to resolve the editing conflicts that may appear when multiple individuals are modifying the same data at the same time. In addition, collaborative applications need to be supported by scalable distributed backend services, as can be achieved with "serverless" technologies. While much existing research has addressed problems of optimistic consistency maintenance, previous approaches have not focused on capturing the dynamic client-server interactions of OT systems by modeling them as real-time systems using Finite State Machine (FSM) theory. This thesis includes an exploration of how the principles of control theory and hierarchical FSMs can be applied to model the distributed system behavior when processing and transforming HTML DOM changes initiated by multiple concurrent users. 
The FSM-based OT implementation is simulated, including with random inputs, and the approach is shown to be invaluable for organizing the algorithms required for synchronizing complex data structures. The real-time feedback control mechanism is used to develop a Web-Based Collaborative Platform based on a new OT integration algorithm and architecture that brings "Virtual DOM" concepts together with state-of-the-art OT principles to enable the next generation of collaborative web-based experiences, as shown with implementations of a rich-text editor and a 3D virtual environment.
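The core OT idea, transforming one user's operation against a concurrent one so that both application orders converge, can be sketched for plain-text inserts (a minimal character-wise example; the thesis targets HTML DOM changes, and a full system would break position ties with site identifiers):

```python
def transform_insert(op_a, op_b):
    """Transform insert op_a against a concurrent insert op_b so that
    op_a can be applied after op_b. Each op is (position, text).
    Ties (equal positions) would need a site-id tiebreak in a full
    system; here op_a is simply kept to the left of op_b's insertion."""
    pos_a, text_a = op_a
    pos_b, text_b = op_b
    if pos_a <= pos_b:
        return (pos_a, text_a)
    return (pos_a + len(text_b), text_a)

def apply_op(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

doc = "realtime"
op_a = (4, "-")        # user A inserts "-" at index 4
op_b = (8, " web")     # user B concurrently appends " web"

# Convergence: both application orders yield the same document.
left = apply_op(apply_op(doc, op_a), transform_insert(op_b, op_a))
right = apply_op(apply_op(doc, op_b), transform_insert(op_a, op_b))
```

This convergence property (often called TP1) is exactly what the state-machine modelling in the thesis is organized around: every interleaving of concurrent client and server transitions must end in the same replicated state.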
APA, Harvard, Vancouver, ISO, and other styles
27

Cho, Yŏng-gwan. "RTDEVS/CORBA: A distributed object computing environment for simulation-based design of real-time discrete event systems." Diss., The University of Arizona, 2001. http://hdl.handle.net/10150/279904.

Full text
Abstract:
Ever since distributed systems technology became increasingly popular in the real-time computing area about two decades ago, real-time distributed object computing technologies have been attracting more attention from researchers and engineers. While highly effective object-oriented methodologies are now widely adopted to reduce the development complexity and maintenance costs of large scale non-real-time software applications, real-time systems engineering practice has not kept pace with these system development methodologies. Indeed, real-time design techniques have not fully adopted the concepts of modular design and analysis which are the main virtues of object-oriented design technologies. As a consequence, the demand for object-oriented analysis, design, and implementation of large-scale real-time applications has been growing. To address the need for object-oriented real-time systems engineering environments we propose the Real-Time DEVS/CORBA (RTDEVS/CORBA) distributed object computing environment. In this dissertation, we show how this environment is an extension of previously developed DEVS-based modeling and simulation frameworks that have been shown to support an effective modeling and simulation methodology in various application areas. The major objective in developing Distributed Real-Time DEVS/CORBA is to establish a framework in which distributed real-time systems can be designed through DEVS-based modeling and simulation studies, and then migrated with minimal additional effort to be executed in the real-time distributed environment. This environment provides generic support for developing models of distributed embedded software systems, evaluating their performance and timing behavior through simulation and easing the transition from the simulation to actual executions. In this dissertation we describe, in some detail, the design and implementation of the RTDEVS/CORBA environment. 
It was implemented over Visibroker CORBA middleware along with the use of ACE/TAO real-time CORBA services, such as the real-time event service and the runtime scheduling service. Implementation aspects considered include time synchronization issues, priority-based message dispatching for timely message delivery, implementation of activities with threads, and other features required for simulating and executing real-time DEVS models. Finally, application examples are presented in the last part of the dissertation to show the applicability of the environment to real systems-engineering problems.
APA, Harvard, Vancouver, ISO, and other styles
28

Söderén, Martin. "Online Transverse Beam Instability Detection in the LHC : High-Throughput Real-Time Parallel Data Analysis." Thesis, Linköpings universitet, Programvara och system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-143440.

Full text
Abstract:
This thesis presents the ADT transverse instability detection system, the next generation of instability detection in the LHC at CERN, Geneva. The system is presented after a thorough study of the underlying causes of instabilities in high-energy particle accelerators, current parallel programming paradigms, the available hardware and software at CERN, and possible instability detection techniques. The system must handle vast amounts of data, analyze them in real-time, and detect rapid amplitude growth in this data while limiting the computational resources required to a minimum. The result of this thesis was a system that could generate a trigger when an instability was detected, which was used to save data from observation instruments around the LHC. A fixed display in the CERN control centre was also created, which allows scientists and operators at CERN to monitor the oscillation amplitude of all particle bunches. The conclusion is that the complete system will be a valuable asset at CERN to help further develop the LHC.
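The core detection task described in this abstract, flagging rapid amplitude growth in a live signal, can be sketched as a log-linear slope test over a sliding window. This is an illustrative detector only, not the ADT system's actual algorithm; the window size and threshold are invented.

```python
import math

def growth_rate(amplitudes):
    """Least-squares slope of log-amplitude over the window:
    a clearly positive slope indicates exponential amplitude growth."""
    ys = [math.log(a) for a in amplitudes]
    n = len(ys)
    mean_x, mean_y = (n - 1) / 2.0, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def unstable(amplitudes, threshold=0.05):
    """Trigger when growth per sample exceeds an (invented) threshold."""
    return growth_rate(amplitudes) > threshold

steady = [1.0, 1.02, 0.99, 1.01, 1.0, 0.98]       # noise around a flat level
growing = [1.2 ** i for i in range(6)]            # 20% growth per turn
print(unstable(steady), unstable(growing))        # False True
```

A real implementation would run this per bunch over streaming windows, which is where the parallelism the thesis discusses comes in.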
APA, Harvard, Vancouver, ISO, and other styles
29

Davis, Don, Toby Bennett, and Jay Costenbader. "RECONFIGURABLE GATEWAY SYSTEMS FOR SPACE DATA NETWORKING." International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/608358.

Full text
Abstract:
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California
Over a dozen commercial remote sensing programs are currently under development representing billions of dollars of potential investment. While technological advances have dramatically decreased the cost of building and launching these satellites, the cost and complexity of accessing their data for commercial use are still prohibitively high. This paper describes Reconfigurable Gateway Systems which provide, to a broad spectrum of existing and new data users, affordable telemetry data acquisition, processing and distribution for real-time remotely sensed data at rates up to 300 Mbps. These Gateway Systems are based upon reconfigurable computing, multiprocessing, and process automation technologies to meet a broad range of satellite communications and data processing applications. Their flexible architecture easily accommodates future enhancements for decompression, decryption, digital signal processing and image/SAR data processing.
APA, Harvard, Vancouver, ISO, and other styles
30

Karunanidhi, Karthikeyan. "ARROS; distributed adaptive real-time network intrusion response." Ohio : Ohio University, 2006. http://www.ohiolink.edu/etd/view.cgi?ohiou1141074467.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Han, Li. "Fault-tolerant and energy-aware algorithms for workflows and real-time systems." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN013.

Full text
Abstract:
Cette thèse se concentre sur deux problèmes majeurs dans le contexte du calcul haute performance : la résilience et la consommation d'énergie. Le nombre d'unités de calcul dans les superordinateurs a considérablement augmenté ces dernières années, entraînant une augmentation de la fréquence des pannes. Le recours à des mécanismes de tolérance aux pannes est maintenant critique pour les applications utilisant un grand nombre de composants pendant une période de temps significative. Il est par ailleurs nécessaire de minimiser la consommation énergétique pour des raisons budgétaires et environnementales. Ceci est d'autant plus important que la tolérance aux pannes nécessite une redondance en temps ou en espace qui induit un surcoût énergétique. Par ailleurs, certaines technologies qui réduisent la consommation d'énergie ont des effets négatifs sur les performances et la résilience. Nous concevons des algorithmes d'ordonnancement pour étudier les compromis entre performance, résilience et consommation d'énergie. Dans une première partie, nous nous concentrons sur l'ordonnancement des graphes de tâches sujets à des pannes. La question est alors de décider quelle tâche sauvegarder afin de minimiser le temps d'exécution. Nous concevons des solutions optimales pour des classes de graphes et fournissons des heuristiques pour le cas général. Nous considérons dans une deuxième partie l'ordonnancement de tâches périodiques indépendantes sujettes à des erreurs silencieuses dans un contexte temps-réel. Nous étudions combien de réplicats sont nécessaires et l'interaction entre dates butoir, fiabilité, et minimisation d'énergie.
This thesis is focused on two major problems in the high performance computing context: resilience and energy consumption. To satisfy the computing power required by modern scientific research, the number of computing units in supercomputers has increased dramatically in the past years. This leads to more frequent errors than expected. Failure handling is thus critical for highly parallel applications that use a large number of components for a significant amount of time; otherwise, one may spend an unbounded amount of time re-executing. On the other side, power management is necessary due to both monetary and environmental constraints, especially because resilience often calls for redundancy in time and/or in space, which in turn consumes extra energy. In addition, technologies that reduce energy consumption often have negative effects on performance and resilience. In this context, we re-design scheduling algorithms to investigate trade-offs between performance, resilience and energy consumption. The first part is focused on task graph scheduling and fail-stop errors: which tasks should be checkpointed (redundancy in time) in order to minimize the total execution time? The objective is to design optimal solutions for special classes of task graphs, and to provide general-purpose heuristics for arbitrary ones. In the second part of the thesis, we consider sets of independent periodic tasks, the context of real-time scheduling, and silent errors. We investigate the number of replicas (redundancy in space) that are needed, and the interplay between deadlines, energy minimization and reliability.
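The abstract gives no formulas, but the classic first-order result from this literature, the Young/Daly checkpointing period, conveys the performance/resilience trade-off the thesis studies. This is a sketch of that standard approximation, not the thesis's own algorithms; the cost and MTBF numbers below are invented.

```python
import math

def young_daly_period(checkpoint_cost, mtbf):
    """First-order optimal time between checkpoints under fail-stop errors:
    T_opt = sqrt(2 * C * MTBF)  (Young 1974 / Daly 2006)."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

# Illustrative numbers: 60 s to write a checkpoint, one failure per day on average.
period = young_daly_period(checkpoint_cost=60.0, mtbf=86400.0)
print(round(period))   # checkpoint roughly every 3220 s
```

Checkpointing more often wastes time on redundancy (and energy); less often risks long re-executions, which is exactly the tension the thesis extends to task graphs and replication.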
APA, Harvard, Vancouver, ISO, and other styles
32

Hebbache, Farouk. "Work-conserving dynamic TDM-based memory arbitration for multi-criticality real-time systems." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT044.

Full text
Abstract:
Les architectures multi-cœurs posent de nombreux défis dans les systèmes temps réel, qui découlent des conflits entre les accès simultanés à la mémoire partagée. Parmi les politiques d'arbitrage mémoire disponibles, le multiplexage temporel, en anglais Time-Division Multiplexing (TDM), assure un comportement prédictible en limitant les latences d'accès et en garantissant une bande passante aux tâches indépendamment des autres tâches. Pour ce faire, TDM garantit un accès exclusif à la mémoire partagée dans une fenêtre temporelle fixe. L'approche TDM, cependant, fournit une faible utilisation des ressources car elle n'est pas « work-conserving ». De plus, elle est très inefficace pour les ressources ayant des latences d'accès très variables, comme le partage de l'accès à une mémoire DRAM. La longueur constante d'une fenêtre TDM est donc très pessimiste et entraîne une sous-utilisation de la mémoire. Pour pallier ces limitations, nous présentons des mécanismes d'arbitrage dynamique basés sur TDM. Cependant, plutôt que d'arbitrer au niveau des fenêtres TDM, notre approche fonctionne à la granularité des cycles d'horloge en exploitant les temps morts accumulés par les requêtes précédentes. Cela permet à l'arbitre de réorganiser les requêtes mémoire, d'exploiter les latences d'accès réelles des requêtes, et donc d'améliorer l'utilisation de la mémoire. Nous démontrons que nos politiques sont analysables car elles préservent les garanties de TDM dans le pire des cas, alors que nos expériences montrent une amélioration significative de l'utilisation de la mémoire
Multi-core architectures pose many challenges in real-time systems, which arise from contention between concurrent accesses to shared memory. Among the available memory arbitration policies, Time-Division Multiplexing (TDM) ensures a predictable behavior by bounding access latencies and guaranteeing bandwidth to tasks independently from the other tasks. To do so, TDM guarantees exclusive access to the shared memory in a fixed time window. TDM, however, provides a low resource utilization as it is non-work-conserving. Besides, it is very inefficient for resources having highly variable latencies, such as sharing the access to a DRAM memory. The constant length of a TDM slot is, hence, highly pessimistic and causes an underutilization of the memory. To address these limitations, we present dynamic arbitration schemes that are based on TDM. However, instead of arbitrating at the level of TDM slots, our approach operates at the granularity of clock cycles by exploiting slack time accumulated from preceding requests. This allows the arbiter to reorder memory requests, exploit the actual access latencies of requests, and thus improve memory utilization. We demonstrate that our policies are analyzable as they preserve the guarantees of TDM in the worst case, while our experiments show an improved memory utilization
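The non-work-conserving cost of static TDM described above can be illustrated with a toy comparison. This is not the thesis's arbiter: it assumes a single TDM period, one pending request per core, all arriving at time zero, and a slot length equal to the worst-case latency.

```python
def static_tdm_finish(durations, slot):
    """Static TDM: core i may only start in its own slot [i*slot, (i+1)*slot),
    so requests that finish early leave the memory idle until the next slot."""
    return [i * slot + d for i, d in enumerate(durations)]

def work_conserving_finish(durations):
    """Work-conserving variant: the next pending request starts as soon as
    the memory is free, reclaiming the slack left by short requests."""
    finishes, t = [], 0
    for d in durations:
        t += d
        finishes.append(t)
    return finishes

durations = [3, 1, 2]   # actual access latencies, each <= slot
slot = 4                # worst-case latency reserved per TDM slot
print(static_tdm_finish(durations, slot))   # [3, 5, 10]
print(work_conserving_finish(durations))    # [3, 4, 6]
```

No request finishes later than under static TDM, which mirrors the thesis's point that worst-case TDM guarantees can be preserved while actual memory utilization improves.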
APA, Harvard, Vancouver, ISO, and other styles
33

Blair, James M. "Architectures for Real-Time Automatic Sign Language Recognition on Resource-Constrained Device." UNF Digital Commons, 2018. https://digitalcommons.unf.edu/etd/851.

Full text
Abstract:
Powerful, handheld computing devices have proliferated among consumers in recent years. Combined with new cameras and sensors capable of detecting objects in three-dimensional space, new gesture-based paradigms of human computer interaction are becoming available. One possible application of these developments is an automated sign language recognition system. This thesis reviews the existing body of work regarding computer recognition of sign language gestures as well as the design of systems for speech recognition, a similar problem. Little work has been done to apply the well-known architectural patterns of speech recognition systems to the domain of sign language recognition. This work creates a functional prototype of such a system, applying three architectures seen in speech recognition systems, using a Hidden Markov classifier with 75-90% accuracy. A thorough search of the literature indicates that no cloud-based system has yet been created for sign language recognition and this is the first implementation of its kind. Accordingly, there have been no empirical performance analyses regarding a cloud-based Automatic Sign Language Recognition (ASLR) system, which this research provides. The performance impact of each architecture, as well as the data interchange format, is then measured based on response time, CPU, memory, and network usage across an increasing vocabulary of sign language gestures. The results discussed herein suggest that a partially-offloaded client-server architecture, where feature extraction occurs on the client device and classification occurs in the cloud, is the ideal selection for all but the smallest vocabularies. Additionally, the results indicate that for the potentially large data sets transmitted for 3D gesture classification, a fast binary interchange protocol such as Protobuf has vastly superior performance to a text-based protocol such as JSON.
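The closing claim about interchange formats is easy to see at small scale. The sketch below compares JSON text against a fixed-width binary packing for a batch of 3D joint coordinates; Python's stdlib `struct` stands in for Protobuf (which the study actually used), and the 20-joint frame is an invented example.

```python
import json
import struct

# Hypothetical feature frame: 20 skeletal joints, each an (x, y, z) float triple.
coords = [(0.1 * i, 0.2 * i, 0.3 * i) for i in range(20)]

json_bytes = json.dumps(coords).encode("utf-8")
flat = [c for point in coords for c in point]
binary_bytes = struct.pack(f"{len(flat)}f", *flat)   # 4 bytes per float32

# The binary form is a fixed 240 bytes here; the JSON text is markedly larger.
print(len(json_bytes), len(binary_bytes))
```

The gap widens with vocabulary size and sampling rate, which is consistent with the thesis's finding that a binary protocol wins for large gesture payloads.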
APA, Harvard, Vancouver, ISO, and other styles
34

DENG, HAOYANG. "Fast Optimization Methods for Model Predictive Control via Parallelization and Sparsity Exploitation." Kyoto University, 2020. http://hdl.handle.net/2433/259076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Fox, Paul James. "Massively parallel neural computation." Thesis, University of Cambridge, 2013. https://www.repository.cam.ac.uk/handle/1810/245013.

Full text
Abstract:
Reverse-engineering the brain is one of the US National Academy of Engineering’s “Grand Challenges.” The structure of the brain can be examined at many different levels, spanning many disciplines from low-level biology through psychology and computer science. This thesis focusses on real-time computation of large neural networks using the Izhikevich spiking neuron model. Neural computation has been described as “embarrassingly parallel” as each neuron can be thought of as an independent system, with behaviour described by a mathematical model. However, the real challenge lies in modelling neural communication. While the connectivity of neurons has some parallels with that of electrical systems, its high fan-out results in massive data processing and communication requirements when modelling neural communication, particularly for real-time computations. It is shown that memory bandwidth is the most significant constraint to the scale of real-time neural computation, followed by communication bandwidth, which leads to a decision to implement a neural computation system on a platform based on a network of Field Programmable Gate Arrays (FPGAs), using commercial off-the-shelf components with some custom supporting infrastructure. This brings implementation challenges, particularly lack of on-chip memory, but also many advantages, particularly high-speed transceivers. An algorithm to model neural communication that makes efficient use of memory and communication resources is developed and then used to implement a neural computation system on the multi-FPGA platform. Finding suitable benchmark neural networks for a massively parallel neural computation system proves to be a challenge. A synthetic benchmark that has biologically-plausible fan-out, spike frequency and spike volume is proposed and used to evaluate the system.
It is shown to be capable of computing the activity of a network of 256k Izhikevich spiking neurons with a fan-out of 1k in real-time using a network of 4 FPGA boards. This compares favourably with previous work, with the added advantage of scalability to larger neural networks using more FPGAs. It is concluded that communication must be considered as a first-class design constraint when implementing massively parallel neural computation systems.
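The Izhikevich model at the heart of this work is compact enough to sketch. Below is a minimal forward-Euler simulation of a single neuron using the standard published regular-spiking parameters; the time step, duration, and input current are illustrative choices, not values from the thesis.

```python
def simulate_izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0,
                        current=10.0, steps=2000, dt=0.5):
    """Simulate one Izhikevich (2003) neuron; return spike times in ms.
    v is the membrane potential (mV), u the membrane recovery variable."""
    v, u = -65.0, -65.0 * b
    spikes = []
    for step in range(steps):
        if v >= 30.0:                 # spike threshold: record and reset
            spikes.append(step * dt)
            v, u = c, u + d
        # Izhikevich dynamics, forward-Euler integrated
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + current)
        u += dt * (a * (b * v - u))
    return spikes

spikes = simulate_izhikevich()
print(len(spikes))   # tonic spiking under constant input
```

The "embarrassingly parallel" part is this per-neuron update; as the abstract stresses, the hard part the FPGA system solves is delivering each spike to thousands of fan-out targets in real-time.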
APA, Harvard, Vancouver, ISO, and other styles
36

Amur, Hrishikesh. "Storage and aggregation for fast analytics systems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50397.

Full text
Abstract:
Computing in the last decade has been characterized by the rise of data-intensive scalable computing (DISC) systems. In particular, recent years have witnessed a rapid growth in the popularity of fast analytics systems. These systems exemplify a trend where queries that previously involved batch-processing (e.g., running a MapReduce job) on a massive amount of data are increasingly expected to be answered in near real-time with low latency. This dissertation addresses the problem that existing designs for various components used in the software stack for DISC systems do not meet the requirements demanded by fast analytics applications. In this work, we focus specifically on two components: 1. Key-value storage: Recent work has focused primarily on supporting reads with high throughput and low latency. However, fast analytics applications require that new data entering the system (e.g., new web-pages crawled, currently trending topics) be quickly made available to queries and analysis codes. This means that along with supporting reads efficiently, these systems must also support writes with high throughput, which current systems fail to do. In the first part of this work, we solve this problem by proposing a new key-value storage system, called the WriteBuffer (WB) Tree, that provides up to 30× higher write performance and similar read performance compared to current high-performance systems. 2. GroupBy-Aggregate: Fast analytics systems require support for fast, incremental aggregation of data with low-latency access to results. Existing techniques are memory-inefficient and do not support incremental aggregation efficiently when aggregate data overflows to disk. In the second part of this dissertation, we propose a new data structure called the Compressed Buffer Tree (CBT) to implement memory-efficient in-memory aggregation. We also show how the WB Tree can be modified to support efficient disk-based aggregation.
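The GroupBy-Aggregate workload the abstract describes can be illustrated with a toy in-memory incremental aggregator: records are absorbed one at a time and partial results stay queryable throughout, with no batch pass. A plain dict stands in for the Compressed Buffer Tree; all names here are invented for illustration.

```python
class IncrementalAggregator:
    """Toy groupby-sum: each insert updates the running aggregate for its key,
    so results are available with low latency at any point in the stream."""

    def __init__(self):
        self.totals = {}

    def insert(self, key, value):
        self.totals[key] = self.totals.get(key, 0) + value

    def query(self, key):
        return self.totals.get(key, 0)

agg = IncrementalAggregator()
for key, value in [("us", 3), ("uk", 5), ("us", 4)]:
    agg.insert(key, value)
print(agg.query("us"))   # 7
```

The dissertation's contribution is making this pattern memory-efficient and workable when the aggregate state overflows to disk, which a flat dict does not address.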
APA, Harvard, Vancouver, ISO, and other styles
37

Shaker, Alfred M. "COMPARISON OF THE PERFORMANCE OF NVIDIA ACCELERATORS WITH SIMD AND ASSOCIATIVE PROCESSORS ON REAL-TIME APPLICATIONS." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1501084051233453.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Root, Eric. "A Re-Configurable Hardware-in-the-Loop Flight Simulator." Ohio University / OhioLINK, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1090939388.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, Liang. "A grid-based middleware for processing distributed data streams." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1157990530.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Johnston, Christopher Troy. "VERTIPH : a visual environment for real-time image processing on hardware : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Systems Engineering at Massey University, Palmerston North, New Zealand." Massey University, 2009. http://hdl.handle.net/10179/1219.

Full text
Abstract:
This thesis presents VERTIPH, a visual programming language for the development of image processing algorithms on FPGA hardware. The research began with an examination of the whole design cycle, with a view to identifying requirements for implementing image processing on FPGAs. Based on this analysis, a design process was developed where a selected software algorithm is matched to a hardware architecture tailor made for its implementation. The algorithm and architecture are then transformed into an FPGA-suitable design. It was found that in most cases the most efficient mapping for image processing algorithms is to use a streamed processing approach. This constrains how data is presented and requires most existing algorithms to be extensively modified. Therefore, the resultant designs are heavily streamed and pipelined. A visual notation was developed to complement this design process, as both streaming and pipelining can be well represented by data flow visual languages. The notation has three views, each of which represents and supports a different part of the design process. An architecture view gives an overview of the design's main blocks and their interconnections. A computational view represents lower-level details by representing each block by a set of computational expressions and low-level controls. This includes a novel visual representation of pipelining that simplifies latency analysis, multiphase design, priming, flushing and stalling, and the detection of sequencing errors. A scheduling view adds a state machine for high-level control of processing blocks. This extends state objects to allow for the priming and flushing of pipelined operations. User evaluations of an implementation of the key parts of this language (the architecture view and the computational view) found that both were generally good visualisations and aided in design (especially the type interface, pipeline and control notations).
The user evaluations provided several suggestions for the improvement of the language, and in particular the evaluators would have preferred to use the diagrams as a verification tool for a textual representation rather than as the primary data capture mechanism. A cognitive dimensions analysis showed that the language scores highly for thirteen of the twenty dimensions considered, particularly those related to making details of the design clearer to the developer.
APA, Harvard, Vancouver, ISO, and other styles
41

Doucet, Nicolas. "Design of an optimized supervisor module for tomographic adaptive optics systems of extremely large telescopes." Thesis, Université de Paris (2019-....), 2019. http://www.theses.fr/2019UNIP7177.

Full text
Abstract:
L'arrivée de la nouvelle génération de télescopes au sol, dénommés les télescopes extrêmement grands (ELT en anglais), marque l'avènement d'une ère de développement d'instruments capables d'exploiter la lumière collectée par des miroirs primaires de taille sans précédent. La communauté astronomique se trouve confrontée à des défis de taille ainsi qu'à des opportunités uniques. Ces défis surviennent avec la différence de complexité des instruments actuels et celle des instruments à venir, évoluant avec le carré du diamètre des télescopes. Les astronomes doivent donc concevoir des technologies permettant d'exploiter pleinement les capacités de ses futurs ELT, et notamment de compenser les effets de la turbulence atmosphérique en temps réel. Ce problème représente une opportunité dans la mesure où la communauté astronomique doit repenser des composants essentiels des systèmes optiques, ainsi que l'écosystème matériel/logiciel traditionnel afin d'assurer une performance optique élevée et un temps de calcul quasi-réel. Pour permettre d'utiliser ces instruments à leur plein potentiel, nous utilisons l'optique adaptative, qui recourt à la tomographie de la turbulence atmosphérique. Le module de supervision est un composant essentiel de ces systèmes calculant le reconstructeur tomographique du système, régulièrement, afin de tenir compte de l'évolution de la turbulence atmosphérique au cours de l'observation. Dans cette thèse, nous implémentons un module de supervision optimisé et évaluons ses performances dans des configurations correspondant au future ELT Européen, le plus grand télescope conçu aujourd'hui avec un diamètre de 40 m. Les calculs nécessaires font intervenir de grandes matrices (i.e., jusqu'à 100k x 100k), obtenues à partir des mesures de plusieurs analyseurs de surface d'onde. 
Afin de faire face à la complexité du problème, nous recourons à des logiciels de calcul haute performance, utilisant des algorithmes de calculs asynchrones, à granularité fine, ainsi que des techniques d'approximation exploitant la structure particulière des matrices. De plus, nous utilisons matériel et logiciel en conjonction afin d'assurer un temps de réponse acceptable pour suivre l'évolution de la structure de la turbulence atmosphérique. Nous démontrons la validité du module de supervision, à l'aide d'un outil de simulation tiers, à l'échelle des ELT, ouvrant la voie au premier prototype installé sur site
The recent advent of next generation ground-based telescopes, code-named Extremely Large Telescopes (ELT), highlights the beginning of a forced march toward an era of deploying instruments capable of exploiting starlight captured by mirrors at an unprecedented scale. This confronts the astronomy community with both a daunting challenge and a unique opportunity. The challenge arises from the mismatch between the complexity of current instruments and their expected scaling with the square of the future telescope diameters, on which astronomy applications have relied to produce better science. To deliver on the promise of tomorrow's ELT, astronomers must design new technologies that can effectively enhance the performance of the instrument at scale, while compensating for the atmospheric turbulence in real-time. This is an unsolved problem. This problem presents an opportunity because the astronomy community is now compelled to rethink essential components of the optical systems and their traditional hardware/software ecosystems in order to achieve high optical performance with a near real-time computational response. In order to realize the full potential of such instruments, we investigate a technique supporting Adaptive Optics (AO), i.e., a dedicated concept relying on turbulence tomography. In particular, a critical part of AO systems is the supervisor module, which is responsible for providing the system with a Tomographic Reconstructor (ToR) at a regular pace, as the atmospheric turbulence evolves over an observation window. In this thesis, we implement an optimized supervisor module and assess it under real configurations of the future European ELT (E-ELT) with a 40 m diameter, the largest and most complex optical telescope ever conceived. This necessitates manipulating large matrix sizes (i.e., up to 100k x 100k) that contain measurements captured by multiple wavefront sensors.
To address the complexity bottleneck, we employ high performance computing software solutions based on cutting-edge numerical algorithms using asynchronous, fine-grained computations as well as approximation techniques that leverage the resulting matrix data structure. Furthermore, GPU-based hardware accelerators are used in conjunction with the software solutions to ensure reasonable time-to-solution to cope with rapidly evolving atmospheric turbulence. The proposed software/hardware solution makes it possible to reconstruct an image with high accuracy. We demonstrate the validity of the AO systems with a third-party testbed simulating at the E-ELT scale, which is intended to pave the way for a first prototype installed on-site
APA, Harvard, Vancouver, ISO, and other styles
42

Giansiracusa, Michelangelo Antonio. "A Secure Infrastructural Strategy for Safe Autonomous Mobile Agents." Queensland University of Technology, 2005. http://eprints.qut.edu.au/16052/.

Full text
Abstract:
Portable languages and distributed paradigms have driven a wave of new applications and processing models. One of the most promising, certainly from its early marketing, but disappointing (from its limited uptake) is the mobile agent execution and data processing model. Mobile agents are autonomous programs which can move around a heterogeneous network such as the Internet, crossing through a number of different security domains, and perform some work at each visited destination as partial completion of a mission for their agent user. Despite their promise as a technology and paradigm to drive global electronic services (i.e. any Internet-driven-and-delivered service, not solely e-commerce related activities), their uptake on the Internet has been very limited. Chief among the reasons for the paradigm's practical under-achievement is that there is no ubiquitous framework for using Internet mobile agents, and non-trivial security concerns abound for the two major stakeholders (mobile agent users and mobile agent platform owners). While both stakeholders have security concerns with the dangers of the mobile agent processing model, most investigators in the field are of the opinion that protecting mobile agents from malicious agent platforms is more problematic than protecting agent platforms from malicious mobile agents. Traditional cryptographic mechanisms are not well-suited to counter the bulk of the threats associated with the mobile agent paradigm due to the untrusted hosting of an agent and its intended autonomous, flexible movement and processing. In our investigation, we identified that the large majority of the research undertaken on mobile agent security to date has taken a micro-level perspective.
By this we mean research focused solely on either of the two major stakeholders, and even then often only on improving measures to address one security issue dear to the stakeholder - for example mobile agent privacy (for agent users) or access control to platform resources (for mobile agent platform owners). We decided to take a more encompassing, higher-level approach in tackling mobile agent security issues. In this endeavour, we developed the beginnings of an infrastructural approach to not only reduce the security concerns of both major stakeholders, but bring them transparently to a working relationship. Strategic utilisation of both existing distributed system trusted third parties (TTPs) and novel mobile agent paradigm-specific TTPs is fundamental in the infrastructural framework we have devised. Besides designing an application- and language-independent framework for supporting a large-scale Internet mobile agent network, our Mobile Agent Secure Hub Infrastructure (MASHIn) proposal encompasses support for flexible access control to agent platform resources. A reliable means to track the location and processing times of autonomous Internet mobile agents is discussed, with fault-tolerant handling support to work around unexpected processing delays. Secure, highly effective (in comparison to existing mechanisms) strategies for providing mobile agent privacy, execution integrity, and stakeholder confidence scores were devised - all of which fit comfortably within the MASHIn framework. We have deliberately considered the interests - without bias - of both stakeholders when designing our solutions. In relation to mobile agent execution integrity, we devised new criteria for assessing the robustness of existing execution integrity schemes.
Whilst none of the existing schemes analysed met a large number of our desired properties for a robust scheme, we identified that the objectives of Hohl's reference states scheme were most admirable - particularly real-time in-mission execution integrity checking. Subsequently, we revised Hohl's reference states protocols to fit in the MASHIn framework, and were able to overcome not only the two major limitations identified in his scheme, but also meet all of our desired properties for a robust execution integrity scheme (given an acceptable decrease in processing efficiency). The MASHIn offers a promising new perspective for future mobile agent security research and indeed a new framework for enabling safe and autonomous Internet mobile agents. Just as an economy cannot thrive without diligent care given to micro and macro-level issues, we do not see the security prospects of mobile agents (and ultimately the prospects of the mobile agent paradigm) advancing without diligent research on both levels.
APA, Harvard, Vancouver, ISO, and other styles
43

Lin, Wei-Chen, and 林威辰. "Parallel Computing on Real-Time Arbitrage-Stratege Trading System." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/54033685195814600133.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Master Program of Mathematical Modeling and Scientific Computing, Department of Applied Mathematics
99
Parallel computing is usually applied to solve hard problems with great computational time complexity due to huge amounts of data or complicated calculations. Parallel computing can significantly improve the computational efficiency of a big problem with weak data dependencies by splitting the problem into many small and independent problems that can be solved in parallel by different computational units. The overall computational time is thus significantly reduced. This research focuses on a rare application of parallel computing: improving the computational performance of a light-weight computational problem in a real-time environment, namely finding arbitrage strategies in derivatives markets. In a competitive environment such as derivatives markets, the speed of searching for arbitrage strategies is critical, since a late arbitrage order almost always fails to get filled. In this thesis, we construct a virtual futures exchange to simulate the real-world futures and options market. The input orders are the historical data provided by the Taiwan Futures Exchange. The two arbitrage strategies adopted in this thesis are modified from the “Convexity of Option Price” (see Robert C. Merton (1973)) and the Put-Call-Futures Parity (see Tucker (1991)), which describe the price relationships between futures and options. Arbitrage opportunities are found if these relationships are violated. We insert two virtual traders, one using the CPU and the other using CUDA, a parallel computing architecture developed by NVIDIA, to find the arbitrage opportunities. The GPU can find more arbitrage opportunities and make more profit than the CPU by equally splitting the workload to achieve load balance. We show that finding arbitrage opportunities with parallel computing can greatly enhance profitability in a real-world financial market.
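The put-call-futures parity relation the thesis builds on, C - P = (F - K) * exp(-rT), can be checked per quote. The toy detector below illustrates the idea only; the prices, rate, and cost threshold are invented, and the thesis's actual strategies also cover the convexity relation and transaction details.

```python
import math

def parity_gap(call, put, future, strike, rate, maturity):
    """Deviation from put-call-futures parity:
    C - P should equal (F - K) * exp(-r * T)."""
    return (call - put) - (future - strike) * math.exp(-rate * maturity)

def has_arbitrage(call, put, future, strike, rate, maturity, cost=0.5):
    """Flag an opportunity when the mispricing exceeds round-trip costs."""
    return abs(parity_gap(call, put, future, strike, rate, maturity)) > cost

# Consistent quotes: C - P = 5 matches (F - K) * exp(-rT) = 5 -> no signal.
print(has_arbitrage(call=12.0, put=7.0, future=105.0, strike=100.0,
                    rate=0.0, maturity=0.25))    # False
# The call is bid up by 3 points -> parity violated beyond costs -> signal.
print(has_arbitrage(call=15.0, put=7.0, future=105.0, strike=100.0,
                    rate=0.0, maturity=0.25))    # True
```

Scanning every strike/expiry combination in a live order book is the embarrassingly parallel step the thesis offloads to the GPU.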
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Wen-Ling, and 陳玟伶. "Real-time Freeway Travel Time Prediction in a Fog-cloud Computing System." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/q7nn45.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Network Engineering
107 (ROC academic year; 2018)
Travel time prediction is an important issue in Intelligent Transportation Systems (ITS); it can be used for traffic control, route planning, and travel guidance. Existing studies on travel time prediction focus on prediction accuracy in cloud environments, which may not meet real-time constraints. To achieve real-time as well as accurate travel time prediction, we propose a freeway Travel Time Prediction system based on a Fog-Cloud computing paradigm (TTP-FC), using a prediction model that combines the long short-term memory (LSTM) model with the gradient boosting regression tree (GBRT) model. Based on data collected from the Traffic Data Collection System (TDCS) in Taiwan, evaluation results show that the average MAPE (mean absolute percentage error) of the proposed TTP-FC is 2.145%, which is less than that (3.443%) of OTTP, a method based on random forests. In addition, the proposed TTP-FC reduces the average response time by 26% compared with a cloud-only implementation.
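For reference, the accuracy figures quoted above are MAPE values, the mean of the absolute percentage errors between predicted and observed travel times. A minimal sketch (not the TTP-FC implementation):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent.
    actual and predicted are equal-length sequences of observed and
    forecast travel times; actual values must be nonzero."""
    if len(actual) != len(predicted) or not actual:
        raise ValueError("need equal-length, non-empty sequences")
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)
```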
APA, Harvard, Vancouver, ISO, and other styles
45

Unsal, Osman Sabri. "System-level power -aware computing in complex real-time and multimedia systems." 2003. https://scholarworks.umass.edu/dissertations/AAI3078723.

Full text
Abstract:
In this thesis, we address post-manufacturing power-aware computing at the system level for real-time and multimedia systems. We divide the system-level domain into four layers: the microarchitectural, compiler, operating system, and network layers. We isolate and examine two main tracks. First, we show in Chapter 2 that current system-level power-saving methods are piecemeal approaches that do not comprehensively address even single issues within the layers. We capitalize on this research gap; our contribution is to consider inter- as well as intra-layer system-level power savings. By this, we mean that we examine energy management across system-layer boundaries: the network/OS or the compiler/hardware layers, for example. Second, although there is some previous research on system-level power issues in real-time systems, much remains to be done, especially in developing new power-aware heuristics specifically for real-time systems. Historically, the main performance metrics in complex real-time systems have been timeliness, determinism, and fault tolerance. Power awareness requires new performance measures: power and energy efficiency. These new metrics imply new approaches and novel heuristics. In line with the above vision, we look into the energy implications of task assignment and scheduling algorithms, communication protocols, network topology, data redundancy, fault tolerance, predictability, node architecture, compilers, and operating systems, all of which span multiple layers and are important concerns in real-time and embedded systems. This thesis is organized as follows: In Chapter 1, we introduce the problem and develop our approach. In Chapter 2, we survey previous work, exposing the void that we aim to fill. In Chapter 3, we report on our work at the Network/Operating System (OS) layers. Chapter 4 discusses power-aware fault tolerance, an OS-level contribution.
In Chapter 5, we lay the ground for the compiler-related aspects of our analysis. Chapters 6 and 7 discuss two compiler-microarchitectural level power-aware data-cache designs. Chapter 8 introduces a microarchitectural-level fetch-throttling scheme. We conclude with future work in Chapter 9.
APA, Harvard, Vancouver, ISO, and other styles
46

Yung-Wen, Lee, and 李永文. "A File Transmission Algorithm for Distributed Real-Time Computing System." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/04967173302928920216.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Computer Science and Information Engineering
82 (ROC academic year; 1993)
Distributed real-time systems have been applied in various life-critical application domains, including the military, industrial manufacturing, and medical-care sectors. One important issue in a distributed real-time system is the response time of the programs to be executed. If the response time is not controlled effectively, it may lead to unexpected catastrophes when used in life-critical applications. The response time of an executed program can be divided into two parts: the time to obtain all of the files required and the time to execute the program. In general, the time to execute a program can be considered constant, so to reduce the response time we must efficiently access all of the files the program requires. In this thesis, we propose a new algorithm to transmit the files required by a program so as to reduce the response time. The proposed algorithm is compared with existing approaches to demonstrate its advantages in both speed and ease of implementation.
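The two-part response-time model described above can be sketched as follows. This is an illustrative model only, assuming replicated files, a fastest-replica choice per file, and fully overlapping transfers; it is not the thesis's algorithm:

```python
def response_time(file_sources, exec_time):
    """Estimate a program's response time as file-gathering time plus its
    (assumed constant) execution time.
    file_sources maps each required file to the list of transfer times
    offered by the nodes holding a replica. Fetching each file from its
    fastest replica, with transfers overlapping, the gathering phase lasts
    as long as the slowest file."""
    fetch = max(min(times) for times in file_sources.values()) if file_sources else 0.0
    return fetch + exec_time
```

Under this model, reducing response time means shortening the slowest file's best transfer, which is why the choice of transmission algorithm matters.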
APA, Harvard, Vancouver, ISO, and other styles
47

Kazarian, Jason Paul. "Automating non-functional design for net-centric real-time systems /." 2007. http://proquest.umi.com/pqdweb?did=1453205821&sid=14&Fmt=2&clientId=10361&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Chian-Chung, and 王健忠. "Preliminary Study on the Cloud Computing System for Real-time Flood Forecasting." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/72840291609113058570.

Full text
Abstract:
Master's thesis
National Taiwan Ocean University
Department of Harbor and River Engineering
102 (ROC academic year; 2013)
In urban development, the increasing area of impermeable pavement prevents rainwater from infiltrating into the ground. Because the ground-surface roughness decreases, the surface runoff and peak flow increase and the time of concentration decreases. In addition, because extreme rainfall and typhoon events have increased in frequency in recent years, relying solely on structural measures, such as building embankments and water pumping stations, cannot thoroughly solve flood disaster problems. Therefore, governments at all levels have adopted integrated flood management approaches to control floods. In addition to emphasizing structural watershed management, governments must focus on establishing flood forecast systems as nonstructural measures to reduce the loss of life and property. Flood prevention operations and contingency response rely primarily on timeliness and effectiveness. Although hydrologic and hydraulic simulation analysis has matured, it cannot by itself provide real-time flood wave propagation and contingency information to on-site flood-prevention personnel. Therefore, cloud computing technology was used to plan and establish a prototype platform for a real-time flood forecast cloud computing system. Based on a service-oriented architecture, Web services and XML were adopted as the development standards, GlassFish was used as the system server, and a J2EE multitier distributed architecture was used to develop the platform and execute its various functional modules; thus, a system that conforms to the open geographic information system architecture and exhibits openness and cross-platform functionality was developed. Users can use any device to log into the platform at any time and location.
Once logged in, they are immediately linked to functional modules, such as the administrative system management, flood information, flood routing, and precautionary management modules, that integrate real-time catchment and river flood information and provide background information for making flood contingency decisions. In this study, a preliminary flood routing model was established and the rainfall collected during Typhoon Nari was used as the input. The calculated results were qualitatively close to the actual measurements, so the routing model can serve as a basis for flood routing. The river flood information module was combined with real-time river flow rate measurement technology developed by the Laboratory of Marine Surveying of the Department of Harbor and River Engineering, National Taiwan Ocean University. Through real-time flow rate observation and measurement data, users can indirectly infer the river discharge and immediately adjust the flood routing results in response. If a stage-discharge rating curve can be established in the future, the calculation efficiency and accuracy can be further improved.
APA, Harvard, Vancouver, ISO, and other styles
49

Olander, Peter Andrew. "Built-in tests for a real-time embedded system." Thesis, 1991. http://hdl.handle.net/10413/5680.

Full text
Abstract:
Beneath the facade of the applications code of a well-designed real-time embedded system lies intrinsic firmware that facilitates a fast and effective means of detecting and diagnosing inevitable hardware failures. These failures can encumber the availability of a system, so the source of the malfunction must be identified. It is shown that the number of possible origins of all manner of failures is immense. As a result, fault models are contrived to encompass prevalent hardware faults. Furthermore, the complexity is reduced by determining syndromes for particular circuitry and applying test vectors at a functional block level. Testing phases and philosophies, together with standardisation policies, are defined to ensure the compliance of system designers with the underlying principles of evaluating system integrity. The three testing phases of power-on self tests at system start-up, on-line health monitoring, and off-line diagnostics are designed to ensure that the inherent test firmware remains inconspicuous during normal operation. The prominence of the code is, however, apparent on the detection or diagnosis of a hardware failure. The validity of the theoretical models, standardisation policies, and built-in test philosophies is illustrated by means of their application to an intricate real-time system. The architecture and the software design implementing these ideas are described extensively. Standardisation policies, enhanced by the proposition of generic tests for common core components, are advocated at all hierarchical levels. The presentation of the integration of the hardware and software is aimed at portraying the moderately complex nature of the task of generating a set of built-in tests for a real-time embedded system. In spite of generic policies, the intricacies of the architecture are found to have a direct influence on software design decisions.
It is thus concluded that the diagnostic objectives of the user requirements specification must be lucidly expressed by both operational and maintenance personnel for all testing phases. Disparity may exist between the system designer and the end user in their understanding of the requirements specification defining the objectives of the diagnosis. It is thus essential that the two parties collaborate completely throughout the development life cycle, especially during the preliminary design phase. Thereafter, the designer is able to decide on the sophistication of the system's testing capabilities.
Thesis (M.Sc.)-University of Natal, Durban, 1991.
APA, Harvard, Vancouver, ISO, and other styles
50

Siao, Wei-jhong, and 蕭為中. "GPU-Accelerated High Performance Computing System: Application to Real-Time Functional Magnetic Resonance Imaging." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/e448bc.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
99 (ROC academic year; 2010)
This study builds a real-time functional magnetic resonance imaging (rtfMRI) system to monitor the blood-oxygen-level-dependent (BOLD) signal during an fMRI experiment. To detect BOLD signal changes, a Gaussian filter and a general linear model (GLM) analysis were performed on MRI images immediately after image acquisition. A graphics processing unit (GPU) with massively parallel computation kernels was used to accelerate the image processing (i.e., the Gaussian filter and the GLM analysis). The GPU program was made compatible with the MATLAB environment through a communication interface between MATLAB and C. Using GPU computation, the rtfMRI analysis could be accomplished in less than 1 second on a conventional personal computer.
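The GLM step in such a pipeline amounts to a voxel-wise least-squares fit of a design matrix (task regressors plus a constant) to the BOLD time series. A minimal CPU sketch in NumPy, illustrative only and not the GPU/MATLAB implementation described:

```python
import numpy as np

def glm_activation(bold, design):
    """Ordinary least-squares GLM fit: estimate beta in bold ~ design @ beta.
    bold:   (n_scans, n_voxels) BOLD signal matrix.
    design: (n_scans, n_regressors) design matrix.
    Returns the (n_regressors, n_voxels) beta map whose task-regressor rows
    indicate activation strength per voxel."""
    beta, *_ = np.linalg.lstsq(design, bold, rcond=None)
    return beta
```

Because each voxel's fit is independent, this computation maps naturally onto the massively parallel GPU kernels the thesis uses for acceleration.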
APA, Harvard, Vancouver, ISO, and other styles
