
Dissertations / Theses on the topic 'Worst Case Circuit Analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Worst Case Circuit Analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Cakir, Sinan. "Tolerance Based Reliability Of An Analog Electric Circuit." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12612929/index.pdf.

Full text
Abstract:
This thesis deals with the reliability analysis of a fuel pump driver circuit (FPDC), which regulates the amount of fuel pumped to a turbojet engine. Reliability analysis in such critical circuits has great importance, since unexpected failures may cause serious financial loss and even human death. In this study, two types of reliability analysis are used: "Worst Case Circuit Tolerance Analysis" (WCCTA) and "Failure Modes and Effects Analysis" (FMEA). WCCTA involves analyzing circuit operation while the parameters vary within their tolerance bands; these parameters include the resistances of the resistors, the operating temperature and the input voltage. The operation of the FPDC is checked and the most critical parameters are determined under worst-case conditions. In performing WCCTA, a method that guarantees the exact worst-case conditions is used, rather than probabilistic methods like Monte Carlo analysis. The results showed that the parameter variations do not affect the circuit operation unfavorably; the operating temperature, input voltage variation and tolerance bands for the resistances are fairly compatible with the circuit operation. FMEA is implemented with respect to the short-circuit and open-circuit failures of all the electronic components used in the FPDC. The components whose failure has a catastrophic effect on circuit operation have been determined, and preventive actions have been offered for some catastrophic failures.
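
The contrast the author draws between exact worst-case analysis and Monte Carlo sampling can be illustrated with corner (vertex) analysis: when the circuit response is monotonic in each parameter, evaluating every combination of tolerance-band extremes is guaranteed to bracket the true worst case. A minimal Python sketch on a hypothetical voltage divider; the component values and tolerances are illustrative assumptions, not taken from the thesis:

```python
# Minimal sketch of exact worst-case (corner) tolerance analysis, as opposed
# to Monte Carlo sampling. Circuit, values and tolerances are assumptions.
from itertools import product

def divider_output(vin, r1, r2):
    """Output of a resistive voltage divider."""
    return vin * r2 / (r1 + r2)

# parameter: (nominal value, relative tolerance)
params = {
    "vin": (28.0, 0.10),   # supply voltage, +/-10%
    "r1":  (10e3, 0.05),   # +/-5% resistor
    "r2":  (4.7e3, 0.05),  # +/-5% resistor
}

# For a response monotonic in each parameter, the extrema lie at
# tolerance-band corners, so checking every corner is exact.
corners = product(*[(nom * (1 - tol), nom * (1 + tol))
                    for nom, tol in params.values()])
outputs = [divider_output(*c) for c in corners]
print(f"worst-case low:  {min(outputs):.3f} V")
print(f"worst-case high: {max(outputs):.3f} V")
```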
APA, Harvard, Vancouver, ISO, and other styles
2

Lingasubramanian, Karthikeyan. "Probabilistic Error Analysis Models for Nano-Domain VLSI Circuits." Scholar Commons, 2010. https://scholarcommons.usf.edu/etd/1699.

Full text
Abstract:
Technology scaling to the nanometer levels has paved the way to realize multi-dimensional applications in a single product by increasing the density of the electronic devices on integrated chips. This has naturally attracted a wide variety of industries, such as medicine, communication, automobile, defense and even household appliances, to use high-speed multi-functional computing machines. Apart from the advantages of these nano-domain computing devices, their usage in safety-centric applications like implantable biomedical chips and automobile safety has immensely increased the need for comprehensive error analysis to enhance their reliability. Moreover, these nano-electronic devices have an increased propensity to transient errors due to extremely small device dimensions and low switching energy. The nature of these transient errors is more probabilistic than deterministic, and so requires probabilistic models for estimation and analysis. In this dissertation, we present comprehensive analytic studies of error behavior in nano-level digital logic circuits using probabilistic reliability models. The work comprises the design of exact probabilistic error models to compute the maximum error over the entire input space in a circuit-specific manner, to study the behavior of transient errors in sequential circuits, and to achieve error mitigation through redundancy techniques. The model to compute maximum error also provides the worst-case input vector, which has the highest probability to generate an erroneous output, for any given logic circuit. The model for sequential logic, which can measure the expected output error probability given a probabilistic input space, can account for both spatial dependencies and temporal correlations across the logic, using a time-evolving causal network. For comprehensive error reduction in logic circuits, temporal, spatial and hybrid redundancy models are implemented. The temporal redundancy model uses the triple temporal redundancy technique that applies redundancy in the input space, the spatial redundancy model uses the cascaded triple modular redundancy technique that applies redundancy in the intermediate signal space, and the hybrid redundancy technique encapsulates both temporal and spatial redundancy schemes. All the above studies are performed on standard benchmark circuits from the ISCAS and MCNC suites. The experimental results encompass the various aspects of error behavior in nano VLSI circuits and show the efficiency and versatility of the probabilistic error models.
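
The notion of a circuit-specific maximum error and worst-case input vector can be illustrated in miniature: flip each gate's output independently with probability ε, compute the exact output error probability by enumerating flip events, and search the input space for the maximizing vector. The two-gate circuit and ε below are illustrative assumptions:

```python
# Hedged sketch of circuit-specific maximum-error estimation: each gate's
# output is flipped independently with probability EPS, and we search the
# input space for the vector with the highest output error probability.
from itertools import product

EPS = 0.05  # per-gate transient error probability (assumed)

def circuit(a, b, c):
    """Reference (error-free) circuit: out = (a AND b) OR c."""
    return (a & b) | c

def output_error_probability(a, b, c, eps=EPS):
    """Exact output error probability via enumeration of gate-flip events."""
    p_err = 0.0
    for f_and, f_or in product([0, 1], repeat=2):  # flip events per gate
        p = (eps if f_and else 1 - eps) * (eps if f_or else 1 - eps)
        g = (a & b) ^ f_and                         # possibly faulty AND
        out = (g | c) ^ f_or                        # possibly faulty OR
        if out != circuit(a, b, c):
            p_err += p
    return p_err

worst = max(product([0, 1], repeat=3), key=lambda v: output_error_probability(*v))
print("worst-case input vector:", worst,
      "error probability:", round(output_error_probability(*worst), 4))
```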
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Wenfei. "Worst-case Analysis of Space Systems." Thesis, University of Exeter, 2011. http://hdl.handle.net/10036/3550.

Full text
Abstract:
Worst-case analysis is one of the most important elements in the verification and validation process used to ensure the reliable operation of safety-critical systems for defence, aerospace and space applications. In this thesis, an optimization-based worst-case analysis framework is developed for space applications. The proposed framework has been applied and successfully validated on a number of European Space Agency funded research projects in the areas of flexible satellites, hypersonic re-entry vehicles, and autonomous rendezvous systems. Firstly, the problem of analyzing the robustness of an Attitude and Orbital Control System (AOCS) for a flexible scientific satellite with a large number of uncertainties is considered. The analysis employs a detailed simulation model of a flexible satellite and multivariable controller, together with a number of frequency and time domain performance criteria which are commonly used by the space industry to verify correct functionality of full-authority multivariable satellite control systems. Second, the flying qualities analysis of a re-entry vehicle is investigated for a number of complex scenarios involving different types of uncertainties and disturbances. Specific methods are utilized to deal with analysis problems involving probabilistic uncertainties, physically correlated uncertainties and highly dynamical disturbances. In another study, an integrated analytical/optimization-based analysis framework is proposed for the robustness analysis of the AOCS for a telecoms satellite with flexible appendages. We develop detailed Linear Fractional Transformation (LFT)-based models of the uncertainties present in a modern telecom satellite and apply µ-analysis to these models in order to generate robustness guarantees. We validate these models and results by cross-checking them against worst-case analysis results produced by global optimization algorithms applied to the original system model. Finally, the optimization-based framework developed in this thesis is employed to analyze the robustness of the Guidance, Navigation and Control (GNC) system for autonomous spacecraft. This study considers the autonomous rendezvous problem over the terminal flight phase in the presence of a large number of realistic parametric uncertainties and a number of safety criteria related to the capture specification. An integrated analytical/optimization-based approach was also developed for this problem so that the computational cost of simulation-based analyses can be reduced, through leveraging results from robust control tools such as µ-analysis. The main contributions of the thesis are (a) to provide convincing demonstrations of the usefulness of optimization-based worst-case analysis on a number of different space applications, each of which involves highly complex simulators developed by leading industrial companies from the European space sector, and (b) to show how optimization-based analysis methods may be combined with analytical tools from robust control theory to create a more integrated, efficient and reliable verification and validation process for space applications.
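
A minimal sketch of the optimization-based worst-case search the abstract describes: a global optimizer drives uncertain parameters within their bounds to maximize the violation of a performance criterion. Here scipy's differential evolution maximizes the step-response overshoot of a toy second-order model; the model, criterion and bounds are illustrative assumptions standing in for the industrial simulators:

```python
# Hedged sketch: global optimization searching uncertain parameters for the
# worst-case value of a performance criterion. Toy model, not the thesis's.
import numpy as np
from scipy.optimize import differential_evolution

def step_overshoot(params):
    """Percent overshoot of x'' + 2*z*w*x' + w^2*x = w^2 (unit step)."""
    zeta, omega = params
    t = np.linspace(0.0, 20.0 / omega, 2000)
    wd = omega * np.sqrt(1.0 - zeta**2)
    x = 1.0 - np.exp(-zeta * omega * t) * (np.cos(wd * t)
        + zeta / np.sqrt(1.0 - zeta**2) * np.sin(wd * t))
    return 100.0 * (x.max() - 1.0)

bounds = [(0.2, 0.7), (0.8, 1.2)]  # assumed uncertainty ranges (zeta, omega)
# Maximize the overshoot criterion = minimize its negative.
res = differential_evolution(lambda p: -step_overshoot(p), bounds, seed=0)
print(f"worst-case overshoot {-res.fun:.1f}% "
      f"at zeta={res.x[0]:.3f}, omega={res.x[1]:.3f}")
```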
APA, Harvard, Vancouver, ISO, and other styles
4

Marref, Amine. "Predicated Worst Case Execution Time Analysis." Thesis, University of York, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.507541.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Engblom, Jakob. "Processor Pipelines and Static Worst-Case Execution Time Analysis." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2002. http://publications.uu.se/theses/91-554-5228-0/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Edgar, Stewart Frederick. "Estimation of worst-case execution time using statistical analysis." Thesis, University of York, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434164.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Reutemann, Ralf Dieter. "Worst-case execution time analysis for dynamic branch predictors." Thesis, University of York, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.444749.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Al-Tarawneh, Mutaz. "Worst-case performance analysis of low-power instruction caches /." Available to subscribers only, 2008. http://proquest.umi.com/pqdweb?did=1594486421&sid=9&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ermedahl, Andreas. "A Modular Tool Architecture for Worst-Case Execution Time Analysis." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-3502.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Shi, Zhenwu. "Non-worst-case response time analysis for real-time systems design." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51827.

Full text
Abstract:
A real-time system is a system such that the correctness of operations depends not only on the logical results, but also on the time at which these results are available. A fundamental problem in designing real-time systems is to analyze the response time of operations, defined as the time elapsed from the moment when the operation is requested to the moment when the operation is completed. Response time analysis is challenging due to the complex dynamics among operations. A common technique is to study response time under a worst-case scenario. However, using worst-case response time may lead to conservative real-time system designs. To improve real-time system design, we analyze the non-worst-case response time of operations and apply these results in the design process. The main contribution of this thesis includes mathematical modeling of real-time systems, calculation of non-worst-case response times, and improved real-time system design. We perform analysis and design on three common types of real-time systems: the real-time computing system, the real-time communication network, and real-time energy management. For real-time computing systems, our non-worst-case response time analysis leads to a necessary and sufficient online schedulability test and a measure of robustness of real-time systems. For the real-time communication network, our non-worst-case response time analysis improves the performance of model predictive control designs based on the real-time communication network. For real-time energy management, we use the non-worst-case response time to check whether a micro-grid can operate independently from the main grid.
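
The classical worst-case counterpart that this thesis relaxes is the fixed-point response-time recurrence for preemptive fixed-priority tasks, R_i = C_i + Σ_{j ∈ hp(i)} ⌈R_i / T_j⌉ · C_j. A minimal sketch in Python; the task set is an illustrative assumption:

```python
import math

def worst_case_response_time(tasks, i):
    """Classic fixed-point iteration R = C_i + sum(ceil(R/T_j)*C_j) over
    higher-priority tasks; tasks = [(C, T)], highest priority first,
    with implicit deadlines equal to periods."""
    c_i, t_i = tasks[i]
    r = c_i
    while True:
        r_next = c_i + sum(math.ceil(r / t) * c for c, t in tasks[:i])
        if r_next == r:
            return r if r <= t_i else None  # None: deadline missed
        if r_next > t_i:
            return None
        r = r_next

# Illustrative task set: (execution time C, period T).
tasks = [(1, 4), (2, 6), (3, 12)]
print([worst_case_response_time(tasks, i) for i in range(len(tasks))])
# -> [1, 3, 10]
```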
APA, Harvard, Vancouver, ISO, and other styles
11

Etscheid, Michael [Verfasser]. "Beyond Worst-Case Analysis of Max-Cut and Local Search / Michael Etscheid." Bonn : Universitäts- und Landesbibliothek Bonn, 2018. http://d-nb.info/1167857003/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Urquhart, Luke Dominic Mark. "Worst-case resource-usage analysis of Java Card classic editions application bytecode." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/42538.

Full text
Abstract:
Java Card is the dominant smartcard technology in use today, with over 12 billion Java Card smartcards having shipped globally in the last 15 years. Almost exclusively, the deployed Java Card smartcards are instances of a Classic edition, for which garbage collection is an optional component even in the most recent Classic edition. Poorly written or malicious Java Card applications may drain the available memory of a Java Card Virtual Machine to the point that the card becomes unusable, and undisciplined use of the transaction mechanism may exhaust the available transaction buffers, resulting in programmatic abort by the Java Card Runtime Environment and so limiting the range of services a Java Card application may successfully be able to offer. Given the size and global nature of the user base, and the commercial importance of Java Card, there is a stunning lack of tools supporting analysis or certification of the memory, transactional or CPU usage of Java Card applications. In this thesis we present a worst-case resource-usage analysis tool for Java Card which is capable of producing worst-case memory-usage and worst-case execution-time estimates for Java Card applications (also known as applets). Our main theoretical contribution is a static analysis for Java Card applets at the bytecode level which conservatively approximates properties of interest affecting memory usage, input-output/APDU usage and transaction usage. Our static analysis provides the high-level information for subsequent worst-case resource-usage analysis in our tool, which exploits well-known results and techniques from hard real-time systems. We generate a resource-usage graph per registered applet lifecycle method, with the entry point as the start node and the control flow returning to the Java Card Runtime Environment as the final node. We use the Implicit Path Enumeration Technique to generate and solve Integer Linear Programming problems representing the worst-case memory usage and worst-case execution time.
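
The Implicit Path Enumeration Technique mentioned in the abstract casts WCET as an integer linear program: maximize Σ_b c_b·x_b over basic-block execution counts x_b, subject to control-flow conservation and loop bounds. The sketch below solves the same maximization by brute-force path enumeration on a tiny, invented control-flow graph; real tools hand the ILP to a solver:

```python
# Toy WCET estimate in the spirit of IPET: maximize total cost over all
# feasible paths subject to a loop bound. Graph, costs and the bound are
# invented; production tools formulate this as an ILP instead.
COST = {"entry": 1, "test": 1, "body": 5, "exit": 1}  # per-block cycle cost
LOOP_BOUND = 3  # maximum executions of the test -> body back edge

def paths():
    """All paths entry -> (test, body)^k -> test -> exit, k <= LOOP_BOUND."""
    for k in range(LOOP_BOUND + 1):
        yield ["entry"] + ["test", "body"] * k + ["test", "exit"]

wcet_path = max(paths(), key=lambda p: sum(COST[b] for b in p))
print("WCET estimate:", sum(COST[b] for b in wcet_path))
print("worst-case path:", " -> ".join(wcet_path))
```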
APA, Harvard, Vancouver, ISO, and other styles
13

Plociennik, Kai. "From Worst-Case to Average-Case Efficiency – Approximating Combinatorial Optimization Problems." Doctoral thesis, Universitätsbibliothek Chemnitz, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-65314.

Full text
Abstract:
In theoretical computer science, various notions of efficiency are used for algorithms. The most commonly used notion is worst-case efficiency, which is defined by requiring polynomial worst-case running time. Another commonly used notion is average-case efficiency for random inputs, which is roughly defined as having polynomial expected running time with respect to the random inputs. Depending on the actual notion of efficiency one uses, the approximability of a combinatorial optimization problem can be very different. In this dissertation, the approximability of three classical combinatorial optimization problems, namely Independent Set, Coloring, and Shortest Common Superstring, is investigated for different notions of efficiency. For the three problems, approximation algorithms are given, which guarantee approximation ratios that are unachievable by worst-case efficient algorithms under reasonable complexity-theoretic assumptions. The algorithms achieve polynomial expected running time for different models of random inputs. On the one hand, classical average-case analyses are performed, using totally random input models as the source of random inputs. On the other hand, probabilistic analyses are performed, using semi-random input models inspired by the so-called smoothed analysis of algorithms. Finally, the expected performance of well-known greedy algorithms for random inputs from the considered models is investigated. Also, the expected behavior of some properties of the random inputs themselves is considered.
APA, Harvard, Vancouver, ISO, and other styles
14

Mao, Jia. "On the design and worst-case analysis of certain interactive and approximation algorithms." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2007. http://wwwlib.umi.com/cr/ucsd/fullcit?p3244382.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2007.
Title from first page of PDF file (viewed February 12, 2007). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 109-113).
APA, Harvard, Vancouver, ISO, and other styles
15

Hu, Yu-Shing. "A portable worst-case execution time analysis framework for real-time Java architectures." Thesis, University of York, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.423749.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Abdallah, Laure. "Worst-case delay analysis of core-to-IO flows over many-cores architectures." Phd thesis, Toulouse, INPT, 2017. http://oatao.univ-toulouse.fr/17836/1/abdallah_2.pdf.

Full text
Abstract:
Many-core architectures are more promising hardware for designing real-time systems than multi-core systems, as they should enable an easier mastered integration of a higher number of applications, potentially of different criticality levels. In embedded real-time systems, these architectures will be integrated within backbone Ethernet networks, as they mostly provide Ethernet controllers as Input/Output (I/O) interfaces. Thus, a number of applications of different criticality levels could be allocated on the Network-on-Chip (NoC) and required to communicate with sensors and actuators. However, the worst-case behavior of the NoC for both inter-core and core-to-I/O communications must be established. Several NoCs targeting hard real-time systems, made of specific hardware extensions, have been designed. However, none of these extensions are currently available in commercially available NoC-based many-core architectures, which instead rely on wormhole switching with round-robin arbitration. Using this switching strategy, interference patterns can occur between direct and indirect flows on many-cores. Besides, the mapping over the NoC of both critical and non-critical applications has an impact on the network contention these core-to-I/O communications exhibit. These core-to-I/O flows (coming from the Ethernet interface of the NoC) cross two networks of different speeds: NoC and Ethernet. On the NoC, the size of allowed packets is much smaller than the size of Ethernet frames. Thus, once an Ethernet frame is transmitted over the NoC, it will be divided into many packets. Only when all the data corresponding to this frame are received by the DDR-SDRAM memory on the NoC is the frame removed from the buffer of the Ethernet interface. In addition, the congestion on the NoC, due to wormhole switching, can delay these flows, and the buffer in the Ethernet interface has a limited capacity. This behavior may therefore lead to the problem of dropped Ethernet frames. The idea is thus to analyze the worst-case transmission delays on the NoC and reduce the delays of the core-to-I/O flows. In this thesis, we show that the pessimism of the existing Worst-Case Traversal Time (WCTT) computing methods and the existing mapping strategies leads to dropped Ethernet frames due to internal congestion in the NoC. We then demonstrate properties of such NoC-based wormhole networks that reduce the pessimism when modeling flows in contention, and we propose a mapping strategy that minimizes the contention of core-to-I/O flows in order to solve this problem. We show that the WCTT values can be reduced by up to 50% compared to current state-of-the-art real-time packet schedulability analysis. These results are due to the modeling of the real impact of the flows in contention in our proposed computing method. Besides, experimental results on real avionics applications show significant improvements of core-to-I/O flow transmission delays, up to 94%, without significantly impacting the transmission delays of core-to-core flows. These improvements are due to our mapping strategy, which allocates the applications in such a way as to reduce the impact of non-critical flows on critical flows. These reductions in the WCTT of the core-to-I/O flows avoid dropping Ethernet frames.
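
The frame-segmentation effect the abstract describes can be quantified with simple arithmetic: a frame is released from the Ethernet interface buffer only after all of its NoC packets reach memory, so buffer pressure grows with the packet count times the per-packet worst-case traversal time. A back-of-the-envelope sketch, with all sizes and delays as illustrative assumptions rather than figures from the thesis:

```python
import math

# Back-of-the-envelope model of Ethernet-frame segmentation on a NoC.
# All sizes and delays are illustrative assumptions, not platform figures.
ETH_FRAME_BYTES = 1518         # maximal standard Ethernet frame
NOC_PACKET_PAYLOAD_BYTES = 64  # payload carried by one NoC packet
WCTT_PER_PACKET_US = 2.5       # assumed worst-case traversal time per packet
ETH_BUFFER_FRAMES = 4          # frames the interface buffer can hold
ARRIVAL_PERIOD_US = 8.0        # assumed minimum inter-frame arrival time

packets = math.ceil(ETH_FRAME_BYTES / NOC_PACKET_PAYLOAD_BYTES)
service_us = packets * WCTT_PER_PACKET_US  # serialized worst case per frame
print(f"{packets} NoC packets per frame, "
      f"buffer slot freed after <= {service_us:.0f} us")

# If worst-case service time exceeds the arrival period, the backlog grows
# by (service/arrival - 1) frames per arrival until frames are dropped.
growth = service_us / ARRIVAL_PERIOD_US - 1.0
if growth > 0:
    print(f"overflow after ~{ETH_BUFFER_FRAMES / growth:.0f} frame arrivals")
```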
APA, Harvard, Vancouver, ISO, and other styles
17

Preda, Valentin. "Robust microvibration control and worst-case analysis for high pointing stability space missions." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0785/document.

Full text
Abstract:
The general context of this thesis concerns the global design optimization of future Earth-observation satellites and scientific missions requiring very high pointing stability (the satellite's ability to hold its line of sight). More specifically, the work concerns the active control of microvibration modes. In an Earth-observation satellite mission, image quality obviously depends on the optical instruments (mirror diameter, optical aberrations and polishing quality), but also on the stability of the satellite's line of sight, which may be degraded by microvibrations. These microvibrations originate in the satellite's various rotating devices, such as the solar-array drive mechanisms or the attitude-control reaction wheels. Controlling microvibrations is thus a technological challenge, leading ESA and the industrial actors of the space sector to treat this problem as a high priority for the development of next-generation Earth-observation satellites. There are currently two fundamental principles of microvibration control: • passive control: the strategy consists in introducing structural design measures and special materials that minimize the transmission of vibrations to the environment; • active control: the concept is entirely different: the idea is to block the microvibration by exerting an artificially created counter-vibration whose properties oppose the unwanted vibration at every instant, so that their sum is zero. The space industry addresses this problem by placing elastomer isolators near each microvibration source. This solution, which has proven itself on the many satellites currently in orbit that use it, rejects a large share of microvibrations. Unfortunately, the increasingly demanding line-of-sight stability requirements of future Earth-observation missions such as GAIA make the passive approach insufficient. ESA and Airbus Defence and Space therefore collaborated with the ARIA team, through this thesis, on research in active control to overcome these problems. The objective is to couple the passive and active approaches so as to reject microvibrations at both high frequencies (the existing passive approach) and low frequencies (the active approach that is the subject of this thesis).
Next generation satellite missions will have to meet extremely challenging pointing stability requirements. Even low levels of vibration can introduce enough jitter in the optical elements to cause a significant reduction in image quality. The success of these projects is therefore constrained by the ability of on-board vibration isolation and optical control techniques to keep the structural elements of the spacecraft stable in the presence of external and internal disturbances. In this context, the research work presented in this thesis combines the expertise of the European Space Agency (ESA), the industry (Airbus Defence and Space) and the IMS laboratory (laboratoire de l'Intégration du Matériau au Système) with the aim of developing a new generation of robust microvibration isolation systems for future space observation missions. More precisely, the thesis presents the development of an Integrated Modeling, Control and Analysis framework in which to conduct advanced studies related to reaction wheel microvibration mitigation. The thesis builds upon the previous research conducted by Airbus Defence and Space and ESA on the use of mixed active/passive microvibration mitigation techniques, and provides a complete methodology for the uncertainty modeling, robust control system design and worst-case analysis of such systems for a typical satellite observation mission. It is shown how disturbances produced by mechanical spinning devices such as reaction wheels can be significantly attenuated in order to improve the pointing stability of the spacecraft, even in the presence of model uncertainty and other nonlinear phenomena. Finally, the work introduces a new disturbance model for the multi-harmonic perturbation spectrum produced by spinning reaction wheels that is suitable for both controller synthesis and worst-case analysis using modern robust control tools. This model is exploited to provide new ways of simulating the image distortions induced by such disturbances.
APA, Harvard, Vancouver, ISO, and other styles
18

Menon, Prathyush Purushothama. "Optimisation-based worst-case analysis and anti-windup synthesis for uncertain nonlinear systems." Thesis, University of Leicester, 2007. http://hdl.handle.net/2381/30245.

Full text
Abstract:
This thesis describes the development and application of optimisation-based methods for worst-case analysis and anti-windup synthesis for uncertain nonlinear systems. The worst-case analysis methods developed in the thesis are applied to the problem of nonlinear flight control law clearance for highly augmented aircraft. Local, global and hybrid optimisation algorithms are employed to evaluate worst-case violations of a nonlinear response clearance criterion, for a highly realistic aircraft simulation model and flight control law. The reliability and computational overheads associated with different optimisation algorithms are compared, and the capability of optimisation-based approaches to clear flight control laws over continuous regions of the flight envelope is demonstrated. An optimisation-based method for computing worst-case pilot inputs is also developed, and compared with current industrial approaches for this problem. The importance of explicitly considering uncertainty in aircraft parameters when computing worst-case pilot demands is clearly demonstrated. Preliminary results on extending the proposed framework to the problems of limit-cycle analysis and robustness analysis in the presence of time-varying uncertainties are also included. A new method for the design of anti-windup compensators for nonlinear constrained systems controlled using nonlinear dynamics inversion control schemes is presented and successfully applied to some simple examples. An algorithm based on the use of global optimisation is proposed to design the anti-windup compensator. Some conclusions are drawn from the results of the research presented in the thesis, and directions for future work are identified.
APA, Harvard, Vancouver, ISO, and other styles
19

Traulsen, Claus [Verfasser]. "Reactive processing for synchronous languages and its worst case reaction time analysis / Claus Traulsen." Kiel : Universitätsbibliothek Kiel, 2010. http://d-nb.info/1020002255/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Haugli, Fredrik Bakkevig. "Using online worst-case execution time analysis and alternative tasks in real time systems." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for teknisk kybernetikk, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-26100.

Full text
Abstract:
As embedded hardware becomes more powerful, it allows for more complex real-time systems running tasks with highly dynamic execution times. This dynamicity makes the already formidable task of producing accurate WCET analysis even more difficult. Since the variation in execution time depends on task input and the state of the system, it is postulated that a more accurate estimate for the WCET can be found online with knowledge about the task parameters. This thesis explores the concept of online execution time analysis and its potential utilization. Line detection in images through the Hough line transform is found to be a relevant application whose execution time can be estimated from the contrast of the input image. A system for scheduling tasks utilizing their online WCET estimate is then discussed. It dynamically checks for potential deadline misses and degrades tasks, either by running a more efficient alternative task instead or by aborting the task, until timely execution is guaranteed. An experiment is presented, demonstrating a higher throughput of tasks with online WCET estimation. Finally, the work on a framework for more precise simulations and experiments is presented.
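
A minimal sketch of the degradation scheme described above: a task's WCET is estimated online from an input feature (here, image contrast), and if the estimate does not fit in the remaining slack the scheduler falls back to a cheaper alternative task or aborts. The linear estimation model and all numbers are illustrative assumptions:

```python
def estimate_wcet_ms(contrast):
    """Assumed linear model: execution time estimated from input contrast."""
    return 5.0 + 40.0 * contrast           # coefficients are illustrative

def schedule(task, alternative, contrast, deadline_ms, now_ms):
    """Run the task only if its online WCET estimate fits before the
    deadline; otherwise degrade to the cheaper alternative, or abort."""
    slack = deadline_ms - now_ms
    if estimate_wcet_ms(contrast) <= slack:
        return task()
    if estimate_wcet_ms(0.0) <= slack:     # assumed bound for the fallback
        return alternative()
    return None                            # abort: no timely option left

full = lambda: "full Hough transform result"
degraded = lambda: "coarse line estimate"
# High-contrast image arriving late: the full task no longer fits.
print(schedule(full, degraded, contrast=0.9, deadline_ms=100, now_ms=80))
# -> coarse line estimate
```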
APA, Harvard, Vancouver, ISO, and other styles
21

Li, Xiaoting. "Worst-case delay analysis of real-time switched Ethernet networks with flow local synchronization." Phd thesis, Toulouse, INPT, 2013. http://oatao.univ-toulouse.fr/10305/1/li.pdf.

Full text
Abstract:
Full-duplex switched Ethernet is a promising candidate for interconnecting real-time industrial applications. But due to IEEE 802.1d indeterminism, the worst-case delay analysis of critical flows supported by such a network is still an open problem. Several methods have been proposed for upper-bounding communication delays on a real-time switched Ethernet network, assuming that the incoming traffic can be upper bounded. The main remaining problem is to assess the tightness, i.e. the pessimism, of the method calculating this upper bound on the communication delay. These methods consider that all flows transmitted over the network are independent. This is true for flows emitted by different source nodes since, in general, there is no global clock synchronizing them. But the flows emitted by the same source node are locally synchronized. Such an assumption helps to build a more precise flow model that eliminates some impossible communication scenarios which lead to pessimistic delay upper bounds. The core of this thesis is to study how local periodic flows synchronized with offsets can be handled when computing delay upper bounds on a real-time switched Ethernet. In a first step, the impact of these offsets on the delay upper-bound computation is illustrated. Then, the integration of offsets in the Network Calculus and the Trajectory approaches is introduced, and a modified Network Calculus approach and a modified Trajectory approach are developed, whose performances are compared on an Avionics Full-DupleX switched Ethernet (AFDX) industrial configuration with one thousand flows. It has been shown that, in the context of this AFDX configuration, the Trajectory approach leads to slightly tighter end-to-end delay upper bounds than those of the Network Calculus approach. But offsets of local flows have to be chosen. Different offset assignment algorithms are then investigated on the AFDX industrial configuration, and a near-optimal assignment can be exhibited. Next, a pessimism analysis of the computed upper bounds is proposed. This analysis is based on the Trajectory approach (made optimistic), which computes an under-estimation of the worst-case delay. The difference between the upper bound (computed by a given method) and the under-estimation of the worst-case delay gives an upper bound on the pessimism of the method. This analysis gives interesting comparison results on the pessimism of the Network Calculus and Trajectory approaches. The last part of the thesis deals with a real-time heterogeneous network architecture where CAN buses are interconnected through a switched Ethernet backbone using dedicated bridges. Two approaches, the component-based approach and the Trajectory approach, are developed to conduct a worst-case delay analysis for such a network. Clearly, the ability to compute end-to-end delay upper bounds in the context of a heterogeneous network architecture is promising for industrial domains.
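
For reference, the Network Calculus approach mentioned above bounds delay by the horizontal deviation between an arrival curve and a service curve. For a token-bucket arrival curve α(t) = b + r·t and a rate-latency service curve β(t) = R·max(0, t − T), the classic bounds are delay ≤ T + b/R and backlog ≤ b + r·T. A numeric sketch with AFDX-like but invented parameters:

```python
# Classic Network Calculus bounds for a token-bucket constrained flow
# crossing a rate-latency server. Parameter values are illustrative.
def nc_bounds(b_bits, r_bps, R_bps, T_s):
    """Delay and backlog bounds for arrival b + r*t, service R*(t - T)+,
    valid when r <= R (stability)."""
    assert r_bps <= R_bps, "flow rate must not exceed service rate"
    delay_s = T_s + b_bits / R_bps
    backlog_bits = b_bits + r_bps * T_s
    return delay_s, backlog_bits

# 1 kbit burst, 64 kbit/s sustained rate, 100 Mbit/s link, 40 us latency.
delay, backlog = nc_bounds(b_bits=1000.0, r_bps=64e3, R_bps=100e6, T_s=40e-6)
print(f"delay bound {delay * 1e6:.1f} us, backlog bound {backlog:.0f} bits")
```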
APA, Harvard, Vancouver, ISO, and other styles
22

Mohr, Esther [Verfasser], and Günter [Akademischer Betreuer] Schmidt. "Online algorithms for conversion problems : an approach to conjoin worst-case analysis and empirical-case analysis / Esther Mohr. Betreuer: Günter Schmidt." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2011. http://d-nb.info/1051432529/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Mohr, Esther [Verfasser], and Günter [Akademischer Betreuer] Schmidt. "Online algorithms for conversion problems : an approach to conjoin worst-case analysis and empirical-case analysis / Esther Mohr. Betreuer: Günter Schmidt." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2011. http://d-nb.info/1051432529/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Stappert, Friedhelm [Verfasser]. "From Low-Level to Model-Based and Constructive Worst-Case Execution Time Analysis / Friedhelm Stappert." Aachen : Shaker, 2004. http://d-nb.info/1170545211/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Nowotsch, Jan [Verfasser], and Theo [Akademischer Betreuer] Ungerer. "Interference-sensitive Worst-case Execution Time Analysis for Multi-core Processors / Jan Nowotsch. Betreuer: Theo Ungerer." Augsburg : Universität Augsburg, 2014. http://d-nb.info/1077704410/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Panigrahi, Sunil Kumar, Soubhik Chakraborty, and Jibitesh Mishra. "A Statistical Analysis of Bubble Sort in terms of Serial and Parallel Computation." IJCSN Journal, 2012. http://hdl.handle.net/10150/214089.

Full text
Abstract:
In some recent papers, weight-based statistical bounds have arguably explained time complexity better than count-based mathematical bounds. This is definitely true for the average case, where for an arbitrary code it is difficult to identify the pivotal operation or pivotal region in the code for taking the expectation, and/or where the probability distribution, over which the expectation is taken, becomes unrealistic over the problem domain. In the worst case, it can certify whether a mathematical bound is conservative or not. Here we revisit the results on Bubble sort in sequential mode and make an independent study of the same algorithm in parallel mode using statistical bounds.
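
The weight-based, empirical view of complexity the paper takes can be reproduced by counting bubble sort's dominant operations over random inputs and comparing the observed averages with the count-based bounds (comparisons are fixed at n(n−1)/2; swaps average about n(n−1)/4 for random inputs). A minimal sketch; input sizes and trial counts are arbitrary choices:

```python
import random

def bubble_sort_ops(a):
    """Bubble sort returning (comparisons, swaps) as an empirical weight."""
    a = list(a)
    comparisons = swaps = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return comparisons, swaps

random.seed(1)
for n in (100, 200, 400):
    trials = [bubble_sort_ops(random.sample(range(10 * n), n))
              for _ in range(20)]
    avg_swaps = sum(s for _, s in trials) / len(trials)
    # Swaps = inversions, averaging n(n-1)/4 over random permutations.
    print(n, f"avg swaps {avg_swaps:.0f}", f"n(n-1)/4 = {n*(n-1)/4:.0f}")
```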
APA, Harvard, Vancouver, ISO, and other styles
27

Celik, Vakkas. "Development Of Strategies For Reducing The Worst-case Message Response Times On The Controller Area Network." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614075/index.pdf.

Full text
Abstract:
The controller area network (CAN) is the de-facto standard for in-vehicle communication. The growth of time-critical applications in modern cars leads to a considerable increase in the message traffic on CAN. Hence, it is essential to determine efficient message schedules on CAN that guarantee that all communicated messages meet their timing constraints. The aim of this thesis is to develop offset scheduling strategies that
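
For background, the classical worst-case response-time analysis that offset scheduling improves computes, for each CAN message m, a queuing-delay fixed point w = B_m + Σ_{k ∈ hp(m)} ⌈(w + τ_bit)/T_k⌉ · C_k with R_m = w + C_m, where B_m is the longest blocking by a lower-priority frame. A minimal sketch under assumed frame parameters (zero jitter, no offsets):

```python
import math

def can_response_time(frames, m, tau_bit=2e-6):
    """Classical CAN worst-case response time (zero jitter, single bus):
    frames = [(C, T)] by descending priority; deadlines equal periods."""
    c_m, t_m = frames[m]
    blocking = max((c for c, _ in frames[m + 1:]), default=0.0)
    w = blocking
    while True:
        w_next = blocking + sum(
            math.ceil((w + tau_bit) / t) * c for c, t in frames[:m])
        if w_next == w:
            return w + c_m
        if w_next + c_m > t_m:
            return None  # unschedulable at this priority
        w = w_next

# Illustrative frames: (transmission time C, period T) in seconds.
frames = [(0.25e-3, 5e-3), (0.25e-3, 10e-3), (0.25e-3, 20e-3)]
print([can_response_time(frames, i) for i in range(len(frames))])
```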
APA, Harvard, Vancouver, ISO, and other styles
28

Bondorf, Steffen [Verfasser], and Jens [Akademischer Betreuer] Schmitt. "Worst-Case Performance Analysis of Feed-Forward Networks – An Efficient and Accurate Network Calculus / Steffen Bondorf. Betreuer: Jens Schmitt." Kaiserslautern : Technische Universität Kaiserslautern, 2016. http://d-nb.info/111213235X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Sdobnova, Alena, and Jakub Blaszkiewicz. "Analysis of An Uncertain Volatility Model in the framework of static hedging for different scenarios." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-2199.

Full text
Abstract:

In the Black-Scholes model, the parameters (the volatility and the interest rate) were assumed to be constants. In this thesis we concentrate on the behaviour of the volatility as a function, and we find more realistic models for the volatility which eliminate the risk connected with the behaviour of the volatility of the underlying asset. That is the reason why we study the Uncertain Volatility Model. In Chapter 1 we give a theoretical introduction to the Uncertain Volatility Model introduced by Avellaneda, Levy and Paras and study how it behaves in different scenarios. In Chapter 2 we choose one of the scenarios. We also introduce the BSB equation and try to make some modifications to narrow the uncertainty bands using the idea of static hedging. In Chapter 3 we try to construct the proper portfolio for the static hedging and compare the theoretical results with real market data from the Stockholm Stock Exchange.
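
The BSB (Black-Scholes-Barenblatt) equation mentioned above prices under worst-case volatility: at each grid point the volatility takes σ_max where the option's gamma is positive and σ_min where it is negative, which for the upper price bound yields the scheme sketched below. The payoff, volatility band and grid are illustrative assumptions:

```python
import numpy as np

# Worst-case (upper) price under the Uncertain Volatility Model via an
# explicit finite-difference BSB scheme: sigma_max where gamma > 0, else
# sigma_min. Payoff, band and grid are illustrative assumptions.
SIG_MIN, SIG_MAX, RATE = 0.10, 0.30, 0.02
STRIKE, MATURITY = 100.0, 0.5
s = np.linspace(0.0, 300.0, 301)           # price grid
ds = s[1] - s[0]
dt = 0.4 * ds**2 / (SIG_MAX * s[-1])**2    # stability-limited time step
steps = int(MATURITY / dt) + 1
dt = MATURITY / steps

v = np.maximum(s - STRIKE, 0.0)            # call payoff at maturity
for _ in range(steps):                     # march backwards in time
    gamma = (v[2:] - 2 * v[1:-1] + v[:-2]) / ds**2
    delta = (v[2:] - v[:-2]) / (2 * ds)
    sigma = np.where(gamma > 0, SIG_MAX, SIG_MIN)   # BSB worst case
    v[1:-1] += dt * (0.5 * sigma**2 * s[1:-1]**2 * gamma
                     + RATE * s[1:-1] * delta - RATE * v[1:-1])
    v[-1] = 2 * v[-2] - v[-3]              # linearity boundary condition
print(f"worst-case call price at S=100: {np.interp(100.0, s, v):.2f}")
```

For a plain call, gamma stays positive and the bound collapses to the Black-Scholes price at σ_max; the two volatilities only interact for payoffs with sign-changing gamma, such as the hedged portfolios of Chapter 3.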

APA, Harvard, Vancouver, ISO, and other styles
30

Goemans, Michel X., and Dimitris J. Bertsimas. "Survivable Networks, Linear Programming Relaxations and the Parsimonious Property." Massachusetts Institute of Technology, Operations Research Center, 1990. http://hdl.handle.net/1721.1/5217.

Full text
Abstract:
We consider the survivable network design problem - the problem of designing, at minimum cost, a network with edge-connectivity requirements. As special cases, this problem encompasses the Steiner tree problem, the traveling salesman problem and the k-connected network design problem. We establish a property, referred to as the parsimonious property, of the linear programming (LP) relaxation of a classical formulation for the problem. The parsimonious property has numerous consequences. For example, we derive various structural properties of these LP relaxations, we present some algorithmic improvements and we perform tight worst-case analyses of two heuristics for the survivable network design problem.
APA, Harvard, Vancouver, ISO, and other styles
31

MUSA, RAMI ADNAN. "SIMULATION-BASED TOLERANCE STACKUP ANALYSIS IN MACHINING." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1060975896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Kafle, Bishoksan. "Modeling assembly program with constraints. A contribution to WCET problem." Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/7968.

Full text
Abstract:
Dissertation submitted for the degree of Master in Computational Logic
Model checking with program slicing has been successfully applied to compute the Worst-Case Execution Time (WCET) of a program running on given hardware. This method lacks path feasibility analysis and suffers from the following problems: the model checker (MC) explores an exponential number of program paths irrespective of their feasibility, which limits the scalability of this method to multiple-path programs; and the witness trace returned by the MC corresponding to the WCET may not be feasible (executable), which may result in a solution that is not tight, i.e., it overestimates the actual WCET. This thesis complements the above method with path feasibility analysis and addresses these problems. To achieve this, we first validate the witness trace returned by the MC and generate test data if it is executable. For this we generate constraints over a trace and solve a constraint satisfaction problem. Experiments show that 33% of these traces (obtained while computing WCET on standard WCET benchmark programs) are infeasible. Second, we use constraint solving techniques to compute an approximate WCET solely based on the program (without taking into account the hardware characteristics), and suggest some feasible and probable worst-case paths which can produce the WCET. Each of these paths forms an input to the MC. A more precise WCET can then be computed on these paths using the above method; the maximum of all of these is the WCET. In addition, we provide a mechanism to compute an upper bound on the over-approximation for the WCET computed using the model checking method. This effort of combining constraint solving techniques with model checking takes advantage of their respective strengths and makes WCET computation scalable and amenable to hardware changes. We use our technique to compute WCET on standard benchmark programs from Mälardalen University and compare our results with results from the model checking method.
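
The trace-validation step described here (generating constraints over a witness trace and solving a constraint satisfaction problem) can be sketched with an SMT solver. The example below uses the z3-solver Python bindings; the trace conditions are invented for illustration:

```python
# Hedged sketch of witness-trace feasibility checking: encode the branch
# conditions collected along a model checker's WCET path as constraints
# and test satisfiability. Requires the z3-solver package; the trace
# conditions below are invented for illustration.
from z3 import Int, Solver, sat

x, n = Int("x"), Int("n")
# Conditions along a hypothetical witness trace; note x == n - n forces
# x == 0, contradicting x > 0, so this trace is not executable.
trace_conditions = [n > 10, x == n - n, x > 0]

s = Solver()
s.add(*trace_conditions)
if s.check() == sat:
    print("trace feasible, test data:", s.model())
else:
    print("infeasible trace: WCET witness is not executable")
```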
APA, Harvard, Vancouver, ISO, and other styles
33

Lee, Yen Ling. "Dynamic analysis of the National Innovation Systems model - a case study of Taiwan's integrated circuit industry." Thesis, University of Manchester, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.488411.

Full text
Abstract:
This is claimed to be an era of knowledge-based economies; the knowledge developed in National Innovation Systems (NISs) is widely thought to have become crucial to science and technology development in leading economies. Most scholars admit the NIS is a complex, dynamic and non-linear system. In order to enhance understanding of the structure and process of the NIS as well as the level and the rate of flows within an NIS, a system dynamics approach and computer simulations are applied in this research. This research will therefore centre on an attempt to develop a mathematical model of the national innovation system of Taiwan, particularly with regard to its Integrated Circuit (IC) industry. Various definitions and models of an NIS have been proposed from different points of view (e.g. Freeman, 1987; Lundvall, 1992; Nelson, 1993; Patel and Pavitt, 1994; Metcalfe, 1995; Smith, 1996; OECD, 1997; Gregersen et al., 1997; Vanichseni, 1998). The approach taken here is additionally based on the viewpoint of System Dynamics to describe its complex status. Therefore, the main aim of this research is to combine related theories/practices of innovation systems and system dynamics in order to understand both the dynamic relations and the innovative performance among the structural elements (actors) of Taiwan's IC industry. One objective is to increase our insight into the dynamics of national systems of innovation by means of computer modelling and formulating research questions for future research. Another objective is to create scenarios to verify the behaviour of the institutions under investigation by simulation, and to assess possible outcomes in those varying scenarios. By means of questionnaire/in-depth interviews and SD model simulation, as cross-comparisons between them, the thesis aims to increase our insight into the dynamic processes of the Taiwanese IC industry's systems of innovation and our understanding of the interdependence and interaction among the capital flow, human resource flow, knowledge & technology flow and product flow in the NIS. In addition, a comparison of innovation commercialization in Taiwan's IC industry under the different policy tests and scenario tests is undertaken. These simulations show that single policies are relatively ineffective and that innovation performance requires combining a range of policies and capabilities.
APA, Harvard, Vancouver, ISO, and other styles
34

Palframan, Mark C. "Robust Control Design and Analysis for Small Fixed-Wing Unmanned Aircraft Systems Using Integral Quadratic Constraints." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/71881.

Full text
Abstract:
The main contributions of this work are applications of robust control and analysis methods to complex engineering systems, namely, small fixed-wing unmanned aircraft systems (UAS). Multiple path-following controllers for a small fixed-wing Telemaster UAS are presented, including a linear parameter-varying (LPV) controller scheduled over path curvature. The controllers are synthesized based on a lumped path-following and UAS dynamic system, effectively combining the six degree-of-freedom aircraft dynamics with established parallel transport frame virtual vehicle dynamics. The robustness and performance of these controllers are tested in a rigorous MATLAB simulation environment that includes steady winds, turbulence, measurement noise, and delays. After being synthesized off-line, the controllers allow the aircraft to follow prescribed geometrically defined paths bounded by a maximum curvature. The controllers presented within are found to be robust to the disturbances and uncertainties in the simulation environment. A robust analysis framework for mathematical validation of flight control systems is also presented. The framework is specifically developed for the complete uncertainty characterization, quantification, and analysis of small fixed-wing UAS. The analytical approach presented within is based on integral quadratic constraint (IQC) analysis methods and uses linear fractional transformations (LFTs) on uncertainties to represent system models. The IQC approach can handle a wide range of uncertainties, including static and dynamic, linear time-invariant and linear time-varying perturbations. While IQC-based uncertainty analysis has a sound theoretical foundation, it has thus far mostly been applied to academic examples, and there are major challenges when it comes to applying this approach to complex engineering systems, such as UAS. The difficulty mainly lies in appropriately characterizing and quantifying the uncertainties such that the resulting uncertain model is representative of the physical system without being overly conservative, and the associated computational problem is tractable. These challenges are addressed by applying IQC-based analysis tools to analyze the robustness of the Telemaster UAS flight control system. Specifically, uncertainties are characterized and quantified based on mathematical models and flight test data obtained in house for the Telemaster platform and custom autopilot. IQC-based analysis is performed on several time-invariant H∞ controllers along with various sets of uncertainties aimed at providing valuable information for use in controller analysis, controller synthesis, and comparison of multiple controllers. The proposed framework is also transferable to other fixed-wing UAS platforms, effectively taking IQC-based analysis beyond academic examples to practical application in UAS control design and airworthiness certification. IQC-based analysis problems are traditionally solved using convex optimization techniques, which can be slow and memory intensive for large problems. An oracle for discrete-time IQC analysis problems is presented to facilitate the use of a cutting plane algorithm in lieu of convex optimization in order to solve large uncertainty analysis problems relatively quickly, and with reasonable computational effort. The oracle is reformulated to a skew-Hamiltonian/Hamiltonian eigenvalue problem in order to improve the robustness of eigenvalue calculations by eliminating unnecessary matrix multiplications and inverses. 
Furthermore, fast, structure-exploiting eigensolvers can be employed with the skew-Hamiltonian/Hamiltonian oracle to accurately determine critical frequencies when solving IQC problems. Applicable solution algorithms utilizing the IQC oracle are briefly presented, and an example shows that these algorithms can solve large problems significantly faster than convex optimization techniques. Finally, a large complex engineering system is analyzed using the oracle and a cutting-plane algorithm. Analysis of the same system using the same computer hardware failed when employing convex optimization techniques.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
35

Jayaraman, Dheepakkumaran. "Optimization Techniques for Performance and Power Dissipation in Test and Validation." OpenSIUC, 2012. https://opensiuc.lib.siu.edu/dissertations/473.

Full text
Abstract:
The high cost of chip testing makes testability an important aspect of any chip design. Two important testability considerations are addressed, namely power consumption and test quality. The power consumption during shift is reduced by efficiently adding control logic to the design. Test quality is studied by determining the sensitization characteristics of a path to be tested; path delay fault models have been used for the purpose of studying this problem. Another important aspect of chip design is performance validation, which is increasingly perceived as the major bottleneck in integrated circuit design. Given the synthesizable HDL code, the proposed technique efficiently identifies infeasible paths and subsequently determines the worst-case execution time (WCET) of the HDL code.
APA, Harvard, Vancouver, ISO, and other styles
36

Heinze, Sebastian. "Aeroelastic Concepts for Flexible Aircraft Structures." Doctoral thesis, Stockholm : Farkost och flyg Aeronautics and Vehicle Engineering, Kungliga Tekniska högskolan, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4419.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Neikter, Carl-Fredrik. "Cache Prediction and Execution Time Analysis on Real-Time MPSoC." Thesis, Linköping University, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15394.

Full text
Abstract:

Real-time systems do not only require that the logical operations are correct. Equally important is that the specified time constraints are always complied with. This has been successfully studied before for mono-processor systems. However, as the hardware in the systems gets more complex, the previous approaches become invalidated. For example, multi-processor systems-on-chip (MPSoC) get more and more common every day, and together with a shared memory, the bus access time is unpredictable in nature. This has recently been resolved, but a safe and not too pessimistic cache analysis approach for MPSoC has not been investigated before. This thesis has resulted in the design and implementation of algorithms for cache analysis on real-time MPSoC with a shared communication infrastructure. An additional advantage is that the algorithms include improvements compared to previous approaches for mono-processor systems. The verification of these algorithms has been performed with the help of data flow analysis theory. Furthermore, it is not known how different types of cache miss characteristics of a task influence the worst-case execution time on MPSoC. Therefore, a program that generates randomized tasks, according to different parameters, has been constructed. The parameters can, for example, influence the complexity of the control flow graph and the average distance between the cache misses.

APA, Harvard, Vancouver, ISO, and other styles
38

Uhlin, Pernilla. "Aspect Analyzer: Ett verktyg för automatiserad exekveringstidsanalys av komponenter och aspekter." Thesis, Linköping University, Department of Computer and Information Science, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1914.

Full text
Abstract:

The increasing complexity in the development of configurable real-time systems has given rise to new software engineering principles, such as aspect-oriented software development and component-based software development. These techniques allow encapsulation of the system's crosscutting concerns and increase the modularity of the software. The properties of a component that influence the system's performance or semantics are specified separately in entities called aspects, while the basic functionality of the property still remains in the component.

When building a real-time system, different sets of configurations of aspects and components can be combined, resulting in different configurations of the system. The temporal behavior of the system changes and a way to ensure the predictability of the system is needed.

This thesis presents a tool for aspect-level worst-case execution time analysis, which gives a priori information about the temporal behavior of the system, before the process of composing aspects with components.

APA, Harvard, Vancouver, ISO, and other styles
39

Bodin, Joakim. "Verifikation av verktyget aspect analyzer." Thesis, Linköping University, Department of Computer and Information Science, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1985.

Full text
Abstract:

Rising complexity in the development of real-time systems has made it crucial to have reusable components and a more flexible way of configuring these components into a coherent system. Aspect-oriented system development (AOSD) is a technique that allows one to put a system's crosscutting concerns into "modules" that are called aspects. Applying AOSD in real-time and embedded system development, one can expect reductions in the complexity of the system design and development.

A problem with AOSD in its current form is that it does not support predictability in the time domain. Hence, in order to use AOSD in real-time system development, we need to provide ways of analyzing temporal behavior of aspects, components and resulting system (made from weaving aspects and components). Aspect analyzer is a tool that computes the worst-case execution time (WCET) for a set of components and aspects, thus, enabling support for predictability in the time domain of aspect-oriented real-time software.

A limitation of the aspect analyzer, until now, was that no verification had been made as to whether the aspect analyzer would produce WCET values close to the measured or computed (with another WCET analysis technique) WCET of an aspect-oriented real-time system. Therefore, in this thesis we perform a verification of the correctness of the aspect analyzer using a number of different methods for WCET analysis. These investigations of the correctness of the output from the aspect analyzer gave confidence to the automated WCET analysis. In addition, performing this verification led to the identification of the steps necessary to compute the WCETs of a piece of a program when using a third-party tool, which gives the ability to write accurate input files for the aspect analyzer.

APA, Harvard, Vancouver, ISO, and other styles
40

Henry, Julien. "Static analysis of program by Abstract Interpretation and Decision Procedures." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM037/document.

Full text
Abstract:
Static program analysis aims to prove automatically that a program satisfies certain properties. Abstract interpretation is a theoretical framework for computing program invariants, which are properties of the program variables that hold for every execution. The precision of the computed invariants depends on many parameters, in particular the abstract domain and the iteration order used during the invariant computation. In this thesis, we propose several extensions of this method that improve the precision of the analysis. Usually, abstract interpretation consists in computing a fixpoint of an operator obtained after convergence of an ascending sequence that uses an operator called widening; the resulting fixpoint is an invariant. It is then possible to improve this invariant through a descending sequence without widening. We propose a method to improve a fixpoint after the descending sequence, by restarting a new sequence from a judiciously chosen initial value. Abstract interpretation can also be made more precise by distinguishing all the execution paths of the program, at the price of an exponential blow-up in complexity. The satisfiability modulo theories (SMT) problem, whose resolution techniques have improved considerably over the last decade, makes it possible to represent these sets of paths implicitly. We propose to use this implicit SMT-based representation and apply it to state-of-the-art iteration orders to obtain more precise analyses. We then propose to couple SMT and abstract interpretation within new algorithms called Modular Path Focusing and Property-Guided Path Focusing, which compute loop and function summaries in a modular fashion, guided by error traces. Our technique has several uses: it can show that an error state is unreachable, but it can also infer preconditions for loops and functions. We apply our static analysis methods to the estimation of the worst-case execution time (WCET). We first present how to express this problem via optimization modulo theory, and why a natural SMT encoding of the problem yields formulas that are too hard for all current solvers. We propose a simple and efficient way to considerably reduce the computation time of the SMT solvers by adding to the formulas certain implied properties obtained by static analysis. Finally, we present the implementation of Pagai, a new static analyzer for LLVM, which computes numerical invariants using the various methods described in this thesis. We compared the different implemented techniques on open-source programs and benchmarks used by the community.
Static program analysis aims at automatically determining whether a program satisfies some particular properties. For this purpose, abstract interpretation is a framework that enables the computation of invariants, i.e. properties on the variables that always hold for any program execution. The precision of these invariants depends on many parameters, in particular the abstract domain, and the iteration strategy for computing these invariants. In this thesis, we propose several improvements on the abstract interpretation framework that enhance the overall precision of the analysis.Usually, abstract interpretation consists in computing an ascending sequence with widening, which converges towards a fixpoint which is a program invariant; then computing a descending sequence of correct solutions without widening. We describe and experiment with a method to improve a fixpoint after its computation, by starting again a new ascending/descending sequence with a smarter starting value. Abstract interpretation can also be made more precise by distinguishing paths inside loops, at the expense of possibly exponential complexity. Satisfiability modulo theories (SMT), whose efficiency has been considerably improved in the last decade, allows sparse representations of paths and sets of paths. We propose to combine this SMT representation of paths with various state-of-the-art iteration strategies to further improve the overall precision of the analysis.We propose a second coupling between abstract interpretation and SMT in a program verification framework called Modular Path Focusing, that computes function and loop summaries by abstract interpretation in a modular fashion, guided by error paths obtained with SMT. Our framework can be used for various purposes: it can prove the unreachability of certain error program states, but can also synthesize function/loop preconditions for which these error states are unreachable.We then describe an application of static analysis and SMT to the estimation of program worst-case execution time (WCET). We first present how to express WCET as an optimization modulo theory problem, and show that natural encodings into SMT yield formulas intractable for all current production-grade solvers. We propose an efficient way to considerably reduce the computation time of the SMT-solvers by conjoining to the formulas well chosen summaries of program portions obtained by static analysis.We finally describe the design and the implementation of Pagai,a new static analyzer working over the LLVM compiler infrastructure,which computes numerical inductive invariants using the various techniques described in this thesis.Because of the non-monotonicity of the results of abstract interpretation with widening operators, it is difficult to conclude that some abstraction is more precise than another based on theoretical local precision results. We thus conducted extensive comparisons between our new techniques and previous ones, on a variety of open-source packages and benchmarks used in the community
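The ascending/descending iteration scheme described above can be seen on a toy interval analysis of the loop "i = 0; while (i < 100) i++;": widening pushes the unstable upper bound to infinity, and one descending step recovers the precise invariant. This is an illustrative reconstruction, not Pagai's implementation:

    import math

    # Toy interval domain for the loop: i = 0; while (i < 100) i++;
    def join(a, b):      # least upper bound of two intervals
        return (min(a[0], b[0]), max(a[1], b[1]))

    def widen(a, b):     # classic interval widening: unstable bounds jump to infinity
        lo = a[0] if a[0] <= b[0] else -math.inf
        hi = a[1] if a[1] >= b[1] else math.inf
        return (lo, hi)

    def f(x):            # abstract loop iteration: filter i < 100, then i := i + 1
        lo, hi = x
        lo, hi = lo, min(hi, 99)          # guard i < 100
        body = (lo + 1, hi + 1)           # i := i + 1
        return join((0, 0), body)         # join with loop entry i = 0

    # Ascending sequence with widening: converges to (0, +inf)
    x = (0, 0)
    while True:
        nxt = widen(x, f(x))
        if nxt == x:
            break
        x = nxt

    # One descending iteration without widening recovers the precise invariant
    x = f(x)
    print(x)   # (0, 100)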
APA, Harvard, Vancouver, ISO, and other styles
41

Latzo, Curtis Thomas. "Approaches to Arc Flash Hazard Mitigation in 600 Volt Power Systems." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3198.

Full text
Abstract:
Federal regulations have recognized that arc flash hazards are a critical source of potential injury. As a consequence, in order to work on some electrical equipment, the energy source must be completely shut down. However, power distribution systems in mission-critical facilities such as hospitals and data centers must sometimes remain energized while being maintained. In recent years the Arc Flash Hazard Analysis has emerged as a power system tool that informs the qualified technician of the incident energy at the equipment to be maintained and recommends the proper protective equipment to wear. Due to codes, standards and historically acceptable design methods, the Arc Flash Hazard is often higher and more dangerous than necessary. This dissertation presents detailed methodology and proposes alternative strategies, to be implemented at the design stage of 600 volt facility power distribution systems, that decrease the Arc Flash Hazard Exposure when compared to widely used code-acceptable design strategies. Software models have been developed for different locations throughout a power system. These software simulations analyze the Arc Flash Hazard in a system designed with typical mainstream code-acceptable methods. The model is then changed to show the implementation of arc flash mitigation techniques at the system design level. The computer simulations after the mitigation techniques show a significant lowering of the Arc Flash Hazard Exposure.
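All of the mitigation techniques act on the same underlying relationship: incident energy grows with arcing current and clearing time, and falls with the square of the working distance. A back-of-the-envelope sketch of that proportionality (a deliberately simplified model; actual studies use the IEEE 1584 equations):

    # Simplified model: incident energy E ~ k * I_arc * t_clear / D^2
    # (illustrative proportionality only; real analyses use IEEE 1584)
    def incident_energy(i_arc_ka, t_clear_s, distance_mm, k=1.0):
        return k * i_arc_ka * t_clear_s / distance_mm**2

    base = incident_energy(25.0, 0.5, 455.0)        # slow upstream breaker
    mitigated = incident_energy(25.0, 0.05, 455.0)  # fast trip / arc-flash relay
    print(f"energy reduced by {base / mitigated:.0f}x")  # 10x, linear in clearing time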
APA, Harvard, Vancouver, ISO, and other styles
42

Giroudot, Frédéric. "NoC-based Architectures for Real-Time Applications : Performance Analysis and Design Space Exploration." Thesis, Toulouse, INPT, 2019. https://oatao.univ-toulouse.fr/25921/1/Giroudot_Frederic.pdf.

Full text
Abstract:
Monoprocessor architectures have reached their limits in regard to the computing power they offer vs the needs of modern systems. Although multicore architectures partially mitigate this limitation and are commonly used nowadays, they usually rely on intrinsically non-scalable buses to interconnect the cores. The manycore paradigm was proposed to tackle the scalability issue of bus-based multicore processors. It can scale up to hundreds of processing elements (PEs) on a single chip, by organizing them into computing tiles (holding one or several PEs). Inter-core communication is usually done using a Network-on-Chip (NoC) that consists of interconnected on-chip routers allowing communication between tiles. However, manycore architectures raise numerous challenges, particularly for real-time applications. First, NoC-based communication tends to generate complex blocking patterns when congestion occurs, which complicates the analysis, since computing accurate worst-case delays becomes difficult. Second, running many applications on large Systems-on-Chip such as manycore architectures makes system design particularly crucial and complex. On one hand, it complicates Design Space Exploration, as it multiplies the implementation alternatives that will guarantee the desired functionalities. On the other hand, once a hardware architecture is chosen, mapping the tasks of all applications on the platform is a hard problem, and finding an optimal solution in a reasonable amount of time is not always possible. Therefore, our first contributions address the need for computing tight worst-case delay bounds in wormhole NoCs. We first propose a buffer-aware worst-case timing analysis (BATA) to derive upper bounds on the worst-case end-to-end delays of constant-bit-rate data flows transmitted over a NoC on a manycore architecture. We then extend BATA to cover a wider range of traffic types, including bursty traffic flows, and heterogeneous architectures. The introduced method is called G-BATA, for Graph-based BATA. In addition to covering a wider range of assumptions, G-BATA improves the computation time, thus increasing the scalability of the method. In a second part, we develop a method addressing design and mapping for applications with real-time constraints on manycore platforms. It combines model-based engineering tools (TTool) and simulation with our analytical verification technique (G-BATA) and tools (WoPANets) to provide an efficient design space exploration framework. Finally, we validate our contributions on (a) a series of experiments on a physical platform and (b) two case studies taken from the real world: an autonomous vehicle control application and a 5G signal decoder application.
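The graph-based idea behind G-BATA can be caricatured in a few lines: connect flows that share at least one NoC link, then charge each flow the transmission times of its direct interferers. The sketch below conveys only that flavor, with invented flows and latencies; it is not the BATA/G-BATA analysis itself:

    from itertools import combinations

    # flows: name -> (set of NoC links traversed, packet transmission time)
    flows = {
        "f1": ({"r0-r1", "r1-r2"}, 10),
        "f2": ({"r1-r2", "r2-r3"}, 8),
        "f3": ({"r4-r5"}, 12),
    }

    # Interference graph: an edge between two flows that share a link
    neighbors = {f: set() for f in flows}
    for a, b in combinations(flows, 2):
        if flows[a][0] & flows[b][0]:
            neighbors[a].add(b)
            neighbors[b].add(a)

    # Naive per-flow bound: own latency plus one packet from each direct interferer
    for f, (links, base) in flows.items():
        bound = base + sum(flows[g][1] for g in neighbors[f])
        print(f, "worst-case delay <=", bound)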
APA, Harvard, Vancouver, ISO, and other styles
43

Tsoupidi, Rodothea Myrsini. "Two-phase WCET analysis for cache-based symmetric multiprocessor systems." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-222362.

Full text
Abstract:
The estimation of the worst-case execution time (WCET) of a task is a problem that concerns the field of embedded systems and, especially, real-time systems. Estimating a safe WCET for single-core architectures without speculative mechanisms is a challenging task and an active research topic. However, the advent of advanced hardware mechanisms, which often lack predictability, complicates the current WCET analysis methods. The field of embedded systems has high safety considerations and is, therefore, conservative with speculative mechanisms. However, nowadays, even safety-critical applications move in the direction of multiprocessor systems. In a multiprocessor system, each task that runs on a processing unit might affect the execution time of the tasks running on different processing units. In shared-memory symmetric multiprocessor systems, this interference occurs through the shared memory and the common bus. The presence of private caches introduces cache-coherence issues that result in further dependencies between the tasks. The purpose of this thesis is twofold: (1) to evaluate the feasibility of an existing one-pass WCET analysis method with an integrated cache analysis and (2) to design and implement a cache-based multiprocessor WCET analysis by extending the single-core method. The single-core analysis is part of KTH's Timing Analysis (KTA) tool. The WCET analysis of KTA uses Abstract Search-based WCET Analysis, a one-pass technique that is based on abstract interpretation. The evaluation of the feasibility of this analysis includes the integration of microarchitecture features, such as cache and pipeline, into KTA. These features are necessary for extending the analysis to hardware models of modern embedded systems. The multiprocessor analysis of this work uses the single-core analysis in two stages to estimate the WCET of a task running in the presence of temporally and spatially interfering tasks. The first phase records the memory accesses of all the temporally interfering tasks, and the second phase uses this information to perform the multiprocessor WCET analysis. The multiprocessor analysis assumes the presence of private caches and a shared communication bus and implements the MESI protocol to maintain cache coherence.
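The cache-coherence dependencies that the second phase must account for stem from the MESI protocol. A compact illustration of the MESI state transitions for a single cache line (a textbook transition table sketched in Python, not the KTA implementation):

    # Minimal MESI transition table for a single cache line.
    # Events: local read/write ('read', 'write') and snooped bus traffic
    # ('bus_read', 'bus_write'). Writebacks are omitted for brevity.
    MESI = {
        ("I", "read"):      "S",   # 'E' if no other cache holds the line
        ("I", "write"):     "M",
        ("S", "write"):     "M",   # requires invalidating other sharers
        ("S", "bus_write"): "I",
        ("E", "write"):     "M",
        ("E", "bus_read"):  "S",
        ("E", "bus_write"): "I",
        ("M", "bus_read"):  "S",   # triggers a writeback first
        ("M", "bus_write"): "I",   # triggers a writeback first
    }

    def step(state, event):
        return MESI.get((state, event), state)  # unlisted pairs keep the state

    print(step(step("I", "read"), "bus_write"))  # I -> S -> I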
APA, Harvard, Vancouver, ISO, and other styles
44

Mejzlík, Tomáš. "Teplotní profil výkonového spínacího přístroje nízkého napětí pro různé provozní stavy." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-221176.

Full text
Abstract:
The heat generated in a circuit breaker can be transmitted in two ways: either through the metal parts of the current path to conductors outside the device, or through the plastic parts and the air of the chassis. The accuracy of the simulation depends on the accuracy of the 3D model and all its parts, and on a precise definition of the materials with precise electrical and thermal parameters. An electrical circuit breaker has various heat sources, which raise the temperature of the device above that of the environment. The heat sources are: 1) Joule losses in the circuit breaker's current path; 2) heat losses in the bimetal used for the thermal release; 3) contact resistance. This thesis deals with steady-state thermal analysis, so the sources do not include the transient heat generated by switching ON and OFF. Circuit breakers are built in ever smaller forms, yet their electrical ratings do not decrease with size. The logical conclusion is that more heat is produced per unit of volume, which makes thermal analysis one of the most important parts of circuit breaker development.
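In the steady state considered here, each heat source reduces to a power loss, and the temperature rise follows from a thermal-resistance network. A minimal lumped-parameter sketch with illustrative values (the thesis itself uses a detailed 3D model):

    # Lumped steady-state estimate: T_rise = P_total * R_th
    I = 63.0                      # load current, A (illustrative)
    R_path = 0.8e-3               # current-path resistance, ohm (illustrative)
    R_contact = 0.2e-3            # contact resistance, ohm (illustrative)
    P_bimetal = 1.5               # thermal-release bimetal loss, W (illustrative)

    P_total = I**2 * (R_path + R_contact) + P_bimetal   # Joule + bimetal losses
    R_th = 8.0                    # breaker-to-ambient thermal resistance, K/W
    print(f"temperature rise: {P_total * R_th:.1f} K")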
APA, Harvard, Vancouver, ISO, and other styles
45

Marcus, Ventovaara, and Hasanbegović Arman. "A Method for Optimised Allocation of System Architectures with Real-time Constraints." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-39492.

Full text
Abstract:
Optimised allocation of system architectures is a well researched area as it can greatly reduce the developmental cost of systems and increase performance and reliability in their respective applications. In conjunction with the recent shift from federated to integrated architectures in automotive, and the increasing complexity of computer systems, both in terms of software and hardware, the applications of design space exploration and optimised allocation of system architectures are of great interest. This thesis proposes a method to derive architectures and their allocations for systems with real-time constraints. The method implements integer linear programming to solve for an optimised allocation of system architectures according to a set of linear constraints while taking resource requirements, communication dependencies, and manual design choices into account. Additionally, this thesis describes and evaluates an industrial use case using the method, wherein the timing characteristics of a system were evaluated and the method applied to simultaneously derive a system architecture and an optimised allocation of that architecture. This thesis presents evidence and validations that suggest the viability of the method and its use case in an industrial setting. The work in this thesis sets precedence for future research and development, as well as future applications of the method in both industry and academia.
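The core of such a method is a small integer linear program: binary placement variables, one-node-per-task constraints, and capacity constraints. A sketch using the PuLP solver with invented task and node figures (the thesis's formulation additionally encodes communication dependencies and manual design choices):

    from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

    tasks = {"t1": 30, "t2": 50, "t3": 20}        # CPU utilisation per task (%)
    nodes = {"n1": 100, "n2": 60}                 # capacity per node (%)
    cost = {"n1": 2, "n2": 1}                     # e.g. unit cost of each node

    prob = LpProblem("allocation", LpMinimize)
    x = LpVariable.dicts("x", [(t, n) for t in tasks for n in nodes], cat=LpBinary)

    # Each task runs on exactly one node; no node is overloaded.
    for t in tasks:
        prob += lpSum(x[(t, n)] for n in nodes) == 1
    for n in nodes:
        prob += lpSum(tasks[t] * x[(t, n)] for t in tasks) <= nodes[n]

    # Objective: cheapest feasible placement.
    prob += lpSum(cost[n] * x[(t, n)] for t in tasks for n in nodes)
    prob.solve()
    for (t, n), v in x.items():
        if v.value() == 1:
            print(t, "->", n)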
APA, Harvard, Vancouver, ISO, and other styles
46

Santini, Tales Roberto de Souza. "Projeto e análise de aplicações de circuladores ativos para a operação em frequências de ultrassom Doppler de ondas contínuas." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-19082014-090655/.

Full text
Abstract:
Traditional circulators are widely used in both telecommunications and military defense for sending and receiving signals simultaneously through a single medium. These passive circuits, which are manufactured from ferromagnetic materials, have the disadvantage that their dimensions, weight, and manufacturing costs increase as the design operating frequency decreases, preventing their useful employment at frequencies below 500 MHz. The active circulator emerged as an alternative to the traditional ones, and has applications at frequencies ranging from DC up to dozens of gigahertz. It is most applicable when compact, low-cost, low-power devices are needed. The first circuits to be introduced had major limitations in terms of operating frequency and power delivered to the load. However, due to technological advances in electronics, such problems can now be minimized. This research work presents the development of an active circulator circuit to be used in electronic instrumentation, particularly for operation at frequencies such as those used in continuous wave Doppler ultrasound equipment, ranging from 2 MHz to 10 MHz. The advantages made possible by implementing ultrasound systems with circulators are related to an increase in the signal-to-noise ratio, an increase in the transducer's reception area, a simplified construction of the transducer, simplification of the demodulation/processing circuit, and greater isolation between the transmission and signal reception circuits. In the initial phase, the proposed active circulator was modeled by equations, using both the ideal model of operational amplifiers and their frequency-response model. Computer simulations were carried out in order to confirm the validity of the equations. A circuit mounted on a breadboard was introduced and proof-of-concept assessments were performed at low frequencies, showing great similarity among the theoretical, simulated and experimental data. In the second phase, the circulator circuit was designed for operation at higher frequencies. The proposed circuit comprises three current-feedback operational amplifiers and several passive components. A sensitivity analysis was carried out using Monte Carlo methods and worst-case analyses, resulting in a behavioral profile under variations in circuit components and in load impedance. A printed circuit board was designed, employing good layout practices for operation at high frequencies. The following evaluations and measurements were performed on the assembled circuit: time-domain behavior, dynamic range, isolation level relative to signal amplitude, bandwidth, a survey of the scattering parameters, and transmission and reception of signals by a continuous wave Doppler ultrasound transducer. The results of the performance tests were satisfactory, presenting a 100 MHz signal transmission band, isolation between non-consecutive ports of 39 dB at the frequency of interest for Doppler ultrasound, and an isolation greater than 20 dB for frequencies of up to 35 MHz. The dynamic range exceeded 5 Vpp, and the circuit performed satisfactorily in the simultaneous transmission and reception of signals through the ultrasound transducer.
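The worst-case and Monte Carlo sensitivity analyses mentioned above can be miniaturised on a single gain-setting resistor pair: sweep the tolerance corners for the worst case, and sample the tolerance band for the Monte Carlo profile. A generic Python sketch of both (illustrative values, not the circulator netlist):

    import itertools, random

    # Non-inverting amplifier gain G = 1 + R2/R1, with 1 % resistors.
    R1, R2, tol = 1e3, 4.7e3, 0.01
    gain = lambda r1, r2: 1 + r2 / r1

    # Worst case: every combination of extreme component values (corners).
    corners = [gain(R1 * (1 + s1 * tol), R2 * (1 + s2 * tol))
               for s1, s2 in itertools.product((-1, 1), repeat=2)]
    print("worst-case gain range:", min(corners), max(corners))

    # Monte Carlo: uniform sampling inside the tolerance band.
    random.seed(0)
    samples = [gain(R1 * random.uniform(1 - tol, 1 + tol),
                    R2 * random.uniform(1 - tol, 1 + tol))
               for _ in range(10_000)]
    print("MC gain range:", min(samples), max(samples))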
APA, Harvard, Vancouver, ISO, and other styles
47

Wolf, Anne. "Robust Optimization of Private Communication in Multi-Antenna Systems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-203827.

Full text
Abstract:
The thesis focuses on the privacy of communication that can be ensured by means of the physical layer, i.e., by appropriately chosen coding and resource allocation schemes. The fundamentals of physical-layer security were already formulated in the 1970s by Wyner (1975), Csiszár and Körner (1978), but only now has technology progressed far enough for these ideas to find their way into current and future communication systems, which has driven the growing interest in this area of research in recent years. We analyze two physical-layer approaches that can ensure the secret transmission of private information in wireless systems in the presence of an eavesdropper. One is the direct transmission of the information to the intended receiver, where the transmitter has to simultaneously ensure the reliability and the secrecy of the information. The other is a two-phase approach, where two legitimate users first agree on a common and secret key, which they afterwards use to encrypt the information before it is transmitted. In this case, the secrecy and the reliability of the transmission are managed separately in the two phases. The secrecy of the transmitted messages mainly depends on reliable information, or reasonable and justifiable assumptions, about the channel to the potential eavesdropper. Perfect state information about the channel to a passive eavesdropper is not a rational assumption. Thus, we introduce a deterministic model for the uncertainty about this channel, which yields a set of possible eavesdropper channels. We consider the optimization of worst-case rates in systems with multi-antenna Gaussian channels for both approaches. We study which transmit strategy can yield a maximum rate if we assume that the eavesdropper can always observe the corresponding worst-case channel that reduces the achievable rate for the secret transmission to a minimum. For both approaches, we show that the resulting max-min problem over the matrices that describe the multi-antenna system can be reduced to an equivalent problem over the eigenvalues of these matrices. We characterize the optimal resource allocation under a sum power constraint over all antennas and derive waterfilling solutions for the corresponding worst-case channel to the eavesdropper under a constraint on the sum of all channel gains. We show that all rates converge to finite limits for high signal-to-noise ratios (SNR) if we do not restrict the number of antennas for the eavesdropper. These limits are characterized by the quotients of the eigenvalues resulting from the Gramian matrices of both channels. For the low-SNR regime, we observe a rate increase that depends only on the differences of these eigenvalues for the direct-transmission approach; for the key generation approach, there is no dependence on the eavesdropper channel in this regime. The comparison of both approaches shows that the superiority of one over the other mainly depends on the SNR and the quality of the eavesdropper channel. The direct-transmission approach is advantageous for low SNR and comparably bad eavesdropper channels, whereas the key generation approach benefits more from high SNR and comparably good eavesdropper channels. All results are discussed in combination with numerous illustrations.
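The waterfilling solutions derived here follow the classical pattern: pour the power budget over the channel eigenvalues until a common water level is reached. A generic sketch of that allocation by bisection (the standard sum-rate waterfilling problem, not the exact worst-case secrecy objective of the thesis):

    def waterfill(gains, p_total, iters=100):
        """Allocate p_total over channels with eigenvalue gains g_i.

        Solves p_i = max(0, mu - 1/g_i) with sum(p_i) = p_total
        by bisection on the water level mu.
        """
        lo, hi = 0.0, p_total + max(1.0 / g for g in gains)
        for _ in range(iters):
            mu = (lo + hi) / 2
            used = sum(max(0.0, mu - 1.0 / g) for g in gains)
            if used > p_total:
                hi = mu
            else:
                lo = mu
        return [max(0.0, mu - 1.0 / g) for g in gains]

    print(waterfill([2.0, 1.0, 0.1], p_total=1.0))  # strong eigenmodes get more power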
APA, Harvard, Vancouver, ISO, and other styles
48

Rihani, Hamza. "Analyse temporelle des systèmes temps-réels sur architectures pluri-coeurs." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM074/document.

Full text
Abstract:
Predictability is of paramount importance in real-time and safety-critical systems, where non-functional properties, such as the timing behavior, have a high impact on the system's correctness. As many safety-critical systems have a growing performance demand, classical architectures, such as single-cores, are not sufficient anymore. One increasingly popular solution is the use of multi-core systems, even in the real-time domain. Recent many-core architectures, such as the Kalray MPPA, were designed to take advantage of the performance benefits of a multi-core architecture while offering certain predictability. It is still hard, however, to predict the execution time due to interferences on shared resources (e.g., bus, memory, etc.). To tackle this challenge, Time Division Multiple Access (TDMA) buses are often advocated. In the first part of this thesis, we are interested in the timing analysis of accesses to shared resources in such environments. Our approach uses Satisfiability Modulo Theory (SMT) to encode the semantics and the execution time of the analyzed program. To estimate the delays of shared resource accesses, we propose an SMT model of a shared TDMA bus. An SMT solver is used to find a solution that corresponds to the execution path with the maximal execution time. Using examples, we show how the worst-case execution time estimation is enhanced by combining the semantics and the shared bus analysis in SMT. In the second part, we introduce a response time analysis technique for Synchronous Data Flow programs. These are mapped to multiple parallel dependent tasks running on a compute cluster of the Kalray MPPA-256 many-core processor. The analysis we devise computes a set of response times and release dates that respect the constraints in the task dependency graph. We derive a mathematical model of the multi-level bus arbitration policy used by the MPPA. Further, we refine the analysis to account for (i) release dates and response times of co-runners, (ii) task execution models, (iii) use of memory banks, (iv) memory access pipelining. Further improvements to the precision of the analysis were achieved by considering only accesses that block the emitting core in the interference analysis. Our experimental evaluation focuses on randomly generated benchmarks and an avionics case study.
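At the heart of the TDMA bus model is a simple question: how long does a core wait before its next usable slot? The sketch below computes that delay for one slot per period and maximises it numerically over arrival times (illustrative parameters, not the MPPA arbiter model):

    # TDMA bus: one slot of length S per period P belongs to our core,
    # starting at offset 0. An access of length 'a' (a <= S) issued at
    # time t must fit entirely inside one of our slots.
    def tdma_delay(t, P=10, S=3, a=2):
        phase = t % P
        if phase + a <= S:          # fits in the current slot
            return 0
        return P - phase            # wait for the start of the next slot

    worst = max(tdma_delay(t / 10) for t in range(200))
    print("worst-case bus wait:", worst)   # just missed the usable window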
APA, Harvard, Vancouver, ISO, and other styles
49

Lesage, Benjamin. "Architecture multi-coeurs et temps d'exécution au pire cas." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00870971.

Full text
Abstract:
Critical tasks in real-time systems are subject to timing and correctness constraints. The validation of such a system relies on estimating the worst-case timing behavior of its tasks. Resource sharing, inherent to multi-core architectures, hampers the computation of these estimates: the timing behavior of a task depends on its rivals, because of the arbitration of resource accesses and of concurrent modifications of the resources' state. This study aims at estimating the timing contribution of the memory hierarchy to the worst-case execution time of critical tasks. Existing methods for instruction caches are extended to support private and shared data caches, and to allow the analysis of rich memory hierarchies. Cache bypassing is then used to reduce the pressure on shared caches. To this end, we propose different heuristics based on capturing the reuse of cache blocks between different memory accesses. Our second proposal is the Preti partitioning policy, which allocates a conflict-free space to a task. Preti also favors the performance of non-critical tasks running concurrently with real-time ones in mixed-criticality systems.
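The bypass heuristics rest on capturing block reuse: a block whose next reuse lies too far in the future to survive in an LRU cache is better left uncached. A toy reuse-distance filter in that spirit (a hypothetical heuristic for illustration, not the thesis's exact heuristics):

    # Bypass heuristic sketch: cache a block only if it is reused before
    # 'assoc' distinct other blocks are touched (LRU survival intuition).
    def bypass_decisions(trace, assoc):
        decisions = []
        for i, block in enumerate(trace):
            distinct = set()
            reused = False
            for other in trace[i + 1:]:
                if other == block:
                    reused = True
                    break
                distinct.add(other)
                if len(distinct) >= assoc:
                    break
            decisions.append((block, "cache" if reused else "bypass"))
        return decisions

    print(bypass_decisions(["A", "B", "A", "C", "B"], assoc=2))
    # A is reused soon enough to cache; the other accesses are bypassed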
APA, Harvard, Vancouver, ISO, and other styles
50

KUO, FANG-HSIEN, and 郭芳賢. "A Study on A Worst Case Analysis for Automotive Lighting Circuit." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/10449847901692029474.

Full text
Abstract:
Master's thesis
Chung Hua University
Department of Electrical Engineering
104 (ROC academic year)
In recent years, LEDs have become an important component in car lighting, and the reliability of car lighting is a key safety issue. Therefore, from the beginning of product design, we should be prepared to assess reliability through worst case analysis; that is, we have to find defects and malfunctions, and even safety impacts, in the design phase. In general, automotive LED circuit architectures are divided into two types: linear regulator circuits and switching regulator circuits. This thesis focuses on worst case circuit analysis for LED driving based on a linear regulator circuit. Three methods are used in this thesis: extreme value analysis, root sum square analysis, and Monte Carlo analysis. According to the results of the analysis, we found that the negative feedback (closed-loop) architecture is better than the open-loop one, because a closed-loop architecture can provide a more stable output power, less light flicker, and much better quality in mass production.
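The three methods named above can be compared on the simplest possible LED driver, a series resistor setting I = (Vs - Vf) / R: extreme value analysis sweeps the tolerance corners, root sum square combines linearised sensitivities, and Monte Carlo samples the tolerance band. A generic illustration (invented tolerances, not the thesis's regulator circuit):

    import itertools, math, random

    # LED series-resistor driver: I = (Vs - Vf) / R
    Vs, Vf, R = 12.0, 3.0, 100.0             # nominal values (illustrative)
    tVs, tVf, tR = 0.02, 0.05, 0.01          # tolerances (illustrative)
    I = lambda vs, vf, r: (vs - vf) / r
    I0 = I(Vs, Vf, R)

    # 1) Extreme value analysis: evaluate all 2^3 tolerance corners.
    corners = [I(Vs*(1+a*tVs), Vf*(1+b*tVf), R*(1+c*tR))
               for a, b, c in itertools.product((-1, 1), repeat=3)]
    print("EVA:", min(corners), "..", max(corners))

    # 2) Root sum square: combine linearised sensitivities.
    dI = math.sqrt((Vs*tVs/R)**2 + (Vf*tVf/R)**2 + (I0*tR)**2)
    print("RSS:", I0 - dI, "..", I0 + dI)

    # 3) Monte Carlo: sample the tolerance band.
    random.seed(1)
    mc = [I(Vs*random.uniform(1-tVs, 1+tVs), Vf*random.uniform(1-tVf, 1+tVf),
            R*random.uniform(1-tR, 1+tR)) for _ in range(10_000)]
    print("MC: ", min(mc), "..", max(mc))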
APA, Harvard, Vancouver, ISO, and other styles