Dissertations / Theses on the topic 'Robustness and reliability'

Consult the top 36 dissertations / theses for your research on the topic 'Robustness and reliability.'

1

He, Qinxian. "Uncertainty and sensitivity analysis methods for improving design robustness and reliability." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90601.

Abstract:
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2014. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (pages 161-172).

Engineering systems of the modern day are increasingly complex, often involving numerous components, countless mathematical models, and large, globally-distributed design teams. These features all contribute uncertainty to the system design process that, if not properly managed, can escalate into risks that seriously jeopardize the design program. In fact, recent history is replete with examples of major design setbacks due to failure to recognize and reduce risks associated with performance, cost, and schedule as they emerge during the design process. The objective of this thesis is to develop methods that help quantify, understand, and mitigate the effects of uncertainty in the design of engineering systems. The design process is viewed as a stochastic estimation problem in which the level of uncertainty in the design parameters and quantities of interest is characterized probabilistically, and updated through successive iterations as new information becomes available. Proposed quantitative measures of complexity and risk can be used in the design context to rigorously estimate uncertainty, and have direct implications for system robustness and reliability. New local sensitivity analysis techniques facilitate the approximation of complexity and risk in the quantities of interest resulting from modifications in the mean or variance of the design parameters. A novel complexity-based sensitivity analysis method enables the apportionment of output uncertainty into contributions not only due to the variance of input factors and their interactions, but also due to properties of the underlying probability distributions such as intrinsic extent and non-Gaussianity. Furthermore, uncertainty and sensitivity information are combined to identify specific strategies for uncertainty mitigation and visualize tradeoffs between available options. These approaches are integrated with design budgets to guide decisions regarding the allocation of resources toward improving system robustness and reliability. The methods developed in this work are applicable to a wide variety of engineering systems. In this thesis, they are demonstrated on a real-world aviation case study to assess the net cost-benefit of a set of aircraft noise stringency options. This study reveals that uncertainties in the scientific inputs of the noise monetization model are overshadowed by those in the scenario inputs, and identifies policy implementation cost as the largest driver of uncertainty in the system.
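The abstract gives no formulas, but the kind of variance-level local sensitivity it describes is easy to illustrate. The sketch below perturbs the standard deviation of each input of a toy two-input model and differentiates the output variance by finite differences on a fixed Monte Carlo sample; the model and all numbers are assumptions for illustration, not the thesis's estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal((200_000, 2))  # fixed base sample: common random numbers

def model(x1, x2):
    # Hypothetical quantity of interest standing in for a system model.
    return x1**2 + 0.5 * x1 * x2 + np.sin(x2)

def output_variance(mu, sigma):
    # Variance of the QoI for independent Gaussian inputs X_i ~ N(mu_i, sigma_i^2),
    # estimated on the same base sample so finite differences stay smooth.
    x = mu + sigma * z
    return model(x[:, 0], x[:, 1]).var()

mu, sigma, h = np.array([1.0, 0.5]), np.array([0.2, 0.3]), 1e-4

v0 = output_variance(mu, sigma)
for i in range(2):
    ds = sigma.copy()
    ds[i] += h
    print(f"dVar[Y]/dsigma_{i+1} ~ {(output_variance(mu, ds) - v0) / h:.3f}")
```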
2

Yodo, Nita. "Design of thin film solar cell material structure for reliability and performance robustness." Thesis, Wichita State University, 2013. http://hdl.handle.net/10057/7050.

Abstract:
Although the continued exponential growth of solar power generation worldwide paves a path to a future of sustainable energy, developing photovoltaic (PV) technologies with low-cost and high-stability materials remains a challenge and has attracted tremendous attention in solar energy research. The prevalence of thin film solar cells substantially reduces material costs. However, even with their favorable band gap properties, a major issue faced by most thin film solar cells is low output efficiency due to manufacturing variability and uncertain operating conditions. Thus, to ensure the reliability and performance robustness of thin film PV technologies, the design of the solar cell is studied. To represent thin film PV technologies, a copper indium gallium (di)selenide (CIGS) solar cell model is developed and optimized with the Reliability-based Robust Design Optimization (RBRDO) method. The main contribution of this research is the development of a probabilistic thin film solar cell model that considers the presence of uncertainties in the PV system. This model takes into account the variability of the structure and the material properties of the CIGS solar cells, and assumes operation in ideal-weather conditions. A general reliability-based methodology to optimize the design of CIGS PV technologies is presented, and this approach could also be used to facilitate the development and assessment of new PV technologies with more robust performance in efficiency and stability.

Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Industrial and Manufacturing Engineering.
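For flavor, here is a minimal RBRDO-style loop under stated assumptions: a hypothetical one-variable efficiency model with manufacturing scatter, a robust objective (mean minus two standard deviations), and a penalized reliability constraint, optimized with SciPy. It sketches the method class, not the thesis's CIGS physics model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
z = rng.standard_normal(50_000)  # fixed noise sample (common random numbers)

def efficiency(thickness, z):
    # Hypothetical efficiency model: a smooth optimum in layer thickness,
    # perturbed by manufacturing variability. Not the CIGS physics model.
    actual = thickness * (1 + 0.05 * z)          # thickness tolerance
    return 0.20 - 0.02 * (actual - 2.0) ** 2     # peak near 2.0 um

def neg_robust_objective(t):
    eff = efficiency(t[0], z)
    mean, std = eff.mean(), eff.std()
    rel = (eff >= 0.15).mean()                   # reliability: P(eff >= 15%)
    penalty = 1e3 * max(0.0, 0.99 - rel) ** 2    # enforce P >= 0.99
    return -(mean - 2.0 * std) + penalty         # robust objective + penalty

res = minimize(neg_robust_objective, x0=[1.5], bounds=[(0.5, 4.0)])
print("robust-optimal thickness [um]:", res.x[0])
```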
3

Tewari, Anurag. "Upstream supply chain vulnerability, robustness and resilience : a systematic review of literature." Thesis, Cranfield University, 2013. http://dspace.lib.cranfield.ac.uk/handle/1826/12490.

Abstract:
Purpose: In the last decade, the supply chains of many global firms have been exposed to severe and costly disruptions. Triggered by either man-made or natural disasters, these disruptions are often a result of increased network complexity and interdependency. One of the many contributing factors to this increased complexity is the conscious effort by organizations to over-optimise their efficiency and performance. The field of supply chain resilience, robustness and vulnerability studies, a new and growing area of knowledge, is contributing towards discovering the causes of supply chain disasters and measures to tackle them. Criticized for being highly fragmented and fraught with conceptual ambiguity, the field has been evolving by incorporating vulnerability and resilience research from other interdisciplinary domains. The present research aims to map the intellectual territory of the resilience, robustness and vulnerability domain by conducting a literature review. The review also aims to establish conceptual clarity in the definition of terms and constructs relevant to the field and to discover conceptual and methodological gaps in the existing body of literature. Design/methodology/approach: This literature review is conducted using a systematic review approach, which benefits from a clearly defined audit and decision trail. After filtering through 2077 titles, 43 articles were retained for review. Findings: The review demonstrates that the drivers of vulnerability and the strategies to tackle it can be grouped into three themes: Structural, Operational and Strategic. The review also demonstrates that the field is still plagued by conceptual ambiguity. From the analysis of the findings, a number of research directions were identified. Research limitations/implications: Major limitations of this study were the personal bias associated with the quality assessment of included and excluded articles. Also, due to blurred definitions of terms and constructs in the literature, the thematic classification of findings could be challenged. Lastly, it cannot be stated with conviction that the chosen 43 articles are sufficient. Practical implications: This research highlights future conceptual and methodological prospects in the field of resilience, robustness and vulnerability. The direction of structural research proposed in the thesis has high potential to secure future supply chains. Originality/value: This review is the first to jointly address supply chain vulnerability (SCV), resilience (SCRel) and robustness (SCRob). It provides an extensive overview of the present extent of vulnerability and robustness research and proposes a thematic framework to further extend knowledge in this field.
4

Kozak, Joseph Peter. "Hard Switched Robustness of Wide Bandgap Power Semiconductor Devices." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/104874.

Abstract:
As power conversion technology is being integrated further into high-reliability environments such as aerospace and electric vehicle applications, a full analysis and understanding of the system's robustness under operating conditions inside and outside the safe operating area is necessary. The robustness of power semiconductor devices, a primary component of power converters, has traditionally been evaluated through qualification tests that were developed for legacy silicon (Si) technologies. However, new devices have been commercialized using wide bandgap (WBG) semiconductors, including silicon carbide (SiC) and gallium nitride (GaN). These new devices promise enhanced capabilities (e.g., higher switching speed, smaller die size, lower junction capacitances, and higher thermal conductance) over legacy Si devices, thus making the traditional qualification experiments ineffective. This work begins by introducing a new methodology for evaluating the switching robustness of SiC metal-oxide-semiconductor field-effect transistors (MOSFETs). Recent static acceleration tests have revealed that SiC MOSFETs can safely operate for thousands of hours at a blocking voltage higher than the rated voltage and near the avalanche boundary. This work evaluates the robustness of SiC MOSFETs under continuous, hard-switched, turn-off stresses with a dc bias higher than the device rated voltage. Under these conditions, SiC MOSFETs show degradation in merely tens of hours at 25 °C and tens of minutes at 100 °C. Two independent degradation and failure mechanisms are unveiled, one present in the gate-oxide and the other in the bulk-semiconductor regions, detected by increases in gate leakage current and drain leakage current, respectively. The second degradation mechanism has not previously been reported in the literature; it is found to be related to electron hopping along the defects generated in the semiconductor during the switching tests. Comparison with the static acceleration tests reveals that both degradation mechanisms correlate with the high-bias switching transients rather than the high-bias blocking states. The GaN high-electron-mobility transistor (HEMT) is a newer WBG device that is being adopted at an unprecedented rate. Different from SiC MOSFETs, GaN HEMTs have no avalanche capability and withstand surge energy through capacitive charging, which often causes significant voltage overshoot up to their catastrophic limit. As a result, the dynamic breakdown voltage (BV) and transient overvoltage margin of GaN devices must be studied to fully evaluate their switching ruggedness. This work characterizes the transient overvoltage capability and failure mechanisms of GaN HEMTs under hard-switched turn-off conditions at increasing temperatures, using a clamped inductive switching circuit with a variable parasitic inductance. This test method allows flexible control over both the magnitude and the dV/dt of the transient overvoltage. The overvoltage robustness of two commercial enhancement-mode (E-mode) p-gate HEMTs was extensively studied: a hybrid-drain gate injection transistor (HD-GIT) with an Ohmic-type gate and a Schottky p-gate HEMT (SP-HEMT). The overvoltage failure of the two devices was found to be determined by the overvoltage magnitude rather than the dV/dt. The HD-GIT and the SP-HEMT were found to fail at a voltage overshoot magnitude that is higher than the breakdown voltage in the static current-voltage measurement.
These single-event failure tests were repeated at increased temperatures (100 °C and 150 °C), and the failures of both devices were consistent with the room-temperature results. The two types of devices show different failure behaviors, and the underlying mechanisms (electron trapping) have been revealed by physics-based device simulations. Once this single-event overvoltage failure was established, the devices' robustness under repetitive overvoltage and surge-energy events remained unclear; therefore, the switching robustness was evaluated for both the HD-GIT and the SP-HEMT in a clamped inductive switching circuit with a 400 V dc bias. A parasitic inductance was used to generate overvoltage stress events with different overvoltage magnitudes up to 95% of the device's destructive limit, different switching periods from 10 ms to 0.33 ms, different temperatures up to 150 °C, and different negative gate biases. The electrical parameters of the devices were measured before and after 1 million stress cycles under varying conditions. The HD-GITs showed no failure or permanent degradation after 1 million overvoltage events at different switching periods or elevated temperatures. The SP-HEMTs showed more pronounced parametric shifts after the 1 million cycles in the threshold voltage, on-resistance, and saturation drain current. Different shifts were also observed from stresses under different overvoltage magnitudes and are attributable to the trapping of holes produced in impact ionization. All shifts were found to be recoverable after a relaxation period. Overall, the results from these switching-oriented robustness tests have shown that SiC MOSFETs exhibit a tremendous lifetime under static dc-bias experiments, but when excited by hard-switching turn-off events, the failure mechanisms are accelerated. These results suggest the insufficient robustness of SiC MOSFETs under high-bias, hard-switching conditions, and the significance of using switching-based tests to evaluate device robustness. They inspired the GaN-based hard-switching turn-off robustness experiments, which further demonstrated the dynamic breakdown voltage phenomenon. Ultimately these results suggest that the breakdown voltage and overvoltage margin of GaN HEMTs in practical power switching can be significantly underestimated by the static breakdown voltage. Both sets of experiments provide further evidence of the need for switching-oriented robustness experiments to be adopted by both device vendors and users to fully qualify and evaluate new power semiconductor transistors.

Doctor of Philosophy

Power conversion technology is being integrated into industrial and commercial applications with the increased use of laptops, server centers, electric vehicles, and solar and wind energy generation. Each of these converters requires power semiconductor devices to convert energy reliably and safely. Silicon has been the primary material for these devices; however, new devices have been commercialized from both silicon carbide (SiC) and gallium nitride (GaN) materials. Although these devices are required to undergo qualification testing, the standards were developed for silicon technology. The performance of these new devices offers many additional benefits such as physically smaller dimensions, greater power conversion efficiency, and higher thermal operating capabilities.
To facilitate the increased integration of these devices into industrial applications, greater robustness and reliability analyses are required to supplement the traditional tests. The work presented here provides two new experimental methodologies to test the robustness of both SiC and GaN power transistors. These methodologies are oriented around hard-switching environments where both high voltage biases and high conduction current exist and stress the intrinsic semiconductor properties. Experimental evaluations were conducted of both material technologies where the electrical properties were monitored over time to identify any degradation effects. Additional analyses were conducted to determine the physics-oriented failure mechanisms. This work provides insight into the limitations of these semiconductor devices for both device designers and manufacturers as well as power electronic system designers.
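A rough back-of-envelope shows why parasitic inductance drives turn-off overvoltage in such tests: the peak drain voltage is approximately the bus voltage plus L·di/dt. All numbers below are illustrative assumptions, not values from the thesis.

```python
# Back-of-envelope turn-off overvoltage: V_peak ~ V_dc + L_loop * dI/dt.
# All numbers below are illustrative assumptions, not measured values.
V_dc = 400.0          # dc bus voltage [V]
L_loop = 50e-9        # parasitic loop inductance [H]
I_load = 20.0         # load current being cut off [A]
t_fall = 5e-9         # current fall time [s]

di_dt = I_load / t_fall
V_peak = V_dc + L_loop * di_dt
print(f"di/dt = {di_dt:.2e} A/s, predicted peak V_ds ~ {V_peak:.0f} V")
# -> 400 V + 50 nH * 4e9 A/s = 600 V: the overshoot, not the bus, sets the margin.
```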
5

Chen, Ming-Te Mark. "Flow path design of a class of material handling systems for robustness and reliability." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/25381.

6

Wang, Mingzhong. "ARTS: agent-oriented robust transactional system." Thesis, University of Melbourne, 2009. http://repository.unimelb.edu.au/10187/6778.

Abstract:
Internet computing enables the construction of large-scale and complex applications by aggregating and sharing computational, data, and other resources across institutional boundaries. The agent model can address the ever-increasing challenges of scalability and complexity, driven by the prevalence of Internet computing, through its intrinsic properties of autonomy and reactivity, which support the flexible management of application execution in distributed, open, and dynamic environments. However, the non-deterministic behaviour of autonomous agents leads to a lack of control, which complicates exception management in the system, thus threatening its robustness and reliability, because improperly handled exceptions may cause unexpected system failures and crashes.

In this dissertation, we investigate and develop mechanisms to integrate intrinsic support for concurrency control, exception handling, recoverability, and robustness into multi-agent systems. The research covers agent specification, planning and scheduling, execution, and overall coordination, in order to reduce the impact of environmental uncertainty. Simulation results confirm that our model can improve the robustness and performance of the system, while relieving developers from dealing with the low-level complexity of exception handling.

A survey, along with a taxonomy, of existing proposals and approaches for building robust multi-agent systems is provided first, highlighting the merits and limitations of each category.

Next, we introduce the ARTS (Agent-Oriented Robust Transactional System) platform, which allows agent developers to compose recursively-defined, atomically-handled tasks to specify scoped and hierarchically-organized exception-handling plans for a given goal. ARTS then supports automatic selection, execution, and monitoring of appropriate plans in a systematic way, for both normal and recovery executions. Moreover, we propose multiple-step backtracking, which extends the existing step-by-step plan reversal, to serve as the default exception handling and recovery mechanism in ARTS. This mechanism utilizes previous planning results in determining the response to a failure, and allows a substitutable path to start prior to, or in parallel with, the compensation process, thus allowing an agent to achieve its goals more directly and efficiently. ARTS helps developers to focus on high-level business logic and frees them from the low-level complexity of exception management.

One of the reasons for the occurrence of exceptions in a multi-agent system is that agents are unable to adhere to their commitments. We propose two scheduling algorithms for minimising such exceptions when commitments are unreliable. The first is trust-based scheduling, which incorporates the concept of trust, that is, the probability that an agent will comply with its commitments, along with the constraints of system budget and deadline, to improve the predictability and stability of the schedule. Trust-based scheduling supports the runtime adaptation and evolution of the schedule by interleaving the processes of evaluation, scheduling, execution, and monitoring in the life cycle of a plan. The second is commitment-based scheduling, which focuses on the interaction and coordination protocol among agents, and augments agents with the ability to reason about and manipulate their commitments. Commitment-based scheduling supports the refactoring and parallel execution of commitments to maximize the system's overall robustness and performance. While the first scheduling algorithm needs to be performed by a central coordinator, the second is designed to be distributed and embedded into the individual agents.

Finally, we discuss the integration of our approaches into Internet-based applications to build flexible but robust systems, specifically the designs of an adaptive business process management system and of robust scientific workflow scheduling.
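The trust idea admits a compact illustration. The sketch below exhaustively picks, under a budget, the task-to-agent assignment that maximizes the probability that all commitments are met; the tasks, agents, trust values, and the independence assumption are all invented for illustration and are not the ARTS algorithms.

```python
from itertools import product

# Hypothetical inputs: per-task candidate agents with (trust, cost).
# Trust = estimated probability the agent honors its commitment.
candidates = {
    "fetch":   [("a1", 0.95, 3.0), ("a2", 0.80, 1.0)],
    "process": [("a1", 0.90, 2.0), ("a3", 0.99, 4.0)],
}
budget = 6.0

best = None
for combo in product(*candidates.values()):
    cost = sum(c for _, _, c in combo)
    if cost > budget:
        continue
    p_all = 1.0
    for _, trust, _ in combo:
        p_all *= trust  # assumes independent commitment outcomes
    if best is None or p_all > best[0]:
        best = (p_all, combo, cost)

p, combo, cost = best
print(f"P(all commitments met) = {p:.3f} at cost {cost}:",
      [(t, a) for t, (a, _, _) in zip(candidates, combo)])
```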
7

Pérez Garcia, Julio César. "Contribution to security and privacy in the Blockchain-based Internet of Things: Robustness, Reliability, and Scalability." Thesis, Avignon, 2023. http://www.theses.fr/2023AVIG0120.

Abstract:
The Internet of Things (IoT) is a diverse network of objects typically interconnected via the Internet. Given the sensitivity of the information exchanged in IoT applications, it is essential to guarantee security and privacy. This problem is aggravated by the open nature of wireless communications, and the power and computing resource limitations of most IoT devices. Existing IoT security solutions are based on centralized architectures, which raises scalability issues and the single point of failure problem, making them susceptible to denial-of-service attacks and technical failures. Blockchain has emerged as an attractive solution to IoT security and centralization issues. Blockchains replicate a permanent, append-only record of all transactions occurring on a network across multiple devices, keeping them synchronized through a consensus protocol. Blockchain implementation may involve high computational and energy costs for devices; consequently, solutions based on Fog/Edge computing have been considered in the integration with IoT. However, the cost of Blockchain utilization must be optimized, especially in the consensus protocol, which significantly influences the overall system performance. Permissioned Blockchains align better with the requirements of IoT applications than permissionless Blockchains, due to their high transaction processing rate and scalability; the consensus nodes, i.e., Validators, are known and predetermined. In existing consensus protocols used in Permissioned Blockchains, the Validators are usually a predefined or randomly selected set of nodes, which affects both system performance and fairness among users. The objective of this work is to propose solutions to improve security and privacy within IoT by integrating Blockchain technology, as well as to maximize fairness levels during consensus. The study is organized into two distinct parts: one addresses critical aspects of IoT security and proposes Blockchain-based solutions, while the other focuses on optimizing fairness among users during the execution of the consensus algorithm on the Blockchain. We present an authentication mechanism inspired by the µTesla authentication protocol, which uses symmetric keys that form a hash chain and achieves asymmetric properties by unveiling the key used a while later. With this mechanism and the use of the Blockchain to store the keys and facilitate authentication, our proposal ensures robust and efficient authentication of devices without the need for a trusted third party. In addition, we introduce a Blockchain-based key management system for group communications adapted to IoT contexts. The use of Elliptic Curve Cryptography ensures a low computational cost while enabling secure distribution of group keys. In both security solutions, we provide formal and informal proofs of security under the defined attack model.
A performance impact analysis and a comparison with existing solutions are also conducted, showing that the proposed solutions are secure and efficient and can be used in multiple IoT applications. The second part of the work proposes an algorithm to select Validator nodes in Permissioned Blockchains maximizing Social Welfare, using α-Fairness as the objective function. A mathematical model of the problem is developed, and a method for finding the solution in a distributed manner is proposed, employing metaheuristic Evolutionary algorithms and a search-space partitioning strategy. The security of the proposed algorithm and the quality of the solutions obtained are analyzed. As a result of this work, two security protocols for IoT based on Blockchain are introduced, along with a distributed algorithm for maximizing Social Welfare among users in a Permissioned Blockchain network.
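The µTesla-style mechanism lends itself to a compact illustration. Below is a minimal, self-contained sketch of the hash-chain and delayed-disclosure idea only; the thesis's protocol additionally anchors commitments and keys on the Blockchain, which is not modeled here.

```python
import hashlib, hmac

H = lambda b: hashlib.sha256(b).digest()

def make_chain(seed: bytes, n: int):
    # K_n = H(seed); K_{i-1} = H(K_i). Publish K_0 as the commitment;
    # keys are later disclosed in the order K_1, K_2, ...
    chain = [H(seed)]
    for _ in range(n):
        chain.append(H(chain[-1]))
    chain.reverse()          # chain[0] is the commitment K_0
    return chain

def verify_disclosed(commitment: bytes, k_i: bytes, i: int) -> bool:
    # A receiver holding only K_0 checks that hashing K_i i times gives K_0.
    for _ in range(i):
        k_i = H(k_i)
    return hmac.compare_digest(k_i, commitment)

chain = make_chain(b"device-secret", n=100)
k0 = chain[0]
# Interval 7: the sender MACs messages with chain[7] and discloses it one
# interval later; anyone can then authenticate it against the commitment.
print(verify_disclosed(k0, chain[7], 7))   # True
print(verify_disclosed(k0, chain[8], 7))   # False
```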
8

Al-Ameri, Shehab Ahmed. "A framework for assessing robustness of water networks and computational evaluation of resilience." Thesis, Cranfield University, 2016. http://dspace.lib.cranfield.ac.uk/handle/1826/12334.

Abstract:
Arid regions tend to take careful measures to ensure water supplies are secured to consumers, to help provide the basis for further development. The distribution network is the most expensive part of the water supply infrastructure and it must maintain performance during unexpected incidents. Many aspects of performance have previously been discussed separately, including reliability, vulnerability, flexibility and resilience. This study aimed to develop a framework to bring together these aspects as found in the literature and industry practice, and bridge the gap between them. Semi-structured interviews with water industry experts were used to examine the presence and understanding of robustness factors. Thematic analysis was applied to investigate these and inform a conceptual framework including the component and topological levels. Robustness was described by incorporating network reliability and resiliency. The research focused on resiliency as a network-level concept derived from flexibility and vulnerability. To utilise this new framework, the study explored graph theory to formulate metrics for flexibility and vulnerability that combine network topology and hydraulics. The flexibility metric combines hydraulic edge betweenness centrality, representing hydraulic connectivity, and hydraulic edge load, measuring utilised capacity. Vulnerability captures the impact of failures on the ability of the network to supply consumers, and their sensitivity to disruptions, by utilising node characteristics, such as demand, population and alternative supplies. These measures together cover both edge (pipe) centric and node (demand) centric perspectives. The resiliency assessment was applied to several literature benchmark networks prior to using a real case network. The results show the benefits of combining hydraulics with topology in robustness analysis. The assessment helps to identify components or sections of importance for future expansion plans or maintenance purposes. The study provides a novel viewpoint overarching the gap between literature and practice, incorporating different critical factors for robust performance.
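The thesis couples topology with hydraulics; as a purely topological stand-in, the sketch below computes edge betweenness on a toy network and pairs it with an assumed utilisation ratio (flow/capacity), mimicking the flexibility metric's two ingredients. The network, flows, and capacities are invented for illustration.

```python
import networkx as nx

# Toy water network: nodes are junctions, edges are pipes with a 'capacity'
# and an estimated 'flow' (stand-ins for hydraulic simulation results).
G = nx.Graph()
G.add_edge("src", "A", capacity=100, flow=80)
G.add_edge("A", "B", capacity=60, flow=45)
G.add_edge("A", "C", capacity=60, flow=30)
G.add_edge("B", "D", capacity=40, flow=35)
G.add_edge("C", "D", capacity=40, flow=10)

# Purely topological proxy for the thesis's hydraulic edge betweenness:
# how central each pipe is to connecting the network.
ebc = nx.edge_betweenness_centrality(G)

for (u, v), b in sorted(ebc.items(), key=lambda kv: -kv[1]):
    util = G[u][v]["flow"] / G[u][v]["capacity"]   # utilised capacity
    # Pipes that are both central and heavily loaded have little flexibility.
    print(f"{u}-{v}: betweenness={b:.2f}, utilisation={util:.0%}")
```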
9

Yu, Hang. "Reliability-based design optimization of structures: methodologies and applications to vibration control." PhD thesis, Ecole Centrale de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00769937.

Abstract:
Deterministic design optimization is widely used to design products or systems. However, due to the inherent uncertainties involved in model parameters or operation processes, deterministic design optimization without considering uncertainties may result in unreliable designs. In this case, it is necessary to develop and implement optimization under uncertainty. One way to deal with this problem is reliability-based robust design optimization (RBRDO), in which additional uncertainty analysis (UA, including both reliability analysis and moment evaluations) is required. For most practical applications, however, UA is realized by Monte Carlo simulation (MCS) combined with structural analyses, which renders RBRDO computationally prohibitive. Therefore, this work focuses on the development of efficient and robust methodologies for RBRDO in the context of MCS. We present a polynomial chaos expansion (PCE) based MCS method for UA, in which the random response is approximated with the PCE. The efficiency is mainly improved by avoiding repeated structural analyses. Unfortunately, this method is not well suited for high-dimensional problems, such as dynamic problems. To tackle this issue, we applied the convolution form to compute the dynamic response, in which the PCE is used to approximate the modal properties (i.e., to solve the random eigenvalue problem) so that the dimension of uncertainties is reduced, since only structural random parameters are considered in the PCE model. Moreover, to avoid the modal intermixing problem when using MCS to solve the random eigenvalue problem, we adopted the MAC factor to quantify the intermixing, and developed a univariable method to check which variable causes the problem and thereafter to remove or reduce it. We propose a sequential RBRDO to improve efficiency and to overcome the non-convergence problem encountered in the framework of nested MCS-based RBRDO. In this sequential RBRDO, we extend the conventional sequential strategy, which mainly aims to decouple the reliability analysis from the optimization procedure, to make the moment evaluations independent of the optimization procedure as well. A locally first-order exponential approximation around the current design is utilized to construct the equivalent deterministic objective functions and probabilistic constraints. To efficiently calculate the coefficients, we developed an auxiliary-distribution-based reliability sensitivity analysis and a PCE-based moment sensitivity analysis. We investigate and demonstrate the effectiveness of the proposed methods for UA as well as RBRDO on several numerical examples. Finally, RBRDO is applied to design the tuned mass damper (TMD) in the context of passive vibration control, for both deterministic and uncertain structures. The associated optimal designs obtained by RBRDO can not only reduce the variability of the response, but also keep the amplitude below the prescribed threshold.
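As a one-dimensional illustration of PCE-accelerated Monte Carlo (a sketch under assumptions, not the thesis's multi-dimensional formulation): fit probabilists' Hermite coefficients to a handful of "expensive" model runs, then sample the cheap surrogate a million times for moments and a failure probability.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(2)

def expensive_response(x):
    # Stand-in for a structural analysis: nonlinear in the random input.
    return np.exp(0.3 * x) + 0.1 * x**2

# 1) Fit a degree-4 polynomial chaos expansion in the standard normal
#    germ xi, using a small number of "expensive" model runs.
xi_train = rng.standard_normal(50)
coeffs = He.hermefit(xi_train, expensive_response(xi_train), deg=4)

# 2) Monte Carlo on the cheap surrogate instead of the model itself.
xi = rng.standard_normal(1_000_000)
y_pce = He.hermeval(xi, coeffs)
print(f"mean ~ {y_pce.mean():.4f}, std ~ {y_pce.std():.4f}")
print(f"P(Y > 2.0) ~ {(y_pce > 2.0).mean():.4e}")   # reliability estimate
```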
10

Kagho, Gouadjio Nadia Christiana. "Étude de la vulnérabilité et de la robustesse des ouvrages." Thesis, Paris Est, 2013. http://www.theses.fr/2013PEST1003/document.

Abstract:
Structural robustness is associated with several definitions depending on context. In the field of structural engineering, the Eurocodes define structural robustness as "the ability of a structure to withstand events like fire, explosions, impact or the consequences of human error, without being damaged to an extent disproportionate to the original cause". Such a definition clearly involves concepts of local failure (initial damage) and global failure (disproportionate damage). This PhD work proposes a methodology to quantify structural robustness in a probabilistic way and to assess the impact of local failures on the global failure of the structure. The main objective is to quantify the gap between local and global failures by introducing several robustness indices, proposed for both undamaged and initially damaged structures.
To qualify and quantify the relationships between the performance of the different structural components and the overall structural performance, it is necessary to introduce a system-level analysis which simultaneously considers local failure modes and global failure events. An inner approach is introduced to determine significant failure sequences and to characterize stochastically dominant failure paths, identified using branch-and-bound, β-unzipping, and β-unzipping with bounding methods. These methods determine significant failure paths within reasonable computational times; in particular, the path with the largest probability of occurrence is taken as the reference failure path. An outer approach is also proposed, which identifies global failure without an event-tree search (and, consequently, without analyzing the order of the failure sequence); it characterizes an overall, simultaneous failure of different components without determining the chronology of the failure event. In both cases, the goal is to provide a general and widely applicable framework for qualifying and quantifying the robustness level of new and existing structures through the introduction of methodologies and indices.
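The abstract leaves the robustness indices unspecified. One common probabilistic form in this literature, given here only as an illustrative assumption (not necessarily the thesis's definition), compares the global failure probability of the intact structure with that of the structure after a local damage event:

\[
I_{\mathrm{rob}} \;=\; \frac{P_f^{\mathrm{intact}}}{P_f^{\mathrm{damaged}}}\,, \qquad 0 < I_{\mathrm{rob}} \le 1 .
\]

Values near 1 mean the local damage barely changes the global failure probability (a robust structure), while values near 0 flag disproportionate consequences of the initial damage.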
11

Siljeström, Hansson Eira, and Emil Hellström. "Improving robustness of a PID-controlled measurement system through Design of Experiments : A DMAIC case study at Atlas Copco BLM." Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-79557.

Abstract:
Companies within the manufacturing industry often aim to increase productivity while maintaining the quality of their products. To achieve higher productivity and quality it is imperative to have tools with high speed and accuracy. Atlas Copco BLM's STbench is a measurement system which enables manufacturers to validate their tools in different situations. The purpose of this case study was to improve the robustness of the STbench so it would operate well in situations with both low and high tool speed. To define and investigate how to improve the STbench, a modified DMAIC approach was used. During the investigation it was found that the area with the largest improvement potential was the STbench's PID controllers. Design of Experiments was used as the method to optimize the P- and I-elements of the PID controllers and hence increase the robustness. The optimal settings could improve the robustness of the STbench by approximately 50%, but the result has not been verified. This case study presents results that can increase the robustness of the STbench, thus fulfilling its purpose. Furthermore, this master thesis presents several findings regarding the use of experimental plans when optimizing control systems, an area that has not been extensively investigated in previous literature.
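A minimal sketch of the idea follows: a 2² factorial over the P and I gains, with worst-case overshoot across two plant conditions standing in for robustness across tool speeds. The plant model and all levels are assumptions for illustration, not Atlas Copco's STbench.

```python
import numpy as np
from itertools import product

def step_response_overshoot(kp, ki, plant_gain, dt=1e-3, T=2.0):
    # Euler simulation of a PI loop around a first-order plant
    # y' = (-y + plant_gain * u) / tau. A toy stand-in for the bench.
    tau, y, integ, ref = 0.1, 0.0, 0.0, 1.0
    peak = 0.0
    for _ in range(int(T / dt)):
        e = ref - y
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + plant_gain * u) / tau
        peak = max(peak, y)
    return max(0.0, peak - ref)  # overshoot above the setpoint

# 2^2 full factorial in (Kp, Ki); the robustness response is the worst
# overshoot across a "low speed" and a "high speed" plant condition.
levels_kp, levels_ki = [0.5, 2.0], [1.0, 5.0]
for kp, ki in product(levels_kp, levels_ki):
    worst = max(step_response_overshoot(kp, ki, g) for g in (0.5, 1.5))
    print(f"Kp={kp:>3}, Ki={ki:>3} -> worst-case overshoot {worst:.3f}")
```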
12

Loayza Ramirez, Jorge Miguel. "Study and characterization of electrical overstress aggressors on integrated circuits and robustness optimization of electrostatic discharge protection devices." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI044/document.

Abstract:
This Ph.D. thesis concerns reliability issues in the microelectronics industry for the most advanced technology nodes. In particular, the Electrical OverStress (EOS) issue is studied. Reducing EOS failures in Integrated Circuits (ICs) is becoming more and more important. However, the EOS topic is very complex and involves many different causes, viewpoints, definitions and approaches. In this context, a complete analysis of the current status of the EOS issue is carried out, after which the Ph.D. objectives can be defined clearly. In particular, increasing the robustness of on-chip protection structures and characterizing ICs against EOS-like aggressors are two of the main goals. In order to understand and quantify the behavior of ICs against these aggressors, a dedicated EOS test bench is put in place along with the definition of a characterization methodology. A full characterization and comparison is performed on two different Electrostatic Discharge (ESD) power supply clamps. After identifying the potential weaknesses of the promising Silicon-Controlled Rectifier (SCR) device, a new SCR-based device with turn-off capability is proposed and studied using 3-D Technology Computer-Aided Design (TCAD) simulation. Triggering and turn-off behaviors are studied, as well as their optimization. Finally, three different approaches are proposed for improving the robustness of IC on-chip protection circuits. They are characterized with the EOS test bench, which identifies their assets as well as their points of improvement.
13

Rado, Omesaad A. M. "Contributions to evaluation of machine learning models. Applicability domain of classification models." Thesis, University of Bradford, 2019. http://hdl.handle.net/10454/18447.

Abstract:
Artificial intelligence (AI) and machine learning (ML) present application opportunities and challenges that can be framed as learning problems. The performance of machine learning models depends on the algorithms and the data. Moreover, learning algorithms create a model of reality through learning and testing with data, and their performance shows the degree of agreement of their assumed model with reality. ML algorithms have been successfully used in numerous classification problems. With the growing popularity of using ML models for many purposes in different domains, the validation of such predictive models is now required more formally. Traditionally, there are many studies related to model evaluation, robustness, reliability, and the quality of the data and data-driven models. However, those studies do not yet consider the concept of the applicability domain (AD). The issue is that the AD is often not well defined, or not defined at all, in many fields. This work investigates the robustness of ML classification models from the applicability domain perspective. A standard definition of applicability domain regards the spaces in which the model provides results with specific reliability. The main aim of this study is to investigate the connection between the applicability domain approach and classification model performance, examining the usefulness of assessing the AD for the classification model, i.e. the reliability, reuse, and robustness of classifiers. The work is implemented using three approaches: firstly, assessing the applicability domain for the classification model; secondly, investigating the robustness of the classification model based on the applicability domain approach; thirdly, selecting an optimal model using Pareto optimality. The experiments are illustrated by considering different machine learning algorithms for binary and multi-class classification on healthcare datasets from public benchmark data repositories. In the first approach, the decision tree algorithm (DT) is used for classification, with a feature selection method applied to choose features. The obtained classifiers are used in the third approach for the selection of models using Pareto optimality. The second approach is implemented in three steps: building the classification model, generating synthetic data, and evaluating the obtained results. The results provide an understanding of how the proposed approach can help define a model's robustness and applicability domain so that reliable outputs can be provided; these approaches open opportunities for classification data and model management. The proposed algorithms are implemented through a set of experiments on the classification accuracy of instances which fall in the domain of the model. For the first approach, considering all features, the highest accuracy obtained is 0.98, with an average threshold of 0.34, for the Breast Cancer dataset; after applying the recursive feature elimination (RFE) method, the accuracy is 0.96 with an average threshold of 0.27. For the robustness of the classification model based on the applicability domain approach, the minimum accuracy is 0.62 for the Indian Liver Patient data at r=0.10, and the maximum accuracy is 0.99 for the Thyroid dataset at r=0.10.
For the selection of an optimal model using Pareto optimality, the optimally selected classifier gives an accuracy of 0.94 with an average threshold of 0.35. This research investigates critical aspects of the applicability domain as related to the robustness of classification ML algorithms. The performance of machine learning techniques depends on the degree of reliable predictions of the model. In the literature, the robustness of an ML model can be defined as the ability of the model to provide a testing error close to the training error, and such properties describe the stability of the model's performance when tested on new datasets. Concluding, this thesis introduced the concept of the applicability domain for classifiers and tested the use of this concept with case studies on health-related public benchmark datasets.

Sponsor: Ministry of Higher Education in Libya.
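A crude distance-based applicability domain can be sketched as follows on the same Breast Cancer benchmark; the thesis's AD definition and thresholds are richer, so treat this only as the shape of the computation.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# Applicability domain (AD): a test point is "inside" if its distance to the
# nearest training point does not exceed the 95th percentile of the
# nearest-neighbour distances within the training set itself.
d_tr = np.linalg.norm(X_tr[:, None, :] - X_tr[None, :, :], axis=2)
np.fill_diagonal(d_tr, np.inf)              # exclude self-distances
threshold = np.quantile(d_tr.min(axis=1), 0.95)

d_te = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=2).min(axis=1)
inside = d_te <= threshold

print(f"inside AD : n={inside.sum():3d}, "
      f"accuracy={clf.score(X_te[inside], y_te[inside]):.3f}")
if (~inside).any():
    print(f"outside AD: n={(~inside).sum():3d}, "
          f"accuracy={clf.score(X_te[~inside], y_te[~inside]):.3f}")
```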
14

Bessani, Michel. "Resilience and vulnerability of power distribution systems: approaches for dynamic features and extreme weather scenarios." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-11072018-165318/.

Abstract:
Our society is heavily dependent on commodities, such as water and electricity, supplied to final users by engineered systems known as critical infrastructures. In this context, understanding how such systems handle damaging events is an important and current concern of researchers, public agents, and society. How much performance a system loses due to damage relates to its vulnerability; the ability to absorb and recover successfully from damage is its resilience. In this study, approaches are presented to assess the vulnerability and resilience of power distribution systems by evaluating dynamic features: the processes of failure and repair and system reconfiguration for vulnerability, and the effects of extreme weather scenarios together with the failure and repair processes for resilience. These approaches were applied to systems previously presented in the literature and to a Brazilian power distribution system. A Monte Carlo simulation was applied to evaluate these systems; models for time-to-failure and time-to-repair under different circumstances were obtained from historical data, and a method to use the time-to-failure models during the vulnerability analysis was introduced. In addition, an assessment of the impact of reconfiguration capability on vulnerability was carried out, and a resilience assessment under different climate scenarios was developed. The time-to-failure and repair models highlighted how external factors modify the Brazilian system's failure and repair dynamics; the use of time-to-failure models during vulnerability analysis showed that considering the failure dynamics of the different types of elements gives different results, and the time domain allows new analysis perspectives. The investigation indicated that the vulnerability reduction due to reconfiguration is affected by the number of switches and by the maximum load capacity of the distribution system feeders. The resilience assessment showed that, for structural connectivity, larger distribution networks are less resilient, while for electricity delivery a set of features related to the topological and electrical organization of such networks appears to be associated with network service resilience; such information is useful for system planning and management. The dynamics evaluated in this study are relevant to the vulnerability and resilience of such systems and of other critical infrastructures, and the developed approaches can be applied to other systems, such as transportation and water distribution. In future studies, other power distribution system features, such as distributed generation and energy storage, will be considered in both vulnerability and resilience analyses.
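As a sketch of the failure/repair simulation layer (all distribution parameters are invented; the thesis fits its models to historical utility data), the snippet below alternates Weibull times-to-failure and lognormal times-to-repair over a yearly horizon and estimates downtime and availability by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_downtime(T=8760.0, shape=1.5, scale=2000.0, mu=1.0, sigma=0.8):
    # Alternate Weibull times-to-failure and lognormal times-to-repair
    # over a one-year horizon (hours); returns total hours out of service.
    t, down = 0.0, 0.0
    while True:
        t += scale * rng.weibull(shape)          # next failure
        if t >= T:
            return down
        ttr = rng.lognormal(mu, sigma)           # repair duration
        down += min(ttr, T - t)
        t += ttr

runs = np.array([simulate_downtime() for _ in range(5000)])
print(f"expected downtime ~ {runs.mean():.1f} h/yr, "
      f"availability ~ {1 - runs.mean() / 8760:.4f}")
```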
15

Sickert, Jan-Uwe, Wolfgang Graf, and Stephan Pannier. "Entwurf von Textilbetonverstärkungen – computerorientierte Methoden mit verallgemeinerten Unschärfemodellen." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1244047293129-54264.

Abstract:
This contribution presents three methods for the design and dimensioning of textile-reinforced concrete strengthening. For preliminary design, variant studies are applied, e.g. to determine the required number of textile layers. For fixing realizations of several continuous design variables under different design objectives and design constraints, fuzzy optimization and the direct solution of the design task are outlined. Fuzzy optimization yields compromise solutions for the multi-criteria design task. The direct solution is based on exploratory data analysis of point sets obtained from an uncertain structural analysis, and delivers regions, so-called design subspaces, as a basis for selecting the design.
APA, Harvard, Vancouver, ISO, and other styles
16

Bagayoko, Amadou Baba. "Politiques de robustesse en réseaux ad hoc." Thesis, Toulouse, INPT, 2012. http://www.theses.fr/2012INPT0056/document.

Full text
Abstract:
Due to the unreliable nature of wireless communications and to node mobility, Mobile Ad hoc Networks (MANETs) suffer from frequent failures and reactivations of links. Consequently, routes change frequently, causing a significant number of routing packets to be sent to discover new routes and leading to increased network congestion and transmission latency. MANETs therefore demand robust protocol design at all layers of the communication protocol stack, particularly at the MAC, routing, and transport layers. In this thesis, we adopt a robustness approach to improve communication performance in MANETs. We propose and study two protection architectures (protection by predictive analysis and protection by route redundancy), which are coupled with routing-level restoration. The routing protocol is responsible for the failure detection phase and uses link-level notifications to detect link failures. Our first proposition is based on a unipath reactive routing protocol with a modified route selection criterion. The idea is to use metrics that can predict the future state of a route in order to improve its lifetime. Two predictive metrics based on node mobility are proposed: route reliability, and a combination of hop count and reliability. To determine these predictive metrics, we propose an analytical formulation that computes the link reliability between adjacent nodes. This formulation takes into account the node mobility model and the characteristics of wireless communication, including inter-packet collisions and signal attenuation. The mobility models studied are Random Walk and Random Way Point. We show the impact of these predictive metrics on network performance in terms of packet delivery ratio, normalized routing overhead, and number of route failures. The second proposition is based on a multipath routing protocol; it is a protection mechanism based on route redundancy. In this architecture, the recovery operation consists either of switching the traffic to an alternate route or of computing a new route. We show that route redundancy improves communication robustness by reducing the failure recovery time. We then propose an analytical comparison between the different recovery policies of a multipath routing protocol, and deduce that segment recovery is the best recovery policy in terms of recovery time and reliability.
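The route-reliability metric admits a compact illustration: if every link carries a reliability in (0, 1], the most reliable route maximizes a product of link reliabilities, which is equivalent to a shortest-path search on -log(reliability) weights. The sketch below is illustrative only; the graph is invented, whereas the thesis derives link reliabilities analytically from the mobility models:

    import heapq
    import math

    def most_reliable_route(links, src, dst):
        """links: {(u, v): reliability in (0, 1]} for undirected links."""
        adj = {}
        for (u, v), r in links.items():
            w = -math.log(r)  # product of reliabilities -> sum of weights
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))
        pq, seen = [(0.0, src, [src])], set()
        while pq:
            cost, node, path = heapq.heappop(pq)
            if node == dst:
                return math.exp(-cost), path  # route reliability, node sequence
            if node in seen:
                continue
            seen.add(node)
            for nxt, w in adj.get(node, []):
                if nxt not in seen:
                    heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
        return 0.0, None

    links = {("A", "B"): 0.9, ("B", "D"): 0.9, ("A", "C"): 0.99, ("C", "D"): 0.99}
    print(most_reliable_route(links, "A", "D"))  # picks A-C-D, reliability ~0.98

A combined hop-count/reliability metric can be obtained the same way by adding a constant per-hop penalty to each edge weight.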
APA, Harvard, Vancouver, ISO, and other styles
17

Akopyan, Evelyne. "Fiabilité de l'architecture réseau des systèmes spatiaux distribués sur essaims de nanosatellites." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP102.

Full text
Abstract:
The study of the low-frequency range is essential for Deep Space observation, as it extracts precious information from Dark Ages signals, which are signatures of the very early Universe. To this day, the majority of low-frequency radio interferometers are deployed in desert regions on the surface of the Earth. However, these signals are easily distorted by radio-frequency interference as well as by the ionosphere, making them difficult to observe when they are not completely masked. One solution to this problem would be to observe the low-frequency signals directly from Space, by deploying a nanosatellite swarm in orbit around the Moon. This swarm is defined as a Distributed Space System (DSS) operating as an interferometer, shielded by the Moon from terrestrial interference and ionospheric distortion. However, configuring a nanosatellite swarm as a space observatory proves to be a challenging problem in terms of communication, mostly because of the lack of external infrastructure in Space and the amount of observation data to propagate within the swarm. The objective of the thesis is thus to define a reliable network architecture that complies with the requirements of a MANET and of a distributed system simultaneously. This thesis starts by characterizing the network of the nanosatellite swarm and highlights its strong heterogeneity. It then introduces a set of algorithms, based on graph division, to fairly distribute the network load among the swarm, and compares their performance in terms of fairness. Finally, it assesses the fault tolerance of the system in terms of robustness (the capacity to resist faults) and resilience (the capacity to maintain functionality when faults occur), and evaluates the impact of graph division on the overall reliability of the swarm. The division algorithms developed in this thesis should ensure the Quality of Service (QoS) necessary for the proper functioning of a Space interferometer. To this end, relevant routing protocols should be thoroughly studied and integrated in order to meet the strict performance and reliability requirements of this advanced application.
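The fairness comparison mentioned above can be scored in several ways; Jain's fairness index, used below purely as an illustration (the thesis's own criteria may differ), rates a per-partition load vector at 1.0 when perfectly balanced and near 1/n when one partition carries everything:

    def jain_fairness(loads):
        """Jain's index: (sum x)^2 / (n * sum x^2) over partition loads."""
        n = len(loads)
        return sum(loads) ** 2 / (n * sum(x * x for x in loads))

    print(jain_fairness([10, 10, 10, 10]))  # 1.0, perfectly balanced division
    print(jain_fairness([37, 1, 1, 1]))     # ~0.29, one partition dominates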
APA, Harvard, Vancouver, ISO, and other styles
18

Schwerz, André Luis. "Sistemas de informação cientes de processos, robustos e confiáveis." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-17042018-145500/.

Full text
Abstract:
Nowadays, many corporations and organizations are making increasing efforts to transform their potential ideas into products and services quickly and effectively. These efforts have also stimulated the evolution of information systems, which are now supported by higher-level abstract models to describe the process logic. In this context, several sophisticated Process-Aware Information Systems (PAIS) have been successfully proposed for managing business processes and automating large-scale scientific (e-Science) processes. Much of this success is due to their ability to provide generic functionality for modeling, executing, and monitoring processes. These functionalities work well when process models have a well-behaved path towards achieving their objectives. However, anomalous situations that fall outside the well-behaved execution path still pose a significant challenge to PAIS. Because of the many types of failures that may deviate execution away from expected behaviors, providing robust and reliable execution is a complex task for current PAIS, since not all failure situations can be efficiently modeled within the traditional flow structure. As a consequence, the treatment of such situations usually involves interventions in systems by human operators, which results in significant additional costs for businesses. In this work, we introduce a cost/benefit-aware recovery composition method that is able to find and follow alternative paths to reduce the financial side effects of exception handling. From a practical point of view, this method provides automated and optimized exception handling by calculating the costs and benefits of each recovery path and choosing the recovery path with the best cost/benefit trade-off available. More specifically, our recovery method extends the WED-flow (Workflow, Event processing and Data-flow) approach to enable cost/benefit-aware composition of forward and/or backward transactional recovery steps. Finally, the experiments show that this recovery method can be suitably incorporated into exception handling within a wide variety of processes.
APA, Harvard, Vancouver, ISO, and other styles
19

Pomès, Emilie. "Amélioration et suivi de la robustesse et de la qualité de MOSFETs de puissance dédiés à des applications automobiles micro-hybrides." Thesis, Toulouse, INSA, 2012. http://www.theses.fr/2012ISAT0039/document.

Full text
Abstract:
In the current ecological context, European automotive suppliers have to develop innovative systems in order to reduce the greenhouse gas emissions produced by vehicles. The new mild-hybrid electronic applications require the development of new strategies for system integration and the reduction of power losses. One proposal consisted in creating power modules built from MOSFETs characterized by a low blocking voltage under high current. The reversible starter-alternator application, also named "Stop & Start", requires robust and reliable components able to support high-current solicitation in avalanche mode at temperatures up to 175°C. The research work presented in this thesis concerns the robustness and reliability enhancement of MOSFET components. First of all, an important part is devoted to understanding the avalanche mode and its implications for the technology. In this context, the fabrication process, notably around the gate oxide, was improved in order to withstand gate-source and gate-drain stress modes and satisfy the reliability requirements. Moreover, the development of an innovative wafer-level test derived from the QBD test allowed precise evaluation of the process modifications, correlated with the results of the reliability campaigns. Finally, the MOSFET life cycle needs precise quality monitoring consisting of two main steps. The first is the monitoring of electrical parameters and their drift over time through post-processing statistical analysis. The second is the use of a traceability tool linking the power module to the silicon die in order to highlight possible defects in the final starter-alternator application and trace failures back to their root causes. The innovations presented in this thesis are part of a continuous improvement approach for power MOSFET quality and robustness.
APA, Harvard, Vancouver, ISO, and other styles
20

Manseur, Farida. "Algorithmes pour un guidage optimal des usagers dans les réseaux de transport." Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1141/document.

Full text
Abstract:
In this work, we are interested in the optimal guidance of users on road networks. More precisely, we focus on adaptive guidance strategies with guarantees in terms of travel time reliability and in terms of the robustness of the strategies. We rely on a stochastic approach, where probability distributions are associated with travel times on the links of the network. The guidance is adaptive and user-based. The objective of this work is the development of "robust" strategies for user guidance in a road network. A guidance strategy from an origin node to a destination node is said to be robust, here, if it minimizes the deterioration of its maximum value calculated at the origin against possible reconfigurations of the network due to link failures (accidents, roadworks, etc.). The value of a guidance strategy is maximized with respect to the mean travel time and its reliability. Two main parts are distinguished in this work. We start with the static aspect of the guidance, where traffic dynamics are not taken into account, and propose an extension of an existing guidance approach to take into account the robustness of the computed itineraries. In a second step, we combine our new guidance algorithm with a microscopic traffic model in order to capture the effect of traffic dynamics on robust route computation.
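The min-max flavour of this robustness definition can be shown with a toy selection rule: among candidate routing strategies, keep the one whose value deteriorates least across single-link-failure scenarios. Candidates, travel times, and scenarios below are invented for illustration:

    def robust_choice(nominal, scenarios):
        """nominal: {route: expected travel time without failures}.
        scenarios: {route: {failure: travel time under that failure}}.
        Returns the route minimizing worst-case deterioration."""
        def worst_deterioration(route):
            return max(t - nominal[route] for t in scenarios[route].values())
        return min(nominal, key=worst_deterioration)

    nominal = {"R1": 20.0, "R2": 23.0}
    scenarios = {
        "R1": {"link_a_down": 45.0, "link_b_down": 22.0},  # fragile to link a
        "R2": {"link_a_down": 25.0, "link_b_down": 26.0},  # degrades gracefully
    }
    print(robust_choice(nominal, scenarios))  # R2: slower nominally, more robust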
APA, Harvard, Vancouver, ISO, and other styles
21

Rahat, Alma As-Aad Mohammad. "Hybrid evolutionary routing optimisation for wireless sensor mesh networks." Thesis, University of Exeter, 2015. http://hdl.handle.net/10871/21330.

Full text
Abstract:
Battery-powered wireless sensors are widely used in industrial and regulatory monitoring applications. This is primarily due to the ease of installation and the ability to monitor areas that are difficult to access. Additionally, they can be left unattended for long periods of time. However, there are many challenges to successful deployments of wireless sensor networks (WSNs). In this thesis we draw attention to two major challenges. Firstly, with a view to extending network range, modern WSNs use mesh network topologies, where data is sent either directly or by relaying data from node to node en route to the central base station. The additional load of relaying other nodes' data is expensive in terms of energy consumption, and, depending on the routes taken, some nodes may be heavily loaded. Hence, it is crucial to locate routes that achieve energy efficiency in the network and extend the time before the first node exhausts its battery, thus improving the network lifetime. Secondly, WSNs operate in a dynamic radio environment. With changing conditions, such as modified buildings or the passage of people, links may fail and data will be lost as a consequence. Therefore, in addition to finding energy-efficient routes, it is important to locate combinations of routes that are robust to the failure of radio links. Dealing with these challenges presents a routing optimisation problem with multiple objectives: find good routes to ensure energy efficiency, extend network lifetime and improve robustness. This is, however, an NP-hard problem, so polynomial-time algorithms to solve it are unavailable. We therefore propose hybrid evolutionary approaches to approximate the optimal trade-offs between these objectives. In our approach, we use novel search space pruning methods for network graphs, based on k-shortest paths, partially and edge disjoint paths, and graph reduction, to combat the combinatorial explosion in search space size and consequently conduct rapid optimisation. The proposed methods can successfully approximate optimal Pareto fronts. The estimated fronts contain a wide range of robust and energy-efficient routes. The fronts typically also include solutions with a network lifetime close to the optimal lifetime achievable if the number of routes per node were unconstrained. These methods are demonstrated on a real network deployed at the Victoria & Albert Museum, London, UK.
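The lifetime objective above has a simple concrete form: given each node's energy budget and the relay load induced by a chosen set of routes, network lifetime is bounded by the first node to exhaust its battery. A hypothetical sketch (a uniform per-packet cost is an assumption; real radios differ):

    def network_lifetime(routes, battery_j, tx_cost_j=1.0):
        """routes: node paths ending at the sink; each non-sink hop spends
        tx_cost_j joules per reporting round sending or relaying."""
        load = {n: 0.0 for n in battery_j}
        for path in routes:
            for node in path[:-1]:  # the sink is assumed mains-powered
                load[node] += tx_cost_j
        return min(battery_j[n] / load[n] for n in load if load[n] > 0)

    battery = {"s1": 100.0, "s2": 100.0, "relay": 100.0}
    routes = [["s1", "relay", "sink"], ["s2", "relay", "sink"]]
    print(network_lifetime(routes, battery))  # relay carries twice the load: 50 rounds

The evolutionary search then trades this lifetime against total energy use and robustness over candidate route combinations.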
APA, Harvard, Vancouver, ISO, and other styles
22

Said, Nasri. "Evaluation de la robustesse des technologies HEMTs GaN à barrière AlN ultrafine pour l'amplification de puissance au-delà de la bande Ka." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0425.

Full text
Abstract:
The GaN industry is strategic for the European Union because it enhances the power and efficiency of radar and telecommunication systems, especially in the S to Ka bands (up to 30 GHz). To meet the needs of future applications such as 5G and military systems, GaN technology development aims to increase operating frequencies into the millimeter-wave range. This requires optimizing the epitaxy and reducing the gate length to less than 150 nm, as well as using ultrathin barriers (<10 nm) to avoid short-channel effects. Replacing the AlGaN barrier with AlN is a solution for maintaining good performance while miniaturizing devices. In this thesis, several technological variants with an ultrathin AlN barrier (3 nm) on undoped GaN channels of various thicknesses, developed by the IEMN laboratory, are studied. The evaluation of the performance and robustness of these technologies, crucial for their qualification and use in long-term mission profiles, is conducted in both DC and RF modes to define the safe operating areas (SOA) and identify degradation mechanisms. The DC and pulsed characterization campaign revealed low component dispersion after electrical stabilization, reflecting good technological control; this also allows for more relevant statistical studies and generic analyses across all the component batches studied. The sensitivity analysis of the devices at temperatures up to 200°C demonstrated strong thermal stability in diode and transistor modes, following parametric indicators representative of the electrical models of the components (saturation and leakage currents, threshold voltage, gate- and drain-lag rates, ...). The addition of an AlGaN back-barrier on a moderately C-doped buffer layer resolved the trade-off between electron confinement and trap density. Accelerated aging tests in DC mode at various bias conditions and in RF mode using input power steps showed that the AlGaN back-barrier provides better stability of leakage currents and static I(V) curves, reduces trapping and self-heating effects, and extends the operational DC-SOA. Dynamic accelerated aging tests at 10 GHz on HEMTs with different gate-drain spacings showed that the RF-SOA does not depend on this spacing but rather on the gate's ability to withstand high RF signals before abrupt degradation occurs. Using an original nonlinear modeling method that takes the self-biasing phenomenon into account, devices with the AlGaN back-barrier proved to be more robust in RF as well. This is reflected in their later gain compression, up to +10 dB, without apparent electrical or structural degradation (as observed by photoluminescence). Regardless of the AlN/GaN variant, the RF-stress degradation mechanism corresponds to the abrupt breakdown of the Schottky gate, leading to its failure. These results indicate that the components are more sensitive to DC bias conditions than to the level of injected RF signals [...]
APA, Harvard, Vancouver, ISO, and other styles
23

Lelièvre, Nicolas. "Développement des méthodes AK pour l'analyse de fiabilité. Focus sur les évènements rares et la grande dimension." Thesis, Université Clermont Auvergne‎ (2017-2020), 2018. http://www.theses.fr/2018CLFAC045/document.

Full text
Abstract:
Engineers increasingly use numerical models to reduce the physical experimentation needed to design new products. With the increase in computing performance, these models are more and more complex and time-consuming in order to better represent reality. In practice, optimization is very challenging for real mechanical problems since they exhibit uncertainties. Reliability is an interesting metric of the failure risks of designed products due to uncertainties. The estimation of this metric, the failure probability, requires a high number of evaluations of the time-consuming model and thus becomes intractable in practice. To deal with this problem, surrogate modeling is used here, and more specifically AK-based methods, to enable the approximation of the physical model with far fewer time-consuming evaluations. The first objective of this thesis is to discuss the mathematical formulations of design problems under uncertainties. This formulation has a considerable impact on the solution identified by optimization during the design of new products. A definition of the two concepts of reliability and robustness is also proposed. This work is presented in a publication in the international journal Structural and Multidisciplinary Optimization (Lelièvre, et al. 2016). The second objective of this thesis is to propose a new AK-based method to estimate failure probabilities associated with rare events. This new method, named AK-MCSi, presents three enhancements of AK-MCS: (i) sequential Monte Carlo simulations to reduce the time associated with the evaluation of the surrogate model, (ii) a new, stricter stopping criterion on learning evaluations to ensure the correct classification of the Monte Carlo population, and (iii) a multipoint enrichment permitting the parallelization of the evaluations of the time-consuming model. This work has been published in Structural Safety (Lelièvre, et al. 2018). The last objective of this thesis is to propose new AK-based methods to estimate the failure probability of a high-dimensional reliability problem, i.e. a problem defined by both a time-consuming model and a high number of input random variables. Two new methods, AK-HDMR1 and AK-PCA, are proposed to deal with this problem, based respectively on a functional decomposition and on a dimension-reduction technique. AK-HDMR1 was submitted to Reliability Engineering and Structural Safety on 1 October 2018.
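The quantity targeted by AK-type (adaptive Kriging) methods is p_f = P[g(X) <= 0] for a limit-state function g. A crude Monte Carlo estimator, sketched below with an invented Gaussian limit state, makes the rare-event difficulty concrete: at p_f near 1e-4, thousands of model runs are needed per observed failure, which is why a cheap surrogate is substituted for the expensive model:

    import random

    def mc_failure_probability(g, sample, n):
        """Crude Monte Carlo estimate of p_f = P[g(X) <= 0]; each call to g
        stands in for one run of the expensive numerical model."""
        return sum(1 for _ in range(n) if g(sample()) <= 0) / n

    # Invented limit state: resistance minus load, both Gaussian.
    g = lambda x: x[0] - x[1]
    sample = lambda: (random.gauss(7.0, 1.0), random.gauss(2.0, 1.0))
    # True p_f = P[N(5, sqrt(2)) <= 0], roughly 2e-4.
    print(mc_failure_probability(g, sample, 200_000))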
APA, Harvard, Vancouver, ISO, and other styles
24

Chambion, Bertrand. "Etude de la fiabilité de modules à base de LEDs blanches pour applications automobile." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0163/document.

Full text
Abstract:
With the rapid development of the Light Emitting Diode (LED) market, LED performance is now suitable for automotive high-beam/low-beam lighting applications. Owing to the need for Ultra High Brightness LEDs (UHB-LEDs), LEDs are packaged on high-thermal-conductivity materials to obtain multichip modules (4 chips in series), which deliver up to 1000 lumens at 1 A. Currently, several LED technologies are commercially offered for the same performance, and different packaging strategies have been implemented in terms of chip configuration, bonding, down-conversion phosphor layer, and mechanical protection to optimize performance. This study addresses a dedicated methodology for reliability analysis, applied to two LED chip packaging technologies: on the one hand, a Vertical Thin Film (VTF) technology; on the other hand, a Thin Film Flip Chip (TFFC). Our methodology is based on three main items: (i) packaging technology structure, materials analysis, and electro-optical and thermal multichip models for both technologies, to understand and extract the key parameters to monitor during ageing tests; (ii) robustness assessment tests to define operating margins, adjust accelerated life-testing conditions, and identify failure signatures; (iii) a reliability study through 6,000-hour High Temperature Operating Life (HTOL) accelerated tests, to predict the Mean Time To Failure (MTTF) of these new light source technologies with respect to the automotive mission profile. Based on these results, parametric variations are compared to failure analysis results in order to propose convincing failure mechanisms. The HTOL tests reveal that the two LED technologies have their own specific reliability behavior and failure modes: catastrophic failure and gradual failure. Predictive lifetime estimations (L70B50) of these multichip modules show a factor of 6 between the two technologies. Beyond these reliability results, the multichip architecture brings new issues for Solid State Lighting (SSL) sources in automotive applications, such as partial failure or unbalanced behavior after stress. These new issues are discussed through the behavioral modeling of a batch of 10 LED modules for both failure modes. The modeling results demonstrate that the predictive lifetime of an LED multichip architecture is directly related to the failure mode of the LED technology.
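The L70B50 figure quoted above is commonly projected by fitting an exponential decay to normalized lumen-maintenance data and extrapolating to the 70% flux level (in the spirit of IES TM-21). The sketch below uses invented readings and is not the thesis procedure:

    import math

    def project_l70(hours, flux):
        """Least-squares fit of ln(flux) = ln(B) - alpha * t, then solve
        B * exp(-alpha * t) = 0.7 for t, the L70 lumen-maintenance life."""
        n = len(hours)
        sx, sy = sum(hours), sum(math.log(f) for f in flux)
        sxx = sum(t * t for t in hours)
        sxy = sum(t * math.log(f) for t, f in zip(hours, flux))
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        alpha, ln_b = -slope, (sy - slope * sx) / n
        return (ln_b - math.log(0.7)) / alpha

    # Invented lumen-maintenance readings over a 6,000 h HTOL campaign.
    hours = [1000, 2000, 3000, 4000, 5000, 6000]
    flux = [0.995, 0.990, 0.985, 0.980, 0.975, 0.970]
    print(round(project_l70(hours, flux)))  # extrapolated L70, in hours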
APA, Harvard, Vancouver, ISO, and other styles
25

Mbarek, Safa. "Fiabilité et analyse physique des défaillances des composants électroniques sous contraintes électro-thermiques pour des applications en mécatronique." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMR142/document.

Full text
Abstract:
The improvement of power conversion systems makes SiC devices very attractive for their efficiency, compactness, and robustness. However, their behavior in response to short-circuit faults must be carefully studied to ensure system reliability. This research work deals with SiC MOSFET robustness and reliability issues under short-circuit constraints. It is based upon electrical and microstructural characterizations. Together, the characterizations performed before, during, and after the robustness tests, along with microstructural analysis, allow hypotheses to be formulated regarding the physical origin of failure in such components. In addition, capacitance measurement is introduced during aging tests as a health indicator and a key tool for tracing the physical origin of the defect.
APA, Harvard, Vancouver, ISO, and other styles
26

Jouha, Wadia. "Etude et modélisation des dégradations des composants de puissance grand gap soumis à des contraintes thermiques et électriques." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMR083/document.

Full text
Abstract:
This work investigates the robustness of three generations of power SiC MOSFETs (Silicon Carbide Metal Oxide Semiconductor Field Effect Transistors). Several approaches are followed: electrical characterization, device modeling, ageing tests, and physical simulation. An improved compact model, based on an accurate parameter extraction method and on electrical characterization results, is presented. The parameters extracted from the model (threshold voltage, saturation-region transconductance, ...) are used to accurately analyze the static behavior of two generations of SiC MOSFETs. The robustness of these devices is investigated through two tests: HTRB (High Temperature Reverse Bias) stress and ESD (Electrostatic Discharge) stress. Physical simulation is conducted to understand the impact of temperature and of physical parameters on the electrical characterizations of the devices.
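For context, a classical static extraction that such compact models rely on is the linear-extrapolation threshold voltage: locate the maximum transconductance gm = dId/dVgs, extrapolate the tangent of the Id-Vgs curve at that point down to Id = 0, and correct by half the drain voltage. The sketch below uses synthetic data and is not the improved method proposed in the thesis:

    def vth_max_gm(vgs, ids, vds=0.1):
        """Linear-extrapolation Vth from the transfer curve (linear region)."""
        gm = [(ids[i + 1] - ids[i - 1]) / (vgs[i + 1] - vgs[i - 1])
              for i in range(1, len(ids) - 1)]  # central-difference dId/dVgs
        k = gm.index(max(gm)) + 1               # sample with the peak gm
        return vgs[k] - ids[k] / gm[k - 1] - vds / 2

    # Synthetic transfer curve with an ideal threshold near 2 V.
    vgs = [i * 0.1 for i in range(61)]
    ids = [0.0 if v < 2.0 else 0.05 * (v - 2.0) for v in vgs]
    print(round(vth_max_gm(vgs, ids), 2))  # ~1.95 after the Vds/2 correction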
APA, Harvard, Vancouver, ISO, and other styles
27

Huang, He. "Développement de modèles prédictifs pour la robustesse électromagnétique des composants électroniques." Thesis, Toulouse, INSA, 2015. http://www.theses.fr/2015ISAT0036/document.

Full text
Abstract:
One important objective of electromagnetic compatibility (EMC) studies is to make products compliant with the EMC requirements of customers or standards. However, all EMC compliance verifications are applied before the delivery of the final products, which raises new questions about the EMC performance of electronic systems during their lifetime. Will a product still be EMC compliant in several years? Can a product keep the same EMC performance during its whole lifetime? If not, how long can EMC compliance be maintained? The study of long-term EMC levels, called "electromagnetic robustness", has appeared in recent years. Past work showed that degradation caused by aging could induce failures of electronic systems, including a harmful evolution of electromagnetic compatibility. In this study, the long-term evolution of the EMC levels of two groups of electronic components has been studied. The first type is the integrated circuit: the high-frequency currents and voltages arising during the switching activity of ICs are responsible for unintentional emissions or coupling, and ICs are also very often the victims of electromagnetic interference. The second group is the passive components: in an electronic system, ICs usually work together with passive components at the PCB level, and the functions of passive components, such as filtering and decoupling, also have an important influence on EMC levels. In order to analyze the long-term evolution of the EMC levels of electronic components, the work presented in this thesis proposes general methods for predicting how the electromagnetic compatibility levels of electronic components evolve over time.
APA, Harvard, Vancouver, ISO, and other styles
28

Yang, Hao-I., and 楊皓義. "Robustness of Nano-Scale SRAM Design: Reliability and Tolerance Techniques." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/03899148500620045179.

Full text
Abstract:
博士<br>國立交通大學<br>電子研究所<br>100<br>This thesis discusses the reliability and tolerance techniques for the robust nanoscale SRAM design. It provides comprehensive analyses on the impacts of Bias Temperature Instability (BTI) and gate-oxide breakdown on power-gated SRAMs, including the stability and Write-ability of cells, Read/Write access paths, replica timing control circuits, and the data-retention power-gating devices. We show that the degradation of power-gating switches induced by BTI or gate-oxide breakdown significantly affects the stability of SRAM arrays. The degradation of timing control circuits caused by BTI results in SRAM performance decreasing. Moreover, based on these analyses, the degradation tolerance techniques are also presented. We provide the dual gate-oxide thickness power-switch to improve the time-to-dielectric-breakdown (TBD) of the power-switch while maintaining the performance without side effect. We also present some techniques to mitigate SRAM degradation induced by BTI, including dual-VTH cells, and the banking data-retention power-gating technique to reduce the stress voltage during Standby mode. Furthermore, a low VMIN disturb-free 8T SRAM cell with cross-point Write structure and adaptive VVSS control is introduced. The Monte Carlo simulation results show that the proposed 8T cell improve Static Noise Margin about 120% comparing with the conventional 6T cell. A 512Kb test chip is implemented in UMC 55nm Standard Performance (SP) CMOS technology, and the chip area is 1100.3×1434.50 um2. The measurement results demonstrate operating frequency of 1.143GHz at 1.5V, 943MHz at 1.2V, and 209MHz at 0.6V.
APA, Harvard, Vancouver, ISO, and other styles
29

Hsieh, Mon-Lin, and 謝孟霖. "The Study of Network Reliability, Survivability and Robustness in Small World Model." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/z6e2pg.

Full text
Abstract:
碩士<br>國立臺北科技大學<br>電機工程系所<br>93<br>In 1998, a paper “Collective dynamics of ‘small-world’ networks” proposed by D.J. Watts and S. H. Strogatz on the Nature journal amazing described a model that made “real world network” successfully simulated and identified. The small-world model used two parameters, clustering coefficient and average shortest path, to fetch the characteristic phenomenon behind the network structure of real world. Based on the small-world model, this thesis studies the relationships between the network structure and its reliability, survivability and robustness. Experiments show that those network properties are related to the rewired probability of the small-world model. According to the results of experiments, this thesis proposes a fuzzy set approach for optimizing network design and limiting the spread rate of computer viruses.
APA, Harvard, Vancouver, ISO, and other styles
30

Pan, Yi-Fang, and 潘儀芳. "On the Robustness of Power Delivery Network - A Perspective of Lifetime Reliability Trojan." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/94vra3.

Full text
Abstract:
碩士<br>國立交通大學<br>資訊科學與工程研究所<br>107<br>Due to the recent escalation in the cost and complexity of the integrated circuits (ICs) design process, using third-party resources in modern ICs design has become a trend; thus, ICs design has been more prone to various kinds of attacks in the supply chain. Hardware Trojan horses (HTHs) can be implanted to facilitate the leakage of confidential information to adversaries or destroy the IC system under a specific condition. In addition, one of the main categories of HTHs is to tamper with IC reliability and accelerate the aging of the chip, which is called reliability Trojan. The aim of this research is to develop a post-layout staged reliability Trojan that controls the lifetime of a power delivery network (PDN) by manipulating electromigration-induced aging. In addition to attack, this Trojan could be utilized by designers to strengthen the robustness of PDN. In this work, we transformed our Trojan problem into a min-cut problem and further considered process variation (PV), an uncertainty as regards to HTHs effectiveness, to efficiently control system lifetime under PV. Experiments based on 28 nm designs demonstrated that our proposed methodology could control the system lifetime with only 4.3\% average error rate under PV.
APA, Harvard, Vancouver, ISO, and other styles
31

Elhami, Khorasani Negar. "System-level Structural Reliability of Bridges." Thesis, 2010. http://hdl.handle.net/1807/30141.

Full text
Abstract:
The purpose of this thesis is to demonstrate that two-girder or two-web structural systems can be employed to design efficient bridges with an adequate level of redundancy. The issue of redundancy in two-girder bridges is a constraint for the bridge designers in North America who want to take advantage of efficiency in this type of structural system. Therefore, behavior of two-girder or two-web structural systems after failure of one main load-carrying component is evaluated to validate their safety. A procedure is developed to perform system-level reliability analysis of bridges. This procedure is applied to two bridge concepts, a twin steel girder with composite deck slab and a concrete double-T girder with unbonded external tendons. The results show that twin steel girder bridges can be designed to fulfill the requirements of a redundant structure and the double-T girder with external unbonded tendons can be employed to develop a robust structural system.
APA, Harvard, Vancouver, ISO, and other styles
32

Samanta, Roopsha. "Program reliability through algorithmic design and analysis." 2013. http://hdl.handle.net/2152/23103.

Full text
Abstract:
Software systems are ubiquitous in today's world and yet remain vulnerable to the fallibility of human programmers as well as the unpredictability of their operating environments. The overarching goal of this dissertation is to develop algorithms to enable automated and efficient design and analysis of reliable programs. In the first and second parts of this dissertation, we focus on the development of programs that are free from programming errors. The intent is not to eliminate the human programmer, but instead to complement his or her expertise with sound and efficient computational techniques, when possible. To this end, we make contributions in two specific domains.

Program debugging, the process of fault localization and error elimination from a program found to be incorrect, typically relies on expert human intuition and experience, and is often a lengthy, expensive part of the program development cycle. In the first part of the dissertation, we target automated debugging of sequential programs. A broad and informal statement of the (automated) program debugging problem is to suitably modify an erroneous program, say P, to obtain a correct program, say P'. This problem is undecidable in general; it is hard to formalize; moreover, it is particularly challenging to assimilate and mechanize the customized, expert programmer intuition involved in the choices made in manual program debugging. Our first contribution in this domain is a methodical formalization of the program debugging problem that enables automation while incorporating expert programmer intuition and intent. Our second contribution is a solution framework that can debug infinite-state, imperative, sequential programs written in higher-level programming languages such as C. Boolean programs, which are smaller, finite-state abstractions of infinite-state or large finite-state programs, have been found to be tractable for program verification. In this dissertation, we utilize Boolean programs for program debugging. Our solution framework involves two main steps: (a) automated debugging of a Boolean program corresponding to an erroneous program P, and (b) translation of the corrected Boolean program into a correct program P'.

Shared-memory concurrent programs are notoriously difficult to write, verify and debug; this makes them excellent targets for automated program completion, in particular, for synthesis of synchronization code. Extant work in this domain has focused either on propositional temporal logic specifications with simplistic models of concurrent programs, or on more refined program models with specifications limited to just safety properties. Moreover, there has been limited effort in developing adaptable and fully automatic synthesis frameworks capable of generating synchronization at different levels of abstraction and granularity. In the second part of this dissertation, we present a framework for synthesis of synchronization for shared-memory concurrent programs with respect to temporal logic specifications. In particular, given a concurrent program composed of synchronization-free processes and a temporal logic specification describing their expected concurrent behavior, we generate synchronized processes such that the resulting concurrent program satisfies the specification. We provide the ability to synthesize readily implementable synchronization code based on lower-level primitives such as locks and condition variables. We enable synchronization synthesis of finite-state concurrent programs composed of processes that may have local and shared variables, may be straight-line or branching programs, may be ongoing or terminating, and may have program-initialized or user-initialized variables. We also facilitate the expression of safety and liveness properties over both control and data variables by proposing an extension of propositional computation tree logic.

Most program analysis, verification, debugging and synthesis methodologies target traditional correctness properties such as safety and liveness. These techniques typically do not provide a quantitative measure of the sensitivity of a computational system's behavior to unpredictability in the operating environment. We propose that the core property of interest in reasoning in the presence of such uncertainty is robustness: small perturbations to the operating environment do not change the system's observable behavior substantially. In well-established areas such as control theory, robustness has always been a fundamental concern; however, the techniques and results therein are not directly applicable to computational systems with large amounts of discretized, discontinuous behavior. Hence, robustness analysis of software programs used in heterogeneous settings necessitates the development of new theoretical frameworks and algorithms. In the third part of this dissertation, we target robustness analysis of two important classes of discrete systems: string transducers and networked systems of Mealy machines. For each system, we formally define robustness of the system with respect to a specific source of uncertainty. In particular, we analyze the behavior of transducers in the presence of input perturbations, and the behavior of networked systems in the presence of channel perturbations. Our overall approach is automata-theoretic, and necessitates the use of specialized distance-tracking automata for tracking various distance metrics between two strings. We present constructions for such automata and use them to develop decision procedures based on reducing the problem of robustness verification of our systems to the problem of checking the emptiness of certain automata. Thus, the system under consideration is robust if and only if the languages of particular automata are empty.
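The final step described in this abstract, reducing robustness verification to automaton emptiness, admits a minimal illustration. The Python sketch below is only a generic emptiness check (all names and the toy automaton are hypothetical; the dissertation's distance-tracking constructions are considerably richer): the language of a finite automaton is empty iff no accepting state is reachable from an initial state.

    from collections import deque

    def is_empty(initial_states, accepting_states, transitions):
        # BFS over the transition graph: the language is empty iff
        # no accepting state is reachable from an initial state.
        seen = set(initial_states)
        queue = deque(initial_states)
        while queue:
            state = queue.popleft()
            if state in accepting_states:
                return False  # some string is accepted
            for _symbol, nxt in transitions.get(state, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return True

    # Toy automaton: 0 --a--> 1 --b--> 2, with state 2 accepting.
    print(is_empty({0}, {2}, {0: [("a", 1)], 1: [("b", 2)]}))  # False: "ab" is accepted

Since emptiness checking is plain graph reachability, the reduction immediately yields a decision procedure for robustness verification.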
APA, Harvard, Vancouver, ISO, and other styles
33

Ribeiro, Filipe Luís Alves. "Robustness Analysis of Structures in Post-Earthquake Scenarios Considering Multiple Hazards." Doctoral thesis, 2017. http://hdl.handle.net/10362/20212.

Full text
Abstract:
Recent earthquakes have highlighted that the consideration of isolated seismic events, although necessary, may not be sufficient to prevent building collapse. In fact, the occurrence of a large number of aftershocks with significant intensity, as well as the occurrence of tsunamis, fires, and explosions, poses a safety threat that has not been properly addressed in the design and assessment of building structures over the last decade. Although research has been developed to evaluate the impact of multiple and/or cascading hazards on structural safety and economic losses, there is no established framework to perform such analysis. In addition, the available numerical tools lack a unified implementation in widely used software that would allow for large numerical simulations involving these hazard events.

This work proposes a probabilistic framework for quantifying the robustness of structures considering the occurrence of a major earthquake (mainshock) and the subsequent cascading hazard events, namely fire and aftershocks. These events can significantly increase the probability of collapse of buildings, especially for structures that are damaged during the mainshock. In order to assess structural performance under post-earthquake hazards, it is of paramount importance to accurately simulate the damage attained during the earthquake, which is strongly correlated with the residual structural capacity to withstand cascading events. In this context, ground motion characteristics, namely ground motion duration, have been identified as parameters that may induce significant bias in the damage patterns associated with the mainshock. Thus, the influence of ground motion duration on structural damage is analyzed in this work.

Steel moment-resisting frame buildings designed according to pre-Northridge codes are analyzed using the proposed framework. These buildings are representative of decades of design practice in the US and Europe, and the conclusions of this work can be significant for the assessment/retrofit of thousands of buildings. Fragility curves and reliability-based robustness measures are obtained using the proposed framework. The fragility curve parameters obtained herein can be used in the development of future probabilistic studies considering post-earthquake hazards. The results highlight the importance of post-earthquake hazard events in structural safety assessment. Further work is needed to better characterize these hazards so as to include them in code-based design and assessment methodologies.
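Fragility curves of the kind mentioned in this abstract are commonly parameterized as lognormal cumulative distribution functions, P(collapse | IM = im) = Phi((ln im - ln theta) / beta) for a median capacity theta and dispersion beta. The sketch below is a generic illustration under that common assumption, with invented parameter values rather than the dissertation's fitted ones:

    from math import erf, log, sqrt

    def fragility(im, theta, beta):
        # Lognormal fragility: probability of collapse at intensity
        # measure `im`, given median `theta` and dispersion `beta`.
        z = (log(im) - log(theta)) / beta
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF

    # Illustrative values only: median collapse capacity 1.2 g, dispersion 0.4.
    for im in (0.3, 0.6, 1.2, 2.4):
        print(f"Sa = {im:.1f} g -> P(collapse) = {fragility(im, 1.2, 0.4):.3f}")

At im = theta the curve passes through 0.5 by construction, which is one reason this two-parameter form is convenient for probabilistic post-earthquake studies.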
APA, Harvard, Vancouver, ISO, and other styles
34

Yadav, Avinash. "Multi-Threshold Low Power-Delay Product Memory And Datapath Components Utilizing Advanced FinFET Technology Emphasizing The Reliability And Robustness." Thesis, 2020. http://hdl.handle.net/1805/24772.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)

In this thesis, we investigated the 7 nm FinFET technology for its power-delay product performance. In our study, we explored the ASAP7 library from Arizona State University, developed in collaboration with ARM Holdings. The FinFET technology was chosen since its near-ideal subthreshold slope of 60 mV/decade enables cells to function at a 0.7 V supply voltage at the nominal corner. Emphasis was placed on characterizing the non-ideal effects, delay variation, and power of the FinFET device. An exhaustive analysis of the INVx1 delay variation under different operating conditions was also included to assess robustness. The 7 nm FinFET device was then employed in 6T SRAM cells and a 16-function ALU. The SRAM cells were subjected to an advanced multi-corner stability evaluation. The system-level architecture of the ALU demonstrated an ultra-low-power system operating at a 1 GHz clock frequency.
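The figure of merit studied here, the power-delay product, is simply the product of average power and propagation delay, i.e. the energy per switching event. A minimal sketch of the arithmetic, with hypothetical operating points rather than the ASAP7 characterization data:

    def pdp_fj(power_uw, delay_ps):
        # Power-delay product: 1 uW * 1 ps = 1e-18 J = 1e-3 fJ.
        return power_uw * delay_ps * 1e-3

    # Hypothetical operating points for an INVx1-like cell, illustration only.
    for vdd, p_uw, d_ps in [(0.7, 0.9, 12.0), (0.5, 0.4, 25.0)]:
        print(f"VDD = {vdd} V: PDP = {pdp_fj(p_uw, d_ps):.4f} fJ")

Lowering the supply voltage typically reduces power faster than it increases delay in the near-threshold region, which is why PDP is the natural metric for comparing such operating points.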
APA, Harvard, Vancouver, ISO, and other styles
35

Milczarek, Anna Maria. "Robustness-based assessment of railway masonry arch bridges." Master's thesis, 2017. http://hdl.handle.net/1822/49751.

Full text
Abstract:
Master's dissertation in Structural Analysis of Monuments and Historical Constructions

Railway masonry arch bridges (MAB) were first created to allow human beings to travel across bodies of water. The durable masonry allowed the bridges to withstand heavy loading, exposure to climate, and the passing of time. To this day, many older railway MAB remain standing, having never undergone renovation, while supporting greater loads than originally intended; a testament to their robustness. The concept of robustness gained interest following the collapse of the Ronan Point building in the second half of the 20th century. The collapse of the multi-storey building, which resulted from a single gas explosion, led to the development of a concept of robustness as proportionality between inflicted damage and consequence. A few decades later, at the beginning of the 21st century, the collapse of the Twin Towers increased the interest in developing efficient methods to measure robustness. While various methods have been proposed to assess robustness, to this day there exists no consensus on which of those methods should be enforced. This dissertation assesses and compares the robustness of the bridge of Vila Meã through three approaches: i) probability-based redundancy; ii) the Frangopol and Curley approach; iii) the Cavaco method. To compute the robustness of the bridge, the reliability indexes of the damaged and undamaged structure are defined. To determine these reliability indexes, the bridge of Vila Meã is modeled using the software LimitState:RING. The software computes the resistance of the bridge by applying a modified version of the kinematic theorem initially formulated by Heyman. The reliability index of the undamaged structure is computed through probabilistic analysis. Next, 5 typical damage scenarios are introduced into the bridge individually, and the reliability index for each of the 5 damaged structures is determined. With the reliability indexes of the damaged and undamaged structure, the robustness of the bridge is assessed through the application of the 3 methods mentioned above. Results from each damage scenario, as well as from each method, are compared. The bridge of Vila Meã proved to be robust, while the methods proved difficult to compare with one another.
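Of the three methods compared, the Frangopol and Curley (1987) approach admits a compact numerical illustration: the redundancy index relates the reliability indexes of the intact and damaged structure as beta_R = beta_intact / (beta_intact - beta_damaged). The sketch below uses invented reliability indexes, not the Vila Meã results:

    def frangopol_curley(beta_intact, beta_damaged):
        # Redundancy index beta_R = beta_i / (beta_i - beta_d):
        # large when damage barely degrades reliability, approaching 1
        # when the damaged structure has lost most of its reliability.
        return beta_intact / (beta_intact - beta_damaged)

    beta_intact = 4.2  # hypothetical intact-structure reliability index
    for i, beta_d in enumerate((4.0, 3.5, 3.0, 2.0, 0.5), start=1):
        print(f"damage scenario {i}: beta_R = {frangopol_curley(beta_intact, beta_d):.2f}")

Comparing such indexes across the 5 damage scenarios is exactly the kind of per-scenario comparison the abstract describes.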
APA, Harvard, Vancouver, ISO, and other styles
36

Neiva, Diana Sá. "Análise estrutural de uma ponte ferroviária em alvenaria considerando novos critérios de robustez" [Structural analysis of a masonry railway bridge considering new robustness criteria]. Master's thesis, 2016. http://hdl.handle.net/1822/47414.

Full text
Abstract:
Integrated master's dissertation in Civil Engineering (specialization in Structures and Geotechnics)

There has been a growing interest in the concept of structural robustness in the past few decades, particularly in understanding how robust a structure affected by ageing can be. Although the concept of robustness has been studied in recent years mainly in the context of structures subjected to extreme events, it can be quite useful in the context of more probable events, such as those resulting from structural ageing. A detailed analysis of the concept of structural robustness is the main object of study of this thesis. The main definitions and measures of robustness found in the literature are reviewed and presented in order to understand why it has been difficult to reach a clear concept of robustness. The case study involves the analysis of the Coval Railway Viaduct, proceeding in a first phase with the construction of the deterministic model and in a second phase with the probabilistic assessment of the viaduct. The case study shows that behind the structural robustness calculation there is a large number of operations to be performed, namely sensitivity analysis (parametric study), the development of a numerical model using the structural analysis software RING (deterministic analysis), the reliability analysis involving various statistical methods (probabilistic analysis) and, finally, the calculation of the robustness index, which in this thesis is obtained using the methodologies developed by Cavaco (2013) and by Frangopol and Curley (1987).
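The reliability-analysis step mentioned in this abstract can be sketched with a crude Monte Carlo estimate of the failure probability and the corresponding reliability index beta = -Phi^{-1}(pf). The limit-state function below is an arbitrary stand-in (normal resistance minus normal load), not the Coval viaduct model:

    import random
    from statistics import NormalDist

    def mc_reliability(n=200_000, seed=1):
        # Crude Monte Carlo for the toy limit state g = R - S,
        # with resistance R ~ N(10, 1.5) and load effect S ~ N(5, 1.0).
        rng = random.Random(seed)
        failures = sum(1 for _ in range(n)
                       if rng.gauss(10.0, 1.5) - rng.gauss(5.0, 1.0) <= 0.0)
        pf = failures / n
        beta = -NormalDist().inv_cdf(pf) if 0.0 < pf < 1.0 else float("nan")
        return pf, beta

    pf, beta = mc_reliability()
    print(f"pf ~ {pf:.2e}, beta ~ {beta:.2f}")  # exact beta = 5 / sqrt(3.25) ~ 2.77

In practice, more efficient methods (FORM/SORM, importance sampling) replace plain Monte Carlo when failure probabilities are small, but the definition of beta is the same.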
APA, Harvard, Vancouver, ISO, and other styles