
Dissertations / Theses on the topic 'Metric thread'


Consult the top 25 dissertations / theses for your research on the topic 'Metric thread.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Šubr, Jiří. "Porovnání RT vlastností 8-bitových a 32-bitových implementací jádra uC/OS-II." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236389.

Full text
Abstract:
This thesis concerns benchmarking of µC/OS-II systems on different microcontroller architectures. It describes the µC/OS-II core and the series of benchmark tests that can be used. Selected tests are implemented, and the measured properties of microcontrollers with different architectures are compared.
2

Hlavinka, Miloslav. "Rekonstrukce protitlakové parní turbiny." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-231818.

Full text
Abstract:
The topic of this master's thesis is the overhaul of a Mitsubishi steam turbine and the calculation of the tightening torques for the parting-plane bolts. The thesis is divided into several parts. The introductory part presents the scope of service work performed on steam turbines. It is followed by the overhaul of the Mitsubishi steam turbine itself, organized by the individual turbine components. A list of necessary repairs is then established, together with a list of repairs recommended for the next outage. The next part summarizes the calculation of the sealing of the parting planes, with and without relief. The most commonly used thread types for steam turbine fasteners are then discussed. The main part of the thesis is the calculation of the tightening torque itself. The output of this work is a program for calculating the tightening torque in Excel.
3

Lorenc, Ján. "Porovnání vlastností a výkonnosti jader uC/OS-II a uC/OS-III." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255364.

Full text
Abstract:
This master's thesis focuses on benchmarking the real-time operating systems uC/OS-II and uC/OS-III. It describes the basic features of these systems and the metrics used for benchmarking real-time operating systems. Selected test methods are implemented and then used to compare the performance of uC/OS-II and uC/OS-III.
4

Zhong, Huang. "3D metric reconstruction from uncalibrated circular motion image sequences." E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B37043791.

Full text
5

Zhong, Huang, and 鐘煌. "3D metric reconstruction from uncalibrated circular motion image sequences." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B37043791.

Full text
6

Farhady, Ghalaty Nahid. "Fault Attacks on Cryptosystems: Novel Threat Models, Countermeasures and Evaluation Metrics." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/72280.

Full text
Abstract:
Recent research has demonstrated that there is no sharp distinction between passive attacks based on side-channel leakage and active attacks based on fault injection. Fault behavior can be processed as side-channel information, offering all the benefits of Differential Power Analysis including noise averaging and hypothesis testing by correlation. In fault attacks, the adversary induces faults into a device while it is executing a known program and observes the reaction. The abnormal reactions of the device are later analyzed to obtain the secrets of the program under execution. Fault attacks are a powerful threat. They are used to break cryptosystems, Pay TVs, smart cards and other embedded applications. In fault attack resistant design, the fault is assumed to be induced by a smart, malicious, determined attacker who has high knowledge of the design under attack. Moreover, the purpose of fault attack resistant design is for the system to work correctly under intentional fault injection without leaking any secret data information. Towards building a fault attack resistant design, the problem can be categorized into three main subjects: (1) investigating novel and more powerful threat models and attack procedures; (2) proposing countermeasures to build secure systems against fault attacks; and (3) building evaluation metrics to measure the security of designs. In this regard, this thesis has covered the first subject by proposing Differential Fault Intensity Analysis (DFIA) based on the biased fault model. The biased fault model in this attack refers to the gradual behavior of the fault as the intensity of fault injection increases. The DFIA attack has been successfully launched on the AES, PRESENT and LED block ciphers. Our group has also recently proposed this attack on the AES algorithm running on a LEON3 processor. In our work, we also propose a countermeasure against one of the most powerful types of fault attacks, namely Fault Sensitivity Analysis (FSA). This countermeasure is based on balancing the delay of the circuit to destroy the correlation between secret data and the timing delay of the circuit. Additionally, we propose a framework for assessing the vulnerability of designs against fault attacks. An example of this framework is the Timing Violation Vulnerability Factor (TVVF), a metric for measuring the vulnerability of hardware against timing violation attacks. We compute TVVF for two implementations of the AES algorithm and measure the vulnerability of these designs against two types of fault attacks. For future work, we plan to propose an attack that combines power measurements and fault injections. This attack is more powerful in the sense that it has fewer fault injection restrictions and requires less information from the block cipher's data. We also plan to design more efficient and generic evaluation metrics than TVVF. As shown in this thesis, fault attacks are a more serious threat than the cryptography community has assumed. This thesis provides a deep understanding of fault behavior in the circuit and therefore better knowledge of powerful fault attacks. The techniques developed in this dissertation address different aspects of fault attacks on hardware architectures and microprocessors. Considering the fault models, attacks, and evaluation metrics proposed in this thesis, there is hope of developing robust and fault attack resistant microprocessors.
We conclude this thesis by outlining future areas and opportunities for research.
Ph. D.
7

Yee, Darrick. "A Three-Study Examination of Test-Based Accountability Metrics." Thesis, Harvard University, 2017. http://nrs.harvard.edu/urn-3:HUL.InstRepos:33052855.

Full text
Abstract:
Recent state and federal policy initiatives have led to the development of a multitude of statistics intended to measure school performance. Of these, statistics constructed from student test scores number among both the most widely-used and most controversial. In many cases, researchers and policymakers alike are not fully aware of the ways in which these statistics may lead to unjustified inferences regarding school effectiveness. A substantial amount of recent research has attempted to remedy this, although much remains unknown. This thesis seeks to contribute to these research efforts via three papers, each examining how a commonly-employed accountability statistic may be influenced by factors unrelated to student proficiency or school effectiveness. The first paper demonstrates how the discrete nature of test scores leads to biased estimates of changes in the percentage of “proficient” students between any two given years and examines estimators that provide better recovery of this parameter. The second paper makes use of a state-wide natural experiment to show that a change in testing program, from paper-and-pencil to computer-adaptive, may cause apparent changes in achievement gaps even when relative student proficiencies have remained constant. The third paper examines “growth-based” accountability metrics based on vertically-scaled assessments, showing that certain types of metrics based on gain scores can be modeled via nonlinear transformations of the underlying vertical scale. It then makes use of this result to investigate the potential magnitude of impacts of such transformations on growth-based school accountability ratings.
8

Yu, Ying. "Visual Appearances of the Metric Shapes of Three-Dimensional Objects: Variation and Constancy." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1592254922173432.

Full text
9

Nia, Ramadianti Putri Mganga, and Medard Charles. "Enhancing Information Security in Cloud Computing Services using SLA based metrics." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1999.

Full text
Abstract:
Context: Cloud computing is a prospering technology that most organizations are considering for adoption as a cost-effective strategy for managing IT. However, organizations also still consider the technology to be associated with many business risks that are not yet resolved. Such issues include security, privacy as well as legal and regulatory risks. As an initiative to address such risks, organizations can develop and implement an SLA to establish common expectations and goals between the cloud provider and customer. Organizations can then use the SLA to measure the achievement of the outsourced service. However, many SLAs tend to focus on cloud computing performance whilst neglecting information security issues. Objective: We identify threats and security attributes applicable in cloud computing. We also select a framework suitable for identifying information security metrics. Moreover, we identify SLA-based information security metrics in the cloud in line with the COBIT framework. Methods: We conducted a systematic literature review (SLR) to identify studies focusing on information security threats in cloud computing. We also used the SLR to select frameworks available for identification of security metrics. We used the Engineering Village and Scopus online citation databases as primary sources of data for the SLR. Studies were selected based on the inclusion/exclusion criteria we defined. A suitable framework was selected based on defined framework selection criteria. Based on the selected framework and a conceptual review of the COBIT framework, we identified SLA-based information security metrics in the cloud. Results: Based on the SLR we identified security threats and attributes in the cloud. The Goal Question Metric (GQM) framework was selected as a framework suitable for identification of security metrics. Following the GQM approach and the COBIT framework, we identified ten areas that are essential and related to information security in cloud computing. In addition, covering these ten essential areas, we identified 41 SLA-based information security metrics that are relevant for measuring and monitoring the security performance of cloud computing services. Conclusions: Cloud computing faces threats similar to those of traditional computing. Depending on the service and deployment model adopted, addressing security risks in the cloud may become a more challenging and complex undertaking. This situation therefore underscores the need for cloud providers to execute their key responsibility of creating not only a cost-effective but also a secure cloud computing service. In this study, we assist both cloud providers and customers with the security issues to be considered for inclusion in their SLA. We have identified 41 SLA-based information security metrics to help both cloud providers and customers establish common security performance expectations and goals. We anticipate that adoption of these metrics can help cloud providers enhance security in the cloud environment. The metrics will also assist cloud customers in evaluating the security performance of the cloud for improvements.
10

Zhu, Mingying. "The Human Impacts of Air Pollution: Three Studies Using Internet Metrics." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39404.

Full text
Abstract:
Chapter 1: We provide the first evidence of a link from daily air pollution exposure to sleep loss in a panel of Chinese cities. We develop a social media-based, city-level metric for sleeplessness, and bolster causal claims by instrumenting for pollution with plausibly exogenous variations in wind patterns. Estimates of effect sizes are substantial and robust. In our preferred specification, a one standard deviation increase in AQI causes an 11.6% increase in sleeplessness. The results hold qualitatively under OLS estimation but are attenuated. The analysis provides a previously unaccounted-for benefit of more stringent air quality regulation. It also offers a candidate mechanism in support of recent research that links daily air quality to diminished workplace productivity, cognitive performance, school absence, traffic accidents, and other detrimental outcomes. Chapter 2: We provide linear and non-parametric estimates of the causal impact of short-term exposure to polluted air on the prevalence of cough in a panel of a hundred Chinese cities. In our central estimate, which exploits plausibly exogenous variations in the number of agricultural fires burning in the vicinity as an instrument, we find that a one standard deviation increase in airborne pollution causes a roughly 5% increase in the prevalence of cough in the affected city. Amongst pollutants the effect can be tied specifically to particulate matter (PM2.5). The results prove resilient in a series of robustness tests and falsification exercises. Chapter 3: We provide the first study of the relationship between air pollution and students' migration intentions for higher education. Young people's interest in local study is proxied by their Baidu search index for local universities. An IV method is used to identify the causal link by instrumenting for particulate matter with plausibly exogenous variations in temperature inversion strength. The estimates of effect sizes are substantial and robust. When air quality in Beijing moves from the good-day level to the moderately polluted level, searches for local education decrease by 3.8% under OLS and 11.8% under IV. The results signal that people lose interest in local universities because of elevated air pollution. There could be future out-migration to cleaner cities for higher education.
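Since all three chapters summarized above rely on instrumental-variables (IV) estimation alongside OLS, the following minimal Python sketch shows the textbook two-stage least squares point estimate; the variable names in the usage comment (AQI, wind instruments, sleeplessness index) are illustrative placeholders and not the author's actual data or code.

import numpy as np

def two_stage_least_squares(y, X, Z):
    """Textbook 2SLS point estimate.

    X holds the regressors (a constant, the endogenous pollution measure and
    any exogenous controls); Z holds the instruments plus the same constant
    and controls. Standard errors are omitted for brevity.
    """
    # First stage: fitted values of X from a regression on the instruments.
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # Second stage: regress the outcome on the fitted values.
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]
    return beta

# Hypothetical usage: y = city-day sleeplessness index, X = [1, AQI, controls],
# Z = [1, wind-pattern instruments, controls]; all names are stand-ins only.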
11

Xu, Dongping. "Performance Study and Dynamic Optimization Design for Thread Pool Systems." Washington, D.C. : Oak Ridge, Tenn. : United States. Dept. of Energy. Office of Science ; distributed by the Office of Scientific and Technical Information, U.S. Dept. of Energy, 2004. http://www.osti.gov/servlets/purl/835380-ZOcXfL/webviewable/.

Full text
12

Champion, Daniel James. "Mobius Structures, Einstein Metrics, and Discrete Conformal Variations on Piecewise Flat Two and Three Dimensional Manifolds." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/145313.

Full text
Abstract:
Spherical, Euclidean, and hyperbolic simplices can be characterized by the dihedral angles on their codimension-two faces. These characterizations analyze the Gram matrix, a matrix with entries given by cosines of dihedral angles. Hyperideal hyperbolic simplices are non-compact generalizations of hyperbolic simplices wherein the vertices lie outside hyperbolic space. We extend recent characterization results to include fully general hyperideal simplices. Our analysis utilizes the Gram matrix; however, we use inversive distances instead of dihedral angles to accommodate fully general hyperideal simplices. For two-dimensional triangulations, an angle structure is an assignment of three face angles to each triangle. An angle structure permits a globally consistent scaling provided the faces can be simultaneously scaled so that any two contiguous faces assign the same length to their common edge. We show that a class of symmetric Euclidean angle structures permits globally consistent scalings. We develop a notion of virtual scaling to accommodate spherical and hyperbolic triangles of differing curvatures and show that a class of symmetric spherical and hyperbolic angle structures permits globally consistent virtual scalings. The double tetrahedron is a triangulation of the three-sphere obtained by gluing two congruent tetrahedra along their boundaries. The pentachoron is a triangulation of the three-sphere obtained from the boundary of the 4-simplex. As piecewise flat manifolds, the geometries of the double tetrahedron and pentachoron are determined by edge lengths, which give rise to a notion of a metric. We study notions of Einstein metrics on the double tetrahedron and pentachoron. Our analysis utilizes Regge's Einstein-Hilbert functional, a piecewise flat analogue of the Einstein-Hilbert (or total scalar curvature) functional on Riemannian manifolds. A notion of conformal structure on a two-dimensional piecewise flat manifold is given by a set of edge constants wherein edge lengths are calculated from the edge constants and vertex-based parameters. A conformal variation is a smooth one-parameter family of the vertex parameters. The analysis of conformal variations often involves the study of degenerating triangles, where a face angle approaches zero. We show that, for a conformal variation that remains weighted Delaunay, if the conformal parameters are bounded then no triangle degenerations can occur.
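As a pointer for readers, the Gram matrix mentioned in this abstract is usually written as follows (one common convention, not quoted from the thesis itself): for a simplex with dihedral angle \theta_{ij} along the codimension-two face shared by facets i and j,

G_{ii} = 1, \qquad G_{ij} = -\cos\theta_{ij} \quad (i \neq j),

and the spherical, Euclidean, or hyperbolic type of the simplex is read off from the definiteness and signature of G (for example, G is positive definite exactly in the spherical case).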
13

Zhou, Luyuan. "Security Risk Analysis based on Data Criticality." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-93055.

Full text
Abstract:
Nowadays, security risk assessment has become an integral part of network security as everyday life has become interconnected with and dependent on computer networks. There are various types of data in the network, often with different criticality in terms of availability, confidentiality or integrity of information. Critical data poses a higher risk when it is exploited, so data criticality has an impact on network security risks. The challenge of diminishing security risks in a specific network is how to conduct network security risk analysis based on data criticality. An interesting aspect of the challenge is how to integrate the security metric and the threat modeling, and how to consider and combine the various elements that affect network security during security risk analysis. To the best of our knowledge, there exist no security risk analysis techniques based on threat modeling that consider the criticality of data. By extending the security risk analysis with data criticality, we consider its impact on the network in security risk assessment. To acquire the corresponding security risk value, a method for integrating data criticality into graphical attack models via relevant metrics is needed. In this thesis, an approach for calculating the security risk value considering data criticality is proposed. Our solution integrates the impact of data criticality in the network by extending the attack graph with data criticality. Vulnerabilities in the network pose potential threats to it. First, the combination of these vulnerabilities and data criticality is identified and precisely described. Thereafter the interaction between the vulnerabilities through the attack graph is taken into account and the final security metric is calculated and analyzed. The new security metric can be used by network security analysts to rank the security levels of objects in the network. By doing this, they can find objects that need additional attention in their daily network protection work. The security metric could also be used to help them prioritize vulnerabilities that need to be fixed when the network is under attack. In general, network security analysts can find effective ways to resolve exploits in the network based on the value of the security metric.
14

Hoechstetter, Sebastian. "Enhanced methods for analysing landscape structure : landscape metrics for characterising three-dimensional patterns and ecological gradients /." Berlin : Rhombos-Verl, 2009. http://d-nb.info/99728238X/04.

Full text
15

Cole, Mary Elizabeth. "Optimizing Bone Loss Across the Lifespan: The Three-Dimensional Structure of Porosity in the Human Femoral Neck and Rib As a Metric of Bone Fragility." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1559033566505566.

Full text
16

Eriksson, Gustav, and Johan Isendahl. "Conceptual decision support tool for RMS-investments : A three-pronged approach to investments with focus on performance metrics for reconfigurability." Thesis, Tekniska Högskolan, Jönköping University, JTH, Produktionsutveckling, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-49773.

Full text
Abstract:
Today's society is characterized by a high degree of change, in which manufacturing systems are affected by both internal and external factors. To adapt cost-efficiently to current manufacturing requirements in the form of short lead times, more variants, and low, fluctuating volumes, new approaches are needed. As the global market and its uncertainties for products and their lifecycles change, a concept called 'reconfigurable manufacturing system' has been developed. The idea is to design a manufacturing system for rapid structural change in both hardware and software so that it is responsive in capacity and functionality. A company's development towards the concept is often based on a strategy of incremental investments. In this situation, the challenges are to prioritize the right project and to maximize the performance as well as the financial efficiency of a multi-approach problem. The report is based on three issues: how to standardize relevant performance-based metrics to measure current conditions, how new performance-based metrics can be developed in combination with reconfigurability characteristics, and how decision models can be used to optimize step-based investments. The study is structured as an explorative study with qualitative methods such as semi-structured interviews and document studies to gain in-depth knowledge. Related literature addresses concepts in search areas such as reconfigurable manufacturing systems, key performance indicators, investment decisions, and manufacturing readiness levels. The findings are extracted from interviews and document studies that establish a focal company setting within the automotive industry, which acts as the foundation for further analysis and decisions throughout the thesis. The analysis results in sixteen performance measurements, where new measures have been created for product flexibility, production-volume flexibility, material handling flexibility, reconfiguration quality and diagnosability using reconfigurability characteristics. A conceptual decision support model is introduced with an underlying seven-step investment process, analyzing lifecycle cost, risk-triggered events in relation to cost, and performance measurements. The discussion chapter describes how different approaches used during the project were revised by internal and external factors. Improvement possibilities regarding method choice and the aspects of credibility, transferability, dependability, and confirmability are discussed. Furthermore, the authors reflect on the analysis process and how the result has been affected by circumstances and choices. The study concludes that a three-pronged approach is needed to validate the investment decision in terms of system performance changes, cost, and uncertainty. The report also helps in understanding which performance-based metrics are relevant for evaluating manufacturing systems based on operational goals and manufacturing requirements.
17

Starigazda, Michal. "Optimalizace testování pomocí algoritmů prohledávání prostoru." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234928.

Full text
Abstract:
Testing of multi-threaded programs is demanding work due to the many possible thread interleavings one should examine. The noise injection technique helps to increase the number of tested thread interleavings by injecting noise at suitable program locations. This work optimizes meta-heuristic search techniques in the testing of concurrent programs by utilizing deterministic heuristics in the application of genetic algorithms over the space of legal program locations suitable for noise injection. In contrast to most currently used heuristics, several novel deterministic noise injection heuristics with no dependency on a random number generator are proposed. Eliminating the randomness should make the search process more informed and provide better, more optimal solutions thanks to the increased stability of the results produced by the novel heuristics. Finally, a benchmark of programs used for the evaluation of the novel noise injection heuristics is presented.
18

Saman, Nariman Goran. "A Framework for Secure Structural Adaptation." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78658.

Full text
Abstract:
A (self-) adaptive system is a system that can dynamically adapt its behavior or structure during execution in order to "adapt" to changes in its environment or in the system itself. From a security standpoint, there has been some research pertaining to (self-) adaptive systems in general, but not enough care has been shown towards the adaptation itself. The security of systems can be reasoned about using threat models to discover security issues in the system. Essentially, that entails abstracting away details not relevant to the security of the system in order to focus on the important aspects related to security. Threat models often enable us to reason about the security of a system quantitatively using security metrics. The structural adaptation process of a (self-) adaptive system occurs based on a reconfiguration plan, a set of steps to follow from the initial state (configuration) to the final state. Usually, the reconfiguration plan consists of multiple strategies for the structural adaptation process, and each strategy consists of several steps, with each step representing a specific configuration of the (self-) adaptive system. Different reconfiguration strategies have different security levels, as each strategy consists of a different sequence of configurations with different security levels. To the best of our knowledge, there exist no approaches which aim to guide the reconfiguration process in order to select the most secure available reconfiguration strategy, and the security issues associated with the structural reconfiguration process itself have not been studied explicitly. In this work, based on an in-depth literature survey, we propose several metrics to measure the security of configurations, reconfiguration strategies and reconfiguration plans based on graph-based threat models. Additionally, we have implemented a prototype to demonstrate our approach and automate the process. Finally, we have evaluated our approach based on a case study of our own making. The preliminary results expose certain security issues during the structural adaptation process and exhibit the effectiveness of our proposed metrics.
19

Wang, Chaoli. "A multiresolutional approach for large data visualization." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1164730737.

Full text
20

Chakir, El-Alaoui El-Houcine. "Les métriques sous riemanniennes en dimension 3." Rouen, 1996. http://www.theses.fr/1996ROUES055.

Full text
Abstract:
This thesis is essentially devoted to the study of so-called contact sub-Riemannian metrics in dimension 3. Although this study is carried out locally, fundamental differences from Riemannian metrics are observed. In particular, the conjugate and cut loci of a point p contain p in their closure. The work is divided into two parts: 1. We first show that a formal normal form can be associated with every formal contact sub-Riemannian metric. We then show that this normal form is actually smooth (i.e. C, c) if the metric is. This normal form also makes it possible to define invariants associated with contact sub-Riemannian metrics. 2. Using this normal form, we prove that the exponential map of a generic contact sub-Riemannian metric is determined by a certain finite jet of the metric, and we deduce a generic classification of its singularities (i.e. conjugate loci).
21

Hård af Segerstad, Per. "Sveriges åtgärder mot det ryska militära hotet tre försvarsinriktningsperioder åren 2005-2020; balansering mot hotet eller inte? : En teoriprövande fallstudie av Stephen Walts hotbalanseringsteori, respektive Randall Schwellers teori om underbalansering." Thesis, Försvarshögskolan, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:fhs:diva-9297.

Full text
Abstract:
The risk of war increases if states do not take measures against threats from other states. At the same time, there is ambiguity in that researchers have reached partly contrasting conclusions about what states actually do when exposed to military threats. Two well-known theories in the field contradict each other. Stephen Walt's theory holds that states arm themselves militarily and enter into alliances to resist threatening states: they balance against the threat. Randall Schweller's theory contrasts with this and holds that domestic resistance to investing in military defence often leads states not to balance against threats effectively: they underbalance. Both theories have been criticized, but also supported, by subsequent researchers. This study uses earlier research on the two theories to test them in a way that takes this criticism into account. Adam Liff's analytical tool is therefore used to measure the measures taken in the case of Sweden against the Russian military threat. The study shows that the tested theories receive support in different ways. Sweden balances against the threat by taking measures to resist Russia's offensive military capabilities only after the Russian leadership's intentions are perceived as harmful to Swedish interests. The underbalancing theory receives its strongest support from Sweden's not entering into binding defence agreements.
22

Carvalho, Solidônio Rodrigues de. "Determinação do campo de temperatura em ferramentas de corte durante um processo de usinagem por torneamento." Universidade Federal de Uberlândia, 2005. https://repositorio.ufu.br/handle/123456789/14729.

Full text
Abstract:
Fundação de Amparo a Pesquisa do Estado de Minas Gerais
During machining, high temperatures are generated in the region of the tool cutting edge, and these temperatures have a controlling influence on the wear rate of the cutting tool and on the friction between the chip and the tool. However, direct measurement of temperature using contact-type sensors at the tool-work interface is difficult to implement due to the rotating movement of the workpiece and the presence of the chip. Therefore, the use of inverse heat conduction techniques represents a good alternative, since these techniques take into account temperatures measured at accessible positions. This work proposes a new experimental methodology to determine the thermal fields and the heat generated at the chip-tool interface during the machining process using inverse problem techniques. It develops a numerical 3-D transient thermal model that takes into account both the tool and the tool-holder assembly. The thermal model represents the direct problem and is solved using finite volume techniques on a non-uniform mesh. The related inverse problem is solved by using the golden section technique. The experimental data and inverse technique are processed using a computational algorithm developed specifically for inverse heat flux estimation in manufacturing processes, called INV3D. An error analysis of the results and the experimental procedures used to determine the cut area and the tool-holder temperature are also presented. Besides the machining problem, INV3D is also able to solve other thermal problems. As an example of its generality, this work also presents an application of this software to the study of thermal fields during a welding process.
During metal machining, high temperatures are generated at the chip-tool interface. These temperatures, in turn, strongly influence the material removal rate and the friction between the chip and the cutting tool. However, direct temperature measurement in this region is difficult to carry out because of the movement of the workpiece and the presence of the chip. The use of inverse heat conduction techniques therefore represents a good alternative for obtaining these temperatures, since such techniques allow the use of experimental data obtained in accessible regions. This work proposes a new experimental methodology for determining the thermal fields and the heat flux generated in cutting tools during a turning process. One of the innovations presented is the development of a three-dimensional transient thermal model that considers, in addition to the cutting tool, the tool, shim and tool-holder assembly. The direct problem is then solved numerically using finite differences on a non-uniform discretization mesh. The inverse problem, in turn, is solved by means of the golden section optimization technique. To solve the problems involved, a dedicated computational code, named INV3D, was developed. The INV3D program also contains a series of functions that assist in the acquisition of the experimental data, in the generation of the three-dimensional mesh and in the analysis in a graphical environment. The work also presents the experimental procedures used for measuring the temperatures in the tool, shim and tool holder and for identifying the cutting interface area. The results obtained are validated by means of controlled laboratory experiments and qualitative analyses. Beyond the machining problem investigated, as an example of the generality of INV3D in solving thermal problems, an application of this software to the study of thermal fields arising from a TIG welding process on aluminium is also presented.
Doctorate in Mechanical Engineering
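For reference, the golden section technique that this abstract names as the inverse-problem optimizer is the standard one-dimensional search sketched below in Python; the objective, simulate and measured names are illustrative placeholders under assumed interfaces, not part of INV3D or the author's code.

import math

def golden_section_minimize(f, a, b, tol=1e-6):
    """Minimize a unimodal scalar function f on the interval [a, b]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:               # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                     # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

# Hypothetical objective: squared mismatch between measured thermocouple
# readings and temperatures predicted by a direct conduction model for a
# candidate heat-flux value q ('simulate' and 'measured' are stand-ins).
def objective(q, measured, simulate):
    predicted = simulate(q)
    return sum((m - p) ** 2 for m, p in zip(measured, predicted))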
23

Lima, Frederico Romagnoli Silveira. "Modelagem tridimensional de problemas inversos em condução de calor: aplicação em problemas de usinagem." Universidade Federal de Uberlândia, 2001. https://repositorio.ufu.br/handle/123456789/14793.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This work proposes a methodology to obtain the transient cutting tool temperature. The physical phenomenon is treated with a three-dimensional analysis. An inverse heat conduction technique is proposed to estimate the heat flux generated on the rake face of the tool. This technique is based on the conjugate gradient method with the adjoint equation. The machining process is instrumented with thermocouples at the bottom face of the tool, opposite its main rake face. The signals are automatically received and processed using a data acquisition system and a Pentium PC. The direct problem is solved numerically using the finite volume method with the estimated heat flux. The experimental data are processed using a computational algorithm developed specifically for inverse heat flux estimation in machining processes. Experimental temperatures are obtained during several cutting tests using cemented carbide and ceramic tools. The influence of the cutting parameters on the temperature distribution is verified. An error analysis of the results is also presented.
The objective of this work is to propose a methodology for obtaining the temperature distribution on the cutting surface of the tool in a turning process. The thermal machining problem is characterized realistically through a three-dimensional approach. To obtain the thermal fields in the cutting region, the use of inverse heat conduction techniques is proposed. The solution of the thermal problem is thus obtained in two stages: the inverse solution and the direct solution. The inverse solution is based on the conjugate gradient method with the adjoint equation to estimate the heat flux generated in the cutting region that flows into the tool. Thermocouples welded to the opposite face of the tool provide the information needed for the inverse solution to estimate the heat flux. Once the heat flux flowing into the tool has been obtained, the direct solution of the thermal problem is used to calculate the temperature in the cutting region. The computational implementation of the inverse and direct solutions is presented in the form of a computer program entitled GRAD3D 1.0. In addition to the proposed solution of the thermal machining problem, this program makes it possible to numerically simulate related thermal problems. One-dimensional and three-dimensional experimental tests under controlled conditions are presented to validate the computational algorithm. In the machining experiments, the applicability of the proposed technique is evaluated for the turning of a grey cast iron bar using cemented carbide (WC) and ceramic (Si3N4) tools. An analysis of the errors that may be present in the results obtained is also presented.
Doctorate in Mechanical Engineering
24

Alves, João Paulo Martins José Teixeira. "Threat intelligence: using osint and security metrics to enhance siem capabilities." Master's thesis, 2017. http://hdl.handle.net/10451/31162.

Full text
Abstract:
Master's thesis, Information Security, Universidade de Lisboa, Faculdade de Ciências, 2017
In recent years, in the face of the growing number and complexity of cyberattacks against organizations, there has been strong growth in investment in information security platforms within organizations' infrastructures. The teams responsible for ensuring cybersecurity need to monitor a vast number of devices, users and applications and, consequently, the cybersecurity events related to those elements. The platform most widely used to monitor information security events is the Security Information and Event Management (SIEM) system. This system aggregates security information from multiple sources, normalizes it, enriches it and sends it to a centralized management console. The efficiency and effectiveness of security incident response teams depend to a large extent on the system's ability to produce detailed, contextualized alerts about possible threats. Improving this ability requires combining relevant external indicators with the information collected from the organization's infrastructure. Threat Intelligence (TI) is the knowledge acquired by combining techniques for gathering information about threats external to the organization with techniques for gathering information about the organization's internal security factors. It is necessary to keep track of public sources of cybersecurity information and to assess their quality in order to obtain reliable indicators of malicious activity. The organization needs to assess its own level of cybersecurity in order to identify existing vulnerabilities before they can be exploited by malicious actors. Only by drawing on both internal and external information sources is it possible to take a comprehensive TI approach and apply the appropriate cybersecurity measures against the cyberattacks to which the organization may be vulnerable. For an organization to establish its level of cybersecurity correctly, adequate risk management is required. Risk management is characterized by three interconnected, continuous stages: risk analysis, risk evaluation and risk control. At the end of the process, the organization will have credible knowledge of its IT risk and sound support for decision-making regarding restructuring and investment in information security. Security metrics are the most suitable tool for the risk management process. They help determine the organization's current cybersecurity posture, the performance of the Security Operations Center (SOC) team, and the security level of the organization's infrastructures. Government and military entities were the first to use security metrics. Recently, however, researchers from many kinds of organizations (public, private and public-private) have invested resources in improving these metrics and implementing them in their organizations. All this attention to security metrics is due to the evident result of implementing them: it becomes possible to measure risk, classify it and, finally, take the appropriate countermeasures to reduce the impact of possible cyberattacks, thereby increasing the organization's cybersecurity. It is nevertheless necessary to establish the objectives and purpose of the security metrics.
Many cybersecurity teams make the mistake of creating metrics that are complex, out of context and express results with unrealistic values. The result of this mismanagement of security metrics is the opposite of what is intended: it provides poor information and consequently lowers an organization's cybersecurity. Visualization of the metric results is the final step in creating metrics; its purpose is to provide information in an illustrative way, using formats that are easy to read and understand. Visualizations help the team responsible for an organization's cybersecurity to see at a glance information on the cybersecurity level of the systems and the risk of each asset. They allow the team to assess and answer, both quantitatively and qualitatively, the questions posed by executive management, such as: what is the security level, what is the organization's risk value, what is the financial return on the investments made to improve information security, or even to justify keeping, reducing or increasing cybersecurity equipment and teams. In addition to the mechanism for discovering internal information, Open Source Intelligence (OSINT) is regarded as the mechanism for capturing external information from online sources. With a set of techniques, it is possible to capture the information relevant to knowledge about cyberthreats. There are cybersecurity communities whose aim is to publish lists with information about new cyberattacks, normally containing information on suspicious hosts or malicious content. These lists, the blacklists, may be public, when anyone can access their information, or private, with their use restricted to a particular group or community. Although the lists offer valuable information on current cyberthreats, without any kind of pre-processing they can generate a significant number of false positives, owing to the lack of contextualization and alignment with the organization's reality. This work is divided into two topics: security metrics and trustworthy blacklists. For each topic, solutions are described for improving an organization's security posture by integrating the TI process into the SIEM in real time. This integration can be materialized in the use of security metrics to analyse the organization's security posture, and of security feeds with information on IP addresses suspected of malicious activity, taking into account, by means of metrics, the SOC team's handling of security incidents. Direct use of the blacklists, without any pre-processing, results in a high number of false positives because of the lack of contextualization and alignment with the organization's reality. The work is part of the DiSIEM project and results from the collaboration of two of the project partners, Faculdade de Ciências da Universidade de Lisboa and EDP – Energias de Portugal, SA. The objectives are aligned with the goals of the DiSIEM project: 1) to feed OSINT information into a SIEM system, improving its detection and prevention of new threats; 2) to identify and develop a set of metrics dedicated to the cybersecurity team for better management and monitoring of security events, in order to raise the organization's security posture and, consequently, reduce the risk of malicious activity in the organization.
The dissertation presents and discusses a set of metrics with a well-defined structure to be applied in the SIEM system. These metrics cover management, processes and technology, and are appropriate to the reality of the cybersecurity team. Prototypes for visualizing the metric results are introduced, including historical data, thereby enabling a comparative evaluation of efficiency. The work proposes an OSINT solution to improve the SIEM system's alerting and reduce the false positive rate, based on assessing the level of trust in public information sources, and in this way to contribute to the efficiency of the cybersecurity teams in organizations that use a SIEM system. This solution uses blacklists that identify Internet Protocol (IP) addresses suspected of malicious activity. The information may concern their maliciousness, the number of reports (made by communities or other blacklists), the number of attacks with which the IP address has been associated, the last time it was reported, and so on. The blacklists are useful in the SIEM system for monitoring communications between the organization and a suspicious IP address. When an alarm is raised for a suspicious communication, the SOC team can act immediately and analyse the events to identify the machine, request a local analysis and eliminate the threat if one is detected. The solution collects information on IP addresses from a set of public lists. The IP addresses and the lists are assessed for trustworthiness, based on correlating the information collected from the lists and on metrics over the outcomes of the incidents associated with suspicious communications between the organization and IP addresses from the lists. This assessment is carried out continuously, whenever there is a change in the public lists or in the incidents, so that its values remain as up to date and accurate as possible. An application was developed to manage the blacklists used, the IP addresses, the organization's cases and the organization's public addresses. SIEM rules are presented that select the IP addresses collected from the blacklists, based on the reputation given by the trustworthiness assessment, for monitoring communications between the organization and the suspicious IP addresses. The results show an increase in the detection of positive cases when the proposed solution is used. This increase is due to the use of internal information about the incidents handled by the SOC team as parameters for assessing the trustworthiness of the blacklists and of the IP addresses. Two components that stand out as trustworthiness parameters are precision and persistence. The precision component takes the organization's outcomes into account and increases the trustworthiness of an IP address or of a list when the number of positive outcomes of the incident cases related to the IP exceeds the number of false-positive outcomes. Persistence takes precision and the reporting of an IP address by the lists into account in order to keep it in our list for three months.
Assessing a blacklist and its content in light of the organization's environment is a solution that has not been presented in any other work; the closest approaches use metrics or gather information using the OSINT concept without assessing the content against the organization's own information. Being innovative, this work is still at an early stage. The results of our study will serve as a basis for improvements and for comparing the results of later studies aimed at better assessing the trustworthiness of public lists and the maliciousness of their content.
Threat Intelligence (TI) is a cyber defence process that combines the use of internal and external information discovery mechanisms. The Security Information and Event Management (SIEM) system is the tool typically used to aggregate data from multiple sources, normalize it, enrich it and send it to a centralized management console, later used by the security operations (SOC) team. However, it is necessary to use Security Metrics (SM) to summarize, calculate and provide valuable information to the SOC team from the large datasets collected in the SIEM. Although SM provide valuable information, their erroneous creation or use could lead to the opposite of the intended goal and decrease the security level by generating false positives. Regarding external information discovery, information from blacklists is commonly used to monitor and/or block external cyberthreats. The blacklists provide intelligence about suspicious Internet Protocol (IP) addresses, reported by communities and security organizations. Although blacklists are commonly used to detect suspicious communications, they generate a high rate of false positives. We introduce a set of security metrics, well structured and properly defined to be used with a SIEM system. We develop a solution with an Open Source Intelligence (OSINT) mechanism to discover and collect suspicious IP addresses from public blacklists, and a process to assess the reputation of the suspicious IP addresses and blacklists, considering the persistence of the IP addresses and the organization's incidents of communication with suspicious IP addresses. The IP addresses are inserted into the SIEM with monitoring rules, aiming to reduce the number of false positives. A preliminary study in a real environment shows that the proposed solution improves the security effectiveness of the SIEM's alerts thanks to the innovative idea of assessing the IP addresses and blacklists using the persistence and precision components and considering the status of the organization's incidents.
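The precision and persistence components described in this entry are only summarized in the abstracts, so the following Python sketch is one possible illustrative reading rather than the thesis's actual formulas; the IPRecord fields, the 0.5 threshold and the 90-day retention window are assumptions.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class IPRecord:
    address: str
    true_positives: int       # SOC-confirmed incidents involving this IP
    false_positives: int      # SOC-dismissed alerts involving this IP
    last_reported: datetime   # most recent sighting on any public blacklist

def precision(rec: IPRecord) -> float:
    """Share of SOC-handled alerts for this IP that were confirmed malicious."""
    total = rec.true_positives + rec.false_positives
    return rec.true_positives / total if total else 0.0

def keep(rec: IPRecord, now: datetime, retention_days: int = 90) -> bool:
    """Persistence rule (one possible reading): retain an IP for monitoring while
    it has been reported within the retention window and its precision is not
    dominated by false positives."""
    recently_reported = now - rec.last_reported <= timedelta(days=retention_days)
    return recently_reported and precision(rec) >= 0.5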
25

"Patch to Landscape and Back Again: Three Case Studies of Land System Architecture Change and Environmental Consequences from the Local to Global Scale." Doctoral diss., 2020. http://hdl.handle.net/2286/R.I.57273.

Full text
Abstract:
Humans have modified land systems for centuries in pursuit of a wide range of social and ecological benefits. Recent decades have seen an increase in the magnitude and scale of land system modification (e.g., the Anthropocene) but also a growing recognition and interest in generating land systems that balance environmental and human well-being. This dissertation focused on three case studies operating at distinctive spatial scales in which broad socio-economic or political-institutional drivers affected land systems, with consequences for the environmental conditions of that system. Employing a land system architecture (LSA) framework and using landscape metrics to quantify landscape composition and configuration from satellite imagery, each case linked these drivers to changes in LSA and environmental outcomes. The first paper of this dissertation found that divergent design intentions lead to unique trajectories for LSA, the urban heat island effect, and bird community at two urban riparian sites in the Phoenix metropolitan area. The second paper examined institutional shifts that occurred during Cuba’s “special period in time of peace” and found that the resulting land tenure changes both modified and maintained the LSA of the country, changing cropland but preserving forest land. The third paper found that globalized forces may be contributing to the homogenizing urban form of large, populous cities in China, India, and the United States—especially for the ten largest cities in each country—with implications for surface urban heat island intensity. Expanding knowledge on social drivers of land system and environmental change provides insights on designing landscapes that optimize for a range of social and ecological trade-offs.
Dissertation/Thesis
Doctoral Dissertation Geography 2020