
Dissertations / Theses on the topic 'Control Flow Graph (CFG)'


Consult the top 40 dissertations / theses for your research on the topic 'Control Flow Graph (CFG).'


1

Sánchez, Yagüe Mónica. "Information extraction and validation of CDFG in NoGap." Thesis, Linköpings universitet, Datorteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93905.

Full text
Abstract:
A Control Data Flow Graph (CDFG) is a Directed Acyclic Graph (DAG) in which a node can be either an operation node or a control node. The purpose of this kind of graph is to capture all the control and data flow information of the original hardware description while preserving the various dependencies. This kind of graph is generated by Novel Generator of Accelerators and Processors (NoGap), a design automation tool for Application Specific Instruction-set Processor (ASIP) and accelerator design developed by Per Karlström at the Department of Electrical Engineering of Linköping University. The aim of this project is to validate the graph, i.e. to check whether it fulfills the requirements of its definition. If it does not, this is considered an error and the running process is aborted. Moreover, useful information is extracted from the graph for future work.
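
The validation task described in this abstract, checking that the generated CDFG really is a directed acyclic graph, can be sketched with a standard cycle check. This is an illustrative sketch only, not NoGap's actual implementation:

```python
from collections import deque

def is_dag(nodes, edges):
    """Kahn's algorithm: the graph is acyclic iff a topological
    sort manages to visit every node."""
    indegree = {n: 0 for n in nodes}
    adj = {n: [] for n in nodes}
    for src, dst in edges:
        adj[src].append(dst)
        indegree[dst] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    visited = 0
    while queue:
        n = queue.popleft()
        visited += 1
        for m in adj[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    return visited == len(nodes)

# A tiny CDFG-like graph: a chain of operation/control nodes is valid,
# while a cycle violates the DAG requirement and would abort processing.
valid = is_dag(["op1", "ctrl", "op2"], [("op1", "ctrl"), ("ctrl", "op2")])
invalid = is_dag(["op1", "ctrl"], [("op1", "ctrl"), ("ctrl", "op1")])
```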
APA, Harvard, Vancouver, ISO, and other styles
2

Self, Joel P. "On-the-Fly Dynamic Dead Variable Analysis." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/886.

Full text
Abstract:
State explosion in model checking continues to be the primary obstacle to widespread use of software model checking. The large input ranges of variables used in software are the main cause of state explosion. As software grows in size and complexity, the problem only becomes worse. As such, model checking research into data abstraction as a way of mitigating state explosion has become more and more important. Data abstractions aim to reduce the effect of large input ranges. This work focuses on a static program analysis technique called dead variable analysis. The goal of dead variable analysis is to discover variable assignments that are never used. When applied to model checking, this allows us to ignore the entire input range of dead variables and thus reduce the size of the explored state space. Prior research into dead variable analysis for model checking does not make full use of the dynamic run-time information present during model checking. We present an algorithm for intraprocedural dead variable analysis that uses dynamic run-time information to find more dead variables on-the-fly and further reduce the size of the explored state space. We introduce a definition of the maximal state space reduction possible through an on-the-fly dead variable analysis and then show that our algorithm produces a maximal reduction in the absence of non-determinism.
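
The core idea of dead variable analysis, that an assignment is dead if the variable is overwritten or execution ends before any read, can be illustrated on straight-line code. This is a simplified sketch over a single block; the thesis works on full CFGs during model checking:

```python
def dead_assignments(instructions):
    """Backward scan over straight-line code. Each instruction is
    (target, used_vars); target is None for pure reads. An assignment
    is dead if its target is not live (read later before reassignment)."""
    live = set()
    dead = []
    for i in reversed(range(len(instructions))):
        target, used = instructions[i]
        if target is not None and target not in live:
            dead.append(i)
        if target is not None:
            live.discard(target)
        live |= set(used)
    return sorted(dead)

prog = [
    ("x", []),      # 0: x = 1   (live: x is read at line 1)
    ("y", ["x"]),   # 1: y = x + 1
    ("x", []),      # 2: x = 5   (dead: x is never read afterwards)
    (None, ["y"]),  # 3: print(y)
]
dead_sites = dead_assignments(prog)
```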
3

LI, ZHEN. "Control Flow Graph Based Attacks : In the Context of Flattened Programs." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-155770.

Full text
Abstract:
This report addresses de-obfuscation of programs. The targeted obfuscation scheme is control flow flattening, an obfuscation method that hides the control flow of a program. This scheme introduces a special block named the dispatcher into the program, and the control flow is reconstructed so that it is directed back to the dispatcher whenever the execution of a basic block ends. As a result, in the flattened program each basic block could be a predecessor or a successor of any other basic block, while the real control flow of the program is disclosed only during execution. This report aims to remove the dispatcher added to the flattened program and to rebuild the control flow of the original program. To achieve this, the report presents a de-obfuscation model based on the control flow graph of an obfuscated program. The de-flattening model makes use of both static and dynamic analysis. It primarily relies on execution paths obtained by executing the program dynamically: after eliminating the dispatcher block from these paths, the real control flow of the original program is disclosed, and from these real execution paths the control flow of the unobfuscated program can be constructed. In order to obtain the full program structure, we need to gather execution paths that fully cover the program, which can hardly be achieved with dynamic analysis alone. Therefore, static analysis is introduced. In the de-flattening model, the execution paths within a program are computed with the assistance of dynamic execution path analysis, a technique that statically computes the feasible paths in a program by solving logical formulas obtained during the exploration of the program code. 
With this static analysis method, the model is able to reverse the flattened program to its original structure. Since obfuscated programs are distributed as binaries, our research provides insights into de-obfuscation on binaries directly. Besides, the de-flattening results obtained in the report are valuable for improving existing code obfuscation techniques.
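
Control flow flattening as described above can be illustrated with a toy example: Euclid's algorithm rewritten so that every basic block returns to a dispatcher that selects the next block through a state variable. This is a hypothetical sketch for illustration, not taken from the thesis:

```python
def gcd_flattened(a, b):
    """gcd in flattened form: the while loop is the dispatcher, and the
    original control flow (test/body/exit) is hidden in a state variable,
    so statically every block looks like a successor of every other."""
    state = "entry"
    while state != "exit":
        if state == "entry":
            state = "test"
        elif state == "test":
            state = "body" if b != 0 else "exit"
        elif state == "body":
            a, b = b, a % b
            state = "test"
    return a
```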
4

Torbey, Elie. "Control/data flow graph synthesis using evolutionary computation and behavioral estimation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ37080.pdf.

Full text
5

Stanier, James. "Removing and restoring control flow with the Value State Dependence Graph." Thesis, University of Sussex, 2012. http://sro.sussex.ac.uk/id/eprint/7576/.

Full text
Abstract:
This thesis studies the practicality of compiling with only data flow information. Specifically, we focus on the challenges that arise when using the Value State Dependence Graph (VSDG) as an intermediate representation (IR). We perform a detailed survey of IRs in the literature in order to discover trends over time, and we classify them by their features in a taxonomy. We see how the VSDG fits into the IR landscape, and look at the divide between academia and the 'real world' in terms of compiler technology. Since most data flow IRs cannot be constructed for irreducible programs, we perform an empirical study of irreducibility in current versions of open source software, and then compare them with older versions of the same software. We also study machine-generated C code from a variety of different software tools. We show that irreducibility is no longer a problem, and is becoming less so with time. We then address the problem of constructing the VSDG. Since previous approaches in the literature have been poorly documented or ignored altogether, we give our approach to constructing the VSDG from a common IR: the Control Flow Graph. We show how our approach is independent of the source and target language, how it is able to handle unstructured control flow, and how it is able to transform irreducible programs on the fly. Once the VSDG is constructed, we implement Lawrence's proceduralisation algorithm in order to encode an evaluation strategy whilst translating the program into a parallel representation: the Program Dependence Graph. From here, we implement scheduling and then code generation using the LLVM compiler. We compare our compiler framework against several existing compilers, and show how removing control flow with the VSDG and then restoring it later can produce high quality code. We also examine specific situations where the VSDG can put pressure on existing code generators. 
Our results show that the VSDG represents a radically different, yet practical, approach to compilation.
6

Torbey, Elie. "Control/data flow graph synthesis using evolutionary computation and behavioral estimation." Carleton University dissertation (Engineering, Electronics), Ottawa, 1999.

Find full text
7

Stange, Yuri. "Visualization of Code Flow." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-162108.

Full text
Abstract:
Visual representation of Control Flow Graphs (CFG) is a feature available in many tools, such as decompilers. These tools often rely on graph drawing frameworks which implement the Sugiyama hierarchical style graph drawing method, a well known method for drawing directed graphs. The main disadvantage of the Sugiyama framework is that it does not take into account the nature of the graph to be visualized; specifically, loops are treated as second-class citizens. The question this paper attempts to answer is: how can we improve the visual representation of loops in the graph? A method based on the Sugiyama framework was developed and implemented in Qt. It was evaluated by informally interviewing test subjects, who were allowed to test the implementation and compare it to the normal Sugiyama method. The results show that all test subjects concluded that loops, as well as the overall representation of the graph, were improved, although with reservations. The method presented in this paper has problems which need to be addressed before it can be seen as an optimal solution for drawing Control Flow Graphs.
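
The first phase of the Sugiyama method mentioned above, assigning nodes to layers, can be sketched with longest-path layering. This sketch assumes loop back edges have already been removed to make the graph acyclic, which is exactly the step where loops receive their "second-class" treatment:

```python
def longest_path_layers(nodes, edges):
    """Assign each node a layer equal to its longest distance from a
    source node: the layering phase of a Sugiyama-style hierarchical
    layout. Assumes the input is a DAG (back edges stripped)."""
    preds = {n: [] for n in nodes}
    for src, dst in edges:
        preds[dst].append(src)
    layer = {}
    def depth(n):
        if n not in layer:
            layer[n] = 1 + max((depth(p) for p in preds[n]), default=-1)
        return layer[n]
    for n in nodes:
        depth(n)
    return layer

# Diamond-shaped CFG: entry "a" branches to "b"/"c", which rejoin at "d".
layers = longest_path_layers(
    ["a", "b", "c", "d"],
    [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")],
)
```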
8

Pogulis, Jakob. "Generation of dynamic control-dependence graphs for binary programs." Thesis, Linköpings universitet, Databas och informationsteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110247.

Full text
Abstract:
Dynamic analysis of binary files is an area of computer science that has many purposes. It is useful when it comes to debugging software in a development environment, where the developer needs to know which statements affected the value of a specific variable. But it is also useful when analyzing software for potential vulnerabilities, where data controlled by a malicious user could potentially result in the software executing adverse commands or malicious code. In this thesis a tool has been developed to perform dynamic analysis of x86 binaries in order to generate dynamic control-dependence graphs over the execution. These graphs can be used to determine which conditional statements resulted in a certain outcome. The tool has been developed for x86 Linux systems using the dynamic binary instrumentation framework PIN, developed and maintained by Intel. Techniques that utilize the additional control-flow information available during dynamic analysis to improve the control flow graph have been implemented and tested. The basic theory of dynamic analysis as well as dynamic slicing is discussed, and a basic overview of the implementation of a dynamic analysis tool is presented. The techniques used to improve the control flow graph have a significant impact on the performance of the dynamic analysis tool, but approaches to improving the performance are discussed.
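
The notion of dynamic control dependence can be sketched by replaying an execution trace and recording, for each executed statement, the branch decisions it sits under. This is a strong simplification assuming structured control flow with explicit join events; on real binaries, as in the thesis, regions must instead be delimited with post-dominator information:

```python
def dynamic_control_deps(trace):
    """Replay a structured execution trace and record, per statement,
    the stack of enclosing branch decisions it depends on.
    Events: ("branch", id) opens a region, ("join", id) closes it,
    ("stmt", id) executes a statement."""
    stack = []
    deps = {}
    for kind, node in trace:
        if kind == "branch":
            stack.append(node)
        elif kind == "join":
            stack.pop()
        else:
            deps[node] = tuple(stack)
    return deps

trace = [
    ("stmt", "s0"),
    ("branch", "b1"),   # e.g. an if whose condition was true
    ("stmt", "s1"),     # control-dependent on b1
    ("join", "b1"),
    ("stmt", "s2"),
]
deps = dynamic_control_deps(trace)
```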
9

Caron, David. "Generating a control/data-flow graph representation of a circuit from VHDL for use in circuit synthesis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0011/MQ27011.pdf.

Full text
10

Matsusaki, Cristina Toshie Motohashi. "Redes F-MFG (Functional Mark Flow Graph) e sua aplicação no projeto de sistemas antropocêntricos." Universidade de São Paulo, 1998. http://www.teses.usp.br/teses/disponiveis/3/3132/tde-17122004-154654/.

Full text
Abstract:
This work introduces an algebraic formalization of the F-MFG (Functional Mark Flow Graph), effective for the analysis and computational simulation of models of anthropocentric production systems, with a focus on the interaction and interface between human elements and the production system. Approaching anthropocentric systems as a class of discrete event dynamic systems, the F-MFG, a Petri net based technique, has demonstrated its potential for describing system actions and states in detail. The PFS/MFG Methodology (Production Flow Schema/Mark Flow Graph Methodology) combined with the F-MFG establishes an efficient procedure for the design of anthropocentric systems, resulting in concise modeling and subsequent structural and behavioral evaluation of the system.
11

Seidler, Steffen. "Über Minoren gerichteter Graphen." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-68153.

Full text
Abstract:
Since 1983, the publication series "Graph Minors" by N. Robertson and P. D. Seymour has essentially founded minor theory, with powerful tools such as tree decomposition and far-reaching results such as the graph minor theorem. For directed graphs, however, no unified minor theory exists yet, and this thesis systematizes the various approaches. Several directed versions of tree decomposition (directed tree decomposition after B. Reed, arboreal, D- and DAG-decomposition) are examined under unified aspects; D-width is particularly promising here. Close connections to two directed cops-and-robber games are considered under analogous aspects and serve as important tools. The central notion of a minor is essentially defined for undirected graphs, and a directed version raises several problems, which are examined. In "Directed Tree-Width" (2001), T. Johnson, N. Robertson, P. D. Seymour and R. Thomas proposed a compromise: by restricting the admissible contractions, the resulting minor notion should be compatible with some fundamental requirements while still being a powerful tool. This approach is pursued systematically with a list of requirements, considering restrictions step by step; the directed version of topological minors is particularly promising here. The minor theory of directed graphs is then applied to reducible flow graphs. The main results are constructions of arboreal and D-decompositions of width < 2, as well as counterexamples to the boundedness of DAG-width. Analogous results follow for the respective directed cops-and-robber games.
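
The basic operation behind graph minors, contracting an edge, carries over to directed graphs and can be sketched directly. This is an illustrative sketch of plain edge contraction; the thesis's point is precisely that which contractions should be admissible in the directed setting is a delicate question:

```python
def contract_edge(nodes, edges, u, v):
    """Contract the directed edge (u, v): merge v into u, redirect all
    edges touching v to u, and drop the resulting self-loop(s)."""
    assert (u, v) in edges, "can only contract an existing edge"
    new_nodes = [n for n in nodes if n != v]
    new_edges = set()
    for s, d in edges:
        s2 = u if s == v else s
        d2 = u if d == v else d
        if s2 != d2:  # drop self-loops created by the contraction
            new_edges.add((s2, d2))
    return new_nodes, new_edges

# Contracting a -> b in the path a -> b -> c yields the edge a -> c.
n2, e2 = contract_edge(["a", "b", "c"], {("a", "b"), ("b", "c")}, "a", "b")
```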
12

Kraut, Daniel. "Generování modelů pro testy ze zdrojových kódů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403157.

Full text
Abstract:
The aim of this master's thesis is to design and implement a tool for the automatic generation of paths in source code. First, a study of model based testing was carried out and a design for the desired automatic generator, based on coverage criteria defined on the CFG model, was produced. The main part of the thesis is the tool design and a description of its implementation. The tool supports many coverage criteria, which allows its user to focus on a specific artefact of the system under test. Moreover, the tool is tuned to allow additional requirements on the size of the generated test suite, reflecting practical real-world usage. The generator was implemented in C++ and its web interface in Python, which is also used to integrate the tool into the Testos platform.
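
Path generation over a CFG, the core task of such a tool, can be sketched with a depth-first enumeration of entry-to-exit paths. This is an illustrative sketch assuming a CFG given as an adjacency map, not the thesis's C++ implementation; loops are bounded crudely by forbidding node revisits:

```python
def all_paths(cfg, entry, exit_node):
    """Enumerate acyclic entry-to-exit paths in a CFG, a starting point
    for path-coverage test generation. cfg maps a node to its successors."""
    paths = []
    def dfs(node, path):
        if node == exit_node:
            paths.append(path)
            return
        for succ in cfg.get(node, []):
            if succ not in path:  # bound loops: no node revisited
                dfs(succ, path + [succ])
    dfs(entry, [entry])
    return paths

# A simple if/else: both branches must appear for branch coverage.
cfg = {"entry": ["then", "else"], "then": ["exit"], "else": ["exit"]}
paths = all_paths(cfg, "entry", "exit")
```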
13

Segura, Salvador Albert. "High-performance and energy-efficient irregular graph processing on GPU architectures." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671449.

Full text
Abstract:
Graph processing is an established and prominent domain that is the foundation of new emerging applications in areas such as data analytics and machine learning, empowering applications such as road navigation, social networks and automatic speech recognition. The large amount of data employed in these domains requires high throughput architectures such as GPGPU. Although the processing of large graph-based workloads exhibits a high degree of parallelism, memory access patterns tend to be highly irregular, leading to poor efficiency due to memory divergence. In order to ameliorate these issues, GPGPU graph applications perform stream compaction operations which process active nodes/edges so that subsequent steps work on a compacted dataset. We propose to offload this task to the Stream Compaction Unit (SCU), a hardware extension tailored to the requirements of these operations, which additionally performs pre-processing by filtering and reordering the elements processed. We show that memory divergence inefficiencies prevail in GPGPU irregular graph-based applications, yet we find that it is possible to relax the strict relationship between a thread and the data it processes to enable new optimizations. As such, we propose the Irregular accesses Reorder Unit (IRU), a novel hardware extension integrated in the GPU pipeline that reorders and filters the data processed by the threads on irregular accesses, improving memory coalescing. Finally, we leverage the strengths of both previous approaches to achieve synergistic improvements. We do so by proposing the IRU-enhanced SCU (ISCU), which employs the efficient pre-processing mechanisms of the IRU to improve SCU stream compaction efficiency and to mitigate the NoC throughput limitations caused by SCU pre-processing operations. We evaluate the ISCU with state-of-the-art graph-based applications, achieving a 2.2x performance improvement and a 10x energy-efficiency improvement.
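
The stream compaction operation that the SCU proposal offloads to hardware can be sketched in a few lines: a prefix sum over the active flags gives each surviving element its slot in a dense output array. This is a sequential sketch of the general technique, not the dissertation's hardware design:

```python
def stream_compaction(active_flags, data):
    """Keep only elements whose flag is set, packing them into a dense
    array so later steps iterate over active nodes/edges only."""
    # Exclusive prefix sum over the flags yields each kept element's slot
    # (on a GPU this scan is what runs in parallel).
    offsets = []
    total = 0
    for f in active_flags:
        offsets.append(total)
        total += 1 if f else 0
    out = [None] * total
    for i, f in enumerate(active_flags):
        if f:
            out[offsets[i]] = data[i]
    return out

# Frontier of a graph traversal: only nodes "a", "c", "d" are active.
compacted = stream_compaction([1, 0, 1, 1, 0], ["a", "b", "c", "d", "e"])
```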
14

Thierry, Aurélien. "Désassemblage et détection de logiciels malveillants auto-modifiants." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0011/document.

Full text
Abstract:
This dissertation explores tactics for the analysis and disassembly of malware that uses obfuscation techniques such as self-modification and code overlapping. Most malware found in the wild uses self-modification in order to hide its payload from an analyst. We propose a hybrid analysis which uses an execution trace obtained by dynamic analysis. This analysis cuts the self-modifying binary into several non self-modifying parts that we can then examine through static analysis, using the trace as a guide. This second analysis circumvents further protection techniques such as code overlapping in order to recover the control flow graph of the analyzed binary. 
Moreover, we study a morphological malware detector which compares the control flow graph of the binary under analysis against those of known malware. We provide a formalization of this graph comparison problem, along with efficient algorithms that solve it, and a use case in the software similarity field.
15

Nielsen, Jerel Bendt. "Robust Visual-Inertial Navigation and Control of Fixed-Wing and Multirotor Aircraft." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7584.

Full text
Abstract:
With the increased performance and reduced cost of cameras, the robotics community has taken great interest in estimation and control algorithms that fuse camera data with other sensor data. In response to this interest, this dissertation investigates the algorithms needed for robust guidance, navigation, and control of fixed-wing and multirotor aircraft applied to target estimation and circumnavigation. This work begins with the development of a method to estimate target position relative to static landmarks, deriving and using a state-of-the-art EKF that estimates static landmarks in its state. Following this estimator, improvements are made to a nonlinear observer solving part of the SLAM problem. These improvements include a moving-origin process to keep the coordinate origin within the camera field of view and a sliding-window iteration algorithm to drastically improve the convergence speed of the observer. Next, observers that directly estimate relative target position are created, together with a circumnavigation guidance law for a multirotor aircraft. Turning to fixed-wing aircraft, a state-dependent LQR controller with inputs based on vector fields is developed, in addition to an EKF derived from error-state and Lie group theory to estimate aircraft state and inertial wind velocity. The robustness of this controller/estimator combination is demonstrated through Monte Carlo simulations. Next, the accuracy, robustness, and consistency of a state-of-the-art EKF are improved for multirotors by augmenting the filter with a drag coefficient, partial updates, and keyframe resets. Monte Carlo simulations demonstrate the improved accuracy and consistency of the augmented filter. Lastly, a visual-inertial EKF using image coordinates is derived, as well as an offline calibration tool to estimate the transforms needed for accurate visual-inertial estimation algorithms. The image-based EKF and calibrator are also shown to be robust under various conditions through numerical simulation.
16

Voráč, Ladislav. "Řiditelné filtry s maximálním možným přeladěním a netradičními aktivními prvky." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218353.

Full text
Abstract:
The thesis deals with the design of tunable frequency filters using the active element MO-CF (Multiple-Output Current Follower) and the newly developed current active element DACA (Digitally Adjustable Current Amplifier) from ON Semiconductor. The first three chapters focus on the basic properties of frequency filters, the components used, and circuit design methods. The digitally controllable current amplifier DACA is used to control the frequency filters, adjusting them via the current gain parameter A through the digital input CTR. For the design of second-order filters, the method of M-C signal flow graphs is used. This method proved very useful in this work and contributed to the modification of already designed frequency filters. The fourth chapter describes the detailed design of circuits whose quality factor or cut-off frequencies can be controlled through the two parameters. The behaviour of the proposed filters is simulated in OrCAD for different levels of models of the circuit elements. At the end of each filter design there is a comparison of the theoretical and simulated values of the adjustable circuit parameters. Finally, one of the circuits was selected for experimental verification, followed by a comparison of the measured and simulated transfer characteristics.
17

Neikter, Carl-Fredrik. "Cache Prediction and Execution Time Analysis on Real-Time MPSoC." Thesis, Linköping University, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15394.

Full text
Abstract:
Real-time systems do not only require that the logical operations are correct. Equally important is that the specified time constraints are always met. This has successfully been studied before for mono-processor systems. However, as the hardware in the systems gets more complex, the previous approaches become invalid. For example, multi-processor systems-on-chip (MPSoC) become more and more common, and together with a shared memory, the bus access time is unpredictable in nature. This has recently been resolved, but a safe and not too pessimistic cache analysis approach for MPSoC has not been investigated before. This thesis has resulted in designed and implemented algorithms for cache analysis on real-time MPSoC with a shared communication infrastructure. An additional advantage is that the algorithms include improvements compared to previous approaches for mono-processor systems. The verification of these algorithms has been performed with the help of data flow analysis theory. Furthermore, it is not known how different types of cache miss characteristics of a task influence the worst case execution time on MPSoC. Therefore, a program that generates randomized tasks, according to different parameters, has been constructed. The parameters can, for example, influence the complexity of the control flow graph and the average distance between cache misses.
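
The per-access hit/miss classification that such a cache analysis must bound statically can be illustrated with a tiny direct-mapped cache simulator. The geometry below is a hypothetical example, unrelated to the thesis's actual algorithms or platform:

```python
def count_misses(accesses, num_lines, line_size):
    """Simulate a direct-mapped cache over a sequence of byte addresses
    and count misses (cold misses plus conflict evictions)."""
    lines = [None] * num_lines  # tag (block number) stored per line
    misses = 0
    for addr in accesses:
        block = addr // line_size
        idx = block % num_lines
        if lines[idx] != block:
            lines[idx] = block  # miss: fill the line
            misses += 1
    return misses

# 4 lines of 16 bytes: addresses 0 and 64 map to the same line and
# evict each other, so the final access to 0 misses again.
misses = count_misses([0, 4, 0, 64, 0], num_lines=4, line_size=16)
```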
18

Samala, Harikrishna. "Methodology to Derive Resource Aware Context Adaptable Architectures for Field Programmable Gate Arrays." DigitalCommons@USU, 2009. https://digitalcommons.usu.edu/etd/484.

Full text
Abstract:
The design of a common architecture that can support multiple data-flow patterns (or contexts) embedded in complex control flow structures, in applications like multimedia processing, is particularly challenging when the target platform is a Field Programmable Gate Array (FPGA) with a heterogeneous mixture of device primitives. This thesis presents scheduling and mapping algorithms that use a novel area cost metric to generate resource aware context adaptable architectures. Results of a rigorous analysis of the methodology on multiple test cases are presented. Results are compared against published techniques and show an area savings and execution time savings of 46% each.
19

Thierry, Aurélien. "Désassemblage et détection de logiciels malveillants auto-modifiants." Electronic Thesis or Diss., Université de Lorraine, 2015. http://www.theses.fr/2015LORR0011.

Full text
Abstract:
This dissertation focuses on the analysis and disassembly of malicious programs that use obfuscation techniques such as self-modification and code overlapping. Malware found in the wild makes heavy use of self-modification to hide its payload from analysts. We propose a hybrid analysis that relies on an execution trace obtained by dynamic analysis. This analysis splits the self-modifying binary into several non-self-modifying parts, which we can then study by static analysis, using the trace as a guide. This second analysis circumvents further protection techniques, such as code overlapping, in order to reconstruct the control flow graph of the analyzed binary. We also study a morphological malware detector, which compares the control flow graphs of a program under analysis with those of known malware. We provide a formalization of this graph-comparison problem, efficient algorithms that solve it, and concrete applications to the detection of software similarity.
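One idea from this abstract, using a dynamic execution trace as a guide for reconstructing control flow, can be shown with a toy sketch. This is a generic illustration only: the trace format and the `build_cfg` helper are invented here, and the thesis's hybrid analysis additionally handles self-modification and code overlapping, which this toy does not.

```python
# Recover observed control-flow edges from a recorded execution trace.
from collections import defaultdict

def build_cfg(trace):
    """Given a list of executed instruction addresses, record the
    successor edges actually observed at run time."""
    edges = defaultdict(set)
    for src, dst in zip(trace, trace[1:]):
        edges[src].add(dst)
    return edges

# A toy trace of a loop executed twice: 0x10 -> 0x14 -> 0x10 -> 0x14 -> 0x20
trace = [0x10, 0x14, 0x10, 0x14, 0x20]
cfg = build_cfg(trace)
assert cfg[0x14] == {0x10, 0x20}  # the branch at 0x14 was observed both ways
```

A static analysis can then disassemble each discovered block and add edges the trace never exercised, which is the point of combining the two views.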
APA, Harvard, Vancouver, ISO, and other styles
20

Silva, Francisco Coelho da. "Estimador e caracterizador de consumo de energia para software embarcado." Universidade Federal do Amazonas, 2011. http://tede.ufam.edu.br/handle/tede/3288.

Full text
Abstract:
Energy consumption has become an important concern in embedded-system design in recent years. The large-scale production and use of mobile devices has imposed constraints such as weight, size, battery lifetime, and increasingly complex functionality. Mobile devices operate on a limited power source whose autonomy and lifetime are directly related to the energy consumption of the running applications. These concerns have contributed to making energy consumption a quality metric in embedded-system design. This work proposes an approach for measuring, estimating, and comparing the energy consumption of program code written in ANSI C, based on test codes chosen for their energy-consumption characteristics and on execution time. To support the approach, an estimation tool called PESTI was extended to handle multiple probabilistic scenarios. Programs written in ANSI C are loaded onto an LPC2148 processor of the ARM7 family, with start and stop flags inserted to delimit the execution time and measure the energy consumption of each code section; a hardware energy-consumption characterizer assists with measurement during real-time execution of the code. PESTI translates a program into an annotated control flow graph with branch probabilities and simulates this probabilistic model using the Monte Carlo method. We validated the proposed approach by carrying out a set of experiments showing the viability of the extended estimation tool and of the characterizer, which together make energy-consumption estimates for the target processor feasible. The experimental activities were to validate the proposed approach and to compare the measured and estimated results of the PESTI tool and the characterizer on the same embedded hardware platform (ARM7). The experiments were divided into three steps: estimation of the codes with PESTI in single and multiple scenarios, characterization of the code under test, and comparison of the characterization against the PESTI estimates; the results show a difference between the estimated and simulated values and the measured results. The experiments were conducted on an AMD Turion II Dual-Core Mobile M500 (2.20 GHz, 4 GB of RAM) running 32-bit Linux Mint (kernel 2.6.22) and 32-bit Windows 7.
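The Monte Carlo step described above, simulating an annotated control flow graph with branch probabilities, can be sketched generically as follows. This is a hedged illustration, not the PESTI tool itself: the node costs, branch probabilities, and graph layout are all invented for this example.

```python
# Monte Carlo estimation of expected cost over a probability-annotated CFG.
import random

# Each node: (cost, [(successor, probability), ...]); an empty list is an exit.
cfg = {
    "entry": (2, [("loop", 1.0)]),
    "loop":  (5, [("loop", 0.75), ("exit", 0.25)]),
    "exit":  (1, []),
}

def simulate(cfg, start="entry"):
    """Walk one random path through the CFG, summing node costs."""
    node, total = start, 0
    while True:
        cost, succs = cfg[node]
        total += cost
        if not succs:
            return total
        r, acc = random.random(), 0.0
        for nxt, p in succs:
            acc += p
            if r < acc:
                node = nxt
                break

random.seed(0)
runs = [simulate(cfg) for _ in range(10_000)]
estimate = sum(runs) / len(runs)
# Analytically: the loop body runs 1/0.25 = 4 times on average,
# so the expected cost is 2 + 4*5 + 1 = 23; the estimate converges to it.
print(round(estimate, 1))
```

In an energy setting the per-node "cost" would be the measured energy of each basic block, which is exactly what a hardware characterizer provides.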
APA, Harvard, Vancouver, ISO, and other styles
21

Ďuričeková, Daniela. "Statická analýza možných hodnot proměnných v programech v C." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236176.

Full text
Abstract:
Value-range analysis is a static analysis technique that reasons about the values a variable may take at a given program point. It can be used to prove the absence of run-time errors such as out-of-bounds array accesses. Since value-range analysis collects information at each program point, data-flow analysis can be used in association with it. The main goal of this work is to design and implement such a value-range analysis tool. The work begins with an introduction to the topic, an explanation of data-flow and value-range analyses, and a description of abstract interpretation, which provides the formal basis of the analyser. The core of the work is the design, implementation, testing, and evaluation of the analyser. The conclusion reflects on the experience gained in the area of the thesis and discusses possible future development of the designed tool.
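The interval domain this abstract refers to can be illustrated with a minimal sketch. This is not the analyser described in the thesis: the `Interval` class and the toy branch example are invented here to show how joining the results of two control-flow branches over-approximates a variable's possible values.

```python
class Interval:
    """An abstract value: the set of integers between lo and hi."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def join(self, other):
        # Least upper bound, applied where control-flow paths merge.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def add(self, other):
        # Abstract transformer for '+': add the bounds pairwise.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# x starts in [0, 10]; one branch computes x + 1, the other x + 5.
x = Interval(0, 10)
then_branch = x.add(Interval(1, 1))   # [1, 11]
else_branch = x.add(Interval(5, 5))   # [5, 15]
after_merge = then_branch.join(else_branch)
print(after_merge)  # -> [1, 15]
```

An analyser propagates such intervals through the control flow graph until a fixpoint is reached; proving that `after_merge.hi` stays below an array's length rules out an out-of-bounds access at that point.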
APA, Harvard, Vancouver, ISO, and other styles
22

Peyret, Thomas. "Architecture matérielle et flot de programmation associé pour la conception de systèmes numériques tolérants aux fautes." Thesis, Lorient, 2014. http://www.theses.fr/2014LORIS348/document.

Full text
Abstract:
Whether in the automotive sector with its thermal stress or in aerospace and nuclear applications subjected to cosmic, neutron, and gamma radiation, the environment can cause faults to appear in electronic systems. These faults, which can be transient or permanent, lead to erroneous results that are unacceptable in some application contexts. The use of so-called rad-hard components is sometimes ruled out by their high cost or by supply difficulties tied to export rules. This thesis proposes a joint hardware and software approach, independent of the integration technology, for using programmable digital devices in fault-prone environments. Our proposal includes the definition of a Coarse-Grained Reconfigurable Architecture (CGRA) able to execute complete application codes, together with all the hardware and software mechanisms needed to make this architecture fault-tolerant. This is achieved by combining redundancy with dynamic reconfiguration of the CGRA, relying on a library of configurations generated by a complete programming tool chain. This tool chain maps a code represented as a Control and Data Flow Graph (CDFG) onto the architecture, directly producing a large number of different configurations, and makes it possible to exploit the full potential of the architecture. This work, validated through experiments with applications in the field of signal and image processing, has been the subject of publications in international conferences and of patent filings.
APA, Harvard, Vancouver, ISO, and other styles
23

Leslous, Mourad. "Highlight and execute suspicious paths in Android malware." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S090/document.

Full text
Abstract:
The last few years have seen unprecedented growth in the use of mobile devices, especially smartphones. They have become omnipresent in our daily life because of the features they offer, letting users install third-party apps to achieve numerous tasks. Smartphones are mostly governed by the Android operating system, today installed on more than 80% of smartphones. Mobile apps collect a huge amount of data, such as email addresses, contact lists, geolocation, photos, and bank account credentials. Consequently, Android has become a favorable target for cybercriminals, and understanding how Android malware operates and how to detect it has become an important research challenge. Android malware frequently tries to bypass static analysis using techniques such as code obfuscation and dynamic code loading. To overcome these limitations, many analysis techniques have been proposed that execute the app and monitor its behavior at runtime. Nevertheless, malware developers use time and logic bombs to prevent the malicious code from executing except under certain circumstances, so further actions are needed to trigger it and monitor its behavior. Recent approaches try to automatically characterize the malicious behavior by identifying the most suspicious locations in the code and forcing them to execute. They rely heavily on the computation of the application's global control flow graphs (CFGs). However, these CFGs are incomplete because they do not take all types of execution paths into consideration: they analyze only the application code and miss the execution paths that occur when the application calls a framework method that in turn calls another application method. We propose GPFinder, a tool that automatically exhibits execution paths towards suspicious code locations by computing global CFGs that include edges representing explicit and implicit interprocedural calls. It also gives key information about the analyzed application in order to understand how the suspicious code was injected into the application. To validate our approach, we use GPFinder to study a collection of 14,224 malware samples, and we evaluate that 72.69% of the samples have at least one suspicious code location that is only reachable through implicit calls. Triggering approaches mainly use one of two strategies to run a specific portion of the application's code: the first heavily modifies the app to launch the targeted code without keeping the original behavioral context; the second generates input to force the execution flow to take the desired path without modifying the app's code. However, it is sometimes hard to reach a specific code location just by fuzzing the input. For instance, when the application hashes the input data and compares the result to a fixed string to decide which branch of a condition to take, the fuzzing program would have to invert the hash function, which is obviously a hard problem. We propose TriggerDroid, a tool with a twofold goal: force the execution of the suspicious code while keeping its context close to the original one. It crafts the framework events required to launch the right app component and satisfies the triggering conditions necessary to take the desired execution path. To validate our approach, we led an experiment on a dataset of 135 malware samples from 71 different families. Results show that our approach needs further refinement and adaptation to handle special cases due to the highly diverse malware dataset we analyzed. Finally, we give feedback on the experiments we led on different malware datasets and explain our experimental process, and we present the Kharon dataset, a collection of well-documented Android malware that can be used to understand the Android malware landscape.
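The gap between explicit and implicit interprocedural edges discussed in this abstract can be shown with a toy call graph. All method names and edges here are invented; GPFinder's actual CFG computation on Android bytecode is far richer.

```python
# A plain call graph misses the edge where a framework method invokes
# app code back (e.g. a start() method implicitly calling run()).
explicit_calls = {
    ("App.onCreate", "Thread.start"),     # app -> framework (explicit)
}
# Framework semantics the analyser must model explicitly:
implicit_calls = {
    ("Thread.start", "MalwareTask.run"),  # framework -> app (implicit)
}

def reachable(edges, start):
    """Transitive closure of call edges from `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for src, dst in edges:
            if src == node and dst not in seen:
                seen.add(dst)
                stack.append(dst)
    return seen

# With explicit edges only, the suspicious run() method looks unreachable:
assert "MalwareTask.run" not in reachable(explicit_calls, "App.onCreate")
# Adding the implicit edge exposes the hidden execution path:
assert "MalwareTask.run" in reachable(explicit_calls | implicit_calls,
                                      "App.onCreate")
```

This is why an analysis restricted to application code alone can report a suspicious location as dead code when it is in fact triggerable.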
APA, Harvard, Vancouver, ISO, and other styles
24

El, Sibaïe Besognet Rémy. "Programmation Web Réactive dans un cadre typé statiquement pour l'orchestration de contenus multimédia riches." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS169/document.

Full text
Abstract:
The goal of this thesis is to bring new capabilities to Web programming, whose widespread languages and frameworks do not capture all the problems raised by interactions in a Web application. Our solution is a programming language, Pendulum, rooted in the synchronous reactive model à la Esterel and presented as an extension to OCaml. It brings safety and expressiveness, especially when handling multiple interactions. In the first chapter, we give our view of Web programming today, from the standard to the newest frameworks that try to meet programmers' needs through other approaches, notably multitier programming and dataflow concurrency models. In the second chapter, we introduce Pendulum and its constructs, the interoperability with the Web client that distinguishes it from other synchronous languages, and the programming interface that connects it to the host language. In the third and fourth chapters, we present the compilation method, GRC (for GraphCode), which produces a control flow graph from the source synchronous program. We detail the structure of the GRC, the rules for building it, and our static scheduling technique for linearizing parallel branches. We then describe how the execution environment of a synchronous program is generated within the host program. In the fifth chapter, we show why synchronous programming is a benefit on the client side and how its execution model naturally matches that of the Web browser; we show that this advantage makes it possible to react to events more efficiently without any optimization effort. Before concluding, we detail several examples implemented in Pendulum to highlight the expressiveness and safety of synchronous programming on various problems involving multimedia and interactions.
APA, Harvard, Vancouver, ISO, and other styles
25

Laouadi, Rabah. "Analyse du flot de contrôle multivariante : application à la détection de comportements des programmes." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT255.

Full text
Abstract:
Without executing an application, is it possible to predict the target method of a call site? Is it possible to know the types and values that an expression can contain? Is it possible to determine exhaustively the set of behaviors that an application can perform? In all three cases the answer is yes, as long as a certain approximation is accepted. There is a class of algorithms, little known outside academia, that analyze and simulate a program to compute conservatively all the information that an expression may convey. In this thesis, we present these algorithms, called CFAs (Control Flow Analysis), and more specifically the multivariant k-l-CFA algorithm. We combine k-l-CFA with taint analysis, which consists in following tainted sensitive data through the control flow to determine whether it reaches a sink (an outgoing flow of the program). This combination, together with abstract interpretation for values, aims to identify as exhaustively as possible all the behaviors performed by an application. The problem with this approach is the high number of false positives, which requires human post-processing; it is therefore essential to increase the accuracy of the analysis by increasing k. k-l-CFA is notoriously combinatorial, its complexity being exponential in the value of k. The first contribution of this thesis is to design a model and an implementation that are as efficient as possible, carefully separating the static and dynamic parts of the analysis to allow scalability. The second contribution is a new CFA variant based on k-l-CFA, called *-CFA, which makes the parameter k a property of each variant, so that it is increased only in the contexts that justify it. To evaluate the effectiveness of our k-l-CFA implementation, we compare it with the Wala framework. We then validate the taint analysis and behavior detection with the DroidBench benchmark. Finally, we present the contributions of the *-CFA algorithm compared to standard CFA algorithms in the context of taint analysis and behavior detection.
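The taint-analysis component described in this abstract can be sketched on a toy straight-line program. This is a generic illustration, not the thesis's k-l-CFA implementation: the program representation and all variable names are invented.

```python
def propagate_taint(instructions, sources):
    """instructions: list of (dst, [operand, ...]) assignments in order.
    Returns the set of variables tainted after the last instruction."""
    tainted = set(sources)
    for dst, operands in instructions:
        if any(op in tainted for op in operands):
            tainted.add(dst)       # taint flows into the destination
        else:
            tainted.discard(dst)   # overwriting with clean data kills taint
    return tainted

# secret flows into y, then into z through the expression y + w;
# w is assigned from a constant and stays clean.
program = [
    ("y", ["secret"]),
    ("w", ["const"]),
    ("z", ["y", "w"]),
]
tainted = propagate_taint(program, sources={"secret"})
assert "z" in tainted and "w" not in tainted
```

A real analysis runs this propagation over the CFG computed by the CFA, and reports a leak when a tainted variable reaches a sink such as a network send.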
APA, Harvard, Vancouver, ISO, and other styles
26

El, Sibaïe Besognet Rémy. "Programmation Web Réactive dans un cadre typé statiquement pour l'orchestration de contenus multimédia riches." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS169.

Full text
Abstract:
The goal of this thesis is to bring new capabilities to Web programming, whose widespread languages and frameworks do not capture all the problems raised by interactions in a Web application. Our solution is a programming language, Pendulum, rooted in the synchronous reactive model à la Esterel and presented as an extension to OCaml. It brings safety and expressiveness, especially when handling multiple interactions. In the first chapter, we give our view of Web programming today, from the standard to the newest frameworks that try to meet programmers' needs through other approaches, notably multitier programming and dataflow concurrency models. In the second chapter, we introduce Pendulum and its constructs, the interoperability with the Web client that distinguishes it from other synchronous languages, and the programming interface that connects it to the host language. In the third and fourth chapters, we present the compilation method, GRC (for GraphCode), which produces a control flow graph from the source synchronous program. We detail the structure of the GRC, the rules for building it, and our static scheduling technique for linearizing parallel branches. We then describe how the execution environment of a synchronous program is generated within the host program. In the fifth chapter, we show why synchronous programming is a benefit on the client side and how its execution model naturally matches that of the Web browser; we show that this advantage makes it possible to react to events more efficiently without any optimization effort. Before concluding, we detail several examples implemented in Pendulum to highlight the expressiveness and safety of synchronous programming on various problems involving multimedia and interactions.
APA, Harvard, Vancouver, ISO, and other styles
27

Cecchetto, Sylvain. "Analyse du flot de données pour la construction du graphe de flot de contrôle des codes obfusqués." Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0042.

Full text
Abstract:
The increase in cyber-attacks around the world makes malicious code analysis a priority research area. This software uses various protection methods, also known as obfuscations, to bypass antivirus software and slow down analysis. In this context, this thesis provides a solution for building the Control Flow Graph (CFG) of obfuscated binary code. We developed the BOA platform (Basic blOck Analysis), which performs a static analysis of protected binary code. For this, we defined a semantics based on the BINSEC tool, to which we added continuations. These allow us, on the one hand, to control self-modifications and, on the other hand, to simulate the operating system in order to handle system calls and interrupts. The static analysis is done by symbolically executing the binary code and computing the values of the system states using SMT solvers. We thus perform a data-flow analysis to build the CFG by computing transfer addresses. Finally, loops are handled by transforming a CFG into a pushdown automaton. BOA is able to compute dynamic jump addresses, detect opaque predicates, compute return addresses on a stack even if they have been falsified, handle falsified interrupt handlers, rebuild import tables on the fly, and, finally, manage self-modifications.
We validated the correctness of BOA using the Tigress code obfuscator. We then tested BOA on 35 known packers and showed that in 30 cases BOA was able to completely or partially rebuild the initially protected binary. Finally, we detected the opaque predicates protecting XTunnel, a malware used during the 2016 U.S. elections, and we partially unpacked a sample of the Emotet Trojan, which on 14/10/2020 was detected by only 7 of the 63 antivirus engines offered by VirusTotal. This work contributes to the development of tools for static analysis of malicious code. In contrast to dynamic methods, this solution analyzes the binary without executing it, which offers a double advantage: a static approach is easier to deploy, and since the malicious code is not executed, it cannot warn its author.
APA, Harvard, Vancouver, ISO, and other styles
28

Laouadi, Rabah. "Analyse du flot de contrôle multivariante : application à la détection de comportements des programmes." Electronic Thesis or Diss., Montpellier, 2016. http://www.theses.fr/2016MONTT255.

Full text
Abstract:
Without executing an application, is it possible to predict the target method of a call site? Is it possible to know the types and values that an expression can contain? Is it possible to determine exhaustively the set of behaviors that an application can perform? In all three cases, the answer is yes, as long as a certain approximation is accepted. There is a class of algorithms, little known outside of academia, that can simulate and analyze a program to conservatively compute all the information that can be conveyed in an expression.
In this thesis, we present these algorithms, called CFAs (Control Flow Analysis), and more specifically the multivariant k-l-CFA algorithm. We combine the k-l-CFA algorithm with taint analysis, which consists in following tainted sensitive data through the control flow to determine whether it reaches a sink (an outgoing flow of the program). This combination, together with abstract interpretation for values, aims to identify as exhaustively as possible all behaviors performed by an application. The problem with this approach is the high number of false positives, which requires human post-processing. It is therefore essential to increase the accuracy of the analysis by increasing k. k-l-CFA is notoriously combinatorial, its complexity being exponential in the value of k. The first contribution of this thesis is to design a model and an implementation that is as efficient as possible, carefully separating the static and dynamic parts of the analysis to allow scalability. The second contribution of this thesis is to propose a new CFA variant based on k-l-CFA, called *-CFA, which makes the parameter k a property of each variant, increasing it only in the contexts that justify it. To evaluate the effectiveness of our k-l-CFA implementation, we compare it with the Wala framework. We then validate our taint analysis and behavior detection with the DroidBench benchmark. Finally, we present the contributions of the *-CFA algorithm compared to standard CFA algorithms in the context of taint analysis and behavior detection.
APA, Harvard, Vancouver, ISO, and other styles
29

Zanasi, Fabio. "Interacting Hopf Algebras- the Theory of Linear Systems." Thesis, Lyon, École normale supérieure, 2015. http://www.theses.fr/2015ENSL1020/document.

Full text
Abstract:
We present, by generators and equations, the algebraic theory IH whose free model is the category of linear subspaces over a field k. Terms of IH are string diagrams which, for different choices of k, express different kinds of networks and graphical formalisms used by scientists in various fields, such as quantum circuits, electrical circuits, and Petri nets. The equations of IH arise from distributive laws between Hopf algebras, hence the name Interacting Hopf algebras.
The characterisation in terms of subspaces allows one to think of IH as a string-diagrammatic syntax for linear algebra: linear maps, spaces, and their transformations are all faithfully represented in the graphical language, resulting in an alternative, often insightful perspective on the subject matter. As the main application, we use IH to axiomatise a formal semantics of signal processing circuits, for which we study full abstraction and realisability. Our analysis suggests a reflection on the role of causality in the semantics of computing devices.
APA, Harvard, Vancouver, ISO, and other styles
30

Lee, Byeongcheol. "Call graph correction using control flow constraints." Thesis, 2006. http://hdl.handle.net/2152/30457.

Full text
Abstract:
Dynamic optimizers for object-oriented languages collect a variety of profile data to drive optimization decisions. In particular, the dynamic call graph (DCG) informs key structural optimizations such as which methods to optimize and how to optimize them. Unfortunately, current low-overhead call-stack hardware and software sampling methods are subject to sampling bias, which loses accuracy of 40 to 50% when compared with a perfect call graph. This paper introduces DCG correction, a novel approach that uses static and dynamic control-flow graphs (CFGs) to improve DCG accuracy. We introduce the static frequency dominator (FDOM) relation, which extends the dominator relation on the CFG to capture relative execution frequencies and expose static constraints on DCG edges, which we use to correct DCG edge frequencies. Using conservation of flow principles, we further show how to use dynamic CFG basic block profiles to correct DCG edge frequencies intraprocedurally and interprocedurally. We implement and evaluate DCG correction in Jikes RVM on the SPEC JVM98 and DaCapo benchmarks. Default DCG sampling attains an average accuracy of 52-59% compared with perfect, whereas FDOM correction improves average accuracy to 64-68%, while adding 0.2% average overhead. The dynamic correction raises accuracy to 85% on average, while adding 1.2% average overhead. We then provide dynamically corrected DCGs to the inliner with mixed results -1% average degradations and improvements across a variety of configurations. However, prior work shows that increased DCG accuracy in production VMs has benefits. We believe that high-accuracy DCGs will become more important in the future as the complexity and modularity of object-oriented programs increases.
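The FDOM relation described above extends the classical dominator relation on the CFG. As background, the sketch below computes ordinary dominators with the standard iterative data-flow algorithm on a toy CFG; it is not the thesis's frequency-dominator construction itself.

```python
# Iterative dominator computation: node d dominates node n if every path
# from the entry to n passes through d. FDOM builds on this relation to
# expose relative execution frequencies.

def dominators(cfg, entry):
    """cfg: {node: [successors]}; returns {node: set of its dominators}."""
    nodes = set(cfg)
    preds = {n: [] for n in nodes}
    for n, succs in cfg.items():
        for s in succs:
            preds[s].append(n)
    dom = {n: set(nodes) for n in nodes}   # start from the full set...
    dom[entry] = {entry}                   # ...except at the entry
    changed = True
    while changed:                         # iterate to a fixed point
        changed = False
        for n in nodes - {entry}:
            incoming = (set.intersection(*(dom[p] for p in preds[n]))
                        if preds[n] else set())
            new = {n} | incoming
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

# Diamond CFG: A branches to B and C, which rejoin at D.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dominators(cfg, "A")["D"])  # only A (and D itself) dominate the join
```

On this diamond, neither B nor C dominates D, which is exactly the situation where frequency information beyond plain dominance becomes useful for correcting DCG edge weights.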
APA, Harvard, Vancouver, ISO, and other styles
31

Gu, Chung Cheng, and 顧峻誠. "RTL Simulation on Control Data Flow Graph." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/34508739877326789518.

Full text
Abstract:
Master's thesis, National Tsing Hua University, Department of Electrical Engineering, 89. In the top-down design flow, the Register Transfer Level (RTL) design is one of the most widely used design representations for digital ICs. It is therefore important to ensure that the design functionality is correct at this level. As ASIC designs grow more complex, functional verification takes correspondingly more effort, and we often need to debug an RTL code that has been proven incorrect. In this thesis, we present an RTL simulation platform based on the Control Data Flow Graph (CDFG). Upon this platform, RTL diagnosis or code coverage analysis can be performed more efficiently in the future. The proposed simulation mechanism is a cycle-based behavioral simulation. It incorporates both a parse tree and a CDFG to represent the RTL design: the parse tree represents the code structure, while the CDFG represents how the control signals interact with the data flow. Based on this dual representation, we can perform simulation using a path enumeration technique, in which we compute the value of each register and output signal by evaluating the computational path under execution for each clock cycle. As an experiment, we tested this program on a Greatest Common Divisor design. In addition to producing correct output responses and execution traces, the program can also report all computational paths to aid the subsequent design debugging process and/or testbench generation process.
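The thesis above demonstrates its cycle-based simulator on a GCD design. As a generic illustration of cycle-based simulation (the registers, control, and subtract-based datapath here are hypothetical, not the thesis's CDFG machinery), each clock cycle the next-state values are computed from the current registers, and then all registers update together:

```python
# Cycle-based simulation of a subtract-based GCD datapath: combinational
# logic picks the computational path for this cycle, then all registers
# latch simultaneously on the clock edge.

def gcd_cycles(a, b):
    """Simulate the datapath; return (gcd, number of clock cycles)."""
    regs = {"x": a, "y": b}
    cycles = 0
    while regs["y"] != 0:
        # combinational logic: select the path executed this cycle
        if regs["x"] >= regs["y"]:
            nxt = {"x": regs["x"] - regs["y"], "y": regs["y"]}  # subtract path
        else:
            nxt = {"x": regs["y"], "y": regs["x"]}              # swap path
        regs = nxt       # clock edge: registers update together
        cycles += 1
    return regs["x"], cycles

print(gcd_cycles(12, 8))  # (4, 5): gcd is 4, reached after five cycles
```

Recording which path (subtract or swap) fires on each cycle is precisely the kind of per-cycle computational-path trace the abstract describes for debugging and testbench generation.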
APA, Harvard, Vancouver, ISO, and other styles
32

Hsieh, Ming-Shiang, and 謝明祥. "Scenario Control Graph: A Model to Control the Flow of Temporal Scenarios in Multimedia." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/31494304796405529173.

Full text
Abstract:
Master's thesis, National Taiwan University, Department of Electrical Engineering, 85. In this article, we focus on the issues of temporal specification in multimedia and hypermedia. In past years, many temporal models have been proposed for multimedia, such as the Refined Temporal Model, Petri nets, the interval-based approach, basic hierarchical specification, and reference points. Among them, the Refined Temporal Model (RTM), based on graphs and visualization, is a powerful and simple one. But RTM also has some disadvantages. First, it cannot be used to express the hierarchical structure of a multimedia document. Second, when many user interactions have to be presented, the flow of scenarios becomes more complex than RTM can handle. Based on these observations, a new Scenario Control Graph (SCG) model is proposed.
APA, Harvard, Vancouver, ISO, and other styles
33

Hsu, Chung-Han, and 許中瀚. "On the Usability of Control Flow Graph for Detecting Android Malware." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/43208484173393719995.

Full text
Abstract:
Master's thesis, National Taiwan Ocean University, Department of Computer Science and Engineering, 102. The rapid development of mobile devices makes them a valuable target for malicious attackers. Therefore, many tools have been developed to detect malicious applications that attempt to compromise mobile devices. In this paper, we discuss the usability of control flow graphs (CFGs) for detecting malicious applications. The CFG is the representation used by the androguard tool to construct malicious signatures. We first develop an algorithm to automatically generate CFG-based signatures. However, although the resulting signatures have a good detection rate (95%+), they also bring approximately 40% false positives. To maintain both detection efficiency and accuracy, we adopt another static mechanism that detects malicious applications by their requested permissions. By combining the two static solutions, we achieve the same detection rate (95%+) while reducing the false positive rate to less than 5%.
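The abstract above cuts false positives by requiring agreement between a CFG-signature detector and a permission-based check. The sketch below shows one plausible form of such a combination rule; the permission list, threshold, and conjunction policy are all hypothetical illustrations, not the thesis's actual classifier.

```python
# Combining two static detectors: flag an app only when it both matches a
# CFG-based signature AND requests enough suspicious permissions. The
# specific permissions and threshold here are invented for illustration.

SUSPICIOUS_PERMISSIONS = {"SEND_SMS", "READ_CONTACTS", "RECEIVE_BOOT_COMPLETED"}

def flag_malicious(matches_cfg_signature, requested_permissions, threshold=2):
    """Require both a CFG-signature hit and a suspicious permission profile."""
    hits = len(SUSPICIOUS_PERMISSIONS & set(requested_permissions))
    return matches_cfg_signature and hits >= threshold

# benign app that happens to match a CFG signature -> not flagged
print(flag_malicious(True, ["INTERNET"]))                   # False
# signature match plus suspicious permission profile -> flagged
print(flag_malicious(True, ["SEND_SMS", "READ_CONTACTS"]))  # True
```

Requiring both detectors to fire is what trades a little recall for the large drop in false positives reported in the abstract.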
APA, Harvard, Vancouver, ISO, and other styles
34

Islam, Shariq. "Code Based Analysis of Object-Oriented Systems Using Extended Control Flow Graph." Thesis, 2013. http://ethesis.nitrkl.ac.in/4753/1/109CS0324.pdf.

Full text
Abstract:
The basic features of object-oriented software, i.e., encapsulation, inheritance, and polymorphism, make it difficult to apply traditional testing methods, which have been successfully implemented for procedural systems. One of the most commonly used examples of white-box testing is basis path testing, which ensures that every path of a program is executed at least once. The Control Flow Graph (CFG) is a very well-known model used for the identification of basis paths in procedural systems. McCabe's cyclomatic complexity (CC) metric determines the number of linearly independent paths through a piece of software, using the control flow graph (CFG) to determine a set of test cases that will cause every executable statement to be executed at least once. The major challenge here is that calculating cyclomatic complexity is easy for procedural systems but, due to the basic properties of object-oriented systems, difficult for them. My work implements a new model, named the Extended Control Flow Graph (ECFG), for code-based analysis of object-oriented software. The ECFG is a layered CFG in which nodes refer to methods rather than statements. My work also implements the calculation of a new metric, Extended Cyclomatic Complexity (E-CC).
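McCabe's metric mentioned above is computable directly from a CFG as CC = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. A minimal sketch for a single procedure (P = 1) follows; the thesis's E-CC extension for layered, method-level graphs is not reproduced here.

```python
# Cyclomatic complexity from a control flow graph: CC = E - N + 2P.
# For one procedure, P = 1, and CC equals the number of linearly
# independent paths (hence the size of a basis path test set).

def cyclomatic_complexity(cfg, components=1):
    """cfg: {node: [successors]} for a single procedure."""
    nodes = len(cfg)
    edges = sum(len(succs) for succs in cfg.values())
    return edges - nodes + 2 * components

# if/else diamond: entry branches to two blocks that rejoin, then exit.
cfg = {"entry": ["then", "else"], "then": ["join"], "else": ["join"],
       "join": ["exit"], "exit": []}
print(cyclomatic_complexity(cfg))  # 2: two linearly independent paths
```

With 5 nodes and 5 edges the diamond gives CC = 5 - 5 + 2 = 2, matching the two paths (through `then` and through `else`) that basis path testing must cover.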
APA, Harvard, Vancouver, ISO, and other styles
35

"Construction of GCCFG for Inter-procedural Optimizations in Software Managed Manycore (SMM)." Master's thesis, 2014. http://hdl.handle.net/2286/R.I.26839.

Full text
Abstract:
Software Managed Manycore (SMM) architectures, in which each core has only a scratchpad memory (instead of caches), are a promising solution for scaling the memory hierarchy to hundreds of cores. However, in these architectures, the code and data of the tasks mapped to the cores must be explicitly managed in software by the compiler. State-of-the-art compiler techniques for SMM architectures require inter-procedural information and analysis. A call graph of the program does not have enough information, while a Global CFG, i.e., the combination of all the control flow graphs of the program, has too much information and becomes too big. As a result, most new techniques have informally defined and used the GCCFG (Global Call Control Flow Graph), a whole-program representation which captures the control flow as well as function call information in a succinct way, to perform inter-procedural analysis. However, how to construct it has not been shown yet. We find that for several simple call and control flow graphs, constructing the GCCFG is relatively straightforward, but there are several cases in common applications where a unique graph transformation is needed in order to formally and correctly construct the GCCFG. This paper fills this gap and develops graph transformations to allow the construction of the GCCFG in (almost) all cases. Our experiments show that by using the succinct representation (GCCFG) rather than the elaborate representation (Global CFG), the compilation time of a state-of-the-art code management technique [4] can be improved by an average of 5X, and that of stack management [20] by an average of 4X. Dissertation/Thesis. Master's Thesis, Computer Science, 2014.
APA, Harvard, Vancouver, ISO, and other styles
36

WENG, CI-ZONG, and 翁慈宗. "Graph theory with application on software engineering-automatic layout algorithm of data flow diagram and complexity of system control flow." Thesis, 1986. http://ndltd.ncl.edu.tw/handle/38527139044838426563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Mohamed, ATEF. "Software Architecture-Based Failure Prediction." Thesis, 2012. http://hdl.handle.net/1974/7538.

Full text
Abstract:
Depending on the role of software in everyday life, the cost of a software failure can sometimes be unaffordable. During system execution, errors may occur in system components and failures may be manifested due to these errors. These errors differ with respect to their effects on system behavior and consequent failure manifestation manners. Predicting failures before their manifestation is important to assure system resilience. It helps avoid the cost of failures and enables systems to perform corrective actions prior to failure occurrences. However, effective runtime error detection and failure prediction techniques encounter a prohibitive challenge with respect to the control flow representation of large software systems with intricate control flow structures. In this thesis, we provide a technique for failure prediction from runtime errors of large software systems. Aiming to avoid the possible difficulties and inaccuracies of the existing Control Flow Graph (CFG) structures, we first propose a Connection Dependence Graph (CDG) for control flow representation of large software systems. We describe the CDG structure and explain how to derive it from program source code. Second, we utilize the proposed CDG to provide a connection-based signature approach for control flow error detection. We describe the monitor structure and present the error checking algorithm. Finally, we utilize the detected errors and erroneous state parameters to predict failure occurrences and modes during system runtime. We craft runtime signatures based on these errors and state parameters. Using system error and failure history, we determine a predictive function (an estimator) for each failure mode based on these signatures. Our experimental evaluation for these techniques uses a large open-source software (PostgreSQL 8.4.4 database system). The results show highly efficient control flow representation, error detection, and failure prediction techniques. 
This work contributes to software reliability by providing a simple and accurate control flow representation and utilizing it to detect runtime errors and predict failure occurrences and modes with high accuracy. Thesis (Ph.D., Computing), Queen's University, 2012-09-25.
APA, Harvard, Vancouver, ISO, and other styles
38

Morkhande, Rahul Raj Kumar. "Characterization of Divergence resulting from Workload, Memory and Control-Flow behavior in GPGPU Applications." Thesis, 2018. https://etd.iisc.ac.in/handle/2005/5453.

Full text
Abstract:
GPGPUs have emerged as high-performance computing platforms and are used to boost the performance of general non-graphics applications from various scientific domains. These applications span varied areas like social networks, defense, bioinformatics, data science, medical imaging, fluid dynamics, etc. [3]. In order to efficiently exploit the computing potential of these accelerators, the application should be well mapped to the underlying architecture. As a result, different characteristics of behavior can be seen in applications running on GPGPUs. Applications are characterized as regular or irregular based on their behavior. Regular applications typically operate on array-like data structures whose run-time behavior can be statically predicted, whereas irregular applications operate on pointer-based data structures like graphs, trees, etc. [2]. Irregular applications are generally characterized by a high degree of data-dependent control flow and memory accesses. In the literature, we find various efforts to characterize such applications, particularly the irregular ones, which exhibit behavior that results in run-time bottlenecks. Burtscher et al. [2] investigated various irregular GPGPU applications by quantifying control flow and memory access behaviors on a real GPU device. Molly et al. [4] analyzed performance aspects of these behaviors on a cycle-accurate GPU simulator [1]. Qiumin Xu et al. [5] studied execution characteristics of graph-based applications on GPGPUs. All of these works focused on characterizing divergence at the kernel level but not at the thread level. In this work, we provide an in-depth characterization of three divergences resulting from 1) workload distribution, 2) memory access, and 3) control flow behaviors at different levels of the GPU thread hierarchy, with the purpose of analyzing and quantifying divergence characteristics at the warp, thread block, and kernel level.
In Chapter 1, we review certain aspects of CPUs and GPUs and how they differ from each other. Then we discuss various characteristics of GPGPU applications. In Chapter 2, we provide background on GPU architectures, CUDA programming models, and the GPU's SIMD execution model. We briefly explain key programming concepts of CUDA, such as the GPU thread hierarchy and the different addressable memory spaces. We describe various behaviors that cause divergence across parallel threads. We then review related work in the context of the divergences studied here, followed by this thesis's contributions. In Chapter 3, we explain our methodology for quantifying workload and branch divergence across threads at various levels of the thread organization. We then present our characterization methodology for quantifying divergent aspects of memory instructions. In Chapter 4, we present our chosen benchmarks, taken from various suites, and show the baseline GPGPU-Sim configuration we used to evaluate our methodology. We then discuss our characterization results for workload and branch divergence at the warp, thread-block, and kernel level for some interesting kernels. We examine the divergence behaviors of graph-based applications and show how they vary across threads. We present our characterization of the memory access behaviors of irregular applications using an instruction classification based on spatial locality. We then discuss the relationship between throughput and divergence measures by studying their correlation coefficients. To summarize, we quantified and analyzed control flow and workload divergence across threads at the warp, thread-block, and kernel level for a diverse collection of 12 GPGPU applications which exhibit both regular and irregular behaviors. Using a thread's hardware utilization efficiency and a measure we call "average normalized instructions per thread", we quantify branch and workload divergence, respectively.
Our characterization technique for memory divergence classifies memory instructions into four different groups based on the intra-warp spatial locality of instructions. We then quantify the impact of memory divergence using the behavior of the GPU L1 data cache.
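A simplified form of the warp-level utilization measure discussed above can be computed from per-instruction active-lane counts: average the fraction of active lanes per issued instruction (warp size is 32 on real GPUs; the traces below are hypothetical, not from the thesis's benchmarks).

```python
# Branch-divergence proxy: hardware utilization of a warp, computed as
# the mean fraction of active SIMD lanes over the instructions it issues.
# A fully converged warp scores 1.0; divergence drags the score down
# because the warp serializes both sides of a data-dependent branch.

def warp_utilization(active_lane_counts, warp_size=32):
    """active_lane_counts: active lanes (0..warp_size) per issued instruction."""
    if not active_lane_counts:
        return 0.0
    return sum(active_lane_counts) / (warp_size * len(active_lane_counts))

# fully converged warp: every lane active at each instruction
print(warp_utilization([32, 32, 32, 32]))  # 1.0
# divergent if/else: half the lanes idle on each serialized side
print(warp_utilization([32, 16, 16, 32]))  # 0.75
```

Aggregating this per-warp score over thread blocks and kernels gives the kind of multi-level divergence profile the thesis reports.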
APA, Harvard, Vancouver, ISO, and other styles
39

Seidler, Steffen. "Über Minoren gerichteter Graphen." Master's thesis, 2010. https://tud.qucosa.de/id/qucosa%3A25569.

Full text
Abstract:
Since 1983, the publication series "Graph Minors" by N. Robertson and P.D. Seymour has essentially founded minor theory, with powerful tools such as tree decomposition and far-reaching results such as the graph minor theorem. For directed graphs, however, no unified minor theory exists yet, and various approaches are systematized in this work. Several directed versions of tree decomposition (directed tree decomposition after B. Reed; arboreal, D-, and DAG decompositions) are examined under unified aspects. D-width is particularly promising here. Close connections to two directed cops-and-robbers games are considered under analogous aspects and serve as important tools. The central notion of a minor is essentially defined for undirected graphs, and a directed version raises several problems, which are examined. In "Directed Tree-Width" (2001), T. Johnson, N. Robertson, P.D. Seymour, and R. Thomas proposed a compromise: by restricting the allowed contractions, the resulting minor notion should remain compatible with some fundamental requirements while still being a powerful tool. This approach is pursued systematically against a list of requirements, with restrictions considered step by step. The directed version of topological minors is particularly promising here. The minor theory of directed graphs is applied to reducible flow graphs. Essential results are constructions of arboreal and D-decompositions of width < 2, as well as counterexamples to the boundedness of DAG-width.
Analogous results follow for the respective directed cops-and-robbers games.
APA, Harvard, Vancouver, ISO, and other styles
40

Shao, Danhua. "Application of local semantic analysis in fault prediction and detection." Thesis, 2010. http://hdl.handle.net/2152/ETD-UT-2010-05-1086.

Full text
Abstract:
To improve the quality of software systems, change-based fault prediction and scope-bounded checking have been used to predict or detect faults during software development. In fault prediction, changes to program source code, such as added or deleted lines, are used to predict potential faults. In fault detection, scope-bounded checking of programs is an effective technique for finding subtle faults. The central idea is to check all program executions up to a given bound. The technique takes two basic forms: scope-bounded static checking, where all bounded executions of a program are transformed into a formula that represents the violation of a correctness property and any solution to the formula represents a counterexample; or scope-bounded testing, where a program is tested against all (small) inputs up to a given bound on the input size. Although the accuracy of change-based fault prediction and scope-bounded checking has been evaluated experimentally, both have effectiveness and efficiency limitations. Previous change-based fault predictions only consider the code modified by a change while ignoring the code impacted by it. Scope-bounded testing only concerns the correctness specifications, and the internal structure of a program is ignored. Although scope-bounded static checking considers the internal structure of programs, formulae translated from structurally complex programs might choke the backend analyzer and fail to give a result within a reasonable time. To improve the effectiveness and efficiency of these approaches, we introduce local semantic analysis into change-based fault prediction and scope-bounded checking. We use data-flow analysis to disclose internal dependencies within a program. Based on these dependencies, we identify code segments impacted by a change and apply fault prediction metrics to the impacted code.
Empirical studies with real data showed that semantic analysis is effective and efficient in predicting faults in large-size changes or short-interval changes. While generating inputs for scope-bounded testing, we use control flow to guide test generation so that code coverage can be achieved with minimal tests. To increase the scalability of scope-bounded checking, we split a bounded program into smaller sub-programs according to data-flow and control-flow analysis. Thus the problem of scope-bounded checking for the given program reduces to several sub-problems, where each sub-problem requires the constraint solver to check a less complex formula, thereby likely reducing the solver's overall workload. Experimental results show that our approach provides significant speed-ups over the traditional approach.
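Scope-bounded testing, as discussed above, checks a program against every input up to a given size bound. A minimal sketch: exhaustively test a deliberately faulty absolute-value function (a hypothetical example, not from the thesis) over all integers within the bound and collect counterexamples.

```python
# Scope-bounded testing over integer inputs: run the function on every
# input in [-bound, bound] and report the inputs that violate the spec.

def scope_bounded_check(fn, spec, bound):
    """Return all inputs in [-bound, bound] on which fn violates spec."""
    return [x for x in range(-bound, bound + 1) if not spec(fn(x), x)]

def buggy_abs(x):
    # hypothetical faulty implementation: mishandles x == -1
    return -x if x < -1 else x

counterexamples = scope_bounded_check(
    buggy_abs, lambda out, x: out == abs(x), bound=3)
print(counterexamples)  # [-1]: the bound-3 sweep exposes the fault
```

This illustrates the "small scope" bet the technique relies on: many subtle faults, like this off-by-one boundary case, already manifest on tiny inputs, so bounded exhaustive testing finds them without any large-input search.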
APA, Harvard, Vancouver, ISO, and other styles