
Dissertations / Theses on the topic 'Program of execution'

Consult the top 50 dissertations / theses for your research on the topic 'Program of execution.'


1

Kiriansky, Vladimir L. (Vladimir Lubenov) 1979. "Secure execution environment via program shepherding." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29660.

Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (p. 77-82).
We present program shepherding, a method for monitoring control flow transfers during program execution in order to enforce a security policy. Program shepherding provides three basic techniques as building blocks for security policies. First, program shepherding can restrict execution privileges on the basis of code origins. This distinction can ensure that malicious code masquerading as data is never executed, thwarting a large class of security attacks. Second, shepherding can restrict control transfers based on instruction type, source, and target. Finally, shepherding guarantees that sandboxing checks around any program operation will never be bypassed. Security attacks use inevitable bugs in trusted binaries to coerce a program into performing actions that it was never intended to perform. We use static and dynamic analyses to automatically build a custom security policy for a target program, which specifies the program's execution model. An accurate execution model restricts control flow transfers only to the intended ones and can thwart attacker attempts to alter program execution. For example, shepherding will allow execution of shared library code only through declared entry points. Finer specifications can be extracted from high-level information present in programs' source code - for example, which values a function pointer may take. Program shepherding will allow indirect calls only to their known targets, and function returns only to known callers. These analyses build a strict enough policy to prevent all deviations from the program's control flow graph and nearly all violations of the calling convention. This technique renders most security vulnerabilities unexploitable and thwarts current and future security attacks.
We present an efficient implementation of program shepherding's capabilities in the DynamoRIO [6, 7] runtime code modification system. The resulting system imposes minimal performance overhead, operates on unmodified binaries, and requires no special hardware or operating system support.
by Vladimir L. Kiriansky.
M.Eng.
2

Jeffery, Clinton Lewis. "A framework for monitoring program execution." Diss., The University of Arizona, 1993. http://hdl.handle.net/10150/186320.

Abstract:
Program execution monitors are used to improve human beings' understanding of program run-time behavior in a variety of important applications such as debugging, performance tuning, and the study of algorithms. Unfortunately, many program execution monitors fail to provide adequate understanding of program behavior, and progress in this area of systems software has been slow due to the difficulty of the task of writing execution monitors. In high-level programming languages the task of writing execution monitors is made more complex by features such as non-traditional control flow and complex semantics. Additionally, in many languages, such as the Icon programming language, a significant part of the execution behavior that various monitors need to observe occurs in the language run-time system code rather than the source code of the monitored program. This dissertation presents a framework for monitoring Icon programs that allows rapid development of execution monitors in the Icon language itself. Monitors have full source-level access to the target program with which to gather and process execution information, without intrusive modification to the target executable. In addition, the framework supports the monitoring of implicit run-time system behavior crucial to program understanding. In order to demonstrate its practicality, the framework has been used to implement a collection of program visualization tools. Program visualization provides graphical feedback about program execution that allows human beings to deal with volumes of data more effectively than textual techniques. Ideally, the user specifies program execution controls in such tools directly in the graphics used to visualize execution, employing the same visual language that is used to render the output. Some monitors that exhibit this characteristic are presented.
3

Bhatti, Muhammad Afzal. "An incremental execution environment." Thesis, University of Kent, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.328140.

4

Parker, Gregory M. "NestedVision3D-trace, program execution visualization with NestedVision3D." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ38401.pdf.

5

Yessenov, Kuat T. "Program synthesis from execution traces and demonstrations." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106098.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. Cataloged from PDF version of thesis. Includes bibliographical references (pages 116-121).
In this thesis, we introduce an architecture for programming productivity tools that relies on a database of execution traces. Our database enables a novel user interaction model for a programmer assistant based on short demonstrations of framework usages in applications. By matching the demonstration traces against the complete traces in the database, our system infers the code snippets for the demonstrated feature, including the missing set-up steps. We develop techniques for an interactive trace matching process, and evaluate them on a sample of Swing applications. We show that our system synthesizes code for several features of the Eclipse platform from traces of existing Eclipse plug-ins, and that the generated code is comparable in quality to the tutorial code.
by Kuat Yessenov.
Ph. D.
6

Effinger, Robert T. "Risk-minimizing program execution in robotic domains." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/71465.

Abstract:
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 153-161).
In this thesis, we argue that autonomous robots operating in hostile and uncertain environments can improve robustness by computing and reasoning explicitly about risk. Autonomous robots with a keen sensitivity to risk can be trusted with critical missions, such as exploring deep space and assisting on the battlefield. We introduce a novel, risk-minimizing approach to program execution that utilizes program flexibility and estimation of risk in order to make runtime decisions that minimize the probability of program failure. Our risk-minimizing executive, called Murphy, utilizes two forms of program flexibility, 1) flexible scheduling of activity timing, and 2) redundant choice between subprocedures, in order to minimize two forms of program risk, 1) exceptions arising from activity failures, and 2) exceptions arising from timing constraint violations in a program. Murphy takes two inputs, a program written in a nondeterministic variant of the Reactive Model-based Programming Language (RMPL) and a set of stochastic activity failure models, one for each activity in a program, and computes two outputs, a risk-minimizing decision policy and value function. The decision policy informs Murphy which decisions to make at runtime in order to minimize risk, while the value function quantifies risk. In order to execute with low latency, Murphy computes the decision policy and value function offline, as a compilation step prior to program execution. In this thesis, we develop three approaches to RMPL program execution. First, we develop an approach that is guaranteed to minimize risk. For this approach, we reason probabilistically about risk by framing program execution as a Markov Decision Process (MDP). Next, we develop an approach that avoids risk altogether. For this approach, we frame program execution as a novel form of constraint-based temporal reasoning. Finally, we develop an execution approach that trades optimality in risk avoidance for tractability. For this approach, we leverage prior work in hierarchical decomposition of MDPs in order to mitigate complexity. We benchmark the tractability of each approach on a set of representative RMPL programs, and we demonstrate the applicability of the approach on a humanoid robot simulator.
by Robert T. Effinger, IV.
Sc.D.
7

Fadeev, Alexander. "Optimal execution for portfolio transactions." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/42352.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design and Management Program, February 2007. Includes bibliographical references.
In my thesis I explore the problem of optimizing trading strategies for complex portfolio transitions. Institutional investors run into this issue during periodic portfolio rebalancing or transition between asset managers. The costs of rebalancing can be broadly broken into trading costs (both the transaction cost and the market impact) and the opportunity costs of delaying the execution and bearing the risk of current-to-target portfolio divergence. This thesis proposes a methodology for measuring the opportunity cost as well as a strategy that minimizes the proposed measure through optimal portfolio transition execution. The benefits from the proposed trading strategy are benchmarked against the industry standard portfolio trading practices.
by Alexander Fadeev.
S.M.
8

Wilkinson, David. "Program execution monitoring : software structures and architectural support." Thesis, University of Liverpool, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316611.

9

Baltas, Nikolaos. "Software performance engineering using virtual time program execution." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/12681.

Abstract:
In this thesis we introduce a novel approach to software performance engineering that is based on the execution of code in virtual time. Virtual time execution models the timing-behaviour of unmodified applications by scaling observed method times or replacing them with results acquired from performance model simulation. This facilitates the investigation of "what-if" performance predictions of applications comprising an arbitrary combination of real code and performance models. The ability to analyse code and models in a single framework enables performance testing throughout the software lifecycle, without the need to extract performance models from code. This is accomplished by forcing thread scheduling decisions to take into account the hypothetical time-scaling or model-based performance specifications of each method. The virtual time execution of I/O operations or multicore targets is also investigated. We explore these ideas using a Virtual EXecution (VEX) framework, which provides performance predictions for multi-threaded applications. The language-independent VEX core is driven by an instrumentation layer that notifies it of thread state changes and method profiling events; it is then up to VEX to control the progress of application threads in virtual time on top of the operating system scheduler. We also describe a Java Instrumentation Environment (JINE), demonstrating the challenges involved in virtual time execution at the JVM level. We evaluate the VEX/JINE tools by executing client-side Java benchmarks in virtual time and identifying the causes of deviations from observed real times. Our results show that VEX and JINE transparently provide predictions for the response time of unmodified applications with typically good accuracy (within 5-10%) and low simulation overheads (25-50% additional time).
We conclude this thesis with a case study that shows how models and code can be integrated, thus illustrating our vision on how virtual time execution can support performance testing throughout the software lifecycle.
10

Radhakrishnan, Ramesh. "Microarchitectural techniques to enable efficient Java execution /." Digital version accessible at:, 2000. http://wwwlib.umi.com/cr/utexas/main.

11

Melhus, Lars Kirkholt. "Analyzing Contextual Bias of Program Execution on Modern CPUs." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-22997.

Abstract:
Seemingly innocuous properties of the environment, such as the contents of system environment variables, or different link orders, can impact the performance of computer programs. Variations in external properties like these can bias programs towards certain configurations. These effects have been shown to be a significant issue in performance analysis, but unpredictable and difficult to deal with.
This thesis focuses on the underlying reasons for bias effects that can be experienced, for example, by changing environment variables or using different link orders. Both of these factors can lead to differences in the memory layout of either code or data, which in turn interacts with various hardware mechanisms. Through experimentation and careful measurements using performance counters, we identify several potential sources of bias on the Intel Core i7 "Ivy Bridge" architecture. Limitations imposed by the Loop Stream Detector are revealed, along with effects from 4K address aliasing. We show that bias is in fact not completely unpredictable, and discuss measures for avoiding it.
Our case studies show that even highly optimized Fourier transform and linear algebra libraries are prone to bias. We find that stack alignment significantly affects the performance of FFTW, and that in some cases a performance gain of more than 30% can be achieved by avoiding address aliasing in ATLAS' matrix-vector multiplication. Our results show that an awareness of program layout in memory is important, especially for users and developers of performance-critical software.
12

Barr, Kenneth C. (Kenneth Charles) 1978. "Summarizing multiprocessor program execution with versatile, microarchitecture-independent snapshots." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/38224.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 131-137).
Computer architects rely heavily on software simulation to evaluate, refine, and validate new designs before they are implemented. However, simulation time continues to increase as computers become more complex and multicore designs become more common. This thesis investigates software structures and algorithms for quickly simulating modern cache-coherent multiprocessors by amortizing the time spent to simulate the memory system and branch predictors. The Memory Timestamp Record (MTR) summarizes the directory and cache state of a multiprocessor system in a compact data structure. A single MTR snapshot is versatile enough to reconstruct the microarchitectural state resulting from various coherence protocols and cache organizations. The MTR may be quickly updated by each simulated processor during a fast-forwarding phase and optionally stored off-line for reuse. To fill large branch prediction tables, we introduce Branch Predictor-based Compression (BPC) which compactly stores a branch trace so that it may be used to fill in any branch predictor structure. An entire BPC trace requires less space than single discrete predictor snapshots, and it may be decompressed 3-6x faster than performing functional simulation.
by Kenneth C. Barr.
Ph.D.
13

Kumar, Tushar. "Characterizing and controlling program behavior using execution-time variance." Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/55000.

Abstract:
Immersive applications, such as computer gaming, computer vision and video codecs, are an important emerging class of applications with QoS requirements that are difficult to characterize and control using traditional methods. This thesis proposes new techniques reliant on execution-time variance to both characterize and control program behavior. The proposed techniques are intended to be broadly applicable to a wide variety of immersive applications and are intended to be easy for programmers to apply without needing to gain specialized expertise. First, we create new QoS controllers that programmers can easily apply to their applications to achieve desired application-specific QoS objectives on any platform or application data-set, provided the programmers verify that their applications satisfy some simple domain requirements specific to immersive applications. The controllers adjust programmer-identified knobs every application frame to effect desired values for programmer-identified QoS metrics. The control techniques are novel in that they do not require the user to provide any kind of application behavior models, and are effective for immersive applications that defy the traditional requirements for feedback controller construction. Second, we create new profiling techniques that provide visibility into the behavior of a large complex application, inferring behavior relationships across application components based on the execution-time variance observed at all levels of granularity of the application functionality. Additionally for immersive applications, some of the most important QoS requirements relate to managing the execution-time variance of key application components, for example, the frame-rate. 
The profiling techniques not only identify and summarize behavior directly relevant to the QoS aspects related to timing, but also indirectly reveal non-timing related properties of behavior, such as the identification of components that are sensitive to data, or those whose behavior changes based on the call-context.
14

Wang, Jian. "Pointer analysis in Java programs using execution path information /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?CSED%202008%20WANG.

15

Mehrman, John M. "Centralized execution, decentralized control : why we go slow in defense acquisition." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118538.

Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, 2018. Cataloged from PDF version of thesis. Includes bibliographical references (pages 121-125).
The slow pace of fielding new defense weapon systems is allowing potential adversaries to catch up to the technological advantage the U.S. has maintained since World War II. Despite hundreds of studies, and a near constant state of "acquisition reform", the problem continues. This research analyzed the defense acquisition process as a socio-technical system, focusing on the source selection process as a subset of the Defense Acquisition System (DAS) for modeling purposes to investigate the value of the separation of contracting and program management authorities. Network graphs showed how Conway's law predicted the effect of the separation of authorities on the topographic structure of the source selection process, with a high network distance between the separate authorities. An agent-based model was built that showed a 26% schedule cost (112 days) because of the separation of authorities. The benefit of the separation was investigated by scoring the comments received by the Multi-Functional Independent Review Team (MIRT) for five different source selections; less than 1% of comments had a likely impact on the decision and less than 4% had a likely or highly likely impact on protestability. The results showed that while there is a small benefit to the separation of authorities currently implemented in the source selection process, the cost is very high. Enough data and evidence were generated to recommend taking steps to better structurally combine the two authorities and better integrate source selection expertise into the process.
by John M. Mehrman.
S.M. in Engineering and Management
16

Baumstark, Lewis Benton Jr. "Extracting Data-Level Parallelism from Sequential Programs for SIMD Execution." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4823.

Abstract:
The goal of this research is to retarget multimedia programs written in sequential languages (e.g., C) to architectures with data-parallel execution capabilities. Image processing algorithms often have a high potential for data-level parallelism, but the artifacts imposed by the sequential programming language (e.g., loops, pointer variables) can obscure the parallelism and prohibit generation of efficient parallel code. This research presents a program representation and recognition approach for generating a data parallel program specification from sequential source code and retargeting it to data parallel execution mechanisms. The representation is based on an extension of the multi-dimensional synchronous dataflow model of computation. A partial recognition approach identifies and transforms only those program elements that hinder parallelization while leaving other computational elements intact. This permits flexibility in the types of programs that can be retargeted, while avoiding the complexity of complete program recognition. This representation and recognition process is implemented in the PARRET system, which is used to extract the high-level specification of a set of image-processing programs. From this specification, code is generated for Intel's SSE2 instruction set and for the SIMPil processor. The results demonstrate that PARRET can exploit, given sufficient parallel resources, the maximum available parallelism in the retargeted applications. Similarly, the results show PARRET can also exploit parallelism on architectures with hardware-limited parallel resources. It is desirable to estimate potential parallelism before undertaking the expensive process of reverse engineering and retargeting. The goal is to narrow down the search space to a select set of loops which have a high likelihood of being data-parallel.
This work also presents a hybrid static/dynamic approach, called DLPEST, for estimating the data-level parallelism in sequential program loops. We demonstrate the correctness of DLPEST's estimates, show that estimates for programs of 25 to 5000 lines of code can be performed in under 10 minutes, and that estimation time scales sub-linearly with input program size.
17

Pyla, Hari Krishna. "Safe Concurrent Programming and Execution." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/19276.

Abstract:
The increasing prevalence of multi and many core processors has brought the issues of concurrency and parallelism to the forefront of everyday computing. Even for applications amenable to traditional parallelization techniques, the subtleties of concurrent programming are known to introduce concurrency bugs. Due to the potential of concurrency bugs, programmers find it hard to write correct concurrent code. To take full advantage of parallel shared memory platforms, application programmers need safe and efficient mechanisms that can support a wide range of parallel applications.
In addition, a large body of applications are inherently hard-to-parallelize; their data and control dependencies impose execution order constraints that preclude the use of traditional parallelization techniques. Sensitive to their input data, a substantial number of applications fail to scale well, leaving cores idle. To improve the performance of such applications, application programmers need effective mechanisms that can fully leverage multi and many core architectures.
These challenges stand in the way of realizing the true potential of emerging many core platforms. The techniques described in this dissertation address these challenges. Specifically, this dissertation contributes techniques to transparently detect and eliminate several concurrency bugs, including deadlocks, asymmetric write-write data races, priority inversion, live-locks, order violations, and bugs that stem from the presence of asynchronous signaling and locks. A second major contribution of this dissertation is a programming framework that exploits coarse-grain speculative parallelism to improve the performance of otherwise hard-to-parallelize applications.
Ph. D.
18

Dunlop, Alistair Neil. "Estimating the execution time of Fortran programs on distributed memory, parallel computers." Thesis, University of Southampton, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242759.

19

Johnsson, Tomas. "Development of software package for event driven execution of multivariate models." Thesis, Uppsala University, Department of Information Technology, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-126702.

Abstract:
The BoardModel™ software system is today used to visualize, for example, logged production parameters and real-time predictions of responses such as formaldehyde emission or moisture content. The system is time based and consists of four main programs: the BMDC (saves and sends the incoming values), the View (shows the result on screen), the Server (calculates the result) and the HDB exporter (exports values to a text file).
This project aims at making BoardModel™ event based and implementing a new interface where the results can be shown. The Server and View programs will no longer be needed in offline applications, which will make the whole system much easier to use.
To make the system event based, SIMCA-QP from Umetrics AB will be used as the calculation engine. An interface in C code which communicates with SIMCA-QP will be built; all other changes to the program will be made in C++.
The final version of the new BoardModel™ is event based and has support for multiple models and multiple y variables. The system can also send the calculated results over OPC. The new BoardModel™ consists only of the BMDC with an inbuilt exporter and a new interface where the results are shown.
20

Hamou-Lhadj, Abdelwahab. "Techniques to simplify the analysis of execution traces for program comprehension." Thesis, University of Ottawa (Canada), 2006. http://hdl.handle.net/10393/29296.

Abstract:
Understanding a large execution trace is not an easy task due to the size and complexity of typical traces. In this thesis, we present various techniques that tackle this problem. Firstly, we present a set of metrics for measuring various properties of an execution trace in order to assess the work required for understanding its content. We show the result of applying these metrics to thirty traces generated from three different software systems. We discuss how these metrics can be supported by tools to facilitate the exploration of traces based on their complexity. Secondly, we present a novel technique for manipulating traces called trace summarization, which consists of taking a trace as input and returning a summary of its main content as output. Trace summaries can be used to enable top-down analysis of traces as well as the recovery of a system's behavioural models. In this thesis, we present a trace summarization algorithm that is based on successive filtering of implementation details from traces. An analysis of the concept of implementation details, such as utilities, is also presented. Thirdly, we have developed a scalable exchange format called the Compact Trace Format (CTF) in order to enable sharing and reusing of traces. The design of CTF satisfies well-known requirements for a standard exchange format. Finally, this thesis includes a survey of eight trace analysis tools. A study of the advantages and limitations of the techniques supported by these tools is provided. The approaches presented in this thesis have been applied to real software systems. The obtained results demonstrate the effectiveness and usefulness of our techniques.
21

Shu, Xiaokui. "Threat Detection in Program Execution and Data Movement: Theory and Practice." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/71463.

Full text
Abstract:
Program attacks are one of the oldest and most fundamental cyber threats. They compromise the confidentiality of data, the integrity of program logic, and the availability of services. This threat becomes even more severe when followed by other malicious activities such as data exfiltration. The integration of primitive attacks constructs comprehensive attack vectors and forms advanced persistent threats. Along with the rapid development of defense mechanisms, program attacks and data leak threats survive and evolve. Stealthy program attacks can hide in long execution paths to avoid being detected. Sensitive data transformations weaken existing leak detection mechanisms. New adversaries, such as semi-honest service providers, emerge and pose threats. This thesis presents theoretical analysis and practical detection mechanisms against stealthy program attacks and data leaks. The thesis presents a unified framework for understanding different branches of program anomaly detection and sheds light on possible future program anomaly detection directions. The thesis investigates modern stealthy program attacks hidden in long program executions and develops a program anomaly detection approach with data mining techniques to reveal the attacks. The thesis advances network-based data leak detection mechanisms by relaxing strong requirements in existing methods. The thesis presents practical solutions to outsource data leak detection procedures to semi-honest third parties and to identify noisy or transformed data leaks in network traffic.<br>Ph. D.
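One classic branch of program anomaly detection that surveys like the above unify models normal behavior as the set of short n-grams observed in benign traces and flags executions containing unseen n-grams. A minimal sketch under that assumption (the call names and window size are invented for illustration, and this is not the thesis's data-mining approach):

```python
def ngrams(seq, n=3):
    """All length-n windows of a trace, as a set of tuples."""
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def train(normal_traces, n=3):
    """Union of n-grams seen across known-benign traces."""
    model = set()
    for t in normal_traces:
        model |= ngrams(t, n)
    return model

def anomaly_score(model, trace, n=3):
    """Fraction of the trace's n-grams never seen during training."""
    grams = ngrams(trace, n)
    return len(grams - model) / max(len(grams), 1)

normal = [["open", "read", "write", "close"],
          ["open", "read", "read", "close"]]
model = train(normal)
print(anomaly_score(model, ["open", "read", "write", "close"]))   # benign replay
print(anomaly_score(model, ["open", "exec", "socket", "close"]))  # deviant trace
```

Stealthy attacks hidden in long execution paths are precisely the case where such short-window models struggle, which motivates the longer-range correlation analysis the abstract describes.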
APA, Harvard, Vancouver, ISO, and other styles
22

Sargeant, Roland B. (Roland Basil) 1974. "Functional specifications of a manufacturing execution system." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/84352.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; in conjunction with the Leaders for Manufacturing Program at MIT, 2003.<br>Includes bibliographical references (p. 129-130).<br>by Roland B. Sargeant.<br>S.M.<br>M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
23

Otto, Carsten [Verfasser], Jürgen [Akademischer Betreuer] Giesl, and Fausto [Akademischer Betreuer] Spoto. "Java program analysis by symbolic execution / Carsten Otto ; Jürgen Giesl, Fausto Spoto." Aachen : Universitätsbibliothek der RWTH Aachen, 2015. http://d-nb.info/1130402444/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Ku, Yuk-chiu, and 古玉翠. "Partitioning HOPD program for fast execution on the HKU UNIX workstation cluster." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31221026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Ferreira, Raphael Segabinazzi. "Stack smashing attack detection methodology for secure program execution based on hardware." Pontifícia Universidade Católica do Rio Grande do Sul, 2016. http://tede2.pucrs.br/tede2/handle/tede/7073.

Full text
Abstract:
Submitted by Setor de Tratamento da Informação - BC/PUCRS (tede2@pucrs.br) on 2016-12-01T15:53:47Z No. of bitstreams: 1 DIS_RAPHAEL_SEGABINAZZI_FERREIRA_COMPLETO.pdf: 2073138 bytes, checksum: d5db8a28bdcf83806ed8388083415120 (MD5)<br>Made available in DSpace on 2016-12-01T15:53:47Z (GMT). No. of bitstreams: 1 DIS_RAPHAEL_SEGABINAZZI_FERREIRA_COMPLETO.pdf: 2073138 bytes, checksum: d5db8a28bdcf83806ed8388083415120 (MD5) Previous issue date: 2016-08-25<br>Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES<br>A necessidade de inclusão de mecanismos de segurança em dispositivos eletrônicos cresceu consideravelmente com o aumento do uso destes dispositivos no dia a dia das pessoas. À medida que estes dispositivos foram ficando cada vez mais conectados a rede e uns aos outros, estes mesmos se tornaram vulneráveis a tentativa de ataques e intrusões remotas. Ataques deste tipo chegam normalmente como dados recebidos por meio de um canal comum de comunicação, uma vez presente na memória do dispositivo estes dados podem ser capazes de disparar uma falha de software pré-existente, e, a partir desta falha, desviar o fluxo do programa para o código malicioso inserido. Vulnerabilidades de software foram, nos últimos anos, a principal causa de incidentes relacionados à quebra de segurança em sistemas e computadores. Adicionalmente, estouros de buffer (buffer overflow) são as vulnerabilidades mais exploradas em software, chegando a atingir metade das recomendações de segurança do grupo norte-americano Computer Emergency Readiness Team (CERT). A partir deste cenário citado acima, o presente trabalho apresenta um novo método baseado em hardware para detecção de ataques ocorridos a partir de estouros de buffer chamados de Stack Smashing, propõe ainda de maneira preliminar, um mecanismo de recuperação do sistema a partir da detecção de um ataque ou falha. Comparando com métodos já 
existentes na bibliografia, a técnica apresentada por este trabalho não necessita de recompilação de código e, adicionalmente, dispensa o uso de software (como, por exemplo, um Sistema Operacional) para fazer o gerenciamento do uso de memória. Monitorando sinais internos do pipeline de um processador o presente trabalho é capaz de detectar quando um endereço de retorno de uma função está corrompido, e a partir desta detecção, voltar o sistema para um estado seguro salvo previamente em uma região segura de memória. Para validar este trabalho um programa simples, em linguagem C, foi implementado, este programa força uma condição de buffer overflow. Esta condição deve ser reconhecida pelo sistema implementado neste trabalho e, ainda, recuperada adequadamente. Já para avaliação do sistema, a fim de verificar como o mesmo se comporta em situações reais, programas testes foram implementados em linguagem C com pequenos trechos de códigos maliciosos. Estes trechos foram obtidos de vulnerabilidades reportadas na base de dados Common Vulnerabilities and Exposures (CVE). Estes pequenos códigos maliciosos foram adaptados e inseridos nos fontes do programa de teste. Com isso, enquanto estes programas estão em execução o sistema implementado por este trabalho é avaliado. Durante esta avaliação são observados: (1) a capacidade de detecção de modificação no endereço de retorno de funções e (2) a recuperação do sistema. Finalmente, é calculado o overhead de área e de tempo de execução. De acordo com resultados e implementações preliminares este trabalho conseguiu atingir 100% da detecção de ataques sobre uma baixa latência por detecção de modificações de endereço de retorno de funções salvas no stack. Foi capaz, também, de se recuperar nos casos de testes implementados. 
E, finalmente, resultando em baixo overhead de área sem nenhuma degradação de performance na detecção de modificação do endereço de retorno.<br>The need to include security mechanisms in electronic devices has dramatically grown with the widespread use of such devices in our daily life. With the increasing interconnectivity among devices, attackers can now launch attacks remotely. Such attacks arrive as data over a regular communication channel and, once resident in the program memory, they trigger a pre-existing software flaw and transfer control to the attacker's malicious code. Software vulnerabilities have been the main cause of computer security incidents. Among these, buffer overflows are perhaps the most widely exploited type of vulnerability, accounting for approximately half the Computer Emergency Readiness Team (CERT) advisories in recent years. In this scenario, the methodology proposed in this work presents a new hardware-based approach to detect stack smashing buffer overflow attacks and to recover the system after attack detection. Compared to existing approaches, the proposed technique does not need application code recompilation or the use of any kind of software (e.g., an Operating System - OS) to manage memory usage. By monitoring processor pipeline internal signals, this approach is able to detect when the return address of a function call has been corrupted. At this moment, a rollback-based recovery procedure is triggered, which drives the system into a safe state previously stored in a protected memory area. This approach was validated by implementing a C program that forces a buffer overflow condition, which is promptly recognized by the proposed approach. From this point on, the system is then properly recovered. To evaluate the system under more realistic conditions, test programs were implemented with pieces of known vulnerable C code. 
These vulnerable pieces of code were obtained from vulnerabilities reported in the Common Vulnerabilities and Exposures (CVE) database. The code snippets were adapted and included in the test programs. Then, while running these programs, the proposed system was evaluated. This evaluation was done by observing the capability of the proposed approach to: (1) detect an invalid return address and (2) safely recover the system from the faulty condition. Finally, the execution time and area overheads were determined. According to preliminary implementations and results, this approach guarantees 100% attack detection with negligible detection latency, recognizing an overwritten return address within a few processor clock cycles.
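The detection scheme this abstract describes compares, at each function return, the return address on the stack against the one recorded at call time. In software terms this behaves like a shadow stack; the following simulation is a hypothetical illustration of that check and the rollback trigger, not the thesis's hardware design:

```python
class ShadowStackMonitor:
    """Simulate return-address checking: record on call, compare on return."""

    def __init__(self):
        self.shadow = []            # trusted copies of return addresses
        self.attack_detected = False

    def on_call(self, return_addr):
        self.shadow.append(return_addr)

    def on_return(self, return_addr_on_stack):
        expected = self.shadow.pop()
        if return_addr_on_stack != expected:
            # in the thesis's scheme this would trigger a rollback
            # to a safe state saved in protected memory
            self.attack_detected = True
        return not self.attack_detected

mon = ShadowStackMonitor()
mon.on_call(0x4006F0)
mon.on_call(0x400720)
print(mon.on_return(0x400720))    # matches the saved address -> True
print(mon.on_return(0xDEADBEEF))  # overwritten by an overflow -> False
```

The hardware version observes the same call/return events from pipeline signals, so no recompilation or OS support is needed.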
APA, Harvard, Vancouver, ISO, and other styles
26

Rajan, T. "APT : a principled design for an animated view of program execution for novice programmers." Thesis, Open University, 1986. http://oro.open.ac.uk/56927/.

Full text
Abstract:
This thesis is concerned with the principled design of a computational environment that presents an animated view of program execution for novice programmers. We assert that a principled animated view of program execution should benefit novice programmers by: (i) helping students conceptualize what is happening when programs are executed; (ii) simplifying debugging through the presentation of bugs in a manner which the novice will understand; (iii) reducing program development time. The design is based on principles which have been extracted from three areas: (i) the problems that novices encounter when learning a programming language; (ii) the general design principles for computer systems; and (iii) systems which present a view of program execution. The design principles have been embodied in three 'canned' stepper displays for Prolog, Lisp and 6502 Assembler. These prototypes, called APT-0 (Animated Program Tracer), demonstrate that the design principles can be broadly applied to procedural and declarative, low- and high-level languages. Protocol data was collected from subjects using the prototypes in order to check the direction of the research and to suggest improvements in the design. These improvements have been incorporated in a real implementation of APT for Prolog. The principled approach embodied by APT provides two important facilities which have previously not been available: firstly, a means of demonstrating dynamic programming concepts such as variable binding, recursion, and backtracking; and secondly, a debugging tool which allows novices to step through their own code while watching the virtual machine in action. This moves towards simplifying the novice's debugging environment by supplying program execution information in a form that the novice can easily assimilate. 
An experiment into the misconceptions novices hold concerning the execution of Prolog programs shows that the order of database search and the concepts of variable binding, unification and backtracking are poorly understood. A further experiment looked at the effect that APT had on the ability of novice Prolog programmers to understand the execution of Prolog programs. It demonstrated that the performance of subjects increased significantly after being shown demonstrations of the execution of Prolog programs on APT, while the control group, who saw no demonstration, showed no improvement. The experimental evidence demonstrates the potential of APT, and the principled approach it embodies, to communicate run-time information to novice programmers, increasing their understanding of the dynamic aspects of the Prolog interpreter. APT uses an object-centred representation, is built on top of a Prolog interpreter and environment, is implemented in Common Lisp and Zeta Lisp, and runs on the Symbolics 3600 range of machines.
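A rough present-day analogue of APT's stepping facility is a language's built-in tracing hook. As an illustration only (Python rather than Prolog, and far simpler than APT), this sketch records each line executed so a learner could replay the program's dynamic behavior step by step:

```python
import sys

def make_stepper(log):
    """Build a trace function that logs every executed line."""
    def tracer(frame, event, arg):
        if event == "line":
            log.append((frame.f_code.co_name, frame.f_lineno))
        return tracer  # keep tracing inside this frame
    return tracer

def demo(n):
    total = 0
    for i in range(n):
        total += i
    return total

log = []
sys.settrace(make_stepper(log))  # install the stepper
demo(3)
sys.settrace(None)               # uninstall it

# each executed line of demo() was recorded as (function, line number)
print(len(log) > 0)
```

A real tracer like APT additionally renders the interpreter's state (bindings, choice points) at each step, which is what makes the execution model visible to novices.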
APA, Harvard, Vancouver, ISO, and other styles
27

Howard, Neal (Neal David). "Evaluating and mitigating execution risk in Indian real estate development." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68186.

Full text
Abstract:
Thesis (S.M. in Real Estate Development)--Massachusetts Institute of Technology, Program in Real Estate Development in Conjunction with the Center for Real Estate, 2011.<br>This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.<br>Cataloged from student-submitted PDF version of thesis.<br>Includes bibliographical references (p. 80-82).<br>Real estate development is a complex process in which developers and equity investors look to capitalize on favorable financial markets and economic forces to produce investment returns. Real estate development is a risky venture in even the most mature economies that possess transparent government regulations, reliable local and national legal systems, efficient capital markets, skilled labor markets and substantial market demand data. These issues are magnified in an emerging market where few of the above ingredients are readily available. However, the hypothesis of this thesis is that a developer can better assemble its development team, positively impact performance, and reduce execution risks by reorganizing project teams with the resources currently available in India. This thesis contemplates the evolution of real estate development design and delivery methods as developers compete to deliver real estate assets; equity investors seek greater insulation from execution risk; and a growing stable of qualified construction professionals compete for contracts. However, demand for real estate assets, equity investment hurdles and increased competition are pressuring developers to consider design and delivery methods that decrease the time to market and contemplate risk allocation. 
The analytic approach of this thesis is to: 1) document common delivery methods in India through a series of interviews with developers, architects, project management consultants, quantity surveyors and contractors; 2) compare and contrast the delivery methods and allocation of execution risk in the United States and India; and 3) propose a management plan to further mitigate execution risk through different risk allocation and delivery methods. The goal of this thesis is to provide developers and equity investors insight into the evolution of the Indian delivery process and to identify emerging opportunities to mitigate execution risk.<br>by Neal Howard.<br>S.M. in Real Estate Development
APA, Harvard, Vancouver, ISO, and other styles
28

Anand, Saswat. "Techniques to facilitate symbolic execution of real-world programs." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44733.

Full text
Abstract:
The overall goal of this research is to reduce the cost of software development and improve the quality of software. Symbolic execution is a program-analysis technique that is used to address several problems that arise in developing high-quality software. Despite the fact that the symbolic execution technique is well understood, and performing symbolic execution on simple programs is straightforward, it is still not possible to apply the technique to the general class of large, real-world software. A symbolic-execution system can be effectively applied to large, real-world software if it has at least two features: efficiency and automation. However, efficient and automatic symbolic execution of real-world programs is a lofty goal for both theoretical and practical reasons. Theoretically, achieving this goal requires solving an intractable problem (i.e., solving constraints). Practically, achieving it requires overwhelming effort to implement a symbolic-execution system that can precisely and automatically symbolically execute real-world programs. This research makes three major contributions. 1. Three new techniques that address three important problems of symbolic execution. Compared to existing techniques, the new techniques (a) reduce the manual effort that may be required to symbolically execute programs that either generate complex constraints or contain parts that cannot be symbolically executed due to limitations of a symbolic-execution system, and (b) improve the usefulness of symbolic execution (e.g., expose more bugs in a program) by enabling discovery of more feasible paths within a given time budget. 2. A novel approach that uses symbolic execution to generate test inputs for apps that run on modern mobile devices such as smartphones and tablets. 3. Implementations of the above techniques and empirical results obtained from applying them to real-world programs that demonstrate their effectiveness.
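The path exploration at the heart of symbolic execution can be illustrated on a toy program: each branch taken contributes a constraint to the path condition, and a solver produces a witness input per feasible path. The sketch below substitutes brute-force enumeration over a small domain for a real constraint solver, an assumption made purely to keep the example self-contained:

```python
def explore(domain=range(-10, 11)):
    """Collect the path conditions of a toy program, with one witness
    input per path (enumeration stands in for an SMT solver)."""
    paths = {}
    for x in domain:
        cond = []                     # the path condition for this run
        # --- toy program under analysis ---
        if x > 0:
            cond.append("x > 0")
            if x % 2 == 0:
                cond.append("x % 2 == 0")
            else:
                cond.append("x % 2 != 0")
        else:
            cond.append("x <= 0")
        # -----------------------------------
        paths.setdefault(tuple(cond), x)  # keep the first witness found
    return paths

paths = explore()
for cond, witness in sorted(paths.items()):
    print(" and ".join(cond), "-> e.g. x =", witness)
```

The program has three feasible paths, so three path conditions and three witnesses suffice to cover it; the scalability problems the abstract describes arise because real programs have exponentially many paths and constraints too complex to enumerate.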
APA, Harvard, Vancouver, ISO, and other styles
29

Ayala, Miguel A. "Execution level Java software and hardware for the NPS autonomous underwater vehicle /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02sep%5FAyala.pdf.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, September 2002.<br>Thesis advisor(s): Don Brutzman, Man-Tak Shing. Includes bibliographical references (p. 259-260). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
30

Smith, Danny Roy. "The influence of contract type in program execution/V-22 OSPREY: a case study." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/25996.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Bester, Willem Hendrik Karel. "Bug-finding and test case generation for java programs by symbolic execution." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85832.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2013.<br>ENGLISH ABSTRACT: In this dissertation we present a software tool, Artemis, that symbolically executes Java virtual machine bytecode to find bugs and automatically generate test cases that trigger the bugs found. Symbolic execution is a static software analysis technique that entails analysing code over symbolic inputs—essentially, classes of inputs—where each class is formulated as constraints over some input domain. The analysis then proceeds in a path-sensitive way, adding the constraints resulting from a symbolic choice at a program branch to a path condition, and branching non-deterministically over the path condition. When a possible error state is reached, the path condition can be solved and, if soluble, value assignments retrieved to be used to generate explicit test cases in a unit testing framework. This last step enhances confidence that the bugs are real, because testing is forced through normal language semantics, which could prevent certain states from being reached. We illustrate and evaluate Artemis on a number of examples with known errors, as well as on a large, complex code base. A preliminary version of this work was successfully presented at the SAICSIT conference held on 1–3 October 2012, in Centurion, South Africa.<br>AFRIKAANSE OPSOMMING: In die dissertasie bied ons 'n stuk sagtewaregereedskap, Artemis, aan wat biskode van die Java virtuele masjien simbolies uitvoer om foute op te spoor en toetsgevalle outomaties voort te bring om die foute te ontketen. Simboliese uitvoering is 'n tegniek van statiese sagteware-analise wat behels dat kode oor simboliese toevoere—in wese, klasse van toevoer—geanaliseer word, waar elke klas geformuleer word as beperkinge oor 'n domein. 
Die analise volg dan 'n pad-sensitiewe benadering deur die domeinbeperkinge, wat volg uit 'n simboliese keuse by 'n programvertakking, tot 'n padvoorwaarde by te voeg en dan nie-deterministies vertakkings oor die padvoorwaarde te volg. Wanneer 'n moontlike fouttoestand bereik word, kan die padvoorwaarde opgelos word, en indien dit oplosbaar is, kan waardetoekennings verkry word om eksplisiete toetsgevalle in 'n eenheidstoetsingsraamwerk te formuleer. Die laaste stap verhoog vertroue dat die foute wat gevind is werklik is, want toetsing word deur die normale semantiek van die taal geforseer, wat sekere toestande onbereikbaar maak. Ons illustreer en evalueer Artemis met 'n aantal voorbeelde waar die foute bekend is, asook op 'n groot, komplekse versameling kode. 'n Voorlopige weergawe van dié werk is suksesvol by die SAICSIT-konferensie, wat van 1 tot 3 Oktober 2012 in Centurion, Suid-Afrika, gehou is, aangebied.
APA, Harvard, Vancouver, ISO, and other styles
32

Marouf, Said M. "An extensive analysis of the software security vulnerabilities that exist within the Java software execution environment /." Connect to title online, 2008. http://minds.wisconsin.edu/handle/1793/34240.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Palmskog, Karl. "Towards Correct and Efficient Program Execution in Decentralized Networks: Programming Languages, Semantics, and Resource Management." Doctoral thesis, KTH, Teoretisk datalogi, TCS, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-152247.

Full text
Abstract:
The Internet as of 2014 connects billions of devices, and is expected to connect tens of billions by 2020. To meet escalating requirements, networks must be scalable, easy to manage, and be able to efficiently execute programs and disseminate data. The prevailing use of centralized systems and control in, e.g., pools of computing resources, clouds, is problematic for scalability. A promising approach to management of large networks is decentralization, where independently acting network nodes communicate with their immediate neighbors to achieve desirable results at the global level. The research in this thesis addresses three distinct but interrelated problems in the context of cloud computing, networks, and programs running in clouds. First, we show how implementation correctness of active objects can be achieved in decentralized networks using location independent routing. Second, we investigate the feasibility of decentralized adaptive resource allocation for active objects in such networks, with promising results. Third, we automate an initial step of a process for converting programs with thread-based concurrency using shared memory to programs with message passing concurrency, which can then run efficiently in clouds. Specifically, starting from fragments of the distributed object modeling language ABS, we give network-oblivious descriptions of runtime behavior of programs, where the global state is a flat collection of objects and method calls. We then provide network-aware semantics, that place objects on network nodes connected point-to-point by asynchronous message passing channels. By relying on location independent routing, which maps object identifiers to next-hop neighbors at each node, inter-object messages can be delivered, regardless of object mobility among nodes. We establish that network-oblivious and network-aware behavior in static networks correspond in the sense of contextual equivalence. 
Using a network protocol reminiscent of a two-phase commit for controlled node shutdown, we extend the approach to dynamic networks without failures. We investigate node-local procedures for object migration to meet requirements on balanced allocations of objects to nodes, that also attempt to minimize exchange of object-related messages between nodes. By relying on coin-flips biased on local and neighbor load to decide on migration, and heuristics to capture object communication patterns, we show that balanced allocations can be achieved that make headway towards minimizing communication and latency. Our approach to execution of object-oriented programs in networks relies on message-passing concurrency. Mainstream programming languages generally use thread-based concurrency, which relies on control-centric primitives, such as locks, for synchronization. We present an algorithm for dynamic probabilistic inference of annotations for data-centric synchronization in threaded programs. By making collections of variables in classes accessed atomically explicit, these annotations can in turn suggest objects suitable for encapsulation as a unit of message-passing concurrency.<br>2014 års Internet sammankopplar miljarder enheter, och förväntas sammankoppla tiotals miljarder år 2020. För att möta eskalerande krav måste nätverk vara skalbara, enkla att underhålla, och effektivt exekvera program och disseminera data. Den nuvarande användningen av centraliserade system och kontrollmekanismer, t ex i pooler av beräkningsresurser, moln, är problematisk för skalbarhet. Ett lovande angreppssätt för att hantera storskaliga nätverk är decentralisering, där noder som agerar oberoende av varandra genom kommunikation med sina omedelbara grannar åstadkommer gynnsamma resultat på den globala nivån. Forskningen i den här avhandlingen addresserar tre distinkta men relaterade problem i kontexten av molnsystem, nätverk och program som körs i moln. 
För det första visar vi hur implementationskorrekthet för aktiva objekt kan åstadkommas i decentraliserade nätverk med hjälp av platsoberoende routning. För det andra undersöker vi genomförbarheten i decentraliserad adaptiv resursallokering för aktiva objekt i sådana nätverk, med lovande resultat. För det tredje automatiserar vi ett initialt steg i en process för att konvertera program med trådbaserad samtidighet och delat minne till program med meddelandebaserad samtidighet, som då kan köras effektivt i moln. Mer specifikt ger vi, med utgångspunkt i fragment av modelleringsspråket ABS baserat på distribuerade objekt, nätverksomedvetna beskrivningar av körningstidsbeteende för program där det globala tillståndet är en platt samling av objekt och metodanrop. Vi ger därefter nätverksmedvetna semantiker, där objekt placeras på nätverksnoder sammankopplade från punkt till punkt av asynkrona kanaler för meddelandetransmission. Genom att vid varje nod använda platsoberoende routning, som associerar objektidentifierare med grannoder som är nästa hopp, kan meddelanden mellan objekt levereras oavsett hur objekt rör sig mellan noder. Vi etablerar att nätverksomedvetet och nätverksmedvetet beteende i statiska nätverk stämmer överens enligt kontextuell ekvivalens. Genom att använda ett nätverksprotokoll som påminner om en tvåstegsförpliktelse, utökar vi vår ansats till felfria dynamiska nätverk. Vi undersöker nodlokala procedurer för objektmigration för att möta krav på balanserade allokeringar av objekt till noder, som också försöker minimera utbyte av objektrelaterade meddelanden mellan noder. Genom att använda oss av slantsinglingar viktade efter lokal last och grannars last för att besluta om migration, och tumregler för att fånga kommunikationsmönster mellan objekt, visar vi att balanserade allokeringar, som gör framsteg mot att minimera kommunikation och tidsfördröjning, kan uppnås. 
Vår ansats för exekvering av objektorienterade program i nätverk använder meddelandebaserad samtidighet. Vanligt förekommande programspråk använder sig generellt av trådbaserad samtidighet, som kräver kontrollcentrerade mekanismer, som lås, för synkronisering. Vi presenterar en algoritm som med dynamisk probabilistisk analys härleder annoteringar för datacentrerad synkronisering för trådade program. Genom att göra samlingar av variabler i klasser som läses och skrivs atomiskt explicita, kan sådana annoteringar antyda vilka objekt som är lämpliga att kapsla in som en enhet i meddelandebaserad samtidighet.<br><p>QC 20140929</p>
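The migration rule sketched in this abstract, a coin flip biased on local versus neighbour load, can be illustrated as follows; the exact bias formula here is an invented stand-in, not the heuristic used in the dissertation:

```python
import random

def should_migrate(local_load, neighbor_load, rng=random.random):
    """Probabilistically decide whether to move one object to a neighbour.

    Equal or lower local load never migrates; the more overloaded this
    node is relative to the neighbour, the likelier the migration.
    """
    if local_load <= neighbor_load:
        return False
    bias = (local_load - neighbor_load) / (local_load + neighbor_load)
    return rng() < bias

random.seed(1)
# with loads 10 vs 2 the bias is 8/12, so roughly two thirds of
# decisions favour migration, draining the overloaded node gradually
moves = sum(should_migrate(10, 2) for _ in range(1000))
print(moves)
```

Because every node applies the same local rule against each neighbour, load differences shrink over time without any central coordinator, which is the decentralized behaviour the thesis evaluates.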
APA, Harvard, Vancouver, ISO, and other styles
34

Kafle, Bishoksan. "Modeling assembly program with constraints. A contribution to WCET problem." Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/7968.

Full text
Abstract:
Dissertação para obtenção do Grau de Mestre em Lógica Computacional<br>Model checking with program slicing has been successfully applied to compute the Worst Case Execution Time (WCET) of a program running on given hardware. This method lacks path feasibility analysis and suffers from the following problems: the model checker (MC) explores an exponential number of program paths irrespective of their feasibility, which limits the scalability of this method to multiple-path programs; and the witness trace returned by the MC corresponding to the WCET may not be feasible (executable), which may result in a solution that is not tight, i.e., one that overestimates the actual WCET. This thesis complements the above method with path feasibility analysis and addresses these problems. To achieve this, we first validate the witness trace returned by the MC and generate test data if it is executable. For this we generate constraints over a trace and solve a constraint satisfaction problem. Experiments show that 33% of these traces (obtained while computing WCET on standard WCET benchmark programs) are infeasible. Second, we use constraint solving techniques to compute an approximate WCET based solely on the program (without taking into account the hardware characteristics), and suggest some feasible and probable worst-case paths which can produce the WCET. Each of these paths forms an input to the MC. A more precise WCET can then be computed on these paths using the above method. The maximum of all these is the WCET. In addition to this, we provide a mechanism to compute an upper bound of over-approximation for the WCET computed using the model checking method. This effort of combining constraint solving with model checking takes advantage of their respective strengths and makes WCET computation scalable and amenable to hardware changes. We use our technique to compute WCET on standard benchmark programs from Mälardalen University and compare our results with results from the model checking method.
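A purely program-based WCET bound of the kind the abstract mentions can be illustrated as a longest-path computation over an acyclic control-flow graph with per-block cycle costs. The graph and costs below are invented for illustration, and a real analysis must also bound loop iterations, check path feasibility, and model the hardware:

```python
from functools import lru_cache

# toy acyclic CFG: block -> (cost in cycles, successor blocks)
cfg = {
    "entry": (2, ["cond"]),
    "cond":  (1, ["then", "else"]),
    "then":  (8, ["exit"]),
    "else":  (3, ["exit"]),
    "exit":  (1, []),
}

@lru_cache(maxsize=None)
def wcet(block):
    """Worst-case cycles from `block` to program end (longest path)."""
    cost, succs = cfg[block]
    return cost + (max(wcet(s) for s in succs) if succs else 0)

print(wcet("entry"))  # entry + cond + then + exit = 2 + 1 + 8 + 1 = 12
```

The longest path here assumes the `then` branch is reachable; the thesis's point is that such structural worst-case paths may be infeasible, which is why it adds constraint-based feasibility checks before handing candidate paths to the model checker.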
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Yin. "Methodologies, Techniques, and Tools for Understanding and Managing Sensitive Program Information." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103421.

Full text
Abstract:
Exfiltrating or tampering with certain business logic, algorithms, and data can harm the security and privacy of both organizations and end users. Collectively referred to as sensitive program information (SPI), these building blocks are part and parcel of modern software systems in domains ranging from enterprise applications to cyberphysical setups. Hence, protecting SPI has become one of the most salient challenges of modern software development. However, several fundamental obstacles stand on the way of effective SPI protection: (1) understanding and locating the SPI for any realistically sized codebase by hand is hard; (2) manually isolating SPI to protect it is burdensome and error-prone; (3) if SPI is passed across distributed components within and across devices, it becomes vulnerable to security and privacy attacks. To address these problems, this dissertation research innovates in the realm of automated program analysis, code transformation, and novel programming abstractions to improve the state of the art in SPI protection. Specifically, this dissertation comprises three interrelated research thrusts that: (1) design and develop program analysis and programming support for inferring the usage semantics of program constructs, with the goal of helping developers understand and identify SPI; (2) provide powerful programming abstractions and tools that transform code automatically, with the goal of helping developers effectively isolate SPI from the rest of the codebase; (3) provide programming mechanism for distributed managed execution environments that hides SPI, with the goal of enabling components to exchange SPI safely and securely. The novel methodologies, techniques, and software tools, supported by programming abstractions, automated program analysis, and code transformation of this dissertation research lay the groundwork for establishing a secure, understandable, and efficient foundation for protecting SPI. 
This dissertation is based on 4 conference papers, presented at TrustCom'20, GPCE'20, GPCE'18, and ManLang'17, as well as 1 journal paper, published in Journal of Computer Languages (COLA).<br>Doctor of Philosophy<br>Some portions of a computer program can be sensitive, referred to as sensitive program information (SPI). By compromising SPI, attackers can hurt user security/privacy. It is hard for developers to identify and protect SPI, particularly for large programs. This dissertation introduces novel methodologies, techniques, and software tools that facilitate software development tasks concerned with locating and protecting SPI.
APA, Harvard, Vancouver, ISO, and other styles
36

Ji, Ran [Verfasser], Reiner [Akademischer Betreuer] Hähnle, and Bernhard [Akademischer Betreuer] Beckert. "Sound Program Transformation Based on Symbolic Execution and Deduction / Ran Ji. Betreuer: Reiner Hähnle ; Bernhard Beckert." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2014. http://d-nb.info/1110792980/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Ströder, Thomas [Verfasser], Jürgen [Akademischer Betreuer] Giesl, and Albert [Akademischer Betreuer] Rubio. "Symbolic execution and program synthesis : a general methodology for software verification / Thomas Ströder ; Jürgen Giesl, Albert Rubio." Aachen : Universitätsbibliothek der RWTH Aachen, 2019. http://d-nb.info/119230831X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Iyer, Krishnan Jyothi Lakshmi. "Design of an interactive simulation tool for automatic generation and execution of a simulation program using siman." Ohio University / OhioLINK, 1993. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1178123692.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Cuvillo, Juan del. "Breaking away from the OS shadow: a program execution model aware thread virtual machine for multicore architectures /." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 120 p, 2008. http://proquest.umi.com/pqdweb?did=1601517941&sid=4&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Vigouroux, Xavier. "Analyse distribuée de traces d'exécution de programmes parallèles." Lyon, École normale supérieure (sciences), 1996. http://www.theses.fr/1996ENSL0016.

Full text
Abstract:
Monitoring consists in generating trace information during the execution of a parallel program in order to detect performance problems. The amount of information generated by very large parallel machines renders classical analysis tools unusable. This thesis solves that problem by distributing the trace information across several files stored on several sites, the files being readable in parallel. The manipulation of these files to obtain coherent information is the basis of a client-server software system through which clients request already-filtered information about an execution. This client-server architecture is extensible (users can create their own clients) and modular. We have, moreover, already created several novel clients: a hierarchical client, a sound-based client, automatic problem detection, a filtering interface to classical tools, and the integration of a 3D tool.
APA, Harvard, Vancouver, ISO, and other styles
41

Waldron, Todd Andrew. "Strategic development of a manufacturing execution system (MES) for cold chain management using information product mapping." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66043.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering; in conjunction with the Leaders for Global Operations Program at MIT, 2011.<br>Cataloged from PDF version of thesis.<br>Includes bibliographical references (p. 63).<br>The Vaccines & Diagnostics (V&D) division of Novartis recently developed a global automation strategy that highlights the need to implement a manufacturing execution system (MES). Benefits of an MES (electronic production records) include enhancing the compliance position of the organization, reducing production delays, and improving process flexibility; however, implementing an MES at global manufacturing sites presents unique logistical challenges that need to be overcome. The goal of this thesis is to investigate cold chain management as an expanded functionality for an MES. The thesis attempts to identify best practices for the strategic implementation of an MES in the management of cold chain vaccine products. While the concepts presented in this thesis are in the context of managing the cold chain for vaccine products, the best practices can be applied to a variety of cold chain management scenarios. In order to generate best practice recommendations for the strategic implementation of a cold chain management MES, a thorough understanding of the manufacturing process will need to be acquired. The first tool used to gain this understanding was value-stream mapping (VSM). VSM provided some insight into the current paper-based cold chain management system; however, the tool was not applicable for understanding the flow of information generated within the cold chain management system. Another tool was used to enable the organization to focus on the data generated by a process, the information product map (IP-Map). 
Current-state IP-Maps of the cold chain at the Rosia, Italy, site were generated and numerous areas for improving the data quality were identified. Future-state IP-Maps of the cold chain at the Rosia, Italy, site were generated to demonstrate how the implementation of a cold chain MES could improve the shortcomings of the current system. The future-state IP-Maps were based on underlying assumptions that directly lead to recommendations for the cold chain MES implementation. First, a unit of measurement smaller than lot size must be selected for tracking material data in the MES. Second, data capture technology for material entering or leaving cold storage must be integrated with the MES.<br>by Todd Andrew Waldron.<br>S.M.<br>M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
42

Knight, Victoria M. "A systems perspective on project management : interdependencies in the execution of capital projects in the automotive industry." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/80987.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division; and, (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; in conjunction with the Leaders for Global Operations Program at MIT, 2013.<br>Cataloged from PDF version of thesis. Vita.<br>Includes bibliographical references (p. 58-59).<br>The primary focus of the thesis is the analysis of a project management tool in executing capital-intensive, multi-stakeholder projects. While the example in this thesis is the result of work at General Motors' (GM) Global Casting, Engine and Transmission Center in Pontiac, MI, it is largely applicable to the management of any corporate endeavor with both a large budget and scope. Two aspects of project management are analyzed: project task management and the communication channels by which this is achieved. Using the GM example, this thesis compares the task linkages in the Microsoft (MS) Project file with how often groups meet and what is shared at those meetings. Design Structure Matrix analysis shows that periodic meetings involving all inter-related stakeholders are necessary to preserve effective project-wide information sharing.<br>by Victoria M. Knight.<br>M.B.A.<br>S.M.
APA, Harvard, Vancouver, ISO, and other styles
43

Huang, Jin. "Detecting Server-Side Web Applications with Unrestricted File Upload Vulnerabilities." Wright State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=wright163007760528389.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Dhar, Siddharth. "Optimizing TEE Protection by Automatically Augmenting Requirements Specifications." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98730.

Full text
Abstract:
An increasing number of software systems must safeguard their confidential data and code, referred to as critical program information (CPI). Such safeguarding is commonly accomplished by isolating CPI in a trusted execution environment (TEE), with the isolated CPI becoming a trusted computing base (TCB). TEE protection incurs heavy performance costs, as TEE-based functionality is expensive to both invoke and execute. Despite these costs, projects that use TEEs tend to have unnecessarily large TCBs. Based on our analysis, developers often put code and data into the TEE for convenience rather than protection reasons, thus not only compromising performance but also reducing the effectiveness of TEE protection. In order for TEEs to provide maximum benefits for protecting CPI, their usage must be systematically incorporated into the entire software engineering process, starting from Requirements Engineering. To address this problem, we present a novel approach that incorporates TEEs in the Requirements Engineering phase by using natural language processing (NLP) to classify those software requirements that are security critical and should be isolated in a TEE. Our approach takes as input a requirements specification and outputs a list of annotated software requirements. The annotations recommend to the developer which corresponding features comprise CPI that should be protected in a TEE. Our evaluation results indicate that our approach identifies CPI with a high degree of accuracy, enabling the safeguarding of CPI to be incorporated into Requirements Engineering.<br>Master of Science<br>An increasing number of software systems must safeguard their confidential data like passwords, payment information, personal details, etc. This confidential information is commonly protected using a Trusted Execution Environment (TEE), an isolated environment provided by either the existing processor or separate hardware that interacts with the operating system to secure sensitive data and code.
Unfortunately, TEE protection incurs heavy performance costs, with TEEs being slower than modern processors and frequent communication between the system and the TEE incurring heavy performance overhead. We discovered that developers often put code and data into TEE for convenience rather than protection purposes, thus not only hurting performance but also reducing the effectiveness of TEE protection. By thoroughly examining a project's features in the Requirements Engineering phase, which defines the project's functionalities, developers would be able to understand which features handle confidential data. To that end, we present a novel approach that incorporates TEEs in the Requirements Engineering phase by means of Natural Language Processing (NLP) tools to categorize the project requirements that may warrant TEE protection. Our approach takes as input a project's requirements and outputs a list of categorized requirements defining which requirements are likely to make use of confidential information. Our evaluation results indicate that our approach performs this categorization with a high degree of accuracy to incorporate safeguarding the confidentiality related features in the Requirements Engineering phase.
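The requirements-in, annotated-requirements-out pipeline described above can be illustrated with a deliberately tiny keyword-based classifier. This is only a sketch of the interface shape: the dissertation's actual NLP approach, its feature set, and the label names used here (`TEE-candidate`, `ordinary`) are all our own illustrative assumptions, not taken from the work itself.

```python
# Toy requirement classifier: flags requirements whose wording suggests they
# handle confidential data and might warrant TEE isolation. A real NLP model
# would be far more sophisticated; this only shows input/output shape.
SENSITIVE_TERMS = {"password", "credential", "payment", "encrypt", "token", "personal"}

def annotate(requirements):
    """Return each requirement paired with a (hypothetical) sensitivity label."""
    annotated = []
    for req in requirements:
        words = {w.strip(".,;:").lower() for w in req.split()}
        label = "TEE-candidate" if words & SENSITIVE_TERMS else "ordinary"
        annotated.append((req, label))
    return annotated

reqs = [
    "The system shall store the user password in encrypted form.",
    "The system shall display the current date on the dashboard.",
]
result = annotate(reqs)
assert result[0][1] == "TEE-candidate"
assert result[1][1] == "ordinary"
```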
APA, Harvard, Vancouver, ISO, and other styles
45

Jamrozik, Hervé. "Aide à la mise au point des applications parallèles et réparties à base d'objets persistants." Phd thesis, Grenoble 1, 1993. http://tel.archives-ouvertes.fr/tel-00005129.

Full text
Abstract:
The goal of this work is to provide support for debugging parallel and distributed applications based on persistent objects, enabling cyclic debugging and offering observation of the execution at a high level of abstraction. The non-determinism of this type of execution, and its sensitivity to any perturbation, make it very difficult to correct errors related to execution conditions. The limitations of static program analysis and of dynamic approaches based on a current execution lead us to advocate methods based on the replay of an execution, which provide a solution to non-determinism by fixing one particular execution. Debugging then takes place in a specific context where the behavior of the execution to be corrected is already known and can be observed through views of the execution adapted to the particularities of the execution environment. We define, in the context of object-based systems, a debugging system based on (control-driven) replay of an execution, enabling cyclic debugging and observation of the execution at the object level. We specify the replay service and the observation service, and propose a modular architecture for assembling the software components that provide these services. We then present the concrete application of the preceding proposals to the Guide system. We have built a replay kernel, structured as Guide objects, which automatically handles the recording and replay of a Guide execution.
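The record/replay idea the abstract describes can be illustrated in miniature: log every nondeterministic choice during a first run, then feed the log back to pin the execution down on a second run. This is a toy sketch of the general principle only, not of the Guide replay kernel.

```python
import random

def run(replay_log=None):
    # One "execution" making three nondeterministic choices. In record mode
    # (replay_log is None) the choices are fresh and logged; in replay mode
    # they are taken from the log, making the execution deterministic.
    trace, log = [], []
    for step in range(3):
        choice = replay_log[step] if replay_log is not None else random.randint(0, 9)
        log.append(choice)
        trace.append((step, choice))
    return trace, log

trace1, log1 = run()              # recorded execution
trace2, _ = run(replay_log=log1)  # replayed execution
assert trace2 == trace1           # the replay reproduces the original run exactly
```

Cyclic debugging then amounts to replaying the same log as many times as needed while observing different parts of the program.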
APA, Harvard, Vancouver, ISO, and other styles
46

Apiwattanapong, Taweesup. "Identifying Testing Requirements for Modified Software." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16148.

Full text
Abstract:
Throughout its lifetime, software must be changed for many reasons, such as bug fixing, performance tuning, and code restructuring. Testing modified software is the main activity performed to gain confidence that changes behave as they are intended and do not have adverse effects on the rest of the software. A fundamental problem of testing evolving software is determining whether test suites adequately exercise changes and, if not, providing suitable guidance for generating new test inputs that target the modified behavior. Existing techniques evaluate the adequacy of test suites based only on control- and data-flow testing criteria. They do not consider the effects of changes on program states and, thus, are not sufficiently strict to guarantee that the modified behavior is exercised. Also, because of the lack of this guarantee, these techniques can provide only limited guidance for generating new test inputs. This research has developed techniques that will assist testers in testing evolving software and provide confidence in the quality of modified versions. In particular, this research has developed a technique to identify testing requirements that ensure that the test cases satisfying them will result in different program states at preselected parts of the software. This research has also developed supporting techniques for identifying testing requirements. Such techniques include (1) a differencing technique, which computes differences and correspondences between two software versions and (2) two dynamic-impact-analysis techniques, which identify parts of software that are likely affected by changes with respect to a set of executions.
APA, Harvard, Vancouver, ISO, and other styles
47

Wu, Meng. "Analysis and Enforcement of Properties in Software Systems." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/90887.

Full text
Abstract:
Due to the lack of effective techniques for detecting and mitigating property violations, existing approaches to ensure the safety and security of software systems are often labor intensive and error prone. Furthermore, they focus primarily on functional correctness of the software code while ignoring micro-architectural details of the underlying processor, such as cache and speculative execution, which may undermine their soundness guarantees. To fill the gap, I propose a set of new methods and tools for ensuring the safety and security of software systems. Broadly speaking, these methods and tools fall into three categories. The first category is concerned with static program analysis. Specifically, I develop a novel abstract interpretation framework that considers both speculative execution and a cache model, and guarantees to be sound for estimating the execution time of a program and detecting side-channel information leaks. The second category is concerned with static program transformation. The goal is to eliminate side channels by equalizing the number of CPU cycles and the number of cache misses along all program paths for all sensitive variables. The third category is concerned with runtime safety enforcement. Given a property that may be violated by a reactive system, the goal is to synthesize an enforcer, called the shield, to correct the erroneous behaviors of the system instantaneously, so that the property is always satisfied by the combined system. I develop techniques to make the shield practical by handling both burst error and real-valued signals. The proposed techniques have been implemented and evaluated on realistic applications to demonstrate their effectiveness and efficiency.<br>Doctor of Philosophy<br>Just as everything around us must follow certain rules to work correctly, software systems must satisfy security and safety properties. In particular, software may leak information through unexpected channels, e.g.
program timing, and such leaks are difficult to detect or mitigate. For instance, if the execution time of a program is related to a sensitive value, an attacker may obtain information about that value. On the other hand, due to the complexity of software, it is nearly impossible to fully test or verify it. However, the correctness of software systems at runtime is crucial for critical applications. While existing approaches to finding or resolving property violations are often labor intensive and error prone, in this dissertation I first propose an automated tool for detecting and mitigating security vulnerabilities that arise through program timing. Programs processed by the tool are guaranteed to run in constant time for any sensitive values. I have also, for the first time, taken into consideration the influence of speculative execution, the cause behind the recent Spectre and Meltdown attacks. To enforce the correctness of programs at runtime, I introduce an extra component that can be attached to the original system to correct any violation the moment it happens, so that the entire system remains correct. All proposed methods have been evaluated on a variety of real-world applications. The results show that these methods are effective and efficient in practice.
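The timing channel the lay summary mentions can be made concrete with a classic toy example: an early-exit comparison whose running time depends on how long a matching prefix is, versus a branch-free variant that always inspects every byte. This is our own illustration of the vulnerability class, not code from the dissertation, whose tool operates on compiled programs.

```python
import hmac

def naive_equal(secret: bytes, guess: bytes) -> bool:
    # Early-exit comparison: the number of iterations, and hence the running
    # time, depends on how long a prefix of the secret the guess matches.
    # This data-dependent timing is exactly the kind of side channel at issue.
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

def constant_time_equal(secret: bytes, guess: bytes) -> bool:
    # Branch-free variant: always inspects every byte, accumulating
    # differences with XOR/OR, so timing does not depend on the data.
    if len(secret) != len(guess):
        return False
    diff = 0
    for a, b in zip(secret, guess):
        diff |= a ^ b
    return diff == 0

# Python's standard library exposes the same idea as hmac.compare_digest.
assert constant_time_equal(b"s3cret", b"s3cret") == hmac.compare_digest(b"s3cret", b"s3cret")
```

A static tool in the spirit of this dissertation would aim to detect code shaped like the first function and transform it toward the second, while also accounting for caches and speculation.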
APA, Harvard, Vancouver, ISO, and other styles
48

Henry, Julien. "Static analysis of program by Abstract Interpretation and Decision Procedures." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM037/document.

Full text
Abstract:
L'analyse statique de programme a pour but de prouver automatiquement qu'un programme vérifie certaines propriétés. L'interprétation abstraite est un cadre théorique permettant de calculer des invariants de programme. Ces invariants sont des propriétés sur les variables du programme vraies pour toute exécution. La précision des invariants calculés dépend de nombreux paramètres, en particulier du domaine abstrait et de l'ordre d'itération utilisés pendant le calcul d'invariants. Dans cette thèse, nous proposons plusieurs extensions de cette méthode qui améliorent la précision de l'analyse. Habituellement, l'interprétation abstraite consiste en un calcul de point fixe d'un opérateur obtenu après convergence d'une séquence ascendante, utilisant un opérateur appelé élargissement. Le point fixe obtenu est alors un invariant. Il est ensuite possible d'améliorer cet invariant via une séquence descendante sans élargissement. Nous proposons une méthode pour améliorer un point fixe après la séquence descendante, en recommençant une nouvelle séquence depuis une valeur initiale choisie judicieusement. L'interprétation abstraite peut également être rendue plus précise en distinguant tous les chemins d'exécution du programme, au prix d'une explosion exponentielle de la complexité. Le problème de satisfiabilité modulo théorie (SMT), dont les techniques de résolution ont été grandement améliorées cette décennie, permet de représenter ces ensembles de chemins implicitement. Nous proposons d'utiliser cette représentation implicite à base de SMT et de l'appliquer à des ordres d'itération de l'état de l'art pour obtenir des analyses plus précises. Nous proposons ensuite de coupler SMT et interprétation abstraite au sein de nouveaux algorithmes appelés Modular Path Focusing et Property-Guided Path Focusing, qui calculent des résumés de boucles et de fonctions de façon modulaire, guidés par des traces d'erreur.
Notre technique a différents usages: elle permet de montrer qu'un état d'erreur est inatteignable, mais également d'inférer des préconditions aux boucles et aux fonctions. Nous appliquons nos méthodes d'analyse statique à l'estimation du temps d'exécution pire cas (WCET). Dans un premier temps, nous présentons la façon d'exprimer ce problème via optimisation modulo théorie, et pourquoi un encodage naturel du problème en SMT génère des formules trop difficiles pour l'ensemble des solveurs actuels. Nous proposons un moyen simple et efficace de réduire considérablement le temps de calcul des solveurs SMT en ajoutant aux formules certaines propriétés impliquées obtenues par analyse statique. Enfin, nous présentons l'implémentation de Pagai, un nouvel analyseur statique pour LLVM, qui calcule des invariants numériques grâce aux différentes méthodes décrites dans cette thèse. Nous avons comparé les différentes techniques implémentées sur des programmes open-source et des benchmarks utilisés par la communauté.<br>Static program analysis aims at automatically determining whether a program satisfies some particular properties. For this purpose, abstract interpretation is a framework that enables the computation of invariants, i.e. properties on the variables that always hold for any program execution. The precision of these invariants depends on many parameters, in particular the abstract domain, and the iteration strategy for computing these invariants. In this thesis, we propose several improvements on the abstract interpretation framework that enhance the overall precision of the analysis. Usually, abstract interpretation consists in computing an ascending sequence with widening, which converges towards a fixpoint which is a program invariant; then computing a descending sequence of correct solutions without widening.
We describe and experiment with a method to improve a fixpoint after its computation, by starting again a new ascending/descending sequence with a smarter starting value. Abstract interpretation can also be made more precise by distinguishing paths inside loops, at the expense of possibly exponential complexity. Satisfiability modulo theories (SMT), whose efficiency has been considerably improved in the last decade, allows sparse representations of paths and sets of paths. We propose to combine this SMT representation of paths with various state-of-the-art iteration strategies to further improve the overall precision of the analysis. We propose a second coupling between abstract interpretation and SMT in a program verification framework called Modular Path Focusing, which computes function and loop summaries by abstract interpretation in a modular fashion, guided by error paths obtained with SMT. Our framework can be used for various purposes: it can prove the unreachability of certain error program states, but can also synthesize function/loop preconditions for which these error states are unreachable. We then describe an application of static analysis and SMT to the estimation of program worst-case execution time (WCET). We first present how to express WCET as an optimization modulo theory problem, and show that natural encodings into SMT yield formulas intractable for all current production-grade solvers.
We propose an efficient way to considerably reduce the computation time of the SMT solvers by conjoining to the formulas well-chosen summaries of program portions obtained by static analysis. We finally describe the design and the implementation of Pagai, a new static analyzer working over the LLVM compiler infrastructure, which computes numerical inductive invariants using the various techniques described in this thesis. Because of the non-monotonicity of the results of abstract interpretation with widening operators, it is difficult to conclude that some abstraction is more precise than another based on theoretical local precision results. We thus conducted extensive comparisons between our new techniques and previous ones, on a variety of open-source packages and benchmarks used in the community.
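The ascending-with-widening / descending-without-widening scheme this abstract builds on can be demonstrated on the interval domain with the loop `x = 0; while x < 100: x += 1`. The following is our own minimal sketch of that textbook scheme, vastly simpler than Pagai: widening jumps the unstable upper bound to infinity, and one descending step recovers the precise invariant.

```python
# Interval abstract domain: values are (lo, hi) pairs; None is bottom.
INF = float("inf")

def join(a, b):
    if a is None: return b
    if b is None: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    # Standard interval widening: any bound that grew jumps to infinity,
    # which forces the ascending sequence to converge in finitely many steps.
    if a is None: return b
    if b is None: return a
    lo = a[0] if b[0] >= a[0] else -INF
    hi = a[1] if b[1] <= a[1] else INF
    return (lo, hi)

def meet_lt(a, k):
    # Abstract effect of the loop test "x < k".
    if a is None or a[0] >= k: return None
    return (a[0], min(a[1], k - 1))

def post(a):
    # Abstract effect of the loop body "x = x + 1".
    return None if a is None else (a[0] + 1, a[1] + 1)

def analyze():
    init = (0, 0)  # x = 0 on loop entry
    x = None
    # Ascending sequence with widening: stabilizes at (0, +inf).
    while True:
        nxt = widen(x, join(init, post(meet_lt(x, 100))))
        if nxt == x: break
        x = nxt
    # One descending step without widening tightens this to (0, 100).
    return join(init, post(meet_lt(x, 100)))

assert analyze() == (0, 100)
```

The descending step is where precision lost to widening is recovered; the thesis's contribution of restarting a new ascending/descending sequence from a well-chosen value goes beyond this basic loop.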
APA, Harvard, Vancouver, ISO, and other styles
49

Cubero-Castan, Michel. "Vers une définition méthodique d'architecture de calculateur pour l'exécution parallèle des langages fonctionnels." Toulouse 3, 1988. http://www.theses.fr/1988TOU30159.

Full text
Abstract:
Definition of a machine offering three primary qualities: efficiency, simplicity, and uniformity. The main stages of this design are presented: the characterization of the basic mechanisms; the definition of the execution model with respect to the architecture; and the elements of evaluation.
APA, Harvard, Vancouver, ISO, and other styles
50

Yourst, Matt T. "Peptidal processor enhanced with programmable translation and integrated dynamic acceleration logic /." Diss., Online access via UMI:, 2005.

Find full text
Abstract:
Thesis (M.S.)--State University of New York at Binghamton, Department of Computer Science, Thomas J. Watson School of Engineering and Applied Science, 2005.<br>"This dissertation is a compound document (contains both a paper copy and a CD as part of the dissertation)"--ProQuest abstract document view. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles