Dissertations / Theses on the topic 'Install programs (Computer programs)'

Consult the top 50 dissertations / theses for your research on the topic 'Install programs (Computer programs).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Wu, YanHao. "SIP-based location service provision." Thesis, University of the Western Cape, 2005. http://etd.uwc.ac.za/index.php?module=etd&amp.

Full text
Abstract:
Location-based service (LBS) is a geographical location-related service that provides highly personalized services for users. It is a platform for network operators to provide new and innovative ways of increasing profits from new services. With the rapidly growing trend toward LBS, there is a need for standard LBS protocols. This thesis started by introducing the Internet Engineering Task Force GEOPRIV working group, which endeavors to provide standard LBS protocols capable of transferring geographic location information for diverse location-aware applications. Through careful observation, it was found that Session Initiation Protocol (SIP) is well suited to the GEOPRIV requirements. The aim of this research was therefore to explore the possibility of the integration of LBS and the SIP protocol and, to some extent, fulfill the GEOPRIV requirements.
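As a schematic illustration of the kind of SIP/GEOPRIV integration the thesis explores, and not code or message formats taken from it, the sketch below assembles a SIP PUBLISH request whose body is a simplified PIDF-LO location object; the header values and XML fields are placeholders, and a real message would carry a GML geometry and proper dialog identifiers.

```python
# Schematic SIP PUBLISH carrying a simplified PIDF-LO location object.
# Illustrative only: real messages need correct tags, branch parameters,
# authentication, and a GML geometry inside location-info.
def build_location_publish(user, host, lat, lon):
    body = f"""<?xml version="1.0" encoding="UTF-8"?>
<presence xmlns="urn:ietf:params:xml:ns:pidf"
          xmlns:gp="urn:ietf:params:xml:ns:pidf:geopriv10"
          entity="pres:{user}@{host}">
  <tuple id="loc1">
    <status>
      <gp:geopriv>
        <gp:location-info>{lat} {lon}</gp:location-info>
        <gp:usage-rules>
          <gp:retransmission-allowed>no</gp:retransmission-allowed>
        </gp:usage-rules>
      </gp:geopriv>
    </status>
  </tuple>
</presence>"""
    headers = "\r\n".join([
        f"PUBLISH sip:{user}@{host} SIP/2.0",
        f"To: <sip:{user}@{host}>",
        f"From: <sip:{user}@{host}>;tag=1234",
        "Call-ID: example-call-id",
        "CSeq: 1 PUBLISH",
        "Event: presence",
        "Content-Type: application/pidf+xml",
        f"Content-Length: {len(body.encode())}",
        "", ""])
    return headers + body

print(build_location_publish("alice", "example.com", -33.93, 18.46))
```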
APA, Harvard, Vancouver, ISO, and other styles
2

Srinivas, Tejaswi. "Mercury Instant Messaging System: A collaborative instant messaging tool." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2677.

Full text
Abstract:
The purpose of this project is to use Java technology to create an instant messenger application that could be used by any person who has the basic knowledge of working with a graphical user interface. The goal here is to develop an application that provides communication to users running different operating systems.
APA, Harvard, Vancouver, ISO, and other styles
3

Illsley, Martin. "Transforming imperative programs." Thesis, University of Edinburgh, 1988. http://hdl.handle.net/1842/10973.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Murrill, Branson Wayne. "Error flow in computer programs." W&M ScholarWorks, 1991. https://scholarworks.wm.edu/etd/1539623805.

Full text
Abstract:
White box program analysis has been applied to program testing for some time, but this analysis is primarily grounded in program syntax, while errors arise from incorrect program semantics. We introduce a semantically-based technique called error flow analysis, which is used to investigate the behavior of a program at the level of data state transitions. Error flow analysis is based on a model of program execution as a composition of functions that each map a prior data state into a subsequent data state. According to the fault/failure model, failure occurs when a fault causes an infection in the data state which then propagates to output. A faulty program may also produce coincidentally correct output for a given input if the fault resists infection, or an infection is cancelled by subsequent computation. We investigate this phenomenon using dynamic error flow analysis to track the infection and propagation of errors in the data states of programs with seeded faults. This information is gathered for a particular fault over many inputs on a path-by-path basis to estimate execution, infection, and failure rates as well as characteristics of error flow behavior for the fault. Those paths that exhibit high failure rates would be more desirable to test for this fault than those with lower failure rates, and we look for error flow characteristics that correlate with high failure rate. We present the results of dynamic error flow experiments on several programs, and suggest ways in which error flow information can be used in program analysis and testing.
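The data-state view of execution lends itself to a small illustration. The sketch below is my own, not the author's tooling: it runs a correct function and a fault-seeded variant side by side, checks whether their intermediate data states diverge (infection) and whether the divergence reaches the output (failure), and estimates both rates over random inputs.

```python
import random

# A correct function and a fault-seeded variant, each exposing its
# intermediate data states so infection can be observed directly.
def correct(x):
    a = x * 2
    b = a + x            # correct computation
    return b, [a, b]

def seeded(x):
    a = x * 2
    b = a + abs(x)       # seeded fault: abs(x) instead of x
    return b, [a, b]

def trial(x):
    out_ok, states_ok = correct(x)
    out_f,  states_f  = seeded(x)
    infected = states_ok != states_f     # data states diverge at some point
    failed   = out_ok != out_f           # divergence propagates to output
    return infected, failed

inputs = [random.randint(-50, 50) for _ in range(1000)]
results = [trial(x) for x in inputs]
n = len(results)
print("infection rate:", sum(i for i, _ in results) / n)
print("failure rate:  ", sum(f for _, f in results) / n)
```

For nonnegative inputs the seeded fault is coincidentally correct, so the measured infection and failure rates stay below one.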
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Huiqing. "Refactoring Haskell programs." Thesis, University of Kent, 2006. https://kar.kent.ac.uk/14425/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Allemang, Dean T. "Understanding programs as devices /." The Ohio State University, 1990. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487676261012487.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Xia, Ying Han. "Establishing trust in encrypted programs." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24761.

Full text
Abstract:
Thesis (Ph.D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Owen, Henry; Committee Co-Chair: Abler, Randal; Committee Member: Copeland, John; Committee Member: Giffin, Jon; Committee Member: Hamblen, Jim.
APA, Harvard, Vancouver, ISO, and other styles
8

Koskinen, Eric John. "Temporal verification of programs." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.607698.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ravelo, Jesus N. "Relations, graphs and programs." Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Blumofe, Robert D. (Robert David). "Executing multithreaded programs efficiently." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11095.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (p. 135-145).
by Robert D. Blumofe.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
11

Frigo, Matteo 1968. "Portable high-performance programs." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80594.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Melody, Kevin Andrew. "Computer programs supporting instruction in acoustics." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA343632.

Full text
Abstract:
Thesis (M.S. in Engineering Acoustics) Naval Postgraduate School, March 1998.
Thesis advisor(s): Sanders, James V. "March 1998." Includes bibliographical references (p. 105). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
13

Green, Thomas Alan. "Computer programs supporting instruction in acoustics." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1996. http://handle.dtic.mil/100.2/ADA327082.

Full text
Abstract:
Thesis (M.S. in Engineering Acoustics) Naval Postgraduate School, December 1996.
Thesis advisor(s): Sanders, J. V.; Atchley, A. A. "December 1996." Includes bibliographical references (p. 215). Also Available online.
APA, Harvard, Vancouver, ISO, and other styles
14

Givan, Robert Lawrence. "Automatically inferring properties of computer programs." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/11051.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 97-101).
by Robert Lawrence Givan, Jr.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
15

Lloyd, William Samuel. "Causal reasoning about distributed programs." W&M ScholarWorks, 1991. https://scholarworks.wm.edu/etd/1539623806.

Full text
Abstract:
We present an integrated approach to the specification, verification and testing of distributed programs. We show how "global" properties defined by transition axiom specifications can be interpreted as definitions of causal relationships between process states. We explain why reasoning about causal rather than global relationships yields a clearer picture of distributed processing. We present a proof system for showing the partial correctness of CSP programs that places strict restrictions on assertions. It admits no global assertions. A process annotation may reference only local state. Glue predicates relate pairs of process states at points of interprocess communication. No assertion references auxiliary variables; appropriate use of control predicates and vector clock values eliminates the need for them. Our proof system emphasizes causality. We do not prove processes correct in isolation. We instead track causality as we write our annotations. When we come to a send or receive, we consider all the statements that could communicate with it, and use the semantics of CSP message passing to derive its postcondition. We show that our CSP proof system is sound and relatively complete, and that we need only recursive assertions to prove that any program in our fragment of CSP is partially correct. Our proof system is, therefore, as powerful as other proof systems for CSP. We extend our work to develop proof systems for asynchronous communication. For each proof system, our motivation is to be able to write proofs that show that code satisfies its specification, while making only assertions we can use to define the aspects of process state that we should trace during test runs, and check during postmortem analysis. We can trace the assertions we make without having to modify program code or add synchronization or message passing. Why, if we verify correctness, would we want to test? We observe that a proof, like a program, is susceptible to error. By tracing and analyzing program state during testing, we can build our confidence that our proof is valid.
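As a loose illustration of one ingredient mentioned above, the use of vector clock values to track causality between traced process states, the sketch below (a simplification of mine, not the thesis's proof system) implements the standard vector clock update rules and a happened-before test that could be applied during postmortem analysis of a test run.

```python
# Standard vector clock bookkeeping for message-passing processes.
class Process:
    def __init__(self, pid, nprocs):
        self.pid = pid
        self.clock = [0] * nprocs

    def local_event(self):
        self.clock[self.pid] += 1

    def send(self):
        self.clock[self.pid] += 1
        return list(self.clock)          # timestamp travels with the message

    def receive(self, msg_clock):
        self.clock = [max(a, b) for a, b in zip(self.clock, msg_clock)]
        self.clock[self.pid] += 1

def happened_before(c1, c2):
    """True if the state stamped c1 causally precedes the state stamped c2."""
    return all(a <= b for a, b in zip(c1, c2)) and c1 != c2

p0, p1 = Process(0, 2), Process(1, 2)
p0.local_event()
ts = p0.send()
p1.receive(ts)
print(happened_before([1, 0], p1.clock))   # True: p0's event precedes p1's receive
```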
APA, Harvard, Vancouver, ISO, and other styles
16

Wassell, Mark P. "Semantic optimisation in datalog programs." Master's thesis, University of Cape Town, 1990. http://hdl.handle.net/11427/13556.

Full text
Abstract:
Bibliography: leaves 138-142.
Datalog is the fusion of Prolog and Database technologies aimed at producing an efficient, logic-based, declarative language for databases. This fusion takes the best of logic programming for the syntax of Datalog, and the best of database systems for the operational part of Datalog. As is the case with all declarative languages, optimisation is necessary to improve the efficiency of programs. Semantic optimisation uses meta-knowledge describing the data in the database to optimise queries and rules, aiming to reduce the resources required to answer queries. In this thesis, I analyse prior work that has been done on semantic optimisation and then propose an optimisation system for Datalog that includes optimisation of recursive programs and a semantic knowledge management module. A language, DatalogiC, which is an extension of Datalog that allows semantic knowledge to be expressed, has also been devised as an implementation vehicle. Finally, empirical results concerning the benefits of semantic optimisation are reported.
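A minimal sketch of semantic optimisation in the spirit described above, written in Python with invented predicate and condition names rather than in DatalogiC: integrity constraints recorded as conditions known to hold for every tuple of a predicate are used to drop redundant selections from a query before it is evaluated.

```python
# Meta-knowledge: conditions guaranteed to hold for every tuple of a predicate.
# (Hypothetical example data; predicate and condition names are invented.)
semantic_knowledge = {
    "employee": {"salary > 0", "age >= 18"},
}

def optimise(query):
    """Drop query conditions already implied by the stored integrity constraints."""
    pred, conditions = query
    implied = semantic_knowledge.get(pred, set())
    kept = [c for c in conditions if c not in implied]
    return pred, kept

query = ("employee", ["dept = 'sales'", "salary > 0"])
print(optimise(query))   # ("employee", ["dept = 'sales'"]) -- one redundant test dropped
```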
APA, Harvard, Vancouver, ISO, and other styles
17

Bartenstein, Thomas W. "Rate Types for Stream Programs." Thesis, State University of New York at Binghamton, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10643063.

Full text
Abstract:

RATE TYPES is a novel type system to reason about and optimize data-intensive programs. Built around stream languages, RATE TYPES performs static quantitative reasoning about stream rates—the frequency of data items in a stream being consumed, processed, and produced. Despite the fact that streams are fundamentally dynamic, there are two essential concepts of stream rate control—throughput ratio and natural rate—which are intimately related to the program structure itself and can be effectively reasoned about by a type system. RATE TYPES is proven to correspond with a time-aware operational semantics which supports parallelism. The strong correspondence result tolerates arbitrary schedules, and does not require any synchronization between stream filters. RATE TYPES is also implemented on stream programs, demonstrating its effectiveness in predicting stream data rates in real-world stream programs. Applications of RATE TYPES are discussed, including an application of RATE TYPES to optimize energy consumption.
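As a rough, informal illustration of static reasoning about throughput ratios (my own sketch, not the RATE TYPES system): for a straight pipeline in which each filter pops and pushes fixed numbers of items per firing, the pipeline's output-per-input ratio is the product of the per-filter push/pop ratios and can be computed without executing the program.

```python
from fractions import Fraction

# Each filter is (pop, push): items consumed and produced per firing.
# Hypothetical three-stage pipeline.
pipeline = [(1, 2),   # upsampler: 1 in, 2 out
            (4, 1),   # decimator: 4 in, 1 out
            (1, 1)]   # pass-through

def throughput_ratio(filters):
    """Output items produced per input item consumed, in steady state."""
    ratio = Fraction(1)
    for pop, push in filters:
        ratio *= Fraction(push, pop)
    return ratio

print(throughput_ratio(pipeline))   # 1/2: two input items per output item
```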

APA, Harvard, Vancouver, ISO, and other styles
18

Lapointe, Stéphane. "Induction of recursive logic programs." Thesis, University of Ottawa (Canada), 1992. http://hdl.handle.net/10393/7467.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Wu, Jerry. "Using dynamic analysis to infer Python programs and convert them into database programs." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/121643.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 195-196).
I present Nero, a new system that automatically infers and regenerates programs that access databases. The developer first implements a Python program that uses lists and dictionaries to implement the database functionality. Nero then instruments the Python list and dictionary implementations and uses active learning to generate inputs that enable it to infer the behavior of the program. The program can be implemented in any arbitrary style as long as it implements behavior expressible in the domain specific language that characterizes the behaviors that Nero is designed to infer. The regenerated program replaces the Python lists and dictionaries with database tables and contains all code required to successfully access the databases. Results from several inferred and regenerated applications highlight the ability of Nero to enable developers with no knowledge of database programming to obtain programs that successfully access databases.
by Jerry Wu.
M. Eng.
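A minimal sketch of the instrumentation step the abstract describes, with an invented class and log format: wrapping Python's dictionary operations so that the accesses a program makes are recorded and can later be mapped onto database tables.

```python
# A dictionary wrapper that records every read and write, so the access
# pattern can later be replayed against (or translated into) database tables.
class RecordingDict(dict):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.log = []

    def __getitem__(self, key):
        self.log.append(("get", key))
        return super().__getitem__(key)

    def __setitem__(self, key, value):
        self.log.append(("put", key, value))
        super().__setitem__(key, value)

# A toy "user database" implemented with a plain dictionary.
users = RecordingDict()
users["alice"] = {"age": 30}
users["bob"] = {"age": 25}
_ = users["alice"]

print(users.log)
# [('put', 'alice', {'age': 30}), ('put', 'bob', {'age': 25}), ('get', 'alice')]
```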
APA, Harvard, Vancouver, ISO, and other styles
20

Duracz, Jan Andrzej. "Verification of floating point programs." Thesis, Aston University, 2010. http://publications.aston.ac.uk/15778/.

Full text
Abstract:
In this thesis we present an approach to automated verification of floating point programs. Existing techniques for automated generation of correctness theorems are extended to produce proof obligations for accuracy guarantees and absence of floating point exceptions. A prototype automated real number theorem prover is presented, demonstrating a novel application of function interval arithmetic in the context of subdivision-based numerical theorem proving. The prototype is tested on correctness theorems for two simple yet nontrivial programs, proving exception freedom and tight accuracy guarantees automatically. The experiments show how function intervals can be used to combat the information loss problems that limit the applicability of traditional interval arithmetic in the context of hard real number theorem proving.
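As a toy illustration of subdivision-based interval reasoning of the kind described above (not the prototype prover itself, and with an arbitrarily chosen bound): plain interval arithmetic over a subdivided domain suffices to establish a simple bound on a nonlinear expression.

```python
# Prove |x*(1 - x)| <= 0.26 for all x in [0, 1] by interval evaluation
# over a subdivided domain. (A toy bound chosen for illustration.)
def imul(a, b):
    products = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(products), max(products))

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def f_interval(x):
    return imul(x, isub((1.0, 1.0), x))      # interval enclosure of x*(1-x)

def verify(lo, hi, bound, pieces=1000):
    width = (hi - lo) / pieces
    for i in range(pieces):
        box = (lo + i*width, lo + (i+1)*width)
        enclosure = f_interval(box)
        if not (-bound <= enclosure[0] and enclosure[1] <= bound):
            return False                     # enclosure too wide or bound violated
    return True

print(verify(0.0, 1.0, 0.26))   # True: the bound holds on every subinterval
```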
APA, Harvard, Vancouver, ISO, and other styles
21

Florez-Larrahondo, German. "A trusted environment for MPI programs." Master's thesis, Mississippi State : Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-10172002-103135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Goulet, Jean 1939. "Data structures for chess programs." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65427.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Jones, Philip E. C. "Common subexpression detection in dataflow programs /." Title page, contents and summary only, 1989. http://web4.library.adelaide.edu.au/theses/09SM/09smj78.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Horsfall, Benjamin. "Automated reasoning for reflective programs." Thesis, University of Sussex, 2014. http://sro.sussex.ac.uk/id/eprint/49871/.

Full text
Abstract:
Reflective programming allows one to construct programs that manipulate or examine their behaviour or structure at runtime. One of the benefits is the ability to create generic code that is able to adapt to being incorporated into different larger programs, without modifications to suit each concrete setting. Due to the runtime nature of reflection, static verification is difficult and has been largely ignored or only weakly supported. This work focusses on supporting verification for cases where generic code that uses reflection is to be used in a “closed” program where the structure of the program is known in advance. This thesis first describes extensions to a verification system and semi-automated tool that was developed to reason about heap-manipulating programs which may store executable code on the heap. These extensions enable the tool to support a wider range of programs on account of the ability to provide stronger specifications. The system's underlying logic is an extension of separation logic that includes nested Hoare-triples which describe behaviour of stored code. Using this verification tool, with the crucial enhancements in this work, a specified reflective library has been created. The resulting work presents an approach where metadata is stored on the heap such that the reflective library can be implemented using primitive commands and then specified and verified, rather than developing new proof rules for the reflective operations. The supported reflective functions characterise a subset of Java's reflection library and the specifications guarantee both memory safety and a degree of functional correctness. To demonstrate the application of the developed solution two case studies are carried out, each of which focuses on different reflection features. The contribution to knowledge is a first look at how to support semi-automated static verification of reflective programs with meaningful specifications.
APA, Harvard, Vancouver, ISO, and other styles
25

D'Paola, Oscar Naim. "Performance visualization of parallel programs." Thesis, University of Southampton, 1995. https://eprints.soton.ac.uk/365532/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Wickerson, John Peter. "Concurrent verification for sequential programs." Thesis, University of Cambridge, 2013. https://www.repository.cam.ac.uk/handle/1810/265613.

Full text
Abstract:
This dissertation makes two contributions to the field of software verification. The first explains how verification techniques originally developed for concurrency can be usefully applied to sequential programs. The second describes how sequential programs can be verified using diagrams that have a parallel nature. The first contribution involves a new treatment of stability in verification methods based on rely-guarantee. When an assertion made in one thread of a concurrent system cannot be invalidated by the actions of other threads, that assertion is said to be 'stable'. Stability is normally enforced through side-conditions on rely-guarantee proof rules. This dissertation proposes instead to encode stability information into the syntactic form of the assertion. This approach, which we call explicit stabilisation, brings several benefits. First, we empower rely-guarantee with the ability to reason about library code for the first time. Second, when the rely-guarantee method is redeployed in a sequential setting, explicit stabilisation allows more details of a module's implementation to be hidden when verifying clients. Third, explicit stabilisation brings a more nuanced understanding of the important issue of stability in concurrent and sequential verification; such an understanding grows ever more important as verification techniques grow ever more complex. The second contribution is a new method of presenting program proofs conducted in separation logic. Building on work by Jules Bean, the ribbon proof is a diagrammatic alternative to the standard 'proof outline'. By emphasising the structure of a proof, ribbon proofs are intelligible and hence useful pedagogically. Because they contain less redundancy than proof outlines, and allow each proof step to be checked locally, they are highly scalable; this we illustrate with a ribbon proof of the Version 7 Unix memory manager. Where proof outlines are cumbersome to modify, ribbon proofs can be visually manoeuvred to yield proofs of variant programs. We describe the ribbon proof system, prove its soundness and completeness, and outline a prototype tool for mechanically checking the diagrams it produces.
APA, Harvard, Vancouver, ISO, and other styles
27

Nikolik, Borislav. "Data Dependence in Programs Involving Indexed Variables." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4688.

Full text
Abstract:
Symbolic execution is a powerful technique used to perform various activities such as program testing, formal verification of programs, etc. However, symbolic execution does not deal with indexed variables in an adequate manner. Integration of indexed variables such as arrays into symbolic execution would increase the generality of this technique. We present an original substitution technique that produces array-term-free constraints as a counterargument to the commonly accepted belief that symbolic execution cannot handle arrays. The substitution technique deals with constraints involving array terms with a single aggregate name, array terms with multiple aggregate names, and nested array terms. Our approach to solving constraints involving array terms is based on the analysis of the relationship between the array subscripts. Dataflow dependence analysis of programs involving indexed variables suffers from problems of undecidability. We propose a separation technique in which the array subscript constraints are separated from the loop path constraints. The separation technique suggests that the problem of establishing data dependencies is not as hard as the general loop problem. In this respect, we present a new general heuristic program analysis technique which is used to preserve the properties of the relations between program variables.
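The subscript analysis can be pictured with a small sketch of my own, not the thesis's full substitution technique: to decide a path constraint over the array terms a[i] and a[j], case-split on whether the subscripts alias; in each case the array terms are replaced by ordinary scalar symbols, leaving array-term-free constraints that are checked here by simple enumeration.

```python
from itertools import product

# Path constraint over array terms: a[i] > 0  and  a[j] < 0.
# After the aliasing case split, array terms become scalar symbols,
# so each case can be checked by enumeration over a small value range.
def satisfiable(alias):
    """Check the array-free constraint under one aliasing assumption."""
    values = range(-3, 4)
    if alias:                      # i == j: a[i] and a[j] are the same scalar v
        return any(v > 0 and v < 0 for v in values)
    # i != j: a[i] and a[j] become independent scalars v1, v2
    return any(v1 > 0 and v2 < 0 for v1, v2 in product(values, repeat=2))

for alias in (True, False):
    print("i == j" if alias else "i != j", "->",
          "satisfiable" if satisfiable(alias) else "unsatisfiable")
# i == j -> unsatisfiable   (a single cell cannot be both positive and negative)
# i != j -> satisfiable
```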
APA, Harvard, Vancouver, ISO, and other styles
28

Harman, Mark. "Functional models of procedural programs." Thesis, London Metropolitan University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315232.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Martin, Jonathan Charles. "Judgement day : terminating logic programs." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326732.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Cooper, Robert Charles Beaumont. "Debugging concurrent and distributed programs." Thesis, University of Cambridge, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.256762.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Hsieh, Wilson Cheng-Yi. "Extracting parallelism from sequential programs." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/14752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Sermuliņš, Jānis. "Cache optimizations for stream programs." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33359.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (leaves 73-75).
As processor speeds continue to increase, the memory bottleneck remains a primary impediment to attaining performance. Effective use of the memory hierarchy can result in significant performance gains. This thesis focuses on a set of transformations that either reduce cache-miss rate or reduce the number of memory accesses for the class of streaming applications, which are becoming increasingly prevalent in embedded, desktop and high-performance processing. A fully automated optimization algorithm is presented that reduces the memory bottleneck for stream applications developed in the high-level stream programming language StreamIt. This thesis presents four memory optimizations: 1) cache aware fusion, which combines adjacent program components while respecting instruction and data cache constraints, 2) execution scaling, which judiciously repeats execution of program components to improve instruction and state locality, 3) scalar replacement, which converts certain data buffers into a sequence of scalar variables that can be register allocated, and 4) optimized buffer management, which reduces the overall number of memory accesses issued by the program. The cache aware fusion and execution scaling reduce the instruction and data cache-miss rates and are founded upon a simple and intuitive cache model that quantifies the temporal locality for a sequence of actor executions.
(cont.) The scalar replacement and optimized buffer management reduce the number of memory accesses. An experimental evaluation of the memory optimizations is presented for three different architectures: StrongARM 1110, Pentium 3 and Itanium 2. Compared to unoptimized StreamIt code, the memory optimizations presented in this thesis yield a 257% speedup on the StrongARM, a 154% speedup on the Pentium 3, and a 152% speedup on Itanium 2. These numbers represent averages over our streaming benchmark suite. The most impressive speedups are demonstrated on an embedded processor StrongARM, which has only a single data and a single instruction cache, thus increasing the overall cost of memory operations and cache misses.
by Jānis Sermuliņš.
M.Eng.
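Of the four optimizations listed, scalar replacement is the simplest to picture. The sketch below is an invented Python analogue (StreamIt programs are written in a separate language): a small fixed-size buffer between two fused pipeline stages is replaced by scalar temporaries, removing the memory traffic through the intermediate array.

```python
# Before: a two-element buffer carries values between producer and consumer.
def fused_with_buffer(xs):
    out = []
    for i in range(0, len(xs), 2):
        buf = [0, 0]
        buf[0] = xs[i] * 2            # producer writes the buffer
        buf[1] = xs[i + 1] * 2
        out.append(buf[0] + buf[1])   # consumer reads it back
    return out

# After scalar replacement: the buffer cells become local scalars that a
# compiler could keep in registers.
def fused_scalar_replaced(xs):
    out = []
    for i in range(0, len(xs), 2):
        t0 = xs[i] * 2
        t1 = xs[i + 1] * 2
        out.append(t0 + t1)
    return out

data = [1, 2, 3, 4]
assert fused_with_buffer(data) == fused_scalar_replaced(data) == [6, 14]
```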
APA, Harvard, Vancouver, ISO, and other styles
33

Milicevic, Aleksandar Ph D. Massachusetts Institute of Technology. "Executable specifications for Java programs." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62442.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 55-57).
In this thesis, we present a unified environment for running declarative specifications in the context of an imperative object-oriented programming language. Specifications are Alloy-like, written in first-order relational logic with transitive closure, and the imperative language for this purpose is Java. By being able to mix imperative code with executable declarative specifications, the user can easily express constraint problems in-place, i.e. in terms of the existing data structures and objects on the heap. After a solution is found, our framework will automatically update the heap to reflect the solution, so the user can continue to manipulate the program heap in the usual imperative way, without ever having to manually translate the problem back and forth between the host programming environment and the solver language. We show that this approach is not only convenient, but, for certain problems, like puzzles or NP-complete graph algorithms, it can also outperform the manual implementation. We also present an optimization technique that allowed us to run our tool on heaps with almost 2000 objects.
by Aleksandar Milicevic.
S.M.
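A toy Python analogue of the workflow described above, with invented class and field names (the real system targets Java and Alloy-style relational logic): a declarative constraint is solved by brute-force search over existing heap objects, and the solution is written back into those objects so imperative code can keep using them.

```python
from itertools import permutations

# Existing heap objects manipulated by ordinary imperative code.
class Task:
    def __init__(self, name, duration):
        self.name, self.duration, self.slot = name, duration, None

tasks = [Task("a", 3), Task("b", 1), Task("c", 2)]

# Declarative specification: assign distinct slots 0..n-1 so that
# shorter tasks run earlier. Solved here by brute-force search.
def solve(tasks):
    for order in permutations(range(len(tasks))):
        ok = all(tasks[i].duration <= tasks[j].duration
                 for i in range(len(tasks)) for j in range(len(tasks))
                 if order[i] < order[j])
        if ok:
            for t, slot in zip(tasks, order):
                t.slot = slot            # write the solution back into the heap
            return True
    return False

solve(tasks)
print([(t.name, t.slot) for t in tasks])   # [('a', 2), ('b', 0), ('c', 1)]
```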
APA, Harvard, Vancouver, ISO, and other styles
34

Ansel, Jason (Jason Andrew). "Autotuning programs with algorithmic choice." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/87913.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 231-251).
The process of optimizing programs and libraries, both for performance and quality of service, can be viewed as a search problem over the space of implementation choices. This search is traditionally manually conducted by the programmer and often must be repeated when systems, tools, or requirements change. The overriding goal of this work is to automate this search so that programs can change themselves and adapt to achieve performance portability across different environments and requirements. To achieve this, first, this work presents the PetaBricks programming language which focuses on ways for expressing program implementation search spaces at the language level. Second, this work presents OpenTuner which provides sophisticated techniques for searching these search spaces in a way that can easily be adopted by other projects. PetaBricks is an implicitly parallel language and compiler where having multiple implementations of multiple algorithms to solve a problem is the natural way of programming. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The PetaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. PetaBricks also introduces novel techniques to autotune algorithms for different convergence criteria or quality of service requirements. We show that the PetaBricks autotuner is often able to find non-intuitive poly-algorithms that outperform more traditional hand written solutions. OpenTuner is an open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests. OpenTuner has been shown to perform well on complex search spaces up to 10³⁰⁰⁰ possible configurations in size.
by Jason Ansel.
Ph. D.
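A tiny, self-contained illustration of autotuning over algorithmic choice in the spirit described above, not PetaBricks or OpenTuner code: candidate implementations of the same contract are timed on a representative input and the fastest is selected for later use.

```python
import random
import timeit

# Two interchangeable implementations of the same sorting contract.
def insertion_sort(xs):
    xs = list(xs)
    for i in range(1, len(xs)):
        j, key = i, xs[i]
        while j > 0 and xs[j - 1] > key:
            xs[j] = xs[j - 1]
            j -= 1
        xs[j] = key
    return xs

def builtin_sort(xs):
    return sorted(xs)

def autotune(candidates, sample_input, repeats=5):
    """Pick the fastest candidate on a representative input."""
    timings = {f.__name__: timeit.timeit(lambda: f(sample_input), number=repeats)
               for f in candidates}
    best = min(timings, key=timings.get)
    return best, timings

sample = [random.randint(0, 10_000) for _ in range(2_000)]
best, timings = autotune([insertion_sort, builtin_sort], sample)
print("selected:", best)
```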
APA, Harvard, Vancouver, ISO, and other styles
35

Romero, M. B. A. "Graphical creation of structured programs." Thesis, University of Sussex, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.371200.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Hinz, Peter. "Visualizing the performance of parallel programs." Master's thesis, University of Cape Town, 1996. http://hdl.handle.net/11427/16141.

Full text
Abstract:
Bibliography: pages 110-115.
The performance analysis of parallel programs is a complex task, particularly if the program has to be efficient over a wide range of parallel machines. We have designed a performance analysis system called Chiron that uses scientific visualization techniques to guide and help the user in performance analysis activities. The aim of Chiron is to give the user full control over what section of the data he/she wants to investigate in detail. Chiron uses interactive three-dimensional graphics techniques to display large amounts of data in a compact and easy to understand/conceptualize way. The system assists in the tracking of performance bottlenecks by showing data in 10 different views and allowing the user to interact with the data. In this thesis the design and implementation of Chiron are described, and its effectiveness illustrated by means of three case studies.
APA, Harvard, Vancouver, ISO, and other styles
37

Das, Champak. "Automating transformational design for distributed programs." FIU Digital Commons, 1996. http://digitalcommons.fiu.edu/etd/2736.

Full text
Abstract:
We address the problem of designing concurrent, reactive, nonterminating programs. Our approach to developing concurrent programs involves the use of correctness-preserving transformations to realize each step of program development. The transformations we have designed automatically guarantee the preservation of the deadlock freedom property, and hence deadlock freedom does not have to be manually verified after each development step. Since our transformations are syntactic, they are easily mechanizable as well. This makes syntactic transformations particularly appealing for the development of large, complex, and correct distributed systems, where a manual approach would be prohibitively expensive. In this work we present a set of syntactic transformations along with an example of their application to the development of a simplified mobile telephone system.
APA, Harvard, Vancouver, ISO, and other styles
38

Kushman, Nate. "Generating computer programs from natural language descriptions." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101572.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 159-169).
This thesis addresses the problem of learning to translate natural language into preexisting programming languages supported by widely-deployed computer systems. Generating programs for existing computer systems enables us to take advantage of two important capabilities of these systems: computing the semantic equivalence between programs, and executing the programs to obtain a result. We present probabilistic models and inference algorithms which integrate these capabilities into the learning process. We use these to build systems that learn to generate programs from natural language in three different computing domains: text processing, solving math problems, and performing robotic tasks in a virtual world. In all cases the resulting systems provide significant performance gains over strong baselines which do not exploit the underlying system capabilities to help interpret the text.
by Nate Kushman.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
39

Muqtadir, Abdul. "Real-time finance management system." CSUSB ScholarWorks, 2006. https://scholarworks.lib.csusb.edu/etd-project/2992.

Full text
Abstract:
Discusses the development of a real-time finance management system (RFMS) computer application. RFMS lets users learn about and manage their personal finances and stock portfolio. Finances can be managed using management tools and calculators. The program uses a Java/XML based approach where real-time market data from different stock exchanges is fetched and displayed for the user. Stock performance can then be graphed.
APA, Harvard, Vancouver, ISO, and other styles
40

Abu Hashish, Nabil. "Mutation analysis of dynamically typed programs." Thesis, University of Hull, 2013. http://hydra.hull.ac.uk/resources/hull:8444.

Full text
Abstract:
The increasing use of dynamically typed programming languages brings a new challenge to software testing. In these languages, types are not checked at compile-time. Type errors must be found by testing and in general, programs written in these languages require additional testing compared to statically typed languages. Mutation analysis (or mutation testing) has been shown to be effective in testing statically (or strongly) typed programs. In statically typed programs, the type information is essential to ensure only type-correct mutants are generated. Mutation analysis has not so far been fully used for dynamically typed programs. In dynamically typed programs, at compile-time, the types of the values held in variables are not known. Therefore, it is not clear if a variable should be mutated with number, Boolean, string, or object mutation operators. This thesis investigates and introduces new approaches for the mutation analysis of dynamically typed programs. The first approach is a static approach that employs the static type context of variables to determine, if possible, type information and generate mutants in the manner of traditional mutation analysis. With static mutation there is the danger that the type context does not allow the precise type to be determined and so type-mutations are produced. In a type-mutation, the original and mutant expressions have a different type. These mutants may be too easily killed and if they are then they represent incompetent mutants that do not force the tester to improve the test set. The second approach is designed to avoid type-mutations. This approach requires that the types of variables are discovered. The types of variables are discovered at run-time. Using type information, it is possible to generate only type-correct mutants. This dynamic approach, although more expensive computationally, is more likely to produce high quality, difficult to kill, mutants.
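The dynamic approach can be sketched as follows (an invented illustration, not the thesis's tool): run the program once while recording the concrete run-time type observed at each mutation point, then offer only mutation operators that match those types, so every generated mutant is type-correct.

```python
# Step 1: record run-time types of the values reaching each mutation point.
observed_types = {}

def record(var, value):
    observed_types.setdefault(var, set()).add(type(value).__name__)
    return value

def program(x, flag):
    y = record("y", x + 1)            # mutation point: numeric expression
    z = record("z", flag and y > 0)   # mutation point: boolean expression
    return y, z

program(3, True)

# Step 2: choose only type-correct mutation operators for each point.
operators = {"int": ["+1 -> -1", "+ -> -"], "bool": ["and -> or", "negate condition"]}

for var, types in observed_types.items():
    for t in types:
        print(var, t, "->", operators.get(t, ["(no operators)"]))
# y int -> ['+1 -> -1', '+ -> -']
# z bool -> ['and -> or', 'negate condition']
```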
APA, Harvard, Vancouver, ISO, and other styles
41

Mareček, Jakub. "Exploiting structure in integer programs." Thesis, University of Nottingham, 2012. http://eprints.nottingham.ac.uk/49276/.

Full text
Abstract:
The thesis argues the case for exploiting certain structures in integer linear programs. Integer linear programs are optimisation problems, where one minimises or maximises a linear function of variables, whose values are required to be integral as well as satisfying certain linear equalities and inequalities. For such an abstract problem, there are very good general-purpose solvers. The state of the art in such solvers is an approach known as “branch and bound”. The performance of such solvers depends crucially on four types of in-built heuristics: primal, improvement, branching, and cut-separation or, more generally, bounding heuristics. However, such heuristics have, until recently, not exploited structure in integer linear programs beyond the recognition of certain types of single-row constraints. Many alternative approaches to integer linear programming can be cast in the following, novel framework. “Structure” in any integer linear program is a class of equivalence among triples of algorithms: deriving combinatorial objects from the input, adapting them, and transforming the adapted object to solutions of the original integer linear program. Many such alternative approaches are, however, inherently incompatible with branch and bound solvers. We, hence, define a structure to be “useful”, only when it extracts submatrices, which allow for the implementation of more than one of the four types of heuristics required in the branch and bound approach. Although the extraction of the best possible submatrices is non-trivial, the lack of a considerable submatrix with a given property can often be recognised quickly, and storing useful submatrices in a “pool” makes it possible to use them repeatedly. The goal is to explore whether the state-of-the-art solvers could make use of the structures studied in the academia. Three examples of useful structures in integer linear programs are presented. A particularly widely applicable useful structure relies on the aggregation of variables. Its application can be seen as a decomposition into three stages: Firstly, we partition variables in the original instance into as small number as possible of support sets of constraints forcing convex combinations of binary variables to be less than or equal to one in the original instance, and one-element sets. Secondly, we solve the “aggregated” instance corresponding to the partition of variables. Under certain conditions, we obtain a valid lower bound. Finally, we fix the solution of the aggregated instance in primal and improvement heuristics for the original instance, and use the partition in hyper-plane branching heuristics. Under certain conditions, the primal heuristics are guaranteed to find a feasible solution to the original instance. We also present structures exploiting mutual-exclusion and precedence constraints, prevalent in scheduling and timetabling applications. Mutual exclusion constraints correspond to instances of graph colouring. For numerous extensions of graph colouring, there are natural primal and branching heuristics. We present lower bounding heuristics for extensions of graph colouring, based on augmented Lagrangian methods for novel semidefinite programming relaxations, and reformulations based on a novel transformation of graph colouring to graph multicolouring. Precedence constraints correspond to an instance of precedence-constrained multi-dimensional packing. 
For such packing problems, we present heuristics based on an adaptive discretisation and strong discretised linear programming relaxations. On instances of packing unit-cubes into a box, the reformulation makes it possible to solve instances that are five orders of magnitude larger than previously possible. On instances from complex timetabling problems, which combine mutual-exclusion and packing constraints, the combination of heuristics above can often result in the gap between primal and dual bounds being reduced to under five percent, orders of magnitude faster than using state of the art solvers, without any information being used that is outside of the instance.
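The first stage of the aggregation structure, partitioning variables using the support sets of 'at most one' constraints, can be sketched with invented instance data as follows; the greedy choice of blocks is my own simplification, not the thesis's algorithm.

```python
# 'At most one' constraints, given as the sets of binary variables they cover.
# (Hypothetical instance data.)
at_most_one = [{"x1", "x2", "x3"}, {"x3", "x4"}, {"x5", "x6"}]
variables = {"x1", "x2", "x3", "x4", "x5", "x6", "x7"}

def partition(variables, constraints):
    """Greedily partition variables into constraint support sets and singletons."""
    blocks, covered = [], set()
    for support in constraints:
        block = support - covered        # keep the partition disjoint
        if block:
            blocks.append(block)
            covered |= block
    blocks += [{v} for v in sorted(variables - covered)]   # leftovers
    return blocks

print(partition(variables, at_most_one))
# e.g. [{'x1', 'x2', 'x3'}, {'x4'}, {'x5', 'x6'}, {'x7'}]
```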
APA, Harvard, Vancouver, ISO, and other styles
42

Weiser, David A. "Hybrid analysis of multi-threaded Java programs." Laramie, Wyo. : University of Wyoming, 2007. http://proquest.umi.com/pqdweb?did=1400971421&sid=1&Fmt=2&clientId=18949&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Chang, Yu-Pin. "International extension programs information system." CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2346.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ellis, Jason Benjamin. "Palaver tree online : technological support for classroom integration of Oral History." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/9189.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Xu, HaiYing. "Dynamic purity analysis for Java programs." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=18481.

Full text
Abstract:
The pure methods in a program are those that exhibit functional or side effect free behaviour, a useful property of methods or code in the context of program optimization as well as program understanding. However, gathering purity data is not a trivial task, and existing purity investigations present primarily static results based on a compile-time analysis of program code. We perform a detailed examination of dynamic method purity in Java programs using a Java Virtual Machine (JVM) based analysis. We evaluate multiple purity definitions that range from strong to weak, consider purity forms specific to dynamic execution, and accommodate constraints imposed by an example consumer application of purity data, memoization. We show that while dynamic method purity is actually fairly consistent between programs, examining pure invocation counts and the percentage of the bytecode instruction stream contained within some pure method reveals great variation. We also show that while weakening purity definitions exposes considerable dynamic purity, consumer requirements can limit the actual utility of this information. A good understanding of which methods are "pure" and in what sense is an important contribution to understanding when, how, and what optimizations or properties a program may exhibit.
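A rough Python analogue of a weak, observation-based purity check (illustrative only; the thesis instruments a JVM and analyses Java programs): a wrapper records whether any observed call mutates its arguments, and only functions that have always behaved purely would be considered for memoization.

```python
import copy
import functools

purity = {}   # function name -> observed purity so far

def observe_purity(fn):
    """Mark a function impure if any observed call mutates its arguments."""
    purity[fn.__name__] = True

    @functools.wraps(fn)
    def wrapper(*args):
        before = copy.deepcopy(args)
        result = fn(*args)
        if copy.deepcopy(args) != before:     # argument state changed?
            purity[fn.__name__] = False
        return result
    return wrapper

@observe_purity
def total(xs):
    return sum(xs)

@observe_purity
def append_zero(xs):
    xs.append(0)          # side effect on the argument
    return xs

total([1, 2, 3])
append_zero([1, 2, 3])
print(purity)   # {'total': True, 'append_zero': False}
```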
APA, Harvard, Vancouver, ISO, and other styles
46

Keating, Marla Jo Matlick. "Computers in college art and design programs /." Online version of thesis, 1992. http://hdl.handle.net/1850/11630.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Nagarajan, R. "Typed concurrent programs : specification and verification." Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369244.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Busvine, David John. "Detecting parallel structures in functional programs." Thesis, Heriot-Watt University, 1993. http://hdl.handle.net/10399/1415.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Jarvis, Stephen Andrew. "Profiling large-scale lazy functional programs." Thesis, Durham University, 1996. http://etheses.dur.ac.uk/5307/.

Full text
Abstract:
The LOLITA natural language processing system is an example of one of the ever increasing number of large-scale systems written entirely in a functional programming language. The system consists of over 50,000 lines of Haskell code and is able to perform a number of tasks such as semantic and pragmatic analysis of text, context scanning and query analysis. Such a system is more useful if the results are calculated in real-time, therefore the efficiency of such a system is paramount. For the past three years we have used profiling tools supplied with the Haskell compilers GHC and HBC to analyse and reason about our programming solutions and have achieved good results; however, our experience has shown that the profiling life-cycle is often too long to make a detailed analysis of a large system possible, and the profiling results are often misleading. A profiling system is developed which allows three types of functionality not previously found in a profiler for lazy functional programs. Firstly, the profiler is able to produce results based on an accurate method of cost inheritance. We have found that this reduces the possibility of the programmer obtaining misleading profiling results. Secondly, the programmer is able to explore the results after the execution of the program. This is done by selecting and deselecting parts of the program using a post-processor. This greatly reduces the analysis time as no further compilation, execution or profiling of the program is needed. Finally, the new profiling system allows the user to examine aspects of the run-time call structure of the program. This is useful in the analysis of the run-time behaviour of the program. Previous attempts at extending the results produced by a profiler in such a way have failed due to the exceptionally high overheads. Exploration of the overheads produced by the new profiling scheme show that typical overheads in profiling the LOLITA system are: a 10% increase in compilation time; a 7% increase in executable size and a 70% run-time overhead. These overheads mean a considerable saving in time in the detailed analysis of profiling a large, lazy functional program.
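Accurate cost inheritance amounts to attributing each callee's cost to every function above it in the call structure. The sketch below is my own simplification, not the thesis's profiler: it rolls up the costs of a small call tree so that a caller's inherited cost includes everything its callees did.

```python
# A small call tree: (function name, own cost, children). Costs are arbitrary units.
call_tree = ("main", 1, [
    ("parse", 5, []),
    ("analyse", 2, [
        ("semantics", 7, []),
        ("pragmatics", 4, []),
    ]),
])

def inherited_costs(node, table):
    """Record each function's own cost plus the cost of everything it called."""
    name, own, children = node
    total = own + sum(inherited_costs(child, table) for child in children)
    table[name] = table.get(name, 0) + total
    return total

costs = {}
inherited_costs(call_tree, costs)
print(costs)
# {'parse': 5, 'semantics': 7, 'pragmatics': 4, 'analyse': 13, 'main': 19}
```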
APA, Harvard, Vancouver, ISO, and other styles
50

Justo, George Roger Ribeiro. "Configuration-oriented development of parallel programs." Thesis, University of Kent, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333965.

Full text
APA, Harvard, Vancouver, ISO, and other styles