
Dissertations / Theses on the topic 'Dynamic programming. Data structures (Computer science)'



Consult the top 42 dissertations / theses for your research on the topic 'Dynamic programming. Data structures (Computer science).'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Zhu, Yingchun 1968. "Optimizing parallel programs with dynamic data structures." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=36745.

Abstract:
Distributed memory parallel architectures support a memory model where some memory accesses are local, and thus inexpensive, while other memory accesses are remote, and potentially quite expensive. In order to achieve efficiency on such architectures, we need to reduce remote accesses. This is particularly challenging for applications that use dynamic data structures.
In this thesis, I present two compiler techniques to reduce the overhead of remote memory accesses for applications based on dynamic data structures: locality techniques and communication optimizations. Locality techniques include a static locality analysis, which statically estimates when an indirect reference via a pointer can be safely assumed to be a local access, and dynamic locality checks, which consist of runtime tests to identify local accesses. Communication techniques include: (1) code movement to issue remote reads earlier and writes later; (2) code transformations to replace repeated/redundant remote accesses with one access; and (3) transformations to block or pipeline a group of remote requests together. Both locality and communication techniques have been implemented and incorporated into our EARTH-McCAT compiler framework, and a series of experiments has been conducted to evaluate these techniques. The experimental results show that we are able to achieve up to 26% performance improvement with each technique alone, and up to 29% performance improvement when both techniques are applied together.
2

Kuper, Lindsey. "Lattice-based data structures for deterministic parallel and distributed programming." Thesis, Indiana University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3726443.

Abstract:

Deterministic-by-construction parallel programming models guarantee that programs have the same observable behavior on every run, promising freedom from bugs caused by schedule nondeterminism. To make that guarantee, though, they must sharply restrict sharing of state between parallel tasks, usually either by disallowing sharing entirely or by restricting it to one type of data structure, such as single-assignment locations.

I show that lattice-based data structures, or LVars, are the foundation for a guaranteed-deterministic parallel programming model that allows a more general form of sharing. LVars allow multiple assignments that are inflationary with respect to a given lattice. They ensure determinism by allowing only inflationary writes and "threshold" reads that block until a lower bound is reached. After presenting the basic LVars model, I extend it to support event handlers, which enable an event-driven programming style, and non-blocking "freezing" reads, resulting in a quasi-deterministic model in which programs behave deterministically modulo exceptions.

I demonstrate the viability of the LVars model with LVish, a Haskell library that provides a collection of lattice-based data structures, a work-stealing scheduler, and a monad in which LVar computations run. LVish leverages Haskell's type system to index such computations with effect levels to ensure that only certain LVar effects can occur, hence statically enforcing determinism or quasi-determinism. I present two case studies of parallelizing existing programs using LVish: a k-CFA control flow analysis, and a bioinformatics application for comparing phylogenetic trees.

Finally, I show how LVar-style threshold reads apply to the setting of convergent replicated data types (CvRDTs), which specify the behavior of eventually consistent replicated objects in a distributed system. I extend the CvRDT model to support deterministic, strongly consistent threshold queries. The technique generalizes to any lattice, and hence any CvRDT, and allows deterministic observations to be made of replicated objects before the replicas' states converge.
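To make the threshold-read idea above concrete, here is a minimal Python sketch of an LVar over the max lattice of natural numbers (an illustration only, not the Haskell LVish API; the names MaxCounterLVar, put and get_threshold are invented). Writes only move the state up the lattice, and a threshold read blocks until a lower bound is reached and then returns that bound rather than the exact state, so its result is the same on every schedule.

```python
import threading

class MaxCounterLVar:
    """A toy LVar whose lattice is the natural numbers ordered by <=, with join = max."""

    def __init__(self):
        self._state = 0
        self._cond = threading.Condition()

    def put(self, value):
        # Inflationary write: move the state up the lattice by joining with `value`.
        with self._cond:
            self._state = max(self._state, value)
            self._cond.notify_all()

    def get_threshold(self, threshold):
        # Threshold read: block until the state has reached `threshold`, then
        # return the threshold itself (not the exact state), so the observed
        # value does not depend on the schedule.
        with self._cond:
            self._cond.wait_for(lambda: self._state >= threshold)
            return threshold

# Example: two writers race, but the threshold read is deterministic.
lv = MaxCounterLVar()
t1 = threading.Thread(target=lv.put, args=(3,))
t2 = threading.Thread(target=lv.put, args=(7,))
t1.start(); t2.start()
print(lv.get_threshold(5))  # always prints 5 once some writer has reached >= 5
t1.join(); t2.join()
```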

3

Robson, R. "Data views for a programming environment." Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=75754.

Abstract:
A data structure editor is presented for use in an integrated, fragment-based programming environment. This editor employs high resolution computer graphics to present the user with an iconic representation of the internal storage of a running program.
The editor allows the creation, modification, and deletion of data structures. These abilities allow the user to quickly sketch data structures with which to test incomplete program fragments, alleviating the need for driver routines.
To keep the user cognizant of events inside his program, a technique for automated display management is presented allowing the user to keep the most important objects in the viewport at all times. A history facility permits the user to see the former values of all variables.
Execution controls are provided allowing the user to control the scope and speed of execution, manipulate frames on the run-time stack, set breakpoints, and profile the executing algorithm.
4

McCallen, Scott J. "Mining Dynamic Structures in Complex Networks." Kent State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=kent1204154279.

5

Lindblad, Christopher John. "A programming system for the dynamic manipulation of temporally sensitive data." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/37744.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (p. 255-277).
by Christopher John Lindblad.
Ph.D.
6

Cain, Andrew Angus. "Dynamic data flow analysis for object oriented programs." Swinburne University of Technology, 2005. http://adt.lib.swin.edu.au./public/adt-VSWT20060904.161506.

Abstract:
There are many tools and techniques to help developers debug and test their programs. Dynamic data flow analysis is one such technique. Existing approaches to dynamic data flow analysis for object oriented programs have tended to be data focused and procedural in nature. An approach to dynamic data flow analysis that uses object oriented principles would provide a more natural solution for analysing object oriented programs. Dynamic data flow analysis approaches consist of two primary aspects: a model of the data flow information, and a method for collecting action information from a running program. The model for data flow analysis presented in this thesis uses a meta-level object oriented approach. To illustrate the application of this meta-level model, a model for the Java programming language is presented, providing an instantiation of the meta-level model. Finally, several methods are presented for collecting action information from Java programs. The meta-level model contains elements to represent both data items and scoping components (i.e. methods, blocks, objects, and classes). At runtime the model is used to create a representation of the executing program that is used to perform dynamic data flow analysis. The structure of the model is created in such a way that locating the appropriate meta-level entity follows the scoping rules of the language. In this way, actions that are reported to the meta-model are routed through the model to their corresponding meta-level elements. The Java model presented contains classes that can be used to create the runtime representation of the program under analysis. Events from the program under analysis are then used to update the model. Using this information, developers are able to locate where data items are incorrectly used within their programs. Methods for collecting action information from Java programs include source code instrumentation, as used in earlier approaches, as well as approaches that use Java byte code transformation and the facilities of the Java Platform Debugger Architecture. While these approaches aim to achieve a comprehensive analysis, there are several issues that could not be resolved with the approaches covered. Of the approaches presented, byte code transformation is the most practical.
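As a rough illustration of what dynamic data flow analysis reports, the sketch below (Python rather than the thesis's Java meta-level model; all names are invented) replays a stream of define/use events against per-variable meta-level records and flags anomalies such as a use before any definition.

```python
from enum import Enum

class State(Enum):
    UNDEFINED = "undefined"
    DEFINED = "defined"
    REFERENCED = "referenced"

class MetaVariable:
    """Meta-level record for one program variable under analysis."""
    def __init__(self, name):
        self.name = name
        self.state = State.UNDEFINED
        self.anomalies = []

    def on_define(self):
        if self.state == State.DEFINED:
            # Defined twice without an intervening use: the first value was never read.
            self.anomalies.append(f"{self.name}: define after define (unused value?)")
        self.state = State.DEFINED

    def on_use(self):
        if self.state == State.UNDEFINED:
            self.anomalies.append(f"{self.name}: used before being defined")
        self.state = State.REFERENCED

# Replay a stream of (action, variable) events reported by an instrumented program.
events = [("define", "x"), ("use", "x"), ("use", "y"), ("define", "z"), ("define", "z")]
model = {}
for action, var in events:
    mv = model.setdefault(var, MetaVariable(var))
    mv.on_define() if action == "define" else mv.on_use()

for mv in model.values():
    for anomaly in mv.anomalies:
        print(anomaly)
```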
7

Jones, Anthony Andrew. "Combining data driven programming with component based software development : with applications in geovisualisation and dynamic data driven application systems." Thesis, Aston University, 2008. http://publications.aston.ac.uk/10682/.

Abstract:
Software development methodologies are becoming increasingly abstract, progressing from low level assembly and implementation languages such as C and Ada, to component based approaches that can be used to assemble applications using technologies such as JavaBeans and the .NET framework. Meanwhile, model driven approaches emphasise the role of higher level models and notations, and embody a process of automatically deriving lower level representations and concrete software implementations.
8

Jain, Jhilmil Cross James H. "User experience design and experimental evaluation of extensible and dynamic viewers for data structures." Auburn, Ala., 2007. http://repo.lib.auburn.edu/2006%20Fall/Dissertations/JAIN_JHILMIL_3.pdf.

9

Harrison, William. "Malleability, obliviousness and aspects for broadcast service attachment." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4138/.

Abstract:
An important characteristic of Service-Oriented Architectures is that clients do not depend on the service implementation's internal assignment of methods to objects. It is perhaps the most important technical characteristic that differentiates them from more common object-oriented solutions. This characteristic makes clients and services malleable, allowing them to be rearranged at run-time as circumstances change. That improvement in malleability is impaired by requiring clients to direct service requests to particular services. Ideally, the clients are totally oblivious to the service structure, as they are to aspect structure in aspect-oriented software. Removing knowledge of a method implementation's location, whether in object or service, requires re-defining the boundary line between programming language and middleware, making clearer specification of dependence on protocols, and bringing the transaction-like concept of failure scopes into language semantics as well. This paper explores consequences and advantages of a transition from object-request brokering to service-request brokering, including the potential to improve our ability to write more parallel software.
10

Palix, Nicolas, Julia L. Lawall, Gaël Thomas, and Gilles Muller. "How Often do Experts Make Mistakes?" Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4132/.

Abstract:
Large open-source software projects involve developers with a wide variety of backgrounds and expertise. Such software projects furthermore include many internal APIs that developers must understand and use properly. According to the intended purpose of these APIs, they are more or less frequently used, and used by developers with more or less expertise. In this paper, we study the impact of usage patterns and developer expertise on the rate of defects occurring in the use of internal APIs. For this preliminary study, we focus on memory management APIs in the Linux kernel, as the use of these has been shown to be highly error prone in previous work. We study defect rates and developer expertise, to consider e.g., whether widely used APIs are more defect prone because they are used by less experienced developers, or whether defects in widely used APIs are more likely to be fixed.
11

You, Chang Hun. "Learning patterns in dynamic graphs with application to biological networks." Pullman, Wash. : Washington State University, 2009. http://www.dissertations.wsu.edu/Dissertations/Summer2009/c_you_072309.pdf.

Abstract:
Thesis (Ph. D.)--Washington State University, August 2009.
Title from PDF title page (viewed on Aug. 19, 2009). "School of Electrical Engineering and Computer Science." Includes bibliographical references (p. 114-117).
12

Tati, Kiran Kumar Smilkstein Tina Harriet. "General purpose evolutionary algorithm testbed." Diss., Columbia, Mo. : University of Missouri--Columbia, 2009. http://hdl.handle.net/10355/5359.

Abstract:
The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Title from PDF of title page (University of Missouri--Columbia, viewed on January 19, 2010). Thesis advisor: Dr. Tina Smilkstein. Includes bibliographical references.
13

Howard, Philip William. "Extending Relativistic Programming to Multiple Writers." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/114.

Abstract:
For software to take advantage of modern multicore processors, it must be safely concurrent and it must scale. Many techniques that allow safe concurrency do so at the expense of scalability. Coarse grain locking allows multiple threads to access common data safely, but not at the same time. Non-Blocking Synchronization and Transactional Memory techniques optimistically allow concurrency, but only for disjoint accesses and only at a high performance cost. Relativistic programming is a technique that allows low overhead readers and joint access parallelism between readers and writers. Most of the work on relativistic programming has assumed a single writer at a time (or, in partitionable data structures, a single writer per partition), and single writer solutions cannot scale on the write side. This dissertation extends prior work on relativistic programming in the following ways: 1) It analyses the ordering requirements of lock-based and relativistic programs in order to clarify the differences in their correctness and performance characteristics, and to define precisely the behavior required of the relativistic programming primitives. 2) It shows how relativistic programming can be used to construct efficient, scalable algorithms for complex data structures whose update operations involve multiple writes to multiple nodes. 3) It shows how disjoint access parallelism can be supported for relativistic writers, using Software Transactional Memory, while still allowing low-overhead, linearly-scalable, relativistic reads.
14

Larkins, Darrell Brian. "Efficient Run-time Support For Global View Programming of Linked Data Structures on Distributed Memory Parallel Systems." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1280519762.

15

Heinze, Glenn. "Application of evolutionary algorithm strategies to entity relationship diagrams /." View PDF document on the Internet, 2004. http://library.athabascau.ca/scisthesis/Heinze.pdf.

16

Lin, Chuan-kai. "Practical Type Inference for the GADT Type System." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/367.

Abstract:
Generalized algebraic data types (GADTs) are a type system extension to algebraic data types that allows the type of an algebraic data value to vary with its shape. The GADT type system allows programmers to express detailed program properties as types (for example, that a function should return a list of the same length as its input), and a general-purpose type checker will automatically check those properties at compile time. Type inference for the GADT type system and the properties of the type system are both currently areas of active research. In this dissertation, I attack both problems simultaneously by exploiting the symbiosis between type system research and type inference research. Deficiencies of GADT type inference algorithms motivate research on specific aspects of the type system, and discoveries about the type system bring in new insights that lead to improved GADT type inference algorithms. The technical contributions of this dissertation are therefore twofold: in addition to new GADT type system properties (such as the prevalence of pointwise type information flow in GADT patterns, a generalized notion of existential types, and the effects of enforcing the GADT branch reachability requirement), I will also present a new GADT type inference algorithm that is significantly more powerful than existing algorithms. These contributions should help programmers use the GADT type system more effectively, and they should also enable language implementers to provide better support for the GADT type system.
17

Carlo, Gilles. "Dynamic loading and class management in a distributed actor system." Master's thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-04272010-020040/.

18

Fan, Yang, Hidehiko Masuhara, Tomoyuki Aotani, Flemming Nielson, and Hanne Riis Nielson. "AspectKE*: Security aspects with program analysis for distributed systems." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4136/.

Abstract:
Enforcing security policies in distributed systems is difficult, in particular when a system contains untrusted components. We designed AspectKE*, a distributed AOP language based on a tuple space, to tackle this issue. In AspectKE*, aspects can enforce access control policies that depend on the future behavior of running processes. One of the key language features is the set of predicates and functions that extract results of static program analysis, which are useful for defining security aspects that need to know about the future behavior of a program. AspectKE* also provides a novel variable binding mechanism for pointcuts, so that pointcuts can uniformly specify join points based on both static and dynamic information about the program. Our implementation strategy performs the fundamental static analysis at load time, so as to keep runtime overhead minimal. We implemented a compiler for AspectKE*, and demonstrate the usefulness of AspectKE* through a security aspect for a distributed chat system.
19

Gabardo, Ademir Cristiano. "A heuristic to detect community structures in dynamic complex networks." Universidade Tecnológica Federal do Paraná, 2014. http://repositorio.utfpr.edu.br/jspui/handle/1/970.

Abstract:
Complex networks are ubiquitous; billions of people are connected through social networks, and there is an equally large number of telecommunication users and devices generating implicit complex networks. Furthermore, many structures found in nature, genetic data, social behavior, financial transactions and many other domains can be represented as complex networks. Most of these complex networks present communities in their structure. Unveiling these communities is highly relevant in many fields of study. However, depending on several factors, the discovery of these communities can be computationally intensive. Several algorithms for detecting communities in complex networks have been introduced over time, and we review some of them. Our goal in this work is to identify or create an understandable and applicable heuristic to detect communities in complex networks, with a focus on time repetitions and strength measures. This work proposes a semi-supervised clustering approach as a modification of the traditional K-means algorithm, assigning a weight to each dimension of the data in order to obtain a weighted clustering method. As a first case study, databases of companies that have participated in public bids in Paraná state are analyzed to detect communities that can suggest structures such as cartels. As a second case study, the same methodology is used to analyze microarray datasets of gene expressions, representing the correlation of the genes through a complex network and applying community detection algorithms in order to reveal such correlations between genes.
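The abstract describes weighting each data dimension in a K-means variant. A minimal sketch of one such weighted scheme is shown below (assumed details: a fixed weight vector w applied to the squared Euclidean distance; this is an illustration, not necessarily the exact method of the thesis).

```python
import numpy as np

def weighted_kmeans(X, k, w, iters=50, seed=0):
    """K-means where each feature dimension d is scaled by weight w[d]
    in the distance computation (a simple per-dimension weighting scheme)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: weighted squared Euclidean distance to each center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2 * w).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update step: recompute each center as the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy usage: weight the first dimension much more heavily than the second.
X = np.array([[0.0, 0.0], [0.1, 5.0], [4.0, 0.2], [4.1, 5.1]])
labels, centers = weighted_kmeans(X, k=2, w=np.array([10.0, 0.1]))
print(labels)  # clusters determined mostly by the first (heavily weighted) dimension
```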
20

Ren, Bin. "Supporting Applications Involving Dynamic Data Structures and Irregular Memory Access on Emerging Parallel Platforms." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1397753127.

21

Might, Matthew Brendon. "Environment Analysis of Higher-Order Languages." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16289.

Abstract:
Any analysis of higher-order languages must grapple with the tri-faceted nature of lambda. In one construct, the fundamental control, environment and data structures of a language meet and intertwine. With the control facet tamed nearly two decades ago, this work brings the environment facet to heel, defining the environment problem and developing its solution: environment analysis. Environment analysis allows a compiler to reason about the equivalence of environments, i.e., name-to-value mappings, that arise during a program's execution. In this dissertation, two different techniques, abstract counting and abstract frame strings, make this possible. A third technique, abstract garbage collection, makes both of these techniques more precise and, counter to intuition, often faster as well. An array of optimizations and even deeper analyses which depend upon environment analysis provide motivation for this work. In an abstract interpretation, a single abstract entity represents a set of concrete entities. When the entities under scrutiny are bindings (single name-to-value mappings, the atoms of environments), determining when the equality of two abstract bindings implies the equality of their concrete counterparts is the crux of environment analysis. Abstract counting does this by tracking the size of represented sets, looking for singletons, in order to apply the following principle: if {x} = {y}, then x = y. Abstract frame strings enable environmental reasoning by statically tracking the possible stack change between the births of two environments; when this change is effectively empty, the environments are equivalent. Abstract garbage collection improves precision by intermittently removing unreachable environment structure during abstract interpretation.
22

West, Raymond Troy Jr. "On the performance of B-trees using dynamic address computation." Thesis, Virginia Tech, 1985. http://hdl.handle.net/10919/41574.

Abstract:

The B-tree is one of the more popular methods in use today for indexes and inverted files in database management systems. The traditional implementation of a B-tree uses many pointers (more than one per key), which can directly affect the performance of the B-tree. A general method of file organization and access (called Dynamic Address Computation) has been described by Cook that can be used to implement B-trees using no pointers. A minimal amount of storage (in addition to the keys) is required. An implementation of Dynamic Address Computation and a B-tree management package is described. Analytical performance measures are derived in an attempt to understand the performance characteristics of the B-tree. It is shown that the additional costs associated with Dynamic Address Computation result in an implementation that is competitive with traditional implementations only for small applications. For very large B-trees, additional work is required to make the performance acceptable. Some examples of possible modifications are discussed.
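Cook's Dynamic Address Computation is not reproduced here, but the underlying idea of a pointer-free tree can be illustrated with a generic sketch in which a node's children live at slots computed arithmetically from the parent's slot, so no child pointers are stored (the name ImplicitTree and the particular slot formula are illustrative assumptions, similar to the implicit layout used for binary heaps).

```python
class ImplicitTree:
    """A pointer-free n-ary search-tree layout: children's slots are computed
    arithmetically from the parent's slot, so no child pointers are stored."""

    def __init__(self, order):
        self.order = order      # keys per node, hence order + 1 children
        self.nodes = {}         # slot -> sorted list of keys

    def child_slot(self, slot, i):
        # Slot of the i-th child of the node stored at `slot`
        # (the heap-style arithmetic, generalized to n-ary trees).
        return slot * (self.order + 1) + i + 1

    def search(self, key, slot=0):
        keys = self.nodes.get(slot)
        if not keys:
            return False
        if key in keys:
            return True
        # Count keys smaller than `key` to pick the child branch, then recurse.
        i = sum(1 for k in keys if k < key)
        return self.search(key, self.child_slot(slot, i))

t = ImplicitTree(order=2)
t.nodes[0] = [10, 20]      # root at slot 0
t.nodes[1] = [3, 7]        # child 0 of the root lives at slot 0*3 + 0 + 1 = 1
t.nodes[3] = [25, 30]      # child 2 of the root lives at slot 0*3 + 2 + 1 = 3
print(t.search(7), t.search(15))   # True False
```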


Master of Science
23

Thompson, Dean (Dean Barrie) 1974. "Dynamic reconfiguration under real-time constraints." Monash University, School of Computer Science and Software Engineering, 2002. http://arrow.monash.edu.au/hdl/1959.1/7991.

24

Mansfield, Martin F. "Design of a generic parse tree for imperative languages." Virtual Press, 1992. http://liblink.bsu.edu/uhtbin/catkey/834617.

Abstract:
Since programs are written in many languages and design documents are not maintained (if they ever existed), there is a need to extract the design and other information that the programs represent. To do this without writing a separate program for each language, a common representation of the symbol table and parse tree would be required. The purpose of the parse tree and symbol table will not be to generate object code but to provide a platform for analysis tools. In this way the tool designer develops only one version instead of separate versions for each language. The generic symbol table and generic parse tree may not be as detailed as those same structures in a specific compiler, but the parse tree must include all structures for imperative languages.
Department of Computer Science
25

Steinert, Bastian. "Built-in recovery support for explorative programming : preserving immediate access to static and dynamic information of intermediate development states." Phd thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/volltexte/2014/7130/.

Abstract:
This work introduces concepts and corresponding tool support to enable a complementary approach in dealing with recovery. Programmers need to recover a development state, or a part thereof, when previously made changes reveal undesired implications. However, when the need arises suddenly and unexpectedly, recovery often involves expensive and tedious work. To avoid tedious work, literature recommends keeping away from unexpected recovery demands by following a structured and disciplined approach, which consists of the application of various best practices including working only on one thing at a time, performing small steps, as well as making proper use of versioning and testing tools. However, the attempt to avoid unexpected recovery is both time-consuming and error-prone. On the one hand, it requires disproportionate effort to minimize the risk of unexpected situations. On the other hand, applying recommended practices selectively, which saves time, can hardly avoid recovery. In addition, the constant need for foresight and self-control has unfavorable implications. It is exhausting and impedes creative problem solving. This work proposes to make recovery fast and easy and introduces corresponding support called CoExist. Such dedicated support turns situations of unanticipated recovery from tedious experiences into pleasant ones. It makes recovery fast and easy to accomplish, even if explicit commits are unavailable or tests have been ignored for some time. When mistakes and unexpected insights are no longer associated with tedious corrective actions, programmers are encouraged to change source code as a means to reason about it, as opposed to making changes only after structuring and evaluating them mentally. This work further reports on an implementation of the proposed tool support in the Squeak/Smalltalk development environment. The development of the tools has been accompanied by regular performance and usability tests. In addition, this work investigates whether the proposed tools affect programmers' performance. In a controlled lab study, 22 participants improved the design of two different applications. Using a repeated measurement setup, the study examined the effect of providing CoExist on programming performance. The result of analyzing 88 hours of programming suggests that built-in recovery support as provided with CoExist has a positive effect on programming performance in explorative programming tasks.
26

Ardi, Shanai. "A Nonlinear Programming Approach for Dynamic Voltage Scaling." Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2774.

Abstract:

Embedded computing systems in portable devices need to be energy efficient, yet they have to deliver adequate performance for applications that are often computationally expensive. Dynamic voltage scaling is a technique that offers a speed versus power trade-off, allowing the application to achieve considerable energy savings and, at the same time, to meet the imposed time constraints.

In this thesis, we explore the possibility of using optimal voltage scaling algorithms based on nonlinear programming at the system level, for a complex multiprocessor scheduling problem. We present an optimization approach for the nonlinear programming formulation of the continuous voltage selection problem, excluding the consideration of transition overheads. Our approach achieves the same optimal results as the previous work using the same model but, due to its speed, can be efficiently used for design space exploration. We validate our results using numerous automatically generated benchmarks.
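As a hedged sketch of how continuous voltage selection can be posed as a nonlinear program, the following uses a deliberately simplified model (energy per task proportional to cycles times V squared, execution time proportional to cycles divided by V, a single overall deadline) and a general-purpose solver; the actual task model, constraints and algorithm in the thesis differ.

```python
import numpy as np
from scipy.optimize import minimize

cycles = np.array([2.0e6, 1.0e6, 3.0e6])   # workload of each task (cycles)
deadline = 8.0e6                            # total time budget (normalized units)
v_min, v_max = 0.6, 1.2                     # allowed voltage range (normalized)

def energy(v):
    # Simplified model: energy per task grows with cycles * V^2.
    return float(np.sum(cycles * v ** 2))

def slack(v):
    # Simplified model: clock frequency proportional to V, so time = cycles / V.
    return deadline - float(np.sum(cycles / v))

res = minimize(
    energy,
    x0=np.full(len(cycles), v_max),                 # start from the fastest setting
    bounds=[(v_min, v_max)] * len(cycles),
    constraints=[{"type": "ineq", "fun": slack}],   # total execution time <= deadline
)
print(res.x, energy(res.x), slack(res.x))
```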

27

Triplett, Josh. "Relativistic Causal Ordering A Memory Model for Scalable Concurrent Data Structures." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/497.

Abstract:
High-performance programs and systems require concurrency to take full advantage of available hardware. However, the available concurrent programming models force a difficult choice between simple models, such as mutual exclusion, that produce little to no concurrency, and complex models, such as Read-Copy Update, that can scale to all available resources. Simple concurrent programming models enforce atomicity and causality, and this enforcement limits concurrency. Scalable concurrent programming models expose the weakly ordered hardware memory model, requiring careful and explicit enforcement of causality to preserve correctness, as demonstrated in this dissertation through the manual construction of a scalable hash-table item-move algorithm. Recent research on "relativistic programming" aims to standardize the programming model of Read-Copy Update, but thus far these efforts have lacked a generalized memory ordering model, requiring data-structure-specific reasoning to preserve causality. I propose a new memory ordering model, "relativistic causal ordering", which combines the scalability of relativistic programming and Read-Copy Update with the simplicity of reader atomicity and automatic enforcement of causality. Programs written for the relativistic model translate to scalable concurrent programs for weakly-ordered hardware via a mechanical process of inserting barrier operations according to well-defined rules. To demonstrate the relativistic causal ordering model, I walk through the straightforward construction of a novel concurrent hash-table resize algorithm, including the translation of this algorithm from the relativistic model to a hardware memory model, and show through benchmarks that the resulting algorithm scales far better than those based on mutual exclusion.
28

Lowry, Matthew C. "A new approach to the train algorithm for distributed garbage collection." Title page, table of contents and abstract only, 2004. http://hdl.handle.net/2440/37710.

Abstract:
This thesis describes a new approach to achieving high quality distributed garbage collection using the Train Algorithm. This algorithm has been investigated for its ability to provide high quality collection in a variety of contexts, including persistent object systems and distributed object systems. Prior literature on the distributed Train Algorithm suggests that safe, complete, asynchronous, and scalable collection can be attained; however, an approach that achieves this combination of behaviour has yet to emerge. The mechanisms and policies described in this thesis are unique in their ability to exploit the distributed Train Algorithm in a manner that displays all four desirable qualities. Further, the mechanisms allow any number of mutator and collector threads to operate concurrently within a site; this is also a unique property amongst train-based mechanisms (distributed or otherwise). Confidence in the quality of the approach promoted in this thesis is obtained via a top-down approach. Firstly, a concise behavioural model is introduced to capture fundamental requirements for safe and complete behaviour from train-based collection mechanisms. The model abstracts over the techniques previously introduced under the banner of the Train Algorithm. It serves as a self-contained template for correct train-based collection that is independent of a target object system for deployment of the algorithm. Secondly, a means to instantiate the model in a distributed object system is described. The instantiation includes well-established techniques from prior literature, and via the model these are correctly refined and reorganised with new techniques to achieve asynchrony, scalability, and support for concurrency. The result is a flexible approach that allows a distributed system to exhibit a variety of local collection mechanisms and policies, while ensuring their interaction is safe, complete, asynchronous, and scalable regardless of the local choices made by each site. Additional confidence in the properties of the new approach is obtained from implementation within a distributed object system simulation. The implementation provides some insight into the practical issues that arise through the combination of distribution, concurrent execution within sites, and train-based collection. Executions of the simulation system are used to verify that safe collection is observed at all times, and to obtain evidence that asynchrony, scalability, and concurrency can be observed in practice.
Thesis (Ph.D.)--School of Computer Science, 2004.
29

Alston, Katherine Yvette. "A heuristic on the rearrangeability of shuffle-exchange networks." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2521.

Abstract:
The algorithms which control network routing are specific to the network because the algorithms are designed to take advantage of that network's topology. The "goodness" of a network includes such criteria as a simple routing algorithm, and a simple routing algorithm would increase the use of the shuffle-exchange network.
30

Geller, Felix, Robert Hirschfeld, and Gilad Bracha. "Pattern Matching for an object-oriented and dynamically typed programming language." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4303/.

Abstract:
Pattern matching is a well-established concept in the functional programming community. It provides the means for concisely identifying and destructuring values of interest. This enables a clean separation of data structures and respective functionality, as well as dispatching functionality based on more than a single value. Unfortunately, expressive pattern matching facilities are seldom incorporated in present object-oriented programming languages. We present a seamless integration of pattern matching facilities in an object-oriented and dynamically typed programming language: Newspeak. We describe language extensions to improve the practicability and integrate our additions with the existing programming environment for Newspeak. This report is based on the first author's master's thesis.
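For comparison only, the snippet below shows how structural pattern matching looks in Python 3.10's match statement, which likewise destructures values and dispatches on their shape; it is not the Newspeak design described above, and the class names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

@dataclass
class Circle:
    center: Point
    radius: float

def describe(shape):
    # Dispatch on the shape of the value and destructure it in one step.
    match shape:
        case Circle(center=Point(x=0, y=0), radius=r):
            return f"circle of radius {r} centered at the origin"
        case Circle(center=Point(x=x, y=y), radius=r):
            return f"circle of radius {r} at ({x}, {y})"
        case Point(x=x, y=y):
            return f"point at ({x}, {y})"
        case _:
            return "unknown shape"

print(describe(Circle(Point(0, 0), 2.0)))
print(describe(Point(1, 5)))
```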
31

Olofsson, Nils. "Kidney Dynamic Model Enrichment." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-242315.

Abstract:
This thesis explores and explains a method that uses discrete curvature as a feature to find regions of vertices that are likely to indicate the presence of an underlying tumor on a kidney surface mesh. Vertices are tagged based on curvature type, and mathematical morphology is used to form regions on the mesh. The size and location of the tumor are approximated by fitting a sphere to this region. The method is intended to be employed in noninvasive radiotherapy with a dynamic soft tissue model. It could also provide an alternative to volumetric methods used to segment tumors. A validation is made using the images from which the kidney mesh was constructed, in which the tumor is visible for comparison with the method's result. The dynamic kidney model is validated using the Hausdorff distance, and it is explained how this can be computed in an effective way using bounding volume hierarchies. Both the tumor finding method and the dynamic model show promising results, since they lie within the limits used by practitioners during therapy.
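To make the validation measure concrete, here is the brute-force definition of the Hausdorff distance between two point sets (for example, two sets of mesh vertices); the efficient bounding-volume-hierarchy computation mentioned in the abstract is not shown.

```python
import numpy as np

def directed_hausdorff(A, B):
    """Max over points a in A of the distance from a to its nearest neighbour in B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return d.min(axis=1).max()

def hausdorff(A, B):
    # Symmetric Hausdorff distance between two point sets.
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
B = np.array([[0.0, 0.1, 0.0], [1.0, 0.0, 0.5]])
print(hausdorff(A, B))  # 0.5 for these two toy vertex sets
```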
32

Douieb, Karim. "Hotlinks and dictionaries." Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210471.

Abstract:
Knowledge has always been a decisive factor in humankind's social evolution. Collecting the world's knowledge is one of the greatest challenges of our civilization. Knowledge involves the use of information, but information is not knowledge. It is a way of acquiring and understanding information. Improving the visibility and the accessibility of information requires organizing it efficiently. This thesis focuses on this general purpose.

A fundamental objective of computer science is to store and retrieve information efficiently. This is known as the dictionary problem. A dictionary asks for a data structure which essentially supports the search operation. In general, information that is important and popular at a given time has to be accessed faster than less relevant information. This can be achieved by periodically reorganizing the data structure so that relevant information is located closer to the search starting point. The second part of this thesis is devoted to the development and the understanding of self-adjusting dictionaries in various models of computation. In particular, we focus our attention on dictionaries which do not have any knowledge of the future accesses. Those dictionaries have to adapt themselves automatically to be competitive with dictionaries specifically tuned for a given access sequence.

This approach, which transforms the information structure, is not always feasible. One reason can be that the structure is based on the semantics of the information, such as categorization. In this context, the search procedure is linked to the structure itself, and modifying the structure affects how a search is performed. A solution developed to improve search in static structures is hotlink assignment. It is a way to enhance a structure without altering its original design. This approach speeds up the search by creating shortcuts in the structure. The first part of this thesis is devoted to this approach.
Doctorat en Sciences

33

Moussa, Ahmed S. "On learning and visualizing lexicographic preference trees." UNF Digital Commons, 2019. https://digitalcommons.unf.edu/etd/882.

Abstract:
Preferences are very important in research fields such as decision making, recommender systems and marketing. The focus of this thesis is on preferences over combinatorial domains, which are domains of objects configured with categorical attributes. For example, the domain of cars includes car objects that are constructed with values for attributes such as 'make', 'year', 'model', 'color', 'body type' and 'transmission'. Different values can instantiate an attribute. For instance, values for attribute 'make' can be Honda, Toyota, Tesla or BMW, and attribute 'transmission' can have automatic or manual. To this end, this thesis studies problems on preference visualization and learning for lexicographic preference trees, graphical preference models that are often compact over complex domains of objects built of categorical attributes. Visualizing preferences is essential to provide users with insights into the process of decision making, while learning preferences from data is practically important, as it is ineffective to elicit preference models directly from users. The results of this thesis fall into two parts: 1) for preference visualization, a web-based system is created that visualizes various types of lexicographic preference tree models learned by a greedy learning algorithm; 2) for preference learning, a genetic algorithm, called GA, is designed and implemented that learns a restricted type of lexicographic preference tree, called an unconditional importance and unconditional preference tree, or UIUP tree for short. Experiments show that GA achieves higher accuracy compared to the greedy algorithm at the cost of more computational time. Moreover, a Dynamic Programming Algorithm (DPA) was devised and implemented that computes an optimal UIUP tree model in the sense that it satisfies as many examples as possible in the dataset. This novel exact algorithm (DPA) was used to evaluate the quality of models computed by GA, and it was found to reduce the factorial time complexity of the brute force algorithm to exponential. The major contribution of this thesis to the field of machine learning and data mining is the novel learning algorithm (DPA), which is an exact algorithm. DPA learns and finds the best UIUP tree model in the huge search space, the model that correctly classifies the largest number of examples in the training dataset; such a model is referred to as the optimal model in this thesis. Finally, using datasets produced from randomly generated UIUP trees, this thesis presents experimental results on the performance (e.g., accuracy and computational time) of GA compared to the existing greedy algorithm and DPA.
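A UIUP tree combines an unconditional importance order over attributes with an unconditional preference order over each attribute's values. The sketch below shows how such a model orders two objects lexicographically (the attribute names follow the car example above; this illustrates the preference model only, not the GA or DPA learning algorithms from the thesis).

```python
# Attribute importance, most important first (unconditional importance).
importance = ["make", "transmission", "color"]

# Preferred value order per attribute, best first (unconditional preference).
preference = {
    "make": ["Tesla", "Honda", "Toyota", "BMW"],
    "transmission": ["automatic", "manual"],
    "color": ["red", "black", "white"],
}

def prefers(a, b):
    """Return True if car `a` is lexicographically preferred to car `b`."""
    for attr in importance:
        ra, rb = preference[attr].index(a[attr]), preference[attr].index(b[attr])
        if ra != rb:          # the first attribute on which they differ decides
            return ra < rb
    return False              # identical on all attributes: no strict preference

car1 = {"make": "Honda", "transmission": "manual", "color": "red"}
car2 = {"make": "Honda", "transmission": "automatic", "color": "white"}
print(prefers(car1, car2))  # False: equal on 'make', car2 wins on 'transmission'
```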
34

Colombet, Quentin. "Decoupled (SSA-based) register allocators : from theory to practice, coping with just-in-time compilation and embedded processors constraints." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2012. http://tel.archives-ouvertes.fr/tel-00764405.

Abstract:
My thesis deals with register allocation. During this phase, the compiler has to assign the variables of the source program, which can be arbitrarily numerous, to the actual registers of the processor, which are limited to a number k. Recent works, for instance the theses of F. Bouchez and S. Hack, have shown that it is possible to split this phase into two decoupled steps: the spill, which stores variables to memory to release registers, followed by register assignment. These works demonstrate the feasibility of this decoupling relying on a theoretical framework and some assumptions. In particular, it is sufficient to ensure after the spill step that the number of variables simultaneously live is below k. My thesis follows these works by showing how to apply this kind of approach when real-world constraints come into play: instruction encoding, ABI (application binary interface), and register aliasing. Different approaches are proposed. They make it possible either to ignore these problems or to tackle them directly within the theoretical framework. The hypotheses of the models and the proposed solutions are evaluated and validated using a thorough experimental study with the compiler of STMicroelectronics. Finally, all these works have been done with the constraints of modern compilers in mind, in particular JIT (just-in-time) compilation, where the compilation time and the memory footprint of the compiler are key factors. We strive to offer solutions that cope with these criteria or improve the result until a given budget is reached. In particular, we use the SSA (static single assignment) form to define algorithms such as tree scan, which generalizes the linear-scan approaches proposed for JIT compilation.
35

Legaux, Joeffrey. "Squelettes algorithmiques pour la programmation et l'exécution efficaces de codes parallèles." Phd thesis, Université d'Orléans, 2013. http://tel.archives-ouvertes.fr/tel-00990852.

Abstract:
Parallel architectures are now present in all computer hardware, but programmers are generally not trained to program them using explicit models such as MPI or Pthreads. There is a strong need for more abstract models such as algorithmic skeletons, which are a structured approach. These can be seen as higher-order functions that capture the behavior of recurring parallel algorithms, which the developer can then combine to build programs. Developers want better performance from parallel programs, but development time is also a very important factor. Algorithmic skeleton approaches provide interesting results in both respects. The Orléans Skeleton Library (OSL) provides a set of quasi-synchronous data-parallel algorithmic skeletons in the C++ language and uses advanced programming techniques to achieve good efficiency. We improved OSL to give it better performance and greater expressiveness. We then analyzed the relationship between program performance and the programming effort required with OSL and other parallel programming models. A rigorous comparison between parallel programs in OSL and their low-level equivalents shows much better productivity for high-level models, which offer great ease of use while delivering acceptable performance.
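OSL itself is a C++ library, but the idea that skeletons are higher-order functions the developer composes can be illustrated with a toy map/reduce pair in Python built on a process pool (the function names and the process-pool choice are assumptions for illustration, not OSL's API).

```python
import operator
from concurrent.futures import ProcessPoolExecutor
from functools import reduce

def par_map(f, data, workers=4):
    """'map' skeleton: apply f to every element, spreading the work over processes."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(f, data, chunksize=max(1, len(data) // workers)))

def par_reduce(op, data):
    """'reduce' skeleton: combine the mapped results with an associative operator."""
    return reduce(op, data)

def square(x):
    return x * x

if __name__ == "__main__":
    # Compose the two skeletons: sum of squares of 0..9999.
    data = list(range(10_000))
    print(par_reduce(operator.add, par_map(square, data)))
```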
36

Chowdhury, Rezaul Alam. "Algorithms and data structures for cache-efficient computation: theory and experimental evaluation." Thesis, 2007. http://hdl.handle.net/2152/3170.

37

Gupta, Ankur. "Succinct Data Structures." Diss., 2007. http://hdl.handle.net/10161/434.

38

"Summarizing static graphs and mining dynamic graphs." Thesis, 2011. http://library.cuhk.edu.hk/record=b6075341.

Abstract:
Besides finding changing areas based on the number of node and edge evolutions, a more interesting problem is to analyze the impact of these evolutions on graphs and find the regions that exhibit significant changes when these evolutions happen. The more the relationships between nodes in a certain region change, the more significant this region is. This problem is challenging since it is hard to define the range of changing regions in a way that is closely related to actual evolutions. We formalize the problem by using a similarity measure based on neighborhood random walks, and design an efficient algorithm which is able to identify the significant changing regions without recomputing all similarities. Meaningful examples in experiments demonstrate the effectiveness of our algorithms.
Graph patterns are able to represent the complex structural relations among objects in many applications in various domains. Managing and mining graph data, on which we study in this thesis, are no doubt among the most important tasks. We focus on two challenging problems, namely, graph summarization and graph change detection.
In the area of summarizing a collection of graphs, we study the problem of summarizing frequent subgraphs, since there is little need to summarize a collection of arbitrary graphs. The bottleneck for exploring and understanding frequent subgraphs is that they are numerous. A summary can be a solution to this issue, so the goal of frequent subgraph summarization is to minimize the restoration errors of the structure and the frequency information. The unique challenge in frequent subgraph summarization comes from the fact that a subgraph can have multiple embeddings in a summarization template graph. We handle this issue by introducing a partial order between edges to allow accurate structure and frequency estimation based on an independence probabilistic model. The proposed algorithm discovers k summarization templates in a top-down fashion to control the restoration error of frequencies within sigma. There is no restoration error of structures. Experiments on both real and synthetic graph datasets show that our framework can control the frequency restoration error within 10% by a compact summarization model.
The objective of graph change detection is to discover the changing areas of graphs as they evolve at a high speed. The most changing areas are those having the highest number of evolutions (additions/deletions) of nodes and edges, which are called burst areas. We study finding the most burst areas in a stream of fast graph evolutions. We propose to use a Haar wavelet tree to monitor the upper bound of the number of evolutions. Our approach monitors all potential changing areas of different sizes and incrementally computes the number of evolutions in those areas. The top-k burst areas are returned as soon as they are detected. Our solution is capable of handling a large number of evolutions in a short time, which is consistent with the experimental results.
The objective of graph summarization is to obtain a concise representation of a single large graph or a collection of graphs, which is interpretable and suitable for analysis. A good summary can reveal the hidden relationships between nodes in a graph. The key issue of summarizing a single graph is how to construct a high-quality and representative summary, which is in the form of a super-graph. We propose an entropy-based unified model for measuring the homogeneity of the super-graph. The best summary in terms of homogeneity could be too large to explore. By using the unified model, we relax three summarization criteria to obtain an approximate homogeneous summary of appropriate size. We propose both agglomerative and divisive algorithms for approximate summarization, as well as pruning techniques and heuristics for both algorithms to save computation cost. Experimental results confirm that our approaches can efficiently generate high-quality summaries.
Liu, Zheng.
Advisers: Wai Lam; Jeffrey Xu Yu.
Source: Dissertation Abstracts International, Volume: 73-06, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references (leaves 133-141).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
39

Olivier, Martin Stephanus. "Die ondersteuning van abstrakte datatipes en toestelle in 'n programmeertaal." Thesis, 2014. http://hdl.handle.net/10210/9878.

40

Lin, Yi. "Subseries Join and Compression of Time Series Data Based on Non-uniform Segmentation." Thesis, 2008. http://hdl.handle.net/10012/4013.

Abstract:
A time series is composed of a sequence of data items that are measured at uniform intervals. Many application areas generate or manipulate time series, including finance, medicine, digital audio, and motion capture. Efficiently searching a large time series database is still a challenging problem, especially when partial or subseries matches are needed. This thesis proposes a new definition of subseries join, a symmetric generalization of subseries matching, which finds similar subseries in two or more time series datasets. A solution is proposed to compute the subseries join based on a hierarchical feature representation. This hierarchical feature representation is generated by an anisotropic diffusion scale-space analysis and a non-uniform segmentation method. Each segment is represented by a minimal polynomial envelope in a reduced-dimensionality space. Based on the hierarchical feature representation, all features in a dataset are indexed in an R-tree, and candidate matching features of two datasets are found by an R-tree join operation. Given candidate matching features, a dynamic programming algorithm is developed to compute the final subseries join. To improve storage efficiency, a hierarchical compression scheme is proposed to compress features. The minimal polynomial envelope representation is transformed to a Bezier spline envelope representation. The control points of each Bezier spline are then hierarchically differenced and an arithmetic coding is used to compress these differences. To empirically evaluate their effectiveness, the proposed subseries join and compression techniques are tested on various publicly available datasets. A large motion capture database is also used to verify the techniques in a real-world application. The experiments show that the proposed subseries join technique can better tolerate noise and local scaling than previous work, and the proposed compression technique can also achieve about 85% higher compression rates than previous work with the same distortion error.
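The thesis's dynamic programming algorithm operates on the hierarchical features described above and is not reproduced here; as a generic point of reference only, the following shows a standard subsequence dynamic time warping recurrence, one common dynamic-programming formulation for matching a query against any contiguous subseries of a longer series.

```python
import numpy as np

def subsequence_dtw_cost(query, series):
    """Standard subsequence DTW: cheapest alignment of `query` against any
    contiguous subseries of `series` (free start and end within `series`)."""
    n, m = len(query), len(series)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0                       # the match may start anywhere in `series`
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - series[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    end = int(np.argmin(D[n, 1:])) + 1  # best end position of the match
    return D[n, end], end

query = [0.0, 1.0, 2.0, 1.0]
series = [5.0, 5.0, 0.1, 1.1, 2.0, 0.9, 5.0, 5.0]
print(subsequence_dtw_cost(query, series))  # low cost, match ending near position 6
```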
41

"Generalized Statistical Tolerance Analysis and Three Dimensional Model for Manufacturing Tolerance Transfer in Manufacturing Process Planning." Doctoral diss., 2011. http://hdl.handle.net/2286/R.I.9125.

Abstract:
Mostly, manufacturing tolerance charts are used these days for manufacturing tolerance transfer, but these have the limitation of being one dimensional only. Some research has been undertaken on three dimensional geometric tolerances, but it is too theoretical and not yet ready for operator-level usage. In this research, a new three dimensional model for tolerance transfer in manufacturing process planning is presented that is user friendly in the sense that it is built upon the Coordinate Measuring Machine (CMM) readings that are readily available in any decent manufacturing facility. This model can take care of datum reference change between non-orthogonal datums (squeezed datums), non-linearly oriented datums (twisted datums), etc. A graph-theoretic approach based upon ACIS, C++ and MFC is laid out to facilitate its implementation for automation of the model. A totally new approach to determining dimensions and tolerances for the manufacturing process plan is also presented. Secondly, a new statistical model for statistical tolerance analysis based upon the joint probability distribution of trivariate normally distributed variables is presented. 4-D probability maps have been developed in which the probability value of a point in space is represented by the size of the marker and the associated color. Points inside the part map represent the pass percentage for parts manufactured. The effect of refinement with form and orientation tolerance is highlighted by calculating the change in pass percentage relative to the pass percentage for size tolerance only. Delaunay triangulation and ray tracing algorithms have been used to automate the process of identifying the points inside and outside the part map. Proof of concept software has been implemented to demonstrate this model and to determine pass percentages for various cases. The model is further extended to assemblies by employing convolution algorithms on two trivariate statistical distributions to arrive at the statistical distribution of the assembly. The map generated by using Minkowski Sum techniques on the individual part maps is superimposed on the probability point cloud resulting from convolution. Delaunay triangulation and ray tracing algorithms are employed to determine the assembleability percentages for the assembly.
Dissertation/Thesis
Ph.D. Mechanical Engineering 2011
42

Sachan, Mohit. "Learning in Partially Observable Markov Decision Processes." 2013. http://hdl.handle.net/1805/3451.

Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Learning in Partially Observable Markov Decision Processes (POMDPs) is motivated by the essential need to address a number of realistic problems. A number of methods exist for learning in POMDPs, but learning with a limited amount of information about the POMDP model remains a highly anticipated feature. Learning with minimal information is desirable in complex systems, as methods requiring complete information among decision makers are impractical in such systems due to the increase in problem dimensionality. In this thesis we address the problem of decentralized control of POMDPs with unknown transition probabilities and rewards. We suggest learning in POMDPs using a tree-based approach. States of the POMDP are guessed using this tree. Each node in the tree has an automaton in it and acts as a decentralized decision maker for the POMDP. The start state of the POMDP is known as the landmark state. Each automaton in the tree uses a simple learning scheme to update its action choice and requires minimal information. The principal result derived is that, without proper knowledge of transition probabilities and rewards, the automata tree of decision makers will converge to a set of actions that maximizes the long term expected reward per unit time obtained by the system. The analysis is based on learning in sequential stochastic games and properties of ergodic Markov chains. Simulation results are presented to compare the long term rewards of the system under different decision control algorithms.
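The abstract mentions that each automaton updates its action choice with a simple learning scheme using minimal feedback. One classical scheme of this kind is the linear reward-inaction update, sketched below on a two-action toy problem (the exact scheme and the automata-tree construction of the thesis are not reproduced).

```python
import random

class LearningAutomaton:
    """Linear reward-inaction (L_R-I) automaton over a finite action set."""

    def __init__(self, n_actions, rate=0.05):
        self.p = [1.0 / n_actions] * n_actions   # action probabilities
        self.rate = rate

    def choose(self):
        return random.choices(range(len(self.p)), weights=self.p)[0]

    def update(self, action, reward):
        # On reward (1), shift probability mass toward the chosen action;
        # on penalty (0), leave the probabilities unchanged (the "inaction" part).
        if reward:
            for a in range(len(self.p)):
                if a == action:
                    self.p[a] += self.rate * (1.0 - self.p[a])
                else:
                    self.p[a] -= self.rate * self.p[a]

# Toy environment: action 1 is rewarded 80% of the time, action 0 only 20%.
env = lambda a: 1 if random.random() < (0.8 if a == 1 else 0.2) else 0
la = LearningAutomaton(n_actions=2)
for _ in range(5000):
    a = la.choose()
    la.update(a, env(a))
print(la.p)  # the probability of action 1 should be close to 1
```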
