
Dissertations / Theses on the topic 'Computer algorithms. Data structures (Computer science)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Computer algorithms. Data structures (Computer science).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Costa, Andre. "Analytic modelling of agent-based network routing algorithms." Title page, contents and abstract only, 2002. http://web4.library.adelaide.edu.au/theses/09PH/09phc8373.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Karras, Panagiotis. "Data structures and algorithms for data representation in constrained environments." Thesis, Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/hkuto/record/B38897647.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

宋永健 and Wing-kin Sung. "Fast labeled tree comparison via better matching algorithms." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31239316.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sung, Wing-kin. "Fast labeled tree comparison via better matching algorithms /." Hong Kong : University of Hong Kong, 1998. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20229999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jain, Jhilmil Cross James H. "User experience design and experimental evaluation of extensible and dynamic viewers for data structures." Auburn, Ala., 2007. http://repo.lib.auburn.edu/2006%20Fall/Dissertations/JAIN_JHILMIL_3.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

黎少斌 and Shiao-bun Lai. "Trading off time for space for the string matching problem." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1996. http://hub.hku.hk/bib/B31214216.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lai, Shiao-bun. "Trading off time for space for the string matching problem /." Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18061795.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Benjamin, Jim Isaac. "Quadtree algorithms for image processing /." Online version of thesis, 1991. http://hdl.handle.net/1850/11078.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mak, Vivian. "Algorithms for proximity problems in the presence of obstacles /." Hong Kong : University of Hong Kong, 1999. http://sunzi.lib.hku.hk/hkuto/record.jsp?B21414944.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bae, Sung Eun. "Sequential and Parallel Algorithms for the Generalized Maximum Subarray Problem." Thesis, University of Canterbury. Computer Science and Software Engineering, 2007. http://hdl.handle.net/10092/1202.

Full text
Abstract:
The maximum subarray problem (MSP) involves selecting a segment of consecutive array elements that has the largest possible sum over all other segments in a given array. Efficient algorithms for the MSP and related problems are expected to contribute to various applications in genomic sequence analysis, data mining, computer vision, and other areas. The MSP is a conceptually simple problem, and several linear-time optimal algorithms for the 1D version of the problem are already known. For the 2D version, the currently known upper bounds are cubic or near-cubic time. For wider applications, it is interesting to compute multiple maximum subarrays instead of just one, which motivates the work in the first half of the thesis. The generalized problem of K-maximum subarrays involves finding K segments of the largest sum in sorted order. Two subcategories of the problem can be defined: the K-overlapping maximum subarray problem (K-OMSP) and the K-disjoint maximum subarray problem (K-DMSP). Studies on the K-OMSP had not been undertaken previously, hence the thesis explores various techniques to speed up the computation and presents several new algorithms. The first algorithm for the 1D problem runs in O(Kn) time, and increasingly efficient algorithms of O(K² + n log K), O((n + K) log K) and O(n + K log min(K, n)) time are presented. Considerations on extending these results to higher dimensions are made, which contributes to establishing O(n³) time for the 2D version of the problem when K is bounded by a certain range. Ruzzo and Tompa studied the problem of all maximal scoring subsequences, whose definition is almost identical to that of the K-DMSP with a few subtle differences. Despite these differences, their linear-time algorithm is readily capable of computing the 1D K-DMSP, but it is not easily extended to higher dimensions. This observation motivates a new algorithm based on the tournament data structure, which runs in O(n + K log min(K, n)) worst-case time. The extended version of the new algorithm is capable of processing a 2D problem in O(n³ + min(K, n) · n² log min(K, n)) time, that is, O(n³) for K ≤ n/log n. For the 2D MSP, the cubic-time sequential computation is still expensive for practical purposes considering potential applications in computer vision and data mining. The second half of the thesis investigates a speed-up option through parallel computation. Previous parallel algorithms for the 2D MSP make huge demands on hardware resources, or their target parallel computation models are purely theoretical. A nice compromise between speed and cost can be realized by utilizing a mesh topology. Two mesh algorithms for the 2D MSP with O(n) running time that require a network of size O(n²) are designed and analyzed, and various techniques are considered to maximize their practicality.
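For readers unfamiliar with the classical 1D result the abstract builds on, the following is a minimal sketch (not taken from the thesis) of the standard linear-time maximum subarray algorithm commonly attributed to Kadane; the function and variable names are illustrative only.

```python
def max_subarray(a):
    """Classical linear-time 1D maximum subarray (Kadane's algorithm): returns
    the best sum together with the (start, end) indices of a maximizing segment."""
    best_sum, best_range = a[0], (0, 0)
    cur_sum, cur_start = a[0], 0
    for i in range(1, len(a)):
        if cur_sum < 0:                      # a negative prefix never helps; restart here
            cur_sum, cur_start = a[i], i
        else:
            cur_sum += a[i]
        if cur_sum > best_sum:
            best_sum, best_range = cur_sum, (cur_start, i)
    return best_sum, best_range

print(max_subarray([3, -5, 4, -1, 5, -9, 2]))   # (8, (2, 4)): the segment 4 - 1 + 5
```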
APA, Harvard, Vancouver, ISO, and other styles
11

Mak, Vivian, and 麥慧芸. "Algorithms for proximity problems in the presence of obstacles." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B29822749.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Lai, Ka-ying. "Solving multiparty private matching problems using Bloom-filters." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B37854847.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Lai, Ka-ying, and 黎家盈. "Solving multiparty private matching problems using Bloom-filters." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B37854847.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Wong, Ka Chun. "Optimal expected-case planar point location /." View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?COMP%202005%20WONG.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Malamatos, Theocharis. "Expected-case planar point location /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20MALAMA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Dadoun, Nounou Norman. "Geometric hierarchies and parallel subdivision search." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/30992.

Full text
Abstract:
Geometric hierarchies have proven useful for the problems of point location in planar subdivisions and 2- and 3-dimensional convex polytope separation on a sequential model of computation. In this thesis, we formulate a geometric hierarchy paradigm (following the work of Dobkin and Kirkpatrick) and apply this paradigm to solve a number of computational geometry problems on a shared memory (PRAM) parallel model of computation. For certain problems, we describe what we call cooperative algorithms, algorithms which exploit parallelism in searching geometric hierarchies to solve their respective problems. For convex polygons, the geometric hierarchies are implicit and can be exploited in cooperative algorithms to compute convex polygon separation and to construct convex polygon separating/common tangents. The paradigm is also applied to the problem of tree contraction which is, in turn, applied to a number of specialized point location applications including the parallel construction of 2-dimensional Voronoi Diagrams. For point location in planar subdivisions, we present parallel algorithms to construct a subdivision hierarchy representation. A related convex polyhedra hierarchy is constructed similarly and applied to the parallel construction of 3-dimensional convex hulls. The geometric hierarchy paradigm is applied further to the design of a data structure which supports cooperative point location in general planar subdivisions. Again, a related polyhedral hierarchy can be used to exploit parallelism for a cooperative separation algorithm for convex polyhedra.
Faculty of Science
Department of Computer Science
Graduate
APA, Harvard, Vancouver, ISO, and other styles
17

Chen, Calvin Ching-Yuen. "Efficient Parallel Algorithms and Data Structures Related to Trees." Thesis, University of North Texas, 1991. https://digital.library.unt.edu/ark:/67531/metadc332626/.

Full text
Abstract:
The main contribution of this dissertation is a new paradigm, called the parentheses matching paradigm. It claims that this paradigm is well suited for designing efficient parallel algorithms for a broad class of nonnumeric problems. To demonstrate its applicability, we present three cost-optimal parallel algorithms for breadth-first traversal of general trees, sorting a special class of integers, and coloring an interval graph with the minimum number of colors.
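As context for the paradigm named above, here is a minimal sequential sketch (not from the dissertation) of what parentheses matching computes; the dissertation's contribution is to carry out this operation, and algorithms built on it, cost-optimally in parallel.

```python
def match_parentheses(s):
    """For a balanced string of '(' and ')', return match[i] = index of the
    partner of position i, using a simple stack (sequential illustration only)."""
    match = [None] * len(s)
    stack = []
    for i, c in enumerate(s):
        if c == "(":
            stack.append(i)
        else:
            j = stack.pop()
            match[i], match[j] = j, i
    return match

print(match_parentheses("(()())"))   # [5, 2, 1, 4, 3, 0]
```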
APA, Harvard, Vancouver, ISO, and other styles
18

Tati, Kiran Kumar Smilkstein Tina Harriet. "General purpose evolutionary algorithm testbed." Diss., Columbia, Mo. : University of Missouri--Columbia, 2009. http://hdl.handle.net/10355/5359.

Full text
Abstract:
The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Title from PDF of title page (University of Missouri--Columbia, viewed on January 19, 2010). Thesis advisor: Dr. Tina Smilkstein. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
19

Demuynck, Marie-Anne. "Performance Study of Concurrent Search Trees and Hash Algorithms on Multiprocessors Systems." Thesis, University of North Texas, 1996. https://digital.library.unt.edu/ark:/67531/metadc332828/.

Full text
Abstract:
This study examines the performance of concurrent algorithms for B-trees and linear hashing. B-trees are widely used as an access method for large, single-key database files stored in lexicographic order on secondary storage devices. Linear hashing is a fast and reliable hash algorithm, suitable for accessing records stored unordered in buckets. This dissertation presents performance results on implementations of concurrent Blink-tree and linear hashing algorithms, using lock-based, partitioned and distributed methods, on the Sequent Symmetry shared-memory multiprocessor system and on a network of distributed processors created with PVM (Parallel Virtual Machine) software. Initial experiments, which started with empty data structures, show good results for the partitioned implementations and lock-based linear hashing, but poor ones for lock-based Blink-trees. A subsequent test, which started with loaded data structures, shows similar results, but with much improved performance for lock-based Blink-trees. The data also highlighted the high cost of split operations, which reached up to 70% of the total insert time.
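As a reminder of how the second data structure mentioned above behaves, here is a minimal in-memory sketch of linear hashing in the style introduced by Litwin (not the dissertation's concurrent implementation; the class and parameter names are illustrative): buckets are split one at a time in a fixed cyclic order as the load grows, so no directory ever needs rebuilding.

```python
class LinearHashTable:
    """Minimal linear hashing sketch: split one bucket per overflow, cycling
    through the table; a bucket already split this round uses the next-level hash."""
    def __init__(self, initial_buckets=4, max_load=0.75):
        self.n0 = initial_buckets          # buckets at the start of round 0
        self.level = 0                     # completed doubling rounds
        self.split_ptr = 0                 # next bucket to split in this round
        self.buckets = [[] for _ in range(initial_buckets)]
        self.max_load = max_load
        self.count = 0

    def _bucket_index(self, key):
        h = hash(key)
        i = h % (self.n0 * (2 ** self.level))
        if i < self.split_ptr:             # this bucket was already split this round
            i = h % (self.n0 * (2 ** (self.level + 1)))
        return i

    def insert(self, key, value):
        self.buckets[self._bucket_index(key)].append((key, value))
        self.count += 1
        if self.count / len(self.buckets) > self.max_load:
            self._split()

    def _split(self):
        # Split the bucket at split_ptr; its image bucket is appended at the end.
        self.buckets.append([])
        old = self.buckets[self.split_ptr]
        self.buckets[self.split_ptr] = []
        self.split_ptr += 1
        if self.split_ptr == self.n0 * (2 ** self.level):   # round finished
            self.level += 1
            self.split_ptr = 0
        for k, v in old:                   # redistribute with the updated hash
            self.buckets[self._bucket_index(k)].append((k, v))

    def lookup(self, key):
        return [v for k, v in self.buckets[self._bucket_index(key)] if k == key]

table = LinearHashTable()
for i in range(20):
    table.insert(i, i * i)
print(table.lookup(7))   # [49]
```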
APA, Harvard, Vancouver, ISO, and other styles
20

Zhang, Yan. "Improving the efficiency of graph-based data mining with application to public health data." Online access for everyone, 2007. http://www.dissertations.wsu.edu/Thesis/Fall2007/y_zhang_112907.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Fouh, Mbindi Eric Noel. "Building and Evaluating a Learning Environment for Data Structures and Algorithms Courses." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/51951.

Full text
Abstract:
Learning technologies in computer science education have been most closely associated with the teaching of programming, including automatic assessment of programming exercises. However, when it comes to teaching computer science content and concepts, learning technologies have not been heavily used. Perhaps the best known application today is Algorithm Visualization (AV), of which there are hundreds of examples. AVs tend to focus on presenting the procedural aspects of how a given algorithm works, rather than more conceptual content. There are also new electronic textbooks (eTextbooks) that incorporate the ability to edit and execute program examples. For many traditional courses, a longstanding problem is the lack of sufficient practice exercises with feedback to the student. Automated assessment provides a way to increase the number of exercises on which students can receive feedback. Interactive eTextbooks have the potential to make it easy for instructors to introduce both visualizations and practice exercises into their courses. OpenDSA is an interactive eTextbook for data structures and algorithms (DSA) courses. It integrates tutorial content with AVs and automatically assessed interactive exercises. Since Spring 2013, OpenDSA has been regularly used to teach a fundamental data structures and algorithms course (CS2), and also a more advanced data structures, algorithms, and analysis course (CS3) at various institutions of higher education. In this thesis, I report on findings from early adoption of the OpenDSA system. I describe how OpenDSA's design addresses obstacles in the use of AV systems. I identify a wide variety of uses for OpenDSA in the classroom. I found that instructors used OpenDSA exercises as graded assignments in all the courses where it was used. Some instructors assigned an OpenDSA assignment before lectures and started spending more time teaching higher-level concepts. OpenDSA also supported some instructors in implementing a "flipped classroom". I found that students are enthusiastic about OpenDSA and voluntarily used the AVs embedded within OpenDSA. Students found OpenDSA beneficial and expressed a preference for a class format that included using OpenDSA as part of the assigned graded work. The relationship between OpenDSA and students' performance was inconclusive, but I found that students with higher grades tend to complete more exercises.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
22

Heinze, Glenn. "Application of evolutionary algorithm strategies to entity relationship diagrams /." View PDF document on the Internet, 2004. http://library.athabascau.ca/scisthesis/Heinze.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Zhang, Yingying. "Algorithms and Data Structures for Efficient Timing Analysis of Asynchronous Real-time Systems." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4622.

Full text
Abstract:
This thesis presents a framework to verify asynchronous real-time systems based on model checking. These systems are modeled using a common modeling formalism named Labeled Petri nets (LPNs). In order to verify the real-time systems algorithmically, the zone-based timing analysis method is used for LPNs. It searches the state space with timing information (represented by zones). When there is a high degree of concurrency in the model, firing concurrently enabled transitions in different orders may result in different zones, and these zones may be combined without affecting the verification result. Since the zone-based method cannot deal with this problem efficiently, the POSET timing analysis method is adopted for LPNs. It separates concurrency from causality and generates exactly one zone for a single state, but it needs to maintain an extra POSET matrix for each state. In order to save time and memory, an improved zone-based timing analysis method is introduced by integrating the above two methods. It searches the state space with zones but eliminates the use of the POSET matrix, and it generates the same result as the POSET method. To illustrate these methods, a circuit example is used throughout the thesis. Since the state space generated is usually very large, a graph data structure named multi-valued decision diagrams (MDDs) is implemented to store the zones compactly. In order to share common clock values of different zones, two zone encoding methods are described: direct encoding and minimal constraint encoding. They ignore unnecessary information in zones and thus reduce the length of the integer tuples. The effectiveness of these two encoding methods is demonstrated by experimental results on the circuit example.
APA, Harvard, Vancouver, ISO, and other styles
24

Wang, Zhuding. "Distribution system planning a set of new formulations and hybrid algorithms /." online access from Digital Dissertation Consortium access full-text, 2000. http://libweb.cityu.edu.hk/cgi-bin/er/db/ddcdiss.pl?9994047.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Vialette, Stéphane. "Algorithmic Contributions to Computational Molecular Biology." Habilitation à diriger des recherches, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00862069.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Caumond, Anthony. "Le problème de jobshop avec contraintes : modélisation et optimisation." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2006. http://tel.archives-ouvertes.fr/tel-00713587.

Full text
Abstract:
The best-performing optimization algorithms for the job-shop problem rely on specific methods and tools such as the disjunctive graph model and neighbourhoods based on that graph. In order to apply these methods to real problems, we had to enrich the job-shop problem, and we therefore studied the job-shop problem with time lags and the job-shop problem with transport. For each of these two problems, the disjunctive graph model and its neighbourhoods were modified and adapted. For the job-shop problem with time lags, we proposed efficient heuristics and metaheuristics, the main difficulty being to produce a solution that satisfies all of the maximum time-lag constraints. For the job-shop problem with transport, we proposed a linear model and a metaheuristic that both address the same problem (i.e. take strictly the same constraints into account). In both cases, a disjunctive graph model and adapted neighbourhoods were proposed. Moreover, implementing the metaheuristics for each of these problems showed us that a large part of the development effort is redundant, so we proposed an object-oriented optimization framework (BCOO) whose goal is to factor out as much code as possible.
APA, Harvard, Vancouver, ISO, and other styles
27

Renaud, Yoan. "Quelques aspects algorithmiques sur les systèmes de fermeture." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2008. http://tel.archives-ouvertes.fr/tel-00731341.

Full text
Abstract:
In this thesis we present the definitions and notation related to closure systems and show their relationship with Horn theories. We then study three operations on closure systems: the upper bound, the lower bound and the difference. We propose a characterization of these operations according to the representation of the closure systems under consideration. We then turn to the problem of generating a basis of mixed implications of a formal context, studying the case where the input consists of the positive and negative generic implication bases of that context. Three main results are presented: properties and inference rules for deducing mixed implications, the impossibility of generating a sound and complete basis of mixed implications from these data in the general case, and its feasibility in the case where the context is assumed to be reduced.
APA, Harvard, Vancouver, ISO, and other styles
28

Monostori, Krisztian 1975. "Efficient computational approach to identifying overlapping documents in large digital collections." Monash University, School of Computer Science and Software Engineering, 2002. http://arrow.monash.edu.au/hdl/1959.1/8756.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Alston, Katherine Yvette. "A heuristic on the rearrangeability of shuffle-exchange networks." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2521.

Full text
Abstract:
The algorithms which control network routing are specific to the network because they are designed to take advantage of that network's topology. The "goodness" of a network includes criteria such as a simple routing algorithm, and a simple routing algorithm would increase the use of the shuffle-exchange network.
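For context on the topology named in the abstract, here is a small sketch of the two edge types that define a shuffle-exchange network on 2**n_bits nodes; these are the standard textbook definitions, not code from the project, and the function names are illustrative.

```python
def shuffle(node, n_bits):
    """Perfect-shuffle edge: rotate the node's n_bits-bit binary address left by one."""
    msb = (node >> (n_bits - 1)) & 1
    return ((node << 1) & ((1 << n_bits) - 1)) | msb

def exchange(node):
    """Exchange edge: flip the least significant bit of the address."""
    return node ^ 1

# On 8 nodes (3 bits): node 3 = 011 shuffles to 110 = 6 and exchanges to 010 = 2.
print(shuffle(3, 3), exchange(3))   # 6 2
```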
APA, Harvard, Vancouver, ISO, and other styles
30

Acuña, Vicente. "Models and algorithms for metabolic networks: elementary modes and precursor sets." Phd thesis, Université Claude Bernard - Lyon I, 2010. http://tel.archives-ouvertes.fr/tel-00850705.

Full text
Abstract:
In this PhD thesis, we present some algorithms and complexity results for two general problems that arise in the analysis of a metabolic network: the search for elementary modes of a network and the search for minimal precursor sets. Elementary modes are a common tool in the study of the cellular characteristics of a metabolic network. An elementary mode can be seen as a minimal set of reactions that can work in steady state independently of the rest of the network. It has therefore served as a mathematical model for the possible metabolic pathways of a cell. Their computation is not trivial and poses computational challenges. We show that some problems, like checking consistency of a network, finding one elementary mode or checking that a set of reactions constitutes a cut, are easy, giving polynomial algorithms based on LP formulations. We also prove the hardness of central problems like finding a minimum-size elementary mode, finding an elementary mode containing two given reactions, counting the number of elementary modes or finding a minimum reaction cut. On the enumeration problem, we show that enumerating all elementary modes containing one given reaction cannot be done in polynomial total time unless P=NP. This result provides some idea about the complexity of enumerating all the elementary modes. The search for precursor sets is motivated by discovering which external metabolites are sufficient to allow the production of a given set of target metabolites. In contrast with previous proposals, we present a new approach which is the first to formally consider the use of cycles in the way to produce the target. We present a polynomial algorithm to decide whether a set is a precursor set of a given target. We also show that, given a target set, finding a minimal precursor set is easy but finding a precursor set of minimum size is NP-hard. We further show that finding a solution with minimum-size internal supply is NP-hard. We give a simple characterisation of precursor sets by the existence of hyperpaths between the solutions and the target. If we consider the enumeration of all the minimal precursor sets of a given target, we find that this problem cannot be solved in polynomial total time unless P=NP. Despite this result, we present two algorithms that have good performance for medium-size networks.
APA, Harvard, Vancouver, ISO, and other styles
31

Dick, Grant, and n/a. "Spatially-structured niching methods for evolutionary algorithms." University of Otago. Department of Information Science, 2008. http://adt.otago.ac.nz./public/adt-NZDU20080902.161336.

Full text
Abstract:
Traditionally, an evolutionary algorithm (EA) operates on a single population with no restrictions on possible mating pairs. Interesting changes to the behaviour of EAs emerge when the structure of the population is altered so that mating between individuals is restricted. Variants of EAs that use such populations are grouped into the field of spatially-structured EAs (SSEAs). Previous research into the behaviour of SSEAs has primarily focused on the impact space has on the selection pressure in the system. Selection pressure is usually characterised by takeover times and the ratio between the neighbourhood size and the overall dimension of space. While this research has given indications into where and when the use of an SSEA might be suitable, it does not provide a complete coverage of system behaviour in SSEAs. This thesis presents new research into areas of SSEA behaviour that have been left either unexplored or briefly touched upon in current EA literature. The behaviour of genetic drift in finite panmictic populations is well understood. This thesis attempts to characterise the behaviour of genetic drift in spatially-structured populations. First, an empirical investigation into genetic drift in two commonly encountered topologies, rings and tori, is performed. An observation is made that genetic drift in these two configurations of space is independent of the genetic structure of individuals and additive of the equivalent-sized panmictic population. In addition, localised areas of homogeneity present themselves within the structure purely as a result of drifting. A model based on the theory of random walks to absorbing boundaries is presented which accurately characterises the time to fixation through random genetic drift in ring topologies. A large volume of research has gone into developing niching methods for solving multimodal problems. Previously, these techniques have used panmictic populations. This thesis introduces the concept of localised niching, where the typically global niching methods are applied to the overlapping demes of a spatially structured population. Two implementations, local sharing and local clearing, are presented and are shown to be frequently faster and more robust to parameter settings, and applicable to more problems than their panmictic counterparts. Current SSEAs typically use a single fitness function across the entire population. In the context of multimodal problems, this means each location in space attempts to discover all the optima. A preferable situation would be to use the inherent spatial properties of an SSEA to localise optimisation of peaks. This thesis adapts concepts from multiobjective optimisation with environmental gradients and applies them to multimodal problems. In addition to adapting to the fitness landscape, individuals evolve towards their preferred environmental conditions. This has the effect of separating individuals into regions that concentrate on different optima with the global fitness function. The thesis also gives insights into the expected number of individuals occupying each optimum in the problem. The SSEAs and related models developed in this thesis are of interest to both researchers and end-users of evolutionary computation. From the end-user's perspective, the developed SSEAs require less a priori knowledge of a given problem domain in order to operate effectively, so they can be more readily applied to difficult, poorly-defined problems.
Also, the theoretical findings of this thesis provide a more complete understanding of evolution within spatially-structured populations, which is of interest not only to evolutionary computation practitioners, but also to researchers in the fields of population genetics and ecology.
APA, Harvard, Vancouver, ISO, and other styles
32

Zanetti, João Paulo Pereira 1987. "Complexidade de construção de árvores PQR." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275699.

Full text
Abstract:
Advisor: João Meidanis
Master's thesis - Universidade Estadual de Campinas, Instituto de Computação
Abstract: PQR trees are data structures used to solve the consecutive ones problem and other related problems. Applications include interval or planar graph recognition, and problems involving DNA molecules. This dissertation aims at consolidating existing and new knowledge about PQR trees and, primarily, their online construction, thus providing a theoretical basis for the use of this structure in applications. This work presents a detailed description of the online PQR tree construction algorithm's design, starting with a naive implementation of the suggested operations and refining them successively, culminating with an almost-linear time complexity. In this project, we dealt with an obstacle that arises with the use of union-find structures and that has never been addressed before. The proof presented here for the time complexity is also novel and clearer. Furthermore, the project is accompanied by a Java implementation of all the algorithms described
Master's in Computer Science
APA, Harvard, Vancouver, ISO, and other styles
33

Maquet, Nicolas. "New algorithms and data structures for the emptiness problem of alternating automata." Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209961.

Full text
Abstract:
This work studies new algorithms and data structures that are useful in the context of program verification. As computers have become more and more ubiquitous in our modern societies, an increasingly large number of computer-based systems are considered safety-critical. Such systems are characterized by the fact that a failure or a bug (computer error in the computing jargon) could potentially cause large damage, whether in loss of life, environmental damage, or economic damage. For safety-critical systems, the industrial software engineering community increasingly calls for using techniques which provide some formal assurance that a certain piece of software is correct.

One of the most successful program verification techniques is model checking, in which programs are typically abstracted by a finite-state machine. After this abstraction step, properties (typically in the form of some temporal logic formula) can be checked against the finite-state abstraction, with the help of automated tools. Alternating automata play an important role in this context, since many temporal logics on words and trees can be efficiently translated into those automata. This property allows for the reduction of model checking to automata-theoretic questions and is called the automata-theoretic approach to model checking. In this work, we provide three novel approaches for the analysis (emptiness checking) of alternating automata over finite and infinite words. First, we build on the successful framework of antichains to devise new algorithms for LTL satisfiability and model checking, using alternating automata. These algorithms combine antichains with reduced ordered binary decision diagrams in order to handle the exponentially large alphabets of the automata generated by the LTL translation. Second, we develop new abstraction and refinement algorithms for alternating automata, which combine the use of antichains with abstract interpretation, in order to handle ever larger instances of alternating automata. Finally, we define a new symbolic data structure, coined lattice-valued binary decision diagrams, that is particularly well-suited for the encoding of transition functions of alternating automata over symbolic alphabets. All of these works are supported with empirical evaluations that confirm the practical usefulness of our approaches.
Doctorate in Sciences

APA, Harvard, Vancouver, ISO, and other styles
34

Rivera, Kris Krishna. "Ray collection bounding volume hierarchy." Master's thesis, University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4701.

Full text
Abstract:
This thesis presents Ray Collection BVH, an improvement over a current-day ray tracing acceleration structure, addressing both building the structure and performing the steps necessary to efficiently render dynamic scenes. The Bounding Volume Hierarchy (BVH) is a commonly used acceleration structure which aids in rendering complex scenes in 3D space using ray tracing by breaking the scene of triangles into a simple hierarchical structure. The algorithm this thesis explores was developed in an attempt at accelerating the process of both constructing this structure and using it to render these complex scenes more efficiently. The idea of using a "ray collection" as a data structure was accidentally stumbled upon by the author while testing a theory he had for a class project. The overall scheme of the algorithm essentially collects a set of localized rays together and intersects them with subsequent levels of the BVH at each build step. In addition, only part of the acceleration structure is built, on a per-ray basis as needed. During this partial build, the rays responsible for creating the scene are partially processed, also saving time on the overall procedure. Ray tracing is a widely used rendering technique, from producing realistic images to making movies. Particularly in the movie industry, the level of realism brought to animated movies through ray tracing is incredible, so any improvement to these algorithms that increases rendering speed would be considered useful and welcome. This thesis makes contributions towards improving the overall speed of scene rendering, and hence may be considered an important and useful contribution.
ID: 030646225; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Thesis (M.S.)--University of Central Florida, 2011; Includes bibliographical references (p. 80-81).
M.S.
Masters
Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science
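The abstract above takes the ray/bounding-volume intersection test as a given; for readers who want that basic operation spelled out, here is a standard axis-aligned slab test sketch (generic textbook code, not the thesis' algorithm; all names are illustrative).

```python
def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Classic slab test: does a ray (origin + t*dir, t >= 0) hit an axis-aligned box?
    inv_dir holds 1/dir per axis (use a huge value for axes where dir is 0)."""
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        if t1 > t2:
            t1, t2 = t2, t1                 # order the slab entry/exit distances
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far:                  # the slabs' intervals no longer overlap
            return False
    return True

# Ray from the origin along +x, box spanning x in [1, 2]: a hit.
print(ray_aabb_hit((0, 0, 0), (1.0, 1e30, 1e30), (1, -1, -1), (2, 1, 1)))   # True
```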
APA, Harvard, Vancouver, ISO, and other styles
35

Bessy, Stéphane. "Some problems in graph theory and graphs algorithmic theory." Habilitation à diriger des recherches, Université Montpellier II - Sciences et Techniques du Languedoc, 2012. http://tel.archives-ouvertes.fr/tel-00806716.

Full text
Abstract:
This document is a long abstract of my research work concerning graph theory and algorithms on graphs. It summarizes some results, sketches the proofs of some of them, and presents the context of the different topics together with some interesting open questions connected to them. The first part specifies the notation used in the rest of the document; the second part deals with some problems on cycles in digraphs; the third part is an overview of two graph coloring problems and one problem on structures in colored graphs; finally, the fourth part focuses on some results in algorithmic graph theory, mainly in parameterized complexity.
APA, Harvard, Vancouver, ISO, and other styles
36

Viennot, Laurent. "Quelques algorithmes parallèles et séquentiels de traitement des graphes et applications." Phd thesis, Université Paris-Diderot - Paris VII, 1996. http://tel.archives-ouvertes.fr/tel-00471691.

Full text
Abstract:
This thesis presents a parallel and sequential algorithmic view of graph processing. Chapter 1 is devoted to the PRAM model, the simplest model of parallelism: several processors access a shared memory. Even with the simplification the model provides, some problems remain hard to solve. Section 1.1 introduces a representation suited to the algorithmic treatment of orders of fixed dimension d and allows a classical representation of the order to be computed; this computation is related to geometric queries in d-dimensional space. Section 1.2 is devoted to the parallel recognition of N-free orders and Section 1.3 deals with the recognition of comparability graphs. In general, studying particular classes of graphs makes it possible to solve problems that are hard in the general case by exploiting an algorithmic structure underlying the class; the recognition problem consists in finding that structure. Chapter 2 is devoted to the CGM model, a "coarse-grained" parallel machine model that emphasizes the distributed placement of a problem's data, i.e. over the memories of the computers that will work together on the problem. This chapter revisits the problems addressed in the PRAM model and provides solutions to them in the CGM model; a list-ranking algorithm is also presented in this model. Chapter 3 is devoted to a very particular "computation model" arising from a GSM telephony problem. It gathers, on the one hand, the algorithmic ideas that apply to such a problem subject to multiple constraints and, on the other hand, simulations that evaluate the relevance of these ideas. The problem is continuous in nature, but solutions from discrete algorithmics, such as techniques related to connected components of a graph, can nevertheless be brought to bear. For the sake of continuity, a connected-components algorithm is given in each of the three models considered. Finally, Chapter 4 is devoted to a new algorithmic technique: partition refinement. Section 4.1 attempts to delineate this technique and shows the similarities between various existing algorithms; the technique then allows some of these algorithms to be generalized to solve other related problems. Partition refinement then allows us, in Section 4.2, to give simple algorithms for interval graph recognition and transitive orientation, two problems whose efficient algorithmic solutions had until then been very difficult to implement and relied on complex data structures.
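Chapter 4 of the thesis is about partition refinement; the following is a minimal generic sketch of the single refinement step that technique repeats (illustrative code only, not taken from the thesis).

```python
def refine(partition, pivot):
    """One partition-refinement step: split every class of `partition` into the
    part inside `pivot` and the part outside, keeping only non-empty parts."""
    pivot = set(pivot)
    refined = []
    for cls in partition:
        inside = [x for x in cls if x in pivot]
        outside = [x for x in cls if x not in pivot]
        if inside and outside:
            refined.extend([inside, outside])
        else:
            refined.append(cls)
    return refined

print(refine([[1, 2, 3, 4, 5]], {2, 4}))   # [[2, 4], [1, 3, 5]]
```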
APA, Harvard, Vancouver, ISO, and other styles
37

Neves, Patricia Takaki. "Variações e aplicações do algoritmo de Dijkstra." [s.n.], 2007. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276210.

Full text
Abstract:
Advisor: Orlando Lee
Master's thesis - Universidade Estadual de Campinas, Instituto de Computação
Abstract: The problem of finding shortest paths in a weighted graph is a fundamental one in combinatorial optimization. Several real-world problems can be modeled in this way: the shortest or fastest tour between two cities, data transmission on a computer network, voice recognition, image segmentation, among others. The algorithm proposed by Dijkstra in 1959 solves this problem when the graph has no edge with negative weight, which is not a serious restriction in most applications. Since then, the algorithm has been improved with the use of sophisticated data structures, reducing the worst-case running time (at least from a theoretical viewpoint). Recently, shortest path problems have appeared in the context of Geographic Information Systems (GIS). In this model, the user asks the system to find the shortest path between two given points (the point-to-point or P2P problem), and there can be several queries. Instances in this model are relatively large: the road network map of the United States has more than 20 million vertices (each vertex represents an intersection of two roads). Even the fastest implementations of Dijkstra's algorithm do not perform well enough in practice to meet the requirements of this model. Recent research has tried to reduce this gap between theory and practice. Several speed-up techniques have been proposed and implemented: bidirectional search, the A* algorithm, reach, landmarks and many others. Some of them are domain-restricted and others are applicable in any context. In this work, we studied some variants of Dijkstra's algorithm characterized by their different data structures. We implemented four of those variants and performed experimental tests using real-world maps. Our goal was to analyze their practical performance. We also paid special attention to the P2P problem, and presented some of the main speed-up techniques.
Master's in Computer Science
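As a baseline for the variants discussed in the abstract above, here is a minimal textbook Dijkstra sketch with a binary heap (illustrative code, not one of the four implementations studied in the dissertation).

```python
import heapq

def dijkstra(graph, source):
    """Textbook Dijkstra with a binary heap and lazy deletion; `graph` maps each
    vertex to a list of (neighbour, non-negative weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry, vertex already settled
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Tiny usage example on a toy road-like graph.
g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1), ("d", 4)], "c": [("d", 1)], "d": []}
print(dijkstra(g, "a"))   # {'a': 0, 'b': 2, 'c': 3, 'd': 4}
```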
APA, Harvard, Vancouver, ISO, and other styles
38

Ward, Paul. "A Scalable Partial-Order Data Structure for Distributed-System Observation." Thesis, University of Waterloo, 2001. http://hdl.handle.net/10012/1161.

Full text
Abstract:
Distributed-system observation is foundational to understanding and controlling distributed computations. Existing tools for distributed-system observation are constrained in the size of computation that they can observe by three fundamental problems. They lack scalable information collection, scalable data-structures for storing and querying the information collected, and scalable information-abstraction schemes. This dissertation addresses the second of these problems. Two core problems were identified in providing a scalable data structure. First, in spite of the existence of several distributed-system-observation tools, the requirements of such a structure were not well-defined. Rather, current tools appear to be built on the basis of events as the core data structure. Events were assigned logical timestamps, typically Fidge/Mattern, as needed to capture causality. Algorithms then took advantage of additional properties of these timestamps that are not explicit in the formal semantics. This dissertation defines the data-structure interface precisely, and goes some way toward reworking algorithms in terms of that interface. The second problem is providing an efficient, scalable implementation for the defined data structure. The key issue in solving this is to provide a scalable precedence-test operation. Current tools use the Fidge/Mattern timestamp for this. While this provides a constant-time test, it requires space per event equal to the number of processes. As the number of processes increases, the space consumption becomes sufficient to affect the precedence-test time because of caching effects. It also becomes problematic when the timestamps need to be copied between processes or written to a file. Worse, existing theory suggested that the space-consumption requirement of Fidge/Mattern timestamps was optimal. In this dissertation we present two alternate timestamp algorithms that require substantially less space than does the Fidge/Mattern algorithm.
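The abstract above refers to Fidge/Mattern vector timestamps and the constant-time precedence test they support; the following small sketch shows that test (standard material, not the dissertation's data structure), with each event represented as a (process id, vector timestamp) pair.

```python
def happened_before(event_a, event_b):
    """Fidge/Mattern precedence test in O(1): for distinct events, a causally
    precedes b iff b's vector clock has seen at least a's own counter on a's process."""
    pa, ta = event_a
    pb, tb = event_b
    if (pa, ta) == (pb, tb):
        return False                       # an event does not precede itself
    return tb[pa] >= ta[pa]

# Process 0's second event vs an event on process 1 that has already received it:
a = (0, [2, 0, 0])
b = (1, [2, 3, 0])
print(happened_before(a, b))   # True
print(happened_before(b, a))   # False: a's clock has not seen b's counter
```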
APA, Harvard, Vancouver, ISO, and other styles
39

Gilardet, Mathieu. "Étude d'algorithmes de restauration d'images sismiques par optimisation de forme non linéaire et application à la reconstruction sédimentaire." Phd thesis, Université de Pau et des Pays de l'Adour, 2013. http://tel.archives-ouvertes.fr/tel-00952964.

Full text
Abstract:
We present a new method for seismic image restoration. When observed, a seismic image is the result of an initial depositional system that has been transformed by a series of successive geological deformations (folding, fault slip, etc.) occurring over a long period of time. The goal of seismic restoration is to invert these deformations so as to produce an image representing the depositional system as it was in an earlier state. Classically, this process is used to test the consistency of the interpretation hypotheses that geophysicists formulate on the initial images. Our contribution provides a tool that quickly generates restored images and thus helps geophysicists recognize and identify geological features that may have been strongly modified and are therefore hard to identify in the originally observed image; the application thus assists them in formulating interpretation hypotheses for seismic images. The approach we introduce is based on a minimization process that expresses the geological deformations as geometric constraints; we use an iterative Gauss-Newton approach that converges quickly to solve the system. In the second part of our work we show results obtained in concrete cases, in order to illustrate the seismic image restoration process on real data and to show how the restored version can be used in a geological interpretation setting.
APA, Harvard, Vancouver, ISO, and other styles
40

Hanusse, Nicolas. "Navigation dans les grands graphes." Habilitation à diriger des recherches, Université Sciences et Technologies - Bordeaux I, 2009. http://tel.archives-ouvertes.fr/tel-00717765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Bassino, Frédérique. "Automates, énumération et algorithmes." Habilitation à diriger des recherches, Université de Marne la Vallée, 2005. http://tel.archives-ouvertes.fr/tel-00719172.

Full text
Abstract:
This work falls within the general framework of automata theory, combinatorics on words, enumerative combinatorics and algorithmics. Its common threads are automata and regular languages, enumeration problems, and constructive results, often stated explicitly as algorithms. The problems addressed come from fairly varied domains. The text is composed of three parts, devoted to prefix codes, to certain lexicographic sequences, and to the enumeration of automata.
APA, Harvard, Vancouver, ISO, and other styles
42

Braginton, Pauline. "Taxonomy of synchronization and barrier as a basic mechanism for building other synchronization from it." CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2288.

Full text
Abstract:
A Distributed Shared Memory (DSM) system consists of several computers that share a memory area and has no global clock. Therefore, an ordering of events in the system is necessary. Synchronization is a mechanism for coordinating activities between processes, which are program instantiations in a system.
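For readers who want a concrete picture of the barrier primitive the title refers to, here is a minimal sense-reversing barrier sketch built from a lock and a condition variable; it is illustrative only and is not the construction analyzed in the project.

```python
import threading

class Barrier:
    """Reusable sense-reversing barrier: the last arriving thread flips the shared
    sense and releases everyone; earlier arrivals wait for the flip."""
    def __init__(self, parties):
        self.parties = parties
        self.count = 0
        self.sense = False
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            my_sense = self.sense
            self.count += 1
            if self.count == self.parties:      # last thread to arrive
                self.count = 0
                self.sense = not self.sense
                self.cond.notify_all()
            else:
                self.cond.wait_for(lambda: self.sense != my_sense)
```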
APA, Harvard, Vancouver, ISO, and other styles
43

Choudhury, Sabyasachy. "Hierarchical Data Structures for Pattern Recognition." Thesis, Indian Institute of Science, 1987. http://hdl.handle.net/2005/74.

Full text
Abstract:
Pattern recognition is an important area with potential applications in computer vision, speech understanding, knowledge engineering, bio-medical data classification, earth sciences, life sciences, economics, psychology, linguistics, etc. Clustering is an unsupervised classification process coming under the area of pattern recognition. There are two types of clustering approaches: 1) non-hierarchical methods and 2) hierarchical methods. Non-hierarchical algorithms are iterative in nature and perform well in the context of isotropic clusters; the time complexity of these algorithms is of order O(n) and above. Hierarchical agglomerative algorithms, on the other hand, are effective when clusters are non-isotropic. The single linkage method of the hierarchical category produces a dendrogram which corresponds to the minimal spanning tree; conventional approaches are time-consuming, requiring O(n²) computational time. In this thesis we propose an intelligent partitioning scheme for generating the minimal spanning tree in the coordinate space. This is computationally elegant as it avoids the computation of similarity between many pairs of samples. The minimal spanning tree generated can be used to produce C disjoint clusters by breaking the (C-1) longest edges in the tree. A systolic architecture has been proposed to increase the speed of the algorithm further. A simulation study has been conducted and the corresponding results are reported. The simulation package has been developed on a DEC-1090 in Pascal. Based on the simulation study, it is observed that the parallel implementation reduces the time enormously. The number of processors required for the parallel implementation is a constant, making the approach more attractive. Texture analysis and synthesis has been extensively studied in the context of computer vision. Two important approaches studied extensively by earlier researchers are the statistical and structural approaches. Texture is understood to be a periodic pattern with primitive sub-patterns repeating in a particular fashion. This has been used to characterize texture with the help of a hierarchical data structure, the tree. It is convenient to use a tree data structure as, along with operations like merging, splitting, deleting a node, adding a node, etc., it is useful for handling a periodic pattern. Various functions like angular second moment, correlation, etc., which are used to characterize texture, have been translated into the new language of this hierarchical data structure.
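The clustering step described above (build a minimal spanning tree, then break the (C-1) longest edges) can be illustrated with a short generic sketch; this is standard single-linkage clustering code, not the thesis' partitioning scheme or systolic design, and all names are illustrative.

```python
def mst_clusters(points, n_clusters, dist):
    """Build a minimum spanning tree with Kruskal + union-find, then drop the
    (n_clusters - 1) longest MST edges and return the connected components."""
    n = len(points)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    edges = sorted((dist(points[i], points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    mst = []
    for w, i, j in edges:                   # Kruskal: keep the lightest safe edges
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((w, i, j))

    mst.sort()                              # drop the (n_clusters - 1) heaviest edges
    kept = mst[:len(mst) - (n_clusters - 1)] if n_clusters > 1 else mst
    parent = list(range(n))
    for _, i, j in kept:
        parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]
d = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
print(mst_clusters(pts, 2, d))   # [[0, 1, 2], [3, 4]] (up to ordering)
```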
APA, Harvard, Vancouver, ISO, and other styles
44

Ngoko, Yanik. "L'Approche du portfolio d'algorithmes pour la construction des algorithmes robustes et adaptatifs." Phd thesis, Université de Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00786253.

Full text
Abstract:
For many problems it is difficult to have a single algorithm that solves all instances optimally (in execution time). This observation motivates approaches for combining several algorithms that solve the same problem. Such combinations can be implemented at the system level (by building adaptive libraries, languages, components, etc.) or at the purely algorithmic level. This work focuses on generic algorithm-combination approaches at the algorithmic level, in particular the algorithm portfolio approach. An algorithm portfolio defines a concurrent execution of several algorithms solving the same problem; in such an execution the algorithms are interleaved in time and/or space, and on a given instance the execution is interrupted as soon as one of the algorithms finds a solution. In this thesis we propose a classification of algorithm-combination techniques, specifying for each technique the context best suited to its use. We then propose two techniques for building algorithm portfolios. The first is based on an adaptation of the nearest-neighbour method from machine learning to the combination of algorithms. This technique is adaptive because, for each instance, it tries to find a subset of algorithms suited to solving it. We apply it to the combination of iterative algorithms for solving linear systems and show, on a set of about a thousand sparse matrices, that it reduces the number of iterations and the time needed for the solution. Moreover, on some experimental sets, the results show that the proposed technique can in most cases find the algorithm best suited to solving a given instance. The second technique is based on a resource-sharing problem that we formulate: given a target problem, a data set representing it, a set of candidate algorithms solving it and the execution-time behaviour of the data set on the candidate algorithms, the resource-sharing problem aims to find the best static allocation of resources to the candidate algorithms so as to minimize the average time needed to solve the target data set. This problem seeks a solution that is on average more robust than each of the candidate algorithms taken separately. We show that the problem is NP-complete and propose two families of approximate and exact algorithms to solve it. We validate the proposed solutions on data taken from a SAT database; the results show that they effectively exploit the complementarity of algorithms solving the same problem in order to build robust algorithms.
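To make the portfolio notion above concrete, here is a tiny sketch of the simplest interpretation, running candidate solvers concurrently and keeping the first answer produced; it uses ordinary Python threads for illustration and is not the thesis' interleaving or resource-sharing machinery.

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def portfolio(instance, algorithms):
    """Algorithm-portfolio sketch: launch every candidate solver on the same
    instance concurrently and keep the first answer produced. (Python threads
    cannot be killed, so stragglers still finish before the pool shuts down;
    a real portfolio would interrupt them.)"""
    with ThreadPoolExecutor(max_workers=len(algorithms)) as pool:
        futures = [pool.submit(alg, instance) for alg in algorithms]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        return next(iter(done)).result()

import time
def slow_exact(x):  time.sleep(0.2); return ("exact", x * x)
def fast_greedy(x): time.sleep(0.01); return ("greedy", x * x)
print(portfolio(7, [slow_exact, fast_greedy]))   # ('greedy', 49): first answer wins
```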
APA, Harvard, Vancouver, ISO, and other styles
45

Viennot, Laurent. "Autour des graphes et du routage." Habilitation à diriger des recherches, Université Paris-Diderot - Paris VII, 2005. http://tel.archives-ouvertes.fr/tel-00471731.

Full text
Abstract:
Chapter 2 briefly reviews the routing problem, notably in the Internet (which serves as the introductory example in most chapters), in ad hoc networks, in the web graph and in peer-to-peer networks. Chapter 3 is devoted to "one-to-all" routing, that is, the problem of broadcasting a message to all members of a network. Chapter 4 deals with routing in its most classical sense, that is, sending a message "from one node to another". We then consider the more practical problem of sending a message to a node defined indirectly, which I have called "from one to whoever". The last two chapters turn to network dynamics, whether in the distributed algorithms run between the nodes or in the network itself. Chapter 6 describes a very general class of network algorithms based on asynchronous iterations that work as soon as regularly sent messages are received "from time to time". Chapter 7 then develops some points related to the dynamic aspect of certain networks, "when things move", both in terms of the connections between nodes and of the presence of the nodes themselves.
APA, Harvard, Vancouver, ISO, and other styles
46

Jaillet, Léonard. "Méthodes probabilistes pour la planifcation réactive de mouvement." Phd thesis, Université Paul Sabatier - Toulouse III, 2005. http://tel.archives-ouvertes.fr/tel-00853031.

Full text
Abstract:
Despite the clear success of motion planning techniques over the last two decades, their adaptation to scenes containing both static and moving obstacles has so far remained limited. One reason is the cost of updating the precomputed data structures that capture the connectivity of the free space. Our main contribution is a new planner capable of handling these partially dynamic environments composed of both static and moving obstacles.
APA, Harvard, Vancouver, ISO, and other styles
47

Jahami, Ghassan. "Pour un système de synthèse d'images flexible et évolutif : quelques propositions." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 1991. http://tel.archives-ouvertes.fr/tel-00817677.

Full text
Abstract:
During my thesis I was interested in the image synthesis system as a whole. I worked on the different stages of the process of generating a synthetic image, from modelling to rendering. My main objective was to promote the extensibility and flexibility of the system. To achieve this, I used object-oriented programming to design and implement a construction-tree (CSG) modeller in C++. I proposed a methodology for choosing classes and an original class hierarchy. To make the system more flexible, I allowed hidden-surface-removal algorithms to be mixed within the same scene while preserving interaction, in terms of reflection, transparency and cast shadows, between all the objects of the scene. Finally, I proposed a number of tools and methods for managing levels of detail in a scene.
APA, Harvard, Vancouver, ISO, and other styles
48

Blin, Lélia. "Algorithmes auto-stabilisants pour la construction d'arbres couvrants et la gestion d'entités autonomes." Habilitation à diriger des recherches, Université Pierre et Marie Curie - Paris VI, 2011. http://tel.archives-ouvertes.fr/tel-00847179.

Full text
Abstract:
In the context of large-scale networks, taking failures into account is an obvious necessity. This document focuses on the self-stabilizing approach, which aims to design algorithms that "repair themselves" in the event of transient faults, that is, failures involving arbitrary modification of the state of the processes. It concentrates on two different contexts, covering the bulk of my research work in recent years. The first part of the document is devoted to self-stabilizing algorithms for networks of processes. The second part is devoted to self-stabilizing algorithms for autonomous entities (software agents, robots, etc.) moving within a network.
APA, Harvard, Vancouver, ISO, and other styles
49

Upadhyay, Abhyudaya. "Big Vector: An External Memory Algorithm and Data Structure." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439279714.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Ghazanfarpour-Kholendjany, Djamchid. "Problèmes de discrétisation et de filtrage pour la visualisation d'images numériques." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 1990. http://tel.archives-ouvertes.fr/tel-00817493.

Full text
Abstract:
The low resolution of frame buffers imposed by technological constraints raises image discretisation problems when the image is displayed. The result is the various aliasing defects caused by insufficient sampling of the image in its analogue form. These defects are perceptible mainly as staircase effects along the contours of the image, as the appearance and disappearance of small objects depending on their position in the scene, and as moiré patterns in scenes containing textures. To reach a higher degree of realism in image synthesis, it is essential to solve these discretisation problems. The general solution is low-pass pre-filtering of the image before it is displayed. In this thesis we approach these problems from both a theoretical and a practical angle. We study anti-aliasing methods in image synthesis in the most common cases. We propose new methods, in particular original algorithms, to solve these problems in the cases of the depth buffer and of textures.
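The general remedy mentioned in this abstract, low-pass filtering the image before it is resampled for display, can be illustrated with a tiny box-filter downsampling sketch. This is not one of the thesis's original algorithms for the depth buffer or for textures; it is a generic supersampling example, assuming a grayscale image rendered at k times the target resolution.

def box_downsample(image, k):
    """Average k x k blocks of a supersampled grayscale image (list of rows).
    The block average acts as a crude low-pass filter, attenuating the
    frequencies that would otherwise alias at the display resolution."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(0, h - h % k, k):
        row = []
        for x in range(0, w - w % k, k):
            block = [image[y + dy][x + dx] for dy in range(k) for dx in range(k)]
            row.append(sum(block) / (k * k))
        out.append(row)
    return out

# Example: a hard vertical edge rendered at 4x resolution comes out with
# intermediate gray levels instead of a jagged binary staircase.
hires = [[1.0 if x < 6 else 0.0 for x in range(8)] for _ in range(8)]
print(box_downsample(hires, 4))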
APA, Harvard, Vancouver, ISO, and other styles
