Dissertations / Theses on the topic 'Complexité combinatoire'
Consult the top 50 dissertations / theses for your research on the topic 'Complexité combinatoire.'
Bouras, Abdelghani. "Problème d'affectation quadratique de petit rang : modèles, complexité et applications." Université Joseph Fourier (Grenoble), 1996. http://www.theses.fr/1996GRE10102.
Chung, Yerim. "Optimisation combinatoire inverse et applications." Paris 1, 2010. http://www.theses.fr/2010PA010009.
Renotte-Jeudy, Laure. "Classes de complexité, complétude et préservation de l'approximation." Paris 9, 1996. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1996PA090064.
Watrigant, Rémi. "Approximation et complexité paramétrée de problèmes d’optimisation dans les graphes : partitions et sous-graphes." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20100/document.
The theory of NP-completeness tells us that for many optimization problems, there is no hope of finding an efficient algorithm computing an optimal solution. Based on this, two classical approaches have been developed to deal with these problems. The first one, called polynomial-time approximation, consists in designing efficient algorithms computing a solution that is close to an optimal one. The second one, called parameterized complexity, consists in designing exact algorithms whose combinatorial explosion is captured by a carefully chosen parameter of the instance. The goal of this thesis is twofold. First, we study and apply classical methods from these two domains in order to obtain positive and negative results for two optimization problems in graphs: a partitioning problem called Sparsest k-Compaction, and a cardinality-constrained subgraph problem called Sparsest k-Subgraph. Then, we present how the different methods from these two domains have been combined in recent years in a concept called parameterized approximation. In particular, we study the links between approximation and kernelization algorithms.
Duvillié, Guillerme. "Approximation, complexité paramétrée et stratégies de résolution de problèmes d'affectation multidimensionnelle." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT321/document.
In this thesis, we focus on Wafer-to-Wafer integration problems, which come from IC manufacturing. During the production of three-dimensional processors, dies have to be superimposed. Until recently, the dies were engraved on a silicon disk called a wafer, then cut, tested and sorted to remove faulty dies, and lastly superimposed on one another. However, superimposing wafers instead of dies presents several technical and financial advantages. Since faulty dies can only be dismissed when cutting the wafer, superimposing two wafers can lead to superimposing a faulty die with a viable one. In this case, the resulting stack of dies is considered faulty. It follows that a bad assignment between the wafers can lead to a disastrous yield. In order to minimize the number of faulty die stacks, a "failure map" of each wafer is generated during a test phase. This map gives the locations of the faulty dies on the wafers. The objective is then to take advantage of this map to define an assignment of the wafers to each other so as to match as many failures as possible. This problem can be modeled as a Multidimensional Assignment problem. Each wafer can be seen as a vector with as many dimensions as the number of dies engraved on it. A coordinate set to zero marks a faulty die, while a coordinate set to one indicates a viable one. Each set of wafers is represented by a set of vectors. Formally, an instance of a Wafer-to-Wafer integration problem is represented by m sets of n p-dimensional vectors. The objective is then to partition the vectors into n disjoint m-tuples, each tuple containing exactly one vector per set. An m-tuple represents a stack of wafers. Every m-tuple can be represented by a p-dimensional vector, each coordinate of which is computed by performing the bitwise AND of the corresponding coordinates of the vectors that compose the m-tuple.
In other words, a coordinate of the representative vector is equal to one if and only if this coordinate is equal to one in every vector composing the tuple. It follows that a die stack is viable if and only if all the dies composing the stack are viable. The objective is then to maximize the overall number of ones, or to minimize the overall number of zeros. The first part of the thesis is theoretical. We study the complexity of the considered versions of the problem with regard to natural parameters such as m, n, p or the number of zeros per vector. We show that these problems can encode more classical problems such as Maximum Clique, Minimum Vertex Cover or k-Dimensional Matching. This leads to several negative results from the computational complexity, approximability and parameterized complexity points of view. We also provide several positive results for some specific cases of the problem. In a second part, we focus on the practical solving of the problem. We provide and compare several Integer Linear Programming formulations. We also study the performance of some of the approximation algorithms detailed in the theoretical part.
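The representative-vector computation described in this abstract can be sketched in a few lines (a minimal illustration only; the function names are ours, not the thesis's):

```python
# Each wafer is a 0/1 vector: 1 = viable die, 0 = faulty die.
# A stack's representative vector is the elementwise AND of its wafers:
# a die position is viable only if it is viable on every wafer.

def stack_profile(wafers):
    """Elementwise AND of equal-length 0/1 vectors."""
    profile = wafers[0][:]
    for w in wafers[1:]:
        profile = [a & b for a, b in zip(profile, w)]
    return profile

def viable_dies(wafers):
    """Number of ones in the stack's representative vector."""
    return sum(stack_profile(wafers))

# Stacking a wafer whose second die is faulty on a wafer whose
# third die is faulty wastes both viable counterparts:
stack = [[1, 0, 1, 1], [1, 1, 0, 1]]
print(stack_profile(stack))  # [1, 0, 0, 1]
print(viable_dies(stack))    # 2
```

The assignment problem itself is then to group the wafers into stacks so that the total number of ones over all representative vectors is maximized.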
Ould, Mohamed Lemine Mohamed. "Connaissance inter-entreprises et optimisation combinatoire." Thesis, Paris 9, 2014. http://www.theses.fr/2014PA090015/document.
Inter-company knowledge allows every partner to learn about its customers and suppliers and to develop its activity. It also makes it possible to limit the risk related to the creditworthiness or late payment of its partners. With cash flow pressures, the need for growth and increased competition, this area becomes more strategic than ever, for both small companies (PME) and large groups. The amount of data processed in this domain, the requirements of quality and freshness, and the need to cross these data to obtain new information and indicators yield several optimization problems for which recent techniques and computational tools can bring effective solutions. In this thesis, we use combinatorial optimization, text algorithms as well as graph theory to efficiently solve problems arising in the field of inter-company knowledge. In particular, such problems were encountered at Altares D&B. First, we focus on the quality of the managers database. This problem combines the detection and removal of duplicates in a database, as well as error detection in a string. We propose a method for solving this problem, based on data normalization, text algorithms and syntactic comparison between two strings. Our experimental results show that this method is relevant for the detection and removal of duplicates, and it is also very efficient in terms of processing time. In the second part of the thesis, we address a problem related to ownership-link data. We compute the indirect links and identify the group heads. We propose a method for solving this problem using graph theory and combinatorial optimization. We then perform a set of experiments on several real-world instances. The computational results show the effectiveness of our method in terms of CPU time and resource allocation. In fact, the CPU time for computation increases logarithmically with the size of the instances. Finally, we consider the problem of identifying influence networks.
We give a description of this problem in terms of graphs, and show that it can be reduced to a graph partitioning problem, which is NP-hard. We then propose an integer linear programming formulation to model the problem. We investigate the associated polyhedron and describe several classes of valid inequalities. We give some necessary and sufficient conditions for these inequalities to define facets of the considered polyhedron, and we discuss the related separation problems. Based on the obtained polyhedral results, we devise a Branch-and-Cut algorithm to solve the problem. Some numerical results are presented to show the efficiency of our algorithm.
Davot, Tom. "A la recherche de l’échafaudage parfait : efficace, de qualité et garanti." Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTS030.
Sequencing is a process in biology that determines the order of nucleotides in DNA. It produces a set of fragments, called reads, in which the genetic information is known. Unfortunately, the genomic sequence is decomposed into small pieces, and in order to analyse it, it is necessary to reconstruct it using a number of computational processes. In this thesis, we study two mathematical problems arising from this sequencing: scaffolding and linearization. Scaffolding is a process that takes place after the assembly of the reads into larger subsequences called contigs. It consists in the search for paths and cycles in a particular graph called the scaffold graph. These paths and cycles represent the linear and circular chromosomes of the organism whose DNA has been sequenced. Linearization is a problem related to scaffolding: when we take into account that contigs may appear several times in the genomic sequence, some ambiguities can arise. If these ambiguities are not removed, then a chimeric sequence may be produced by the scaffolding. To solve this problem, a solution computed by the scaffolding should be carefully degraded. In any case, both problems can be modeled as optimization problems in a graph. In this document, we study both problems, focusing on three aspects. The first is the study of the complexity of these problems. The second is the development of algorithms, exact or approximate, to solve them. Finally, the last aspect consists in implementing and testing these algorithms to examine their behavior on real instances.
Le, Gonidec Marion. "Sur la complexité des mots q∞-automatiques." Aix-Marseille 2, 2006. http://theses.univ-amu.fr.lama.univ-amu.fr/2006AIX22080.pdf.
In this thesis, we study a class of infinite words over a finite alphabet, generated by q-automata with a countable state set, for an integer q ≥ 2. These words, called q∞-automatic words, are images under certain "admissible" morphisms of fixed points of substitutions of constant length q over countable alphabets. In this sense, these words can be viewed as a generalisation of automatic words. We show that the complexity function p of the (2d)∞-automatic word generated by the automaton related to the regular random walk on Z^d satisfies p(n) = O(n (log_{2d} n)^{d+1}). Moreover, in the one-dimensional case, the function n (log_2 n)^2 is asymptotically equivalent to the complexity function. This result can be generalized to q∞-automatic words generated by automata related to more general random walks on Z^d by the relation p(n) = O(n (log_q n)^{q+1}). In the more general case of q∞-automatic words generated by bounded-degree automata, we prove that the complexity function is at most polynomial. We show how this result can be improved on examples, to obtain a finer bound on the order of growth of the complexity function. Moreover, on one example related to the Dyck language over two types of parentheses, we obtain the exact order of growth of the complexity function. We also give a polynomial upper bound on the complexity function for q∞-automatic words generated by monotone automata.
Jost, Vincent. "Ordonnancement chromatique : polyèdres, complexité et classification." Université Joseph Fourier (Grenoble), 2006. http://www.theses.fr/2006GRE10158.
Some problems in operations research can be modelled as graph colouring problems. Often, however, the classical problem of minimum colouring is not able to express some additional constraints that naturally appear in applied contexts. On the other hand, some of these constraints are well known in scheduling theory. The scope of "chromatic scheduling" is to deal with these hybrid models. This includes problems such as "bounded colouring", "precolouring extension" or "max colouring". We provide new lower bounds for the optimum value of these minimization problems. The main results state that these lower bounds yield min-max formulas in subclasses of perfect graphs. In particular, we obtain min-max formulas and efficient algorithms for complements of interval graphs (which are fundamental in transport and production problems) and line graphs of bipartite graphs (fundamental in timetabling problems). These chromatic scheduling problems are often NP-hard even for bipartite graphs (and therefore for perfect graphs). Depending on the problem and the graph class to which we restrict, we tried to draw the frontiers between polynomial cases, NP-hard cases and those cases for which the lower bound is tight. This allows us to summarize the main known results and to design some more general lower bounds as well as approximation algorithms with good performance ratios.
Madelaine, Florent. "Problèmes de satisfaction de contraintes : une étude logique et combinatoire." Caen, 2003. http://www.theses.fr/2003CAEN2005.
Full textLe, Bodic Pierre. "Variantes non standards de problèmes d'optimisation combinatoire." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112190.
This thesis is composed of two parts, each belonging to a sub-domain of combinatorial optimization a priori distant from the other. The first research subject is stochastic bilevel programming. This term brings together two research subjects rarely studied together, namely stochastic programming on the one hand, and bilevel programming on the other. Mathematical Programming (MP) is a set of modelling and resolution methods that can be used to tackle practical problems and help take decisions. Stochastic programming and bilevel programming are two sub-domains of MP, each of them being able to model a specific aspect of these practical problems. Starting from a practical problem, we design a mathematical model where the bilevel and stochastic aspects are used together, then apply a series of transformations to this model. A resolution method is proposed for the resulting MP. We then theoretically prove and numerically verify that this method converges. This algorithm can be used to solve other bilevel programs than the ones we study. The second research subject in this thesis is called "partial cut and cover problems in graphs". Cut and cover problems are among the most studied from the complexity and algorithmic points of view. We consider some of these problems in a partial variant, which means that the cut or cover property that is looked into must be verified partially, according to a given parameter, and not completely, as was the case with the original problems. More precisely, the problems that we study are the partial multicut, the partial multiterminal cut, and the partial dominating set. Versions of these problems where vertices are
Afif, Mohammed. "Approches algorithmiques pour la résolution de certains problèmes NP-Complets." Paris 9, 1999. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1999PA090026.
Tourniaire, Emeric. "Problèmes NP-difficiles : approximation modérément exponentielle et complexité paramétrique." PhD thesis, Université Paris Dauphine - Paris IX, 2013. http://tel.archives-ouvertes.fr/tel-00874599.
Kovalev, Sergey. "Problèmes combinatoires en configuration des lignes de fabrication : analyse de complexité et optimisation." PhD thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2012. http://tel.archives-ouvertes.fr/tel-00849179.
Labbé, Sébastien. "Structure des pavages, droites discrètes 3D et combinatoire des mots." Thèse, Paris 7, 2012. http://www.archipel.uqam.ca/4940/1/D2363.pdf.
Chapdelaine, Philippe. "Contribution à la théorie de la complexité algorithmique : problèmes de contraintes, complétude et résultats de classification, complexité structurelle." Caen, 2004. http://www.theses.fr/2004CAEN2042.
Lavallée, Ivan. "Contribution à l'algorithmique parallèle et distribuée : application à l'optimisation combinatoire." Paris 11, 1986. http://www.theses.fr/1986PA112275.
Full textNakache, Elie. "Chemin optimal, conception et amélioration de réseaux sous contrainte de distance." Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM4023/document.
In this thesis, we investigate several combinatorial optimization problems and characterize their computational complexity and approximability by providing polynomial reductions and exact or approximation algorithms. In particular, we study the problem of finding, in a vertex-labeled directed acyclic graph, a path collecting a maximum number of distinct labels. We prove that no polynomial-time constant-factor approximation algorithm exists for this problem. Furthermore, we describe a scheme that produces, for any ε > 0, a polynomial-time algorithm that computes a solution collecting O(OPT^{1-ε}) labels. Then, we study several variants of the minimum cost spanning tree problem that take into account distance and betweenness constraints. We prove that most of these problems can be solved in polynomial time using a reduction to the weighted matroid intersection problem. For another problem, we give a factor-2 approximation algorithm and prove the optimality of this ratio. Finally, we study a network improvement problem from a cost sharing perspective. We establish that the cost function corresponding to this problem is submodular and use this result to derive a cost sharing mechanism having several good properties.
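To make the label-collecting objective in this abstract concrete, here is a hypothetical brute-force sketch (exponential in general, which is consistent with the inapproximability result; the names and the tiny instance are ours, purely illustrative):

```python
# In a vertex-labeled DAG, find a path collecting a maximum
# number of distinct labels, by exhaustive DFS from every vertex.

def max_distinct_labels(dag, labels):
    best = 0

    def dfs(v, seen):
        nonlocal best
        seen = seen | {labels[v]}   # collect this vertex's label
        best = max(best, len(seen))
        for w in dag.get(v, []):    # extend the path along each arc
            dfs(w, seen)

    for v in labels:                # try every possible start vertex
        dfs(v, set())
    return best

dag = {1: [2, 3], 2: [4], 3: [4]}
labels = {1: "a", 2: "b", 3: "a", 4: "c"}
print(max_distinct_labels(dag, labels))  # 3 (e.g. the path 1-2-4)
```

Note that the path 1-3-4 collects only 2 distinct labels ("a", "c"), which is what makes the objective non-trivial.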
Rouat, Valérie. "Validité de l'approche classification dans la réduction statistique de la complexité de #SAT." Rennes 1, 1999. http://www.theses.fr/1999REN10021.
Blind, Sarah. "Output-sensitive algorithms for enumeration problems in graphs." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0323.
This thesis is a study, from an algorithmic point of view, of the complexity of solving some enumeration problems in graphs. Algorithms for enumeration problems are meant to produce all solutions of a combinatorial problem, without repetition. Because there is a potentially exponential number of solutions to be generated, different approaches to analysing the performance of such algorithms have been developed: the input-sensitive approach and the output-sensitive approach. The first is a measure of complexity based on the size of the input, and consists in analysing the total time needed to enumerate the objects. The second is a measure based on the size of the input and the output. Here we are interested in the output-sensitive approach, and we pay special attention to the notion of delay, i.e., the time needed between the production of two consecutive solutions. The thesis is divided into two independent parts. In the first one we propose a general framework for the study of the enumeration of vertex set properties in graphs. We prove that when such a property is locally definable with respect to some order on the set of vertices, then it can be enumerated with linear delay. Our method is a reduction of the considered enumeration problem to the enumeration of paths in directed acyclic graphs. We apply this general method to enumerate with linear delay minimal connected dominating sets and maximal irredundant sets in interval graphs and in permutation graphs, as well as maximal irredundant sets in circular-arc graphs and in circular-permutation graphs. The second part of the thesis is dedicated to the study of k-arc-connected orientations. These are orientations for which at least k arcs have to be removed in order to destroy strong connectivity. We first give a simple algorithm to enumerate the k-arc-connected orientations respecting some fixed outdegree sequence. We then give a simple algorithm to enumerate the outdegree sequences attained by k-arc-connected orientations. Combining both yields an algorithm that enumerates all k-arc-connected orientations of a graph with delay O(knm²) and amortized time O(m²).
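The reduction to path enumeration in DAGs mentioned in this abstract rests on the fact that the paths of a DAG can themselves be enumerated with small delay. A minimal DFS sketch (illustrative only; achieving a guaranteed linear delay additionally requires pruning vertices that cannot reach a sink, which this toy version omits):

```python
def enumerate_paths(dag, source, sinks):
    """Yield every path from source to a sink in a DAG by
    depth-first search with backtracking. Each path is emitted
    as soon as a sink is reached, so on a pruned DAG the delay
    between consecutive outputs is linear in the path length."""
    path = [source]

    def dfs(v):
        if v in sinks:
            yield list(path)        # emit a copy of the current path
        for w in dag.get(v, []):
            path.append(w)
            yield from dfs(w)
            path.pop()              # backtrack

    yield from dfs(source)

dag = {"s": ["a", "b"], "a": ["t"], "b": ["a", "t"]}
for p in enumerate_paths(dag, "s", {"t"}):
    print("->".join(p))
# s->a->t
# s->b->a->t
# s->b->t
```

Under the framework of the thesis, each path of the auxiliary DAG encodes one vertex set satisfying the locally definable property, so this generator pattern directly yields the enumerated solutions.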
Watel, Dimitri. "Approximation de l'arborescence de Steiner." Thesis, Versailles-St Quentin en Yvelines, 2014. http://www.theses.fr/2014VERS0025/document.
The directed Steiner tree problem (DST) asks, given a directed weighted graph, a node r called the root and a set of nodes X called terminals, for a minimum-cost directed tree rooted at r spanning X. DST is an NP-complete problem. We are interested in the search for polynomial approximations for DST. Unless P = NP, DST can be approximated neither within a constant ratio nor within a logarithmic ratio with respect to k, where k is the number of terminals. The smallest known approximation ratio is O(k^ε), where ε is a positive real. In the first part, we provide three new approximation algorithms: a practical greedy algorithm merging two of the main approximation techniques for DST; an algorithm for the case where the graph is layered and where the distance between the terminals and the root is high; and an exponential approximation algorithm combining an approximation algorithm and an exact algorithm, parameterized by the number of terminals the exact algorithm must cover. In the last part we study DST with two branching constraints. With this technique, we are able to reduce the number of feasible solutions, and possibly facilitate the search for an optimal solution of the constrained problem. We study how such a solution can be built in polynomial time and whether it is a good approximation of an optimal solution of the unconstrained problem.
Grebinski, Vladimir. "Recherche combinatoire : problèmes de pesage, reconstruction de graphes et applications." Nancy 1, 1998. http://www.theses.fr/1998NAN10248.
Genitrini, Antoine. "Expressions booléennes aléatoires : probabilité, complexité et comparaison quantitative de logiques propositionnelles." Versailles-St Quentin en Yvelines, 2009. http://www.theses.fr/2009VERS0010.
In this thesis, I am interested in propositional systems from a probability/complexity point of view. I begin with two probability distributions on Boolean functions, induced by the Boolean expressions built with the implication connective. I obtain the structure of most of the expressions representing a given function, when the number of variables tends to infinity. This gives the asymptotic equivalent of the probability of the function, depending on its complexity. Via the function True, we compare quantitatively the intuitionistic and classical logics of implication. This comparison highlights some properties of a class of expressions that are also found in the full propositional system, allowing us to compare the two logics in that system. Finally, we study balanced expressions in the two systems built on implication, or on the two connectives And and Or. In both cases, we exhibit the probability distribution of the functions.
Gire, Sophie. "Arbres, permutations à motifs exclus et cartes planaires : quelques problèmes algorithmiques et combinatoires." Bordeaux 1, 1993. http://www.theses.fr/1993BOR10534.
Cabon, Bertrand. "Problèmes d'optimisation combinatoire : évaluation de méthodes de la physique statistique." Toulouse, ENSAE, 1996. http://www.theses.fr/1996ESAE0024.
Bedaride, Nicolas. "Etude du billard polyédral." PhD thesis, Université de la Méditerranée - Aix-Marseille II, 2005. http://tel.archives-ouvertes.fr/tel-00009363.
Milioris, Dimitrios. "Trend Detection and Information Propagation in Dynamic Social Networks." Palaiseau, Ecole polytechnique, 2015. https://tel.archives-ouvertes.fr/tel-01152275/document.
During the last decade, the information within dynamic social networks has increased dramatically. The ability to study the interaction and communication between users in these networks can provide valuable real-time prediction of the evolution of the information. The study of social networks raises several research challenges: (a) real-time search has to balance quality, authority, relevance and timeliness of the content; (b) studying the correlation between groups of users can reveal the influential ones, and predict media consumption, network and traffic resources; (c) spam and advertisements must be detected, since with the growth of social networks we also have a continuously growing amount of irrelevant information over the network. By extracting the relevant information from online social networks in real time, we can address these challenges. In this thesis, a novel method to perform topic detection, classification and trend sensing in short texts is introduced. Instead of relying on words, as most other existing methods do through bag-of-words or n-gram techniques, we introduce Joint Complexity, defined as the cardinality of the set of all distinct common factors (substrings of characters) of two given strings. Each short sequence of text is decomposed in linear time into a memory-efficient structure called a suffix tree, and by overlapping two trees, in linear or sublinear average time, we obtain the cardinality of the factors that are common to both trees. The method has been extensively tested for Markov sources of any order over a finite alphabet and gave good approximations for text generation and language discrimination. The proposed method is language-agnostic, since we can detect similarities between two texts in any loosely character-based language. It does not use semantics and is not based on a specific grammar, so there is no need to build any specific dictionary or stemming technique.
The proposed method can be used to capture a change of topic within a conversation, as well as the style of a specific writer in a text. In the second part of the thesis, we take advantage of the nature of the data, which motivated us in a natural fashion to use the theory of Compressive Sensing, drawn from the problem of target localization. Compressive Sensing states that signals which are sparse or compressible in a suitable transform basis can be recovered from a highly reduced number of incoherent random projections, in contrast to the traditional methods dominated by the well-established Nyquist-Shannon sampling theory. Based on the spatial nature of the data, we apply the theory of Compressive Sensing to perform topic classification by recovering an indicator vector, while significantly reducing the amount of information from tweets. The method works in conjunction with a Kalman filter, which updates the states of a dynamical system as a refinement step. In this thesis we exploit datasets collected by using the Twitter streaming API, gathering tweets in various languages, and we obtain very promising results when comparing to state-of-the-art methods.
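As a rough illustration of the Joint Complexity measure described in this abstract, here is a naive quadratic-space sketch. The thesis computes the same quantity in linear or sublinear average time via suffix trees; this toy version only makes the definition concrete and is not the thesis's implementation:

```python
def distinct_factors(s):
    """All distinct non-empty substrings (factors) of s."""
    return {s[i:j] for i in range(len(s)) for j in range(i + 1, len(s) + 1)}

def joint_complexity(s, t):
    """Cardinality of the set of distinct factors common to s and t."""
    return len(distinct_factors(s) & distinct_factors(t))

# "abab" has factors {a, b, ab, ba, aba, bab, abab};
# "ba" has factors {b, a, ba}; the common ones are {a, b, ba}.
print(joint_complexity("abab", "ba"))  # 3
```

Texts on the same topic tend to share many factors, so a higher score suggests topical similarity, regardless of the language the texts are written in.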
Dandrieux, Jean-Pierre. "Contributions à l'analyse de la complexité de problèmes de résolution de contraintes." Lille 1, 2000. https://pepite-depot.univ-lille.fr/RESTREINT/Th_Num/2000/50376-2000-210.pdf.
Blin, Guillaume. "Combinatoire et Bio-informatique : Comparaison de structures d'ARN et calcul de distances intergénomiques." Phd thesis, Nantes, 2005. http://www.theses.fr/2005NANT2067.
We present a set of results concerning two types of biological problems: (1) RNA structure comparison and (2) intergenomic distance computation for non-trivial genomes. In this thesis, we determine the algorithmic complexity of a set of problems linked either to RNA structure comparison (edit distance, APS problem, 2-interval pattern extraction, RNA design) or to genomic rearrangements (breakpoint and conserved-interval distances). For each studied problem, we try to find an exact and fast algorithm solving it. If we do not find such an algorithm, we try to prove that it is impossible to find one, by proving that the corresponding problem is hard. Finally, we continue the study of each hard problem by proposing three types of results: (1) approximation, (2) parameterized complexity, (3) heuristics. We use in this thesis notions from combinatorics, mathematics, graph theory and algorithmics.
Wojcik, Caius. "Factorisations des mots de basse complexité." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1340.
We present in this PhD thesis in Mathematics the work carried out during three years at Université Claude Bernard Lyon 1, under the supervision of Boris Adamczewski and Luca Zamboni, both researchers at the Lyon 1 University. The general topic is combinatorics on words, in the form of two contributions: one on the limits of Ramsey theory in this context, developed in the first part, and the second on the links between the Ostrowski numeration system and factorisations of Sturmian words. Combinatorics on words is a domain at the intersection of mathematics and, more generally, of the sciences. With the emergence of theoretical computer science and progress in genetics, the study of sequences of symbols has become an unavoidable research subject of growing importance. Words seen as sequences of symbols are indeed bound to deep and subtle mathematical laws. The historical example, discovered by Axel Thue, of an infinite square-free word over a 3-letter alphabet was the start of this theory, with a non-trivial construction of a specific word subject to a seemingly very simple condition. Whether in the structure of the decimals of real numbers, in the code lines everywhere in computers or inside our own genetic information, combinatorics on words gives a common theoretical set of tools for a deep study of a number of present scientific issues. The present thesis fits naturally within this scientific process. Directly inspired by the fundamental work of Axel Thue, we study in the first part the conditions for the existence of combinatorial objects, colorings, subject to a monochromatic constraint, and provide an optimal answer to a conjecture that had remained open for 10 years. This solution exploits the differences and links between the notions of prefixes and suffixes in combinatorics on words.
In the second part, we study an infinite version of the Ostrowski numeration system, using the low-complexity class of words formed by the Sturmian words. Built in a way similar to the p-adic numbers, the formalism of formal intercepts, introduced and developed with the purpose of describing the class of Sturmian words combinatorially, has several consequences with regard to their factorisations. The complement calculus, presented at the end, shows how the comparison of the prefix and suffix operations can be used to derive non-trivial results on the factorisations of low-complexity words.
Chopin, Morgan. "Problèmes d'optimisation avec propagation dans les graphes : complexité paramétrée et approximation." Phd thesis, Université Paris Dauphine - Paris IX, 2013. http://tel.archives-ouvertes.fr/tel-00933769.
Schmidt, Johannes. "Classification en complexité de problèmes de raisonnement non-monotone et d' énumération." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4055/document.
In this thesis we consider the computational complexity of problems from two central formalisms of nonmonotonic reasoning: abduction and argumentation. The first one is designed to formalize the process of finding explanations for some observed manifestation; the second (and more recent) one gives a theoretical framework to formalize the process of argumentation. We focus on the explanation-existence problem for abduction and on the argument-existence problem for argumentation. Considered in full propositional logic, these problems are believed to be computationally costly tasks (they are often situated at the second level of the polynomial hierarchy). With the purpose of understanding the sources of hardness and of identifying tractable fragments of propositional logic, we consider several abduction and argumentation problems in two well-established settings allowing for complexity classifications. In the first one, Post's framework, restrictions are made on the connectives allowed in the formulae used, whereas in the second one, Schaefer's framework, one considers formulae in conjunctive normal form, where the clauses are generalized to applications of arbitrary Boolean relations to variables, and one restricts the allowed types of relations. We discuss differences and common features between the explanation-existence and the argument-existence problems as a function of the two chosen frameworks. Finally, we consider enumeration. In particular we consider the problem of enumerating all solutions (models) of a propositional formula by non-decreasing weight in Schaefer's framework (the weight of a model being the number of variables assigned to true).
Bentz, Cédric. "Résolution exacte et approchée de problèmes de multiflot entier et de multicoupe : algorithmes et complexité." Paris, CNAM, 2006. http://www.theses.fr/2006CNAM0542.
Full text
In this thesis, we consider integral multiflow and multicut problems, which generalize the classical max flow and min cut problems. Two aspects of these problems are studied in particular: polynomial-time resolution and polynomial approximation. Concerning the first aspect, our main contributions focus on the following points: disjoint paths in graphs of bounded cyclomatic number; multicuts in directed acyclic graphs, in undirected graphs of bounded tree-width and in planar graphs; integral multiflows in rings; multicuts and integral multiflows in several special types of grids. Concerning the second aspect, our main contributions focus on the following points: disjoint paths in k-edge-outerplanar graphs; integral multiflows in graphs of bounded cyclomatic number; multicuts in unweighted digraphs of bounded maximum degree and bounded tree-width; integral multiterminal flows in digraphs. We also describe a new heuristic to find a maximum integral multiflow in an undirected graph, and test it on randomly generated graphs
Rottner, Cécile. "Aspects combinatoires du Unit Commitment Problem." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS272.
Full text
The Min-up/min-down Unit Commitment Problem (MUCP) is to find a minimum-cost production plan on a discrete time horizon for a set of units producing electric power. At each time period, the total production has to meet a forecast demand. Each unit must satisfy minimum up and down time constraints. We show that the MUCP is strongly NP-hard, thus highlighting that the dynamic coupling of demands by minimum up and down time constraints is one major source of difficulty. To cope with this difficulty, we introduce interval up-set inequalities, a new class of valid inequalities for the MUCP polytope, generalizing both min-up and extended cover inequalities from the 0-1 knapsack polytope. Facet-defining cases are characterized and a Branch & Cut algorithm is devised. To deal with the symmetries impairing the solution process, we define sub-symmetries, as symmetries arising from a solution subset. We focus on integer linear programs whose (sub-)symmetry groups are symmetric groups acting on sub-columns of solution matrices. We propose a general framework to handle sub-symmetries in such problems. On this basis, two symmetry-breaking techniques are introduced. The first technique is an orbitopal fixing algorithm for the full (sub-)orbitope. The second technique is based on sub-symmetry-breaking inequalities. Experimental results on MUCP instances show that the proposed techniques outperform state-of-the-art techniques. Finally, we compare various Dantzig-Wolfe structures for the MUCP. We show that good-quality lower bounds can be obtained by dualization of time-coupling constraints. Branch & Price results show that interval up-set inequalities are useful in this context
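The min-up/min-down constraints at the heart of the MUCP can be illustrated by a small feasibility check on a single unit's 0/1 schedule (a hedged sketch, not the thesis's Branch & Cut machinery; boundary runs are deliberately left unconstrained here):

```python
def respects_min_up_down(schedule, min_up, min_down):
    """Check that a 0/1 on/off schedule for one unit satisfies minimum
    up-time and down-time constraints: once switched on (off), the unit
    must stay on (off) for at least `min_up` (`min_down`) periods.
    Runs touching the horizon boundary are not constrained in this sketch."""
    # compress the schedule into maximal runs of equal state
    runs = []
    i = 0
    while i < len(schedule):
        j = i
        while j < len(schedule) and schedule[j] == schedule[i]:
            j += 1
        runs.append((schedule[i], j - i))
        i = j
    # only interior runs (preceded and followed by a switch) are checked
    for state, length in runs[1:-1]:
        if state == 1 and length < min_up:
            return False
        if state == 0 and length < min_down:
            return False
    return True
```

For instance, the schedule `[0, 1, 1, 1, 0, 0, 1]` is feasible for min-up 3 and min-down 2, but not for min-up 4.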
Serrière, Fabrice. "Approximation polynomiale du recouvrement d'ensemble." Paris 9, 2005. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2005PA090018.
Full text
This thesis deals with the polynomial approximation of the NP-hard set covering problem. In the first part, we situate the covering problem within the field of complexity before reporting the most significant results on its classical approximation and inapproximability. In the second part, we work on the differential approximation of the problem in the weighted and unweighted cases, first establishing the limits of the problem and then proving lower and upper bounds on the differential ratio of certain algorithms, including the well-known greedy one. In the third part we study probabilistic versions of unweighted set cover and prove that, for the strategy we propose, the unweighted problem is equivalent to the weighted one. In the fourth and last part, we study the approximation of covering from an experimental angle, testing the algorithms studied throughout the document as well as new algorithms on instances from the literature and on instances we generated. The first appendix is dedicated to well-known analyses of algorithms for the set covering problem. The second proposes a general method of analysis adaptable to many combinatorial problems. The third gives the experimental results studied in the fourth part, and the last appendix discusses a question on enumeration under constraints
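The greedy algorithm analysed in this thesis admits a very short sketch (a standard textbook version, not the thesis's differential-approximation analysis; names are ours):

```python
def greedy_set_cover(universe, subsets):
    """Classical greedy heuristic for unweighted Set Cover: repeatedly pick
    the subset covering the most still-uncovered elements.  On the classical
    measure it guarantees an H_n (~ ln n) approximation ratio."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # index of the subset with the largest marginal coverage
        best = max(range(len(subsets)),
                   key=lambda i: len(subsets[i] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("the given subsets do not cover the universe")
        cover.append(best)
        uncovered -= subsets[best]
    return cover
```

On the universe {1,...,5} with subsets {1,2,3}, {2,4}, {3,4}, {4,5}, greedy picks subsets 0 and then 3, covering everything with two sets.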
Lonca, Emmanuel. "Optimisation booléenne multiobjectif : complexité sous contraintes compilées et résolution via SAT." Thesis, Artois, 2015. http://www.theses.fr/2015ARTO0405/document.
Full text
Decision aiding aims at helping a decision-maker to pick a solution among several others. The usefulness of such approaches grows with the size of the problems under consideration. The need for decision aiding techniques is salient when the problem is not just to decide whether a solution exists, but to find one of the best solutions according to a given criterion. In this case, the problem goes from a decision problem (decide whether a solution exists) to a single-criterion optimization problem (find one of the best solutions according to a criterion). A decision-maker may even want to consider multiple criteria, which turns the initial decision problem into a multicriteria optimization problem. The main issue arising in such cases lies in the fact that the criteria under consideration are often antagonistic, which implies that there is no solution that is best for every objective. In this case, a good compromise solution is sought. In this thesis, we first focus on the complexity of optimization queries over a set of propositional languages. We then study the practical aspects of the resolution of such problems, using software designed for combinatorial decision problems, namely SAT solvers
Manouvrier, Jean-François. "Méthode de décomposition pour résoudre des problèmes combinatoires sur les graphes." Compiègne, 1998. http://www.theses.fr/1998COMP1152.
Full text
Voge, Marie-Emilie. "Optimisation des réseaux de télécommunications : Réseaux multiniveaux, Tolérance aux pannes et Surveillance du trafic." Phd thesis, Université de Nice Sophia-Antipolis, 2006. http://tel.archives-ouvertes.fr/tel-00171565.
Full text
We are interested in both core networks and access networks. In the first chapter, we briefly present access networks as well as IP/WDM multilayer networks and the MPLS architecture that we consider for core networks. These networks consist of a physical layer on which a virtual layer is routed; user requests are in turn routed over the virtual layer. We also address fault tolerance in multilayer networks, which motivates two of the problems we studied.
The second chapter is devoted to virtual network design. We first model a problem taking fault tolerance into account, then study one of its sub-problems, grooming. Our objective is to minimize the number of virtual links, or tubes, to be installed in order to route an arbitrary set of requests when the physical layer is a directed path.
The third chapter deals with the shared risk resource groups (SRRG) induced by the stacking of layers within a multilayer network. Using a colored-graph model, we study the connectivity and the fault vulnerability of these networks.
The fourth chapter addresses the problem of placing traffic measurement instruments in an operator's access network. We consider both passive and active measurements. Traffic monitoring has many applications, in particular fault detection and network performance evaluation.
Mora, Thierry. "Géométrie et inférence dans l'optimisation et en théorie de l'information." Paris 11, 2007. http://www.theses.fr/2007PA112162.
Full text
Perrault, Pierre. "Apprentissage efficient dans les problèmes de semi-bandits stochastiques combinatoires." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM023.
Full text
Combinatorial stochastic semi-bandits appear naturally in many contexts where the exploration/exploitation dilemma arises, such as web content optimization (recommendation/online advertising) or shortest path routing methods. This problem is formulated as follows: an agent sequentially optimizes an unknown and noisy objective function defined on the power set of [n] = {1, ..., n}. For each set A tried out, the agent suffers a loss equal to the expected deviation from the optimal solution, while obtaining observations that reduce its uncertainty on the coordinates of A. Our objective is to study the efficiency of policies for this problem, focusing in particular on the following two aspects: statistical efficiency, where the criterion considered is the regret suffered by the policy (the cumulative loss), which measures learning performance; and computational efficiency. It is sometimes difficult to combine these two aspects in a single policy. In this thesis, we propose different directions for improving statistical efficiency, while trying to maintain the computational efficiency of policies. In particular, we have improved optimistic methods by developing approximation algorithms and refining the confidence regions used. We also explored an alternative to optimistic methods, namely randomized methods, and found them to be serious candidates for combining the two types of efficiency
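The optimistic methods mentioned in this abstract can be sketched on a toy instance (a minimal CUCB-style policy assuming Bernoulli arms and an exact "top-m" oracle; the exploration constant and function name are our assumptions, not the thesis's refined confidence regions):

```python
import math
import random

def cucb_top_m(means, m, horizon, seed=0):
    """Minimal CUCB-style optimistic policy on a toy combinatorial
    semi-bandit: n Bernoulli arms with unknown means.  Each round, the m
    arms with the highest UCB index are played and each played arm's
    reward is observed (semi-bandit feedback).  The exact oracle here is
    simply "take the top m indices".  Returns the play history and the
    per-arm pull counts."""
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n
    sums = [0.0] * n
    history = []
    for t in range(1, horizon + 1):
        # optimistic index: empirical mean plus confidence bonus
        ucb = [float("inf") if counts[i] == 0
               else sums[i] / counts[i] + math.sqrt(1.5 * math.log(t) / counts[i])
               for i in range(n)]
        arms = sorted(range(n), key=lambda i: ucb[i], reverse=True)[:m]
        for i in arms:
            counts[i] += 1
            sums[i] += 1.0 if rng.random() < means[i] else 0.0
        history.append(arms)
    return history, counts
```

After a few hundred rounds the two best arms are pulled far more often than the bad ones, which is the qualitative behaviour the regret analysis formalises.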
Griffon, Aurelien. "Etude de la complexité des éléments Cis-régulateurs chez les mammifères en utilisant des approches à haut débit." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM4017.
Full text
Gene regulation is responsible for cell diversity by allowing cell differentiation and specialisation. Gene expression regulation relies mainly on the existence of non-coding DNA sequences in the genome, called "cis-regulatory elements", which recruit numerous transcription factors to form (nucleo)protein complexes that act on the level of gene transcription. This recruitment is controlled in particular by epigenetic modifications. The rapid development of sequencing technologies and bioinformatics methods makes it possible to integrate large amounts of data to study regulatory elements. First, the integration of ChIP-seq data for all transcription factors available in public databases allowed us to create an extensive catalogue of putative regulatory elements in the human genome. The overall analysis of this catalogue led us to further characterize these elements and to highlight the high combinatorial complexity of transcription factors in the genome. Secondly, we conducted a more specific study based on the analysis of the regulatory elements involved in the early differentiation of T-cells in mice. This study provided an opportunity to highlight two levels of complexity based on regulatory elements and involved in gene regulation: the first rests on the combinatorics of transcription factors within regulatory elements, and the second on the combinatorics of the elements themselves within loci. Finally, to experimentally validate the regulatory elements, we developed a new quantitative, high-throughput technique to assess the regulatory activity of genomic regions in mammals
Legay, Sylvain. "Quelques problèmes d'algorithmique et combinatoires en théorie des graphes." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS030/document.
Full text
This thesis is about graph theory. Formally, a graph is a set of vertices and a set of edges, which are pairs of vertices. This thesis deals with various decision problems linked to the notion of graph and, for each of these problems, tries to find its complexity class or to give an algorithm. The first chapter is about the problem of finding the smallest connected tropical subgraph of a vertex-colored graph, that is, the smallest connected subgraph containing every color. The second chapter is about tropical homomorphism problems, a generalization of the coloring problem. A link between these problems and several other classes of homomorphism problems can be found in this chapter, especially with the class of Constraint Satisfaction Problems. The third chapter is about two variants of the domination problem, namely the global alliance problems in a weighted graph and the safe set problem. The fourth chapter is about the problem of finding a star tree-decomposition, which is a tree-decomposition where each bag has radius 1. Finally, the fifth chapter is about a variant of the problem of deciding the asymptotic behavior of the iterated biclique graph
Fouks, Jean-Denis. "La résolution du problème de satisfiabilité." Mulhouse, 1991. http://www.theses.fr/1991MULH0175.
Full text
Garnero, Valentin. "(Méta)-noyaux constructifs et linéaires dans les graphes peu denses." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT328/document.
Full text
In the fields of Algorithmics and Complexity, a large area of research is based on the assumption that P ≠ NP (Polynomial time and Non-deterministic Polynomial time), which means that there are problems for which a solution can be verified, but not constructed, in polynomial time. Many natural problems are not in P, meaning that they have no efficient algorithm. In order to tackle such problems, many different branches of Algorithmics have been developed. One of them is called Parameterized Complexity. It consists in developing exact algorithms whose complexity is measured as a function of the size of the instance and of a parameter. Such a parameter allows a more precise analysis of the complexity. In this context, an algorithm is considered efficient if it is fixed-parameter tractable (fpt), that is, if it has a complexity which is exponential in the parameter and polynomial in the size of the instance. Problems that can be solved by such an algorithm form the FPT class.Kernelisation is a technique that produces fpt algorithms, among others. It can be viewed as a preprocessing of the instance, with a guarantee on the compression of the data. More formally, a kernelisation is a polynomial reduction from a problem to itself, with the additional constraint that the size of the kernel, the reduced instance, is bounded by a function of the parameter. In order to obtain an fpt algorithm, it is sufficient to solve the problem on the reduced instance, by brute force for example (which has complexity exponential in the parameter). Hence, the existence of a kernelisation implies the existence of an fpt algorithm. The converse holds as well. Nevertheless, the existence of an efficient fpt algorithm does not imply a small kernel, meaning a kernel of linear or polynomial size.
Under certain hypotheses, it can be proved that some problems cannot have a kernel (that is, are not in FPT) and that some problems in FPT do not have a polynomial kernel. One of the main results in the field of kernelisation is the construction of a linear kernel for the Dominating Set problem on planar graphs, by Alber, Fellows and Niedermeier. To begin with, the region decomposition method proposed by Alber, Fellows and Niedermeier has been reused many times to develop kernels for variants of Dominating Set on planar graphs. Nevertheless, this method had quite a few inaccuracies, which invalidated the proofs. In the first part of our thesis, we present a more thorough version of this method and illustrate it with two examples: Red Blue Dominating Set and Total Dominating Set. Next, the method was generalised to larger classes of graphs (bounded genus, minor-free, topological-minor-free) and to larger families of problems. These meta-results prove the existence of a linear or polynomial kernel for all problems verifying some generic conditions, on a class of sparse graphs. As the price of generality, the proofs do not provide constructive algorithms and the bound on the size of the kernel is not explicit. In the second part of our thesis, we make a first step toward constructive meta-results. We propose a framework to build linear kernels based on principles of dynamic programming and a meta-result of Bodlaender, Fomin, Lokshtanov, Penninkx, Saurabh and Thilikos
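This thesis targets linear kernels for Dominating Set on planar graphs; as a simpler, self-contained illustration of what a kernelisation is, here is Buss's classical quadratic kernel for Vertex Cover (not the region decomposition method discussed above; names are ours):

```python
def buss_kernel(edges, k):
    """Buss's kernelisation for Vertex Cover(k): a vertex of degree > k
    must belong to every vertex cover of size <= k, so it is forced into
    the cover and k is decreased.  A reduced yes-instance then has at most
    k^2 edges.  Returns (reduced_edges, remaining_k, forced_vertices),
    or None when the instance is recognised as a no-instance."""
    edges = {frozenset(e) for e in edges}
    forced = set()
    while True:
        if k < 0:
            return None
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = next((v for v, d in degree.items() if d > k), None)
        if high is None:
            break
        forced.add(high)
        edges = {e for e in edges if high not in e}
        k -= 1
    if len(edges) > k * k:
        return None  # more than k^2 edges left: no cover of size k exists
    return edges, k, forced
```

Solving the reduced instance by brute force then yields an fpt algorithm, exactly as the abstract describes.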
Tallot, Didier. "Quelques propositions pour la mise en oeuvre d'algorithmes combinatoires." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 1985. http://tel.archives-ouvertes.fr/tel-00806984.
Full text
Barrot, Nathanaël. "Sur les aspects computationnels du vote par approbation." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLED006/document.
Full text
The subject of this thesis is the study of computational aspects of approval voting. Most of the work consists of theoretical results about computational issues raised by approval voting in many different settings. However, I also study some questions that are more related to classical choice theory, and some problems are investigated through experimental analysis. Firstly, I study a general family of rules for approval voting in the context of committee elections and multiple referenda. Secondly, I focus on a more general setting, approval voting in combinatorial domains, based on conditional preferences. Finally, I consider approval voting in the context of incomplete preferences, to study the possible and necessary winner problems
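The committee elections mentioned in this abstract can be illustrated with the simplest multi-winner approval rule (a sketch only; the thesis studies a general family of rules, and the alphabetical tie-breaking here is our assumption):

```python
from collections import Counter

def approval_committee(ballots, k):
    """Committee election under plain multi-winner approval voting (AV):
    each voter approves a set of candidates, and the k candidates with
    the most approvals are elected.  Ties are broken by candidate name
    (an illustrative convention)."""
    scores = Counter()
    for ballot in ballots:
        scores.update(ballot)
    # sort alphabetically first, then stably by score descending
    return sorted(sorted(scores), key=scores.__getitem__, reverse=True)[:k]
```

With ballots {a,b}, {a,c}, {b,c}, {a}, candidate a has 3 approvals and b and c have 2 each, so the size-2 committee is [a, b].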
Yalaoui, Alice. "Allocation de fiabilité et de redondance dans les systèmes parallèle-série et série-parallèle." Troyes, 2004. http://www.theses.fr/2004TROY0009.
Full text
Integrating reliability at the system design stage fits the spirit of proximity and flexibility found in contemporary company management. Reliability optimization, of both the manufactured products and the production equipment, is of great importance in this context. We were particularly interested in reliability and redundancy allocation problems. When component reliabilities may take arbitrary values between zero and one, we developed, for reliability maximization under a cost constraint, an algorithm which converges towards the optimum for parallel-series systems. For parallel-series systems, we also proposed theoretical results and an approximate method for cost minimization under a reliability constraint. We were also interested in the case where component reliabilities must be chosen from a set of discrete values. This assumption takes into account the availability of components on the market. Pseudopolynomial algorithms for the reliability and redundancy allocation problems were developed for parallel-series and series-parallel structures. These algorithms are dedicated to cost minimization under a reliability constraint and to the dual problem of reliability maximization. We generalized them to more complex structures
Rivoire, Olivier. "Phases vitreuses, optimisation et grandes déviations." Phd thesis, Université Paris Sud - Paris XI, 2005. http://tel.archives-ouvertes.fr/tel-00009956.
Full text
Colares, Rafael. "Exploring Combinatorial Aspects of the Stop Number Problem." Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC050.
Full text
The Stop Number Problem arises in the management of a dial-a-ride system with small autonomous electric vehicles. In such a system, a fleet of identical capacitated vehicles travels along a predefined circuit with fixed stations in order to serve clients requesting a ride from an origin station to a destination station. Note that multiple clients may share the same origin and/or destination stations. The Stop Number Problem consists of assigning each client request to a vehicle such that no vehicle gets overloaded. The goal is to minimize the number of times the fleet of vehicles stops for picking up or delivering clients. When every client requests exactly one seat in a vehicle, the Stop Number Problem is called the Unit Stop Number Problem. In this thesis, the Unit Stop Number Problem is addressed as a combinatorial optimization problem. First, we investigate the complexity of this problem. On the one hand, we study some properties of optimal solutions and derive a series of particular cases that are shown to be solvable in polynomial time. On the other hand, we show that the Unit Stop Number Problem is NP-hard even when restricted to the case where each vehicle can take at most two clients at once and the graph induced by the client requests is planar bipartite. This result, which positively answers a conjecture known in the literature, is then extended to other related problems such as the k-Edge Partitioning and the Traffic Grooming problems, improving on their previously known complexity status. In a second part, we consider an integer programming formulation known in the literature for solving the Unit Stop Number Problem. A preliminary analysis is conducted in order to demonstrate the weakness of this formulation. Afterwards, the formulation is reinforced through a polyhedral approach. We provide a facial study of the polytope associated with the solutions of this problem.
New valid inequalities are introduced, and necessary and sufficient conditions under which they are facet-defining are given. Finally, based on the discussed polyhedral results, we derive a new efficient branch-and-cut algorithm. Performance-boosting features such as symmetry-breaking methods and variable elimination/relaxation are also investigated and implemented into the developed framework. Results convincingly demonstrate the strength of the reinforcing valid inequalities and therefore of our branch-and-cut algorithm
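The Unit Stop Number Problem described above can be stated concretely with a tiny brute-force sketch on a line-shaped circuit (illustration only; the thesis solves the problem by branch-and-cut, not enumeration, and the encoding below is our assumption):

```python
from itertools import product

def unit_stop_number(requests, n_vehicles, capacity):
    """Brute-force sketch of the Unit Stop Number Problem on a line: each
    request (origin, destination), origin < destination, needs one seat.
    Assign requests to vehicles so that no vehicle ever carries more than
    `capacity` clients, minimising the total number of (vehicle, station)
    stops.  Returns the optimum, or None if no assignment is feasible.
    Exponential in the number of requests."""
    best = None
    for assign in product(range(n_vehicles), repeat=len(requests)):
        total_stops = 0
        feasible = True
        for v in range(n_vehicles):
            reqs = [r for r, a in zip(requests, assign) if a == v]
            stations = {s for o, d in reqs for s in (o, d)}
            # load just after station s: picked up at <= s, not yet delivered
            if any(sum(1 for o, d in reqs if o <= s < d) > capacity
                   for s in stations):
                feasible = False
                break
            total_stops += len(stations)
        if feasible and (best is None or total_stops < best):
            best = total_stops
    return best
```

With requests (0,2), (0,2), (1,3), two vehicles of capacity 2 achieve 4 stops by grouping the two identical requests together; with capacity 1 and one vehicle, two identical requests are infeasible.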
Baccouche, Mohamed. "Métaheuristiques hybrides d'optimisation d'atterrissage d'avions multipistes." Le Havre, 2007. http://www.theses.fr/2007LEHA0008.
Full text
In this thesis we studied a complex problem mixing aircraft scheduling and runway assignment: the Aircraft Landing Problem. The complexity of this optimization problem results from its combinatorial aspects (problem size), from the strong interdependence between functions (assignment and scheduling), and from constraints and perturbations. Airport landing capacities constitute genuine bottlenecks, making it impossible to respond to an increase in air traffic. The largest airports are already operating at maximum capacity, and only further optimization of the landing sequences will make it possible to cope. We developed and tested different hybrid approaches based on metaheuristic techniques, such as evolutionary algorithms and tabu search, and on exact methods. We compared different approaches over a large choice of parameters, analyzed computational complexity and performance, and discussed the best values. We used the design-of-experiments method to optimize the choice of parameters of the developed algorithms