To see the other types of publications on this topic, follow the link: Max-cut problems on graphs.

Dissertations / Theses on the topic 'Max-cut problems on graphs'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 23 dissertations / theses for your research on the topic 'Max-cut problems on graphs.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Levine, Matthew S. (Matthew Steven). "Algorithms for connectivity problems in undirected graphs : maximum flow and minimum [kappa]-way cut." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/29234.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. In the title on the title page, "[kappa]" appears as a lower-case Greek letter. Includes bibliographical references (p. [69]-72).

We consider two connectivity problems on undirected graphs: maximum flow and minimum k-way cut. The maximum flow problem asks about the connectivity between two specified nodes. A traditional approach is to search for augmenting paths. We explore the possibility of restricting the set of edges in which we search for an augmenting path, so that we can find each flow path in sub-linear time. Consider an n-vertex, m-edge, undirected graph with maximum flow value v. We give two methods for finding augmenting paths in such a graph in amortized sub-linear time, based on the idea that undirected graphs have sparse subgraphs that capture connectivity information. The first method sparsifies unused edges using a spanning forest. It is deterministic and takes ... time per path on average. The second method sparsifies the entire residual graph by taking random samples of the edges. It takes O(n) time per path on average. These results let us improve the O(mv) time bound of the classic augmenting path algorithm to O(m + nv^(3/2)) (deterministic) and O(m + nv) (randomized). For simple graphs, the addition of a blocking flow subroutine yields a deterministic O(nm^(2/3)v^(1/6))-time algorithm.

A minimum k-way cut of an n-vertex, m-edge, weighted, undirected graph is a partition of the vertices into k sets that minimizes the total weight of edges with endpoints in different sets. We give new randomized algorithms to find minimum 3-way and 4-way cuts, which lead to time bounds of O(mn^(k-2) log^3 n) for k ≤ 6. Our key insight is that two different structural properties of k-way cuts, both exploited by previous algorithms, can be exploited simultaneously to avoid the bottleneck operations in both prior algorithms. The result is that we improve on the best previous time bounds by a factor of O(n^2).

Ph.D.
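The classic augmenting-path baseline that this thesis improves on can be sketched directly. The following is a minimal BFS-based augmenting-path max flow for undirected graphs (the O(mv)-style starting point, not Levine's sparsification method):

```python
from collections import deque, defaultdict

def max_flow_undirected(edges, s, t):
    """Classic augmenting-path max flow on an undirected graph.

    Each undirected edge {u, v} of capacity c is modeled as a pair of
    residual arcs (u, v) and (v, u), each of capacity c; augmenting
    along one arc frees capacity on the reverse arc.
    """
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[(u, v)] += c
        cap[(v, u)] += c
        adj[u].add(v)
        adj[v].add(u)

    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # recover the path, find its bottleneck, and augment
        path = []
        v = t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= delta
            cap[(v, u)] += delta
        flow += delta
```

Each iteration finds one augmenting path in O(m) time, which is exactly the per-path cost the two sparsification methods above reduce to sub-linear.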
APA, Harvard, Vancouver, ISO, and other styles
2

Fukuda, Ellen Hidemi. "Algoritmo do volume e otimização não diferenciável." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-04062007-115956/.

Full text
Abstract:
One way to solve large-scale linear programming problems is to exploit the Lagrangian relaxation of the "difficult" constraints and use subgradient methods. Such methods are popular because they quickly give good approximations of dual solutions; unfortunately, they do not directly yield primal solutions. Two alternatives for obtaining primal solutions at a reasonable computational cost are the construction of ergodic sequences and the recently developed volume algorithm. While the convergence of ergodic sequences is well understood, the convergence properties of the volume algorithm were not well established in the original paper, which led to modifications of the original method that ease the proof of dual convergence. Three such variants are the revised volume algorithm, a specific bundle method, and the volume algorithm incorporated into the variable target value method. The aim of this work is to study these algorithms and all related concepts, especially convex analysis and nondifferentiable optimization. We analyse the main theoretical differences among the methods and perform numerical experiments with linear and integer linear problems, in particular the maximum cut problem on graphs.
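The subgradient framework underlying the volume algorithm can be illustrated on a toy Lagrangian dual. The sketch below is a plain projected subgradient method on a hypothetical two-variable binary program; the volume algorithm itself additionally maintains a convex combination of primal iterates, which is not shown here.

```python
def lagrangian_dual_demo(steps=100):
    """Projected subgradient ascent on the Lagrangian dual of a toy
    binary program:  min x1 + 2*x2  s.t.  x1 + x2 >= 1,  x in {0,1}^2.
    Relaxing the constraint with a multiplier lam >= 0 gives
    L(lam) = lam + min(0, 1 - lam) + min(0, 2 - lam),
    and each L(lam) is a lower bound on the primal optimum.
    """
    lam = 0.0
    best = float('-inf')
    for k in range(1, steps + 1):
        # minimize the relaxed objective over x in {0,1}^2
        x1 = 1 if 1 - lam < 0 else 0
        x2 = 1 if 2 - lam < 0 else 0
        value = x1 + 2 * x2 + lam * (1 - x1 - x2)
        best = max(best, value)
        g = 1 - x1 - x2              # subgradient of L at lam
        lam = max(0.0, lam + g / k)  # diminishing step, projected onto lam >= 0
    return best
```

For this toy instance the dual optimum equals the primal optimum (1), and the iteration reaches it quickly; on real problems the primal iterates x do not converge, which is the gap the ergodic sequences and the volume algorithm address.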
APA, Harvard, Vancouver, ISO, and other styles
3

Zou, Mengchuan. "Aspects of efficiency in selected problems of computation on large graphs." Thesis, Université de Paris (2019-....), 2019. http://www.theses.fr/2019UNIP7132.

Full text
Abstract:
This thesis presents three works on different aspects of efficiency in algorithm design for large-scale graph computations.

In the first work, we consider a classical centralized computing setting and study the question of generalizing modular decomposition and designing a time-efficient algorithm for this problem. Modular decomposition, and more broadly module detection, are ways to reveal and analyze modular properties in structured data. As the classical modular decomposition is well studied and has an optimal linear-time algorithm, we first study generalizations of these concepts to hypergraphs, a little-studied topic that reveals new structure in set families, and present positive results obtained for three definitions of modular decomposition in hypergraphs from the literature. We also consider the generalization that allows errors in classical graph modules and present negative results for two such definitions.

The second work focuses on graph data query scenarios. Here the model differs from classical computing scenarios in that we do not design algorithms to solve the original problem directly; instead, we assume an oracle that provides partial information about the solution, where each oracle query consumes time or resources, which we model as costs. We need an algorithm that decides how to query the oracle efficiently to obtain the exact solution to the original problem, so efficiency here refers to the query cost. We study the generalized binary search problem, computing an efficient query strategy to find a hidden target in a graph, and present our results on approximating the optimal strategy of generalized binary search on weighted trees.

Our third work addresses memory efficiency. The setting in which we perform our computations is distributed and memory-restricted: every node stores its local data, exchanges data by message passing, and can perform local computations. This is similar to the LOCAL/CONGEST models in distributed computing, but our model additionally requires that every node store only a constant number of variables with respect to its degree. This model can also describe natural algorithms. We implement an existing multiplicative-reweighting procedure for approximating the maximum s-t flow problem in this model; this methodology may open new opportunities for the field of local or natural algorithms.

From a methodological point of view, the three types of efficiency correspond to the following scenarios: in the first, the most classical one, given a problem we try to design the most efficient algorithm by hand; in the second, efficiency is treated as an objective, modeling query costs as an objective function and using approximation-algorithm techniques to design an efficient strategy; in the third, efficiency is posed as a memory constraint, and we design algorithms under this constraint.
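The oracle-query model from the second work can be made concrete on unweighted trees: querying a vertex either identifies the target or reveals the neighbor on the path toward it, so querying a centroid halves the candidate set. This is a simple sketch of the problem setting (uniform query costs), not the thesis's approximation algorithm for weighted trees.

```python
def component(adj, allowed, root, removed):
    """Vertices of `allowed` reachable from `root` avoiding `removed`."""
    seen, stack = {root}, [root]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w != removed and w in allowed and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def centroid(adj, nodes):
    """Vertex of `nodes` whose removal minimizes the largest remaining component."""
    best, best_worst = None, float('inf')
    for v in nodes:
        worst = max((len(component(adj, nodes, u, removed=v))
                     for u in adj[v] if u in nodes), default=0)
        if worst < best_worst:
            best, best_worst = v, worst
    return best

def search_tree(adj, target):
    """Find `target` with queries that either hit it or point toward it.
    Querying a centroid at least halves the candidate set each round,
    so roughly log2(n) queries suffice. Returns (vertex, query count)."""
    candidates = set(adj)
    queries = 0
    while True:
        v = centroid(adj, candidates)
        queries += 1
        if v == target:
            return v, queries
        # simulated oracle: neighbor of v on the unique path to the target
        ans = next(u for u in adj[v]
                   if target in component(adj, set(adj), u, removed=v))
        candidates = component(adj, candidates, ans, removed=v)
```

On a 7-vertex path, any target is found in at most 3 queries, matching the ceil(log2 8) bound of ordinary binary search.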
APA, Harvard, Vancouver, ISO, and other styles
4

Florén, Mikael. "Approximation of Max-Cut on Graphs of Bounded Degree." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189143.

Full text
Abstract:
The Max-Cut problem is a well-known NP-hard problem, for which numerous approximation algorithms have been developed over the years. In this thesis, we examine the special case where the degrees of the vertices in the graph are bounded. With minor modifications to existing algorithms, we obtain an improved approximation ratio for general bounded-degree graphs. Furthermore, we show additional improvements for graphs in which at least a constant fraction of the vertices have odd degree. We also identify other possible areas for improvement in the general bounded-degree case.
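For context, the standard baseline that bounded-degree analyses refine is single-flip local search: at a local optimum every vertex has at least half of its incident edges crossing the cut, so the cut contains at least |E|/2 edges, a 1/2-approximation. A minimal sketch (not one of the algorithms modified in the thesis):

```python
def local_search_max_cut(n, edges):
    """Single-flip local search for unweighted Max-Cut on vertices 0..n-1.

    Repeatedly move a vertex to the other side while that strictly
    increases the cut. Returns (side assignment, cut size); the cut
    size is guaranteed to be at least len(edges) / 2.
    """
    side = [0] * n

    def gain(v):
        # change in cut size if v were flipped:
        # (incident edges not cut now) - (incident edges cut now)
        cut_now = sum(1 for u, w in edges
                      if v in (u, w) and side[u] != side[w])
        deg = sum(1 for u, w in edges if v in (u, w))
        return (deg - cut_now) - cut_now

    improved = True
    while improved:
        improved = False
        for v in range(n):
            if gain(v) > 0:
                side[v] ^= 1
                improved = True
    cut = sum(1 for u, w in edges if side[u] != side[w])
    return side, cut
```

On K4 (6 edges) it reaches the optimum cut of 4; in a graph of maximum degree d, the same local-optimum argument is the starting point for the sharper bounded-degree ratios the thesis studies.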
APA, Harvard, Vancouver, ISO, and other styles
5

Le, Bodic Pierre. "Variantes non standards de problèmes d'optimisation combinatoire." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112190.

Full text
Abstract:
This thesis is composed of two parts, each belonging to a sub-domain of combinatorial optimization a priori distant from the other.

The first research subject is stochastic bilevel programming. This term brings together two research subjects rarely studied jointly: stochastic programming on the one hand, and bilevel programming on the other. Mathematical programming (MP) is a set of modeling and resolution methods that can be used to tackle practical problems and help make decisions. Stochastic programming and bilevel programming are two sub-domains of MP, each able to model a specific aspect of these practical problems. Starting from a practical problem, we design a mathematical model in which the bilevel and stochastic aspects are used together, then apply a series of transformations to this model. A resolution method is proposed for the resulting MP. We then prove theoretically and verify numerically that this method converges. The algorithm can be used to solve other bilevel programs than the one we study.

The second research subject of this thesis is called "partial cut and cover problems in graphs". Cut and cover problems are among the most studied graph problems from the complexity and algorithmic points of view. We consider some of these problems in a partial variant, meaning that the cut or cover property in question must be satisfied only partially, according to a given parameter, rather than completely as in the original problems. More precisely, the problems studied are partial multicut, partial multiterminal cut, and partial dominating set. Vertex versions of these problems are also considered. Note that the partial variants generalize the non-partial problems. We give exact algorithms when possible, prove the NP-hardness of some variants, and provide approximation algorithms in fairly general cases.
APA, Harvard, Vancouver, ISO, and other styles
6

Hirschfeld, Bernd. "Approximative Lösungen des Max-cut-Problems mit semidefiniten Programmen." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=970150032.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Curado, Manuel. "Structural Similarity: Applications to Object Recognition and Clustering." Doctoral thesis, Universidad de Alicante, 2018. http://hdl.handle.net/10045/98110.

Full text
Abstract:
In this thesis, we propose several developments in the context of Structural Similarity. We address both node (local) similarity and graph (global) similarity. Concerning node similarity, we focus on improving the diffusive process used to compute it (e.g. Commute Times) by modifying or rewiring the structure of the graph (Graph Densification), although some advances in Laplacian-based ranking are also included in this document. Graph Densification is a particular case of what we call graph rewiring, a novel field (similar in spirit to image processing) where input graphs are rewired to be better conditioned for subsequent pattern recognition tasks (e.g. clustering). In the thesis, we contribute a scalable and effective method driven by Dirichlet processes, proposing both a completely unsupervised and a semi-supervised approach for Dirichlet densification. We also contribute new random walkers (Return Random Walks) that serve as structural filters and as asymmetry detectors in directed brain networks used to make early predictions of Alzheimer's disease (AD).

Graph similarity is addressed by designing structural information channels as a means of measuring the Mutual Information between graphs. To this end, we first embed the graphs by means of Commute Times. Commute-time embeddings have good properties for Delaunay triangulations (the typical representation for Graph Matching in computer vision), which means these embeddings can act as encoders in the channel as well as decoders (since they are invertible). Consequently, structural noise can be modelled by the deformation introduced in one of the manifolds to fit the other. This methodology leads to a highly discriminative similarity measure, since the Mutual Information is measured on the manifolds (the vectorial domain) through copulas and bypass entropy estimators. This is consistent with the methodology of decoupling the measurement of graph similarity into two steps: (a) linearizing the Quadratic Assignment Problem (QAP) by means of the embedding trick, and (b) measuring similarity in vector spaces.

The QAP is also investigated in this thesis. More precisely, we analyze the behaviour of m-best Graph Matching methods. These methods usually start from a couple of best solutions and then expand the search space locally by excluding previously clamped variables. The next variable to clamp is usually selected at random, but we show that this degrades performance when structural noise (outliers) arises. Alternatively, we propose several heuristics for spanning the search space and evaluate all of them, showing that they are usually better than random selection. These heuristics are particularly interesting because they exploit the structure of the affinity matrix, and efficiency is improved as well.

Concerning the application domains explored in this thesis, we focus on object recognition (graph similarity), clustering (rewiring), compression/decompression of graphs (with links to Extremal Graph Theory), 3D shape simplification (sparsification) and early prediction of AD.

Funding: Ministerio de Economía, Industria y Competitividad (TIN2012-32839, BES-2013-064482).
APA, Harvard, Vancouver, ISO, and other styles
8

Fügenschuh, Marzena. "Relaxations and solutions for the minimum graph bisection problem /." Berlin : Logos-Verl, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3025314&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Calinescu, Gruia. "Approximation algorithms for graph-theoretic problems : planar subgraphs and multiway cut." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/9203.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hartmann, Tanja [Verfasser], and D. [Akademischer Betreuer] Wagner. "Algorithms for Graph Connectivity and Cut Problems - Connectivity Augmentation, All-Pairs Minimum Cut, and Cut-Based Clustering / Tanja Hartmann. Betreuer: D. Wagner." Karlsruhe : KIT-Bibliothek, 2014. http://d-nb.info/1064504167/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Choudhury, Salimur Rashid, and University of Lethbridge Faculty of Arts and Science. "Approximation algorithms for a graph-cut problem with applications to a clustering problem in bioinformatics." Thesis, Lethbridge, Alta. : University of Lethbridge, Department of Mathematics and Computer Science, 2008, 2008. http://hdl.handle.net/10133/774.

Full text
Abstract:
Clusters in protein interaction networks can potentially help identify functional relationships among proteins. We study the clustering problem by modeling it as a graph cut problem. Given an edge-weighted graph, the goal is to partition the graph into a prescribed number of subsets obeying some capacity constraints, so as to maximize the total weight of the edges that lie within a subset. Identification of a dense subset might shed some light on the biological function of all the proteins in the subset. We study integer programming formulations and exhibit large integrality gaps for various formulations, which is indicative of the difficulty of obtaining constant-factor approximation algorithms using the primal-dual schema. We propose three approximation algorithms for the problem and evaluate them on the Database of Interacting Proteins and on randomly generated graphs. Our experiments show that the algorithms are fast and have good performance ratios in practice.

xiii, 71 leaves : ill. ; 29 cm.
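The objective described here, for a fixed candidate partition, is simply the total weight of intra-subset edges. A minimal evaluator for that objective (the capacity constraints and the thesis's approximation algorithms themselves are not shown):

```python
def intra_cluster_weight(weights, partition):
    """Total weight of edges whose endpoints fall in the same subset.

    `weights` maps frozenset({u, v}) -> edge weight; `partition` is a
    list of disjoint vertex sets. This evaluates one candidate
    clustering; maximizing it over capacity-respecting partitions is
    the hard problem studied in the thesis.
    """
    return sum(w for e, w in weights.items()
               if any(e <= part for part in partition))
```

For example, with edges {1,2}:3, {2,3}:1, {3,4}:2, the partition [{1,2},{3,4}] keeps weight 5 and cuts only the edge {2,3}.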
APA, Harvard, Vancouver, ISO, and other styles
12

Cabral, Lucidio dos Anjos Formiga. "Algoritmos exatos para o problema de edição de p-Clusters." Universidade Federal da Paraíba, 2015. http://tede.biblioteca.ufpb.br:8080/handle/tede/7870.

Full text
Abstract:
This work deals with the p-Cluster Editing Problem (p-CEP), which consists in performing a minimum number of editions (additions or removals of edges) in a graph G so as to transform it into a disjoint union of p cliques (clusters), where G and p are input data. p-CEP is an NP-hard problem with applications in areas such as computational biology and machine learning. To solve it, we propose two new mathematical formulations and improvements to a formulation from the literature, as well as new valid inequalities. The three formulations were studied both theoretically, by comparing their linear relaxations, and empirically, by implementing three exact algorithms: two based on Branch-and-Cut (BC) and one based on Branch-and-Price (BP). The proposed algorithms were tested on instances with up to 211 vertices. The results show how the performance of the algorithms depends on the graph density, the ratio between p and the number of vertices, and the valid inequalities. Overall, the BC algorithms were superior to the BP algorithm; however, the latter obtained the best dual bounds on some instances.

Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
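For a fixed candidate partition into p clusters, the p-CEP objective is easy to evaluate: count the missing edges inside each cluster (additions) plus the existing edges between clusters (removals). A minimal sketch of that evaluation (the thesis's exact algorithms search over partitions):

```python
from itertools import combinations

def editing_cost(edges, clusters):
    """Number of edge editions needed so the graph becomes the disjoint
    union of the given cliques: add every missing intra-cluster edge,
    remove every inter-cluster edge."""
    E = {frozenset(e) for e in edges}
    label = {}
    for i, c in enumerate(clusters):
        for v in c:
            label[v] = i
    additions = sum(1 for c in clusters
                    for u, v in combinations(sorted(c), 2)
                    if frozenset((u, v)) not in E)
    removals = sum(1 for e in E
                   if label[min(e)] != label[max(e)])
    return additions + removals
```

For the path 1-2-3-4 with clusters {1,2,3} and {4}, the cost is 2: add edge {1,3}, remove edge {3,4}.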
APA, Harvard, Vancouver, ISO, and other styles
13

Rambour, Clément. "Approches tomographiques structurelles pour l'analyse du milieu urbain par tomographie SAR THR : TomoSAR." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT007/document.

Full text
Abstract:
SAR tomography exploits multiple acquisitions of the same area, each acquired from a slightly different viewpoint, to retrieve the 3-D distribution of complex reflectivity on the ground. As this imaging technique relies on the transmission and reception of coherent electromagnetic waves, the data are complex-valued and the missing spatial information (along the vertical axis) is coded in the phase of the pixels. Many methods have been proposed to retrieve this information. However, the natural redundancies of certain scenes are generally not exploited to improve the tomographic estimation step. This thesis proposes new approaches that regularize the reflectivity densities obtained through SAR tomography by exploiting the geometrical structures specific to urban environments.
APA, Harvard, Vancouver, ISO, and other styles
14

Chafqane, Ben Rahhou Touria. "Nouvelles méthodes pour les problèmes d'ordonnancement cyclique." Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2400/.

Full text
Abstract:
Research on scheduling mobilizes a large number of researchers, mainly because of the broad range of scheduling problems. Among them, the job-shop problem is particularly prominent, as it is widely encountered in industry, and many research projects in France and abroad stem from it. Job-shop problems can often be simplified by treating them as cyclic problems: task scheduling then becomes cyclic, and its purpose is to organize production by repeating an optimized basic cycle. Many parameters are involved in optimizing the basic cycle, such as the chosen cycle period, the order of the elementary operations performed in a job, the durations of these operations, the number of products per cycle, and so on.

Several approaches have been used to solve this problem, among them Petri nets (in particular timed event graphs), graphs, linear programming, and the theory of heaps of pieces. The graph approach gives a graphical representation of the problem as a graph whose nodes represent the operations and whose arcs encode the constraints; a cyclic scheduling problem admits a feasible solution if and only if the associated graph is consistent. This consistency property of a cyclic scheduling problem and its graph allows the search tree of the branch-and-bound procedure proposed for this approach to be pruned.

In the heap-of-pieces approach, the sub-problem of evaluating a solution can be solved easily. Indeed, by translating the problem into a suitable mathematical structure, evaluating the cycle time amounts to computing an eigenvalue of a product of matrices in which each matrix represents an elementary operation. This property is particularly useful when successively evaluating a large number of schedules. In addition, the theory gives a very intuitive representation of a schedule, which is depicted as a stack of bricks (a "heap" of pieces) whose upper contour corresponds to the completion dates of the last operations on the machines.
APA, Harvard, Vancouver, ISO, and other styles
15

Sa, Shibasaki Rui. "Lagrangian Decomposition Methods for Large-Scale Fixed-Charge Capacitated Multicommodity Network Design Problem." Thesis, Université Clermont Auvergne‎ (2017-2020), 2020. http://www.theses.fr/2020CLFAC024.

Full text
Abstract:
Typically arising in logistics and telecommunications, the fixed-charge capacitated multicommodity network design problem remains challenging, especially in large-scale contexts, where the ability to produce good-quality solutions in reasonable computing time rests on the availability of efficient algorithms. In that spirit, this thesis proposes Lagrangian approaches able to provide relatively sharp bounds for large-scale instances of the problem. The efficiency of the methods depends on the algorithm applied to solve the Lagrangian duals, so we compare two of the most efficient solvers in the literature: the Volume Algorithm and the Bundle Method. The results showed the Volume Algorithm to be more efficient in the present context, and it was the one kept for the rest of the research project. A first Lagrangian heuristic was devised to produce good-quality feasible solutions for the problem, obtaining far better results than Cplex for the largest instances. Concerning lower bounds, a Relax-and-Cut algorithm was implemented, embedding sensitivity analysis and constraint scaling, which improved results: improvements in the lower bounds reached 11%, but on average remained under 1%. The Relax-and-Cut algorithm was then included in a Branch-and-Cut scheme that solves a linear program at each node of the search tree. Moreover, a Lagrangian Feasibility Pump heuristic, using the Volume Algorithm as the linear programming solver, was implemented to accelerate the search for good feasible solutions in large-scale cases. The results showed that the proposed scheme is competitive with the best algorithms in the literature and provides the best results in large-scale contexts.
Finally, a heuristic version of the Branch-and-Cut algorithm based on the Lagrangian Feasibility Pump was tested, providing the best results overall when compared with efficient heuristics from the literature.
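The Lagrangian bounding strategy described in this abstract can be illustrated on a toy problem. The sketch below runs plain subgradient ascent on the dual of a one-constraint covering problem; the covering constraint and the diminishing step rule are illustrative assumptions, not the thesis's Volume Algorithm.

```python
def lagrangian_dual_bound(costs, steps=100):
    """Subgradient ascent on the Lagrangian dual of
       min  sum_j c_j x_j   s.t.  sum_j x_j >= 1,  x binary,
    with the single covering constraint relaxed (a toy stand-in for the
    relaxed linking constraints of capacitated network design)."""
    u, best = 0.0, float("-inf")
    for t in range(1, steps + 1):
        # The relaxed problem decomposes per variable: set x_j = 1 iff c_j - u < 0.
        x = [1 if c - u < 0 else 0 for c in costs]
        value = sum(c * xj for c, xj in zip(costs, x)) + u * (1 - sum(x))
        best = max(best, value)          # every L(u) is a valid lower bound
        g = 1 - sum(x)                   # subgradient of L at u
        u = max(0.0, u + g / t)          # diminishing step, projected onto u >= 0
    return best
```

For `costs=[1, 2]` the dual bound converges to 1, which here equals the integer optimum (pick the cheapest item); in general the dual only provides a lower bound.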
APA, Harvard, Vancouver, ISO, and other styles
16

Lê, Ngoc C. "Algorithms for the Maximum Independent Set Problem." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2015. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-172639.

Full text
Abstract:
This thesis focuses mainly on the Maximum Independent Set (MIS) problem. Some related graph-theoretic combinatorial problems are also considered. As these problems are generally NP-hard, we study their complexity in hereditary graph classes, i.e. graph classes defined by a set F of forbidden induced subgraphs. We review the literature on the subject, for example complexity results, applications, and techniques for tackling the problem. By considering some general approaches, we exhibit several cases where the problem admits a polynomial-time solution. More specifically, we present polynomial-time algorithms for the MIS problem in: + some subclasses of $S_{2;j;k}$-free graphs (thus generalizing the classical result for $S_{1;2;k}$-free graphs); + some subclasses of $tree_{k}$-free graphs (thus generalizing the classical results for subclasses of $P_{5}$-free graphs); + some subclasses of $P_{7}$-free graphs and $S_{2;2;2}$-free graphs; and various subclasses of graphs of bounded maximum degree, for example subcubic graphs. Our algorithms are based on various approaches. In particular, we characterize augmenting graphs in a subclass of $S_{2;k;k}$-free graphs and a subclass of $S_{2;2;5}$-free graphs. These characterizations are partly based on extensions of the concept of a redundant set [125]. We also propose methods for finding augmenting chains, an extension of the method in [99], and for finding augmenting trees, an extension of the methods in [125]. We apply the augmenting vertex technique, originally used for $P_{5}$-free graphs and banner-free graphs, to some more general graph classes. We consider a general graph-theoretic combinatorial problem, the so-called Maximum Π-Set problem. Two special cases of this problem, the so-called Maximum F-(Strongly) Independent Subgraph and Maximum F-Induced Subgraph problems, where F is a set of connected graphs, are considered.
The complexity of the Maximum F-(Strongly) Independent Subgraph problem is reviewed and the NP-hardness of the Maximum F-Induced Subgraph problem is proved. We also extend the augmenting approach to apply to the general Maximum Π-Set problem. We review classical graph transformations and give two unified views based on pseudo-boolean functions and the α-redundant vertex. We also make extensive use of α-redundant vertices, originally used mainly for $P_{5}$-free graphs, to give polynomial-time solutions for some subclasses of $S_{2;2;2}$-free graphs and $tree_{k}$-free graphs. We consider some classical sequential greedy heuristic methods, and we combine classical algorithms with α-redundant vertices to obtain new strategies for choosing the next vertex in greedy methods. Some aspects of the algorithms, for example forbidden induced subgraph sets and worst-case results, are also considered. Finally, we restrict our attention to graphs of bounded maximum degree and subcubic graphs; by using techniques such as the α-redundant vertex, clique separators, and distance-based arguments, we generalize these results to some subclasses of $S_{i;j;k}$-free subcubic graphs.
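The sequential greedy heuristics mentioned in the abstract follow a common template: repeatedly pick a vertex by some rule and delete its closed neighborhood. A minimal sketch of the classical minimum-degree rule (a standard textbook heuristic, not the thesis's α-redundant-vertex strategies):

```python
def greedy_mis(adj):
    """Minimum-degree greedy heuristic for Maximum Independent Set:
    repeatedly pick a vertex of minimum degree in the remaining graph and
    delete its closed neighborhood."""
    remaining = set(adj)
    independent = []
    while remaining:
        # degree counted only within the still-remaining subgraph
        v = min(remaining, key=lambda u: sum(1 for w in adj[u] if w in remaining))
        independent.append(v)
        remaining -= {v} | set(adj[v])
    return independent
```

On the path $P_5$ (vertices 0-1-2-3-4) this rule recovers an optimal independent set of size 3.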
APA, Harvard, Vancouver, ISO, and other styles
17

Faure, Nathalie. "Contribution à la résolution de problèmes de regroupement de sessions multicasts." Paris 6, 2006. http://www.theses.fr/2006PA066360.

Full text
Abstract:
Multicast services connect several entities that can interact to send or receive information. For each multicast session or group, a tree linking all the entities of the group must be determined. A large number of sessions arrive simultaneously, and the network's capacity does not allow that many trees to be managed at the same time. To remedy this, we use tree aggregation, which groups several sessions onto the same tree and thus reduces the total number of trees in use. As a consequence, network resources may be wasted when a session is delivered to a client that does not wish to receive it. We focus on the problem of deciding how to group sessions and their clients so as to minimize the amount of information sent needlessly. We propose several formulations of this problem, which lead to different solution methods.
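The wasted traffic that this grouping problem minimizes can be sketched as follows; the per-group union-of-receivers cost model below is an assumption made for illustration, not the thesis's formulations.

```python
def aggregation_waste(groups, bandwidth):
    """Wasted deliveries when the sessions of each group share one tree
    spanning the union of their receiver sets: each session's traffic also
    reaches the receivers of the other sessions in its group."""
    waste = 0
    for group in groups:                      # group = list of receiver sets
        covered = set().union(*group)         # clients reached by the shared tree
        for receivers in group:
            waste += bandwidth * len(covered - receivers)
    return waste
```

Aggregating sessions with receiver sets {a,b} and {b,c} onto one tree wastes two unit deliveries (session 1 reaches c, session 2 reaches a), while keeping them on separate trees wastes nothing but doubles the tree count, which is the trade-off the problem captures.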
APA, Harvard, Vancouver, ISO, and other styles
18

Mahjoub, Meriem. "The Survivable Network Design Problems with High Node-Connectivity Constraints : Polyhedra and Algorithms." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLED046/document.

Full text
Abstract:
Given a weighted undirected graph and an integer k, the k-node-connected subgraph problem is to find a minimum-weight subgraph which contains k node-disjoint paths between every pair of nodes. We introduce new classes of valid inequalities and discuss their facial aspect. We also devise separation routines, investigate the structural properties of the linear relaxation, and discuss some reduction operations that can be used in a preprocessing phase for the separation. Using these results, we devise a Branch-and-Cut algorithm and present some computational results.
We then present a new extended formulation for the k-node-connected subgraph problem, along with a Branch-and-Cut-and-Price algorithm for solving the problem. Next, we investigate the hop-constrained version of the problem. The k node-disjoint hop-constrained network design problem is to find a minimum-weight subgraph such that between every origin and destination there exist at least k node-disjoint paths of length at most L. We propose an integer linear programming formulation for L=2,3 and investigate the associated polytope. We introduce valid inequalities and devise separation algorithms. Finally, we propose a Branch-and-Cut algorithm for solving the problem, along with some computational results on TSPLIB instances.
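The k node-disjoint paths condition above can be checked with the standard node-splitting reduction to maximum flow (Menger's theorem). A minimal Edmonds-Karp sketch, illustrative rather than the thesis's separation machinery:

```python
from collections import deque, defaultdict

def max_node_disjoint_paths(n, edges, s, t):
    """Count internally node-disjoint s-t paths in an undirected graph:
    split each vertex v into v_in -> v_out of capacity 1 (unbounded for
    s and t) and run Edmonds-Karp max-flow on the transformed digraph."""
    INF = float("inf")
    cap = defaultdict(int)
    nbrs = defaultdict(set)

    def arc(u, v, c):
        cap[(u, v)] += c
        nbrs[u].add(v)
        nbrs[v].add(u)          # allow traversal of residual (reverse) arcs

    for v in range(n):
        arc(("in", v), ("out", v), INF if v in (s, t) else 1)
    for u, v in edges:          # each undirected edge gives two directed arcs
        arc(("out", u), ("in", v), 1)
        arc(("out", v), ("in", u), 1)

    source, sink = ("out", s), ("in", t)
    flow = 0
    while True:
        parent = {source: None}          # BFS for a shortest augmenting path
        queue = deque([source])
        while queue and sink not in parent:
            x = queue.popleft()
            for y in nbrs[x]:
                if y not in parent and cap[(x, y)] > 0:
                    parent[y] = x
                    queue.append(y)
        if sink not in parent:
            return flow
        y = sink                          # augment by 1 (all finite caps are 1)
        while parent[y] is not None:
            x = parent[y]
            cap[(x, y)] -= 1
            cap[(y, x)] += 1
            y = x
        flow += 1
```

On the 4-cycle 0-1-2-3-0 there are two node-disjoint paths between opposite vertices 0 and 2, matching the flow value.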
APA, Harvard, Vancouver, ISO, and other styles
19

Ben, Rahhou Touria. "Nouvelles méthodes pour les problèmes d'ordonnancement cyclique." Phd thesis, Université Paul Sabatier - Toulouse III, 2013. http://tel.archives-ouvertes.fr/tel-00926977.

Full text
Abstract:
Research on scheduling mobilizes a large number of researchers, a strong activity mainly due to the broad range of scheduling problems. Among them, the multiple-routing shop problem, commonly called "Job-Shop", holds a particularly prominent place, as it is frequently encountered in industry, and many research projects in France and abroad stem from it. Job-Shop problems can often be simplified by treating them as cyclic problems: the task schedule becomes cyclic, and its purpose is to organize production activities by repeating an optimized basic cycle. Many parameters come into play in optimizing the basic cycle, such as the chosen cycle period, the order of the elementary operations performed for each job, the duration of these operations, the number of products completed per cycle, etc. Several approaches have been used to solve this problem, among them the Petri net approach, in particular timed event graphs, the graph approach, the linear programming approach, and the heap of pieces approach. The graph approach gives a graphical representation of the problem as a graph whose nodes represent the operations and whose arcs express the constraints of the cyclic scheduling problem; such a problem admits a feasible solution if, and only if, the associated graph is consistent. This consistency property of a cyclic scheduling problem and of its graph makes it possible to prune the search tree of the branch-and-bound procedure proposed for this approach.
In the heap of pieces approach, the sub-problem of evaluating a solution can be solved easily. Indeed, by translating the problem into a suitable mathematical structure, evaluating the throughput of the cycle amounts to computing an eigenvalue of a product of matrices in which each matrix represents an elementary operation. This property is particularly attractive when a large number of schedules must be evaluated in succession. Moreover, the theory of heaps gives a very intuitive representation of a schedule, which appears as a stack of several bricks (literally, a "heap" of pieces) whose upper contour corresponds to the completion times of the last operations on the machines.
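The eigenvalue evaluation mentioned in this abstract lives in max-plus algebra: for an irreducible max-plus matrix, the eigenvalue (the mean cycle time of the schedule) equals the maximum cycle mean of the associated precedence graph, which Karp's algorithm computes exactly. A minimal sketch on a hypothetical 2-machine matrix, not the thesis code:

```python
NEG_INF = float("-inf")

def max_cycle_mean(M):
    """Karp's algorithm: for an irreducible max-plus matrix M, the max-plus
    eigenvalue equals the maximum cycle mean of the digraph with arc
    weights M[u][v]."""
    n = len(M)
    # D[k][v] = maximum weight of a walk of exactly k arcs ending at v,
    # starting anywhere (all-zero initialisation = implicit super-source).
    D = [[NEG_INF] * n for _ in range(n + 1)]
    D[0] = [0.0] * n
    for k in range(1, n + 1):
        for v in range(n):
            D[k][v] = max(
                (D[k - 1][u] + M[u][v]
                 for u in range(n)
                 if D[k - 1][u] > NEG_INF and M[u][v] > NEG_INF),
                default=NEG_INF,
            )
    # lambda = max_v min_k (D[n][v] - D[k][v]) / (n - k)
    lam = NEG_INF
    for v in range(n):
        if D[n][v] > NEG_INF:
            lam = max(lam, min((D[n][v] - D[k][v]) / (n - k)
                               for k in range(n) if D[k][v] > NEG_INF))
    return lam
```

For M = [[1, 3], [2, 1]] the cycle 0→1→0 has mean (3+2)/2 = 2.5, which dominates both self-loops, so the cycle time is 2.5.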
APA, Harvard, Vancouver, ISO, and other styles
20

Magnouche, Youcef. "The multi-terminal vertex separator problem : Complexity, Polyhedra and Algorithms." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLED020/document.

Full text
Abstract:
Given a graph G = (V U T, E), where V U T is the set of vertices, T is a set of terminals, and w is a weight function associated with the non-terminal nodes, the multi-terminal vertex separator problem consists in partitioning V U T into k + 1 subsets {S, V1,..., Vk} such that there is no edge between two different subsets Vi and Vj, each Vi contains exactly one terminal, and the weight of S is minimum. In this thesis, we consider the problem from a polyhedral point of view. We give two integer programming formulations for the problem and, for one of them, investigate the related polyhedron. We describe some valid inequalities and characterize when these inequalities define facets. Using these results, we develop a Branch-and-Cut algorithm for the problem. We also study the multi-terminal vertex separator polytope in graphs decomposable by one-node cutsets. If G is a graph that decomposes into G1 and G2, we show that the multi-terminal vertex separator polytope in G can be described from two linear systems related to G1 and G2. This gives rise to a technique for characterizing the multi-terminal vertex separator polytope in graphs that are recursively decomposable. Moreover, we propose three extended formulations for the problem and derive Branch-and-Price and Branch-and-Cut-and-Price algorithms. For each formulation we present a column generation scheme, the way to compute the dual bound, and the branching scheme. Finally, we discuss four variants of the multi-terminal vertex separator problem. We show that all these variants are NP-hard, and for each one we give an integer programming formulation and present some classes of valid inequalities.
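The feasibility conditions in the definition above (no edge between two distinct parts once the separator is removed, exactly one terminal per part) are straightforward to check for a candidate partition. A minimal sketch with a hypothetical toy instance, not code from the thesis:

```python
def check_separator(edges, parts, separator, terminals):
    """Validate a candidate multi-terminal vertex separator solution
    {S, V1, ..., Vk}: no edge may join two different parts Vi, Vj once S is
    removed, and each part must contain exactly one terminal."""
    where = {}
    for i, part in enumerate(parts):
        for v in part:
            where[v] = i
    for u, v in edges:
        if u in separator or v in separator:
            continue                      # edges incident to S are cut
        if where[u] != where[v]:
            return False                  # edge crosses two parts
    return all(sum(1 for t in terminals if t in part) == 1 for part in parts)
```

On the path t1 - a - t2, taking S = {a} separates the two terminals, while the empty separator does not.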
APA, Harvard, Vancouver, ISO, and other styles
21

Al-Hasani, Firas Ali Jawad. "Multiple Constant Multiplication Optimization Using Common Subexpression Elimination and Redundant Numbers." Thesis, University of Canterbury. Electrical and Computer Engineering, 2014. http://hdl.handle.net/10092/9054.

Full text
Abstract:
The multiple constant multiplication (MCM) operation is a fundamental operation in digital signal processing (DSP) and digital image processing (DIP). Examples of the MCM arise in finite impulse response (FIR) and infinite impulse response (IIR) filters, matrix multiplication, and transforms. The aim of this work is to minimize the complexity of the MCM operation using the common subexpression elimination (CSE) technique and redundant number representations. The CSE technique searches for and eliminates common digit patterns (subexpressions) among the MCM coefficients. More common subexpressions can be found by representing the MCM coefficients in redundant number representations. A CSE algorithm is proposed that works on a type of redundant numbers called the zero-dominant set (ZDS). The ZDS is an extension of the representations with the minimum number of non-zero digits, called minimum Hamming weight (MHW) representations. Using the ZDS improves the performance of CSE algorithms compared with using MHW representations. The disadvantage of using the ZDS is that it increases the possibility of overlapping patterns (digit collisions), where one or more digits are shared between several patterns: eliminating one pattern then destroys the others, because the shared digits are removed. A pattern preservation algorithm (PPA) is developed to resolve the overlapping patterns in the representations. Tree and graph encoders are proposed to generate a larger space of number representations. The algorithms generate redundant representations of a value for a given digit set, radix, and wordlength. The tree encoder is modified to search for common subexpressions simultaneously with the generation of the representation tree. A complexity measure is proposed to compare the subexpressions at each node, and the algorithm stops generating the rest of the representation tree when it finds subexpressions with maximum sharing.
This reduces the search space while minimizing the hardware complexity. A combinatorial model of the MCM problem is also proposed. The model is obtained by enumerating all the possible solutions of the MCM, which together form a graph called the demand graph; arc routing on this graph yields the solutions of the MCM problem. A similar arc routing arises in capacitated arc routing problems such as the winter salting problem. An ant colony optimization (ACO) metaheuristic is proposed to traverse the demand graph. The ACO is simulated on a PC in the Python programming language to verify the correctness of the model and the working of the ACO, and a parallel simulation of the ACO is carried out on a multi-core supercomputer using the C++ Boost Graph Library.
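A classical minimum-Hamming-weight representation of the kind the ZDS extends is the canonical signed-digit (CSD, or non-adjacent) form, with digits in {-1, 0, 1} and no two adjacent non-zeros. A minimal sketch of the standard conversion (illustrative, not the thesis's ZDS algorithm):

```python
def to_csd(n):
    """Canonical signed-digit form of a non-negative integer, least
    significant digit first. CSD attains the minimum Hamming weight over
    radix-2 signed-digit representations."""
    digits = []
    while n:
        if n % 2 == 0:
            d = 0
        else:
            d = 2 - (n % 4)   # 1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
        digits.append(d)
        n = (n - d) // 2
    return digits
```

For example 7 = 111 in binary (three non-zeros) becomes 100-1 in CSD, i.e. 8 - 1, with only two non-zero digits, which is why signed-digit recoding exposes cheaper shift-and-add structures for MCM.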
APA, Harvard, Vancouver, ISO, and other styles
22

Hirschfeld, Bernd [Verfasser]. "Approximative Lösungen des Max-cut-Problems mit semidefiniten Programmen / vorgelegt von Bernd Hirschfeld." 2004. http://d-nb.info/970150032/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Lê, Ngoc C. "Algorithms for the Maximum Independent Set Problem." Doctoral thesis, 2014. https://tubaf.qucosa.de/id/qucosa%3A22990.

Full text
Abstract:
This thesis focuses mainly on the Maximum Independent Set (MIS) problem. Some related graph-theoretic combinatorial problems are also considered. As these problems are generally NP-hard, we study their complexity in hereditary graph classes, i.e. graph classes defined by a set F of forbidden induced subgraphs. We review the literature on the subject, for example complexity results, applications, and techniques for tackling the problem. By considering some general approaches, we exhibit several cases where the problem admits a polynomial-time solution. More specifically, we present polynomial-time algorithms for the MIS problem in: + some subclasses of $S_{2;j;k}$-free graphs (thus generalizing the classical result for $S_{1;2;k}$-free graphs); + some subclasses of $tree_{k}$-free graphs (thus generalizing the classical results for subclasses of $P_{5}$-free graphs); + some subclasses of $P_{7}$-free graphs and $S_{2;2;2}$-free graphs; and various subclasses of graphs of bounded maximum degree, for example subcubic graphs. Our algorithms are based on various approaches. In particular, we characterize augmenting graphs in a subclass of $S_{2;k;k}$-free graphs and a subclass of $S_{2;2;5}$-free graphs. These characterizations are partly based on extensions of the concept of a redundant set [125]. We also propose methods for finding augmenting chains, an extension of the method in [99], and for finding augmenting trees, an extension of the methods in [125]. We apply the augmenting vertex technique, originally used for $P_{5}$-free graphs and banner-free graphs, to some more general graph classes. We consider a general graph-theoretic combinatorial problem, the so-called Maximum Π-Set problem. Two special cases of this problem, the so-called Maximum F-(Strongly) Independent Subgraph and Maximum F-Induced Subgraph problems, where F is a set of connected graphs, are considered.
The complexity of the Maximum F-(Strongly) Independent Subgraph problem is reviewed and the NP-hardness of the Maximum F-Induced Subgraph problem is proved. We also extend the augmenting approach to apply to the general Maximum Π-Set problem. We review classical graph transformations and give two unified views based on pseudo-boolean functions and the α-redundant vertex. We also make extensive use of α-redundant vertices, originally used mainly for $P_{5}$-free graphs, to give polynomial-time solutions for some subclasses of $S_{2;2;2}$-free graphs and $tree_{k}$-free graphs. We consider some classical sequential greedy heuristic methods, and we combine classical algorithms with α-redundant vertices to obtain new strategies for choosing the next vertex in greedy methods. Some aspects of the algorithms, for example forbidden induced subgraph sets and worst-case results, are also considered. Finally, we restrict our attention to graphs of bounded maximum degree and subcubic graphs; by using techniques such as the α-redundant vertex, clique separators, and distance-based arguments, we generalize these results to some subclasses of $S_{i;j;k}$-free subcubic graphs.
APA, Harvard, Vancouver, ISO, and other styles