
Dissertations / Theses on the topic 'Polyhedral approximation'



Consult the top 20 dissertations / theses for your research on the topic 'Polyhedral approximation.'




1

Upadrasta, Ramakrishna. "Sub-Polyhedral Compilation using (Unit-)Two-Variables-Per-Inequality Polyhedra." PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00818764.

Abstract:
The goal of this thesis is to design algorithms that run with better complexity when compiling or parallelizing loop programs. The framework within which our algorithms operate is the polyhedral model of compilation, which has been successful in the design and implementation of complex loop-nest optimizers and parallelizing compilers. The algorithmic complexity and scalability limitations of this framework remain one important weakness. We address it by introducing sub-polyhedral compilation using (Unit-)Two-Variables-Per-Inequality or (U)TVPI polyhedra, namely polyhedra with restricted constraints of the type a·x_i + b·x_j ≤ c (resp. ±x_i ± x_j ≤ c). A major focus of our sub-polyhedral compilation is the introduction of sub-polyhedral scheduling, where we propose a technique for scheduling using (U)TVPI polyhedra. As part of this, we introduce algorithms that can be used to construct under-approximations of the systems of constraints resulting from affine scheduling problems. This technique relies on simple polynomial-time algorithms to under-approximate a general polyhedron by (U)TVPI polyhedra. These under-approximation algorithms are generic enough to be used for many kinds of loop-parallelization scheduling problems, reducing each of their complexities to asymptotically polynomial time. We also introduce sub-polyhedral code generation, where we propose algorithms that exploit the improved complexities of (U)TVPI sub-polyhedra in polyhedral code generation. For this problem, we show that the exponential complexities associated with widely used polyhedral code generators can be reduced to polynomial time using (U)TVPI sub-polyhedra. The sub-polyhedral scheduling techniques presented above are evaluated in an experimental framework. For this, we modify the state-of-the-art PLuTo compiler, which can parallelize for multi-core architectures using permutation and tiling transformations. We show that, using our scheduling technique, the above under-approximations yield polyhedra that are non-empty for 10 out of 16 benchmarks from the Polybench (2.0) kernels. Solving the under-approximated system leads to asymptotic gains in complexity and shows practically significant improvements when compared to a traditional LP solver. We also verify that code generated by our sub-polyhedral parallelization prototype matches the performance of PLuTo-optimized code when the under-approximation preserves feasibility.
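As a quick illustration of the restricted constraint class this thesis targets, the sketch below (ours, not the thesis code; the tightening step assumes non-negative variables and positive coefficients) classifies a constraint as TVPI/UTVPI and shows one naive valid under-approximation:

```python
def is_tvpi(coeffs):
    """a.x <= c is TVPI if at most two coefficients are nonzero."""
    return sum(a != 0 for a in coeffs) <= 2

def is_utvpi(coeffs):
    """UTVPI additionally restricts nonzero coefficients to +1/-1."""
    return is_tvpi(coeffs) and all(a in (-1, 0, 1) for a in coeffs)

def utvpi_under_approx(a, b, c):
    """Under-approximate a*x + b*y <= c (with a, b >= 1 and x, y >= 0)
    by the UTVPI constraint x + y <= c // max(a, b). It is valid since
    a*x + b*y <= max(a, b)*(x + y) <= max(a, b)*(c // max(a, b)) <= c."""
    assert a >= 1 and b >= 1
    return c // max(a, b)   # right-hand side of x + y <= ...
```

Any point satisfying the tightened constraint also satisfies the original one, which is the direction scheduling needs: a feasible point of the under-approximation is a feasible schedule.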
2

Cantin, Pierre. "Approximation of scalar and vector transport problems on polyhedral meshes." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1028/document.

Abstract:
This thesis analyzes, at the continuous level and at the discrete level on polyhedral meshes, scalar and vector transport problems in three-dimensional domains. These problems are composed of a diffusive term, an advective term, and a reactive term. In the context of Friedrichs systems, the continuous problems are analyzed in Lebesgue graph spaces. The classical positivity assumption on the Friedrichs tensor is generalized so as to cover the cases of practical interest where this tensor takes null or slightly negative values. A new scheme converging at order 3/2 is devised for the scalar advection-reaction problem using scalar degrees of freedom attached to mesh vertices. Two new schemes, also using scalar degrees of freedom attached to mesh vertices, are devised for the scalar transport problem and are robust with respect to the dominant regime. The first scheme converges at order 1/2 when advection effects are dominant and at order 1 when diffusion effects are dominant. The second scheme improves the accuracy by converging at order 3/2 when advection effects are dominant. Finally, a new scheme converging at order 1/2 is devised for the vector advection-reaction problem using only one scalar degree of freedom per mesh edge. The accuracy and the efficiency of all these schemes are assessed on various test cases using three-dimensional polyhedral meshes.
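The convergence orders quoted above are the kind of statement one checks numerically; a minimal sketch (ours, with illustrative numbers) of the standard observed-order computation:

```python
import math

def observed_order(e_coarse, e_fine, h_coarse, h_fine):
    """Empirical convergence order from errors on two mesh refinements."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# An order-3/2 scheme divides the error by 2**1.5 when h is halved:
print(observed_order(1.0, 2 ** -1.5, 1.0, 0.5))  # -> 1.5
```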
3

McDonald, Terry Lynn. "Piecewise polynomial functions on a planar region: boundary constraints and polyhedral subdivisions." Texas A&M University, 2003. http://hdl.handle.net/1969.1/3915.

Abstract:
Splines are piecewise polynomial functions of a given order of smoothness r on a triangulated region (or polyhedrally subdivided region) Δ of ℝ^d. The set of splines of degree at most k forms a vector space C^r_k(Δ). Moreover, a nice way to study C^r_k(Δ) is to embed Δ in ℝ^{d+1} and form the cone Δ̂ of Δ with the origin. It turns out that the set of splines on Δ̂ is a graded module C^r(Δ̂) over the polynomial ring ℝ[x_1, ..., x_{d+1}], and the dimension of C^r_k(Δ) is the dimension of the degree-k graded piece of C^r(Δ̂). This dissertation follows the works of Billera and Rose, as well as Schenck and Stillman, who each approached the study of splines from the viewpoint of homological and commutative algebra. They both defined chain complexes of modules such that C^r(Δ̂) appeared as the top homology module. First, we analyze the effects of gluing planar simplicial complexes. Suppose Δ_1, Δ_2, and Δ = Δ_1 ∪ Δ_2 are all planar simplicial complexes which triangulate pseudomanifolds. When Δ_1 ∩ Δ_2 is also a planar simplicial complex, we use the Mayer-Vietoris sequence to obtain a natural relationship between the spline modules of the cones over Δ, Δ_1, Δ_2, and Δ_1 ∩ Δ_2. Next, given a simplicial complex Δ, we study splines which also vanish on the boundary of Δ. In this case, we discover a formula relating the Hilbert polynomial of the module of such boundary-vanishing splines to that of C^r(Δ̂). Finally, we consider splines which are defined on a polygonally subdivided region of the plane. By adding only edges to form a simplicial subdivision, we are able to find bounds for the dimensions of the vector spaces C^r_k of the polygonal subdivision for k ≫ 0; in particular, these bounds are given in terms of the dimensions of the corresponding spaces on the simplicial subdivision and geometrical data of both subdivisions. This dissertation concludes with some thoughts on future research questions and an appendix describing the Macaulay2 package SplineCode, which allows the study of the Hilbert polynomials of the spline modules.
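For orientation, the coning construction the abstract alludes to can be written out as follows (the standard Billera-Rose setup, in our notation rather than necessarily the dissertation's):

```latex
% Splines of smoothness r and degree at most k on a subdivision \Delta of \mathbb{R}^d:
C^r_k(\Delta) \;=\; \{\, F \in C^r(\Delta) \;:\;
  F|_{\sigma} \in \mathbb{R}[x_1,\dots,x_d]_{\le k} \ \text{for every face } \sigma \,\}.
% Coning \Delta to \hat{\Delta} \subset \mathbb{R}^{d+1} turns the full spline set into a
% graded module over S = \mathbb{R}[x_1,\dots,x_{d+1}], with
\dim_{\mathbb{R}} C^r_k(\Delta) \;=\; \dim_{\mathbb{R}} \bigl( C^r(\hat{\Delta}) \bigr)_k .
```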
4

Schulz, Henrik. "Polyhedral Surface Approximation of Non-Convex Voxel Sets and Improvements to the Convex Hull Computing Method." Forschungszentrum Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:d120-qucosa-27865.

Abstract:
In this paper we introduce an algorithm for the creation of polyhedral approximations for objects represented as strongly connected sets of voxels in three-dimensional binary images. The algorithm generates the convex hull of a given object and modifies the hull afterwards by recursive repetitions of generating convex hulls of subsets of the given voxel set or subsets of the background voxels. The result of this method is a polyhedron which separates object voxels from background voxels. The objects processed by this algorithm and also the background voxel components inside the convex hull of the objects are restricted to have genus 0. The second aim of this paper is to present some improvements to our convex hull algorithm to reduce computation time.
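The starting point of the method, the convex hull of the object voxels, is easy to reproduce; a minimal sketch (ours, using SciPy, not the paper's implementation) that finds the background voxels trapped inside the hull, i.e. exactly the ones the recursive refinement must separate out:

```python
import numpy as np
from scipy.spatial import Delaunay

def background_inside_hull(object_voxels, background_voxels):
    """Return the background voxel centers lying inside the convex hull
    of the object voxel centers (both given as lists of (x, y, z))."""
    hull = Delaunay(np.asarray(object_voxels, dtype=float))
    flags = hull.find_simplex(np.asarray(background_voxels, dtype=float)) >= 0
    return [v for v, inside in zip(background_voxels, flags) if inside]
```

In the paper's scheme, each such trapped genus-0 component is then carved out by a further convex hull computation.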
5

Schulz, Henrik. "Polyhedral Surface Approximation of Non-Convex Voxel Sets and Improvements to the Convex Hull Computing Method." Forschungszentrum Dresden-Rossendorf, 2009. https://hzdr.qucosa.de/id/qucosa%3A21613.

Abstract:
In this paper we introduce an algorithm for the creation of polyhedral approximations for objects represented as strongly connected sets of voxels in three-dimensional binary images. The algorithm generates the convex hull of a given object and modifies the hull afterwards by recursive repetitions of generating convex hulls of subsets of the given voxel set or subsets of the background voxels. The result of this method is a polyhedron which separates object voxels from background voxels. The objects processed by this algorithm and also the background voxel components inside the convex hull of the objects are restricted to have genus 0. The second aim of this paper is to present some improvements to our convex hull algorithm to reduce computation time.
6

Wang, Jiaxi. "PARAMETRIZATION AND SHAPE RECONSTRUCTION TECHNIQUES FOR DOO-SABIN SUBDIVISION SURFACES." UKnowledge, 2008. http://uknowledge.uky.edu/gradschool_theses/509.

Abstract:
This thesis presents a new technique for the reconstruction of a smooth surface from a set of 3D data points. The reconstructed surface is represented by an everywhere C1-continuous subdivision surface which interpolates all the given data points, and the topological structure of the reconstructed surface is exactly the same as that of the data points. The new technique consists of two major steps. First, an efficient surface reconstruction method is used to produce a polyhedral approximation M to the given data points. Second, a Doo-Sabin subdivision surface is constructed that smoothly passes through all the data points in the given data set. A new technique is presented for the second step in this thesis. The new technique iteratively modifies the vertices of the polyhedral approximation M until a new control mesh is reached whose Doo-Sabin subdivision surface interpolates M. It is proved that, for any mesh M of any size and any topology, the iterative process is always convergent with the Doo-Sabin subdivision scheme. The new technique has the advantages of both a local method and a global method, and the surface reconstruction process can reproduce special features such as edges and corners faithfully.
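The iteration described above is a fixed-point scheme; a hedged sketch of its shape (ours: A, the linear map sending control points to their Doo-Sabin limit positions, is assumed given, and building it is the nontrivial part the thesis addresses):

```python
import numpy as np

def interpolate_controls(A, targets, iters=100):
    """Fixed-point iteration C <- C + (P - A @ C): at a fixed point
    A @ C = P, i.e. the limit surface passes through every data point."""
    targets = np.asarray(targets, dtype=float)
    control = targets.copy()
    for _ in range(iters):
        control += targets - A @ control   # move by the interpolation residual
    return control
```

Convergence of this iteration for meshes of arbitrary size and topology is precisely what the thesis proves for the Doo-Sabin scheme.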
7

Paradinas, Salsón Teresa. "Simplification, approximation and deformation of large models." Doctoral thesis, Universitat de Girona, 2011. http://hdl.handle.net/10803/51293.

Abstract:
The high level of realism and interaction in many computer graphics applications requires techniques for processing complex geometric models. First, we present a method that produces an accurate low-resolution approximation of a multi-chart textured model, guaranteeing geometric fidelity and correct preservation of the appearance attributes. Then, we introduce a mesh structure called Compact Model that approximates dense triangular meshes while preserving sharp features, allowing adaptive reconstructions and supporting textured models. Next, we design a new space deformation technique called *Cages, based on a multi-level system of cages, which preserves the smoothness of the mesh between neighbouring cages and is extremely versatile, allowing the use of heterogeneous sets of coordinates and different levels of deformation. Finally, we propose a hybrid method that makes it possible to apply any deformation technique to large models, obtaining high-quality results with a reduced memory footprint and high performance.
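Cage-based deformation in general (not the multi-level *Cages scheme itself) reduces to a linear blend once generalized barycentric coordinates are precomputed; a minimal sketch under that assumption:

```python
import numpy as np

def deform(W, cage_vertices):
    """W[i, j] is the (precomputed) influence of cage vertex j on model
    vertex i, with each row summing to 1; deforming the model is then a
    single product: (n_model, n_cage) @ (n_cage, 3) -> (n_model, 3)."""
    return W @ np.asarray(cage_vertices, dtype=float)
```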
8

Schulz, Henrik. "Polyedrisierung dreidimensionaler digitaler Objekte mit Mitteln der konvexen Hülle." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1225887695624-97002.

Abstract:
For the visualization of three-dimensional digital objects, only their surface is generally of interest. Since imaging techniques digitize the entire spatial object as a volume structure, the surface has to be computed from these data. This work presents an algorithm that approximates the surface of three-dimensional digital objects given as sets of voxels, generating polyhedra that have the property of separating the voxels of the object from the voxels of the background. Furthermore, non-convex objects are classified, and it is investigated for which classes of objects the generated polyhedra have the minimal number of faces and the minimal surface area.
9

Wang, Guanglei. "Relaxations in mixed-integer quadratically constrained programming and robust programming." Thesis, Evry, Institut national des télécommunications, 2016. http://www.theses.fr/2016TELE0026/document.

Abstract:
Many real-life problems are characterized by making decisions with current information to achieve certain objectives. Mathematical programming has been developed as a successful tool to model and solve a wide range of such problems. However, many seemingly easy problems remain challenging, and some easy problems such as linear programs can become difficult in the face of uncertainty. Motivated by a telecommunication problem in which cloud virtual machines must be assigned to servers in a minimum-cost way, we employ several mathematical programming tools to solve the problem efficiently and develop new tools for general theoretical problems. In brief, our work can be summarized as follows. We provide an exact formulation and several reformulations of the cloud virtual machine assignment problem. Several valid inequalities are then used to strengthen the exact formulation, thereby accelerating the solution procedure significantly. In addition, an effective Lagrangian decomposition is proposed; we show that the bounds provided by this decomposition are strong, both theoretically and numerically. Finally, a symmetry-induced model is proposed, which may eliminate a large number of bilinear terms in some special cases. Motivated by the virtual machine assignment problem, we also investigate general methods for approximating convex and concave envelopes in bilinear optimization over a hypercube. We establish several theoretical connections between different techniques and prove the equivalence of two seemingly different relaxed formulations; an interesting research direction is also discussed. To address issues of uncertainty, a novel paradigm for general linear problems with uncertain parameters is proposed. This paradigm, termed multipolar robust optimization, generalizes the notions of static robustness, affinely adjustable robustness, and fully adjustable robustness, and fills the gaps in between; several known results follow as consequences of it. Further, we prove that the multipolar approach can generate a sequence of upper bounds and a sequence of lower bounds at the same time, and that both sequences converge to the robust value of the fully adjustable robust counterpart under some mild assumptions.
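The classical building block behind the bilinear envelope approximations mentioned above is the McCormick relaxation of a single term w = x*y on a box; a minimal sketch (ours, not the thesis models) that emits its four defining inequalities:

```python
def mccormick(lx, ux, ly, uy):
    """McCormick envelope of w = x*y on [lx, ux] x [ly, uy].
    Returns (alpha, beta, gamma) triples meaning w >= alpha*x + beta*y + gamma
    on the convex side and w <= alpha*x + beta*y + gamma on the concave side."""
    under = [(ly, lx, -lx * ly),   # from (x - lx)*(y - ly) >= 0
             (uy, ux, -ux * uy)]   # from (x - ux)*(y - uy) >= 0
    over  = [(uy, lx, -lx * uy),   # from (x - lx)*(y - uy) <= 0
             (ly, ux, -ux * ly)]   # from (x - ux)*(y - ly) <= 0
    return under, over
```

Expanding any of the sign conditions in the comments and isolating x*y recovers the corresponding inequality, which is why the relaxation is valid on the whole box.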
10

Schulz, Henrik. "Polyedrisierung dreidimensionaler digitaler Objekte mit Mitteln der konvexen Hülle." Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A23659.

Abstract:
For the visualization of three-dimensional digital objects, only their surface is generally of interest. Since imaging techniques digitize the entire spatial object as a volume structure, the surface has to be computed from these data. This work presents an algorithm that approximates the surface of three-dimensional digital objects given as sets of voxels, generating polyhedra that have the property of separating the voxels of the object from the voxels of the background. Furthermore, non-convex objects are classified, and it is investigated for which classes of objects the generated polyhedra have the minimal number of faces and the minimal surface area.
11

Fung, Ping-yuen, and 馮秉遠. "Approximation for minimum triangulations of convex polyhedra." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B29809964.

12

Fung, Ping-yuen. "Approximation for minimum triangulations of convex polyhedra." Hong Kong: University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23273197.

13

Isoard, Alexandre. "Extending Polyhedral Techniques towards Parallel Specifications and Approximations." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEN011/document.

Abstract:
Polyhedral techniques enable analyses and code transformations on multi-dimensional structures such as nested loops and arrays. They are usually restricted to sequential programs whose control is both affine and static. This thesis extends them to programs involving, for example, non-analyzable conditions or expressing parallelism. The first result is the extension of the analysis of live ranges and memory conflicts, for scalars and arrays, to programs with parallel or approximated specifications. In previous work on memory allocation, for which this analysis is required, the concept of time provides a total order over the instructions, and the existence of this order is an implicit requirement; we show that such analyses can be carried out on any partial order matching the parallelism of the studied program. The second result extends memory folding techniques, based on Euclidean lattices, to automatically find an appropriate basis from the set of memory conflicts. This set is often non-convex, a case that was inadequately handled by previous methods. The last result applies both preceding analyses to "pipelined" blocked computations, especially in the case of parametric block sizes. This situation gives rise to non-affine control but can be handled accurately by choosing suitable approximations. This paves the way for efficient kernel offloading to accelerators such as GPUs, FPGAs, or other dedicated circuits.
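The memory-folding idea being generalized here can be seen in one dimension as a modular mapping; a hedged sketch (ours, the one-dimensional modulo special case, not the lattice-basis search the thesis develops):

```python
def folded_index(i, d):
    """Fold array cell i onto d physical cells. Valid whenever two cells
    i, j can conflict (be live simultaneously) only if |i - j| < d:
    i % d == j % d forces |i - j| >= d, so no conflicting pair ever
    shares a physical cell."""
    return i % d

# e.g. if a[i] is written at step i and last read at step i + 3, cells at
# distance >= 4 never conflict, so 4 physical cells suffice: folded_index(i, 4).
```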
14

Mohamed, Sidi Mohamed Ahmed. "K-Separator problem." Thesis, Evry, Institut national des télécommunications, 2014. http://www.theses.fr/2014TELE0032/document.

Abstract:
Let G be a vertex-weighted undirected graph and k a positive integer. We aim to compute a minimum-weight subset of vertices whose removal leads to a graph in which the size of each connected component is at most k. If k = 1, we get the classical vertex cover problem. Many formulations are proposed for the problem, and the linear relaxations of these formulations are theoretically compared. A polyhedral study is presented (valid inequalities, facets, separation algorithms). It is shown that the problem can be solved in polynomial time for many special cases, including paths, cycles, and trees, and also for graphs not containing certain induced subgraphs. Some (k + 1)-approximation algorithms are also exhibited; most of the algorithms are implemented and compared. The k-separator problem has many applications. If all vertex weights are equal to 1, the size of a minimum k-separator can be used to evaluate the robustness of a graph or a network. Another application consists in partitioning a graph or network into different subgraphs with respect to different criteria. For example, in the context of social networks, many approaches are proposed to detect communities: by solving a minimum k-separator problem, we get different connected components that may represent communities, while the k-separator vertices represent persons making connections between communities. The k-separator problem can then be seen as a special graph partitioning/clustering problem.
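A hedged checker (ours, plain BFS, not one of the thesis algorithms) makes the feasibility condition concrete: a vertex set is a k-separator exactly when its removal leaves only components of size at most k.

```python
from collections import deque

def is_k_separator(adj, separator, k):
    """adj: dict mapping each vertex to an iterable of its neighbours."""
    seen = set(separator)
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        size, queue = 0, deque([start])
        while queue:                      # BFS one component of G - separator
            v = queue.popleft()
            size += 1
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    queue.append(u)
        if size > k:
            return False
    return True
```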
15

Tjandraatmadja, Christian. "O problema da subsequência comum máxima sem repetições." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-13092010-093911/.

Abstract:
We explore the following problem: given two sequences X and Y over a finite alphabet, find a longest common subsequence of X and Y without repeated symbols. We study the structure of this problem, particularly from the point of view of graphs and polyhedral combinatorics, and develop approximation algorithms and heuristics for it. The focus of this work is the construction of an algorithm based on the branch-and-cut technique, taking advantage of an efficient separation algorithm and of heuristics and techniques to find an optimal solution earlier. We also study the easier problem on which this one is based: given two sequences X and Y over a finite alphabet, find a longest common subsequence of X and Y. We explore this problem from the point of view of polyhedral combinatorics and describe several known algorithms to solve it.
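For contrast with the repetition-free variant, the underlying longest-common-subsequence problem admits the textbook dynamic program below (a minimal sketch, ours); the repetition-free constraint breaks this recurrence, which motivates the polyhedral approach above.

```python
def lcs_length(X, Y):
    """O(len(X) * len(Y)) dynamic program for the LCS length."""
    m, n = len(X), len(Y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("polyhedral", "polynomial"))  # -> 6 ("polyal")
```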
16

Kokotin, Valentin. "Polyhedra-based analysis of computer simulated amorphous structures." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-38597.

Abstract:
Bulk metallic glasses represent a newly developed class of materials. Some metallic glasses possess combinations of very good or even excellent mechanical, chemical and/or magnetic properties, opening up a broad range of industrial applications. Besides all their advantages, metallic glasses also have significant drawbacks which have to be overcome for commercial application. Apart from low critical thicknesses, brittleness and chemical inhomogeneity, one important problem of metallic glasses is the lack of an appropriate theory describing their structure. Therefore, the search for new glass-forming compositions, as well as the improvement of existing ones, currently proceeds by means of trial-and-error methods and a number of empirical rules. Empirical rules for good glass-forming ability of bulk metallic glasses have been established in recent years by Inoue and Egami. Two of these rules, (i) preference for more than 3 elements and (ii) need for more than 12 % radii difference of the base elements, seem to be closely related to topological (geometrical) criteria; from this point of view, topological parameters contribute essentially to the glass-forming ability. The third rule (iii) demands a negative mixing enthalpy of the base elements and refers to the chemical interaction of the atoms. The generalized Bernal model (hard-sphere approximation) was used for the simulation of monatomic, binary and multi-component structures. By excluding chemical interaction, this method allows the investigation of topological criteria of the glass-forming ability. Bernal's hard-sphere model was shown to be a good approximation for bulk metallic glasses and metallic liquids and yields good agreement between experimental and theoretical results.
• The Laguerre (weighted Voronoi) tessellation technique was used as the main tool for the structural analysis, since the very complex structures of bulk metallic glasses cannot be determined by means of standard crystallographic methods.
• Density, radial distribution function, coordination number and Laguerre polyhedra analyses confirm the amorphism of the simulated structures and are in good agreement with available experimental results.
• The ratio of the fractions of non-crystalline to crystalline Laguerre polyhedra faces was introduced as a new parameter. This parameter reflects the total non-crystallinity of a structure and the amount of atomic rearrangement necessary for crystallization, and is thus related to the glass-forming ability. It depends strongly on composition and atomic size ratio and indicates a region of enhanced glass-forming ability in binary mixtures at 80 % of small atoms and an atomic size ratio of 1.3. All maxima of the parameter found for ternary mixtures have compositions and size ratios nearly the same as for the binary mixture with the maximal parameter value.
• A new method of multiple compression was introduced in order to test the tendency towards densification and/or crystallization of the simulated mixtures. The results of the multiple compression of monatomic mixtures indicate a limiting value of about 0.6464 for the density of the amorphous state; further densification is necessarily connected to the formation and growth of nano-crystalline regions.
• The results of the multiple compression for binary mixtures show a new maximum of the density at a size ratio of 1.3 and 30 % to 90 % of small atoms. This maximum indicates a local island of stability of the amorphous state: the maximal density attainable without crystallization in this region is enhanced compared to neighbouring regions.
• The comparison of the parameter and the density with the distribution of known binary bulk metallic (metal-metal) glasses clearly shows that both quantities play a significant role in the glass-forming ability.
• The polyhedra analysis shows regions with an enhanced fraction of icosahedral short-range order (polyhedron (0, 0, 12)) in the binary systems, with the maximum at 80 % of small atoms and a size ratio of 1.3. Comparison of the distribution of the (0, 0, 12) polyhedra with the distribution of known binary metallic (metal-metal) glasses and with the parameter shows that icosahedral short-range order is not itself related to the glass-forming ability but is a consequence of the high non-crystallinity (high parameter values) of the mixtures, and not vice versa. Results for the ternary mixtures confirm this observation.
• A new approach for the calculation of the mixing enthalpy is proposed, based on the combination of Miedema's semi-empirical model and the Laguerre tessellation technique. The new method, as well as six other methods including the original Miedema model, was tested on more than 1400 ternary and quaternary alloys; the results show better agreement with experimental values of the mixing enthalpy for the new model than for all other methods. The new model takes the local structure at each atom site into account and can be applied to all metallic alloys without additional extrapolation, provided the atomic structure of the considered alloy is known from a suitable atomistic structure model.
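Of the diagnostics listed above, the radial distribution function is the most self-contained; a minimal sketch (ours, for N points in a periodic cubic box of side L, not the thesis analysis code):

```python
import numpy as np

def rdf(points, L, bins=100):
    """g(r) estimate: histogram of pairwise minimum-image distances,
    normalised by the expected pair count of an ideal (uniform) gas.
    `points` is an (n, 3) array of coordinates in a box of side L."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    d = points[:, None, :] - points[None, :, :]
    d -= L * np.round(d / L)                        # minimum-image convention
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]
    counts, edges = np.histogram(r, bins=bins, range=(0.0, L / 2))
    shells = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = shells * n * (n - 1) / 2.0 / L ** 3     # uniform-gas pair counts
    return 0.5 * (edges[:-1] + edges[1:]), counts / ideal
```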
17

Coelho, Rafael Santos. "The k-hop connected dominating set problem: approximation algorithms and hardness results." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-27062017-101521/.

Abstract:
Let G be a connected graph and k a positive integer. A vertex subset D of G is a k-hop connected dominating set if the subgraph of G induced by D is connected and, for every vertex v in G, there is a vertex u in D such that the distance between v and u in G is at most k. We study the problem of finding a minimum k-hop connected dominating set of a graph (Mink-CDS). We prove that Mink-CDS is NP-hard on planar bipartite graphs of maximum degree 4. We also prove that Mink-CDS is APX-complete on bipartite graphs of maximum degree 4. We present inapproximability thresholds for Mink-CDS on bipartite and on (1, 2)-split graphs. Interestingly, one of these thresholds is a parameter of the input graph which is not a function of its number of vertices; we also discuss the complexity of computing this graph parameter. On the positive side, we show an approximation algorithm for Mink-CDS. When k = 1, we present two new approximation algorithms for the weighted version of the problem, one of them restricted to graphs with a polynomially bounded number of minimal separators. Finally, also for the weighted variant with k = 1, we discuss an integer linear programming formulation and conduct a polyhedral study of its associated polytope.
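A hedged checker (ours, plain BFS, not the thesis approximation algorithms) spells out the two defining conditions, induced connectivity of D and k-hop domination:

```python
from collections import deque

def _bfs(adj, sources, allowed=None):
    """Multi-source BFS distances, optionally restricted to `allowed`."""
    dist = {s: 0 for s in sources}
    queue = deque(sources)
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist and (allowed is None or u in allowed):
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

def is_khop_cds(adj, D, k):
    D = set(D)
    assert D, "D must be non-empty"
    # (i) the subgraph induced by D is connected
    connected = len(_bfs(adj, [next(iter(D))], allowed=D)) == len(D)
    # (ii) every vertex of G is within distance k of some vertex of D
    dist = _bfs(adj, list(D))
    return connected and all(v in dist and dist[v] <= k for v in adj)
```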
18

Lima, Karla Roberta Pereira Sampaio. "Recoloração convexa de caminhos." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-23012012-144246/.

Abstract:
The focus of this thesis is the design of algorithms for the convex recoloring problem on paths. In this problem, the instance consists of a path whose vertices are arbitrarily colored, and the objective is to recolor the least number of vertices so as to obtain a convex coloring. A coloring of a graph is convex if, for each color, the subgraph induced by the vertices of this color is connected. This problem is known to be NP-hard. We associate a polyhedron to the problem and investigate its facial structure: we show various classes of valid inequalities for this polyhedron and prove that many of them define facets. We present a dynamic programming algorithm that solves, in polynomial time, the separation problem for a large class of facet-defining inequalities, and we report on computational experiments with a branch-and-cut algorithm that we propose for the problem. Additionally, we present a heuristic that is based on a linear formulation of the problem. We also study a special case, restricted to instances consisting of colored paths in which each color occurs at most twice. For this case, which is also NP-hard, we present a 3/2-approximation algorithm; for the general case, a 2-approximation algorithm is known.
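On a path, convexity of a coloring amounts to each color occupying one contiguous block; a minimal sketch (ours) of that feasibility test, which is the property a recoloring must restore:

```python
def is_convex_path_coloring(colors):
    """True iff no color reappears after having been interrupted."""
    seen, prev = set(), None
    for c in colors:
        if c != prev:
            if c in seen:      # color resumes after an interruption
                return False
            seen.add(c)
            prev = c
    return True

assert is_convex_path_coloring([1, 1, 2, 2, 3])
assert not is_convex_path_coloring([1, 2, 1])   # one recoloring fixes it
```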
19

Wardetzky, Max. "Discrete differential operators on polyhedral surfaces: convergence and approximation." 2006. http://d-nb.info/988494132/34.

20

Montiel, Cendejas Luis Vicente. "Approximations, simulation, and accuracy of multivariate discrete probability distributions in decision analysis." Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-05-5031.

Abstract:
Many important decisions must be made without full information. For example, a woman may need to make a treatment decision regarding breast cancer without full knowledge of important uncertainties, such as how well she might respond to treatment. In the financial domain, in the wake of the housing crisis, the government may need to monitor the credit market and decide whether to intervene. A key input in this case would be a model to describe the chance that one person (or company) will default given that others have defaulted. However, such a model requires addressing the lack of knowledge regarding the correlation between groups or individuals. How to model and make decisions in cases where only partial information is available is a significant challenge. In the past, researchers have made arbitrary assumptions regarding the missing information. In this research, we developed a modeling procedure that can be used to analyze many possible scenarios subject to strict conditions. Specifically, we developed a new Monte Carlo simulation procedure to create a collection of joint probability distributions, all of which match whatever information we have. Using this collection of distributions, we analyzed the accuracy of different approximations such as maximum entropy or copula models. In addition, we proposed several new approximations that outperform previous methods. The objective of this research is four-fold. First, to provide a new framework for approximation models: we presented four new models to approximate joint probability distributions based on geometric attributes and compared their performance to existing methods. Second, to develop a new joint distribution simulation procedure (JDSIM) to sample joint distributions from the set of all possible distributions that match available information; this procedure can then be applied to different scenarios to analyze the sensitivity of a decision or to test the accuracy of an approximation method. Third, to test the accuracy of seven approximation methods under a variety of circumstances. Specifically, we addressed the following questions within the context of multivariate discrete distributions: Are there new approximations that should be considered? Which approximation is the most accurate, according to different measures? How accurate are the approximations as the number of random variables increases? How accurate are they as we change the underlying dependence structure? How does accuracy improve as we add lower-order assessments? What are the implications of these findings for decision analysis practice and research? While these questions are easy to pose, they are challenging to answer. For decision analysis, the answers open a new avenue for addressing partial information, which brings us to the last contribution. Fourth, to propose a new approach to decision making with partial information. The exploration of old and new approximations and the capability of creating large collections of joint distributions that match expert assessments provide new tools that extend the field of decision analysis. In particular, we presented two sample cases that illustrate the scope of this work and its impact on uncertain decision making.
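One standard way to realize a maximum-entropy style approximation of the kind compared above is iterative proportional fitting; a hedged sketch (ours, not the dissertation's procedure): started from a uniform table it recovers the independent, maximum-entropy joint, while a non-uniform seed yields the closest table, in KL divergence, to that seed with the prescribed marginals.

```python
import numpy as np

def fit_marginals(seed, px, py, iters=200):
    """Rescale `seed` until its row/column marginals are px and py."""
    P = np.array(seed, dtype=float)
    P /= P.sum()
    for _ in range(iters):
        P *= (px / P.sum(axis=1))[:, None]   # match the x-marginal
        P *= (py / P.sum(axis=0))[None, :]   # match the y-marginal
    return P

px, py = np.array([0.2, 0.8]), np.array([0.5, 0.5])
print(fit_marginals(np.ones((2, 2)), px, py))  # -> np.outer(px, py)
```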