Dissertations / Theses on the topic 'Partition tree'

Consult the top 36 dissertations / theses for your research on the topic 'Partition tree.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Ghanbari, Shirin. "Multi-Dimensional Binary Partition Tree for Content Retrieval." Thesis, University of Essex, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.520046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sundberg, Kenneth A. "Partition Based Phylogenetic Search." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2583.

Full text
Abstract:
Evolutionary relationships are key to modern understanding of biological systems. Phylogenetic search is the means by which these relationships are inferred. Phylogenetic search is NP-Hard. As such it is necessary to employ heuristic methods. This work proposes new methods based on viewing the relationships between species as sets of partitions. These methods produce more parsimonious phylogenies than current methods.
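To make the partition view concrete: an unrooted phylogeny is often identified with the set of leaf bipartitions ("splits") that its internal edges induce. A minimal Python sketch of that standard construction (the tree encoding and names are illustrative, not taken from the thesis):

```python
# Minimal sketch: the set of bipartitions ("splits") induced by the edges of
# an unrooted phylogenetic tree, a standard way to compare trees as sets of
# partitions. Tree encoding and names are illustrative, not from the thesis.

def splits(adj, leaves):
    """Return the non-trivial bipartitions of `leaves` induced by tree edges."""
    result, seen = set(), set()

    def side(u, v):
        """Leaves reachable from v without crossing the edge (u, v)."""
        found, stack, visited = set(), [v], {u, v}
        while stack:
            x = stack.pop()
            if x in leaves:
                found.add(x)
            for y in adj[x]:
                if y not in visited:
                    visited.add(y)
                    stack.append(y)
        return frozenset(found)

    for u in adj:
        for v in adj[u]:
            if (v, u) in seen:
                continue                      # each edge only once
            seen.add((u, v))
            part = side(u, v)
            if 1 < len(part) < len(leaves) - 1:   # ignore trivial splits
                result.add(min(part, frozenset(leaves) - part, key=sorted))
    return result

# Unrooted tree ((A,B),(C,D)) with internal nodes 1 and 2.
adj = {'A': ['1'], 'B': ['1'], 'C': ['2'], 'D': ['2'],
       '1': ['A', 'B', '2'], '2': ['C', 'D', '1']}
print(splits(adj, {'A', 'B', 'C', 'D'}))   # {frozenset({'A', 'B'})}
```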
APA, Harvard, Vancouver, ISO, and other styles
3

Sudirman. "Colour image coding indexing and retrieval using binary space partition tree." Thesis, University of Nottingham, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Agarwal, Khushbu. "A partition based approach to approximate tree mining a memory hierarchy perspective /." Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1196284256.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Agarwal, Khushbu. "A partition based approach to approximate tree mining : a memory hierarchy perspective." The Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=osu1196284256.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tan, Kunlun. "On the Role of Partition Inequalities in Classical Algorithms for Steiner Problems in Graphs." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/1123.

Full text
Abstract:
The Steiner tree problem is a classical, well-studied, $\mathcal{NP}$-hard optimization problem. Here we are given an undirected graph $G=(V,E)$, a subset $R$ of $V$ of terminals, and non-negative costs $c_e$ for all edges $e$ in $E$. A feasible Steiner tree for a given instance is a tree $T$ in $G$ that spans all terminals in $R$. The goal is to compute a feasible Steiner tree of smallest cost. In this thesis we will focus on approximation algorithms for this problem: a $c$-approximation algorithm is an algorithm that returns a tree of cost at most $c$ times that of an optimum solution for any given input instance.

In a series of papers throughout the last decade, the approximation guarantee $c$ for the Steiner tree problem has been improved to the currently best known value of 1.55 (Robins, Zelikovsky). Robins and Zelikovsky's algorithm, like most of its predecessors, is a greedy algorithm.

Apart from algorithmic improvements, there also has been substantial work on obtaining tight linear-programming relaxations for the Steiner tree problem. Many undirected and directed formulations have been proposed in the course of the last 25 years; their use, however, is to this point mostly restricted to the field of exact optimization. There are few examples of algorithms for the Steiner tree problem that make use of these LP relaxations. The best known such algorithm for general graphs is a 2-approximation (for the more general Steiner forest problem) due to Agrawal, Klein and Ravi. Their analysis is tight as the LP-relaxation used in their work is known to be weak: it has an IP/LP gap of approximately 2.

Most recent efforts to obtain algorithms for the Steiner tree problem that are based on LP-relaxations have focused on directed relaxations. In this thesis we present an undirected relaxation and show that the algorithm of Robins and Zelikovsky returns a Steiner tree whose cost is at most 1.55 times its optimum solution value. In fact, we show that this algorithm can be viewed as a primal-dual algorithm.

The Steiner forest problem is a generalization of the Steiner tree problem: instead of a single set of terminals, we are given several terminal sets. A feasible Steiner forest is a forest that, for each terminal set, connects all terminals belonging to that set. The goal is to find a minimum-cost feasible Steiner forest. In this thesis, a new set of facet-defining inequalities for the Steiner forest polyhedron is introduced.
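For context on approximation factors like those discussed above, the classic metric-closure construction (Kou-Markowsky-Berman: shortest-path distances between terminals, then a spanning tree on that complete graph) achieves factor 2 for the Steiner tree problem. A self-contained sketch on a toy instance of my own, not a reproduction of the thesis's primal-dual algorithm:

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src in an undirected weighted graph."""
    dist, heap = {src: 0}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def steiner_2approx_cost(graph, terminals):
    """Cost of the metric-closure MST over the terminals: at most twice the
    optimal Steiner tree cost (Kou-Markowsky-Berman bound)."""
    dist = {t: dijkstra(graph, t) for t in terminals}
    in_tree, cost = {terminals[0]}, 0
    while len(in_tree) < len(terminals):       # Prim's MST over terminals
        w, t = min((dist[u][v], v) for u in in_tree
                   for v in terminals if v not in in_tree)
        in_tree.add(t)
        cost += w
    return cost

# Toy instance: terminals a, c, d; b is an optional Steiner vertex.
g = {'a': [('b', 1)], 'b': [('a', 1), ('c', 1), ('d', 1)],
     'c': [('b', 1)], 'd': [('b', 1)]}
print(steiner_2approx_cost(g, ['a', 'c', 'd']))  # 4 (optimum is 3, via b)
```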
APA, Harvard, Vancouver, ISO, and other styles
7

Valero, Valbuena Silvia. "Arbre de partition binaire : un nouvel outil pour la représentation hiérarchique et l’analyse des images hyperspectrales." Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENT123/document.

Full text
Abstract:
Abstract not provided by the doctoral candidate.
The optimal exploitation of the information provided by hyperspectral images requires the development of advanced image processing tools. Therefore, under the title Hyperspectral image representation and Processing with Binary Partition Trees, this PhD thesis proposes the construction and the processing of a new region-based hierarchical hyperspectral image representation: the Binary Partition Tree (BPT). This hierarchical region-based representation can be interpreted as a set of hierarchical regions stored in a tree structure. Hence, the Binary Partition Tree succeeds in presenting: (i) the decomposition of the image in terms of coherent regions and (ii) the inclusion relations of the regions in the scene. Based on region-merging techniques, the construction of BPT is investigated in this work by studying hyperspectral region models and the associated similarity metrics. As a matter of fact, the very high dimensionality and the complexity of the data require the definition of specific region models and similarity measures. Once the BPT is constructed, the fixed tree structure allows implementing efficient and advanced application-dependent techniques on it. The application-dependent processing of BPT is generally implemented through a specific pruning of the tree. Accordingly, some pruning techniques are proposed and discussed according to different applications. This PhD is focused in particular on segmentation, object detection and classification of hyperspectral imagery. Experimental results on various hyperspectral data sets demonstrate the interest and the good performances of the BPT representation.
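The region-merging construction the abstract describes can be sketched generically: repeatedly merge the most similar pair of adjacent regions into a new parent node until a single region remains. The mean-vector model and Euclidean similarity below are simple stand-ins for the hyperspectral region models studied in the thesis:

```python
import numpy as np

def build_bpt(values, adjacency):
    """Leaves are the initial regions; each iteration merges the most similar
    pair of adjacent regions into a new parent node until one region remains.
    Region model: size-weighted mean vector; similarity: Euclidean distance.
    These are stand-ins for the hyperspectral models studied in the thesis."""
    model = {i: np.asarray(v, float) for i, v in enumerate(values)}
    size = {i: 1 for i in model}
    adj = {i: set(ns) for i, ns in adjacency.items()}
    parent, next_id = {}, len(model)
    while len(adj) > 1:
        a, b = min(((i, j) for i in adj for j in adj[i] if i < j),
                   key=lambda e: np.linalg.norm(model[e[0]] - model[e[1]]))
        m = next_id
        next_id += 1
        size[m] = size[a] + size[b]
        model[m] = (size[a] * model[a] + size[b] * model[b]) / size[m]
        parent[a] = parent[b] = m
        adj[m] = (adj[a] | adj[b]) - {a, b}     # neighbours of the new region
        for n in adj[m]:
            adj[n].discard(a)
            adj[n].discard(b)
            adj[n].add(m)
        del adj[a], adj[b]
    return parent  # child -> parent links encode the binary tree

# Four pixels in a row with 2-band values; 0-1 and 2-3 merge first.
vals = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
adjc = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(build_bpt(vals, adjc))  # e.g. {0: 4, 1: 4, 2: 5, 3: 5, 4: 6, 5: 6}
```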
APA, Harvard, Vancouver, ISO, and other styles
8

Rattan, Amarpreet. "Parking Functions and Related Combinatorial Structures." Thesis, University of Waterloo, 2001. http://hdl.handle.net/10012/1028.

Full text
Abstract:
The central topic of this thesis is parking functions. We give a survey of some of the current literature concerning parking functions and focus on their interaction with other combinatorial objects; namely noncrossing partitions, hyperplane arrangements and tree inversions. In the final chapter, we discuss generalizations of both parking functions and the above structures.
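Parking functions admit a one-line test that makes the definition concrete: a sequence with values in {1..n} is a parking function exactly when its increasing rearrangement b satisfies b_i <= i. A tiny sketch (examples mine):

```python
def is_parking_function(a):
    """(a_1..a_n) with 1 <= a_i <= n is a parking function iff, after sorting
    increasingly, the i-th value is at most i (1-indexed)."""
    return all(v <= i for i, v in enumerate(sorted(a), start=1))

print(is_parking_function([3, 1, 1]))  # True  (sorted: 1, 1, 3)
print(is_parking_function([3, 3, 1]))  # False (sorted: 1, 3, 3 -> 3 > 2)
# There are (n+1)^(n-1) parking functions of length n, e.g. 16 for n = 3.
```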
APA, Harvard, Vancouver, ISO, and other styles
9

Howell, Gareth. "Normalised distance function considered over the partition of the unit interval generated by the points of the Farey tree." Thesis, Cardiff University, 2010. http://orca.cf.ac.uk/55096/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Law, Hiu-Fai. "Trees and graphs : congestion, polynomials and reconstruction." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:54190b51-cd9d-489e-a79e-82ecdf15b4c5.

Full text
Abstract:
Spanning tree congestion was defined by Ostrovskii (2004) as a measure of how well a network can perform if only minimal connection can be maintained. We compute the parameter for several families of graphs. In particular, by partitioning a hypercube into pieces with almost optimal edge-boundaries, we give tight estimates of the parameter thereby disproving a conjecture of Hruska (2008). For a typical random graph, the parameter exhibits a zigzag behaviour reflecting the feature that it is not monotone in the number of edges. This motivates the study of the most congested graphs where we show that any graph is close to a graph with small congestion. Next, we enumerate independent sets. Using the independent set polynomial, we compute the extrema of averages in trees and graphs. Furthermore, we consider inverse problems among trees and resolve a conjecture of Wagner (2009). A result in a more general setting is also proved which answers a question of Alon, Haber and Krivelevich (2011). After briefly considering polynomial invariants of general graphs, we specialize into trees. Three levels of tree distinguishing power are exhibited. We show that polynomials which do not distinguish rooted trees define typically exponentially large equivalence classes. On the other hand, we prove that the rooted Ising polynomial distinguishes rooted trees and that the Negami polynomial determines the subtree polynomial, strengthening results of Bollobás and Riordan (2000) and Martin, Morin and Wagner (2008). The top level consists of the chromatic symmetric function and it is proved to be a complete invariant for caterpillars.
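The spanning tree congestion parameter is easy to state in code: for a fixed spanning tree, each tree edge defines a cut, and the congestion is the maximum number of graph edges crossing such a cut. A small sketch for a given tree (minimizing over all spanning trees is the hard part the thesis addresses):

```python
def tree_congestion(graph_edges, tree_edges, vertices):
    """Congestion of a given spanning tree T of G: for each tree edge e,
    count the graph edges crossing the cut that removing e creates in T;
    return the maximum. (The spanning tree congestion of G is the minimum
    of this quantity over all spanning trees.)"""
    def component(removed, start):
        adj = {v: [] for v in vertices}
        for u, v in tree_edges:
            if (u, v) != removed:
                adj[u].append(v)
                adj[v].append(u)
        seen, stack = {start}, [start]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen

    worst = 0
    for e in tree_edges:
        side = component(e, e[0])
        crossing = sum((u in side) != (v in side) for u, v in graph_edges)
        worst = max(worst, crossing)
    return worst

# 4-cycle with a path spanning tree: every tree edge has congestion 2.
g = [(1, 2), (2, 3), (3, 4), (4, 1)]
t = [(1, 2), (2, 3), (3, 4)]
print(tree_congestion(g, t, {1, 2, 3, 4}))  # 2
```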
APA, Harvard, Vancouver, ISO, and other styles
11

Moragas, Vilarnau Jordi. "Graph labelings and decompositions by partitioning sets of integers." Doctoral thesis, Universitat Politècnica de Catalunya, 2010. http://hdl.handle.net/10803/5859.

Full text
Abstract:
This work is a contribution to the study of various problems that arise from two strongly connected areas of the Graph Theory: graph labelings and graph decompositions. Most graph labelings trace their origins to the ones presented in 1967 by Rosa. One of these labelings, widely known as the graceful labeling, originated as a means of attacking the conjecture of Ringel, which states that the complete graph of order 2m+1 can be decomposed into m copies of a given tree of size m. Here, we study related labelings that give some approaches to Ringel's conjecture, as well as to another conjecture by Graham and Häggkvist that, in a weak form, asks for the decomposition of a complete bipartite graph by a given tree of appropriate size.
Our main contributions in this topic are the proof of the latter conjecture for double sized complete bipartite graphs being decomposed by trees with large growth and prime number of edges, and the proof of the fact that every tree is a large subtree of two trees for which both conjectures hold respectively. These results are mainly based on a novel application of the so-called polynomial method by Alon.
Another kind of labelings, the magic labelings, are also treated. Motivated by the notion of magic squares in Number Theory, in this type of labeling we want to assign integers to the parts of a graph (vertices, edges, or vertices and edges) in such a way that the sums of the labels assigned to certain substructures of the graph remain constant. We develop techniques based on partitions of certain sets of integers with some additive conditions to construct cycle-magic labelings, a new kind of labeling introduced in this work that extends the classical magic labelings. Magic labelings do not provide any graph decomposition, but the techniques that we use to obtain them are the core of another decomposition problem, the ascending subgraph decomposition (ASD).
In 1987, Alavi, Boals, Chartrand, Erdös and Oellermann conjectured that every graph has an ASD. Here, we study the ASD of bipartite graphs, a class of graphs for which the conjecture has not been shown to hold. We give a necessary and a sufficient condition on the one-sided degree sequence of a bipartite graph in order that it admits an ASD by star forests. Here the techniques are based on the existence of edge-colorings in bipartite multigraphs.
Motivated by the ASD conjecture we also deal with the sumset partition problem, which asks for a partition of [n] in such a way that the sum of the elements of each part is equal to a prescribed value. We give a best possible condition for the modular version of the sumset partition problem that allows us to prove the best known results in the integer case for n a prime. The proof is again based on the polynomial method.
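The graceful labelings behind Ringel's conjecture are easy to verify, which makes the definition concrete; a tiny checker with an example of my own:

```python
def is_graceful(labels, edges):
    """A labeling of a tree with m edges is graceful when the vertex labels
    are distinct values in {0..m} and the absolute differences across the
    m edges are exactly {1..m}."""
    m = len(edges)
    vals = list(labels.values())
    diffs = {abs(labels[u] - labels[v]) for u, v in edges}
    return (len(set(vals)) == len(vals)
            and all(0 <= x <= m for x in vals)
            and diffs == set(range(1, m + 1)))

# A graceful labeling of the path on 4 vertices (3 edges).
labels = {'a': 0, 'b': 3, 'c': 1, 'd': 2}
edges = [('a', 'b'), ('b', 'c'), ('c', 'd')]
print(is_graceful(labels, edges))  # True: differences 3, 2, 1
```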
APA, Harvard, Vancouver, ISO, and other styles
12

Vilaplana, Besler Verónica. "Region-based face detection, segmentation and tracking. framework definition and application to other objects." Doctoral thesis, Universitat Politècnica de Catalunya, 2010. http://hdl.handle.net/10803/33330.

Full text
Abstract:
One of the central problems in computer vision is the automatic recognition of object classes. In particular, the detection of the class of human faces is a problem that generates special interest due to the large number of applications that require face detection as a first step. In this thesis we approach the problem of face detection as a joint detection and segmentation problem, in order to precisely localize faces with pixel accurate masks. Even though this is our primary goal, in finding a solution we have tried to create a general framework as independent as possible of the type of object being searched. For that purpose, the technique relies on a hierarchical region-based image model, the Binary Partition Tree, where objects are obtained by the union of regions in an image partition. In this work, this model is optimized for the face detection and segmentation tasks. Different merging and stopping criteria are proposed and compared through a large set of experiments. In the proposed system the intra-class variability of faces is managed within a learning framework. The face class is characterized using a set of descriptors measured on the tree nodes, and a set of one-class classifiers. The system is formed by two strong classifiers. First, a cascade of binary classifiers simplifies the search space, and afterwards, an ensemble of more complex classifiers performs the final classification of the tree nodes. The system is extensively tested on different face data sets, producing accurate segmentations and proving to be quite robust to variations in scale, position, orientation, lighting conditions and background complexity. We show that the technique proposed for faces can be easily adapted to detect other object classes. Since the construction of the image model does not depend on any object class, different objects can be detected and segmented using the appropriate object model on the same image model. New object models can be easily built by selecting and training a suitable set of descriptors and classifiers. Finally, a tracking mechanism is proposed. It combines the efficiency of the mean-shift algorithm with the use of regions to track and segment faces through a video sequence, where both the face and the camera may move. The method is extended to deal with other deformable objects, using a region-based graph-cut method for the final object segmentation at each frame. Experiments show that both mean-shift based trackers produce accurate segmentations even in difficult scenarios such as those with similar object and background colors and fast camera and object movements.
APA, Harvard, Vancouver, ISO, and other styles
13

Yang, Duotong. "A Data Analytics Framework for Regional Voltage Control." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78712.

Full text
Abstract:
Modern power grids are some of the largest and most complex engineered systems. Due to economic competition and deregulation, power systems are operated closer to their security limits. When the system is operating under a heavy loading condition, an unstable voltage condition may cause a cascading outage. Voltage fluctuations are presently being further aggravated by the increasing integration of utility-scale renewable energy sources. In this regard, a fast-responding and reliable voltage control approach is indispensable. The continuing success of synchrophasor technology has ushered in new subdomains of power system applications for real-time situational awareness, online decision support, and offline system diagnostics. The primary objective of this dissertation is to develop a data-analytics-based framework for regional voltage control utilizing high-speed data streams delivered from synchronized phasor measurement units. The dissertation focuses on the following three studies. The first is centered on the development of decision-tree based voltage security assessment and control. The second proposes an adaptive decision tree scheme using online ensemble learning to update the decision model in real time. A system network partition approach is introduced in the last study; its aim is to reduce the size of the training sample database and the number of control candidates for each regional voltage controller. The methodologies proposed in this dissertation are evaluated based on an open source software framework.
Ph. D.
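As a toy-scale illustration of decision-tree based security assessment, a scikit-learn tree classifier can be trained on labeled operating points. The features, data and labels below are invented for illustration and are not from the dissertation:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [bus voltage magnitude (p.u.), reactive power reserve (MVar)].
# Hypothetical operating points labeled secure/insecure (not real PMU data).
X = [[1.02, 80], [1.00, 60], [0.97, 30], [0.95, 10], [0.93, 5], [1.01, 70]]
y = ['secure', 'secure', 'insecure', 'insecure', 'insecure', 'secure']

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(clf, feature_names=['V_mag', 'Q_reserve']))
print(clf.predict([[0.96, 20]]))  # e.g. ['insecure']
```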
APA, Harvard, Vancouver, ISO, and other styles
14

Schliep, Klaus Peter. "Some applications of statistical phylogenetics : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Biomathematics at Massey University." Massey University, 2009. http://hdl.handle.net/10179/931.

Full text
Abstract:
The increasing availability of molecular data means that phylogenetic studies nowadays often use datasets which combine a large number of loci for many different species. This leads to a trade-off. On the one hand more complex models are preferred to account for heterogeneity in evolutionary processes. On the other hand simple models that can answer biological questions of interest, that are easy to interpret and can be computed in reasonable time are favoured. This thesis focuses on four cases of phylogenetic analysis which arise from this conflict.
- It is shown that edge weight estimates can be non-identifiable if the data are simulated under a mixture model. Even if the underlying process is known, the estimation and interpretation may be difficult due to the high variance of the parameters of interest.
- Partition models are commonly used to account for heterogeneity in data sets. Novel methods are presented here which allow grouping of genes under similar evolutionary constraints. A data set containing 14 genes of the chloroplast from 19 anciently diverged species is used to find groups of co-evolving genes. The prospects and limitations of such methods are discussed.
- Penalised likelihood estimation is a useful tool for improving the performance of models and allowing for variable selection. A novel approach is presented that uses pairwise dissimilarities to visualise the data as a network. It is further shown how penalised likelihood can be used to decrease the variance of parameter estimates for mixture and partition models, allowing a more reliable analysis. Estimates for the variance and the expected number of parameters of penalised likelihood estimates are derived.
- Tree shape statistics are used to describe speciation events in macroevolution. A new tree shape statistic is introduced and the biases of different cluster methods on tree shape statistics are discussed.
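As a concrete example of a tree shape statistic (a classical one, not the new statistic introduced in the thesis), the Colless imbalance index sums, over internal nodes, the difference between the two child subtrees' leaf counts:

```python
def colless(tree):
    """Colless imbalance of a rooted binary tree given as nested pairs,
    e.g. (('a', 'b'), ('c', 'd')): the sum over internal nodes of the
    absolute difference between the leaf counts of the two child subtrees."""
    def walk(node):
        if not isinstance(node, tuple):      # a leaf
            return 1, 0
        (l_leaves, l_score), (r_leaves, r_score) = walk(node[0]), walk(node[1])
        return l_leaves + r_leaves, l_score + r_score + abs(l_leaves - r_leaves)
    return walk(tree)[1]

print(colless((('a', 'b'), ('c', 'd'))))   # 0: perfectly balanced
print(colless(((('a', 'b'), 'c'), 'd')))   # 3: fully unbalanced "caterpillar"
```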
APA, Harvard, Vancouver, ISO, and other styles
15

Broutin, Nicolas. "Random trees, graphs and recursive partitions." Habilitation à diriger des recherches, Université Pierre et Marie Curie - Paris VI, 2013. http://tel.archives-ouvertes.fr/tel-00842019.

Full text
Abstract:
In this memoir I present my work on scaling limits of large random structures. The aim is to describe combinatorial structures in the limit of large sizes, taking an objective point of view in the sense that we seek limits of the objects themselves, not only of characteristic parameters (even if this is not always the case in the results I present). The general setting is that of critical structures, for which characteristic distances are typically polynomial in the size and not concentrated. With few exceptions, these structures are generally not suited to computer science applications. They are nevertheless essential because of the universality of their asymptotic properties, proved or expected. I discuss in particular uniformly chosen random trees, random graphs, minimum spanning trees and recursive partitions of domains of the plane:
Uniform random trees. The goal here is to better understand an essential limit object, the Brownian continuum random tree (CRT). I present some convergence results for "non-branching" combinatorial models such as trees subject to symmetries and trees with fixed degree distribution. Finally, I describe a new decomposition of the CRT based on a partial destruction.
Random graphs. I describe the algorithmic construction of the scaling limit of Erdös-Rényi random graphs in the critical window, make the connection with the CRT and give constructions of the limit metric space. Minimum spanning trees. I show that a connection with random graphs makes it possible to quantify distances in a random spanning tree. One obtains not only the order of magnitude of the expected diameter, but also the scaling limit as a measured metric space. Recursive partitions. On two examples, quadtrees and laminations of the disk, I show that ideas based on fixed-point theorems lead to convergence of processes, where the limits are unusual and characterized by recursive decompositions.
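Uniformly random labeled trees, whose scaling limit is the CRT discussed above, are easy to sample through the Prüfer bijection; a small illustrative sketch (not from the memoir):

```python
import heapq
import random

def random_tree(n, seed=None):
    """Uniformly random labeled tree on vertices 0..n-1 via the Prüfer
    bijection: a uniform sequence in {0..n-1}^(n-2) decodes to a tree."""
    rng = random.Random(seed)
    if n <= 2:
        return [(0, 1)] if n == 2 else []
    prufer = [rng.randrange(n) for _ in range(n - 2)]
    degree = [1] * n
    for v in prufer:
        degree[v] += 1
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)                    # smallest current leaf first
    edges = []
    for v in prufer:
        leaf = heapq.heappop(leaves)
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    edges.append(tuple(heapq.heappop(leaves) for _ in range(2)))
    return edges

print(random_tree(6, seed=1))  # 5 edges forming a uniformly random tree
```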
APA, Harvard, Vancouver, ISO, and other styles
16

Lu, Huihai. "Evolutionary image analysis in binary partition trees." Thesis, University of Essex, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.438156.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Giró i Nieto, Xavier. "Part-based object retrieval with binary partition trees." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/108909.

Full text
Abstract:
This thesis addresses the problem of visual object retrieval, where a user formulates a query to an image database by providing one or multiple examples of an object of interest. The presented techniques aim both at finding those images in the database that contain the object as well as locating the object in the image and segmenting it from the background. Every considered image, both the ones used as queries and the ones contained in the target database, is represented as a Binary Partition Tree (BPT), the hierarchy of regions previously proposed by Salembier and Garrido (2000). This data structure offers multiple opportunities and challenges when applied to the object retrieval problem. A first application of BPTs appears during the formulation of the query, when the user must interactively segment the query object from the background. Firstly, the BPT can assist in adjusting an initial marker, such as a scribble or bounding box, to the object contours. Secondly, BPT can also define a navigation path for the user to adjust an initial selection to the appropriate spatial scale. The hierarchical structure of the BPT is also exploited to extract a new type of visual words named Hierarchical Bag of Regions (HBoR). Each region defined in the BPT is described with a feature vector that combines a soft quantization on a visual codebook with an efficient bottom-up computation through the BPT. These descriptors allow the definition of a novel feature space, the Parts Space, where each object is located according to the parts that compose it. HBoR descriptors have been applied to two scenarios for object retrieval, both of them solved by considering the decomposition of the objects in parts. In the first scenario, the query is formulated with a single object exemplar which is to be matched with each BPT in the target database. The matching problem is solved in two stages: an initial top-down one that assumes that the hierarchy from the query is respected in the target BPT, and a second bottom-up one that relaxes this condition and considers region merges which are not in the target BPT. The second scenario where HBoR descriptors are applied considers a query composed of several visual objects. In this case, the provided exemplars are considered as a training set to build a model of the query concept. This model is composed of two levels, a first one where each part is modelled and detected separately, and a second one that characterises the combinations of parts that describe the complete object. The analysis process exploits the hierarchical nature of the BPT by using a novel classifier that drives an efficient top-down analysis of the target BPTs.
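The bottom-up descriptor computation that HBoR exploits can be sketched on any binary hierarchy: a parent's codeword histogram is the sum of its children's, so a single traversal describes every region. The encoding details below are my assumptions, not the thesis's:

```python
import numpy as np

def node_histograms(parent, leaf_hist):
    """Bottom-up descriptor computation on a BPT, in the spirit of HBoR:
    each leaf holds a (soft-)quantized codeword histogram, and every inner
    node's histogram is the sum of its two children's, so all regions are
    described in one pass. Assumes parents have larger ids than children,
    as happens when merges receive fresh ids during BPT construction."""
    hist = {leaf: np.asarray(h, float) for leaf, h in leaf_hist.items()}
    for child in sorted(parent):             # children before parents
        p = parent[child]
        hist[p] = hist.get(p, 0) + hist[child]
    return hist

# Tree: leaves 0..2, node 3 = {0, 1}, node 4 = {3, 2} (the root).
parent = {0: 3, 1: 3, 3: 4, 2: 4}
leaf_hist = {0: [1, 0, 0], 1: [0, 2, 0], 2: [0, 0, 1]}
print(node_histograms(parent, leaf_hist)[4])  # [1. 2. 1.]
```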
APA, Harvard, Vancouver, ISO, and other styles
18

Valero, Valbuena Silvia. "Hyperspectral image representation and processing with binary partition trees." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/130832.

Full text
Abstract:
The optimal exploitation of the information provided by hyperspectral images requires the development of advanced image processing tools. Therefore, under the title Hyperspectral image representation and Processing with Binary Partition Trees, this PhD thesis proposes the construction and the processing of a new region-based hierarchical hyperspectral image representation: the Binary Partition Tree (BPT). This hierarchical region-based representation can be interpreted as a set of hierarchical regions stored in a tree structure. Hence, the Binary Partition Tree succeeds in presenting: (i) the decomposition of the image in terms of coherent regions and (ii) the inclusion relations of the regions in the scene. Based on region-merging techniques, the construction of BPT is investigated in this work by studying hyperspectral region models and the associated similarity metrics. As a matter of fact, the very high dimensionality and the complexity of the data require the definition of specific region models and similarity measures. Once the BPT is constructed, the fixed tree structure allows implementing efficient and advanced application-dependent techniques on it. The application-dependent processing of BPT is generally implemented through a specific pruning of the tree. Accordingly, some pruning techniques are proposed and discussed according to different applications. This Ph.D is focused in particular on segmentation, object detection and classification of hyperspectral imagery. Experimental results on various hyperspectral data sets demonstrate the interest and the good performances of the BPT representation
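The application-dependent pruning step mentioned above can be sketched generically: descend the tree, keeping a node whole when its region model passes a homogeneity test, and recursing into its children otherwise. The variance criterion below is an illustrative stand-in for the pruning criteria studied in the thesis:

```python
def prune(children, criterion, node):
    """Top-down BPT pruning: keep `node` whole if `criterion(node)` accepts
    it (e.g. the region is homogeneous enough); otherwise recurse into its
    children. Returns the regions of the pruned partition."""
    if criterion(node) or node not in children:
        return [node]
    left, right = children[node]
    return prune(children, criterion, left) + prune(children, criterion, right)

# Tree: 6 = {4, 5}, 4 = {0, 1}, 5 = {2, 3}; per-node "variance" is invented.
children = {6: (4, 5), 4: (0, 1), 5: (2, 3)}
variance = {6: 9.0, 4: 0.2, 5: 4.0, 0: 0, 1: 0, 2: 0, 3: 0}
print(prune(children, lambda n: variance[n] < 0.5, 6))  # [4, 2, 3]
```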
APA, Harvard, Vancouver, ISO, and other styles
19

Randrianasoa, Tianatahina Jimmy Francky. "Représentation d'images hiérarchique multi-critère." Thesis, Reims, 2017. http://www.theses.fr/2017REIMS040/document.

Full text
Abstract:
Segmentation is a crucial task in image analysis. Novel acquisition devices bring new images with higher resolutions, containing more heterogeneous objects. It becomes also easier to get many images of an area from different sources. This phenomenon is encountered in many domains (e.g. remote sensing, medical imaging) making difficult the use of classical image segmentation methods. Hierarchical segmentation approaches provide solutions to such issues. Particularly, the Binary Partition Tree (BPT) is a hierarchical data-structure modeling an image content at different scales. It is built in a mono-feature way (i.e. one image, one metric) by merging progressively similar connected regions. However, the metric has to be carefully thought by the user and the handling of several images is generally dealt with by gathering multiple information provided by various spectral bands into a single metric. Our first contribution is a generalized framework for the BPT construction in a multi-feature way. It relies on a strategy setting up a consensus between many metrics, allowing us to obtain a unified hierarchical segmentation space. Surprisingly, few works were devoted to the evaluation of hierarchical structures. Our second contribution is a framework for evaluating the quality of BPTs relying both on intrinsic and extrinsic quality analysis based on ground-truth examples. We also discuss about the use of this evaluation framework both for evaluating the quality of a given BPT and for determining which BPT should be built for a given application. Experiments using satellite images emphasize the relevance of the proposed frameworks in the context of image segmentation
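The multi-feature consensus idea can be illustrated by ranking the candidate region merges under each metric and picking the merge with the best total rank; this Borda-style rule is my simplification of the consensus strategies studied in the thesis, with invented data:

```python
def consensus_choice(candidates, metrics):
    """Pick the candidate merge with the lowest total rank across several
    similarity metrics -- a simple Borda-style consensus, a stand-in for
    the consensus strategies studied in the thesis."""
    total = {c: 0 for c in candidates}
    for metric in metrics:
        for rank, c in enumerate(sorted(candidates, key=metric)):
            total[c] += rank
    return min(candidates, key=lambda c: total[c])

# Three candidate region merges scored by two metrics (lower = more similar).
spectral = {'AB': 0.1, 'BC': 0.5, 'CD': 0.3}
texture = {'AB': 0.6, 'BC': 0.4, 'CD': 0.2}
best = consensus_choice(['AB', 'BC', 'CD'], [spectral.get, texture.get])
print(best)  # 'CD': it ranks well under both metrics
```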
APA, Harvard, Vancouver, ISO, and other styles
20

Zini, Roger. "Placement, routage conjoints et hierarchiques de reseaux prediffuses." Paris 6, 1987. http://www.theses.fr/1987PA066116.

Full text
Abstract:
This thesis proposes an original algorithm for the hierarchical construction of Steiner trees, together with a technique for estimating wire length as the construction proceeds. Two hypergraph partitioning algorithms, one greedy and one based on simulated annealing without rejection, are presented. Finally, it introduces a concept of attraction directions that makes it possible to perform placement and routing of VLSI circuits, targeted at gate arrays, as a system regulated by feedback between placement, routing and timing analysis, so as to obtain the desired timing performance of the circuit through an adequate placement and routing.
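A minimal sketch of greedy hypergraph bipartitioning in the spirit of the abstract (much simplified: no annealing schedule, unit vertex weights, balance kept within one vertex; the instance is invented):

```python
def cut_size(hyperedges, side):
    """Number of hyperedges spanning both sides of the bipartition."""
    return sum(len({side[v] for v in e}) > 1 for e in hyperedges)

def greedy_bipartition(vertices, hyperedges, passes=3):
    """Greedy improvement for balanced hypergraph bipartition: move any
    vertex whose move reduces the hyperedge cut while keeping the two
    sides balanced to within one vertex."""
    side = {v: i % 2 for i, v in enumerate(vertices)}
    n = len(vertices)
    for _ in range(passes):
        improved = False
        for v in vertices:
            before = cut_size(hyperedges, side)
            side[v] ^= 1
            ones = sum(side.values())
            if abs(n - 2 * ones) <= 1 and cut_size(hyperedges, side) < before:
                improved = True      # keep the move
            else:
                side[v] ^= 1         # undo it
        if not improved:
            break
    return side

nets = [('a', 'b'), ('a', 'b', 'c'), ('c', 'd'), ('d', 'e')]
part = greedy_bipartition(['a', 'b', 'c', 'd', 'e'], nets)
print(part, cut_size(nets, part))  # e.g. {'a': 1, 'b': 1, 'c': 0, ...}, cut 1
```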
APA, Harvard, Vancouver, ISO, and other styles
21

Al-Qaysi, Ibrahim, and Yonas Ghidei. "Tre-stegsmetod för att kvantifiera komplexitet för IT-förslag." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188920.

Full text
Abstract:
Modern enterprises are under constant change. Therefore, enterprises need to change, extend and modify the applications that support their businesses. Making changes in the architectures of large IT systems, however, is not straightforward. It is often very costly and time consuming. The problem is that there are no easy methods to quantify complexity and optimize an implementation in the early phases of an application's construction. Thus, many companies are in need of a method to quantify the complexity of their IT business proposals in order to facilitate their decision-making process. One of them is the Swedish Armed Forces (sv. Försvarsmakten). The purpose of this study is therefore to present a simple method to quantify complexity and optimize implementation for IT proposals. The purpose aligns with the goal, which is to present a model, which forms the basis for IT proposals, for companies to consider in their process. This study uses a combination of qualitative and quantitative research with an inductive research approach. Furthermore, this study evaluates in which ways the Swedish Armed Forces application FMTK can be implemented and which implementations are most optimal in terms of complexity. A method which incorporates the implementation is thereafter presented as a result of the study. Conclusively, the study shows, with the help of analyses and evaluations, that the presented method, namely the Three-step method, is superior to other methods.
APA, Harvard, Vancouver, ISO, and other styles
22

Alonso, González Alberto. "Multidimensional and temporal SAR data representation and processing based on binary partition trees." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/146132.

Full text
Abstract:
This thesis deals with the processing of different types of multidimensional SAR data for distinct applications. Instead of handling the original pixels of the image, which correspond to very local information and are strongly contaminated by speckle noise, a region-based and multiscale data abstraction is defined, the Binary Partition Tree (BPT). In this representation, each region stands for an homogeneous area of the data, grouping pixels with similar properties and making easier its interpretation and processing. The work presented in this thesis concerns the definition of the BPT structures for Polarimetric SAR (PolSAR) images and also for temporal series of SAR acquisitions. It covers the description of the corresponding data models and the algorithms for BPT construction and its exploitation. Particular attention has been paid to the speckle filtering application. The proposed technique has proven to achieve arbitrarily large regions over homogeneous areas while also preserving the spatial resolution and the small details of the original data. As a consequence, this approach has demonstrated an improvement in the performance of the target response estimation with respect to other speckle filtering techniques. Moreover, due to the flexibility and convenience of this representation, it has been employed for other applications as scene segmentation and classification. The processing of SAR time series has also been addressed, proposing different approaches for dealing with the temporal information of the data, resulting into distinct BPT abstractions. These representations have allowed the development of speckle filtering techniques in the spatial and temporal domains and also the improvement and the definition of additional methods for classification and temporal change detection and characterization.
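Once a partition is selected from the BPT, the basic speckle-filtering step is to replace each pixel by its region's statistics; arbitrarily large homogeneous regions then average the noise away while region boundaries preserve spatial detail. A toy numpy sketch (real PolSAR covariance models differ):

```python
import numpy as np

def region_mean_filter(image, labels):
    """Replace every pixel by the mean of its region -- the basic estimation
    step once a partition has been selected from the BPT."""
    out = np.empty_like(image, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        out[mask] = image[mask].mean()
    return out

rng = np.random.default_rng(0)
truth = np.array([[1.0, 1.0, 4.0, 4.0]] * 2)          # two homogeneous areas
speckled = truth * rng.gamma(4, 1 / 4, truth.shape)   # multiplicative noise
labels = np.array([[0, 0, 1, 1]] * 2)                 # assumed partition
print(region_mean_filter(speckled, labels).round(2))
```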
APA, Harvard, Vancouver, ISO, and other styles
23

Mounce, Ross. "Comparative cladistics : fossils, morphological data partitions and lost branches in the fossil tree of life." Thesis, University of Bath, 2013. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.642021.

Full text
Abstract:
In this thesis I attempt to gather together a wide range of cladistic analyses of fossil and extant taxa representing a diverse array of phylogenetic groups. I use these data to quantitatively compare the effect of fossil taxa relative to extant taxa in terms of support for relationships, number of most parsimonious trees (MPTs) and leaf stability. In line with previous studies I find that the effects of fossil taxa are seldom different to extant taxa, although I highlight some interesting exceptions. I also use these data to compare the phylogenetic signal within vertebrate morphological data sets, by choosing to compare cranial data to postcranial data. Comparisons between molecular data and morphological data have been previously well explored, as have signals between different molecular loci. But comparative signal within morphological data sets is much less commonly characterized and certainly not across a wide array of clades. With this analysis I show that there are many studies in which the evidence provided by cranial data appears to be significantly incongruent with the postcranial data, more than one would expect to see just by the effect of chance and noise alone. I devise and implement a modification to a rarely used measure of homoplasy that will hopefully encourage its wider usage. Previously it had some undesirable bias associated with the distribution of missing data in a dataset, but my modification controls for this. I also undertake an in-depth and extensive review of the ILD test, noting it is often misused or reported poorly, even in recent studies. Finally, in attempting to collect data and metadata on a large scale, I uncovered inefficiencies in the research publication system that obstruct re-use of data and scientific progress. I highlight the importance of replication and reproducibility; even simple re-analysis of high profile papers can turn up some very different results. Data are highly valuable and thus must be retained and made available for further re-use to maximize the overall return on research investment.
APA, Harvard, Vancouver, ISO, and other styles
24

Curado, Manuel. "Structural Similarity: Applications to Object Recognition and Clustering." Doctoral thesis, Universidad de Alicante, 2018. http://hdl.handle.net/10045/98110.

Full text
Abstract:
In this thesis, we propose many developments in the context of Structural Similarity. We address both node (local) similarity and graph (global) similarity. Concerning node similarity, we focus on improving the diffusive process used to compute this similarity (e.g. Commute Times) by means of modifying or rewiring the structure of the graph (Graph Densification), although some advances in Laplacian-based ranking are also included in this document. Graph Densification is a particular case of what we call graph rewiring, i.e. a novel field (similar to image processing) where input graphs are rewired to be better conditioned for the subsequent pattern recognition tasks (e.g. clustering). In the thesis, we contribute a scalable and effective method driven by Dirichlet processes. We propose both a completely unsupervised and a semi-supervised approach for Dirichlet densification. We also contribute new random walkers (Return Random Walks) that are useful structural filters as well as asymmetry detectors in directed brain networks used to make early predictions of Alzheimer's disease (AD). Graph similarity is addressed by means of designing structural information channels as a means of measuring the Mutual Information between graphs. To this end, we first embed the graphs by means of Commute Times. Commute time embeddings have good properties for Delaunay triangulations (the typical representation for Graph Matching in computer vision). This means that these embeddings can act as encoders in the channel as well as decoders (since they are invertible). Consequently, structural noise can be modelled by the deformation introduced in one of the manifolds to fit the other one. This methodology leads to a highly discriminative similarity measure, since the Mutual Information is measured on the manifolds (vectorial domain) through copulas and bypass entropy estimators. This is consistent with the methodology of decoupling the measurement of graph similarity in two steps: a) linearizing the Quadratic Assignment Problem (QAP) by means of the embedding trick, and b) measuring similarity in vector spaces. The QAP problem is also investigated in this thesis. More precisely, we analyze the behaviour of $m$-best Graph Matching methods. These methods usually start from a couple of best solutions and then expand the search space locally by excluding previous clamped variables. The next variable to clamp is usually selected randomly, but we show that this reduces the performance when structural noise arises (outliers). Alternatively, we propose several heuristics for spanning the search space and evaluate all of them, showing that they are usually better than random selection. These heuristics are particularly interesting because they exploit the structure of the affinity matrix. Efficiency is improved as well. Concerning the application domains explored in this thesis, we focus on object recognition (graph similarity), clustering (rewiring), compression/decompression of graphs (links with Extremal Graph Theory), 3D shape simplification (sparsification) and early prediction of AD.
Ministerio de Economía, Industria y Competitividad (Referencia TIN2012-32839 BES-2013-064482)
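The commute-time similarity used above has a compact closed form through the Moore-Penrose pseudoinverse of the graph Laplacian; a small numpy sketch on a toy graph of my own:

```python
import numpy as np

def commute_times(adjacency):
    """Commute times from the Laplacian pseudoinverse:
    CT(u, v) = vol(G) * (L+_uu + L+_vv - 2 L+_uv),
    where vol(G) is the sum of all vertex degrees."""
    A = np.asarray(adjacency, float)
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return A.sum() * (d[:, None] + d[None, :] - 2 * Lp)

# Path graph 0-1-2: commuting between the endpoints takes longest.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(commute_times(A).round(2))
# CT(0,1) = CT(1,2) = 4, CT(0,2) = 8 (vol = 4; effective resistances 1 and 2)
```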
APA, Harvard, Vancouver, ISO, and other styles
25

Covi, Patrick. "Multi-hazard analysis of steel structures subjected to fire following earthquake." Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/313383.

Full text
Abstract:
Fires following earthquake (FFE) have historically produced enormous post-earthquake damage and losses in terms of lives, buildings and economic costs, like the San Francisco earthquake (1906), the Kobe earthquake (1995), the Turkey earthquake (2011), the Tohoku earthquake (2011) and the Christchurch earthquakes (2011). The structural fire performance can worsen significantly because the fire acts on a structure damaged by the seismic event. On these premises, the purpose of this work is the investigation of the experimental and numerical response of structural and non-structural components of steel structures subjected to fire following earthquake (FFE) to increase the knowledge and provide a robust framework for hybrid fire testing and hybrid fire following earthquake testing. A partitioned algorithm to test a real case study with substructuring techniques was developed. The framework is developed in MATLAB and it is also based on the implementation of nonlinear finite elements to model the effects of earthquake forces and post-earthquake effects such as fire and thermal loads on structures. These elements should be able to capture geometrical and mechanical non-linearities to deal with large displacements. Two numerical validation procedures of the partitioned algorithm simulating two virtual hybrid fire testing and one virtual hybrid seismic testing were carried out. Two sets of experimental tests in two different laboratories were performed to provide valuable data for the calibration and comparison of numerical finite element case studies reproducing the conditions used in the tests. Another goal of this thesis is to develop a fire following earthquake numerical framework based on a modified version of the OpenSees software and several scripts developed in MATLAB to perform probabilistic analyses of structures subjected to FFE. A new material class, namely SteelFFEThermal, was implemented to simulate the steel behaviour subjected to FFE events.
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Chieh-Yu, and 陳介宇. "Partition of Weighted Tree." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/02573776870576466153.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Applied Mathematics
Academic year 85 (1996-97)
In models of many real applications, we very often need to partition a tree into as many subtrees as possible and keep the weight of every subtree larger than a given constant. In this thesis, we design efficient algorithms for the above tree partition problem under different weight requirements.
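The standard greedy for this family of problems works in post-order: accumulate subtree weight and cut an edge as soon as the accumulated weight reaches the bound. A minimal sketch (the thesis treats several weight requirements, which may differ in detail):

```python
def max_parts(tree, weight, root, bound):
    """Greedily partition a tree into as many subtrees as possible, each of
    total weight >= bound: in post-order, cut off a subtree as soon as its
    accumulated weight reaches the bound. Any leftover weight at the root
    merges into an adjacent part, so every part stays >= bound."""
    parts = 0

    def walk(v, parent_of_v):   # returns weight kept (not yet cut) at v
        nonlocal parts
        w = weight[v]
        for child in tree.get(v, []):
            if child != parent_of_v:
                w += walk(child, v)
        if w >= bound:
            parts += 1          # cut the edge above v: this subtree is a part
            return 0
        return w

    walk(root, None)
    return parts

# Star with center 0 (weight 1) and three leaves of weight 2, bound 2:
tree = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(max_parts(tree, {0: 1, 1: 2, 2: 2, 3: 2}, root=0, bound=2))  # 3
```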
APA, Harvard, Vancouver, ISO, and other styles
27

Li, Chih-Hsuan, and 李至軒. "On the Tree Partition Problems." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/03946530272674093030.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
Academic year 101 (2012-13)
Consider a tree T with n vertices, in which each vertex is associated with a nonnegative integer weight and each edge is associated with a positive integer cost. A partition of T is a set of subtrees induced by removing some edges. Let l <= u be two nonnegative integers. An [l, u]-partition is a partition such that the total weight of each subtree is in [l, u]. A p-[l, u]-partition is an [l, u]-partition with p subtrees. The cost of a partition is the total cost of the removed edges. The focus of this thesis is the following two problems: (1) the problem of finding a p-[l, u]-partition and (2) the problem of finding a minimum-cost [l, u]-partition. For (1), an O(n p^{4} loglog p / log^{2} p)-time algorithm is presented. For (2), its NP-hardness is proved. In addition, an O(n u^{2} loglog u / log^{2} u)-time pseudo-polynomial algorithm is presented. The presented algorithms for (1) and (2) improve, respectively, the previous upper bounds from O(n p^{4}) and O(n u^{2}). The improvement is achieved by an efficient application of the current best algorithm for computing the (min, +)-convolution of two vectors to reduce the running time of the previous algorithms. This thesis also considers the following natural extension of (1) and (2): finding a minimum-cost p-[l, u]-partition. For this extended problem, an O(n u^{2} p^{2} loglog p / log^{2} p)-time algorithm is presented.
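The (min, +)-convolution used as the algorithmic workhorse above is, in its naive form, a quadratic double loop; the speedups come from replacing it with the current best sub-quadratic algorithm. For reference, the naive version:

```python
def min_plus_convolution(a, b):
    """(min, +)-convolution of two vectors:
    c[k] = min over i + j = k of a[i] + b[j].
    Naive O(n*m) version; the thesis's improvement comes from substituting
    the current best sub-quadratic algorithm for this step."""
    n, m = len(a), len(b)
    c = [float('inf')] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            c[i + j] = min(c[i + j], a[i] + b[j])
    return c

print(min_plus_convolution([0, 2, 5], [0, 1, 4]))  # [0, 1, 3, 6, 9]
```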
APA, Harvard, Vancouver, ISO, and other styles
28

Tseng, Wen-Ke, and 曾文科. "A Heuristic Partition Method of Numerical Attributes in Classification Tree Construction." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/64229514474119881705.

Full text
Abstract:
Master's thesis
Tatung University
Department of Information Management
Academic year 92 (2003-04)
Inductive learning, a kind of learning method, has been applied extensively in machine learning, and the classification tree is a well-known method in inductive learning. ID3, a popular classification tree algorithm, was proposed by Quinlan in 1986; Quinlan then proposed the C4.5 algorithm in 1993. C4.5 does not search for splitting points on numerical attributes efficiently. Therefore, some researchers have proposed improved approaches and new partition methods for partitioning numerical attributes. However, these approaches and methods come with their own assumptions and restrictions. We therefore propose a heuristic partition method to remedy this defect, namely that the C4.5 algorithm cannot process numerical attributes efficiently. Since the heuristic partition method is based on the C4.5 algorithm, it can greatly reduce the time for searching for splitting points on numerical attributes.
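The exhaustive C4.5-style search that such heuristics improve on can be sketched as follows: sort by attribute value and score the midpoint between consecutive distinct values by information gain. The thesis's heuristic prunes this candidate set; shown here is only the exhaustive baseline, with toy data:

```python
import math

def entropy(labels):
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def best_split(values, labels):
    """C4.5-style exhaustive search for a binary threshold on one numeric
    attribute: evaluate the midpoint between consecutive distinct sorted
    values and keep the one with maximal information gain."""
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best_gain, best_t = 0.0, None
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                          # no boundary between equal values
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for x, y in pairs[:i]]
        right = [y for x, y in pairs[i:]]
        gain = base - (len(left) * entropy(left)
                       + len(right) * entropy(right)) / len(pairs)
        if gain > best_gain:
            best_gain, best_t = gain, t
    return best_t, best_gain

x = [1.0, 2.0, 3.0, 8.0, 9.0, 10.0]
y = ['no', 'no', 'no', 'yes', 'yes', 'yes']
print(best_split(x, y))  # (5.5, 1.0): a perfect split, gain = 1 bit
```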
APA, Harvard, Vancouver, ISO, and other styles
29

Chin, Hao-Lun, and 秦浩倫. "An OVSF Code Tree Partition Policy for WCDMA Systems and Implementation of SIP Servers." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/16766812158373390183.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
Academic year 93 (2004-05)
In this thesis, we discuss two topics related to third generation (3G) mobile communications. In the first topic, we focus on the multi-code placement and replacement issue. Since WCDMA systems use orthogonal variable spreading factor (OVSF) codes as channelization codes, they can support high-rate transmission and variable-bit-rate services. For the OVSF codes, there are two approaches to assigning codes, i.e., single-code and multi-code. To solve the code fragmentation problem and to reduce the number of code reassignments, we propose a tree partition policy for managing the OVSF code tree based on the multi-code approach. Furthermore, we modify the multi-code rate using a unit-based method, which enables each multi-code to have fewer codes but a slightly higher rate than what is requested, so as to decrease the number of code fragments. Compared with the left-most and crowded-first algorithms, extensive simulations show that the tree partition policy and the unit-based multi-code method perform better. In the second topic, we emphasize applications of voice over IP (VoIP) using the session initiation protocol (SIP). It is well known that SIP can be used for signaling in 3G cellular systems and has become the standard for multimedia services in the Internet. Besides, VoIP has become very popular in recent years since people can use VoIP phones to communicate. Therefore, we want not only to understand how SIP works but also to establish a VoIP telephony environment. We implement SIP servers so that SIP user agents (UAs) or public-switched telephone network (PSTN) calls can communicate with each other through the SIP servers. Via thorough testing, we carefully examine the interconnection compatibility between our SIP servers and the SIP servers of other vendors.
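The single-code left-most assignment used as a baseline above can be sketched on a heap-numbered OVSF tree, where a code is assignable only if no ancestor and no descendant is already in use. The representation and demo values below are my assumptions for illustration:

```python
def leftmost_assign(used, sf, max_sf=256):
    """Left-most single-code assignment in an OVSF code tree, heap-numbered:
    code 1 is the root (SF 1), code k has children 2k and 2k+1, and the codes
    of spreading factor `sf` are numbered sf .. 2*sf - 1. A code is assignable
    only if no ancestor and no descendant is in use (orthogonality blocking)."""
    def blocked(code):
        a = code
        while a >= 1:                 # a used ancestor blocks the code
            if a in used:
                return True
            a //= 2
        frontier = [code]
        while frontier:               # a used descendant blocks it, too
            c = frontier.pop()
            if c != code and c in used:
                return True
            if c < max_sf:
                frontier += [2 * c, 2 * c + 1]
        return False

    for code in range(sf, 2 * sf):    # scan left to right at the requested SF
        if code not in used and not blocked(code):
            used.add(code)
            return code
    return None                       # request blocked: no free code at this SF

used = {8, 9, 5}          # two SF-8 codes and one SF-4 code already assigned
print(leftmost_assign(used, 8))  # 12: codes 8, 9 used; 10, 11 sit under 5
```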
APA, Harvard, Vancouver, ISO, and other styles
30

Yang, Tsung-Wen, and 楊琮文. "Decision Tree Induction Based on Reinforcement Learning Modelling and its Application on State Space Partition." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/60487693075925935084.

Full text
Abstract:
Master's
National Chung Cheng University
Graduate Institute of Electrical Engineering
93
Most tree induction algorithms are based on a top-down greedy strategy that makes a locally optimal decision at each node. This strategy may induce a larger tree that requires more tests. In this thesis, a decision tree induction method based on reinforcement learning is proposed to avoid this greedy behavior. The basic idea is to base the splitting criterion on long-term evaluations of splits instead of local evaluations. The induction problem is modeled as a reinforcement learning problem and solved with techniques from that domain. The method consists of two phases: split estimation and tree growing. In the split estimation phase, the inducer estimates the long-term evaluations of splits at visited nodes. In the second phase, the inducer grows the tree using the learned long-term evaluations. A comparison with CART on several datasets is reported. The proposed method is then applied to tree-based reinforcement learning: the decoder in AHC is replaced by a regression tree constructed by the proposed method, and the experimental results are also presented.
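To make the contrast concrete, here is a minimal sketch of scoring a split by what it leads to rather than by its immediate impurity drop. It is an assumption-laden stand-in: it uses brute-force depth-limited lookahead rather than the reinforcement-learning value estimation the thesis actually develops, and the data layout and names are illustrative:

```python
def impurity(rows):
    """Gini impurity of a list of (feature, label) rows."""
    labels = [y for _, y in rows]
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def candidate_splits(rows):
    """Binary splits induced by thresholds on the single numeric feature."""
    xs = sorted({x for x, _ in rows})
    for lo, hi in zip(xs, xs[1:]):
        t = (lo + hi) / 2
        yield ([r for r in rows if r[0] <= t], [r for r in rows if r[0] > t])

def lookahead(rows, depth):
    """Best weighted impurity reachable within `depth` more splits.
    depth=1 recovers the greedy one-step criterion; larger depths give
    a long-term evaluation of the current split decision."""
    best = impurity(rows)
    if depth == 0 or best == 0.0:
        return best
    for left, right in candidate_splits(rows):
        w = len(left) / len(rows)
        best = min(best, w * lookahead(left, depth - 1)
                   + (1 - w) * lookahead(right, depth - 1))
    return best
```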
APA, Harvard, Vancouver, ISO, and other styles
31

LEE, RONG-CHOI, and 李榮財. "A Modified Set Partition in Hierarchical Tree Coding Scheme for Wearable Wavelet-Based ECG Data Compression System." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/v2hp2u.

Full text
Abstract:
Doctoral
National Kaohsiung First University of Science and Technology
Ph.D. Program in Engineering Science and Technology, College of Engineering
106
Long-term monitoring of electrocardiograms (ECG) is one of the most important tasks in caring for older people. Using wearable devices incorporating low-power wireless transmission technology is the most convenient way to record long-term, multi-person ECG data without affecting people's daily lives. Because real-time wireless transmission and storage involve a large amount of data, ECG compression is a necessary measure. The diagnostic features of the ECG, which is rich in information about heart function, include shape, magnitude, and segment timing, so ECG data compression is usually required to remove noise while preserving the diagnostic features, allowing the reconstruction process to reproduce the ECG without losing feature information. Transform-domain ECG data compression exploits the independence between sub-bands; it is a real-time compression method that satisfies the preceding requirements while achieving high compression rates. Interpreting characteristic information is one of the important tasks in the clinical diagnosis of cardiology, and training and maintaining interpretation proficiency requires a stable, high-quality ECG. The original ECG signal typically has 5-10% distortion caused by system noise, which includes the sensitivity of the sensor, the characteristics of the A/D converter, and the accuracy of the circuit components. If the variation of the distortion is kept within 5% of the standard deviation, the signal is considered acceptably stable in quality. In addition, distortion in ECG data compression is typically measured by the percentage root-mean-square difference (PRD); the acceptable range of the PRD target (PRDT) value is 2 to 5% for clinical diagnostic applications, depending on the signal characteristics. Wearable devices emphasize light weight, portability, and long battery life, so their circuit design focuses on practicality, simplicity, and power saving. The transform-domain method is the simplest kind of data compression technology because it requires no pre-processing and is suitable for real-time compression applications. Based on energy compaction, the transform-domain method commonly uses two kinds of basis functions, the discrete cosine function and the discrete wavelet function, with the discrete wavelet transform offering the best compression performance. The set partitioning in hierarchical trees (SPIHT) algorithm is a very efficient coding strategy for wavelet transform coefficients, and it was first applied to ECG data compression in 2010. The key feature of the SPIHT strategy is the use of quantization scales for compressed-bit-rate control. However, this feature also costs the strategy the ability to control distortion: 1) the quantization scale is a uniform quantization scheme that sacrifices low-frequency energy, the factor to which this error is most sensitive; and 2) uniform quantization strategies often result in unstable distortion. In addition, the traditional SPIHT encoding process requires a large amount of memory and a complex sorting process, which is not conducive to wearable-device applications. To overcome these difficulties, the encoding process of the traditional SPIHT algorithm was analyzed based on the bit-plane data structure. Combined with the characteristics of the ECG and analysis of a large amount of data, the absolute values of the quantized wavelet coefficients approximately follow a pyramid distribution.
Based on this feature, this thesis presents a simple direct encoding process called empirical coding, which omits the significance check bits required in traditional SPIHT coding. The experimental results show that the average compression ratio can be increased by 77.93% compared with [34]. This method is ideal for wearable-device applications because it requires no special coding circuitry. To address more general needs, this thesis also proposes a modified SPIHT (MSPIHT) coding concept. MSPIHT avoids the complicated sorting process by using a special bit stream; it needs only simple AND/OR logic operations and (1+3/4)*N bits of memory to achieve the effectiveness of the traditional SPIHT strategy.
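Since the abstract leans on PRD as its distortion yardstick, a minimal sketch of that measure may help. Note that some PRD variants subtract the signal mean in the denominator; this sketch uses the plain form, and the toy signal values are illustrative:

```python
import math

def prd(original, reconstructed):
    """Percentage root-mean-square difference between an original
    signal and its reconstruction (clinical target roughly 2-5%)."""
    num = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    den = sum(x ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)

ecg = [0.0, 0.5, 1.2, 0.9, 0.1]
rec = [0.0, 0.48, 1.22, 0.88, 0.1]
print(round(prd(ecg, rec), 2))  # 2.19
```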
APA, Harvard, Vancouver, ISO, and other styles
32

Yun, Yejin Esther. "Development of a correlation based and a decision tree based prediction algorithm for tissue to plasma partition coefficients." Thesis, 2013. http://hdl.handle.net/10012/7483.

Full text
Abstract:
Physiologically based pharmacokinetic (PBPK) modeling is a tool used in drug discovery and human health risk assessment. PBPK models are mathematical representations of the anatomy, physiology and biochemistry of an organism. PBPK models, using both compound and physiologic inputs, are used to predict a drug's pharmacokinetics in various situations. Tissue to plasma partition coefficients (Kp), a key PBPK model input, define the steady-state concentration differential between tissue and plasma and are used to predict the volume of distribution. Experimental determination of these parameters once limited the development of PBPK models; however, in silico prediction methods were introduced to overcome this issue. The developed algorithms vary in input parameters and prediction accuracy, and none is considered standard, warranting further research. Chapter 2 presents a newly developed Kp prediction algorithm that requires only readily available input parameters. On a test dataset, this Kp prediction algorithm demonstrated good prediction accuracy and greater prediction accuracy than preexisting algorithms. Chapter 3 introduces a decision tree based Kp prediction method. In this novel approach, six previously published algorithms, including the one developed in Chapter 2, were utilized. The aim of the developed classifier was to identify the most accurate tissue-specific Kp prediction algorithm for a new drug. A dataset consisting of 122 drugs was used to train the classifier and identify the most accurate Kp prediction algorithm for a given physico-chemical space. Three versions of the tissue-specific classifiers were developed, differing in their required inputs. Using the classifier resulted in better prediction accuracy than using any single Kp prediction algorithm for all tissues, the current mode of use in PBPK model building. With built-in estimation equations for input parameters that are not always available, this Kp prediction tool provides Kp predictions even when only limited input parameters are available. The two presented innovative methods will improve tissue distribution prediction accuracy, thus enhancing confidence in PBPK modeling outputs.
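The classifier idea in Chapter 3 can be pictured with a short sketch. Everything below is hypothetical: the descriptor names, the toy values, and the label encoding (the index of whichever published Kp algorithm predicted a training drug best) are stand-ins, and scikit-learn is assumed only for illustration:

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical physico-chemical descriptors per drug:
# [logP, fraction unbound in plasma, pKa].  The target is the index of
# the published Kp algorithm that was most accurate for that drug.
X = [[2.1, 0.10, 8.4],
     [0.3, 0.85, 4.2],
     [3.7, 0.05, 9.1],
     [1.0, 0.40, 6.5]]
best_algorithm = [0, 2, 0, 1]

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, best_algorithm)

new_drug = [[1.8, 0.20, 7.9]]
print("use Kp algorithm index:", clf.predict(new_drug)[0])
```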
APA, Harvard, Vancouver, ISO, and other styles
33

"On-line Coloring of Partial Orders, Circular Arc Graphs, and Trees." Doctoral diss., 2012. http://hdl.handle.net/2286/R.I.15995.

Full text
Abstract:
abstract: A central concept of combinatorics is partitioning structures with given constraints. Partitions of on-line posets and on-line graphs, which are dynamic versions of the more familiar static structures, are examined. In the on-line setting, vertices are continually added to a poset or graph while a chain partition or coloring (respectively) is maintained; the optima of the static cases cannot be achieved in the on-line setting. Both upper and lower bounds exist for the optimum number of chains needed to partition a width-$w$ on-line poset. Kierstead's upper bound of $\frac{5^w-1}{4}$ was improved to $w^{14 \lg w}$ by Bosek and Krawczyk. This is improved here to $w^{3+6.5 \lg w}$ by employing the First-Fit algorithm on a family of restricted posets, expanding on the work of Bosek and Krawczyk: namely, the family of ladder-free posets, where the $m$-ladder is the transitive closure of the union of two incomparable chains $x_1\le\dots\le x_m$, $y_1\le\dots\le y_m$ and the set of comparabilities $\{x_1\le y_1,\dots, x_m\le y_m\}$. No upper bound exists on the number of colors needed to color a general on-line graph. To lay this fact plain, the performance of on-line coloring of trees is shown to be particularly problematic: there are trees that require $n$ colors to color on-line for any positive integer $n$, and there are trees that usually require many colors to color on-line even if they are presented without any particular strategy. For restricted families of graphs, upper and lower bounds exist for the optimum number of colors needed to maintain an on-line coloring. In particular, circular arc graphs can be colored on-line using less than 8 times the optimum number from the static case. This follows from the work of Pemmaraju, Raman, and Varadarajan on on-line coloring of interval graphs.
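For readers unfamiliar with First-Fit, the algorithm behind the chain-partition bound above, here is a minimal sketch in its on-line graph-coloring form, together with an adversarial presentation order that forces three colors on a 2-colorable path; the graph and order are illustrative:

```python
def first_fit_coloring(order, neighbors):
    """On-line First-Fit: color vertices in arrival order with the
    smallest color not used by any already-colored neighbor."""
    color = {}
    for v in order:
        taken = {color[u] for u in neighbors[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# Path 1-2-3-4-5 presented adversarially: First-Fit needs 3 colors.
nbrs = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(first_fit_coloring([1, 4, 5, 3, 2], nbrs))
# {1: 0, 4: 0, 5: 1, 3: 1, 2: 2}
```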
Dissertation/Thesis
Ph.D. Mathematics 2012
APA, Harvard, Vancouver, ISO, and other styles
34

Fan, Jia-Hao. "Cuts and Partitions in Graphs/Trees with Applications." Thesis, 2013. http://hdl.handle.net/1969.1/151050.

Full text
Abstract:
Both the maximum agreement forest problem and the multicut on trees problem are NP-hard and thus cannot be solved efficiently if P != NP. The maximum agreement forest problem is motivated by the study of evolutionary trees in bioinformatics: given two leaf-labeled trees, we are asked to find a maximum forest that is a subgraph of both trees. The multicut on trees problem has applications in networks: given a forest and a set of pairs of terminals, we are asked to find a cut that separates all pairs of terminals. We develop combinatorial and algorithmic techniques that lead to improved parameterized algorithms, approximation algorithms, and kernelization algorithms for these problems. For the maximum agreement forest problem, we proceed from the bottommost level of the trees and extend solutions to whole trees. With this technique, we show that the maximum agreement forest problem is fixed-parameter tractable on general trees, resolving an open problem in this area. We also provide the first constant-ratio approximation algorithm for the problem on general trees. For the multicut on trees problem, we take a new look at the problem through the eyes of the vertex cover problem. This connection allows us to develop a kernelization algorithm for the problem, which gives an upper bound of O(k^3) on the kernel size, significantly improving the previous best upper bound of O(k^6). We further exploit this connection to give a parameterized algorithm for the problem that runs in time O*(1.62^k), improving the previous best algorithm with running time O*(2^k). In the protein complex prediction problem, which comes directly from the study of bioinformatics, we are given a protein-protein interaction network and are asked to find dense regions in this graph. We formulate this as a graph clustering problem and develop an algorithm to refine the results for identifying protein complexes. We test our algorithm on yeast protein-protein interaction networks and show that it identifies complexes more accurately than other existing algorithms.
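The vertex-cover connection mentioned above rests on the classic bounded search tree for vertex cover, sketched below as a point of reference. This is the textbook O*(2^k) branching, not the thesis's multicut algorithm or its O*(1.62^k) refinement; names and the toy graph are illustrative:

```python
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k, or None if none exists.
    Branch on an uncovered edge: one of its endpoints must be picked."""
    if not edges:
        return set()
    if k == 0:
        return None
    u, v = edges[0]
    for pick in (u, v):
        rest = [e for e in edges if pick not in e]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {pick}
    return None

# A 4-cycle has a vertex cover of size 2.
print(vertex_cover([(1, 2), (1, 3), (2, 4), (3, 4)], 2))  # {1, 4}
```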
APA, Harvard, Vancouver, ISO, and other styles
35

Chen, Hsin-Mao, and 陳心懋. "Partitioned Set-Pruning Segment Trees for Packet Classification." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/01412978938754814182.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

AbouEisha, Hassan M. "Optimization of Algorithms Using Extensions of Dynamic Programming." Diss., 2017. http://hdl.handle.net/10754/623124.

Full text
Abstract:
We study and answer questions related to the complexity of various important problems such as multi-frontal solvers for the hp-adaptive finite element method, sorting, and majority. We advocate the use of dynamic programming as a viable tool to study optimal algorithms for these problems. The main approach used to attack these problems is to model classes of algorithms that may solve a problem using a discrete model of computation, then define cost functions on this discrete structure that reflect different complexity measures of the represented algorithms. As a last step, dynamic programming algorithms are designed and used to optimize those models (algorithms) and to obtain exact results on the complexity of the studied problems. The first part of the thesis presents a novel model of computation (the element partition tree) that represents a class of algorithms for multi-frontal solvers, along with cost functions reflecting various complexity measures such as time and space. It then introduces dynamic programming algorithms for multi-stage and bi-criteria optimization of element partition trees. In addition, it presents results based on optimal element partition trees for famous benchmark meshes, such as meshes with point and edge singularities. New improved heuristics for those benchmark meshes were obtained based on insights from the optimal results found by our algorithms. The second part of the thesis starts by introducing a general problem to which different problems can be reduced, and shows how to model such a problem with a decision table. We describe how decision trees and decision tests for this table correspond to adaptive and non-adaptive algorithms for the original problem. We present exact bounds on the average time complexity of adaptive algorithms for the eight-element sorting problem. Then bounds on adaptive and non-adaptive algorithms for a variant of the majority problem are introduced. Adaptive algorithms are modeled as decision trees, whose depth reflects the worst-case time complexity and whose average depth indicates the average-case time complexity. Non-adaptive algorithms are represented as decision tests, whose size expresses the worst-case time complexity. Finally, we present a dynamic programming algorithm that finds a minimum decision test (minimum reduct) for a given decision table.
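As a taste of the multi-stage optimization in the first part, here is a deliberately tiny dynamic program over element partition trees of a one-dimensional mesh. Everything here is a stand-in: real meshes, refinements, and cost functions in the thesis are far richer, and the cubic per-node charge below is only a cartoon of dense frontal-matrix elimination cost:

```python
from functools import lru_cache

def best_partition_tree(n):
    """Minimum total cost over all binary element partition trees of a
    1-D mesh with n elements, charging (interval length)**3 per node."""
    @lru_cache(maxsize=None)
    def cost(i, j):                      # elements i .. j-1
        if j - i == 1:
            return 0
        return (j - i) ** 3 + min(cost(i, m) + cost(m, j)
                                  for m in range(i + 1, j))
    return cost(0, n)

print(best_partition_tree(8))  # 672: balanced bisection is optimal here
```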
APA, Harvard, Vancouver, ISO, and other styles