To see the other types of publications on this topic, follow the link: Graph theory techniques.

Dissertations / Theses on the topic 'Graph theory techniques'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 28 dissertations / theses for your research on the topic 'Graph theory techniques.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Irniger, Christophe-André. "Graph matching: filtering databases of graphs using machine learning techniques." Berlin: Aka, 2005. http://deposit.ddb.de/cgi-bin/dokserv?id=2677754&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Zhiqian. "Graph Neural Networks: Techniques and Applications." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99848.

Full text
Abstract:
Effective information analysis generally boils down to the geometry of the data represented by a graph. Typical applications include social networks, transportation networks, the spread of epidemic disease, brain neuronal networks, gene regulatory networks, telecommunication networks, and knowledge graphs, all of which lie in a non-Euclidean graph domain. To describe the geometric structures, graph matrices such as the adjacency matrix or the graph Laplacian can be employed to reveal latent patterns. This thesis focuses on the theoretical analysis of graph neural networks and the development of methods for specific applications using graph representations. Four methods are proposed: rational neural networks for jump graph signal estimation, RemezNet for robust attribute prediction on graphs, ICNet for integrated circuit security, and CNF-Net for dynamic circuit deobfuscation. Regarding the first method, an important recent state-of-the-art approach, the graph convolutional network (GCN), nicely integrates local vertex features and graph topology in the spectral domain. However, current studies suffer from a drawback: graph CNNs rely on Chebyshev polynomial approximation, which oscillates at jump discontinuities, since Chebyshev polynomials require degree Ω(poly(1/ε)) to approximate a jump signal such as |x|. To reduce this complexity, RationalNet is proposed, integrating rational functions and neural networks for graph node-level embeddings. The second method addresses function approximation, which suffers from several drawbacks: non-robustness and infeasibility issues; neural networks are incapable of extracting an analytical representation; and no reported study integrates the strengths of neural networks and the Remez algorithm. This work proposes a novel neural network model to address these issues. Specifically, our method uses the characterizations of the Remez algorithm to design objective functions.
To avoid the infeasibility issue and deal with the non-robustness, a set of constraints is imposed, inspired by the equioscillation theorem of best rational approximation. The third method proposes an approach to circuit security. Circuit obfuscation is a recently proposed defense mechanism to protect digital integrated circuits (ICs) from reverse engineering. Estimating the deobfuscation runtime is a challenging task due to the complexity and heterogeneity of graph-structured circuits and the unknown, sophisticated mechanisms attackers use for deobfuscation. To address these challenges, this work proposes the first graph-based approach that predicts the deobfuscation runtime using graph neural networks. The fourth method proposes a representation for circuit graphs of dynamic size. By analyzing the SAT attack method, a conjunctive normal form (CNF) bipartite graph is used to characterize the complexity of the SAT problem. To overcome the difficulty of capturing the dynamic size of the CNF graph, an energy-based kernel is proposed to aggregate dynamic features.
Doctor of Philosophy
Graph data is pervasive throughout most fields, including pandemic spread networks, social networks, transportation roads, the internet, and chemical structures. Applications modeled by graphs therefore benefit people's everyday lives, and graph mining derives insightful conclusions from this complex topology. This thesis investigates an emerging technique called graph neural networks (GNNs), designed for graph data mining. There are two primary goals: (1) understanding GNNs in theory, and (2) applying GNNs to unexplored and valuable real-world scenarios. For the first goal, we investigate spectral theory and approximation theory, and a unified framework is proposed to summarize most GNNs. This direction makes it possible to compare existing or newly proposed works and to measure the approximation process. Specifically, the result demonstrates that most GNNs approximate either a function of the graph adjacency matrix or a function of its eigenvalues. Different types of approximation are analyzed in terms of their physical meaning, and their advantages and disadvantages are discussed. Beyond that, we propose a new optimization for a highly accurate but inefficient approximation. Evaluation on synthetic data proves its theoretical power, and tests on two transportation networks show its potential on real-world graphs. For the second goal, circuits are selected as a novel application, since they are crucial but have received little attention. Specifically, we focus on a security problem, a high-value real-world problem for industry companies such as Nvidia, Apple, and AMD. The problem is modeled on a circuit graph, and a GNN is applied to learn a representation for the prediction target, such as attack runtime. Experiments on several benchmark circuits show the method's superior effectiveness and efficiency compared with competitive baselines.
This thesis explores both the theory and applications of GNNs, showing a promising direction for graph mining tasks. Its potential also opens a wide range of innovations in graph-based problems.
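The slow polynomial convergence at kinks and jumps that motivates RationalNet can be seen numerically. The sketch below (illustrative only, not code from the thesis) interpolates |x| at Chebyshev points of the first kind with barycentric evaluation and measures the maximum error, which decays only like O(1/deg):

```python
import math

def cheb_interp_max_error(f, deg, n_grid=2001):
    """Max |f - p| on [-1, 1] for the degree-`deg` interpolant of f at
    Chebyshev points of the first kind (barycentric evaluation)."""
    n = deg + 1
    nodes = [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]
    # Barycentric weights for first-kind Chebyshev points
    weights = [(-1) ** k * math.sin((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]
    fvals = [f(x) for x in nodes]

    def p(x):
        num = den = 0.0
        for xk, wk, fk in zip(nodes, weights, fvals):
            if x == xk:                 # exactly at a node: interpolant equals f
                return fk
            num += wk * fk / (x - xk)
            den += wk / (x - xk)
        return num / den

    grid = (-1 + 2 * i / (n_grid - 1) for i in range(n_grid))
    return max(abs(f(x) - p(x)) for x in grid)

# |x| has a kink at 0, so the polynomial error shrinks only like O(1/deg):
err_10 = cheb_interp_max_error(abs, 10)
err_50 = cheb_interp_max_error(abs, 50)
```

Multiplying the degree by five only shrinks the error by roughly the same factor, in line with the Ω(poly(1/ε)) degree requirement the abstract cites.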
APA, Harvard, Vancouver, ISO, and other styles
3

Gravier, Sylvain. "Coloration et produits de graphes." Université Joseph Fourier (Grenoble), 1996. http://www.theses.fr/1996GRE10084.

Full text
Abstract:
In the first part, we study the notion of list coloring. A graph of order n is k-list-colorable if, for any assignment of n lists of size k (one per vertex), each vertex can be given a color from its own list in such a way that any two adjacent vertices receive different colors. We review the classical results on list coloring and address the complexity of the list-coloring problem. After a study of the various constructions used in graph theory (contraction, identification, ...), we give a Hajós-type theorem for list coloring. Finally, we conclude this study by attacking Vizing's conjecture from a new angle, which yields results on the class of claw-free graphs. In a second part, we deal with the algorithmic side of coloring, giving a 'sequential' chromatic-exchange algorithm that allows us to color and recognize two new classes of perfect graphs. Finally, we study the behavior of certain graph invariants under certain products. The absorption number of the cross product of a chain and an antichain is determined, as well as certain values for the cross product of two chains. We propose a new approach to Hedetniemi's conjecture by studying the chromatic number of the fibered product (a subproduct of the cross product). We close this study of products with two classical covering problems: Hamiltonicity and the embedding of trees in the hypercube. We give a necessary and sufficient condition for the cross product of two Hamiltonian graphs to be Hamiltonian. We define a large class of spanning trees of the hypercube and give a partition of the edges of the hypercube into such trees.
APA, Harvard, Vancouver, ISO, and other styles
4

Lukose, Susan. "Ontology learning for online course creation using nlp techniques and graph theory." Full text available from ProQuest UM Digital Dissertations, 2008. http://0-proquest.umi.com.umiss.lib.olemiss.edu/pqdweb?index=0&did=1798967591&SrchMode=1&sid=2&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1258130866&clientId=22256.

Full text
Abstract:
Thesis (Ph.D.)--University of Mississippi, 2008.
Typescript. Vita. "December 2008." Major professor: Pamela Lawhead. Includes bibliographical references (leaves 86-90). Also available online via ProQuest to authorized users.
APA, Harvard, Vancouver, ISO, and other styles
5

Przytycka, Teresa Maria. "Parallel techniques for construction of trees and related problems." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/30640.

Full text
Abstract:
The concept of a tree has been used in various areas of mathematics for over a century. In particular, trees appear to be one of the most fundamental notions in computer science. Sequential algorithms for trees are generally well studied. Unfortunately, many of these sequential algorithms use methods which seem to be inherently sequential. One of the contributions of this thesis is the introduction of several parallel techniques for the construction of various types of trees and the presentation of new parallel tree construction algorithms using these methods. Along with the parallel tree construction techniques presented here, we develop techniques which have broader applications. We use the Parallel Random Access Machine as our model of computation. We consider two basic methods of constructing trees: tree expansion and tree synthesis. In the tree expansion method, we start with a single vertex and construct a tree by adding nodes of degree one and/or by subdividing edges. We use the parallel tree expansion technique to construct the tree representation for graphs in the family of graphs known as cographs. In the tree synthesis method, we start with a forest of single node subtrees and construct a tree by adding edges or (for rooted trees) by creating parent nodes for some roots of the trees in the forest. We present a family of parallel and sequential algorithms to construct various approximations to the Huffman tree. All these algorithms apply the tree synthesis method by constructing a tree in a level-by-level fashion. To support one of the algorithms in the family we develop a technique which we call the cascading sampling technique. One might suspect that the parallel tree synthesis method can be applied only to trees of polylogarithmic height, but this is not the case. We present a technique which we call the valley filling technique and develop its accelerated version called the accelerated valley filling technique.
We present an application of this technique to an optimal parallel algorithm for construction of minimax trees.
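For context, the Huffman tree that the thesis's parallel algorithms approximate is classically built by a greedy sequential merge of the two lightest subtrees. A textbook sketch (not the thesis's parallel method) that tracks only the resulting code lengths:

```python
import heapq

def huffman_code_lengths(freqs):
    """Return {symbol: code length} for a Huffman tree built greedily.

    Standard sequential construction: repeatedly merge the two lightest
    subtrees. Each subtree is stored as (weight, tiebreak id, depth map).
    """
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        # Every symbol in the two merged subtrees moves one level deeper
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

lengths = huffman_code_lengths({"a": 5, "b": 2, "c": 1, "d": 1})
# -> {"a": 1, "b": 2, "c": 3, "d": 3}; weighted code length 5+4+3+3 = 15
```

This greedy loop is inherently sequential, which is exactly why level-by-level synthesis approximations of the kind described above are of interest in the parallel setting.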
Faculty of Science
Department of Computer Science
Graduate
APA, Harvard, Vancouver, ISO, and other styles
6

Vaz, Alves Gleifer. "Transformations for proof-graphs with cycle treatment augmented via geometric perspective techniques." Universidade Federal de Pernambuco, 2009. https://repositorio.ufpe.br/handle/123456789/1418.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This work is based on two fundamental aspects: (i) the study of normalization procedures for proof systems, especially for classical logic in natural deduction; and (ii) the investigation of geometric-perspective techniques applied to proof-theoretic properties. The specific motivation of the work thus lies mainly in the analysis of works devoted to defining normalization techniques through geometric-perspective mechanisms. Geometric-perspective techniques employ graphical and/or topological frameworks to represent formal proof systems and their properties. Accordingly, the first part of the document presents the use of topological techniques and frameworks to establish certain properties, such as soundness criteria and the normalization of proof systems. The second part begins by describing some normalization approaches, mainly for classical logic in natural deduction. The remainder of the second part is dedicated to the main goal of the work: developing a normalization procedure for the full set of operators of N-Graphs, with the aid of some geometric-perspective techniques. (Note that the geometric-perspective techniques applied to the normalization of N-Graphs do not use topological frameworks.) N-Graphs is a multiple-conclusion proof system defined for classical propositional logic in natural deduction. N-Graphs have both logical and structural rules, cyclic structures are allowed, and derivations are represented as directed graphs. Indeed, the main feature of the normalization procedure presented here is a complete treatment of cyclic structures.
That is, classes of valid cycles, a soundness criterion, properties, and a specific algorithm for normalizing cycles in N-Graphs are defined. These elements are built with the aid of graphical frameworks. Moreover, the normalization mechanism can handle the different roles played by the ?/> operators. Additionally, a direct proof of weak normalization for N-Graphs is presented, along with the subformula and separation properties.
APA, Harvard, Vancouver, ISO, and other styles
7

Zighem, Ismail. "Etude d'invariants de graphes planaires." Université Joseph Fourier (Grenoble), 1998. http://www.theses.fr/1998GRE10211.

Full text
Abstract:
In the first part, we construct, from linear recurrence relations, invariants of 4-regular planar graphs taking their values in a commutative ring. These relations represent well-defined recursive rules on this class of graphs, reducing the computation of the invariant's values on these graphs to a linear combination over smaller graphs. After identifying some necessary conditions for these rules to be mutually compatible, we show, using a result from the theory of rewriting systems, that they are also sufficient. We end this part by discussing the relationship with the Kauffman polynomial and showing that, for a particular evaluation of its variables, this polynomial can be defined from our invariant, which constitutes a new proof of the polynomial's existence. The second part addresses the problem of determining the absorption number of grid-like graphs. First, we determine this number for small cross grids (cross products of two chains of lengths k and n, with k ≤ 33 and n ≤ 40). Using dynamic programming, we present, for fixed k, an algorithm linear in n for computing this number. We then deduce that this number satisfies simple formulas in terms of k and n. Next, we show by induction, taking these values as the base case, that these formulas hold for this number for all k = 12 or k ≥ 14 and n ≥ k. Finally, we give some bounds on the absorption number of the square grid (square product of two chains) that improve on previously known results.
APA, Harvard, Vancouver, ISO, and other styles
8

Preissmann, Myriam. "Sur quelques problèmes théoriques et appliqués de la théorie des graphes : [thèse en partie soutenue sur un ensemble de travaux]." Grenoble 1, 1988. http://www.theses.fr/1988GRE10162.

Full text
Abstract:
A study of graph theory (cubic graphs and the chromatic index, perfect graphs) and applications (combinatorial optimization problems associated with models from statistical physics, and heuristic and exact algorithms for the fast simulation of a VLSI).
APA, Harvard, Vancouver, ISO, and other styles
9

Subramanian, Arunkumar. "Coding techniques for information-theoretic strong secrecy on wiretap channels." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42776.

Full text
Abstract:
Traditional solutions to information security in communication systems act in the application layer and are oblivious to the effects in the physical layer. Physical-layer security methods, of which information-theoretic security is a special case, try to extract security from the random effects in the physical layer. In information-theoretic security, there are two asymptotic notions of secrecy: weak and strong secrecy. This dissertation investigates the problem of information-theoretic strong secrecy on the binary erasure wiretap channel (BEWC) with a specific focus on designing practical codes. The codes designed in this work are based on analysis and techniques from error-correcting codes. In particular, the dual codes of certain low-density parity-check (LDPC) codes are shown to achieve strong secrecy in a coset coding scheme. First, we analyze the asymptotic block-error rate of short-cycle-free LDPC codes when they are transmitted over a binary erasure channel (BEC) and decoded using the belief propagation (BP) decoder. Under certain conditions, we show that the asymptotic block-error rate falls according to an inverse square law in block length, which is shown to be a sufficient condition for the dual codes to achieve strong secrecy. Next, we construct large-girth LDPC codes using algorithms from graph theory and show that the asymptotic bit-error rate of these codes follows a sub-exponential decay as the block length increases, which is a sufficient condition for strong secrecy. The secrecy rates achieved by the duals of large-girth LDPC codes are shown to be an improvement over those of the duals of short-cycle-free LDPC codes.
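The girth (length of a shortest cycle) that drives the large-girth constructions mentioned above can be computed with a standard BFS-based algorithm. A small sketch, illustrative only and not taken from the dissertation:

```python
from collections import deque

def girth(adj):
    """Length of the shortest cycle in a simple undirected graph
    (float('inf') if the graph is acyclic).

    BFS from every vertex; an edge (u, v) closing between two already
    discovered vertices witnesses a cycle of length dist[u] + dist[v] + 1,
    and minimizing over all BFS roots yields the exact girth.
    """
    best = float("inf")
    for root in adj:
        dist = {root: 0}
        parent = {root: None}
        q = deque([root])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    parent[v] = u
                    q.append(v)
                elif parent[u] != v:      # non-tree edge, not back to parent
                    best = min(best, dist[u] + dist[v] + 1)
    return best

c5 = {i: [(i + 1) % 5, (i - 1) % 5] for i in range(5)}   # 5-cycle
k4 = {i: [j for j in range(4) if j != i] for i in range(4)}  # complete graph
```

In LDPC terms, a larger girth of the code's Tanner graph delays the point at which belief-propagation messages become correlated, which is why girth enters the error-rate analysis.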
APA, Harvard, Vancouver, ISO, and other styles
10

Raspaud, André. "Flots et couvertures par des cycles dans les graphes et les matroïdes." Phd thesis, Grenoble 1, 1985. http://tel.archives-ouvertes.fr/tel-00316071.

Full text
Abstract:
Introduction. Cycle covers of graphs. Circuit covers of a regular matroid admitting a nowhere-zero Z5-flow. Fulkerson flows. Petersen flows. Local constructions. Appendix. Bibliography.
APA, Harvard, Vancouver, ISO, and other styles
11

Meinhardt, Llopis Enric. "Morphological and statistical techniques for the analysis of 3D images." Doctoral thesis, Universitat Pompeu Fabra, 2011. http://hdl.handle.net/10803/22719.

Full text
Abstract:
This thesis proposes a tree data structure to encode the connected components of level sets of 3D images. This data structure is applied as a main tool in several proposed applications: 3D morphological operators, medical image visualization, analysis of color histograms, object tracking in videos and edge detection. Motivated by the problem of edge linking, the thesis contains also an study of anisotropic total variation denoising as a tool for computing anisotropic Cheeger sets. These anisotropic Cheeger sets can be used to find global optima of a class of edge linking functionals. They are also related to some affine invariant descriptors which are used in object recognition, and this relationship is laid out explicitly.
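The tree in question encodes how connected components of level sets nest as the threshold varies. A minimal 1D analogue (the thesis works with 3D images) that extracts upper-level-set components, illustrative only:

```python
def upper_components(signal, t):
    """Connected components of the upper level set {i : signal[i] >= t}
    of a 1D signal, returned as (start, end) index intervals, end exclusive."""
    comps, start = [], None
    for i, v in enumerate(signal + [float("-inf")]):  # sentinel closes a run
        if v >= t and start is None:
            start = i
        elif v < t and start is not None:
            comps.append((start, i))
            start = None
    return comps

sig = [1, 3, 1, 2, 1]
# t=1 -> [(0, 5)] : one component containing everything
# t=2 -> [(1, 2), (3, 4)] : two components, one per local maximum
```

As the threshold rises, every component is contained in exactly one component at any lower threshold; that containment is the parent relation in the component tree (the max-tree) that the thesis builds for 3D images.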
APA, Harvard, Vancouver, ISO, and other styles
12

Imbert, Michel. "Combinatoire des revêtements : cellulation des espaces de Hurwitz." Université Joseph Fourier (Grenoble), 1998. http://www.theses.fr/1998GRE10228.

Full text
Abstract:
The subject of this thesis is the combinatorial study of coverings between Riemann surfaces and of their moduli spaces: the Hurwitz spaces. The methods used are graphical: we make extensive use of the concept of ribbon graph ("graphe épais"). We give an algorithm for computing the homology loops of such a graph. We prove Riemann's existence theorem at the level of ribbon graphs. We detail the process, due to J. L. Harer and M. Kontsevich, that realizes a ribbon graph with n faces equipped with edge lengths as a compact Riemann surface with n punctures. This process, bijective by Strebel's theorem, leads to a known and fundamental cell decomposition of the moduli spaces of compact Riemann surfaces of fixed genus with n punctures. We are then able to make precise the variation of the complex structure as a function of the edge lengths (which gives the continuity of the process), and to generalize the process to coverings of compact Riemann surfaces. This leads to the cell decomposition of the Hurwitz spaces. We study an example where the Hurwitz space is identified with a modular curve. The study of cyclic coverings of tori makes it possible to relate the corresponding Hurwitz space to moduli spaces defined by E. Witten in the context of a conjecture generalizing the one proved by M. Kontsevich. The extension of the results to the compactifications of the moduli and Hurwitz spaces relies on the phenomenon of retracting loops of a ribbon graph that do not bound faces.
APA, Harvard, Vancouver, ISO, and other styles
13

Naderi, Ramin. "Quadtree-based processing of digital images." PDXScholar, 1986. https://pdxscholar.library.pdx.edu/open_access_etds/3590.

Full text
Abstract:
Image representation plays an important role in image processing applications, which usually contain a huge amount of data. An image is a two-dimensional array of points, and each point carries information (e.g., color). A 1024-by-1024-pixel image occupies 1 megabyte of space in main memory, and in practice 2 to 3 megabytes are needed to facilitate the various image processing tasks. Large amounts of secondary memory are also required to hold various data sets. In this thesis, two different operations on the quadtree are presented. There are, in general, two types of data compression techniques in image processing. One approach is based on eliminating redundant data from the original picture. Other techniques rely on higher levels of processing, such as interpretation, generation, induction and deduction procedures (1, 2). One popular data representation technique that has received a considerable amount of attention in recent years is the quadtree data structure. This has led to the development of various techniques for performing conversions and operations on the quadtree. Klinger and Dyer (3) provide a good bibliography of the history of quadtrees. Their paper reports experiments on the degree of compaction of picture representation achievable with tree encoding, showing that tree encoding can produce memory savings. Pavlidis [15] reports on the approximation of pictures by quadtrees. Horowitz and Pavlidis [16] show how to segment a picture, by polygonal boundaries, using traversal of a quadtree. Tanimoto [17] discusses distortions which may occur in quadtrees for pictures, and observes [18, p. 27] that quadtree representation is particularly convenient for scaling a picture by powers of two. Quadtrees are also useful in graphics and animation applications [19, 20], which are oriented toward the construction of images from polygons and superpositions of images.
Encoded pictures are useful for display, especially if encoding lends itself to processing.
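A minimal sketch of the quadtree encoding discussed above (illustrative, not the thesis's implementation): a square binary image is recursively split into four quadrants until each block is uniform, so large uniform regions compact into single leaves.

```python
def quadtree(img, r0, c0, size):
    """Encode a size x size region of a binary image as a quadtree.

    Returns a leaf value (0 or 1) for a uniform block, otherwise a list of
    the four child quadrants in NW, NE, SW, SE order.
    """
    vals = {img[r][c] for r in range(r0, r0 + size) for c in range(c0, c0 + size)}
    if len(vals) == 1:
        return vals.pop()            # uniform block -> single leaf
    h = size // 2
    return [quadtree(img, r0, c0, h),     quadtree(img, r0, c0 + h, h),
            quadtree(img, r0 + h, c0, h), quadtree(img, r0 + h, c0 + h, h)]

def leaves(node):
    """Count leaves, i.e. uniform blocks in the encoded picture."""
    return 1 if not isinstance(node, list) else sum(leaves(ch) for ch in node)

img = [[1, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
tree = quadtree(img, 0, 0, 4)
# 16 pixels compact to 7 leaves: 4 pixel leaves in the non-uniform NW
# quadrant plus one leaf for each of the 3 uniform quadrants
```

The compaction the cited experiments measure is exactly this gap between pixel count and leaf count, which grows with the size of uniform regions.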
APA, Harvard, Vancouver, ISO, and other styles
14

Zhang, Eugene. "Surface Topological Analysis for Image Synthesis." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5038.

Full text
Abstract:
Topology-related issues are becoming increasingly important in Computer Graphics. This research examines the use of topological analysis for solving two important problems in 3D Graphics: surface parameterization, and vector field design on surfaces. Many applications, such as high-quality and interactive image synthesis, benefit from the solutions to these problems. Surface parameterization refers to segmenting a 3D surface into a number of patches and unfolding them onto a plane. A surface parameterization allows surface properties to be sampled and stored in a texture map for high-quality and interactive display. One of the most important quality measurements for surface parameterization is stretch, which causes an uneven sampling rate across the surface and needs to be avoided whenever possible. In this thesis, I present an automatic parameterization technique that segments the surface according to the handles and large protrusions in the surface. This results in a small number of large patches that can be unfolded with relatively little stretch. To locate the handles and large protrusions, I make use of topological analysis of a distance-based function on the surface. Vector field design refers to creating continuous vector fields on 3D surfaces with control over vector field topology, such as the number and location of the singularities. Many graphics applications make use of an input vector field. The singularities in the input vector field often cause visual artifacts for these applications, such as texture synthesis and non-photorealistic rendering. In this thesis, I describe a vector field design system for both planar domains and 3D mesh surfaces. The system provides topological editing operations that allow the user to control the number and location of the singularities in the vector field. 
For the system to work on 3D mesh surfaces, I present a novel piecewise interpolating scheme that produces a continuous vector field based on the vector values defined at the vertices of the mesh. I demonstrate the effectiveness of the system through several graphics applications: painterly rendering of still images, pencil sketches of surfaces, and texture synthesis.
APA, Harvard, Vancouver, ISO, and other styles
15

Majd, Farjam. "Two new parallel processors for real time classification of 3-D moving objects and quad tree generation." PDXScholar, 1985. https://pdxscholar.library.pdx.edu/open_access_etds/3421.

Full text
Abstract:
Two related image processing problems are addressed in this thesis. First, the problem of identification of 3-D objects in real time is explored. An algorithm to solve this problem and a hardware system for parallel implementation of this algorithm are proposed. The classification scheme is based on the "Invariant Numerical Shape Modeling" (INSM) algorithm originally developed for 2-D pattern recognition such as alphanumeric characters. This algorithm is then extended to 3-D and is used for general 3-D object identification. The hardware system is an SIMD parallel processor, designed in bit slice fashion for expandability. It consists of a library of images coded according to the 3-D INSM algorithm and the SIMD classifier which compares the code of the unknown image to the library codes in a single clock pulse to establish its identity. The output of this system consists of three signals: U, for unique identification; M, for multiple identification; and N, for non-identification of the object. Second, the problem of real time image compaction is addressed. The quad tree data structure is described. Based on this structure, a parallel processor with a tree architecture is developed which is independent of the data entry process, i.e., data may be entered pixel by pixel or all at once. The hardware consists of a tree processor containing a tree generator and three separate memory arrays, a data transfer processor, and a main memory unit. The tree generator generates the quad tree of the input image in tabular form, using the memory arrays in the tree processor for storage of the table. This table can hold one picture frame at a given time. Hence, for processing multiple picture frames the data transfer processor is used to transfer their respective quad trees from the tree processor memory to the main memory. An algorithm is developed to facilitate the determination of the connections in the circuit.
APA, Harvard, Vancouver, ISO, and other styles
16

Pocztar, Laurence. "Mise en correspondance spatio-temporelle d'images pour la restitution d'une gerbe de particules." Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb376089045.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Darishchev, Alexander. "Analyse de connectivité et techniques de partitionnement de données appliquées à la caractérisation et la modélisation d'écoulement au sein des réservoirs très hétérogènes." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S162.

Full text
Abstract:
Computer-based workflows have gained a paramount role in the development and exploitation of natural hydrocarbon resources and other subsurface operations. One of the crucial problems of reservoir modelling and production forecasting is pre-selecting appropriate reservoir models for quantifying uncertainty and robustly matching flow-simulation results to the real measurements and observations acquired from the field. This thesis addresses these and other related issues. We have explored a strategy to facilitate and speed up the adjustment of such numerical models to the available field production data. Originally, the focus of this research was on conceptualising, developing and implementing fast proxy models based on the analysis of connectivity, as a physically meaningful and integral property of the reservoir, together with advanced data-partitioning and cluster-analysis techniques. The developed methodology also includes several original probability-oriented approaches to the problems of sampling uncertainty and of determining the sample size and the expected value of sample information. To target and prioritise relevant reservoir models, we aggregated geostatistical realisations into distinct classes with a generalised distance measure. Then, to improve the classification, we extended the silhouette-based graphical technique, called hereafter the "entire sequence of multiple silhouettes", in cluster analysis. This approach provided clear and comprehensive information about the intra- and inter-cluster dissimilarities, especially helpful in the case of weak, or even artificial, structures. Finally, the spatial separation and form difference of clusters were graphically visualised and quantified with a scale-invariant probabilistic distance measure.
The obtained relationships appear to justify and validate the applicability of the proposed approaches to enhance flow characterisation and modelling. Reliable correlations were obtained between the shortest injector-producer connectivity paths and the water-breakthrough times for different well-placement configurations, heterogeneity levels and fluid mobility ratios. The proposed connectivity models produced sufficiently accurate results and competitive performance at the meta-level. Their use as ad hoc precursors and predictors is beneficial in the pre-processing stage of the methodology. Before history matching, an appropriate and manageable number of relevant models can be identified by comparing the available production data with the results of
Reliable correlations were found between the shortest "injector-producer" pathways and water breakthrough times for different configurations of well placement, various heterogeneity levels and mobility ratios of fluids. The proposed graph-based connectivity proxies provided sufficiently accurate results and competitive performance at the meta-level. Their use as precursors and ad hoc predictors is beneficial at the pre-processing stage of the workflow. Prior to history matching, a suitable and manageable number of appropriate reservoir models can be identified from the comparison of the available production data with the selected centrotype-models regarded as the class representatives, the only models for which full fluid-flow simulation is a prerequisite. The findings of this research work can easily be generalised and considered in a wider scope. Possible extensions, further improvements and implementations may also be expected in other fields of science and technology.
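The clustering-and-silhouette workflow described in this abstract can be illustrated with a small sketch. This is not the thesis's code: the feature vectors standing in for connectivity-based descriptors of geostatistical realisations are synthetic, and scikit-learn's agglomerative clustering stands in for the generalised distance-based aggregation.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_samples, silhouette_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for connectivity-based feature vectors of 60 realisations
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(20, 4)) for c in (0.0, 3.0, 6.0)])

labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)

# Per-realisation silhouettes: the idea behind a "sequence of silhouettes" is
# to inspect these values cluster by cluster, not only the single mean score,
# which exposes weak or artificial cluster structure.
sil = silhouette_samples(X, labels)
for k in range(3):
    print(f"cluster {k}: mean silhouette = {sil[labels == k].mean():.3f}")
print(f"overall: {silhouette_score(X, labels):.3f}")
```

Realisations with low or negative silhouette values would be candidates for reassignment or closer inspection before class representatives are selected.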
APA, Harvard, Vancouver, ISO, and other styles
18

Chiwewe, Tapiwa Moses. "A distributed topology control technique for low interference and energy efficiency in wireless sensor networks." Diss., University of Pretoria, 2010. http://hdl.handle.net/2263/22800.

Full text
Abstract:
Wireless sensor networks are used in several multi-disciplinary areas covering a wide variety of applications. They provide distributed computing, sensing and communication in a powerful integration of capabilities. They have great long-term economic potential and have the ability to transform our lives. At the same time however, they pose several challenges – mostly as a result of their random deployment and non-renewable energy sources. Among the most important issues in wireless sensor networks are energy efficiency and radio interference. Topology control plays an important role in the design of wireless ad hoc and sensor networks; it is capable of constructing networks that have desirable characteristics such as sparser connectivity, lower transmission power and a smaller node degree. In this research a distributed topology control technique is presented that enhances energy efficiency and reduces radio interference in wireless sensor networks. Each node in the network makes local decisions about its transmission power and the culmination of these local decisions produces a network topology that preserves global connectivity. The topology that is produced consists of a planar graph that is a power spanner; it has lower node degrees and can be constructed using local information. The network lifetime is increased by reducing transmission power and the use of low node degrees reduces traffic interference. The approach to topology control that is presented in this document has an advantage over previously developed approaches in that it focuses not only on reducing either energy consumption or radio interference, but on reducing both of these obstacles. Results are presented of simulations that demonstrate improvements in performance. AFRIKAANS : Draadlose sensor netwerke word gebruik in verskeie multi-dissiplinêre areas wat 'n wye verskeidenheid toepassings dek. Hulle voorsien verspreide berekening, bespeuring en kommunikasie in 'n kragtige integrate van vermoëns. 
Hulle het goeie langtermyn ekonomiese potentiaal en die vermoë om ons lewens te herskep. Terselfdertyd lewer dit egter verskeie uitdagings op as gevolg van hul lukrake ontplooiing en nie-hernubare energie bronne. Van die belangrikste kwessies in draadlose sensor netwerke is energie-doeltreffendheid en radiosteuring. Topologie-beheer speel 'n belangrike rol in die ontwerp van draadlose informele netwerke en sensor netwerke en dit is geskik om netwerke aan te bring wat gewenste eienskappe het soos verspreide koppeling, laer transmissiekrag en kleiner nodus graad.In hierdie ondersoek word 'n verspreide topologie beheertegniek voorgelê wat energie-doeltreffendheid verhoog en radiosteuring verminder in draadlose sensor netwerke. Elke nodus in die netwerk maak lokale besluite oor sy transmissiekrag en die hoogtepunt van hierdie lokale besluite lewer 'n netwerk-topologie op wat globale verbintenis behou.Die topologie wat gelewer word is 'n tweedimensionele grafiek en 'n kragsleutel; dit het laer nodus grade en kan gebou word met lokale inligting. Die netwerk-leeftyd word vermeerder deur transmissiekrag te verminder en verkeer-steuring word verminder deur lae nodus grade. Die benadering tot topologie-beheer wat voorgelê word in hierdie skrif het 'n voordeel oor benaderings wat vroeër ontwikkel is omdat dit nie net op die vermindering van net energie verbruik of net radiosteuring fokus nie, maar op albei. Resultate van simulasies word voorgelê wat die verbetering in werkverrigting demonstreer.
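The planar power-spanner topology this abstract describes can be illustrated by computing a Gabriel graph, one classic structure with exactly these properties: it is planar, it is a power spanner, it contains the Euclidean minimum spanning tree (so connectivity is preserved), and it has low average degree. This is a hedged, centralized sketch of the target structure that distributed protocols approximate with local decisions; the node positions are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(40, 2))      # random sensor positions
n = len(pts)
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)  # squared distances

# Gabriel graph: keep edge (u, v) iff no third node lies strictly inside the
# disk whose diameter is the segment uv.  The result is planar, a power
# spanner, and a superset of the Euclidean MST, so it stays connected.
gabriel = np.zeros((n, n), dtype=bool)
for u in range(n):
    for v in range(u + 1, n):
        mid = (pts[u] + pts[v]) / 2.0
        r2 = d2[u, v] / 4.0
        inside = ((pts - mid) ** 2).sum(-1) < r2 - 1e-12
        inside[[u, v]] = False
        if not inside.any():
            gabriel[u, v] = gabriel[v, u] = True

def connected(adj):
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in np.flatnonzero(adj[u]):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == adj.shape[0]

print("connected:", connected(gabriel))
print("max degree:", int(gabriel.sum(1).max()))
```

The Gabriel test is local in spirit (it only involves the two endpoints and nodes between them), which is why such structures are attractive for distributed topology control.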
Dissertation (MEng)--University of Pretoria, 2011.
Electrical, Electronic and Computer Engineering
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
19

Kwok, Wing-hong, and 郭永康. "Streamlined and prioritized hierarchical relations: a technique for improving the effectiveness of theclassification-tree methodology." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B2975107X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Teng, Sin Yong. "Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-433427.

Full text
Abstract:
As new technologies for energy-intensive industries continue to evolve, existing plants gradually fall behind in efficiency and productivity. Fierce market competition and environmental legislation push these traditional plants towards shutdown and decommissioning. Process improvement and retrofit projects are therefore essential to sustain the operational performance of such plants. Current approaches to process improvement are mainly process integration, process optimization and process intensification. These fields generally rely on mathematical optimization, practitioner experience and operational heuristics, and they serve as the foundation for process improvement. However, their performance can be further enhanced by modern computational intelligence. The purpose of this work is therefore the application of advanced artificial intelligence and machine learning techniques to process improvement in energy-intensive industrial processes. The thesis approaches this problem by simulating industrial systems and contributes the following: (i) application of machine learning techniques, including one-shot learning and neuro-evolution, for data-driven modelling and optimization of individual units; (ii) application of dimensionality reduction (e.g. principal component analysis, autoencoders) for multi-objective optimization of multi-unit processes; (iii) design of a new bottleneck tree analysis (BOTA) tool for identifying and removing problematic parts of a system, together with an extension of the tool that handles multi-dimensional problems using a data-driven approach; (iv) demonstration of the effectiveness of Monte Carlo simulation, neural networks and decision trees for decision-making when integrating new process technology into existing processes; (v) comparison of hierarchical temporal memory (HTM) and dual optimization with several predictive tools for supporting real-time operations management; 
(vi) implementation of an artificial neural network within an interface to the conventional process graph (P-graph); (vii) a highlight of the future of artificial intelligence and process engineering in biosystems through a commercially driven multi-omics paradigm.
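Item (iv) of this abstract mentions Monte Carlo simulation for deciding whether to integrate a new process technology into an existing plant. A minimal illustrative sketch of that idea follows; the retrofit cost and savings figures are entirely hypothetical assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Hypothetical retrofit: all names and figures below are illustrative
# assumptions, not data from the thesis.
capex = 2.0e6                                       # upfront cost [EUR]
energy_price = rng.normal(45.0, 8.0, N)             # EUR per MWh, uncertain
saving = rng.triangular(8_000, 12_000, 15_000, N)   # MWh saved per year
annual_benefit = energy_price * saving

# Simple 10-year undiscounted screen: how likely is the retrofit to pay off?
npv10 = 10 * annual_benefit - capex
p_positive = (npv10 > 0).mean()
print(f"P(10-year benefit exceeds capex) = {p_positive:.3f}")
```

Propagating the input uncertainty through to a probability of payback, rather than a single point estimate, is what makes the Monte Carlo screen useful for go/no-go decisions.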
APA, Harvard, Vancouver, ISO, and other styles
21

Hsu, Chi-Yu, and 胥吉友. "Improved Image Segmentation Techniques Based on Superpixels and Graph Theory with Applications of Saliency Detection." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/40870211370266280310.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Communication Engineering
101
Image segmentation is a fundamental problem in computer vision and image processing. Though this topic has been researched for many years, it is still a challenging task. Recently, research on superpixels has made great progress. This new technique makes traditional segmentation algorithms more efficient and improves their performance. On the other hand, saliency detection is another emerging topic in image processing, and its performance is usually closely related to the underlying segmentation technique. In this thesis, we propose two algorithms, for image segmentation and saliency detection, respectively. For image segmentation, an effective graph-based image segmentation algorithm using a superpixel-based graph representation is introduced. The techniques of SLIC superpixels, 5-D spectral clustering, and boundary-focused region merging are adopted in the proposed algorithm. With SLIC superpixels, the original image segmentation problem is transformed into a superpixel labeling problem, which makes the proposed algorithm more efficient than pixel-based segmentation algorithms. With the proposed methods of 5-D spectral clustering and boundary-focused region merging, position information is considered during clustering and the threshold for region merging can be adaptive. These techniques make the segmentation result more consistent with human perception. Simulations on the Berkeley segmentation database show that our proposed method outperforms state-of-the-art methods. For saliency detection, a very effective saliency detection algorithm is proposed. Our algorithm is mainly based on two new techniques. First, the discrete cosine transform (DCT) is used for constructing the block-wise saliency map. Then, the superpixel-based segmentation is applied. 
Since DCT coefficients can reflect the color features of each block in the frequency domain and superpixels can well preserve object boundaries, with these two techniques, the performance of saliency detection can be significantly improved. The simulations performed on a database of 1000 images with human-marked ground truths show that our proposed method can extract the salient region very accurately and outperforms all of the existing saliency detection methods.
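The DCT-based block-wise saliency idea can be sketched as follows. This is an illustrative reconstruction, not the thesis's algorithm: the image is synthetic, and saliency is taken simply as each 8×8 block's distance from the global mean DCT-magnitude signature, so blocks with rare spectra stand out.

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(3)
# Synthetic grayscale image: flat background with a textured square "object"
img = np.full((64, 64), 0.5) + rng.normal(0, 0.01, (64, 64))
img[24:40, 24:40] += rng.normal(0, 0.3, (16, 16))

B = 8  # block size
blocks = img.reshape(8, B, 8, B).swapaxes(1, 2)   # (8, 8, B, B) grid of blocks
feats = np.array([[np.abs(dctn(b, norm='ortho')).ravel() for b in row]
                  for row in blocks])             # DCT-magnitude signatures

# Block-wise saliency: distance of each block's signature from the global
# mean signature; blocks whose frequency content is unusual score high.
mean_sig = feats.reshape(-1, B * B).mean(0)
saliency = np.linalg.norm(feats - mean_sig, axis=-1)
print("most salient block:", np.unravel_index(saliency.argmax(), saliency.shape))
```

A full pipeline would then project the block-wise map onto superpixels so salient regions follow object boundaries instead of the block grid.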
APA, Harvard, Vancouver, ISO, and other styles
22

Guyer, Michael. "Common Techniques in Graceful Tree Labeling with a New Computational Approach." 2016. http://digital.library.duq.edu/u?/etd,197178.

Full text
Abstract:
The graceful tree conjecture was first introduced over 50 years ago, and to this day it remains largely unresolved. Ideas for how to label arbitrary trees have been sparse, and so most work in this area focuses on demonstrating that particular classes of trees are graceful. In my research, I continue this effort and establish the gracefulness of some new tree types using previously developed techniques for constructing graceful trees. Meanwhile, little work has been done on developing computational methods for obtaining graceful labelings, as direct approaches are computationally infeasible for even moderately large trees. With this in mind, I have designed a new computational approach for constructing a graceful labeling for trees with sufficiently many leaves. This approach leverages information about the local structures present in a given tree in order to construct a suitable labeling. It has been shown to work for many small cases and thoughts on how to extend this approach for larger trees are put forth.
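A direct computational approach of the kind this abstract calls infeasible for large trees is easy to state for small ones. A brute-force graceful-labeling search over a 5-vertex tree, shown here only to make the definition concrete:

```python
from itertools import permutations

def graceful_labeling(edges):
    """Brute-force search: assign labels 0..m to the m+1 vertices of a tree
    with m edges so that the edge labels |f(u) - f(v)| are exactly 1..m."""
    verts = sorted({v for e in edges for v in e})
    m = len(edges)
    for perm in permutations(range(m + 1)):
        f = dict(zip(verts, perm))
        if {abs(f[u] - f[v]) for u, v in edges} == set(range(1, m + 1)):
            return f
    return None  # a tree reaching this line would disprove the conjecture

# A small caterpillar: path 0-1-2-3 with an extra leaf 4 attached at vertex 1
tree = [(0, 1), (1, 2), (2, 3), (1, 4)]
lab = graceful_labeling(tree)
print("graceful labeling:", lab)
```

The (m+1)! search space is exactly why structural approaches that exploit local features, like the one proposed in the thesis, are needed for larger trees.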
McAnulty College and Graduate School of Liberal Arts;
Computational Mathematics
MS;
Thesis;
APA, Harvard, Vancouver, ISO, and other styles
23

"Constraint optimization techniques for graph matching applicable to 3-D object recognition." Chinese University of Hong Kong, 1996. http://library.cuhk.edu.hk/record=b5888888.

Full text
Abstract:
by Chi-Min Pang.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1996.
Includes bibliographical references (leaves 110-[115]).
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Range Images --- p.1
Chapter 1.2 --- Rigid Body Model --- p.3
Chapter 1.3 --- Motivation --- p.4
Chapter 1.4 --- Thesis Outline --- p.6
Chapter 2 --- Object Recognition by Relaxation Processes --- p.7
Chapter 2.1 --- An Overview of Probabilistic Relaxation Labelling --- p.8
Chapter 2.2 --- Formulation of Model-matching Problem Solvable by Probabilistic Relaxation --- p.10
Chapter 2.2.1 --- Compatibility Coefficient --- p.11
Chapter 2.2.2 --- Match Score --- p.13
Chapter 2.2.3 --- Iterative Algorithm --- p.14
Chapter 2.2.4 --- A Probabilistic Concurrent Matching Scheme --- p.15
Chapter 2.3 --- Formulation of Model-merging Problem Solvable by Fuzzy Relaxation --- p.17
Chapter 2.3.1 --- Updating Mechanism --- p.17
Chapter 2.3.2 --- Iterative Algorithm --- p.19
Chapter 2.3.3 --- Merging Sub-Rigid Body Models --- p.20
Chapter 2.4 --- Simulation Results --- p.21
Chapter 2.4.1 --- Experiments in Model-matching Using Probabilistic Relaxation --- p.22
Chapter 2.4.2 --- Experiments in Model-matching Using Probabilistic Concurrent Matching Scheme --- p.26
Chapter 2.4.3 --- Experiments in Model-merging Using Fuzzy Relaxation --- p.33
Chapter 2.5 --- Summary --- p.36
Chapter 3 --- Object Recognition by Hopfield Network --- p.37
Chapter 3.1 --- An Overview of Hopfield Network --- p.38
Chapter 3.2 --- Model-matching Problem Solved by Hopfield Network --- p.41
Chapter 3.2.1 --- Representation of the Solution --- p.41
Chapter 3.2.2 --- Energy Function --- p.42
Chapter 3.2.3 --- Equations of Motion --- p.46
Chapter 3.2.4 --- Interpretation of Solution --- p.49
Chapter 3.2.5 --- Convergence of the Hopfield Network --- p.50
Chapter 3.2.6 --- Iterative Algorithm --- p.51
Chapter 3.3 --- Estimation of Distance Threshold Value --- p.53
Chapter 3.4 --- Cooperative Concurrent Matching Scheme --- p.55
Chapter 3.4.1 --- Scheme for Recognizing a Single Object --- p.56
Chapter 3.4.2 --- Scheme for Recognizing Multiple Objects --- p.60
Chapter 3.5 --- Simulation Results --- p.60
Chapter 3.5.1 --- Experiments in the Model-matching Problem Using a Hopfield Network --- p.61
Chapter 3.5.2 --- Experiments in Model-matching Problem Using Cooperative Concurrent Matching --- p.69
Chapter 3.5.3 --- Experiments in Model-merging Problem Using Hopfield Network --- p.77
Chapter 3.6 --- Summary --- p.80
Chapter 4 --- Genetic Generation of Weighting Parameters for Hopfield Network --- p.83
Chapter 4.1 --- An Overview of Genetic Algorithms --- p.84
Chapter 4.2 --- Determination of Weighting Parameters for Hopfield Network --- p.86
Chapter 4.2.1 --- Chromosomal Representation --- p.87
Chapter 4.2.2 --- Initial Population --- p.88
Chapter 4.2.3 --- Evaluation Function --- p.88
Chapter 4.2.4 --- Genetic Operators --- p.89
Chapter 4.2.5 --- Control Parameters --- p.91
Chapter 4.2.6 --- Iterative Algorithm --- p.94
Chapter 4.3 --- Simulation Results --- p.95
Chapter 4.3.1 --- Experiments in Model-matching Problem using Hopfield Network with Genetically Generated Parameters --- p.95
Chapter 4.3.2 --- Experiments in Model-merging Problem Using Hopfield Network --- p.101
Chapter 4.4 --- Summary --- p.104
Chapter 5 --- Conclusions --- p.106
Chapter 5.1 --- Conclusions --- p.106
Chapter 5.2 --- Suggestions for Future Research --- p.109
Bibliography --- p.110
Chapter A --- Proof of Convergence of Fuzzy Relaxation Process --- p.116
APA, Harvard, Vancouver, ISO, and other styles
24

Θεοχαράτος, Χρήστος. "Επεξεργασία και διαχείριση εικόνας βασισμένη σε μεθόδους γραφημάτων και αναγνώρισης προτύπων με εφαρμογή σε πολυμέσα." Thesis, 2006. http://nemertes.lis.upatras.gr/jspui/handle/10889/3557.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

(8083247), Yong Hoon Kim. "INTEGRATED MODELING FRAMEWORK FOR DYNAMIC INFORMATION FLOW AND TRAFFIC FLOW UNDER VEHICLE-TO-VEHICLE COMMUNICATIONS: THEORETICAL ANALYSIS AND APPLICATION." Thesis, 2019.

Find full text
Abstract:
Advances in information and communication technologies enable new paradigms for connectivity involving vehicles, infrastructure, and the broader road transportation system environment. Vehicle-to-vehicle (V2V) communications under the aegis of the connected vehicle are being leveraged for novel applications related to traffic safety, management, and control, which lead to a V2V-based traffic system. Within the framework of a V2V-based traffic system, this study proposes an integrated modeling framework to model the dynamics of a V2V-based traffic system that entails spatiotemporal interdependencies among the traffic flow dynamics, V2V communication constraints, the dynamics of information flow propagation, and V2V-based application. The proposed framework systematically exploits their spatiotemporal interdependencies by theoretical and computational approaches.
First, a graph-based multi-layer framework is proposed to model the V2V-based advanced traveler information system (ATIS) as a complex system comprising coupled network layers. This framework addresses the dynamics of the vehicular traffic flow, inter-vehicle communication, and information flow propagation components, each within its own layer, while capturing their interactions among layers. This makes it possible to transparently understand the spatiotemporal evolution of information flow propagation through a graph structure. A novel contribution is the systematic modeling of an evolving information flow network that is characterized as the manifestation of spatiotemporal events in the other two networks to enhance the understanding of the information flow evolution by capturing the dynamics of the interactions involving the traffic flow and the inter-vehicle communication layers. The graph-based approach enables the computationally efficient tracking of information propagation using a simple graph-based search algorithm and the computationally efficient storage of information through a single graph database.
Second, this dissertation proposes analytical approaches that enable theoretical investigation into the qualitative properties of information flow propagation speed. The proposed analytical models, motivated by spatiotemporal epidemiology, introduce the concept of an information flow propagation wave (IFPW) to facilitate the analysis of the information propagation characteristics and impacts of traffic dynamics at a macroscopic level. The first model consists of a system of difference equations in the discrete-space and discrete-time domains, where information dissemination is described in the upper layer and vehicular traffic flow is modeled in the lower layer. This study further proposes a continuous-space and continuous-time analytical model that can provide a closed-form solution for the IFPW speed to establish an analytical relationship between the IFPW speed and the underlying traffic flow dynamics. It can incorporate the effects of congested traffic, such as the backward traffic propagation wave, on information flow propagation. Thereby, it illustrates the linkage between information flow propagation and the underlying traffic dynamics. Further, it captures V2V communication constraints in a realistic manner using a probabilistic communication kernel (which captures the probability of successful communication).
Third, within the integrated modeling framework, this dissertation captures the impact of information flow propagation on traffic safety and control applications. The proposed multi-anticipative forward collision warning system predicts the driver's maneuver intention using a coupled hidden Markov model, a statistical machine learning technique. By addressing the uncertainty associated with future motion prediction, it significantly reduces false alarm rates and improves prediction performance compared with currently available sensor-based kinematic models. A network-level simulation framework is developed to investigate a V2V-based ATIS in a large-scale network by capturing its inter-dependencies and feedback loops. This modeling framework provides an understanding of the relationship between travelers' routing decisions and information flow propagation.
This thesis provides a holistic understanding of information flow propagation characteristics in space and time by characterizing the interactions among information flow propagation, the underlying traffic flow, and V2V communications. The proposed models and the closed-form solution for the IFPW speed can help in designing effective V2V-based traffic systems without relying on computationally expensive numerical methods. This approach represents a building block for developing both descriptive capabilities and prescriptive strategies related to propagating the flow of useful information efficiently and synergistically generating routing mechanisms that enhance traffic network performance. Given the lack of appropriate methodologies to characterize information flow propagation, this thesis is expected to make a novel and significant contribution to understanding the characteristics of V2V-based traffic systems and their analysis.
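The discrete-space, discrete-time IFPW idea (an epidemic-style difference equation coupled to a probabilistic communication kernel) can be sketched as follows. The kernel shape, the reception model, and all constants are illustrative assumptions, not the dissertation's calibrated values.

```python
import numpy as np

# One-dimensional road split into cells; rho vehicles per cell (uniform,
# free-flow traffic as the simplest case).  I[k] = informed fraction in cell k.
L, T = 300, 60
rho = 10.0                          # vehicles per cell (assumed density)
I = np.zeros(L)
I[0] = 1.0                          # information seeded at the left boundary

# Probabilistic communication kernel: reception probability decays with
# cell distance (an assumed form standing in for the thesis's kernel).
dist = np.abs(np.subtract.outer(np.arange(L), np.arange(L)))
kernel = np.where(dist <= 3, 0.5 ** dist, 0.0)

front = []
for _ in range(T):
    exposure = kernel @ (rho * I)            # informed transmitters in range
    p_receive = 1.0 - np.exp(-0.2 * exposure)
    I = I + (1.0 - I) * p_receive            # SIR-style difference equation
    front.append(int(np.argmax(I < 0.5)))    # position of the 50%-informed front

speeds = np.diff(front[T // 2:])             # speed once the wave has settled
print("mean IFPW speed (cells/step):", speeds.mean())
```

After a transient, the informed front advances at a roughly constant rate, which is the traveling-wave (IFPW) speed that the continuous model solves for in closed form.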
APA, Harvard, Vancouver, ISO, and other styles
26

Barnard, Andries. "Uitgebreide struktuurgrafiekgrammatikas." Thesis, 2014. http://hdl.handle.net/10210/12967.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Dlamini, Wisdom Mdumiseni Dabulizwe. "Spatial analysis of invasive alien plant distribution patterns and processes using Bayesian network-based data mining techniques." Thesis, 2016. http://hdl.handle.net/10500/20692.

Full text
Abstract:
Invasive alien plants have widespread ecological and socioeconomic impacts throughout many parts of the world, including Swaziland where the government declared them a national disaster. Control of these species requires knowledge of the invasion ecology of each species, including how they interact with the invaded environment. Species distribution models are vital for providing solutions to such problems, including the prediction of their niche and distribution. Various modelling approaches are used for species distribution modelling, albeit with limitations resulting from statistical assumptions, implementation and interpretation of outputs. This study explores the usefulness of Bayesian networks (BNs) due to their ability to model stochastic, nonlinear inter-causal relationships and uncertainty. Data-driven BNs were used to explore patterns and processes influencing the spatial distribution of 16 priority invasive alien plants in Swaziland. Various BN structure learning algorithms were applied within the Weka software to build models from a set of 170 variables incorporating climatic, anthropogenic, topo-edaphic and landscape factors. While all the BN models produced accurate predictions of alien plant invasion, the globally scored networks, particularly the hill climbing algorithms, performed relatively well. However, when considering the probabilistic outputs, the constraint-based Inferred Causation algorithm, which attempts to generate a causal BN structure, performed relatively better. The learned BNs reveal that the main pathways of alien plants into new areas are ruderal areas such as road verges and riverbanks whilst humans and human activity are key driving factors and the main dispersal mechanism. However, the distribution of most of the species is constrained by climate, particularly tolerance to very low temperatures and precipitation seasonality. Biotic interactions and/or associations among the species are also prevalent. 
The findings suggest that most of the species will proliferate by extending their range resulting in the whole country being at risk of further invasion. The ability of BNs to express uncertain, rather complex conditional and probabilistic dependencies and to combine multisource data makes them an attractive technique for species distribution modeling, especially as joint invasive species distribution models (JiSDM). Suggestions for further research are provided including the need for rigorous invasive species monitoring, data stewardship and testing more BN learning algorithms.
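A score-based structure-learning loop of the hill-climbing kind mentioned in this abstract can be sketched by hand on synthetic binary data. This stands in for Weka's algorithms: BIC scoring with single-edge add/delete moves is a minimal illustration, not the study's actual setup, and the variables A, B, C are synthetic stand-ins.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(5)
N = 2000
A = rng.integers(0, 2, N)
B = np.where(rng.random(N) < 0.9, A, 1 - A)   # B copies A with 10% noise
C = rng.integers(0, 2, N)                     # independent of A and B
data = {'A': A, 'B': B, 'C': C}
names = list(data)

def bic(node, parents):
    """BIC contribution of one binary node given its parent set."""
    x = data[node]
    key = sum(data[p] << i for i, p in enumerate(parents)) if parents else np.zeros(N, int)
    ll = 0.0
    for c in range(2 ** len(parents)):        # each parent configuration
        sel = key == c
        n = int(sel.sum())
        if n == 0:
            continue
        n1 = int(x[sel].sum())
        for cnt in (n1, n - n1):
            if cnt:
                ll += cnt * np.log(cnt / n)
    return ll - 0.5 * (2 ** len(parents)) * np.log(N)

def acyclic(parents_of):
    seen = set()
    def visit(v, path):
        if v in path:
            return False
        if v in seen:
            return True
        seen.add(v)
        return all(visit(p, path | {v}) for p in parents_of[v])
    return all(visit(v, frozenset()) for v in names)

# Greedy hill climbing over single-edge toggles (add or delete u -> v)
parents_of = {v: set() for v in names}
improved = True
while improved:
    improved = False
    cur = {v: bic(v, sorted(parents_of[v])) for v in names}
    best = None
    for u, v in permutations(names, 2):
        trial = set(parents_of[v]) ^ {u}      # toggle the edge u -> v
        cand = dict(parents_of)
        cand[v] = trial
        if not acyclic(cand):
            continue
        gain = bic(v, sorted(trial)) - cur[v]
        if gain > 1e-9 and (best is None or gain > best[0]):
            best = (gain, v, trial)
    if best:
        parents_of[best[1]] = best[2]
        improved = True

edges = {(u, v) for v in names for u in parents_of[v]}
print("learned edges:", edges)
```

The search recovers the A-B dependency (in one of the two score-equivalent orientations) and leaves the independent variable isolated, which is the behaviour score-based learners exploit at much larger scale.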
Environmental Sciences
D. Phil. (Environmental Science)
APA, Harvard, Vancouver, ISO, and other styles
28

Malema, Gabofetswe Alafang. "Low-density parity-check codes : construction and implementation." 2007. http://hdl.handle.net/2440/45525.

Full text
Abstract:
Low-density parity-check (LDPC) codes have been shown to have good error correcting performance approaching Shannon's limit. Good error correcting performance enables efficient and reliable communication. However, an LDPC code decoding algorithm needs to be executed efficiently to meet cost, time, power and bandwidth requirements of target applications. The constructed codes should also meet error rate performance requirements of those applications. Since their rediscovery, there has been much research work on LDPC code construction and implementation. LDPC codes can be designed over a wide space with parameters such as girth, rate and length. There is no unique method of constructing LDPC codes. Existing construction methods are limited in their ability to produce easily implementable codes with good error-correcting performance for a given rate and length. There is a need to develop methods of constructing codes over a wide range of rates and lengths with good performance and ease of hardware implementability. LDPC code hardware design and implementation depend on the structure of the target LDPC code and are as varied as LDPC matrix designs and constructions. There are several factors to be considered including decoding algorithm computations, processing-node interconnection networks, number of processing nodes, amount of memory, number of quantization bits and decoding delay. All of these issues can be handled in several different ways. This thesis is about construction of LDPC codes and their hardware implementation. LDPC code construction and implementation issues mentioned above are too many to be addressed in one thesis. The main contribution of this thesis is the development of LDPC code construction methods for some classes of structured LDPC codes and techniques for reducing decoding time. We introduce two main methods for constructing structured codes. In the first method, column-weight two LDPC codes are derived from distance graphs. 
A wide range of girths, rates and lengths are obtained compared to existing methods. The performance and implementation complexity of obtained codes depend on the structure of their corresponding distance graphs. In the second method, a search algorithm based on bit-filling and progressive edge-growth algorithms is introduced for constructing quasi-cyclic LDPC codes. The algorithm can be used to form a distance or Tanner graph of a code. This method could also obtain codes over a wide range of parameters. Cycles of length four are avoided by observing the row-column constraint. Row-column connections observing this condition are searched sequentially or randomly. Although the girth conditions are not sufficient beyond six, larger-girth codes were easily obtained, especially at low rates. The advantage of this algorithm compared to other methods is its flexibility. It could be used to construct codes for a given rate and length with girths of at least six for any sub-matrix configuration or rearrangement. The code size is also easily varied by increasing or decreasing sub-matrix size. Codes obtained using sequential search criteria show poor performance at low girths (6 and 8) while random searches result in good-performing codes. Quasi-cyclic codes could be implemented in a variety of decoder architectures. One of the many options is the choice of processing-node interconnect. We show how quasi-cyclic code processing could be scheduled through a multistage network. Although these networks have more delay than other modes of communication, they offer more flexibility at a reasonable cost. Banyan and Benes networks are suggested as the most suitable networks. Decoding delay is also one of several issues considered in decoder design and implementation. In this thesis, we overlap check and variable node computations to reduce decoding time. Three techniques are discussed, two of which are introduced in this thesis. 
The techniques are code matrix permutation, matrix space restriction and sub-matrix row-column scheduling. Matrix permutation rearranges the parity-check matrix such that rows and columns that do not have connections in common are separated. This technique can be applied to any matrix. Its effectiveness largely depends on the structure of the code. We show that its success also depends on the size of row and column weights. Matrix space restriction is another technique that can be applied to any code and has a fixed reduction in time or amount of overlap. Its success depends on the amount of restriction and may be traded off against performance loss. The third technique, already suggested in the literature, relies on the internal cyclic structure of sub-matrices to achieve overlapping. The technique is limited to LDPC code matrices in which the number of sub-matrices is equal to row and column weights. We show that it can be applied to other codes with a larger number of sub-matrices than code weights. However, in this case maximum overlap is not guaranteed. We calculate the lower bound on the amount of overlapping. Overlapping could be applied to any sub-matrix configuration of quasi-cyclic codes by arbitrarily choosing the starting rows for processing. Overlapping decoding time depends on inter-iteration waiting times. We show that there are upper bounds on waiting times which depend on the code weights. Waiting times could be further reduced by restricting shifts in identity sub-matrices or using smaller sub-matrices. This overlapping technique can reduce the decoding time by up to 50% compared to conventional message and computation scheduling. Techniques of matrix permutation and space restriction result in decoder architectures that are flexible in LDPC code design in terms of code weights and size. This is due to the fact that with these techniques, rows and columns are processed in sequential order to achieve overlapping. 
However, in the existing technique, all sub-matrices have to be processed in parallel to achieve overlapping. Parallel processing of all code sub-matrices requires the architecture to have at least as many processing units as there are sub-matrices. Processing units and memory space should therefore be distributed among the sub-matrices according to the sub-matrix arrangement. This leads to high complexity or inflexibility in the decoder architecture. We propose a simple, programmable and high-throughput decoder architecture based on matrix permutation and space restriction techniques.
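The quasi-cyclic structure and the row-column constraint this abstract discusses can be illustrated with the classic array-code construction, where circulant shifts s(i, j) = i·j mod p (p prime) guarantee that no two rows share more than one column, hence no four-cycles. This is a standard textbook construction, shown for orientation; it is not the search algorithm from the thesis.

```python
import numpy as np

def circulant(p, shift):
    """p x p identity matrix cyclically shifted by `shift` columns."""
    return np.roll(np.eye(p, dtype=int), shift, axis=1)

def qc_ldpc(p, J, K):
    """Quasi-cyclic parity-check matrix from array-code shifts s(i, j) = i*j
    mod p; for prime p this satisfies the row-column constraint, so the
    Tanner graph has girth at least six."""
    return np.block([[circulant(p, (i * j) % p) for j in range(K)]
                     for i in range(J)])

def has_4cycle(H):
    # A length-4 cycle exists iff two distinct rows share more than one column.
    overlap = H @ H.T
    np.fill_diagonal(overlap, 0)
    return bool((overlap > 1).any())

H = qc_ldpc(p=7, J=3, K=5)          # J*p x K*p = 21 x 35 parity-check matrix
print("H shape:", H.shape)
print("column weight:", int(H.sum(0).min()), "row weight:", int(H.sum(1).min()))
print("4-cycle free:", not has_4cycle(H))
```

Changing the sub-matrix size p rescales the code length without touching the block structure, which is exactly the flexibility the thesis exploits for decoder scheduling.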
Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2007
APA, Harvard, Vancouver, ISO, and other styles