
Dissertations / Theses on the topic 'Graph-based analysis'



Consult the top 50 dissertations / theses for your research on the topic 'Graph-based analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Schiller, Benjamin. "Graph-based Analysis of Dynamic Systems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-230611.

Abstract:
The analysis of dynamic systems provides insights into their time-dependent characteristics. This enables us to monitor, evaluate, and improve systems from various areas. They are often represented as graphs that model the system's components and their relations. The analysis of the resulting dynamic graphs yields great insights into the system's underlying structure, its characteristics, as well as properties of single components. The interpretation of these results can help us understand how a system works and how parameters influence its performance. This knowledge supports the design of new systems and the improvement of existing ones. The main issue in this scenario is the performance of analyzing the dynamic graph to obtain relevant properties. While various approaches have been developed to analyze dynamic graphs, it is not always clear which one performs best for the analysis of a specific graph. The runtime also depends on many other factors, including the size and topology of the graph, the frequency of changes, and the data structures used to represent the graph in memory. While the benefits and drawbacks of many data structures are well-known, their runtime is hard to predict when used for the representation of dynamic graphs. Hence, tools are required to benchmark and compare different algorithms for the computation of graph properties and data structures for the representation of dynamic graphs in memory. Based on deeper insights into their performance, new algorithms can be developed and efficient data structures can be selected. In this thesis, we present four contributions to tackle these problems: A benchmarking framework for dynamic graph analysis, novel algorithms for the efficient analysis of dynamic graphs, an approach for the parallelization of dynamic graph analysis, and a novel paradigm to select and adapt graph data structures. 
In addition, we present three use cases from the areas of social, computer, and biological networks to illustrate the great insights provided by their graph-based analysis. We present a new benchmarking framework for the analysis of dynamic graphs, the Dynamic Network Analyzer (DNA). It provides tools to benchmark and compare different algorithms for the analysis of dynamic graphs as well as the data structures used to represent them in memory. DNA supports the development of new algorithms and the automatic verification of their results. Its visualization component provides different ways to represent dynamic graphs and the results of their analysis. We introduce three new stream-based algorithms for the analysis of dynamic graphs. We evaluate their performance on synthetic as well as real-world dynamic graphs and compare their runtimes to snapshot-based algorithms. Our results show great performance gains for all three algorithms. The new stream-based algorithm StreaM_k, which counts the frequencies of k-vertex motifs, achieves speedups of up to 19,043× for synthetic and 2,882× for real-world datasets. We present a novel approach for the distributed processing of dynamic graphs, called parallel Dynamic Graph Analysis (pDNA). To analyze a dynamic graph, the work is distributed by a partitioner that creates subgraphs and assigns them to workers. The workers compute the properties of their respective subgraphs using standard algorithms, and a collator component merges their results into the properties of the original graph. We evaluate the performance of pDNA for the computation of five graph properties on two real-world dynamic graphs with up to 32 workers. Our approach achieves great speedups, especially for the analysis of complex graph measures. We introduce two novel approaches for the selection of efficient graph data structures.
The compile-time approach estimates the workload of an analysis after an initial profiling phase and recommends efficient data structures based on benchmarking results. It achieves speedups of up to 5.4× over baseline data structure configurations for the analysis of real-world dynamic graphs. The run-time approach monitors the workload during analysis and exchanges the graph representation if it finds a configuration that promises to be more efficient for the current workload. Compared to baseline configurations, it achieves speedups of up to 7.3× for the analysis of a synthetic workload. Our contributions provide novel approaches for the efficient analysis of dynamic graphs and tools to further investigate the trade-offs between the different factors that influence performance.
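The core speed advantage of stream-based over snapshot-based analysis is that a property is updated per change instead of being recomputed per snapshot. A minimal Python sketch of this idea (illustrative only, not the DNA framework's API): maintaining a graph's degree distribution incrementally under edge insertions.

```python
from collections import Counter

class DynamicGraph:
    """Minimal dynamic graph that keeps its degree distribution up to
    date incrementally (stream-based) rather than recomputing it from
    scratch after every change (snapshot-based)."""

    def __init__(self):
        self.adj = {}        # vertex -> set of neighbours
        self.dist = Counter()  # degree -> number of vertices with that degree

    def add_edge(self, u, v):
        # Register new vertices with degree 0.
        for a in (u, v):
            if a not in self.adj:
                self.adj[a] = set()
                self.dist[0] += 1
        if v in self.adj[u]:      # edge already present: nothing to update
            return
        # Each endpoint moves from degree d to degree d + 1.
        for a in (u, v):
            d = len(self.adj[a])
            self.dist[d] -= 1
            if self.dist[d] == 0:
                del self.dist[d]
            self.dist[d + 1] += 1
        self.adj[u].add(v)
        self.adj[v].add(u)
```

After `add_edge("a", "b")` and `add_edge("a", "c")`, the distribution is `{1: 2, 2: 1}`: two degree-1 vertices and one degree-2 vertex, obtained without ever rescanning the graph.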
2

Huang, Zan. "GRAPH-BASED ANALYSIS FOR E-COMMERCE RECOMMENDATION." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1167%5F1%5Fm.pdf&type=application/pdf.

3

Corazza, Federico Augusto. "Analysis of graph-based quantum error-correcting codes." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23801/.

Abstract:
With the advent of quantum computers, there has been a growing interest in the practicality of these devices. Due to the delicate conditions that surround physical qubits, one could wonder whether any useful computation could be implemented on such devices. As we describe in this work, it is possible to exploit concepts from classical information theory and employ quantum error-correcting techniques. Thanks to the Threshold Theorem, if the error probability of physical qubits is below a given threshold, then the logical error probability corresponding to the encoded data qubit can be made arbitrarily low. To this end, we describe decoherence, the phenomenon that quantum bits are subject to and the main source of errors in quantum memories. From the causes of error of a single qubit, we then introduce the error models that can be used to analyze quantum error-correcting codes as a whole. The main type of code that we studied comes from the family of topological codes and is called the surface code. Of these codes, we consider both the toric and planar structures. We then introduce a variation of the standard planar surface code which better captures the symmetries of the code architecture. Once the main properties of surface codes have been discussed, we give an overview of the working principles of the algorithm used to decode this type of topological code: minimum weight perfect matching. Finally, we show the performance of the surface codes that we introduced, comparing them based on their architecture and properties. These simulations were performed with different error channel models to give a more thorough description of their performance in several situations, showing relevant results.
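The minimum weight perfect matching decoder mentioned above pairs up the defect vertices flagged by the syndrome so that the total correction weight is minimal. A brute-force Python sketch under assumed conventions (Manhattan distance on a planar lattice, hypothetical defect coordinates; production decoders use the Blossom algorithm instead of enumeration):

```python
def pairings(nodes):
    """Enumerate every perfect matching of an even-sized node list."""
    if not nodes:
        yield []
        return
    first = nodes[0]
    for i in range(1, len(nodes)):
        rest = nodes[1:i] + nodes[i + 1:]
        for rest_pairs in pairings(rest):
            yield [(first, nodes[i])] + rest_pairs

def mwpm(defects):
    """Brute-force minimum-weight perfect matching of syndrome defects;
    the weight of a pair is its Manhattan distance on the lattice.
    Exponential in the number of defects: illustrative only."""
    def dist(p, q):
        return abs(p[0] - q[0]) + abs(p[1] - q[1])
    return min(pairings(list(defects)),
               key=lambda m: sum(dist(p, q) for p, q in m))
```

For four defects at `(0, 0), (0, 1), (3, 3), (3, 4)`, the decoder pairs the two nearby defects on each side, giving total weight 2 rather than matching across the lattice.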
4

Rahman, Md Rashedur. "Knowledge Base Population based on Entity Graph Analysis." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS092/document.

Abstract:
Knowledge Base Population (KBP) is an important and challenging task, especially when it has to be done automatically. The objective of the KBP task is to build a collection of facts about the world. A Knowledge Base (KB) contains different entities, the relationships among them, and various properties of the entities. Relation extraction (RE) between a pair of entity mentions in text plays a vital role in the KBP task. RE is also a challenging task, especially for open-domain relations. Generally, relations are extracted based on lexical and syntactic information at the sentence level. However, global information about known entities has not yet been explored for the RE task. We propose to extract a graph of entities from the overall corpus and to compute features on this graph that are able to capture evidence of relationships between a pair of entities. In order to evaluate the relevance of the proposed features, we tested them on a task of relation validation, which examines the correctness of relations extracted by different RE systems. Experimental results show that the proposed features outperform the state-of-the-art system.
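The entity-graph idea can be illustrated with a toy sketch (hypothetical entity mentions; the thesis's actual feature set is richer): build a co-occurrence graph over the corpus, then compute pairwise features such as the common-neighbour count and the shortest-path length between a candidate pair.

```python
from collections import deque
from itertools import combinations

def build_entity_graph(sentences):
    """Co-occurrence graph: entities are nodes, and an edge links two
    entities mentioned in the same sentence. `sentences` is a list of
    entity-mention lists (entity recognition is assumed already done)."""
    adj = {}
    for ents in sentences:
        for a, b in combinations(set(ents), 2):
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    return adj

def graph_features(adj, a, b):
    """Two simple global features for a candidate pair (a, b):
    common-neighbour count and shortest-path length (BFS)."""
    common = len(adj.get(a, set()) & adj.get(b, set()))
    seen, frontier, hops = {a}, deque([(a, 0)]), None
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            hops = d
            break
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return common, hops
```

A relation validator would feed such features, computed corpus-wide rather than sentence-locally, into a classifier that scores candidate relations.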
5

Fujimoto, Masaki Stanley. "Graph-Based Whole Genome Phylogenomics." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8461.

Abstract:
Understanding others is a deeply human urge basic in our existential quest. It requires knowing where someone has come from and where they sit amongst peers. Phylogenetic analysis and genome-wide association studies seek to tell us where we've come from and where we are relative to one another through evolutionary history and genetic makeup. Current methods do not address the computational complexity caused by new forms of genomic data, namely long-read DNA sequencing and the ever-growing number of assembled genomes. To address this, we explore specialized data structures for storing and comparing genomic information. This work resulted in the creation of novel data structures for storing multiple genomes that can be used for identifying structural variations and other types of polymorphisms. Using these methods, we illuminate the genetic history of organisms in our effort to understand the world around us.
6

Wang, Kaijun. "Graph-based Modern Nonparametrics For High-dimensional Data." Diss., Temple University Libraries, 2019. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/578840.

Abstract:
Statistics
Ph.D.
Developing nonparametric statistical methods and inference procedures for high-dimensional large data has been a challenging frontier problem of statistics. To attack this problem, in recent years a clear rising trend has been observed with a radically different viewpoint, "Graph-based Nonparametrics," which is the main research focus of this dissertation. The basic idea consists of two steps: (i) representation step: code the given data using graphs; (ii) analysis step: apply statistical methods to the graph-transformed problem to systematically tackle various types of data structures. Under this general framework, this dissertation develops two major research directions. Chapter 2, based on Mukhopadhyay and Wang (2019a), introduces a new nonparametric method for the high-dimensional k-sample comparison problem that is distribution-free, robust, and continues to work even when the dimension of the data is larger than the sample size. The proposed theory is based on modern LP-nonparametrics tools and unexplored connections with spectral graph theory. The key is to construct a specially designed weighted graph from the data and to reformulate the k-sample problem into a community detection problem. The procedure is shown to possess various desirable properties along with a characteristic exploratory flavor that has practical consequences. The numerical examples show that our method performs surprisingly well under a broad range of realistic situations. Chapter 3, based on Mukhopadhyay and Wang (2019b), revisits some foundational questions about network modeling that are still unsolved. In particular, we present a unified statistical theory of the fundamental spectral graph methods (e.g., Laplacian, Modularity, Diffusion map, regularized Laplacian, Google PageRank model), which are often viewed as spectral heuristic-based empirical mystery facts.
Despite half a century of research, this question has been one of the most formidable open issues, if not the core problem, in modern network science. Our approach integrates modern nonparametric statistics, mathematical approximation theory (of integral equations), and computational harmonic analysis in a novel way to develop a theory that unifies and generalizes the existing paradigm. From a practical standpoint, it is shown that this perspective can provide adequate guidance for designing next-generation computational tools for large-scale problems. As an example, we have described the high-dimensional change-point detection problem. Chapter 4 discusses some further extensions and applications of our methodologies to regularized spectral clustering and spatial graph regression problems. The dissertation concludes with a discussion of two important areas of future study.
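As a concrete instance of the spectral graph methods the chapter unifies, here is a minimal Fiedler-vector bipartition (the standard spectral heuristic, not the dissertation's unified estimator), assuming NumPy is available:

```python
import numpy as np

def fiedler_partition(adj):
    """Split a graph into two communities using the sign of the Fiedler
    vector, i.e. the eigenvector of the second-smallest eigenvalue of
    the graph Laplacian L = D - A."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)   # eigh returns eigenvalues in ascending order
    fiedler = vecs[:, 1]          # second column = Fiedler vector
    return fiedler >= 0           # boolean community assignment
```

On two triangles joined by a single bridge edge, the sign pattern of the Fiedler vector recovers the two triangles as the two communities, which is exactly the kind of "empirical mystery fact" the dissertation puts on a statistical footing.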
Temple University--Theses
7

Sinha, Ravi Som. "Graph-based Centrality Algorithms for Unsupervised Word Sense Disambiguation." [Denton, Tex.]: University of North Texas, 2008. http://digital.library.unt.edu/permalink/meta-dc-9736.

8

Malmberg, Filip. "Graph-based Methods for Interactive Image Segmentation." Doctoral thesis, Uppsala universitet, Centrum för bildanalys, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-149261.

Abstract:
The subject of digital image analysis deals with extracting relevant information from image data, stored in digital form in a computer. A fundamental problem in image analysis is image segmentation, i.e., the identification and separation of relevant objects and structures in an image. Accurate segmentation of objects of interest is often required before further processing and analysis can be performed. Despite years of active research, fully automatic segmentation of arbitrary images remains an unsolved problem. Interactive, or semi-automatic, segmentation methods use human expert knowledge as additional input, thereby making the segmentation problem more tractable. The goal of interactive segmentation methods is to minimize the required user interaction time, while maintaining tight user control to guarantee the correctness of the results. Methods for interactive segmentation typically operate under one of two paradigms for user guidance: (1) Specification of pieces of the boundary of the desired object(s). (2) Specification of correct segmentation labels for a small subset of the image elements. These types of user input are referred to as boundary constraints and regional constraints, respectively. This thesis concerns the development of methods for interactive segmentation, using a graph-theoretic approach. We view an image as an edge weighted graph, whose vertex set is the set of image elements, and whose edges are given by an adjacency relation among the image elements. Due to its discrete nature and mathematical simplicity, this graph based image representation lends itself well to the development of efficient, and provably correct, methods. The contributions in this thesis may be summarized as follows: Existing graph-based methods for interactive segmentation are modified to improve their performance on images with noisy or missing data, while maintaining a low computational cost. 
Fuzzy techniques are utilized to obtain segmentations from which feature measurements can be made with increased precision. A new paradigm for user guidance, that unifies and generalizes regional and boundary constraints, is proposed. The practical utility of the proposed methods is illustrated with examples from the medical field.
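One way to see regional constraints in the graph view is a seeded shortest-path labelling: pixels are vertices, 4-adjacent pixels share an edge weighted by their intensity difference, and each pixel takes the label of the seed it can reach most cheaply. This is an illustrative relative of the methods in the thesis, not their implementation:

```python
import heapq

def seeded_segmentation(image, seeds):
    """Multi-source Dijkstra on the pixel grid graph. `image` is a 2-D
    list of intensities; `seeds` maps (row, col) -> label (the regional
    constraints). Each pixel receives the label of its cheapest seed."""
    rows, cols = len(image), len(image[0])
    INF = float("inf")
    cost = [[INF] * cols for _ in range(rows)]
    label = [[None] * cols for _ in range(rows)]
    heap = []
    for (r, c), lab in seeds.items():
        cost[r][c] = 0
        label[r][c] = lab
        heapq.heappush(heap, (0, r, c, lab))
    while heap:
        d, r, c, lab = heapq.heappop(heap)
        if d > cost[r][c]:
            continue                      # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + abs(image[r][c] - image[nr][nc])
                if nd < cost[nr][nc]:
                    cost[nr][nc] = nd
                    label[nr][nc] = lab
                    heapq.heappush(heap, (nd, nr, nc, lab))
    return label
```

On a tiny image with a dark left half and bright right half, a "bg" seed on the left and an "fg" seed on the right label their respective halves, since crossing the intensity step costs more than staying within a region.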
9

Durai, Dilip [Verfasser]. "Novel graph based algorithms for transcriptome sequence analysis / Dilip Durai." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2020. http://d-nb.info/1236897064/34.

10

Gong, Nan. "Using Map-Reduce for Large Scale Analysis of Graph-Based Data." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-102822.

Abstract:
As social networks have gained in popularity, maintaining and processing the social network graph information using graph algorithms has become an essential source for discovering potential features of the graph. The escalating size of social networks has made it impossible to process the huge graphs on a single machine at a "real-time" level of execution. This thesis looks into representing and distributing graph-based algorithms using the Map-Reduce model. Graph-based algorithms are discussed in the beginning. Then, several distributed graph computing infrastructures are reviewed, followed by an introduction to Map-Reduce and some graph computation toolkits based on the Map-Reduce model. By reviewing the background and related work, graph-based algorithms are categorized, and the adaptation of graph-based algorithms to the Map-Reduce model is discussed. Two particular algorithms, MCL and DBSCAN, are chosen to be designed using the Map-Reduce model and implemented using Hadoop. A new matrix multiplication method is proposed while designing MCL. DBSCAN is reformulated into a connectivity problem using a filter method, and the Kingdom Expansion Game is proposed to do fast expansion. The scalability and performance of these new designs are evaluated. Conclusions are drawn from the literature study, practical design experience, and evaluation data. Some suggestions for designing graph-based algorithms with the Map-Reduce model are also given at the end.
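The MCL algorithm that the thesis maps onto Map-Reduce alternates expansion (a matrix product, the expensive step that the distributed matrix multiplication targets) with inflation. A plain single-machine NumPy sketch of that iteration (illustrative; parameter values are assumptions, and the distributed formulation differs):

```python
import numpy as np

def mcl(adj, inflation=2.0, iters=20):
    """Markov Cluster sketch: add self-loops, column-normalise, then
    alternate expansion (matrix squaring) and inflation (elementwise
    power followed by column normalisation). Clusters are read off the
    nonzero entries of the converged rows."""
    M = np.asarray(adj, dtype=float) + np.eye(len(adj))  # self-loops
    M = M / M.sum(axis=0)
    for _ in range(iters):
        M = np.linalg.matrix_power(M, 2)  # expansion
        M = M ** inflation                # inflation
        M = M / M.sum(axis=0)             # re-normalise columns
    clusters = set()
    for row in M:
        members = tuple(np.nonzero(row > 1e-6)[0])
        if members:
            clusters.add(members)
    return clusters
```

On a graph made of two disjoint triangles, the flow stays trapped inside each triangle and the procedure returns the two triangles as clusters.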
11

Ponomareva, Natalia. "Graph-based approaches for semi-supervised and cross-domain sentiment analysis." Thesis, University of Wolverhampton, 2014. http://hdl.handle.net/2436/323990.

Abstract:
The rapid development of Internet technologies has resulted in a sharp increase in the number of Internet users who create content online. User-generated content often represents people's opinions, thoughts, speculations and sentiments and is a valuable source of information for companies, organisations and individual users. This has led to the emergence of the field of sentiment analysis, which deals with the automatic extraction and classification of sentiments expressed in texts. Sentiment analysis has been intensively researched over the last ten years, but there are still many issues to be addressed. One of the main problems is the lack of labelled data necessary to carry out precise supervised sentiment classification. In response, research has moved towards developing semi-supervised and cross-domain techniques. Semi-supervised approaches still need some labelled data and their effectiveness is largely determined by the amount of these data, whereas cross-domain approaches usually perform poorly if training data are very different from test data. The majority of research on sentiment classification deals with the binary classification problem, although for many practical applications this rather coarse sentiment scale is not sufficient. Therefore, it is crucial to design methods which are able to perform accurate multiclass sentiment classification. The aims of this thesis are to address the problem of limited availability of data in sentiment analysis and to advance research in semi-supervised and cross-domain approaches for sentiment classification, considering both binary and multiclass sentiment scales. We adopt graph-based learning as our main method and explore the most popular and widely used graph-based algorithm, label propagation. 
We investigate various ways of designing sentiment graphs and propose a new similarity measure which is unsupervised, easy to compute, does not require deep linguistic analysis and, most importantly, provides a good estimate for sentiment similarity as proved by intrinsic and extrinsic evaluations. The main contribution of this thesis is the development and evaluation of a graph-based sentiment analysis system that a) can cope with the challenges of limited data availability by using semi-supervised and cross-domain approaches b) is able to perform multiclass classification and c) achieves highly accurate results which are superior to those of most state-of-the-art semi-supervised and cross-domain systems. We systematically analyse and compare semi-supervised and cross-domain approaches in the graph-based framework and propose recommendations for selecting the most pertinent learning approach given the data available. Our recommendations are based on two domain characteristics, domain similarity and domain complexity, which were shown to have a significant impact on semi-supervised and cross-domain performance.
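Label propagation, the graph-based algorithm explored in the thesis, can be sketched in a few lines (toy similarity graph with hypothetical weights; the thesis's similarity measure is far more elaborate): labelled nodes keep their sentiment class, while unlabelled nodes repeatedly take the weighted majority vote of their neighbours until the labelling is stable.

```python
def label_propagation(adj, seed_labels, iters=50):
    """adj: node -> {neighbour: similarity weight}.
    seed_labels: node -> fixed class for the labelled seed nodes.
    Unlabelled nodes adopt the highest-weighted neighbour class;
    ties break alphabetically for determinism."""
    labels = dict(seed_labels)
    for _ in range(iters):
        changed = False
        for node, neigh in adj.items():
            if node in seed_labels:
                continue  # seeds are clamped
            votes = {}
            for other, w in neigh.items():
                if other in labels:
                    votes[labels[other]] = votes.get(labels[other], 0) + w
            if votes:
                best = max(sorted(votes), key=votes.get)
                if labels.get(node) != best:
                    labels[node] = best
                    changed = True
        if not changed:
            break  # converged
    return labels
```

On a four-node chain a-b-c-d with "pos" seeded at a and "neg" at d, the labels spread from each end and meet where the similarity weights are weakest.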
12

Sinha, Ravi Som. "Graph-based Centrality Algorithms for Unsupervised Word Sense Disambiguation." Thesis, University of North Texas, 2008. https://digital.library.unt.edu/ark:/67531/metadc9736/.

Abstract:
This thesis introduces an innovative methodology of combining some traditional dictionary based approaches to word sense disambiguation (semantic similarity measures and overlap of word glosses, both based on WordNet) with some graph-based centrality methods, namely the degree of the vertices, Pagerank, closeness, and betweenness. The approach is completely unsupervised, and is based on creating graphs for the words to be disambiguated. We experiment with several possible combinations of the semantic similarity measures as the first stage in our experiments. The next stage attempts to score individual vertices in the graphs previously created based on several graph connectivity measures. During the final stage, several voting schemes are applied on the results obtained from the different centrality algorithms. The most important contributions of this work are not only that it is a novel approach and it works well, but also that it has great potential in overcoming the new-knowledge-acquisition bottleneck which has apparently brought research in supervised WSD as an explicit application to a plateau. The type of research reported in this thesis, which does not require manually annotated data, holds promise of a lot of new and interesting things, and our work is one of the first steps, despite being a small one, in this direction. The complete system is built and tested on standard benchmarks, and is comparable with work done on graph-based word sense disambiguation as well as lexical chains. The evaluation indicates that the right combination of the above mentioned metrics can be used to develop an unsupervised disambiguation engine as powerful as the state-of-the-art in WSD.
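The centrality stage can be illustrated with a small PageRank-based sketch (hypothetical sense inventory; the thesis combines several centrality measures and voting schemes rather than PageRank alone): build a graph over candidate senses, score every vertex, and keep each word's highest-scoring sense.

```python
def pagerank(adj, damping=0.85, iters=50):
    """Power-iteration PageRank over an undirected sense graph given as
    node -> set of neighbours. Assumes every listed neighbour has at
    least one edge (no dangling neighbours) for simplicity."""
    nodes = list(adj)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        score = {
            n: (1 - damping) / len(nodes)
               + damping * sum(score[m] / len(adj[m]) for m in adj[n])
            for n in nodes
        }
    return score

def disambiguate(senses_per_word, adj):
    """For each ambiguous word, keep the candidate sense whose vertex
    has the highest centrality in the sense graph."""
    score = pagerank(adj)
    return {w: max(cands, key=score.get)
            for w, cands in senses_per_word.items()}
```

In a toy graph where the money sense of "bank" is linked to a context sense while the river sense is isolated, centrality selects the money sense, mirroring the intuition that well-connected senses fit the context.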
13

Gubellini, Riccardo. "Graph-based analysis of brain structural MRI data in Multiple System Atrophy." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12251/.

Abstract:
The work I carried out at the functional MRI unit of the S. Orsola-Malpighi Polyclinic, DIBINEM, focuses on the analysis of structural magnetic resonance data using graph theory, with the aim of assessing possible differences between a sample of patients with Multiple System Atrophy (MSA) and one of healthy controls (HC). MSA is a sporadic, progressive neurodegenerative disease. It is divided into two subtypes: MSA-P and MSA-C. About one third of people with MSA experience a particular respiratory disorder called stridor. The study compared three pairs of groups: HC vs MSA, non-stridor vs stridor, and MSA-C vs MSA-P. Graphs are mathematical structures defined by nodes and links; in neurology, graph theory is used to understand the functioning of the brain viewed as a network. The approach used here is based on volumetric correlations between the different brain regions. To build a graph for each group, the first step was to obtain the parcellation of the brain images; the volumes of the brain regions were then evaluated, and finally the correlations between them. Once the graphs were built, it was possible to compute the topological parameters that characterise their structure and organisation. In the various comparisons, no differences were found in the global properties of the network. The regional analysis, however, revealed an alteration between MSA and HC involving regions that belong to the central autonomic network, which is particularly affected by the disease. Alterations were also found in the modular organisation of the groups examined. This analysis showed the possibility of investigating the functionality of brain networks and their modular architecture with structural measures such as the covariance of the volumes of the various brain regions across groups of subjects.
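The volumetric-covariance construction described above can be sketched as follows (illustrative threshold and data; the study's actual pipeline involves image parcellation and group-level statistics), assuming NumPy: regions become nodes, and two regions are linked when the correlation of their volumes across subjects is high.

```python
import numpy as np

def covariance_network(volumes, threshold=0.5):
    """volumes: subjects x regions matrix of regional brain volumes.
    Returns a boolean adjacency matrix linking region pairs whose
    across-subject volume correlation exceeds the threshold."""
    corr = np.corrcoef(volumes, rowvar=False)  # columns = regions
    n = corr.shape[0]
    return (corr > threshold) & ~np.eye(n, dtype=bool)  # drop self-loops

def degree(adj):
    """Node degree, the simplest of the topological measures compared
    between groups."""
    return adj.sum(axis=1)
```

From such an adjacency matrix one would go on to compute the global and regional graph measures (degree, modularity, and so on) that the study compares between patient and control groups.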
14

Fält, Markus. "Concurrency model for the Majo language : An analysis of graph based concurrency." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-34050.

Abstract:
Today most computers have powerful multi-core processors that can perform many calculations simultaneously. However, writing programs that take full advantage of the processors in modern-day computers can be a challenge, due to the difficulty of managing shared resources between parallel processing threads. This report documents the development of the Majo language, which aims to solve these problems by using abstractions to make parallel programming easier. The model for the abstractions divides the program into what are called nodes. One node represents one thread of execution, and nodes are connected to each other by thread-safe communication channels. All communication channels are first-in first-out queues; the nodes communicate by pushing and popping values from these queues. The performance of the language was measured and compared to other languages such as Python, Ruby and JavaScript. The tests were based on timing how long it took to generate the Mandelbrot set as well as to sort a list of integers. The language's scalability was also tested by seeing how much the execution time decreased when adding more parallel threads. The results from these tests showed that the developed prototype of the language had some unforeseen bugs that slowed down execution more than expected in some tests. However, the scalability test gave encouraging results. For future development, the language's execution time should be improved by fixing the relevant bugs, and a more generalized model for concurrency should be developed.
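The node-and-queue model can be mimicked with Python's standard library: one thread per node, connected only through thread-safe FIFO queues (a sketch of the concept, not Majo syntax; the sentinel-shutdown convention is an assumption of this example).

```python
import threading
import queue

def node(fn, inbox, outbox):
    """One 'node' = one thread. The node reads values from its inbox
    channel, applies fn, and pushes results to its outbox channel.
    A None sentinel shuts the node down and is forwarded downstream."""
    def run():
        while True:
            item = inbox.get()
            if item is None:
                outbox.put(None)  # propagate shutdown
                break
            outbox.put(fn(item))
    t = threading.Thread(target=run)
    t.start()
    return t

# Pipeline: main thread -> squaring node -> main thread.
a, b = queue.Queue(), queue.Queue()
worker = node(lambda x: x * x, a, b)
for x in [1, 2, 3]:
    a.put(x)
a.put(None)
worker.join()

results = []
while True:
    item = b.get()
    if item is None:
        break
    results.append(item)
```

Because all sharing goes through the queues, no locks are needed in user code, which is the safety property the language's abstraction is after.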
15

Streib, Kevin. "IMPROVED GRAPH-BASED CLUSTERING WITH APPLICATIONS IN COMPUTER VISION AND BEHAVIOR ANALYSIS." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1331063343.

16

Pereira, Gabriel Maier Fernandes Vidueiro. "Test-case-based call graph construction in dynamically typed programming languages." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/134355.

Abstract:
Evolving enterprise software systems is one of the most challenging activities of the software development process. An important issue associated with this activity is to properly comprehend the source code and other software assets that must be evolved. To assist developers in these evolution tasks, Integrated Development Environments (IDEs) provide tools that give information about the source code and its dependencies. However, dynamically typed languages do not define types explicitly in the source code, which complicates source code analysis and therefore the construction of these tools. As an example, call graph construction, used by IDEs to build source code navigation tools, is hampered by the absence of type definitions. To address the problem of constructing call graphs for dynamic languages, we propose a technique, divided into steps, that builds a call graph from test runtime information, called a test-case-based call graph. The technique is divided into three steps. Step #1 creates a conservative, static call graph that resolves call targets based on method names alone, and also runs the tests while profiling their execution. Step #2 combines the test runtime information and the conservative call graph built in the first step to create the test-case-based call graph; it also creates a set of association rules to guide developers during maintenance while creating new pieces of code. Finally, Step #3 uses the test-case-based call graph and the association rules to assist developers in source code navigation tasks. Our evaluation on a large real-world software system shows that the technique increases call graph precision by removing many unnecessary conservative edges (70%), and assists developers by filtering the target nodes of method calls based on association rules extracted from the call graph.
APA, Harvard, Vancouver, ISO, and other styles
17

Giacalone, Elisabetta. "Graph-based analysis of brain structural connectivity using different diffusion MRI reconstruction techniques." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
Il cervello si può definire un network complesso in cui delle regioni sono interconnesse fra loro. L'imaging di risonanza magnetica pesato in diffusione (DWI) insieme alla trattografia, permettono di ricostruire i fasci di fibre assonali di sostanza bianca indagando la connettività strutturale tra le aree di sostanza grigia. Il "connettoma" risultante può essere analizzato e caratterizzato attraverso la graph theory. Il lavoro che ho sviluppato presso l'unità di RM funzionale del Policlinico S. Orsola-Malpighi, e il DIBINEM, Università di Bologna, si propone di ricostruire il connettoma tramite due diversi metodi trattografici probabilistici confrontando i risultati ottenuti da acquisizioni DWI con diverso numero di direzioni del gradiente di diffusione (NDGD), ma con rapporto segnale rumore (SNR) costante. È stata effettuata un’acquisizione a 66 e tre a 22-NDGD per 18 soggetti sani. Le scansioni a 22-NDGD sono state mediate fra loro per ottenere un SNR comparabile con le 66-NDGD (22avg) e poter confrontare correttamente i diversi NDGD. Questo tipo di analisi non è ancora presente in letteratura. Dopo aver segmentato il cervello in diverse aree è stata effettuata la trattografia, tramite gli algoritmi PROBTRACKX2 e iFOD2, per costruire un network pesato del connettoma. Abbiamo effettuato misure locali e globali sui network e analizzato le proprietà di small-world e l'organizzazione modulare. Tali misure sono state confrontate fra i diversi NDGD e algoritmi trattografici. Si è visto come PROBTRACKX2 risulti più sensibile alle variazioni del SNR nel confronto dei network a 22 e 22avg. Per entrambi gli algoritmi sono state misurate differenze significative fra i network a 66 e a 22avg suggerendo che l'aumento della risoluzione angolare influenza fortemente le proprietà del network. In particolare, a livello locale si evidenzia un'alterazione delle misure nodo-specifiche nelle zone della sostanza grigia profonda e nell'area fronto temporale, per entrambi gli algoritmi.
APA, Harvard, Vancouver, ISO, and other styles
18

Elvas, Duarte De Almeida Jose-Paulo. "A graph-based technique for analysis and visualisation of higher order urban topology." Thesis, University College London (University of London), 2008. http://discovery.ucl.ac.uk/1443953/.

Full text
Abstract:
Analysis of spatial phenomena is a time-consuming and laborious task in several fields of the Geomatics world. Automation of these tasks is especially needed in areas such as geographical information science (GISc). Carrying out those tasks in the context of an urban scene is particularly challenging given the complex spatial pattern of its elements. The starting point in this study is unstructured data, and hence no prior knowledge of the spatial entities is assumed. The aim of translating this data into more meaningful homogeneous regions can be achieved by grouping geographic structures within the initial collection of objects according to their spatial arrangement. The techniques applied to achieve this are those of graph theory, applied to urban topology analysis. For the identification of meaningful structures a graph-based system was developed comprising, in particular, a procedure, the containment-first search, based on the breadth-first search algorithm for graph traversal which, by considering the spatial objects' adjacencies, analyzes and interprets their spatial arrangement in terms of the topological relationship of containment. Different LiDAR as well as photogrammetric datasets have been used as an example scenario to test this system. Another aspect is the visualisation of the urban topology. Visual inspections of interim results of the graph analysis process can often reveal patterns not discernible by current automated techniques. An interactive tool was developed and implemented in Visual Basic for Applications (VBA) utilising ESRI's ArcObjects, in ArcMap (ArcGIS). The ultimate goal was the dynamic display of the original map according to the results of the spatial topology analysis. Future work will entail further clustering of the identified containment units into homogeneous regions.
After the delineation of cluster shapes, an analysis process will have to be accomplished, either by pattern recognition or interpretation procedures, for the retrieval of higher-level information, for example related to land-use.
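A rough sketch of the containment idea, under strong simplifying assumptions: spatial objects are reduced to hypothetical axis-aligned rectangles, and `containment_first_search` (our name, modelled loosely on the procedure described above, not the thesis' code) derives each object's direct container and then traverses the containment graph breadth-first from the roots.

```python
from collections import deque

# Objects are hypothetical axis-aligned rectangles (xmin, ymin, xmax, ymax).
def area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def contains(outer, inner):
    """True if rectangle `outer` encloses a distinct rectangle `inner`."""
    return (outer != inner and outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def containment_first_search(objects):
    """Find each object's direct container (the smallest rectangle holding it),
    then traverse the resulting containment graph breadth-first."""
    parent, children = {}, {name: [] for name in objects}
    for name, rect in objects.items():
        holders = [o for o, r in objects.items() if contains(r, rect)]
        if holders:
            direct = min(holders, key=lambda o: area(objects[o]))
            parent[name] = direct
            children[direct].append(name)
    order, queue = [], deque(n for n in objects if n not in parent)
    while queue:  # breadth-first: one containment level at a time
        node = queue.popleft()
        order.append(node)
        queue.extend(children[node])
    return parent, order
```

On a toy scene (a city block containing a house containing a yard), the traversal visits the block first and the innermost unit last.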
APA, Harvard, Vancouver, ISO, and other styles
19

Zhuo, Yuzhen. "Static Priority Schedulability Analysis of Graph-Based Real-Time Task Models with Resource Sharing." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-225911.

Full text
Abstract:
The correctness of real-time systems depends not only on the validity of the output but also on its temporal validity. Tasks are typically designed with strict deadlines and need to respond in time; these are the timing constraints of real-time systems. Schedulability analysis is one approach to studying the workload of the task system. DRTRS (Digraph Real-Time task model with resource sharing) is introduced to describe the system task model, abstracting away most functional behaviour and focusing on the timing properties. We have also developed an efficient schedulability analysis under different resource access protocols.
APA, Harvard, Vancouver, ISO, and other styles
20

Poudel, Prabesh. "Security Vetting Of Android Applications Using Graph Based Deep Learning Approaches." Bowling Green State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1617199500076786.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Fruth, Jana. "Sensitivity analysis and graph-based methods for black-box functions with an application to sheet metal forming." Thesis, Saint-Etienne, EMSE, 2015. http://www.theses.fr/2015EMSE0779/document.

Full text
Abstract:
The general field of this thesis is the sensitivity analysis of black-box functions. Sensitivity analysis studies how the variation of an output can be related to the variation of the inputs. It is an important tool in the construction, analysis and optimization of computer experiments. We first present the total interaction index, which is useful for the screening of interactions. Several estimation methods are proposed; their properties are studied theoretically and through simulations. The next chapter concerns sensitivity analysis for models with functional inputs and a scalar output. A very economical sequential approach is presented, which not only recovers the overall sensitivity of the functional inputs but also identifies the regions of interest within their domain of definition. A third concept is proposed, the support index functions, measuring the sensitivity of an input over the whole support of its probability distribution. Finally, the three methods are successfully applied to the sensitivity analysis of sheet metal forming models.
The general field of the thesis is the sensitivity analysis of black-box functions. Sensitivity analysis studies how the variation of the output can be apportioned to the variation of input sources. It is an important tool in the construction, analysis, and optimization of computer experiments. The total interaction index is presented, which can be used for the screening of interactions. Several variance-based estimation methods are suggested. Their properties are analyzed theoretically as well as on simulations. A further chapter concerns the sensitivity analysis for models that can take functions as input variables and return a scalar value as output. A very economical sequential approach is presented, which not only discovers the sensitivity of those functional variables as a whole but identifies relevant regions in the functional domain. As a third concept, support index functions, functions of sensitivity indices over the input distribution support, are suggested. Finally, all three methods are successfully applied in the sensitivity analysis of sheet metal forming models.
APA, Harvard, Vancouver, ISO, and other styles
22

Vellambi, Badri Narayanan. "Applications of graph-based codes in networks: analysis of capacity and design of improved algorithms." Diss., Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/37091.

Full text
Abstract:
The conception of turbo codes by Berrou et al. has created a renewed interest in modern graph-based codes. Several encouraging results that have come to light since then have fortified the role these codes shall play as potential solutions for present and future communication problems. This work focuses on both practical and theoretical aspects of graph-based codes. The thesis can be broadly categorized into three parts. The first part of the thesis focuses on the design of practical graph-based codes of short lengths. While both low-density parity-check codes and rateless codes have been shown to be asymptotically optimal under the message-passing (MP) decoder, the performance of short-length codes from these families under MP decoding is starkly sub-optimal. This work first addresses the structural characterization of stopping sets to understand this sub-optimality. Using this characterization, a novel improved decoder that offers several orders of magnitude improvement in bit-error rates is introduced. Next, a novel scheme for the design of a good rate-compatible family of punctured codes is proposed. The second part of the thesis aims at establishing these codes as a good tool to develop reliable, energy-efficient and low-latency data dissemination schemes in networks. The problems of broadcasting in wireless multihop networks and that of unicast in delay-tolerant networks are investigated. In both cases, rateless coding is seen to offer an elegant means of achieving the goals of the chosen communication protocols. It was noticed that the ratelessness and the randomness in encoding process make this scheme specifically suited to such network applications. The final part of the thesis investigates an application of a specific class of codes called network codes to finite-buffer wired networks. This part of the work aims at establishing a framework for the theoretical study and understanding of finite-buffer networks. 
The proposed Markov chain-based method extends existing results to develop an iterative Markov chain-based technique for general acyclic wired networks. The framework not only estimates the capacity of such networks, but also provides a means to monitor network traffic and packet drop rates on various links of the network.
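The per-link finite-buffer analysis mentioned at the end can be illustrated with the classic birth-death chain for a single finite buffer. This is a textbook M/M/1/K-style stationary distribution, offered only as a sketch of the kind of building block such an iterative Markov chain technique composes, not as the dissertation's actual method.

```python
def buffer_occupancy(arrival, service, capacity):
    """Stationary distribution of a single finite buffer modelled as a
    birth-death Markov chain: state n = packets in the buffer, 0..capacity.
    Occupancy probabilities are proportional to rho**n with rho the
    arrival/service ratio; the last entry is the blocking (drop) probability."""
    rho = arrival / service
    weights = [rho ** n for n in range(capacity + 1)]
    total = sum(weights)
    return [w / total for w in weights]
```

An iterative network analysis would feed each link's drop probability (`buffer_occupancy(...)[-1]`) back into the effective arrival rates of downstream links until the estimates converge.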
APA, Harvard, Vancouver, ISO, and other styles
23

Sighinolfi, Giovanni. "Exploring brain network features in borderline personality disorder: a graph-based analysis of MR images." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21714/.

Full text
Abstract:
A network approach to the modelling and analysis of functional and structural Magnetic Resonance (MR) images is an increasingly popular technique because of the solidity of the mathematical theory at its basis, graph theory, which allows a wide range of network properties to be explored. For this work, both Functional Connectivity (FC) and Structural Covariance (SC) networks were constructed. The cohort of participants in this study was composed of 27 subjects affected by Borderline Personality Disorder (BPD), a mental disorder causing behavioral and emotional dysregulation, and a matching group of 28 healthy controls. Brain networks were analyzed using the methods provided by graph theory, both at the global and nodal level, by exploring their topological and organizational properties mainly in terms of centrality, efficiency of information transfer and modularity. The outcomes obtained from such measures, in patients and controls separately, were compared in order to find statistically significant differences between the two groups that may be characteristic of the disease. Additionally, the outcomes of the topological quantities were correlated with a series of clinical scores evaluating the neuro-psychological condition of the subjects. The results show significant differences between patients and controls mostly in the FC networks, especially located in the limbic system of the brain, which indeed has a fundamental role in emotion regulation. Node-specific variations tend to involve the amygdala, the caudal anterior cingulate cortex, the entorhinal cortex and the temporal pole. Such evident results were not retrieved from the SC networks, though they still supported a greater variability within the limbic system. Therefore, the analysis of brain graphs allowed the detection of topological alterations in a young psychiatric population, which it would be interesting to monitor over time.
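One of the global measures mentioned above, global efficiency (the average inverse shortest-path length over all node pairs), is straightforward to compute from an adjacency structure. The sketch below is a generic pure-Python version for unweighted graphs, not code from the thesis:

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, src):
    """Hop counts from `src` in an unweighted graph given as an adjacency dict."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj):
    """Average of 1/d(u, v) over all node pairs; unreachable pairs contribute 0.
    Higher values indicate more efficient information transfer."""
    nodes = list(adj)
    pairs = list(combinations(nodes, 2))
    total = 0.0
    for u, v in pairs:
        d = bfs_distances(adj, u).get(v)
        if d:
            total += 1.0 / d
    return total / len(pairs)
```

A fully connected network scores 1.0; sparser or fragmented networks score lower, which is what group comparisons exploit.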
APA, Harvard, Vancouver, ISO, and other styles
24

Lampka, Kai. "A symbolic approach to the state graph based analysis of high-level Markov reward models." [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=985513926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Rossi, Magi Lorenzo. "Graph-based analysis of brain resting-state fMRI data in nocturnal frontal lobe epileptic patients." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8332/.

Full text
Abstract:
The work I developed at the functional MRI unit of the S. Orsola-Malpighi Polyclinic, DIBINEM, centres on the analysis of resting-state functional Magnetic Resonance Imaging (rs-fMRI) data using graph theory, with the aim of assessing possible differences in functional brain connectivity between a sample of patients affected by Nocturnal Frontal Lobe Epilepsy (NFLE) and one of healthy controls. Nocturnal frontal lobe epilepsy is a peculiar form of epilepsy characterized by seizures occurring almost exclusively during night-time sleep. These are marked by motor behaviours, mostly dystonic, often complex, and sometimes of bizarre semiology. fMRI is an advanced neuroimaging technique that allows neuronal activity to be measured indirectly. All subjects were studied in the resting state, i.e. relaxed wakefulness. In particular, I analyzed the fMRI data with an approach that is innovative in the clinical-neurological field: graph theory. Graphs are mathematical structures made of nodes and links, applied in many fields of study to model structures of various kinds. Building a brain graph for each study participant was the central part of this work. The goal was to define the functional connections between the different brain areas by means of a network. The modelling process allowed the neural graphs to be evaluated by computing topological parameters that characterize their structure and organization. The measures computed in this preliminary analysis did not reveal differences in the global properties between the patients' graphs and those of the controls.
Local alterations were instead found in the patients, compared to the controls, in areas of the deep grey matter, the limbic system and the frontal regions, which are among those hypothesized to be involved in the pathophysiology of this peculiar form of epilepsy.
APA, Harvard, Vancouver, ISO, and other styles
26

Loureiro, Rui. "Bond graph model based on structural diagnosability and recoverability analysis : application to intelligent autonomous vehicles." Thesis, Lille 1, 2012. http://www.theses.fr/2012LIL10079/document.

Full text
Abstract:
This thesis concerns the structural study of fault recovery using the bond graph approach. The objective is to exploit the structural and causal properties of the bond graph tool in order to carry out both diagnosis and control analysis of the physical system in the presence of faults. Indeed, the bond graph tool makes it possible to verify the structural conditions of fault recoverability not only from a control-analysis point of view, but also by taking into account the information produced by the diagnosis stage. The set of tolerated faults is therefore obtained offline, before any real implementation. Moreover, estimating the fault as a disturbing power supplied to the system makes it possible to extend the structural fault-recovery results to a local adaptive compensation derived directly from the bond graph model. Finally, the obtained results are validated on a redundant intelligent autonomous vehicle application.
This work deals with structural fault recoverability analysis using the bond graph model. The objective is to exploit the structural and causal properties of the bond graph tool in order to perform both diagnosis and control analysis in the presence of faults. Indeed, the bond graph tool makes it possible to verify the structural conditions of fault recoverability not only from a control perspective but also from a diagnosis one. In this way, the set of faults that can be recovered is obtained prior to industrial implementation. In addition, a novel way of estimating the fault as a disturbing power furnished to the system made it possible to extend the structural fault recoverability results by performing a local adaptive compensation directly from the bond graph model. Finally, the obtained structural results are validated on a redundant intelligent autonomous vehicle.
APA, Harvard, Vancouver, ISO, and other styles
27

Zhu, Xiaoting. "Systematic Assessment of Structural Features-Based Graph Embedding Methods with Application to Biomedical Networks." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1592394966493963.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Cheng, Danling. "Integrated System Model Reliability Evaluation and Prediction for Electrical Power Systems: Graph Trace Analysis Based Solutions." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/28944.

Full text
Abstract:
A new approach to the evaluation of the reliability of electrical systems is presented. In this approach, Graph Trace Analysis is applied to integrated system models and reliability analysis. The analysis zones are extended from the traditional power system functional zones. The systems are modeled using containers with iterators, where the iterators manage graph edges and are used to process through the topology of the graph. The analysis provides a means of computationally handling dependent outages and cascading failures. The effects of adverse weather, time-varying loads, equipment age, installation environment, and operating conditions are considered. Sequential Monte Carlo simulation is used to evaluate the reliability changes for different system configurations, including distributed generation and transmission lines. Historical weather records and loading are used to update the component failure rates on-the-fly. Simulation results are compared against historical reliability field measurements. Given a large and complex plant to operate, a real-time understanding of the networks and their situational reliability is important for operational decision support. This dissertation also introduces the use of an Integrated System Model to help operators minimize real-time problems. A real-time simulation architecture is described, which predicts where problems may occur, how serious they may be, and what the possible root causes are.
Ph. D.
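The weather-dependent sequential Monte Carlo idea can be caricatured for a single component as follows. The factor of 10 applied during adverse-weather hours and all rate values are invented for illustration, not taken from the dissertation:

```python
import random

def simulated_availability(fail_rate, repair_rate, adverse, hours, seed=1):
    """Sequential Monte Carlo over an hourly timeline for one component.
    `adverse` is a cyclic per-hour weather flag; the hourly failure
    probability is scaled up (assumed factor: 10) in adverse-weather hours,
    mimicking on-the-fly failure-rate updates from weather records."""
    rng = random.Random(seed)
    up, up_hours = True, 0
    for h in range(hours):
        p_fail = fail_rate * (10.0 if adverse[h % len(adverse)] else 1.0)
        if up:
            up_hours += 1
            up = rng.random() >= p_fail        # component may fail this hour
        else:
            up = rng.random() < repair_rate    # component may be repaired
    return up_hours / hours
```

A full system study would run many components with dependent outages; here one sees only that sustained bad weather drives the availability estimate down.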
APA, Harvard, Vancouver, ISO, and other styles
29

Arney, Dale Curtis. "Rule-based graph theory to enable exploration of the space system architecture design space." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44840.

Full text
Abstract:
NASA's current plans for human spaceflight include an evolutionary series of missions based on a steady increase in capability to explore cis-lunar space, the Moon, near-Earth asteroids, and eventually Mars. Although the system architecture definition has the greatest impact on the eventual performance and cost of an exploration program, selecting an optimal architecture is a difficult task due to the lack of methods to adequately explore the architecture design space and the resource-intensive nature of architecture analysis. This research presents a modeling framework to mathematically represent and analyze the space system architecture design space using graph theory. The framework enables rapid exploration of the design space without the need to limit trade options or the need for user interaction during the exploration process. The architecture design space for three missions in a notional evolutionary exploration program, which includes staging locations, vehicle implementation, and system functionality, for each mission destination is explored. Using relative net present value of various system architecture options, the design space exploration reveals that the launch vehicle selection is the primary driver in reducing cost, and other options, such as propellant type, staging location, and aggregation strategy, provide less impact.
APA, Harvard, Vancouver, ISO, and other styles
30

Moon, Kyungjin. "Self-reconfigurable ship fluid-network modeling for simulation-based design." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34733.

Full text
Abstract:
Our world is filled with large-scale engineering systems, which provide various services and conveniences in our daily life. A distinctive trend in the development of today's large-scale engineering systems is the extensive and aggressive adoption of automation and autonomy that enable the significant improvement of systems' robustness, efficiency, and performance, with considerably reduced manning and maintenance costs, and the U.S. Navy's DD(X), the next-generation destroyer program, is considered as an extreme example of such a trend. This thesis pursues a modeling solution for performing simulation-based analysis in the conceptual or preliminary design stage of an intelligent, self-reconfigurable ship fluid system, which is one of the concepts of DD(X) engineering plant development. Through the investigations on the Navy's approach for designing a more survivable ship system, it is found that the current naval simulation-based analysis environment is limited by the capability gaps in damage modeling, dynamic model reconfiguration, and simulation speed of the domain specific models, especially fluid network models. As enablers of filling these gaps, two essential elements were identified in the formulation of the modeling method. The first one is the graph-based topological modeling method, which will be employed for rapid model reconstruction and damage modeling, and the second one is the recurrent neural network-based, component-level surrogate modeling method, which will be used to improve the affordability and efficiency of the modeling and simulation (M&S) computations. The integration of the two methods can deliver computationally efficient, flexible, and automation-friendly M&S which will create an environment for more rigorous damage analysis and exploration of design alternatives. As a demonstration for evaluating the developed method, a simulation model of a notional ship fluid system was created, and a damage analysis was performed. 
Next, the models representing different design configurations of the fluid system were created, and damage analyses were performed with them in order to find an optimal design configuration for system survivability. Finally, the benefits and drawbacks of the developed method were discussed based on the result of the demonstration.
APA, Harvard, Vancouver, ISO, and other styles
31

Dash, Santanu Kumar. "Adaptive constraint solving for information flow analysis." Thesis, University of Hertfordshire, 2015. http://hdl.handle.net/2299/16354.

Full text
Abstract:
In program analysis, unknown properties of terms are typically represented symbolically as variables. Bound constraints on these variables can then specify multiple optimisation goals for computer programs and find application in areas such as type theory, security, alias analysis and resource reasoning. Resolution of bound constraints is a problem steeped in graph theory; interdependencies between the variables are represented as a constraint graph. Additionally, constants are introduced into the system as concrete bounds over these variables, and the constants themselves are ordered over a lattice which is, once again, represented as a graph. Despite graph algorithms being central to bound constraint solving, most approaches to program optimisation that use bound constraint solving have treated their graph-theoretic foundations as a black box. Little has been done to investigate the computational costs or design efficient graph algorithms for constraint resolution. Emerging examples of these lattices and bound constraint graphs, particularly from the domain of language-based security, show that these graphs and lattices are structurally diverse and can be arbitrarily large. Therefore, there is a pressing need to investigate the graph-theoretic foundations of bound constraint solving. In this thesis, we investigate the computational costs of bound constraint solving from a graph-theoretic perspective for Information Flow Analysis (IFA); IFA is a subfield of language-based security which verifies whether confidentiality and integrity of classified information are preserved as it is manipulated by a program. We present a novel framework based on graph decomposition for solving the (atomic) bound constraint problem for IFA. Our approach enables us to abstract away from connections between individual vertices to those between sets of vertices in both the constraint graph and an accompanying security lattice which defines the ordering over constants.
Thereby, we are able to achieve significant speedups compared to state-of-the-art graph algorithms applied to bound constraint solving. More importantly, our algorithms are highly adaptive in nature and seamlessly adapt to the structure of the constraint graph and the lattice. The computational cost of our approach is a function of the latent scope of decomposition in the constraint graph and the lattice; therefore, we enjoy the fastest runtime for every point in the structure spectrum of these graphs and lattices. While the techniques in this dissertation are developed with IFA in mind, they can be extended to other applications of the bound constraint problem, such as type inference and program analysis frameworks which use annotated type systems, where constants are ordered over a lattice.
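The atomic bound constraint problem described above can, without the thesis' decomposition machinery, be solved by a standard worklist fixpoint. The sketch below assumes a lattice given by a `join` function and encodes flow constraints v ⊒ u as graph edges; it illustrates the baseline such adaptive algorithms speed up, not the thesis' own method.

```python
def solve_bounds(variables, consts, edges, join, bottom):
    """Least solution of atomic bound constraints by worklist fixpoint.
    `consts` maps a variable to a constant lower bound (x >= c); an edge
    (u, v) encodes the flow constraint v >= u; `join` is the lattice join."""
    sol = {v: consts.get(v, bottom) for v in variables}
    succ = {v: [] for v in variables}
    for u, v in edges:
        succ[u].append(v)
    work = list(variables)
    while work:
        u = work.pop()
        for v in succ[u]:
            updated = join(sol[v], sol[u])
            if updated != sol[v]:
                sol[v] = updated     # bound grew: re-propagate from v
                work.append(v)
    return sol
```

With a two-level confidentiality lattice (0 = public, 1 = secret) the join is simply `max`, and secrecy propagates along every flow edge.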
APA, Harvard, Vancouver, ISO, and other styles
32

Rangapuram, Syama Sundar [Verfasser], and Matthias [Akademischer Betreuer] Hein. "Graph-based methods for unsupervised and semi-supervised data analysis / Syama Sundar Rangapuram ; Betreuer: Matthias Hein." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2016. http://d-nb.info/1117028100/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Schiller, Benjamin [Verfasser], Thorsten [Akademischer Betreuer] [Gutachter] Strufe, and George [Gutachter] Fletcher. "Graph-based Analysis of Dynamic Systems / Benjamin Schiller ; Gutachter: Thorsten Strufe, George Fletcher ; Betreuer: Thorsten Strufe." Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://d-nb.info/1147287422/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Lilliehöök, Hampus. "Extraction of word senses from bilingual resources using graph-based semantic mirroring." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-91880.

Full text
Abstract:
In this thesis we retrieve semantic information that exists implicitly in bilingual data. We gather input data by repeatedly applying the semantic mirroring procedure. The data is then represented by vectors in a large vector space. A resource of synonym clusters is then constructed by performing K-means centroid-based clustering on the vectors. We evaluate the result manually, using dictionaries, and against WordNet, and discuss prospects and applications of this method.
In this work we extract semantic information that exists implicitly in bilingual data. We gather input data by repeating the semantic mirroring procedure. The data are represented as vectors in a large vector space. We then build a resource of synonym clusters by applying the K-means algorithm to the vectors. We examine the result by hand, using dictionaries, and against WordNet, and discuss possibilities and applications of the method.
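The clustering step can be sketched with a plain K-means over toy vectors. This generic implementation is illustrative only; the thesis works with a much larger vector space built from repeated semantic mirroring.

```python
import random

def _dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def _mean(vectors):
    return tuple(sum(xs) / len(vectors) for xs in zip(*vectors))

def kmeans(vectors, k, iters=20, seed=0):
    """Plain centroid-based K-means over word vectors; each resulting
    cluster of vectors is a candidate synonym set."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    labels = [0] * len(vectors)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for j, v in enumerate(vectors):
            labels[j] = min(range(k), key=lambda c: _dist2(v, centroids[c]))
            clusters[labels[j]].append(v)
        # keep the old centroid if a cluster empties out
        centroids = [_mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, labels
```

On well-separated vectors the labels stabilise within a few iterations, grouping words whose mirrored translation sets overlap.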
APA, Harvard, Vancouver, ISO, and other styles
36

Fober, Thomas [Verfasser], and Eyke [Akademischer Betreuer] Hüllermeier. "Geometric, Feature-based and Graph-based Approaches for the Structural Analysis of Protein Binding Sites : Novel Methods and Computational Analysis / Thomas Fober. Betreuer: Eyke Hüllermeier." Marburg : Philipps-Universität Marburg, 2013. http://d-nb.info/1038786061/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Nguyen, Vu Ngoc Tung. "Analysis of biochemical reaction graph : application to heterotrophic plant cell metabolism." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0023/document.

Full text
Abstract:
Aujourd’hui, la biologie des systèmes est confrontée aux défis de l’analyse de l’énorme quantité de données biologiques et à la taille des réseaux métaboliques pour des analyses à grande échelle. Bien que plusieurs méthodes aient été développées au cours des dernières années pour résoudre ce problème, ce sujet reste un domaine de recherche en plein essor. Cette thèse se concentre sur l’analyse des propriétés structurales, le calcul des modes élémentaires de flux et la détermination d’ensembles de coupe minimales du graphe formé par ces réseaux. Dans notre recherche, nous avons collaboré avec des biologistes pour reconstruire un réseau métabolique de taille moyenne du métabolisme cellulaire de la plante, environ 90 noeuds et 150 arêtes. En premier lieu, nous avons fait l’analyse des propriétés structurelles du réseau dans le but de trouver son organisation. Les réactions points centraux de ce réseau trouvés dans cette étape n’expliquent pas clairement la structure du réseau. Les mesures classiques de propriétés des graphes ne donnent pas plus d’informations utiles. En deuxième lieu, nous avons calculé les modes élémentaires de flux qui permettent de trouver les chemins uniques et minimaux dans un réseau métabolique, cette méthode donne un grand nombre de solutions, autour des centaines de milliers de voies métaboliques possibles qu’il est difficile de gérer manuellement. Enfin, les coupes minimales de graphe, ont été utilisés pour énumérer tous les ensembles minimaux et uniques des réactions qui stoppent les voies possibles trouvées à la précédente étape. Le nombre de coupes minimales a une tendance à ne pas croître exponentiellement avec la taille du réseau a contrario des modes élémentaires de flux. Nous avons combiné l’analyse de ces modes et les ensembles de coupe pour améliorer l’analyse du réseau. Les résultats montrent l’importance d’ensembles de coupe pour la recherche de la structure hiérarchique du réseau à travers modes de flux élémentaires. 
We studied a particular case: what happens if the glucose entry is stopped? Using minimal cut sets of size two, eight reactions were always found in the elementary modes that allow the production of the different sugars and metabolites of interest when glucose is stopped. These eight reactions play the role of the skeleton/core of our network. Extending our analysis to minimal cut sets of size 3, we identified five reactions as branch points between different modes. These 13 reactions create a hierarchical classification of the set of elementary flux modes and allowed us to considerably reduce the number of cases to study (roughly divided by 10) in the analysis of feasible pathways in the metabolic network. The combination of these two tools allowed us to approach more effectively the study of the production of the different metabolites of interest by the heterotrophic plant cell.
Nowadays, systems biology is facing the challenges of analysing the huge amount of biological data and large-scale metabolic networks. Although several methods have been developed in recent years to solve this problem, studying these data and interpreting the obtained results comprehensively remains difficult. This thesis focuses on the analysis of structural properties, the computation of elementary flux modes, and the determination of minimal cut sets of the heterotrophic plant cell metabolic network. In our research, we have collaborated with biologists to reconstruct a mid-size metabolic network of this heterotrophic plant cell. This network contains about 90 nodes and 150 edges. In the first step, we analysed the structural properties using graph theory measures, with the aim of uncovering the network's organisation. The central points or hub reactions found in this step do not clearly explain the network structure. The small-world and scale-free attributes have been investigated, but they do not give more useful information. In the second step, one of the promising analysis methods, elementary flux modes, gives a large number of solutions, around hundreds of thousands of feasible metabolic pathways, which are difficult to handle manually. In the third step, minimal cut sets computation, a dual approach to elementary flux modes, has been used to enumerate all minimal and unique sets of reactions stopping the feasible pathways found in the previous step. Unlike elementary flux modes, the number of minimal cut sets tends not to grow exponentially with network size. We have also combined elementary flux modes analysis and minimal cut sets computation to find the relationship between the two sets of results. The findings reveal the importance of minimal cut sets in seeking the hierarchical structure of this network through elementary flux modes. We have examined the scenario in which the glucose entry is absent.
By analysing the small minimal cut sets, we have been able to find the set of reactions that must be present to produce the different sugars or metabolites of interest in the absence of glucose entry. Minimal cut sets of size 2 have been used to identify 8 reactions which play the role of the skeleton/core of our network. In addition to these first results, by using minimal cut sets of size 3, we have pointed out five reactions as starting points for the creation of new branches among the feasible pathways. These 13 reactions create a hierarchical classification of the set of elementary flux modes. It helps us understand more clearly the production of metabolites of interest inside the plant cell metabolism.
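The duality described above, where each minimal cut set is a minimal hitting set of the elementary flux modes, can be sketched by brute force for tiny networks (the toy mode sets in the test are illustrative assumptions, not the thesis network):

```python
# Minimal cut sets as minimal hitting sets of the elementary flux modes.
# Brute-force sketch for tiny networks; illustrative only.
from itertools import combinations

def minimal_cut_sets(efms, max_size=3):
    """efms: list of sets of reaction names (one set per elementary flux mode).
    Returns the minimal sets of reactions whose removal disables every mode."""
    reactions = sorted(set().union(*efms))
    cuts = []
    for size in range(1, max_size + 1):
        for cand in combinations(reactions, size):
            c = set(cand)
            # skip supersets of an already-found (smaller) cut: not minimal
            if any(m <= c for m in cuts):
                continue
            # a valid cut must intersect ("hit") every elementary flux mode
            if all(c & mode for mode in efms):
                cuts.append(c)
    return cuts
```

Enumerating by increasing size guarantees minimality: any candidate containing a previously found cut is discarded before being tested.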
APA, Harvard, Vancouver, ISO, and other styles
38

Newbold, James Richard. "Comparison and Simulation of a Water Distribution Network in EPANET and a New Generic Graph Trace Analysis Based Model." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/31177.

Full text
Abstract:
The main purpose of this study was to compare the Distributed Engineering Workstation (DEW) and EPANET models. These two models are fundamentally different in the approaches taken to simulate hydraulic systems. To better understand the calculations behind each model's hydraulic simulation, three solution methods were evaluated and compared. The three solution approaches were the Todini, Hardy-Cross, and DEW bisection methods. The Todini method was included in the study because of its similarities to EPANET's hydraulic solution method, and the Hardy-Cross solution was included due to its similarities with the DEW approach. Each solution method was used to solve a simple looped network, and the hydraulic solutions were compared. It was determined that all three solution methods predicted flow values that were very similar. A different, more complex looped network than the one used in the solution method comparison was simulated using both EPANET and DEW. Since EPANET is a well-established water distribution system model, it was considered the standard for the comparison with DEW. The predicted values from the simulations in EPANET and DEW were compared. This comparison offered insight into the functionality of DEW's hydraulic simulation component. The comparison determined that the DEW model is sensitive to the tolerance value chosen for a simulation. The flow predictions of the DEW and EPANET models became much closer when the tolerance value in DEW was decreased.
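The Hardy-Cross method mentioned above balances head losses around each loop by iterative flow corrections. A minimal single-loop sketch, assuming a quadratic head-loss law h = K·Q·|Q| (the pipe data in the test are toy values, not from the study):

```python
# Hardy-Cross loop correction: a minimal single-loop sketch.
# Head loss per pipe: h = K * Q * |Q|; flows are signed with a
# clockwise-positive convention around the loop.

def hardy_cross(K, Q, tol=1e-10, max_iter=100):
    """Iteratively balance head losses around one loop.
    K: pipe resistance coefficients; Q: initial signed flow guesses."""
    Q = list(Q)
    for _ in range(max_iter):
        num = sum(k * q * abs(q) for k, q in zip(K, Q))   # net head loss
        den = sum(2 * k * abs(q) for k, q in zip(K, Q))   # d(head loss)/dQ
        dQ = -num / den                                   # loop correction
        Q = [q + dQ for q in Q]                           # apply to every pipe
        if abs(dQ) < tol:
            break
    return Q
```

Because the same correction dQ is added to every pipe in the loop, node continuity is preserved while the loop's head-loss imbalance is driven to zero.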
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
39

Conte, Donatello Jolion Jean-Michel Vento Mario. "Detection, tracking, and behaviour analysis of moving people in intelligent video surveillance systems a graph based approach /." Villeurbanne : Doc'INSA, 2006. http://docinsa.insa-lyon.fr/these/pont.php?id=conte.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Conte, Donatello. "Detection, tracking, and behaviour analysis of moving people in intelligent video surveillance systems : a graph based approach." Lyon, INSA, 2006. http://theses.insa-lyon.fr/publication/2006ISAL0033/these.pdf.

Full text
Abstract:
In this thesis a video surveillance system is proposed. For each step of such systems, it presents some innovations with respect to the state of the art. First of all, a new selective and adaptive background subtraction algorithm has been proposed to adapt the system to illumination and scene changes. Furthermore, some heuristics are proposed to solve object detection problems in real environments: shadows, noise, etc. Results show that the proposed techniques are robust in terms of quality of solution and, moreover, efficient in terms of processing time. The main subject of the thesis concerns the object tracking phase. In the thesis we propose a new algorithm based on a new representation of the objects: graph pyramids. This representation allows the resolution of occlusions even in complex cases. Experiments are performed on standard datasets with standard indexes to provide objective results. The results show the approach is promising.
In this thesis, we propose a video surveillance system that presents new algorithms for object detection and object tracking, in order to overcome the main problems that arise in the development of such systems. A new selective and adaptive background subtraction algorithm has been proposed to adapt the system to changes in illumination and in the structure of the scene. Furthermore, to make the system applicable to real environments, heuristics have been proposed for the resolution of various problems: shadows, noise, etc. The results obtained for the object detection phase show that the proposed techniques are robust and usable in real time thanks to a low computation time. The main subject of the thesis concerned the object tracking phase. In this thesis, we propose a new algorithm based on a representation of the objects that uses graph pyramids. Experimental tests on standard databases, using established indexes for the evaluation of object tracking algorithms in the presence of occlusions, show that this approach is very promising.
APA, Harvard, Vancouver, ISO, and other styles
41

Bertarelli, Lorenza. "Analysis and simulation of cryptographic techniques based on sparse graph with application to satellite and airborne communication systems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15537/.

Full text
Abstract:
In view of the future diffusion of quantum computers, many of the cryptographic systems currently in use must be rethought. This drastic but concrete prospect has led us to reflect on a possible solution based on error correction code theory. In fact, many lines of thought agree on the possibility of using error correcting codes not only for channel encoding, but also for encryption. In 1978 McEliece was the first to propose this idea; at that time it was not given much consideration because its performance in the area of cryptography was worse than that of other techniques, but today it is being re-evaluated. Low Density Parity Check (LDPC) codes are the state of the art in error correction, since they can be decoded efficiently close to the Shannon capacity. Thanks to new advances in the algorithmic aspects of coding theory and progress on linear-time encodable/decodable codes, it is possible to achieve capacity even against adversarial noise. This thesis work mainly focuses on the decoding (or, in terms of cryptography, the decryption) of LDPC codes through the implementation of hard-decision iterative decoding, with the aim of studying the performance of different codes belonging to the same LDPC family in terms of error correction capability.
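Hard-decision iterative decoding can be illustrated with a minimal bit-flipping decoder for a binary parity-check matrix H. This is a generic sketch, not the thesis implementation; the Hamming(7,4) matrix in the test is a tiny stand-in for the much larger sparse LDPC matrices the thesis studies:

```python
# Minimal hard-decision (bit-flipping) decoder sketch.
# H: parity-check matrix as a list of 0/1 rows; r: received hard bits.

def bit_flip_decode(H, r, max_iter=50):
    word = list(r)
    n = len(word)
    for _ in range(max_iter):
        # syndrome: one parity check per row of H
        synd = [sum(row[j] & word[j] for j in range(n)) % 2 for row in H]
        if not any(synd):
            break                 # all parity checks satisfied: done
        # count unsatisfied checks touching each bit, flip the worst offender
        fails = [sum(s for row, s in zip(H, synd) if row[j]) for j in range(n)]
        word[fails.index(max(fails))] ^= 1
    return word
```

Flipping one bit per iteration (rather than all bits tied for the maximum) avoids the oscillations that parallel flipping can cause on small codes.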
APA, Harvard, Vancouver, ISO, and other styles
42

Wu, Sichao. "Computational Framework for Uncertainty Quantification, Sensitivity Analysis and Experimental Design of Network-based Computer Simulation Models." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78764.

Full text
Abstract:
When capturing a real-world, networked system using a simulation model, features are usually omitted or represented by probability distributions. Verification and validation (V and V) of such models is an inherent and fundamental challenge. Central to V and V, but also to model analysis and prediction, are uncertainty quantification (UQ), sensitivity analysis (SA) and design of experiments (DOE). In addition, network-based computer simulation models, as compared with models based on ordinary and partial differential equations (ODE and PDE), typically involve a significantly larger volume of more complex data. Efficient use of such models is challenging since it requires a broad set of skills ranging from domain expertise to in-depth knowledge including modeling, programming, algorithmics, high-performance computing, statistical analysis, and optimization. On top of this, the need to support reproducible experiments necessitates complete data tracking and management. Finally, the lack of standardization of simulation model configuration formats presents an extra challenge when developing technology intended to work across models. While there are tools and frameworks that address parts of the challenges above, to the best of our knowledge, none of them accomplishes all this in a model-independent and scientifically reproducible manner. In this dissertation, we present a computational framework called GENEUS that addresses these challenges. Specifically, it incorporates (i) a standardized model configuration format, (ii) a data flow management system with digital library functions helping to ensure scientific reproducibility, and (iii) a model-independent, expandable plugin-type library for efficiently conducting UQ/SA/DOE for network-based simulation models.
This framework has been applied to systems ranging from fundamental graph dynamical systems (GDSs) to large-scale socio-technical simulation models with a broad range of analyses such as UQ and parameter studies for various scenarios. Graph dynamical systems provide a theoretical framework for network-based simulation models and have been studied theoretically in this dissertation. This includes a broad range of stability and sensitivity analyses offering insights into how GDSs respond to perturbations of their key components. This stability-focused, structure-to-function theory was a motivator for the design and implementation of GENEUS. GENEUS, rooted in the framework of GDS, provides modelers, experimentalists, and research groups access to a variety of UQ/SA/DOE methods with robust and tested implementations without requiring them to necessarily have the detailed expertise in statistics, data management and computing. Even for research teams having all the skills, GENEUS can significantly increase research productivity.
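As a flavour of the SA methods such a plugin library could expose, a one-at-a-time (OAT) elasticity estimate is about the simplest. This is a generic sketch under stated assumptions, not part of GENEUS; `model` and the toy parameters in the test are hypothetical:

```python
# One-at-a-time (OAT) sensitivity sketch: estimate the elasticity of a
# scalar model output with respect to each parameter by perturbing one
# parameter at a time and measuring the relative output change.

def oat_sensitivity(model, base, delta=0.01):
    """model: callable taking a dict of parameters and returning a scalar.
    base: dict of nominal parameter values. Returns elasticity estimates."""
    y0 = model(base)
    sens = {}
    for name, value in base.items():
        perturbed = dict(base)
        perturbed[name] = value * (1 + delta)   # relative perturbation
        sens[name] = (model(perturbed) - y0) / (y0 * delta)
    return sens
```

OAT ignores parameter interactions, which is exactly why frameworks also offer global methods (variance-based SA, DOE), but it is a cheap first screen.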
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
43

Hermann, Frank [Verfasser], and Hartmut [Akademischer Betreuer] Ehrig. "Analysis and Optimization of Visual Enterprise Models : Based on Graph and Model Transformation [[Elektronische Ressource]] / Frank Hermann. Betreuer: Hartmut Ehrig." Berlin : Universitätsbibliothek der Technischen Universität Berlin, 2011. http://d-nb.info/1014891507/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Maus, Aaron. "Formulation of Hybrid Knowledge-Based/Molecular Mechanics Potentials for Protein Structure Refinement and a Novel Graph Theoretical Protein Structure Comparison and Analysis Technique." ScholarWorks@UNO, 2019. https://scholarworks.uno.edu/td/2673.

Full text
Abstract:
Proteins are the fundamental machinery that enables the functions of life. It is critical to understand them not just for basic biology, but also to enable medical advances. The field of protein structure prediction is concerned with developing computational techniques to predict protein structure and function from a protein’s amino acid sequence, encoded for directly in DNA, alone. Despite much progress since the first computational models in the late 1960s, techniques for the prediction of protein structure still cannot reliably produce structures of high enough accuracy to enable desired applications such as rational drug design. Protein structure refinement is the process of modifying a predicted model of a protein to bring it closer to its native state. In this dissertation a protein structure refinement technique, that of potential energy minimization using hybrid molecular mechanics/knowledge-based potential energy functions, is examined in detail. The generation of the knowledge-based component is critically analyzed, and in the end, a potential that is a modest improvement over the original is presented. This dissertation also examines the task of protein structure comparison. In evaluating various protein structure prediction techniques, it is crucial to be able to compare produced models against known structures to understand how well the technique performs. A novel technique is proposed that allows an in-depth yet intuitive evaluation of the local similarities between protein structures. Based on a graph analysis of pairwise atomic distance similarities, multiple regions of structural similarity can be identified between structures independently of relative orientation. Multidomain structures can be evaluated and this technique can be combined with global measures of similarity such as the global distance test.
This method of comparison is expected to have broad applications in rational drug design, the evolutionary study of protein structures, and in the analysis of the protein structure prediction effort.
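The pairwise-distance idea can be sketched as follows: connect atom pairs whose intra-structure distances agree between the two structures, so that mutually consistent regions appear as dense subgraphs, with no superposition needed. This is a toy sketch with an assumed tolerance, not the dissertation's actual algorithm:

```python
# Distance-similarity graph sketch: an edge (i, j) means the i-j distance
# is nearly the same in both structures, so superposition is unnecessary.
from math import dist  # Euclidean distance, Python 3.8+

def similarity_graph(coords_a, coords_b, tol=0.5):
    """coords_a, coords_b: matched lists of (x, y, z) atom coordinates.
    Returns the set of index pairs whose pairwise distances agree within tol."""
    n = len(coords_a)
    return {(i, j)
            for i in range(n) for j in range(i + 1, n)
            if abs(dist(coords_a[i], coords_a[j])
                   - dist(coords_b[i], coords_b[j])) < tol}
```

Connected or dense regions of this graph then mark locally similar substructures, independently of the structures' relative orientation.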
APA, Harvard, Vancouver, ISO, and other styles
45

Ojha, Hem Raj. "Link Dynamics in Student Collaboration Networks using Schema Based Structured Network Models on Canvas LMS." Miami University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=miami1596154905454069.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Nguyen, Thi Kim Ngan. "Generalizing association rules in n-ary relations : application to dynamic graph analysis." Phd thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00995132.

Full text
Abstract:
Pattern discovery in large binary relations has been extensively studied. An emblematic success in this area concerns frequent itemset mining and its post-processing that derives association rules. In this case, we mine binary relations that encode whether some properties are satisfied or not by some objects. It is however clear that many datasets correspond to n-ary relations where n > 2. For example, adding spatial and/or temporal dimensions (location and/or time when the properties are satisfied by the objects) leads to the 4-ary relation Objects x Properties x Places x Times. Therefore, we study the generalization of association rule mining within arbitrary n-ary relations: the datasets are now Boolean tensors and not only Boolean matrices. Unlike standard rules that involve subsets of only one domain of the relation, in our setting, the head and the body of a rule can include arbitrary subsets of some selected domains. A significant contribution of this thesis concerns the design of interestingness measures for such generalized rules: besides a frequency measure, two different views on rule confidence are considered. The concept of non-redundant rules and the efficient extraction of the non-redundant rules satisfying the minimal frequency and minimal confidence constraints are also studied. To increase the subjective interestingness of rules, we then introduce disjunctions in their heads. This requires redefining the interestingness measures and revisiting the redundancy issues. Finally, we apply our new rule discovery techniques to dynamic relational graph analysis. Such graphs can be encoded into n-ary relations (n ≥ 3). Our use case concerns bicycle renting in the Vélo'v system (self-service bicycle renting in Lyon). It illustrates the added-value of some rules that can be computed thanks to our software prototypes.
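For reference, the classical 2-ary case that the thesis generalizes can be written in a few lines: support and confidence of a rule body → head over a binary Objects × Properties relation (the transactions in the test are toy data; in the n-ary setting, bodies and heads range over subsets of several domains):

```python
# Classical association-rule measures over a binary relation, with each
# object represented as the set of properties it satisfies.

def support(transactions, itemset):
    """Fraction of objects satisfying every property in itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, body, head):
    """Conditional frequency of head given body: supp(body ∪ head) / supp(body)."""
    return support(transactions, body | head) / support(transactions, body)
```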
APA, Harvard, Vancouver, ISO, and other styles
47

Mehri, Maroua. "Historical document image analysis : a structural approach based on texture." Thesis, La Rochelle, 2015. http://www.theses.fr/2015LAROS005/document.

Full text
Abstract:
Recent advances in the digitization of heritage document collections have raised new challenges in guaranteeing long-term preservation and providing wider access to ancient documents. Alongside information retrieval in digital libraries and the content analysis of digitized pages of ancient books, the characterization and categorization of the pages of ancient books has recently attracted renewed interest. Efforts focus on developing fast and automatic tools for characterizing and categorizing the pages of ancient books, capable of classifying the pages of a digitized book according to several criteria, notably the layout structure and/or the typographic/graphical characteristics of the page content. In this thesis, we therefore propose an approach for the automatic characterization and categorization of the pages of an ancient book. The proposed approach is intended to be independent of the structure and content of the analyzed book. The main advantage of this work is that the approach requires no prior knowledge, either of the document's content or of its structure. It is based on an analysis of texture descriptors and a structural graph representation in order to provide a rich description enabling categorization based on the graphical content (captured by texture) and the layouts (represented by graphs). Indeed, this categorization relies on characterizing the content of the digitized page through an analysis of texture, shape, geometric and topological descriptors. This characterization is defined by means of a structural representation. In detail, the categorization approach consists of two main successive stages. The first extracts homogeneous regions.
The second proposes a texture-based structural signature, in the form of a graph, built from the extracted homogeneous regions and reflecting the structure of the analyzed page. This signature supports numerous applications for effectively managing a corpus or collections of heritage books (for example, information retrieval in digital libraries according to several criteria, or the categorization of the pages of a single book). By comparing the different structural signatures through the graph edit distance, the similarities between the pages of a single book in terms of layout and/or content can be deduced. Pages with similar layouts and/or content can thus be categorized, and a summary/table of contents of the analyzed book can then be generated automatically. To illustrate the effectiveness of the proposed signature, a detailed experimental study was conducted in this work to evaluate two possible applications of categorizing the pages of a single book: unsupervised page classification and page stream segmentation. In addition, the different stages of the proposed approach were evaluated through experiments conducted on a large corpus of heritage documents.
Over the last few years, there has been tremendous growth in digitizing collections of cultural heritage documents. Thus, many challenges and open issues have been raised, such as information retrieval in digital libraries or analyzing page content of historical books. Recently, an important need has emerged which consists in designing a computer-aided characterization and categorization tool, able to index or group historical digitized book pages according to several criteria, mainly the layout structure and/or typographic/graphical characteristics of the historical document image content. Thus, the work conducted in this thesis presents an automatic approach for characterization and categorization of historical book pages. The proposed approach is applicable to a large variety of ancient books. In addition, it does not assume a priori knowledge regarding document image layout and content. It is based on the use of texture and graph algorithms to provide a rich and holistic description of the layout and content of the analyzed book pages to characterize and categorize historical book pages. The categorization is based on the characterization of the digitized page content by texture, shape, geometric and topological descriptors. This characterization is represented by a structural signature. More precisely, the signature-based characterization approach consists of two main stages. The first stage is extracting homogeneous regions. Then, the second one is proposing a graph-based page signature which is based on the extracted homogeneous regions, reflecting its layout and content. Afterwards, by comparing the different obtained graph-based signatures using a graph-matching paradigm, the similarities of digitized historical book page layout and/or content can be deduced. Subsequently, book pages with similar layout and/or content can be categorized and grouped, and a table of contents/summary of the analyzed digitized historical book can be provided automatically. 
As a consequence, numerous signature-based applications (e.g. information retrieval in digital libraries according to several criteria, page categorization) can be implemented for managing effectively a corpus or collections of books. To illustrate the effectiveness of the proposed page signature, a detailed experimental evaluation has been conducted in this work for assessing two possible categorization applications, unsupervised page classification and page stream segmentation. In addition, the different steps of the proposed approach have been evaluated on a large variety of historical document images
APA, Harvard, Vancouver, ISO, and other styles
48

Ramírez, Mahaluf Juan Pablo. "The dynamics of emotional and cognitive networks: Graph-based analysis of brain networks using fMRI and theoretical model for cingulo-frontal network dynamics in major depression." Doctoral thesis, Universitat de Barcelona, 2015. http://hdl.handle.net/10803/311623.

Full text
Abstract:
This thesis is composed of two complementary projects. One focuses on the study of the dynamics between emotional and cognitive networks in healthy subjects using functional magnetic resonance imaging (fMRI). The second project builds on the results obtained in healthy subjects to formulate a computational model of the pathophysiology and treatment mechanisms in major depressive disorder (MDD). 1. Graph-based analysis of emotional-cognitive demands. The regulation of cognitive and emotional processes is critical for diverse functions such as attention, problem solving, error detection, motivation, decision making and social behavior. Dysregulation of these processes is at the core of Major Depressive Disorder (MDD). Currently, neuroimaging and anatomical methods applied to emotional and cognitive processes present two views of brain organization: one view presents a considerable degree of functional specialization and the other view proposes that cognition and emotion are integrated in the brain. Here, we address this issue by studying the network topology underlying the competitive interactions between emotional and cognitive networks in healthy subjects. To this end, we designed a task that contrasted periods with very high emotional and cognitive demands. We concatenated two tasks: a Sadness Provocation (SP) followed by a Spatial Working Memory (WM) task. We hypothesized that this behavioral paradigm would enhance the modularity of emotional and cognitive brain networks and would reveal the cortical areas that act as network hubs, which are critical for regulating the flow and integration of information between regions. We collected fMRI data from 22 healthy subjects performing this task. We analyzed their brain activity with a general linear model, looking for activation patterns linked to the various phases of the tasks, which we then used to extract 20 regions of interest (ROI) on a subject-by-subject basis.
We computed the correlations between fMRI time series in pairs of ROIs, obtaining a matrix of correlations for each subject, and we then applied network measures from graph theory. Subjects who rated their sadness intensity highest showed a more marked decrease in their cognitive performance after SP, and presented stronger activity in subgenual anterior cingulate cortex (sACC) and weaker activity in dorsolateral prefrontal cortex (dlPFC). The network analysis identified two main modules, one cognitive and one emotional. Analysis of connectivity degree and participation coefficient identified the areas that acted as hubs and their modulation: the left dlPFC degree decreased after sadness provocation and the left medial frontal pole (mFP) degree was modulated by sadness intensity. Functional connectivity analyses revealed that these hub areas modulated their connectivity following sadness experience: dlPFC and sACC showed stronger anticorrelation, and mFP and sACC strengthened their correlation. Our results identify the hubs that mediate the interaction between emotional and cognitive networks in a context of high emotional and cognitive demands, and they suggest possible targets to develop new therapeutic strategies for mood disorders. 2. A computational model of Major Depression. Several lines of evidence associate major depressive disorder (MDD) with a dysfunction of cingulo-frontal network dynamics following glutamate metabolism dysfunction in the ventral anterior cingulate cortex (vACC). However, we still lack a mechanistic framework to understand how these alterations underlie MDD and how treatments improve depression symptoms. We built a biophysical computational model of two cortical areas (vACC, and dorso-lateral prefrontal cortex, dlPFC) that acts as a switch between emotional and cognitive processing: the two areas cannot be co-active due to effective mutual inhibition.
We simulated MDD by slowing down glutamate decay in vACC, serotonergic treatments (SSRI) by activating serotonin 1A receptors in vACC, and deep brain stimulation by periodic stimulation of vACC interneurons at 130 Hz. We analyzed network dynamics mathematically in a simpler firing rate network model, and we derived the conditions for the emergence of cortical oscillations. MDD networks differed from healthy networks in that vACC presented constant activation in the absence of emotional inputs, which was not suppressed by dlPFC activation. In turn, vACC hyper-activation prevented dlPFC from responding to cognitive signals, mimicking cognitive dysfunction in MDD. SSRI counteracted aberrant vACC activity but it also abolished its normal response to emotional stimuli. In treatment-resistant models, DBS treatment restored the switch function. Activity oscillations in the theta and beta/gamma bands correlated with network function, representing a marker of switch-like operation in the network. The model articulates mechanistically how glutamate deficits generate aberrant vACC dynamics, and how this underlies emotional and cognitive symptoms in MDD. The model accounts for the progression of depression, dose-dependent SSRI treatment, DBS treatment of treatment-resistant models and EEG rhythmic biomarkers in a biophysical model of the pathophysiology of MDD.
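The participation coefficient used above to identify hubs is, in its standard (Guimerà-Amaral) form, P_i = 1 − Σ_m (k_im/k_i)², where k_im is node i's degree within module m and k_i its total degree. A minimal sketch on a toy graph (not the fMRI network):

```python
# Participation coefficient P_i = 1 - sum_m (k_im / k_i)^2: a node with all
# links inside one module scores 0; links spread evenly over modules score high.

def participation_coefficient(adj, modules):
    """adj: dict node -> set of neighbours; modules: dict node -> module id."""
    P = {}
    for node, neigh in adj.items():
        k = len(neigh)
        if k == 0:
            P[node] = 0.0
            continue
        per_module = {}                       # k_im: degree within each module
        for nb in neigh:
            m = modules[nb]
            per_module[m] = per_module.get(m, 0) + 1
        P[node] = 1.0 - sum((km / k) ** 2 for km in per_module.values())
    return P
```

Nodes with both high degree and high participation coefficient act as connector hubs between modules, which is the role attributed to dlPFC and mFP above.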
This thesis comprises two complementary projects. One focuses on the study of the dynamics between emotional and cognitive networks in healthy subjects using functional magnetic resonance imaging (fMRI). The second project builds on the results obtained in healthy subjects to formulate a computational model of the pathophysiological and treatment mechanisms in major depressive disorder (MDD). 1. Graph analysis as a function of emotional-cognitive demands. The regulation of cognitive and emotional processes is fundamental for diverse functions such as attention, problem solving, error detection, motivation, decision making and social behavior. The dysregulation of these processes is at the core of major depressive disorder (MDD). Currently, neuroimaging and anatomical methods applied to emotional and cognitive processes present two views of brain organization: one view presents a high degree of functional specialization, and the other proposes that cognition and emotion are integrated in the brain. Here, we address this question by studying the underlying network topology during competitive interactions between emotional and cognitive networks in healthy subjects. To this end, we designed a task that contrasts periods of very high emotional and cognitive demands. We concatenated two tasks: a sadness provocation (SP) followed by a spatial working memory (WM) task. The hypothesis is that this behavioral paradigm would enhance the modularity of emotional and cognitive brain networks and reveal the cortical areas that act as network hubs, which are fundamental for regulating the flow and integration of information between regions. fMRI data were collected from 22 healthy subjects performing this task.
We analyzed their brain activity with a general linear model, looking for activation patterns linked to the various phases of the tasks, which we then used to extract 20 regions of interest (ROI) for each subject. We computed the correlations between fMRI time series for pairs of regions of interest, built a correlation matrix for each subject, and then applied network measures from graph theory. Subjects who rated their sadness intensity highest showed a more marked decrease in their cognitive performance after SP, and presented greater activity in the subgenual anterior cingulate cortex (sACC) and weaker activity in the dorsolateral prefrontal cortex (dlPFC). The network analysis identified two main modules, one cognitive and one emotional. Analyses of connectivity degree and participation coefficient identified the areas acting as hubs and their modulation: the degree of the dlPFC decreased after sadness provocation, and the degree of the medial frontal pole (mFP) was modulated by sadness intensity. Functional connectivity analyses revealed that these areas modulate their connectivity depending on the sadness experience: dlPFC and sACC showed stronger anticorrelation, and mFP and sACC increased their positive correlation in the subjects who became saddest. Our results identify the hubs that mediate the interaction between emotional and cognitive networks in a context of high emotional and cognitive demands, and suggest possible targets for developing new therapeutic strategies for mood disorders. 2. A computational model of major depression. Several lines of evidence associate major depressive disorder (MDD) with a dysfunction of cingulo-frontal network dynamics, and especially a dysfunction of glutamate metabolism in the ventral anterior cingulate cortex (vACC).
Sin embargo, todavía carecemos de un marco mecanicista para entender cómo estas alteraciones subyacen TDM y cómo los tratamientos mejoran los síntomas de depresión. Construimos un modelo biofísico computacional de dos áreas corticales (vACC y dlPFC) que actúa como un interruptor entre el procesamiento emocional y cognitivo: las dos áreas no pueden ser co-activo debido a la inhibición mutua eficaz. Hemos simulado el TDM por enlentecimiento en la recaptura del glutamato en vACC, los tratamientos serotoninérgicos (ISRS) mediante la activación de los receptores de serotonina 1A en vACC y la estimulación cerebral profunda mediante la estimulación periódica de interneuronas en el vACC a 130 Hz (DBS). Se analizó la dinámica de la red matemáticamente en un modelo de red tasa de disparo más simple, y derivamos las condiciones para la aparición de oscilaciones corticales. Las redes TDM difieren de las redes sanas en que vACC presentó activación constante en ausencia de estímulos emocionales, que no fue suprimida por la activación dlPFC. A su vez, la hiper-activación del vACC impidió al dlPFC responder a los estímulos cognitivos, imitando la disfunción cognitiva del TDM. ISRS contrarrestaron actividad aberrante en el vACC pero también abolió su respuesta normal a los estímulos emocionales. En los modelos resistentes al tratamiento, el tratamiento DBS restauró la función de interruptor. Oscilaciones en las bandas theta y beta/gamma correlacionaron con la función de red, específicamente con el rango bistable, lo que representa un marcador de operación de interruptor de la red. El modelo articula mecánicamente cómo el déficit en el metabolismo del glutamato genera dinámicas aberrantes en vACC, y cómo esto subyace síntomas emocionales y cognitivos en el TDM. 
El modelo representa la progresión de la depresión, la respuesta (dependiente de dosis) al tratamiento con ISRS, como DBS rescata la función de la red en modelos resistentes al tratamiento y explica porque las oscilaciones son biomarcadores, en un modelo biofísico de la fisiopatología del trastorno depresivo mayor.
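The graph-theoretic pipeline of part 1 — pairwise ROI correlations, a module partition, degree, and participation coefficient — can be sketched as follows. This is a minimal illustration with synthetic time series; the threshold, module assignment, and data sizes are assumptions for the sketch, not the thesis's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI data: 20 ROIs x 200 time points (assumed sizes).
n_rois, n_t = 20, 200
ts = rng.standard_normal((n_rois, n_t))

# Pairwise Pearson correlations between ROI time series -> correlation matrix.
corr = np.corrcoef(ts)
np.fill_diagonal(corr, 0.0)

# Binarize with an arbitrary threshold to obtain an adjacency matrix.
adj = (np.abs(corr) > 0.15).astype(int)

# Degree: edges per ROI; high-degree nodes are hub candidates.
degree = adj.sum(axis=1)

# Assumed module partition: first 10 ROIs "cognitive", last 10 "emotional".
modules = np.array([0] * 10 + [1] * 10)

def participation_coefficient(adj, modules):
    """P_i = 1 - sum_s (k_is / k_i)^2, where k_is is the number of edges from
    node i into module s. P near 1 marks connector hubs bridging modules."""
    k = adj.sum(axis=1).astype(float)
    k_safe = np.where(k > 0, k, 1.0)  # avoid division by zero
    p = np.zeros_like(k)
    for s in np.unique(modules):
        k_is = adj[:, modules == s].sum(axis=1)
        p += (k_is / k_safe) ** 2
    pc = 1.0 - p
    pc[k == 0] = 0.0  # isolated nodes get P = 0 by convention
    return pc

pc = participation_coefficient(adj, modules)
print("degrees:", degree[:5], "participation:", np.round(pc[:5], 2))
```

In this framing, an area like the dlPFC losing degree after sadness provocation would show up as a drop in its row sum of `adj` between task conditions.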
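The switch behavior of part 2 can be illustrated with a minimal two-population firing-rate model with mutual inhibition. This is only a sketch of the switch mechanism: the threshold-linear transfer function, parameter values, and input protocol are assumptions, not the thesis's biophysical model:

```python
def simulate(I_vacc, I_dlpfc, w_inh=1.5, tau=0.01, dt=0.001, T=1.0):
    """Two rate units (vACC, dlPFC) with effective mutual inhibition:
    tau * dr/dt = -r + max(0, I_ext - w_inh * r_other), Euler-integrated."""
    r_v = r_d = 0.0
    for _ in range(int(T / dt)):
        r_v += dt / tau * (-r_v + max(0.0, I_vacc - w_inh * r_d))
        r_d += dt / tau * (-r_d + max(0.0, I_dlpfc - w_inh * r_v))
    return r_v, r_d

# Emotional stimulus drives vACC: vACC wins, dlPFC is suppressed.
rv, rd = simulate(I_vacc=1.0, I_dlpfc=0.2)
# Cognitive stimulus drives dlPFC: dlPFC wins, vACC is suppressed.
rv2, rd2 = simulate(I_vacc=0.2, I_dlpfc=1.0)
print(round(rv, 2), round(rd, 2), round(rv2, 2), round(rd2, 2))
```

In this toy model, an MDD-like state would correspond to a tonic drive to vACC (sustained `I_vacc` without an emotional stimulus), which keeps the switch stuck on the emotional side and blocks the dlPFC response to cognitive input.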
APA, Harvard, Vancouver, ISO, and other styles
49

Bajaj, Manas. "Knowledge composition methodology for effective analysis problem formulation in simulation-based design." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26639.

Full text
Abstract:
Thesis (Ph.D.)--Mechanical Engineering, Georgia Institute of Technology, 2009.
Committee Co-Chair: Dr. Christiaan J. J. Paredis; Committee Co-Chair: Dr. Russell S. Peak; Committee Member: Dr. Charles Eastman; Committee Member: Dr. David McDowell; Committee Member: Dr. David Rosen; Committee Member: Dr. Steven J. Fenves. Part of the SMARTech Electronic Thesis and Dissertation Collection.
50

Railing, Brian Paul. "Collecting and representing parallel programs with high performance instrumentation." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54431.

Full text
Abstract:
Computer architecture has looming challenges with finding program parallelism, process technology limits, and limited power budget. To navigate these challenges, a deeper understanding of parallel programs is required. I will discuss the task graph representation and how it enables programmers and compiler optimizations to understand and exploit dynamic aspects of the program. I will present Contech, which is a high performance framework for generating dynamic task graphs from arbitrary parallel programs. The Contech framework supports a variety of languages and parallelization libraries, and has been tested on both x86 and ARM. I will demonstrate how this framework encompasses a diversity of program analyses, particularly by modeling a dynamically reconfigurable, heterogeneous multi-core processor.
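A dynamic task graph of the kind this abstract describes can be represented, generically, as a DAG of tasks with dependency edges; the sketch below (not Contech's actual format or API) computes the duration-weighted critical path, a basic quantity such analyses expose, since total work divided by critical path bounds the achievable speedup:

```python
from collections import defaultdict

# Hypothetical task graph: task -> duration, plus dependency edges (pred, succ).
durations = {"spawn": 1, "work_a": 5, "work_b": 3, "join": 2}
edges = [("spawn", "work_a"), ("spawn", "work_b"),
         ("work_a", "join"), ("work_b", "join")]

def critical_path_length(durations, edges):
    """Longest duration-weighted path through the DAG (topological-order DP)."""
    preds, succs = defaultdict(list), defaultdict(list)
    indeg = {t: 0 for t in durations}
    for u, v in edges:
        succs[u].append(v)
        preds[v].append(u)
        indeg[v] += 1
    finish = {}
    ready = [t for t in durations if indeg[t] == 0]
    while ready:
        u = ready.pop()
        finish[u] = durations[u] + max((finish[p] for p in preds[u]), default=0)
        for v in succs[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return max(finish.values())

print(critical_path_length(durations, edges))  # 8, vs. 11 units of total work
```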
