To see the other types of publications on this topic, follow the link: Graphical Method.

Dissertations / Theses on the topic 'Graphical Method'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Graphical Method.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Ravi, Sudharshan, and Quang Vu. "Graphical Editor for Diagnostic Method Development." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-107514.

Full text
Abstract:
The adage "a picture is worth a thousand words" conveys the notion that a complex concept can be understood with just a single picture. Thus visualising data allows users to absorb and use large amounts of data quickly. Although textual programming is widely used, it is not best suited for all situations. Some of these situations require a graphical way to program data. This thesis investigates the different modeling frameworks available within the Eclipse ecosystem that allow the reuse of existing XML schema models and the creation as well as editing of diagnostic methods. The chosen frameworks were used to build a graphical editor that allows users to create, edit and use diagnostic methods graphically.
APA, Harvard, Vancouver, ISO, and other styles
2

Berry, Maresi (Maresi Ann) 1969. "Graphical method for airport noise impact analysis." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/50429.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 1998.
Includes bibliographical references (p. 99-102).
by Maresi Berry.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
3

Kim, Bo Hung. "A graphical preprocessing interface for non-conforming spectral element solvers." College Station, Tex.: Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1819.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jakuben, Benedict J. "Improving Graphical User Interface (GUI) Design Using the Complete Interaction Sequence (CIS) Testing Method." Case Western Reserve University School of Graduate Studies / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1291093142.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Vuchi, Aditya. "Graphical user interface for three-dimensional FE modeling of composite steel bridges." Morgantown, W. Va. : [West Virginia University Libraries], 2005. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4389.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2005.
Title from document title page. Document formatted into pages; contains xi, 188 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 111-115).
APA, Harvard, Vancouver, ISO, and other styles
6

Guven, Deniz. "Development Of A Graphical User Interface For Composite Bridge Finite Element Analysis." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12608094/index.pdf.

Full text
Abstract:
Curved bridges with steel/concrete composite girders have been used frequently in recent years. Analysis of these structural systems presents a variety of challenges. The finite element method offers the most elaborate treatment for these systems; however, its use is limited in routine design practice due to modeling requirements. In recent years, a finite element program named UTrAp was developed to analyze construction stages of curved/straight composite bridges. The original Graphical User Interface could not be used with the modified computation engine. The focus of this thesis work is to develop a brand new Graphical User Interface, with enhanced visual capabilities, compatible with the engine. Pursuant to this goal, a Graphical User Interface was developed using the C++ programming language together with OpenGL libraries. The interface is linked to the computational engine to enable direct interaction between the two programs. In this thesis work the development of the GUI and the modifications to the computational engine are presented. Moreover, the analysis results pertaining to the newly added features are checked against analytical solutions and recommendations presented in design specifications.
APA, Harvard, Vancouver, ISO, and other styles
7

Ćosović, Mirsad. "Distributed State Estimation in Power Systems using Probabilistic Graphical Models." PhD thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=108459&source=NDLTD&language=en.

Full text
Abstract:
We present a detailed study on application of factor graphs and the belief propagation (BP) algorithm to the power system state estimation (SE) problem. We start from the BP solution for the linear DC model, for which we provide a detailed convergence analysis. Using the BP-based DC model we propose a fast real-time state estimator for the power system SE. The proposed estimator is easy to distribute and parallelize, thus alleviating computational limitations and allowing for processing measurements in real time. The presented algorithm may run as a continuous process, with each new measurement being seamlessly processed by the distributed state estimator. In contrast to the matrix-based SE methods, the BP approach is robust to ill-conditioned scenarios caused by significant differences between measurement variances, thus resulting in a solution that eliminates observability analysis. Using the DC model, we numerically demonstrate the performance of the state estimator in a realistic real-time system model with asynchronous measurements. We note that the extension to the non-linear SE is possible within the same framework. Using insights from the DC model, we use two different approaches to derive the BP algorithm for the non-linear model. The first method directly applies BP methodology, however, providing only an approximate BP solution for the non-linear model. In the second approach, we make a key further step by providing the solution in which the BP is applied sequentially over the non-linear model, akin to what is done by the Gauss-Newton method. The resulting iterative Gauss-Newton belief propagation (GN-BP) algorithm can be interpreted as a distributed Gauss-Newton method with the same accuracy as the centralized SE, however, introducing a number of advantages of the BP framework. The thesis provides an extensive numerical study of the GN-BP algorithm, provides details on its convergence behavior, and gives a number of useful insights for its implementation. Finally, we define the bad data test based on the BP algorithm for the non-linear model. The presented model establishes local criteria to detect and identify bad data measurements. We numerically demonstrate that the BP-based bad data test significantly improves the bad data detection over the largest normalized residual test.
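As a hedged illustration (not code from the thesis), the centralized weighted least-squares solution of the linear DC SE problem is the estimate that the distributed Gaussian BP algorithm is shown to match; the 3-bus measurement model and numbers below are invented:

```python
import numpy as np

# Hypothetical 3-bus DC model, bus 1 as angle reference (theta1 = 0).
# Measurements are line flows: z = H @ theta + noise, theta = (theta2, theta3).
H = np.array([[-1.0,  0.0],    # flow 1->2 = theta1 - theta2
              [ 1.0, -1.0],    # flow 2->3 = theta2 - theta3
              [ 0.0, -1.0]])   # flow 1->3 = theta1 - theta3
z = np.array([0.30, 0.18, 0.49])                  # invented flow measurements
W = np.diag(1.0 / np.array([1e-4, 1e-4, 4e-4]))   # inverse measurement variances

# Centralized WLS solution of DC SE; Gaussian BP on the factor graph of this
# model converges to the same estimate using only local message passing.
theta_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print(theta_hat)    # estimated angles (theta2, theta3)
```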
APA, Harvard, Vancouver, ISO, and other styles
8

Hussin, Mahmud M. "Some studies of a graphical method in statistical data analysis : subjective judgments in the interpretation of boxplots." Thesis, Keele University, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.290317.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Boussaid, Haithem. "Efficient inference and learning in graphical models for multi-organ shape segmentation." Thesis, Châtenay-Malabry, Ecole centrale de Paris, 2015. http://www.theses.fr/2015ECAP0002/document.

Full text
Abstract:
This thesis explores the use of discriminatively trained deformable contour models (DCMs) for shape-based segmentation in medical images. We make contributions on two fronts: in the learning problem, where the model is trained from a set of annotated images, and in the inference problem, whose aim is to segment an image given a model. We demonstrate the merit of our techniques on a large X-ray image segmentation benchmark, where we obtain systematic improvements in accuracy and speedups over the current state-of-the-art. For learning, we formulate training the DCM scoring function as large-margin structured prediction and construct a training objective that aims at giving the highest score to the ground-truth contour configuration. We incorporate a loss function adapted to DCM-based structured prediction. In particular, we consider training with the Mean Contour Distance (MCD) performance measure. Using this loss function during training amounts to scoring each candidate contour according to its Mean Contour Distance to the ground-truth configuration. Training DCMs using structured prediction with the standard zero-one loss already outperforms the current state-of-the-art method [Seghers et al. 2007] on the considered medical benchmark [Shiraishi et al. 2000, van Ginneken et al. 2006]. We demonstrate that training with the MCD structured loss further improves over the generic zero-one loss results by a statistically significant amount. For inference, we propose efficient solvers adapted to combinatorial problems with discretized spatial variables. Our contributions are three-fold: first, we consider inference for loopy graphical models, making no assumption about the underlying graph topology. We use an efficient decomposition-coordination algorithm to solve the resulting optimization problem: we decompose the model's graph into a set of open, chain-structured graphs. We employ the Alternating Direction Method of Multipliers (ADMM) to fix the potential inconsistencies of the individual solutions. Even though ADMM is an approximate inference scheme, we show empirically that our implementation delivers the exact solution for the considered examples. Second, we accelerate optimization of chain-structured graphical models by using the Hierarchical A* search algorithm of [Felzenszwalb & Mcallester 2007] coupled with the pruning techniques developed in [Kokkinos 2011a]. We achieve a one-order-of-magnitude speedup on average over the state-of-the-art technique based on Dynamic Programming (DP) coupled with Generalized Distance Transforms (GDTs) [Felzenszwalb & Huttenlocher 2004]. Third, we incorporate the Hierarchical A* algorithm in the ADMM scheme to guarantee an efficient optimization of the underlying chain-structured subproblems. The resulting algorithm is naturally adapted to solve the loss-augmented inference problem in structured prediction learning, and hence is used during both training and inference. In Appendix A, we consider the case of 3D data and we develop an efficient method to find the mode of a 3D kernel density distribution. Our algorithm has guaranteed convergence to the global optimum, and scales logarithmically in the volume size by virtue of recursively subdividing the search space. We use this method to rapidly initialize 3D brain tumor segmentation, where we demonstrate substantial acceleration with respect to a standard mean-shift implementation. In Appendix B, we describe in more detail our extension of the Hierarchical A* search algorithm of [Felzenszwalb & Mcallester 2007] to inference on chain-structured graphs.
APA, Harvard, Vancouver, ISO, and other styles
10

Wytock, Matt. "Optimizing Optimization: Scalable Convex Programming with Proximal Operators." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/785.

Full text
Abstract:
Convex optimization has developed a wide variety of useful tools critical to many applications in machine learning. However, unlike linear and quadratic programming, general convex solvers have not yet reached sufficient maturity to fully decouple the convex programming model from the numerical algorithms required for implementation. Especially as datasets grow in size, there is a significant gap in speed and scalability between general solvers and specialized algorithms. This thesis addresses this gap with a new model for convex programming based on an intermediate representation of convex problems as a sum of functions with efficient proximal operators. This representation serves two purposes: 1) many problems can be expressed in terms of functions with simple proximal operators, and 2) the proximal operator form serves as a general interface to any specialized algorithm that can incorporate additional ℓ2-regularization. On a single CPU core, numerical results demonstrate that the prox-affine form results in significantly faster algorithms than existing general solvers based on conic forms. In addition, splitting problems into separable sums is attractive from the perspective of distributing solver work amongst multiple cores and machines. We apply large-scale convex programming to several problems arising from building the next-generation, information-enabled electrical grid. In these problems (as is common in many domains) large, high-dimensional datasets present opportunities for novel data-driven solutions. We present approaches based on convex models for several problems: probabilistic forecasting of electricity generation and demand, preventing failures in microgrids and source separation for whole-home energy disaggregation.
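As a hedged sketch of the proximal-operator interface the abstract describes (not code from the thesis), the ℓ1 norm has the classic soft-thresholding proximal operator, which a proximal gradient loop can call as a black box; the problem data below are invented:

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t * ||x||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Proximal gradient method for min_x 0.5*||A x - b||^2 + lam*||x||_1.
# A, b, lam and the step size are illustrative, not from the thesis.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
lam = 0.5
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient

x = np.zeros(20)
for _ in range(500):
    grad = A.T @ (A @ x - b)                   # gradient of the smooth part
    x = prox_l1(x - step * grad, step * lam)   # the prox handles the l1 part
```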
APA, Harvard, Vancouver, ISO, and other styles
11

Pilla, César Augusto Gomes de. "A Álgebra linear como ferramenta para a pesquisa operacional." Rio Claro, 2019. http://hdl.handle.net/11449/191348.

Full text
Abstract:
Advisor: João Peres Vieira
Linear Programming is used in Operational Research to solve problems whose goal is to find the best solution for problems that have their models represented by linear expressions. Linear Algebra will be the tool for Linear Programming, solving maximization or minimization problems. We will use the Simplex Method and, in the case of two variables, we will also present the graphical method.
Master's
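For readers unfamiliar with the two-variable graphical method this abstract mentions, a minimal sketch with a textbook-style example problem (invented here, not from the thesis): because the optimum of a linear program lies at a vertex of the feasible polygon, it suffices to intersect the constraint boundary lines pairwise and evaluate the objective at the feasible intersections.

```python
import itertools
import numpy as np

# Illustrative LP: max 3*x1 + 5*x2  s.t.  x1 <= 4, 2*x2 <= 12,
# 3*x1 + 2*x2 <= 18, x1 >= 0, x2 >= 0.
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# Boundary lines of all constraints, including the two axes (x1 = 0, x2 = 0).
lines = [(A[i], b[i]) for i in range(len(b))] + \
        [(np.array([1.0, 0.0]), 0.0), (np.array([0.0, 1.0]), 0.0)]

best = None
for (a1, b1), (a2, b2) in itertools.combinations(lines, 2):
    M = np.array([a1, a2])
    if abs(np.linalg.det(M)) < 1e-12:
        continue                          # parallel boundaries: no vertex
    x = np.linalg.solve(M, [b1, b2])      # candidate vertex
    if np.all(x >= -1e-9) and np.all(A @ x <= b + 1e-9):
        if best is None or c @ x > c @ best:
            best = x

print(best, c @ best)   # expected vertex (2, 6) with objective value 36
```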
APA, Harvard, Vancouver, ISO, and other styles
12

Єфименко, В. М. "Розвиток графічного методу у навчанні фізики засобами цифрових лабораторій" [Development of the graphical method in teaching physics by means of digital laboratories]. Thesis, Сумський державний університет, 2016. http://essuir.sumdu.edu.ua/handle/123456789/47269.

Full text
Abstract:
Modern trends in the development of the global information space provide a person not only with effective informational interaction through sign systems (language, graphics, numerals, letters, images, etc.), but also with the opportunity to use internet resources, educational platforms, programmable tools, programming languages, and the like to satisfy personal interests and support personal and professional growth.
APA, Harvard, Vancouver, ISO, and other styles
13

Shoilekova, Bilyana Todorova. "Graphical enumeration methods." Thesis, University of Oxford, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.526538.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Ge, Fei. "The lattice Boltzmann method dedicated to image processing." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI012.

Full text
Abstract:
The Lattice Boltzmann Method (LBM) is a numerical tool for solving partial differential equations; LBM is a mesoscopic model that treats the material as a collection of particles in order to simulate macroscopic phenomena. As a numerical tool, LBM has proved its capability to simulate complex fluid flow behaviours and, more recently, to process medical images. In the framework of image analysis, LBM is implemented to perform de-noising, image boundary detection, and image segmentation. In addition, LBM has the advantage of strong amenability to parallel computing, especially on low-cost, powerful graphics hardware (GPU). In this direction, the main purpose of this thesis is to develop a general parallel computational segmentation algorithm. We have proved the efficiency of the proposed original method through the segmentation of the wall of an aneurysm together with the parent blood vessels, a whole cerebral data-set, and a stent-assisted aneurysm. The parallel segmentation algorithm has been run on an nVIDIA graphics card, demonstrating a speedup of more than 100 times at the same precision.
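A minimal sketch of the k-means partitioning of gray levels that drives the multi-threshold segmentation described above; this illustrates only the clustering step on synthetic data, not the thesis's lattice Boltzmann or GPU code:

```python
import numpy as np

def kmeans_gray_levels(image, k, iters=50, seed=0):
    # 1-D k-means over pixel intensities: partitions the gray levels into
    # k clusters, from which the multiple segmentation thresholds follow.
    pixels = image.reshape(-1).astype(float)
    rng = np.random.default_rng(seed)
    centers = rng.choice(pixels, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels.reshape(image.shape), np.sort(centers)

# Usage on a synthetic two-region image with noise.
img = np.zeros((64, 64))
img[16:48, 16:48] = 200.0
img += np.random.default_rng(1).normal(0.0, 10.0, img.shape)
seg, levels = kmeans_gray_levels(img, k=2)
```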
APA, Harvard, Vancouver, ISO, and other styles
15

Cohn, Trevor A. "Scaling conditional random fields for natural language processing /." Connect to thesis, 2007. http://eprints.unimelb.edu.au/archive/00002874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Eiser, Leslie Agrin. "Microcomputer graphics to teach high school physics." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Nelson, Michael S. "Graphical methods for depicting combat units." Thesis, Monterey, California. Naval Postgraduate School, 1992. http://hdl.handle.net/10945/23913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Luo, Xueyi. "A tool for computer verification of properties of certain classes of visibility graphs." Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/897510.

Full text
Abstract:
The segment endpoint visibility graph is a representation scheme for art gallery problems, guard problems, and other shortest path or shortest circuit problems. In the research of visibility graphs, drawing graphs is a time-consuming task. VGE (Visibility Graphs Editor) was developed for visibility graph researchers to create and modify graphs interactively in the X-window environment. An appropriate graphical user interface allows the researcher to edit a graph, save and open a file, and make a hard copy of a graph. VGE is developed in C under the X-window environment, using the EZD [3] graphics tool. The thesis also discusses the uses of EZD. Although it is still only a prototype, VGE is a successful tool for analyzing visibility graphs.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
19

Al-Kabbany, Ahmed. "Graphical Methods for Image Compositing and Completion." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/35071.

Full text
Abstract:
This thesis is concerned with problems encountered in image-based rendering (IBR) systems. The significance of such systems is increasing as virtual reality as well as augmented reality are finding their way into many applications, from entertainment to military. Particularly, I propose methods that are based on graph theory to address the open problems in the literature of image and video compositing, and scene completion. For a visually plausible compositing, it is first required to separate the object to be composed from the background it was initially captured against, a problem that is known as natural image matting. It aims, using some user interactions, to calculate a map that depicts how much a background color(s) contributes to the color of every other pixel in an image. My contributions to matting increase the accuracy of the map calculation as well as automate the whole process, by eliminating the need for user interactions. I propose several techniques for sampling user interactions which enhance the quality of the calculated maps. They rely on statistics of non-parametric color models as well as graph transduction and iterative graph cut techniques. The presented sampling strategies lead to state-of-the-art separation, and their efficiency was acknowledged by the standard benchmark in the literature. I have adopted the Gestalt laws of visual grouping to formulate a novel cost function to automate the generation of interactions that otherwise have to be provided manually. This frees the matting process from a critical limitation when used in rendering contexts. Scene completion is another task that is often required in IBR systems. This document presents a novel image completion method that overcomes a few drawbacks in the literature. It adopts a binary optimization technique to construct an image summary, which is then shifted according to a map, calculated with combinatorial optimization, to complete the image. I also present the formulation with which the proposed method can be extended to complete scenes, rather than images, in a stereoscopically and temporally-consistent manner.
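For reference, the "map" described above is the alpha matte of the standard compositing equation (background material, not a result of the thesis): for each pixel i with observed color C, foreground color F, and background color B,

```latex
% Compositing equation underlying natural image matting: each observed
% pixel color is a convex combination of foreground and background.
C_i = \alpha_i F_i + (1 - \alpha_i) B_i, \qquad \alpha_i \in [0, 1]
```

so matting estimates the per-pixel mixing coefficient alpha, and compositing reuses it against a new background.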
APA, Harvard, Vancouver, ISO, and other styles
20

Wang, Chao. "Exploiting non-redundant local patterns and probabilistic models for analyzing structured and semi-structured data." Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1199284713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Yellepeddi, Atulya. "Graphical model driven methods in adaptive system identification." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107499.

Full text
Abstract:
Thesis: Ph. D., Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 209-225).
Identifying and tracking an unknown linear system from observations of its inputs and outputs is a problem at the heart of many different applications. Due to the complexity and rapid variability of modern systems, there is extensive interest in solving the problem with as little data and computation as possible. This thesis introduces the novel approach of reducing problem dimension by exploiting statistical structure on the input. By modeling the input to the system of interest as a graph-structured random process, it is shown that a large parameter identification problem can be reduced into several smaller pieces, making the overall problem considerably simpler. Algorithms that can leverage this property in order to either improve the performance or reduce the computational complexity of the estimation problem are developed. The first of these, termed the graphical expectation-maximization least squares (GEM-LS) algorithm, can utilize the reduced dimensional problems induced by the structure to improve the accuracy of the system identification problem in the low sample regime over conventional methods for linear learning with limited data, including regularized least squares methods. Next, a relaxation of the GEM-LS algorithm termed the relaxed approximate graph structured least squares (RAGS-LS) algorithm is obtained that exploits structure to perform highly efficient estimation. The RAGS-LS algorithm is then recast into a recursive framework termed the relaxed approximate graph structured recursive least squares (RAGS-RLS) algorithm, which can be used to track time-varying linear systems with low complexity while achieving tracking performance comparable to much more computationally intensive methods. The performance of the algorithms developed in the thesis in applications such as channel identification, echo cancellation and adaptive equalization demonstrate that the gains admitted by the graph framework are realizable in practice. The methods have wide applicability, and in particular show promise as the estimation and adaptation algorithms for a new breed of fast, accurate underwater acoustic modems. The contributions of the thesis illustrate the power of graphical model structure in simplifying difficult learning problems, even when the target system is not directly structured.
by Atulya Yellepeddi.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
22

Owen, David R. "Random search of AND-OR graphs representing finite-state models." Morgantown, W. Va. : [West Virginia University Libraries], 2002. http://etd.wvu.edu/templates/showETD.cfm?recnum=2317.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2002.
Title from document title page. Document formatted into pages; contains vi, 96 p. : ill. Includes abstract. Includes bibliographical references (p. 91-96).
APA, Harvard, Vancouver, ISO, and other styles
23

Armstrong, Helen School of Mathematics UNSW. "Bayesian estimation of decomposable Gaussian graphical models." Awarded by: University of New South Wales, School of Mathematics, 2005. http://handle.unsw.edu.au/1959.4/24295.

Full text
Abstract:
This thesis explains to statisticians what graphical models are and how to use them for statistical inference; in particular, how to use decomposable graphical models for efficient inference in covariance selection and multivariate regression problems. The first aim of the thesis is to show that decomposable graphical models are worth using within a Bayesian framework. The second aim is to make the techniques of graphical models fully accessible to statisticians. To achieve these aims the thesis makes a number of statistical contributions. First, it proposes a new prior for decomposable graphs and a simulation methodology for estimating this prior. Second, it proposes a number of Markov chain Monte Carlo sampling schemes based on graphical techniques. The thesis also presents some new graphical results, and some existing results are re-proved to make them more readily understood. Appendix 8.1 contains all the programs written to carry out the inference discussed in the thesis, together with both a summary of the theory on which they are based and a line-by-line description of how each routine works.
APA, Harvard, Vancouver, ISO, and other styles
24

Huang, Xiaodi, and xhuang@turing.une.edu.au. "Filtering, clustering and dynamic layout for graph visualization." Swinburne University of Technology, 2004. http://adt.lib.swin.edu.au./public/adt-VSWT20050428.111554.

Full text
Abstract:
Graph visualization plays an increasingly important role in software engineering and information systems. Examples include UML, E-R diagrams, database structures, visual programming, web visualization, network protocols, molecular structures, genome diagrams, and social structures. Many classical algorithms for graph visualization have already been developed over the past decades. However, these algorithms face difficulties in practice, such as overlapping nodes, large graph layout, and dynamic graph layout. In order to solve these problems, this research aims to systematically address both algorithmic and approach issues related to a novel framework that describes the process of graph visualization applications. At the same time, all the proposed algorithms and approaches can be applied to other situations as well. First of all, a framework for graph visualization is described, along with a generic approach to the graphical representation of a relational information source. As important parts of this framework, two main approaches, Filtering and Clustering, are then investigated in particular to deal with large graph layouts effectively. In order to filter 'noise' or less important nodes in a given graph, two new methods are proposed to compute importance scores of nodes, called NodeRank, and then to control the appearance of nodes in a layout by ranking them. Two novel algorithms for clustering graphs, KNN and SKM, are developed to reduce visual complexity. Identifying seed nodes as initial members of clusters, both algorithms make use of either the k-nearest neighbour search or a novel node similarity matrix to seek groups of nodes with most affinities or similarities among them. Such groups of relatively highly connected nodes are then replaced with abstract nodes to form a coarse graph with reduced dimensions. An approach called MMD to the layout of clustered graphs is provided, using a multiple-window, multiple-level display. As for dynamic graph layout, a new approach to removing overlapping nodes, called the Force-Transfer algorithm, is developed to greatly improve the classical Force-Scan algorithm. Demonstrating the performance of the proposed algorithms and approaches, the framework has been implemented in a prototype called PGD. A number of experiments as well as a case study have been carried out.
APA, Harvard, Vancouver, ISO, and other styles
25

Eslava-Gomez, Guillermina. "Projection pursuit and other graphical methods for multivariate data." Thesis, University of Oxford, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.236118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

馮榮錦 and Wing-kam Tony Fung. "Analysis of outliers using graphical and quasi-Bayesian methods." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1987. http://hub.hku.hk/bib/B31230842.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Fung, Wing-kam Tony. "Analysis of outliers using graphical and quasi-Bayesian methods /." [Hong Kong] : University of Hong Kong, 1987. http://sunzi.lib.hku.hk/hkuto/record.jsp?B1236146X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Jaakkola, Tommi S. (Tommi Sakari). "Variational methods for inference and estimation in graphical models." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Ihler, Alexander T. (Alexander Thomas) 1976. "Inference in sensor networks : graphical models and particle methods." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33206.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 175-183).
Sensor networks have quickly risen in importance over the last several years to become an active field of research, full of difficult problems and applications. At the same time, graphical models have shown themselves to be an extremely useful formalism for describing the underlying statistical structure of problems for sensor networks. In part, this is due to a number of efficient methods for solving inference problems defined on graphical models, but even more important is the fact that many of these methods (such as belief propagation) can be interpreted as a set of message passing operations, for which it is not difficult to describe a simple, distributed architecture in which each sensor performs local processing and fusion of information, and passes messages locally among neighboring sensors. At the same time, many of the tasks which are most important in sensor networks are characterized by such features as complex uncertainty and nonlinear observation processes. Particle filtering is one common technique for dealing with inference under these conditions in certain types of sequential problems, such as tracking of mobile objects.
However, many sensor network applications do not have the necessary structure to apply particle filtering, and even when they do there are subtleties which arise due to the nature of a distributed inference process performed on a system with limited resources (such as power, bandwidth, and so forth). This thesis explores how the ideas of graphical models and sample-based representations of uncertainty such as are used in particle filtering can be applied to problems defined for sensor networks, in which we must consider the impact of resource limitations on our algorithms. In particular, we explore three related themes. We begin by describing how sample-based representations can be applied to solve inference problems defined on general graphical models. Limited communications, the primary restriction in most practical sensor networks, means that the messages which are passed in the inference process must be approximated in some way. Our second theme explores the consequences of such message approximations, and leads to results with implications both for distributed systems and the use of belief propagation more generally.
This naturally raises a third theme, investigating the optimal cost of representing sample-based estimates of uncertainty so as to minimize the communications required. Our analysis shows several interesting differences between this problem and traditional source coding methods. We also use the metrics for message errors to define lossy or approximate encoders, and provide an example encoder capable of balancing communication costs with a measure on inferential error. Finally, we put all three of these themes to work to solve a difficult and important task in sensor networks. The self-localization problem for sensor networks involves the estimation of all sensor positions given a set of relative inter-sensor measurements in the network. We describe this problem as a graphical model, illustrate the complex uncertainties involved in the estimation process, and present a method of finding both estimates of the sensor positions and their remaining uncertainty using a sample-based message passing algorithm. This method is capable of incorporating arbitrary noise distributions, including outlier processes, and by applying our lossy encoding algorithm can be used even when communications are relatively limited.
We conclude the thesis with a summary of the work and its contributions, and a description of some of the many problems which remain open within the field.
by Alexander T. Ihler.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
30

Wingfield, Cai. "Graphical foundations for dialogue games." Thesis, University of Bath, 2013. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608337.

Full text
Abstract:
In the 1980s and 1990s, Joyal and Street developed a graphical notation for various flavours of monoidal category using graphs drawn in the plane, commonly known as string diagrams. In particular, their work comprised a rigorous topological foundation of the notation. In 2007, Harmer, Hyland and Melliès gave a formal mathematical foundation for game semantics using notions they called ⊸-schedules, ⊗-schedules and heaps. Schedules described interleavings of plays in games formed using ⊸ and ⊗, and heaps provided pointers used for backtracking. Their definitions were combinatorial in nature, but researchers often draw certain pictures when working in practice. In this thesis, we extend the framework of Joyal and Street to give a formal account of the graphical methods already informally employed by researchers in game semantics. We give a geometric formulation of ⊸-schedules and ⊗-schedules, and prove that the games they describe are isomorphic to those described in Harmer et al.'s terms, and also those given by a more general graphical representation of interleaving across games of multiple components. We further illustrate the value of the geometric methods by demonstrating that several proofs of key properties (such as that the composition of ⊸-schedules is associative) can be made straightforward, reflecting the geometry of the plane, and sidestepping some of the cumbersome combinatorial detail of proofs in Harmer et al.'s terms. We further extend the framework of formal plane diagrams to account for the heaps and pointer structures used in the backtracking functors for O and P.
APA, Harvard, Vancouver, ISO, and other styles
31

Rowland, Mark. "Structure in machine learning : graphical models and Monte Carlo methods." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/287479.

Full text
Abstract:
This thesis is concerned with two main areas: approximate inference in discrete graphical models, and random embeddings for dimensionality reduction and approximate inference in kernel methods. Approximate inference is a fundamental problem in machine learning and statistics, with strong connections to other domains such as theoretical computer science. At the same time, there has often been a gap between the success of many algorithms in this area in practice, and what can be explained by theory; thus, an important research effort is to bridge this gap. Random embeddings for dimensionality reduction and approximate inference have led to great improvements in scalability of a wide variety of methods in machine learning. In recent years, there has been much work on how the stochasticity introduced by these approaches can be better controlled, and what further computational improvements can be made. In the first part of this thesis, we study approximate inference algorithms for discrete graphical models. Firstly, we consider linear programming methods for approximate MAP inference, and develop our understanding of conditions for exactness of these approximations. Such guarantees of exactness are typically based on either structural restrictions on the underlying graph corresponding to the model (such as low treewidth), or restrictions on the types of potential functions that may be present in the model (such as log-supermodularity). We contribute two new classes of exactness guarantees: the first of these takes the form of particular hybrid restrictions on a combination of graph structure and potential types, whilst the second is given by excluding particular substructures from the underlying graph, via graph minor theory. We also study a particular family of transformation methods of graphical models, uprooting and rerooting, and their effect on approximate MAP and marginal inference methods. We prove new theoretical results on the behaviour of particular approximate inference methods under these transformations, in particular showing that the triplet relaxation of the marginal polytope is unique in being universally rooted. We also introduce a heuristic which quickly picks a rerooting, and demonstrate benefits empirically on models over several graph topologies. In the second part of this thesis, we study Monte Carlo methods for both linear dimensionality reduction and approximate inference in kernel machines. We prove the statistical benefit of coupling Monte Carlo samples to be almost-surely orthogonal in a variety of contexts, and study fast approximate methods of inducing this coupling. A surprising result is that these approximate methods can simultaneously offer improved statistical benefits, time complexity, and space complexity over i.i.d. Monte Carlo samples. We evaluate our methods on a variety of datasets, directly studying their effects on approximate kernel evaluation, as well as on downstream tasks such as Gaussian process regression.
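A hedged sketch of the orthogonal coupling of Monte Carlo samples discussed above, applied to random Fourier features; this is the standard construction from the literature (Gaussian matrix replaced by a Haar orthogonal matrix with chi-distributed row norms), not code from the thesis:

```python
import numpy as np

def orthogonal_gaussian(d, rng):
    # d Gaussian directions coupled to be exactly orthogonal: take a Haar
    # orthogonal matrix (QR with sign fix) and rescale each row to a
    # chi-distributed norm, so every row is still marginally N(0, I).
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    Q = Q * np.sign(np.diag(R))
    norms = np.sqrt(rng.chisquare(df=d, size=d))
    return norms[:, None] * Q

# Random Fourier features with orthogonally coupled frequencies.
rng = np.random.default_rng(0)
d = 8
W = orthogonal_gaussian(d, rng)                 # frequency matrix
b = rng.uniform(0.0, 2.0 * np.pi, size=d)
phi = lambda x: np.sqrt(2.0 / d) * np.cos(W @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
print(phi(x) @ phi(y))   # estimates the Gaussian kernel exp(-||x - y||^2 / 2)
```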
APA, Harvard, Vancouver, ISO, and other styles
32

Zhang, Xinhua, and xinhua.zhang.cs@gmail.com. "Graphical Models: Modeling, Optimization, and Hilbert Space Embedding." The Australian National University. ANU College of Engineering and Computer Sciences, 2010. http://thesis.anu.edu.au./public/adt-ANU20100729.072500.

Full text
Abstract:
Over the past two decades graphical models have been widely used as powerful tools for compactly representing distributions. On the other hand, kernel methods have been used extensively to come up with rich representations. This thesis aims to combine graphical models with kernels to produce compact models with rich representational abilities. Graphical models are a powerful underlying formalism in machine learning. Their graph theoretic properties provide both an intuitive modular interface to model the interacting factors, and a data structure facilitating efficient learning and inference. The probabilistic nature ensures the global consistency of the whole framework, and allows convenient interface of models to data. Kernel methods, on the other hand, provide an effective means of representing rich classes of features for general objects, and at the same time allow efficient search for the optimal model. Recently, kernels have been used to characterize distributions by embedding them into high dimensional feature space. Interestingly, graphical models again decompose this characterization and lead to novel and direct ways of comparing distributions based on samples. Among the many uses of graphical models and kernels, this thesis is devoted to the following four areas.
Conditional random fields for multi-agent reinforcement learning: Conditional random fields (CRFs) are graphical models for modelling the probability of labels given the observations. They have traditionally been trained using a set of observation and label pairs. Underlying all CRFs is the assumption that, conditioned on the training data, the label sequences of different training examples are independent and identically distributed (iid). We extended the use of CRFs to a class of temporal learning algorithms, namely policy gradient reinforcement learning (RL). Now the labels are no longer iid: they are actions that update the environment and affect the next observation. From an RL point of view, CRFs provide a natural way to model joint actions in a decentralized Markov decision process. They define how agents can communicate with each other to choose the optimal joint action. We tested our framework on a synthetic network alignment problem, a distributed sensor network, and a road traffic control system. Using tree sampling by Hamze & de Freitas (2004) for inference, the RL methods employing CRFs clearly outperform those which do not model the proper joint policy.
Bayesian online multi-label classification: Gaussian density filtering (GDF) provides fast and effective inference for graphical models (Maybeck, 1982). Based on this natural online learner, we propose a Bayesian online multi-label classification (BOMC) framework which learns a probabilistic model of the linear classifier. The training labels are incorporated to update the posterior of the classifiers via a graphical model similar to TrueSkill (Herbrich et al., 2007), and inference is based on GDF with expectation propagation. Using samples from the posterior, we label the test data by maximizing the expected F-score. Our experiments on the Reuters1-v2 dataset show that BOMC delivers significantly higher macro-averaged F-score than state-of-the-art online maximum margin learners such as LaSVM (Bordes et al., 2005) and passive aggressive online learning (Crammer et al., 2006). The online nature of BOMC also allows us to efficiently use a large amount of training data.
Hilbert space embedding of distributions: Graphical models are also an essential tool in kernel measures of independence for non-iid data. Traditional information theory often requires density estimation, which makes it ill-suited for statistical estimation. Motivated by the fact that distributions often appear in machine learning via expectations, we can characterize the distance between distributions in terms of distances between means, especially means in reproducing kernel Hilbert spaces, which are called kernel embeddings. Under this framework, undirected graphical models further allow us to factorize the kernel embedding onto cliques, which yields efficient measures of independence for non-iid data (Zhang et al., 2009). We show the effectiveness of this framework for ICA and sequence segmentation, and a number of further applications and research questions are identified.
Optimization in maximum margin models for structured data: Maximum margin estimation for structured data, e.g. (Taskar et al., 2004), is an important task in machine learning where graphical models also play a key role. They are special cases of regularized risk minimization, for which bundle methods (BMRM, Teo et al., 2007) and the closely related SVMStruct (Tsochantaridis et al., 2005) are state-of-the-art general purpose solvers. Smola et al. (2007b) proved that BMRM requires O(1/ε) iterations to converge to an ε-accurate solution, and we further show that this rate hits the lower bound. By utilizing the structure of the objective function, we devised an algorithm for the structured loss which converges to an ε-accurate solution in O(1/√ε) iterations. This algorithm originates from Nesterov's optimal first order methods (Nesterov, 2003, 2005b).
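A minimal sketch of the kernel embedding idea referred to above (an illustration, not the thesis code): two distributions are compared through the distance between the empirical means of their samples in an RBF kernel feature space, the maximum mean discrepancy (MMD). The data and bandwidth below are invented:

```python
import numpy as np

def mmd_sq(X, Y, gamma=1.0):
    # Biased estimate of the squared MMD between samples X and Y with a
    # Gaussian RBF kernel: the distance between the kernel mean embeddings.
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (200, 2))
Y = rng.normal(0.5, 1.0, (200, 2))
print(mmd_sq(X, Y))   # larger when the two sample distributions differ
```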
APA, Harvard, Vancouver, ISO, and other styles
33

Best, Lisa A. "Graphical perception of nonlinear trends : discrimination and extrapolation /." Fogler Library, University of Maine, 2001. http://www.library.umaine.edu/theses/pdf/BestLA2001.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Tommasi, Gianpaolo Francesco Maria. "Procedural methods in computer graphics." Thesis, University of Cambridge, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Woyak, Scott A. "A motif-like object-oriented interface framework using PHIGS." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-09052009-040824/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Zickmann, Franziska [Verfasser]. "Computational methods and graphical models for integrative proteogenomics / Franziska Zickmann." Berlin : Freie Universität Berlin, 2015. http://d-nb.info/1076038794/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Chan, Sun Fat. "Advancement in robot programming with specific reference to graphical methods." Thesis, Loughborough University, 1989. https://dspace.lboro.ac.uk/2134/7281.

Full text
Abstract:
This research study is concerned with the derivation of advanced robot programming methods. The methods include the use of proprietary simulation modelling and design software tools for the off-line programming of industrial robots. The study has involved the generation of integration software to facilitate the co-operative operation of these software tools. The three major research themes of "ease of usage", calibration and the integration of product design data have been followed to advance robot programming. The "ease of usage" is concerned with enhancements in the man-machine interface for robot simulation systems in terms of computer assisted solid modelling and computer assisted task generation. Robot simulation models represent an idealised situation, and any off-line robot programs generated from them may contain discrepancies which could seriously affect the programs' performance; calibration techniques have therefore been investigated as a method of overcoming discrepancies between the simulation model and the real world. At the present time, most computer aided design systems operate as isolated islands of computer technology, whereas their product databases should be used to support decision making processes and ultimately facilitate the generation of machine programs. Thus the integration of product design data has been studied as an important step towards truly computer integrated manufacturing. The functionality of the three areas of study has been generalised and forms the basis for recommended enhancements to future robot programming systems.
APA, Harvard, Vancouver, ISO, and other styles
38

Tan, Sze Huey. "Statistical and graphical evidence synthesis methods in health technology assessment." Thesis, University of Leicester, 2016. http://hdl.handle.net/2381/37523.

Full text
Abstract:
This thesis focusses on the challenges relating to clinical- and cost-effectiveness analysis in Health Technology Assessment (HTA). It includes methodological developments, both statistical and presentational, in evidence synthesis aiming to address those challenges. In HTA, analysts often face problems with limited availability of the data required to inform the economic model. This thesis proposes innovative evidence synthesis approaches to address this challenge, illustrated in two examples. Bivariate random-effects meta-analysis (BRMA) and network meta-analysis (NMA) were used to synthesise all available evidence to predict progression-free survival (PFS) in metastatic prostate cancer. This enabled the specification of a three-state Markov model previously limited to two states when PFS was not recorded. In the second example, a scenario in multiple sclerosis is considered where utility data for the trials included in an HTA were not available and external utility data from a single study were used instead. This thesis illustrates how BRMA can be applied to include all available evidence to inform utility estimates for use in a cost-effectiveness analysis. NMA, allowing for a simultaneous and coherent comparison of multiple interventions, is increasingly used in HTA. However, due to the inherent complexity of presenting NMA results, it is important to ease their interpretability. A review of existing methods of presenting NMA results in HTA reports revealed that there is no standardised presentational tool for their reporting. Novel presentational approaches were developed which are presented in this thesis. The original contributions of this thesis are the innovative approaches to incorporating historical data to predict and increase the precision of parameter estimates for cost-effectiveness analysis to better inform health policy decision-making, and three novel graphical tools to aid clear presentation and facilitate interpretation of NMA results. Ultimately, the hope is that the graphical tools developed will be recommended in updated guidance setting the standards for future HTAs.
APA, Harvard, Vancouver, ISO, and other styles
39

Noye, Janet B. (Janet Barbara). "A graphical user interface server for graph algorithm programs." Carleton University Dissertation, Computer Science. Ottawa, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
40

Janošek, Radek. "Výpočet vyhořívání jaderného paliva reaktoru VVER 1000 pomoci programu KENO." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-241979.

Full text
Abstract:
This Master's thesis begins with an introduction to operational nuclear reactors, focusing on the light-water pressurized reactor VVER 1000. It covers the basic technology of the VVER 1000 reactor, with emphasis on the reactor core and the TVSA-T nuclear fuel. A significant part of the thesis deals with the basic concepts of nuclear safety and its methods. The main goal is to create a model of the VVER 1000 reactor that can be used in nuclear burn-up calculations with the KENO code. A part of the thesis therefore explains the statistical Monte Carlo method and the KENO code.
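To illustrate the statistical Monte Carlo idea the abstract refers to, and nothing more (this toy one-dimensional slab-transmission problem is not the KENO code or a reactor model, and the slab thickness and absorption probability are invented), a minimal particle-history simulation might look like this:

```python
import math
import random

def slab_transmission(thickness_mfp, absorb_prob, n_histories=100_000, seed=1):
    """Toy 1-D Monte Carlo: fraction of neutrons that cross a slab.

    thickness_mfp : slab thickness in mean free paths
    absorb_prob   : probability that a collision is an absorption
    Scattering is taken as isotropic in the direction cosine mu.
    """
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_histories):
        x, mu = 0.0, 1.0                     # start at the left face, moving right
        while True:
            s = -math.log(rng.random())      # exponential free-flight length
            x += s * mu
            if x >= thickness_mfp:           # escaped through the far face
                transmitted += 1
                break
            if x <= 0.0:                     # reflected back out the near face
                break
            if rng.random() < absorb_prob:   # absorbed at the collision site
                break
            mu = rng.uniform(-1.0, 1.0)      # isotropic scatter in mu
    return transmitted / n_histories

print(f"transmission fraction ~= {slab_transmission(2.0, 0.3):.4f}")
```

Each neutron history is a random walk, and the estimated transmission fraction converges at a rate of roughly 1/sqrt(N) in the number of histories; managing that statistical trade-off at vastly larger scale is what KENO-style criticality and burn-up codes do.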
APA, Harvard, Vancouver, ISO, and other styles
41

Jazbutis, Gintautas Bronius. "A systematic approach to assessing and extending graphical models of manufacturing." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/16425.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Camacho, Cosio Hernán. "Método de estabilidad para el dimensionamiento de tajeos obtenido mediante el algoritmo Gradient Boosting Machine considerando la incorporación de los esfuerzos activos en minería subterránea." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2020. http://hdl.handle.net/10757/656716.

Full text
Abstract:
Over the last four decades, Mathews' graphical stability method has been an indispensable tool for the dimensioning of stopes, characterized by its cost efficiency and its savings in time and effort. The contributions of several authors to optimizing its performance have produced a series of criteria that address an ever wider range of scenarios. However, with the diversification of mining across different geological contexts and the need to work at greater depths, the graphical stability method has been shown to neglect scenarios with the presence of water and different confinement regimes. For this reason, the present research incorporates such scenarios by means of the Gradient Boosting Machine (GBM) algorithm. To this end, scenarios with different levels of water pressure were simulated and the degree of confinement around the excavations was considered. The resulting model is based on a binary classification criterion, the predicted classes being "stable" and "unstable"; it achieved an AUC value of 0.88, demonstrating excellent predictive capacity. The model also shows advantages over the traditional method, since it adds a component of rigor and generalization. The result is a stability method that incorporates active stresses and offers adequate predictive performance.
Research paper
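As a rough sketch of the modelling approach described above (not the thesis's dataset or tuned model: the features, labelling rule, and hyperparameters below are all invented stand-ins), a gradient-boosting stability classifier evaluated by AUC could be set up with scikit-learn as follows:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-ins for the kinds of inputs the study describes:
# stability number N', hydraulic radius, water pressure, confinement.
X = np.column_stack([
    rng.uniform(0.1, 100.0, n),   # stability number N'
    rng.uniform(1.0, 15.0, n),    # hydraulic radius (m)
    rng.uniform(0.0, 2.0, n),     # water pressure (MPa)
    rng.uniform(0.0, 1.0, n),     # confinement index
])
# Invented labelling rule: larger N' and confinement stabilise,
# larger hydraulic radius and water pressure destabilise.
logit = 0.05 * X[:, 0] - 0.4 * X[:, 1] - 1.5 * X[:, 2] + 2.0 * X[:, 3]
y = (logit + rng.normal(0, 1, n) > 0).astype(int)   # 1 = "stable"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
gbm = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                 max_depth=3, random_state=0)
gbm.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1])
print(f"test AUC = {auc:.2f}")
```

The AUC on a held-out split is the same metric the thesis reports (0.88); on this synthetic data the number is meaningless beyond showing the evaluation pipeline.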
APA, Harvard, Vancouver, ISO, and other styles
43

Bowen, Judith Alyson. "Formal Models and Refinement for Graphical User Interface Design." The University of Waikato, 2008. http://hdl.handle.net/10289/2613.

Full text
Abstract:
Formal approaches to software development require that we correctly describe (or specify) systems in order to prove properties about our proposed solution prior to building it. We must then follow a rigorous process to transform our specification into an implementation to ensure that the properties we have proved are retained. When we design and build the user interfaces of our systems we are similarly keen to ensure that they have certain properties before we build them. For example, do they satisfy the requirements of the user? Are they designed with known good design principles and usability considerations in mind? User-centred design approaches, which incorporate many techniques we may consider informal, seek to address these issues so that the UIs we build are designed around the needs and capabilities of real users. Both formal methods and user-centred design are important and beneficial in the development of underlying system functionality and user interfaces respectively. Given this, we would like to be able to use both approaches in one integrated software development process. Their differences, however, make this a challenging objective. In this thesis we present a solution to this problem by describing models and techniques which provide a bridge between the existing work of user-centred design practitioners and formal methods practitioners, enabling us to incorporate (representations of) informal design artefacts into a formal software development process. We then use these models as the basis for a refinement theory for user interfaces which allows interface designers to retain their informal design methods whilst providing an underlying theory grounded in the trace refinement theory of the Microcharts language.
APA, Harvard, Vancouver, ISO, and other styles
44

Hamilton, David. "An Exploration of Procedural Methods for Motion Design." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/honors/522.

Full text
Abstract:
This research aims to apply procedural techniques, similar to those employed in node-based modeling or compositing software packages, to the motion design workflow in Adobe After Effects. The purpose is to increase efficiency in the motion design process between pre-production and production stages. Systems Theory is referenced as a basis for breaking down problems in terms of inputs and outputs. All of this takes place within the context of a larger motion design project.
APA, Harvard, Vancouver, ISO, and other styles
45

Dutton, Marcus. "Flexible architecture methods for graphics processing." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/43658.

Full text
Abstract:
The FPGA GPU architecture proposed in this thesis was motivated by underserved markets for graphics processing that desire flexibility, long-term device availability, scalability, certifiability, and high reliability. These markets of industrial, medical, and avionics applications often are forced to rely on the latest GPUs that were actually designed for gaming PCs or handheld consumer devices. The architecture for the GPU in this thesis was crafted specifically for an FPGA and therefore takes advantage of its capabilities while also avoiding its limitations. Previous work did not specifically exploit the FPGA's structures and instead used FPGA implementations merely as an integration platform prior to proceeding on to a final ASIC design. The target of an FPGA for this architecture is also important because its flexibility and programmability allow the GPU's performance to be scaled or supplemented to fit unique application requirements. This tailoring of the architecture to specific requirements minimizes power consumption and device cost while still satisfying performance, certification, and device availability requirements. To demonstrate the feasibility of the flexible FPGA GPU architectural concepts, the architecture is applied to an avionics application and analyzed to confirm satisfactory results. The architecture is further validated through the development of extensions to support more comprehensive graphics processing applications. In addition, the breadth of this research is illustrated through its applicability to general-purpose computations and more specifically, scientific visualizations.
APA, Harvard, Vancouver, ISO, and other styles
46

Jones, Graham R. "Accurate radiosity methods for computer graphics." Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311985.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Johnson, Jason K. (Jason Kyle). "Convex relaxation methods for graphical models : Lagrangian and maximum entropy approaches." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45871.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 241-257).
Graphical models provide compact representations of complex probability distributions of many random variables through a collection of potential functions defined on small subsets of these variables. This representation is defined with respect to a graph in which nodes represent random variables and edges represent the interactions among those random variables. Graphical models provide a powerful and flexible approach to many problems in science and engineering, but also present serious challenges owing to the intractability of optimal inference and estimation over general graphs. In this thesis, we consider convex optimization methods to address two central problems that commonly arise for graphical models. First, we consider the problem of determining the most probable configuration, also known as the maximum a posteriori (MAP) estimate, of all variables in a graphical model, conditioned on (possibly noisy) measurements of some variables. This general problem is intractable, so we consider a Lagrangian relaxation (LR) approach to obtain a tractable dual problem. This involves using the Lagrangian decomposition technique to break up an intractable graph into tractable subgraphs, such as small "blocks" of nodes, embedded trees or thin subgraphs. We develop a distributed, iterative algorithm that minimizes the Lagrangian dual function by block coordinate descent. This results in an iterative marginal-matching procedure that enforces consistency among the subgraphs using an adaptation of the well-known iterative scaling algorithm. This approach is developed both for discrete variable and Gaussian graphical models. In discrete models, we also introduce a deterministic annealing procedure, which introduces a temperature parameter to define a smoothed dual function and then gradually reduces the temperature to recover the (non-differentiable) Lagrangian dual. When strong duality holds, we recover the optimal MAP estimate. We show that this occurs for a broad class of "convex decomposable" Gaussian graphical models, which generalizes the "pairwise normalizable" condition known to be important for iterative estimation in Gaussian models. In certain "frustrated" discrete models a duality gap can occur using simple versions of our approach. We consider methods that adaptively enhance the dual formulation, by including more complex subgraphs, so as to reduce the duality gap. In many cases we are able to eliminate the duality gap and obtain the optimal MAP estimate in a tractable manner. We also propose a heuristic method to obtain approximate solutions in cases where there is a duality gap. Second, we consider the problem of learning a graphical model (both the graph and its potential functions) from sample data. We propose the maximum entropy relaxation (MER) method, which is the convex optimization problem of selecting the least informative (maximum entropy) model over an exponential family of graphical models subject to constraints that small subsets of variables should have marginal distributions that are close to the distribution of sample data. We use relative entropy to measure the divergence between marginal probability distributions. We find that MER leads naturally to selection of sparse graphical models. To identify this sparse graph efficiently, we use a "bootstrap" method that constructs the MER solution by solving a sequence of tractable subproblems defined over thin graphs, including new edges at each step to correct for large marginal divergences that violate the MER constraint. The MER problem on each of these subgraphs is efficiently solved using the primal-dual interior point method (implemented so as to take advantage of efficient inference methods for thin graphical models). We also consider a dual formulation of MER that minimizes a convex function of the potentials of the graphical model. This MER dual problem can be interpreted as a robust version of maximum-likelihood parameter estimation, where the MER constraints specify the uncertainty in the sufficient statistics of the model. This also corresponds to a regularized maximum-likelihood approach, in which an information-geometric regularization term favors selection of sparse potential representations. We develop a relaxed version of the iterative scaling method to solve this MER dual problem.
by Jason K. Johnson.
Ph.D.
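The tractable subproblems that the Lagrangian relaxation repeatedly solves are exact MAP computations on trees or thin subgraphs. As a minimal sketch of that building block only (a Viterbi-style max-product pass on a four-node binary chain with invented potentials, not the thesis's full decomposition algorithm):

```python
import itertools
import numpy as np

# MAP on a chain by max-product dynamic programming (log domain).
# Potentials are invented; we maximise the sum of unary + pairwise scores.
unary = np.array([[0.0, 1.0],      # theta_1(x_1)
                  [0.5, 0.0],      # theta_2(x_2)
                  [0.0, 0.7],      # theta_3(x_3)
                  [0.2, 0.0]])     # theta_4(x_4)
pair = np.array([[1.0, 0.0],       # shared attractive coupling theta(x_i, x_{i+1})
                 [0.0, 1.0]])

def chain_map(unary, pair):
    n, k = unary.shape
    msg = np.zeros((n, k))         # msg[i, v]: best score of x_1..x_i with x_i = v
    back = np.zeros((n, k), dtype=int)
    msg[0] = unary[0]
    for i in range(1, n):
        scores = msg[i - 1][:, None] + pair + unary[i][None, :]   # (prev, cur)
        back[i] = scores.argmax(axis=0)   # best predecessor for each current state
        msg[i] = scores.max(axis=0)
    x = [int(msg[-1].argmax())]           # backtrack from the best final state
    for i in range(n - 1, 0, -1):
        x.append(int(back[i][x[-1]]))
    return list(reversed(x)), float(msg[-1].max())

x_map, score = chain_map(unary, pair)
print("MAP:", x_map, "score:", round(score, 3))

# Sanity check against brute-force enumeration of all 2^4 configurations.
def total(x):
    return (sum(unary[i][x[i]] for i in range(len(x)))
            + sum(pair[x[i], x[i + 1]] for i in range(len(x) - 1)))
best = max(total(x) for x in itertools.product([0, 1], repeat=4))
assert abs(score - best) < 1e-9
```

In the LR scheme described above, the unary terms of such a chain would be shifted by Lagrange multipliers at each iteration until the solutions of the overlapping subgraphs agree, at which point the agreed assignment is an optimal MAP estimate.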
APA, Harvard, Vancouver, ISO, and other styles
48

Napieralla, Jonah. "Comparing Graphical Projection Methods at High Degrees of Field of View." Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16664.

Full text
Abstract:
Background. Graphical projection methods define how virtual 3D environments are depicted on 2D monitors. No projection method provides a flawless reproduction, and the look of the resulting projections varies considerably. Field of view is a parameter of these projection methods; it determines the breadth of vision of the virtual camera used in the projection process. Field of view is expressed as a degree value that defines the angle from the left to the right extent of the projection, as seen from the camera. Objectives. The aim of this study was to investigate the perceived quality of high degrees of field of view, using different graphical projection methods. The Perspective, the Panini, and the Stereographic projection methods were evaluated at 110, 140, and 170 degrees of field of view. Methods. To evaluate the perceived quality of the three projection methods at varying degrees of field of view, a user study was conducted in which 24 participants rated 81 tests each. The study was held in a conference room where the participants sat undisturbed and could experience the tests under consistent conditions. The tests took three different usage scenarios into account, presenting scenes in which the camera was still, in which it moved, and in which the participants could control it. Each test was rated separately, one at a time, using every combination of projection method and degree of field of view. Results. The perceived quality of each projection method dropped at an exponential rate relative to the increase in the degree of field of view. The Perspective projection method was always rated the most favorably at 110 degrees of field of view, but unlike the other projections, it was rated much more poorly at higher degrees. The Panini and the Stereographic projections received favorable ratings at up to 140-170 degrees, but the perceived quality of these projection methods varied significantly, depending on the usage scenario and the virtual environment displayed. Conclusions. The study concludes that the Perspective projection method is optimal for use at up to 110 degrees of field of view. At higher degrees of field of view, no consistently optimal choice remains, as the perceived quality of the Panini and the Stereographic projection methods varies significantly depending on the usage scenario. As such, the perceived quality becomes a function of the graphical projection method, the degree of field of view, the usage scenario, and the virtual environment displayed.
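The three mappings compared here can be summarised by their horizontal screen offset as a function of the view angle θ: rectilinear perspective uses tan θ, stereographic uses 2 tan(θ/2), and one common form of the cylindrical Panini mapping with distance parameter d uses (d+1) sin θ / (d + cos θ), which coincides with the stereographic curve horizontally when d = 1. A small sketch (unit focal length assumed; the Panini formula is the common form just stated, not necessarily the exact variant used in the study) makes the edge stretching at high field of view explicit:

```python
import math

def perspective(theta):
    """Rectilinear (perspective) projection: x = tan(theta)."""
    return math.tan(theta)

def stereographic(theta):
    """Stereographic projection: x = 2 * tan(theta / 2)."""
    return 2.0 * math.tan(theta / 2.0)

def panini(theta, d=1.0):
    """One common form of the cylindrical Panini mapping with
    distance parameter d; d = 1 matches stereographic horizontally."""
    return (d + 1.0) * math.sin(theta) / (d + math.cos(theta))

for fov_deg in (110, 140, 170):
    half = math.radians(fov_deg / 2.0)   # half-angle at the image edge
    print(f"FOV {fov_deg:3d} deg: edge offset -> "
          f"perspective {perspective(half):7.2f}, "
          f"panini {panini(half):5.2f}, "
          f"stereographic {stereographic(half):5.2f}")
```

At a 170-degree field of view the perspective offset at the image edge is roughly six times the stereographic one, which is consistent with the collapse in perceived quality the study reports for the Perspective method beyond 110 degrees.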
APA, Harvard, Vancouver, ISO, and other styles
49

Gaylin, Kenneth B. "An investigation of information display variables utilizing computer-generated graphics for decision support systems." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/53070.

Full text
Abstract:
The effectiveness of selected computer-generated graphics display variables was examined in a mixed-factors factorial experiment using thirty-two subjects. All subjects performed four different graph reading tasks consisting of point-reading, point-comparison, trend-reading, and trend-comparison. In each task, line, point, bar, and three-dimensional bar graphs were investigated under two levels of task complexity, and two levels of coding (color and black-and-white). The effects of these independent variables on measures of task performance errors, time to complete the task, subjective mental workload, and preference ratings were obtained in real-time by a microcomputer control program. Separate MANOVA analyses of these measures for each task indicated significant effects of graph-type for the point-reading task, main effects of complexity and coding for all tasks, and a graph-by-coding interaction for the point-reading, point-comparison, and trend-reading tasks. Subsequent ANOVA analyses showed significance for these effects across several of the dependent measures which are specified in the thesis. Recommendations are made for selecting the most effective graph and coding combinations for the particular types of graph-interpretation tasks and complexity levels encountered.
Master of Science
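As a sketch of the style of analysis reported (a plain between-subjects two-way ANOVA on synthetic response times, not the study's mixed-factors repeated-measures MANOVA; the factor effects, cell sizes, and noise level are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
graphs = ["line", "point", "bar", "bar3d"]
codings = ["color", "bw"]

rows = []
for g in graphs:
    for c in codings:
        for _ in range(8):               # invented: 8 observations per cell
            # Invented effects: slower times for later graph types and for
            # black-and-white coding, plus Gaussian noise.
            t = 5.0 + graphs.index(g) * 0.4 + (0.8 if c == "bw" else 0.0)
            rows.append({"graph": g, "coding": c,
                         "time": t + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

# Two-way ANOVA with a graph-by-coding interaction term.
model = smf.ols("time ~ C(graph) * C(coding)", data=df).fit()
print(anova_lm(model, typ=2))
```

The `C(graph) * C(coding)` formula expands to both main effects plus their interaction, which is the term corresponding to the graph-by-coding interaction the thesis reports.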
APA, Harvard, Vancouver, ISO, and other styles
50

Hofbauer, Pamela S. Mooney Edward S. "Characterizing high school students' understanding of the purpose of graphical representations." Normal, Ill. : Illinois State University, 2007. http://proquest.umi.com/pqdweb?index=0&did=1414114601&SrchMode=1&sid=6&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1207664408&clientId=43838.

Full text
Abstract:
Thesis (Ph. D.)--Illinois State University, 2007.
Title from title page screen, viewed on April 8, 2008. Dissertation Committee: Edward S. Mooney (chair), Cynthia W. Langrall, Sherry L. Meier, Norma C. Presmeg. Includes bibliographical references (leaves 112-121) and abstract. Also available in print.
APA, Harvard, Vancouver, ISO, and other styles