
Dissertations / Theses on the topic 'Prediction theory Graphic methods'


Consult the top 50 dissertations / theses for your research on the topic 'Prediction theory Graphic methods.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Ketkar, Nikhil S. "Empirical comparison of graph classification and regression algorithms." Pullman, Wash. : Washington State University, 2009. http://www.dissertations.wsu.edu/Dissertations/Spring2009/n_ketkar_042409.pdf.

Abstract:
Thesis (Ph. D.)--Washington State University, May 2009.
Title from PDF title page (viewed on June 3, 2009). "School of Electrical Engineering and Computer Science." Includes bibliographical references (p. 101-108).
2

Corbett, Dan R. "Unification and constraints over conceptual structures." Title page, contents and summary only, 2000. http://web4.library.adelaide.edu.au/theses/09PH/09phc7889.pdf.

Abstract:
Bibliography: leaves 150-161. This thesis addresses two areas in the field of conceptual structures. The first is the unification of conceptual graphs, and the consequent work in projection and in type hierarchies... The second area of investigation is the definition of constraints, especially real-value constraints on the concept referents, with particular attention to handling constraints during the unification of conceptual graphs.
3

Abras, Jennifer N. "Enhancement of aeroelastic rotor airload prediction methods." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28182.

Abstract:
Thesis (M. S.)--Aerospace Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Smith, Marilyn; Committee Member: Bauchau, Olivier; Committee Member: Costello, Mark; Committee Member: Moulton, Marvin; Committee Member: Ruffin, Stephen.
4

Dey, Sanjoy. "Structural properties of visibility and weak visibility graphs." Virtual Press, 1997. http://liblink.bsu.edu/uhtbin/catkey/1048394.

Abstract:
Given a finite set S of n nonintersecting line segments with no three end points collinear, the segment end point visibility graph is defined as the graph whose vertices are the end points of the line segments in S and two vertices are adjacent if the straight line segment joining two end points does not intersect any element of S, or if they are end points of the same segment. Segment end point visibility graphs have a wide variety of applications in VLSI circuit design, study of art gallery problems, and other areas of computational geometry. This thesis contains a survey of the important results that are currently known regarding the characterization of these graphs. Also a weak visibility dual of a segment end point visibility graph is defined and some structural properties of such graphs are presented. Some open problems and questions related to the characterization of weak visibility graphs are also discussed.
Department of Mathematical Sciences
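The construction in the abstract above is direct to compute for small inputs. Below is a minimal sketch (not code from the thesis) that builds the segment end point visibility graph straight from its definition, using a strict orientation test; under the stated "no three end points collinear" assumption, only a proper interior crossing can block a line of sight.

```python
from itertools import combinations

def orient(o, a, b):
    # Sign of the cross product (a - o) x (b - o): orientation of the triple.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def properly_cross(p, q, r, s):
    # True when the open segments pq and rs cross at an interior point.
    d1, d2 = orient(p, q, r), orient(p, q, s)
    d3, d4 = orient(r, s, p), orient(r, s, q)
    return d1 * d2 < 0 and d3 * d4 < 0

def visibility_graph(segments):
    # Vertices are the 2n endpoints; endpoint i belongs to segment i // 2.
    pts = [p for seg in segments for p in seg]
    edges = set()
    for i, j in combinations(range(len(pts)), 2):
        if i // 2 == j // 2:
            edges.add((i, j))  # endpoints of the same segment are adjacent
        elif not any(properly_cross(pts[i], pts[j], *seg) for seg in segments):
            edges.add((i, j))  # no segment of S blocks the line of sight
    return edges
```

For two parallel unit segments every endpoint sees every other, giving the complete graph on four vertices; placing a third segment between two outer ones blocks the diagonal sight lines, which the strict inequalities detect while sight lines that merely touch a segment at one of their own endpoints pass through.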
5

馮榮錦 and Wing-kam Tony Fung. "Analysis of outliers using graphical and quasi-Bayesian methods." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1987. http://hub.hku.hk/bib/B31230842.

6

Khouzam, Nelly. "A new class of brittle graphs /." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66048.

7

Armstrong, Helen. "Bayesian estimation of decomposable Gaussian graphical models." Thesis, University of New South Wales, School of Mathematics, 2005. http://handle.unsw.edu.au/1959.4/24295.

Abstract:
This thesis explains to statisticians what graphical models are and how to use them for statistical inference; in particular, how to use decomposable graphical models for efficient inference in covariance selection and multivariate regression problems. The first aim of the thesis is to show that decomposable graphical models are worth using within a Bayesian framework. The second aim is to make the techniques of graphical models fully accessible to statisticians. To achieve these aims the thesis makes a number of statistical contributions. First, it proposes a new prior for decomposable graphs and a simulation methodology for estimating this prior. Second, it proposes a number of Markov chain Monte Carlo sampling schemes based on graphical techniques. The thesis also presents some new graphical results, and some existing results are reproved to make them more readily understood. Appendix 8.1 contains all the programs written to carry out the inference discussed in the thesis, together with both a summary of the theory on which they are based and a line by line description of how each routine works.
8

Dibble, Emily. "The interpretation of graphs and tables /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/9101.

9

Hossain, Mahmud Shahriar. "Apriori approach to graph-based clustering of text documents." Thesis, Montana State University, 2008. http://etd.lib.montana.edu/etd/2008/hossain/HossainM0508.pdf.

Abstract:
This thesis report introduces a new technique of document clustering based on frequent senses. The developed system, named GDClust (Graph-Based Document Clustering) [1], works with frequent senses rather than dealing with frequent keywords used in traditional text mining techniques. GDClust presents text documents as hierarchical document-graphs and uses an Apriori paradigm to find the frequent subgraphs, which reflect frequent senses. Discovered frequent subgraphs are then utilized to generate accurate sense-based document clusters. We propose a novel multilevel Gaussian minimum support strategy for candidate subgraph generation. Additionally, we introduce another novel mechanism called Subgraph-Extension mining that reduces the number of candidates and overhead imposed by the traditional Apriori-based candidate generation mechanism. GDClust utilizes an English language thesaurus (WordNet [2]) to construct document-graphs and exploits graph-based data mining techniques for sense discovery and clustering. It is an automated system and requires minimal human interaction for the clustering purpose.
10

Coetzer, Audrey. "Criticality of the lower domination parameters of graphs." Thesis, Link to the online version, 2007. http://hdl.handle.net/10019/1051.

11

Turkmani, A. M. D. "Empirical and theoretical methods of BER prediction in binary FSK communication systems subjected to impulsive noise." Thesis, University of Liverpool, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333098.

12

Dolan, David M. "Spatial statistics using quasi-likelihood methods with applications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0029/NQ66201.pdf.

13

Yucel, Burak. "Performance Prediction Of Horizontal Axis Wind Turbines Using Vortex Theory." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605715/index.pdf.

Abstract:
Performance prediction of HAWTs is important because it gives an idea of the power production of a HAWT in off-design conditions without performing any experiments. Since experiments in fluid mechanics are difficult to afford, developing predictive models is very beneficial. Several models on this subject have been developed using miscellaneous methods; in this study, "vortex theory" is used. Some basic 3D aerodynamics is discussed first so that the reader can follow the main subject of the study. After that, performance prediction of constant-speed, stall-controlled HAWTs is discussed. To assess how closely this theory agrees with experiment, the NREL "Combined Experiment Rotor" was considered as a sample. Performances obtained by AEROPOWER, written in a combination of Visual Basic 6.0 and Excel, were compared with experimental results for different wind velocities. Acceptable results were obtained for wind speeds not much different from the design wind speed. For relatively lower wind speeds, due to "turbulence", and for relatively higher wind speeds, due to "stall", the program did not give good results; in the first case it gave no numerical result at all. Power curves were obtained with AEROPOWER by changing only the setting angle, and by changing only the rotor angular speed. It was seen that both the setting angle and the rotor rpm values influence the turbine power output significantly.
14

Seth, Ajay. "A Predictive Control Method for Human Upper-Limb Motion: Graph-Theoretic Modelling, Dynamic Optimization, and Experimental Investigations." Thesis, University of Waterloo, 2000. http://hdl.handle.net/10012/787.

Abstract:
Optimal control methods are applied to mechanical models in order to predict the control strategies in human arm movements. Optimality criteria are used to determine unique controls for a biomechanical model of the human upper-limb with redundant actuators. The motivation for this thesis is to provide a non-task-specific method of motion prediction as a tool for movement researchers and for controlling human models within virtual prototyping environments. The current strategy is based on determining the muscle activation levels (control signals) necessary to perform a task that optimizes several physical determinants of the model such as muscular and joint stresses, as well as performance timing. Currently, the initial and final location, orientation, and velocity of the hand define the desired task. Several models of the human arm were generated using a graph-theoretical method in order to take advantage of similar system topology through the evolution of arm models. Within this framework, muscles were modelled as non-linear actuator components acting between origin and insertion points on rigid body segments. Activation levels of the muscle actuators are considered the control inputs to the arm model. Optimization of the activation levels is performed via a hybrid genetic algorithm (GA) and a sequential quadratic programming (SQP) technique, which provides a globally optimal solution without sacrificing numerical precision, unlike traditional genetic algorithms. Advantages of the underlying genetic algorithm approach are that it does not require any prior knowledge of what might be a 'good' approximation in order for the method to converge, and it enables several objectives to be included in the evaluation of the fitness function. Results indicate that this approach can predict optimal strategies when compared to benchmark minimum-time maneuvers of a robot manipulator. 
The formulation and integration of the aforementioned components into a working model and the simulation of reaching and lifting tasks represents the bulk of the thesis. Results are compared to motion data collected in the laboratory from a test subject performing the same tasks. Discrepancies in the results are primarily due to model fidelity. However, more complex models are not evaluated due to the additional computational time required. The theoretical approach provides an excellent foundation, but further work is required to increase the computational efficiency of the numerical implementation before proceeding to more complex models.
15

Li, Mingrui. "On the size of induced subgraphs of hypercubes and a graphical user interface to graph theory." Virtual Press, 1993. http://liblink.bsu.edu/uhtbin/catkey/879847.

Abstract:
The hypercube is one of the most versatile and efficient networks yet discovered for parallel computation. It is well suited for both special-purpose and general-purpose tasks, and it can efficiently simulate many other networks of the same size. The size of subgraphs can be used to estimate the efficiency of communications in hypercube computer systems. The thesis investigates induced subgraphs of a hypercube, discusses sizes of subgraphs, and provides a formula giving bounds on the size of any subgraph of the hypercube. The concepts of spanning graphs and line graphs are useful for studying properties of graphs. An MS Windows-based graphical system is developed which allows the creation and display of graphs and of their spanning graphs, line graphs, and super line graphs.
Department of Computer Science
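The quantities studied in the abstract above are easy to experiment with numerically. The sketch below (illustrative, not from the thesis) labels the vertices of Q_n by integers so that adjacency is exactly Hamming distance one, and counts the edges induced by a vertex subset; a classical bound states that m vertices induce at most (m/2)·log2(m) edges, with equality on subcubes.

```python
from itertools import combinations

def induced_edge_count(vertices):
    # Q_n vertices as integer bit strings; two are adjacent when their
    # labels differ in exactly one bit (Hamming distance 1).
    return sum(1 for u, v in combinations(set(vertices), 2)
               if bin(u ^ v).count("1") == 1)
```

For example, the labels 0..3 induce a 4-cycle inside Q_3 (4 = (4/2)·log2 4 edges, meeting the bound), while all eight labels recover the 12 edges of Q_3 itself.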
16

Kwag, Jae-Hwan. "A comparative study of LP methods in MR spectral analysis /." free to MU campus, to others for purchase, 1999. http://wwwlib.umi.com/cr/mo/fullcit?p9962536.

17

Laury, Marie L. "Accurate and Reliable Prediction of Energetic and Spectroscopic Properties Via Electronic Structure Methods." Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc500071/.

Abstract:
Computational chemistry has led to the greater understanding of the molecular world, from the interaction of molecules, to the composition of molecular species and materials. Of the families of computational chemistry approaches available, the main families of electronic structure methods that are capable of accurate and/or reliable predictions of energetic, structural, and spectroscopic properties are ab initio methods and density functional theory (DFT). The focus of this dissertation is to improve the accuracy of predictions and computational efficiency (with respect to memory, disk space, and computer processing time) of some computational chemistry methods, which, in turn, can extend the size of molecule that can be addressed, and, for other methods, DFT, in particular, gain greater insight into which DFT methods are more reliable than others. Much, though not all, of the focus of this dissertation is upon transition metal species – species for which much less method development has been targeted or insight about method performance has been well established. The ab initio approach that has been targeted in this work is the correlation consistent composite approach (ccCA), which has proven to be a robust, ab initio computational method for main group and first row transition metal-containing molecules yielding, on average, accurate thermodynamic properties, i.e., within 1 kcal/mol of experiment for main group species and within 3 kcal/mol of experiment for first row transition metal molecules. In order to make ccCA applicable to systems containing any element from the periodic table, development of the method for second row transition metals and heavier elements, including lower p-block (5p and 6p) elements was pursued. The resulting method, the relativistic pseudopotential variant of ccCA (rp-ccCA), and its application are detailed for second row transition metals and lower p-block elements. 
Because of the computational cost of ab initio methods, DFT is a popular choice for the study of transition metals. Despite this, the most reliable density functionals for the prediction of energetic properties (e.g. enthalpy of formation, ionization potential, electron affinity, dissociation energy) of transition metal species, have not been clearly identified. The examination of DFT performance for first and second row transition metal thermochemistry (i.e., enthalpies of formation) was conducted and density functionals for the study of these species were identified. And, finally, to address the accuracy of spectroscopic and energetic properties, improvements for a series of density functionals have been established. In both DFT and ab initio methods, the harmonic approximation is typically employed. This neglect of anharmonic effects, such as those related to vibrational properties (e.g. zero-point vibrational energies, thermal contributions to enthalpy and entropy) of molecules, generally results in computational predictions that are not in agreement with experiment. To correct for the neglect of anharmonicity, scale factors can be applied to these vibrational properties, resulting in better alignment with experimental observations. Scale factors for DFT in conjunction with both the correlation and polarization consistent basis sets have been developed in this work.
18

Gatz, Philip L. Jr. "A comparison of three prediction based methods of choosing the ridge regression parameter k." Thesis, Virginia Tech, 1985. http://hdl.handle.net/10919/45724.

Abstract:
A solution to the regression model y = Xβ + ε is usually obtained using ordinary least squares. However, when multicollinearity exists among the regressor variables, many qualities of this solution deteriorate, including the variances, the length, the stability, and the prediction capabilities of the solution. Ridge regression was introduced to combat this deterioration (Hoerl and Kennard, 1970a); the method biases the solution by a parameter k. Many methods have been developed to determine an optimal value of k. This study investigates three little-used methods of determining k: the PRESS statistic, Mallows' C_k statistic, and the DF-trace. The study compared the prediction capabilities of the three methods, using a Monte Carlo experiment, on data containing various levels of both collinearity and leverage.
Master of Science
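The PRESS statistic named in the abstract above has a convenient shortcut for ridge regression: for a linear smoother ŷ = H(k)y, the leave-one-out residuals are r_i/(1 − h_ii), so no model needs to be refit. A hedged NumPy sketch of choosing k by minimizing PRESS over a grid (the data here are synthetic; the thesis's Monte Carlo design is not reproduced):

```python
import numpy as np

def press(X, y, k):
    # Ridge hat matrix H(k) = X (X'X + kI)^{-1} X'.
    # PRESS sums squared leave-one-out residuals r_i / (1 - h_ii).
    H = X @ np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T)
    r = y - H @ y
    return float(np.sum((r / (1.0 - np.diag(H))) ** 2))

def choose_k(X, y, grid):
    # Pick the ridge parameter minimizing PRESS over a candidate grid.
    return min(grid, key=lambda k: press(X, y, k))
```

With two nearly identical regressors (strong collinearity), scanning a small grid such as [0.0, 0.01, 0.1, 1.0, 10.0] and keeping the PRESS-minimizing k is the whole procedure.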
19

Islam, Mustafa R. "A hypertext graph theory reference system." Virtual Press, 1993. http://liblink.bsu.edu/uhtbin/catkey/879844.

Abstract:
The G-Net system is being developed by the members of the G-Net research group under the supervision of Dr. K. Jay Bagga. The principal objective of the G-Net system is to provide an integrated tool for dealing with various aspects of graph theory. The G-Net system is divided into two parts: GETS (Graph theory Experiments Tool Set) will provide a set of tools to experiment with graph theory, and HYGRES (HYpertext Graph theory Reference Service), the second subcomponent of the G-Net system, will aid graph theory study and research. In this research a hypertext application is built to present graph theory concepts, graph models, and algorithms. In other words, HYGRES (Guide version) provides the hypertext facilities for organizing a graph theory database in a natural and interactive way. A hypertext application development tool, called Guide, is used to implement this version of HYGRES. This project integrates the existing version of GETS so that it can also provide important services to HYGRES. The motivation behind this project is to study the initial criteria for developing a hypertext system, which can be used for future development of a stand-alone version of the G-Net system.
Department of Computer Science
20

Wolman, Stacey D. "The effects of biographical data on the prediction of domain knowledge." Thesis, Available online, Georgia Institute of Technology, 2005, 2005. http://etd.gatech.edu/theses/available/etd-08022005-140654/.

Abstract:
Thesis (M. S.)--Psychology, Georgia Institute of Technology, 2006.
Dr. Phillip L. Ackerman, Committee Chair ; Dr. Ruth Kanfer, Committee Co-Chair ; Dr. Lawrence James, Committee Member. Includes bibliographical references.
21

Tandan, Isabelle, and Erika Goteman. "Bank Customer Churn Prediction : A comparison between classification and evaluation methods." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-411918.

Abstract:
This study aims to assess which of three supervised statistical learning methods (random forest, logistic regression, or K-nearest neighbor) is best at predicting bank customer churn. Additionally, the study evaluates which cross-validation approach, k-fold cross-validation or leave-one-out cross-validation, yields the most reliable results. Predicting customer churn has increased in popularity since new technology, regulation, and changed demand have led to increased competition among banks; with all the more reason, banks acknowledge the importance of maintaining their customer base. The findings of this study are that an unrestricted random forest model estimated using k-fold cross-validation is preferable in terms of performance measurements, computational efficiency, and theory. Although k-fold cross-validation and leave-one-out cross-validation yield similar results, k-fold cross-validation is preferred due to its computational advantages. For future research, methods that generate models with both good interpretability and high predictability would be beneficial, in order to learn both which customers end their engagement and why. Moreover, it would be interesting to analyze at which dataset size leave-one-out cross-validation and k-fold cross-validation yield the same results.
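Because leave-one-out cross-validation is just k-fold cross-validation with as many folds as observations, the comparison in the abstract above needs only one routine. A small sketch on synthetic two-class data with a k-nearest-neighbor classifier (illustrative only; the bank churn data are not reproduced here):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    # Majority vote among the k nearest training points (Euclidean distance).
    d = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (y_train[nearest].mean(axis=1) >= 0.5).astype(int)

def cv_accuracy(X, y, n_folds):
    # n_folds == len(y) makes this leave-one-out cross-validation.
    n = len(y)
    correct = 0
    for fold in np.array_split(np.arange(n), n_folds):
        train = np.ones(n, dtype=bool)
        train[fold] = False
        correct += int((knn_predict(X[train], y[train], X[fold]) == y[fold]).sum())
    return correct / n
```

On well-separated classes the two estimates agree closely, consistent with the thesis's observation that k-fold and leave-one-out yield similar results while k-fold is far cheaper (10 fits versus n fits).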
22

Yaakob, M. R. "Bayesian reliability analysis and prediction : Techniques for reliability analysis and prediction using a combination of Bayesian methods and information theory together with appropriate computer programs." Thesis, University of Bradford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.374917.

23

Min, Suhong. "Causes and consequences of low self-control: Empirical tests of the general theory of crime." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186809.

Abstract:
This study operationalized and empirically tested the general propositions of Gottfredson and Hirschi's general theory of crime (1990). Specifically, the core concept of the theory, self-control, is operationalized using two data sets, the Richmond Youth Project and the Cambridge Study in Delinquent Development, and tested using criteria of reliability and validity. In this part of the study, a methodological question focuses on the pattern of validity change across types of data, namely cross-sectional and longitudinal data. In the following tests, causes and consequences of low self-control are tested using Richmond Youth Project data. Child rearing as early socialization and individual traits are tested as sources of self-control. Then the measure of self-control is related to crime, delinquency, and analogous behaviors that are, according to the theory, manifestations of low self-control. A research question here focuses on the generality of self-control theory. Overall, the test results support the claims of the general theory of crime. Findings from the validity tests of the self-control index show theoretically expected relations with important individual variables such as gender, race, and delinquent status. In particular, findings from the two differently designed data sets are very similar. Test results also show that boys low on self-control are more likely than others to have committed crime, delinquency, and various analogous behaviors. One possible research problem based on the theoretical assumption was also tested and empirically supported. The theory implies that respondents low on self-control are more likely than others to fail to answer questions in a self-report survey. Empirical tests support this theoretical implication, revealing that respondents dropped from the index due to missing data are more likely than others to be delinquents. Further research implications are also discussed.
24

Hanlon, Sebastien, and University of Lethbridge Faculty of Arts and Science. "Visualizing three-dimensional graph drawings." Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Arts and Science, 2006, 2006. http://hdl.handle.net/10133/348.

Abstract:
The GLuskap system for interactive three-dimensional graph drawing applies techniques of scientific visualization and interactive systems to the construction, display, and analysis of graph drawings. Important features of the system include support for large-screen stereographic 3D display with immersive head-tracking and motion-tracked interactive 3D wand control. A distributed rendering architecture contributes to the portability of the system, with user control performed on a laptop computer without specialized graphics hardware. An interface for implementing graph drawing layout and analysis algorithms in the Python programming language is also provided. This thesis describes comprehensively the work on the system by the author—this work includes the design and implementation of the major features described above. Further directions for continued development and research in cognitive tools for graph drawing research are also suggested.
viii, 110 leaves : ill. (some col.) ; 29 cm.
25

Nickle, Elspeth J., and University of Lethbridge Faculty of Arts and Science. "Classes of arrangement graphs in three dimensions." Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Arts and Science, 2005, 2005. http://hdl.handle.net/10133/632.

Abstract:
A 3D arrangement graph G is the abstract graph induced by an arrangement of planes in general position where the intersection of any two planes forms a line of intersection and an intersection of three planes creates a point. The properties of three classes of arrangement graphs — four, five and six planes — are investigated. For graphs induced from six planes, specialized methods were developed to ensure all possible graphs were discovered. The main results are: the number of 3D arrangement graphs induced by four, five and six planes are one, one and 43 respectively; the three classes are Hamiltonian; and the 3D arrangement graphs created from four and five planes are planar but none of the graphs created from six planes are planar.
x, 89 leaves : ill. (some col.) ; 29 cm
26

Schiltz, Gary. "Representation of knowledge using Sowa's conceptual graphs : an implementation of a set of tools." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9951.

27

Sawant, Vivek Manohar. "A hypertext application and system for G-net and the complementary relationship between graph theory and hypertext." Virtual Press, 1993. http://liblink.bsu.edu/uhtbin/catkey/879843.

Abstract:
Many areas of computer science use graph theory and thus benefit from research in graph theory. Some of the important activities involved in graph theory work are the study of concepts, algorithm development, and theorem proving. These can be facilitated by providing computerized tools for graph drawing, algorithm animation, and access to graph theory information bases. Project G-Net is aimed at developing a set of such tools. Project G-Net has chosen to provide the tools in hypertext form, based on an analysis of users' requirements. The project is presently developing a hypertext application and a hypertext system for providing the above set of tools. In the process of this development, various issues pertaining to hypertext authoring, hypertext usability, and the application of graph theory to hypertext are being explored. The focus of this thesis is on proving that the hypertext approach is the most appropriate for realizing the goals of the G-Net project. The author was involved in the research that went into the analysis of requirements, the design of the hypertext application and system, and the investigation of the complementary relationship between graph theory and hypertext.
Department of Computer Science
28

Dion-Dallaire, Andrée-Anne. "A Framework for Mesh Refinement Suitable for Finite-Volume and Discontinuous-Galerkin Schemes with Application to Multiphase Flow Prediction." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42204.

Abstract:
Modelling multiphase flow, more specifically particle-laden flow, poses multiple challenges. These difficulties are heightened when the particles are differentiated by a set of “internal” variables, such as size or temperature. Traditional treatments of such flows can be classified in two main categories, Lagrangian and Eulerian methods. The former approaches are highly accurate but can also lead to extremely expensive computations and challenges to load balancing on parallel machines. In contrast, the Eulerian models offer the promise of less expensive computations but often introduce modelling artifacts and can become more complicated and expensive when a large number of internal variables are treated. Recently, a new model was proposed to treat such situations. It extends the ten-moment Gaussian model for viscous gases to the treatment of a dilute particle phase with an arbitrary number of internal variables. In its initial application, the only internal variable chosen for the particle phase was the particle diameter. This new polydisperse Gaussian model (PGM) comprises 15 equations, has an eigensystem that can be expressed in closed form and also possesses a convex entropy. Previously, this model has been tested in one dimension. The PGM was developed with the detonation of radiological dispersal devices (RDD) as an immediate application. The detonation of RDDs poses many numerical challenges, namely the wide range of spatial and temporal scales as well as the high computational costs to accurately resolve solutions. In order to address these issues, the goal of this current project is to develop a block-based adaptive mesh refinement (AMR) implementation that can be used in conjunction with a parallel computer. Another goal of this project is to obtain the first three-dimensional results for the PGM. In this thesis, the kinetic theory of gases underlying the development of the PGM is studied. 
Different numerical schemes and adaptive mesh refinement methods are described. The new block-based adaptive mesh refinement algorithm is presented. Finally, results for different flow problems using the new AMR algorithm are shown, as well as the first three-dimensional results for the PGM.
29

Gogonel, Adriana Geanina. "Statistical Post-Processing Methods And Their Implementation On The Ensemble Prediction Systems For Forecasting Temperature In The Use Of The French Electric Consumption." Phd thesis, Université René Descartes - Paris V, 2012. http://tel.archives-ouvertes.fr/tel-00798576.

Full text
Abstract:
The objective of this thesis is to study new statistical methods for correcting temperature predictions that may be implemented on the ensemble prediction system (EPS) of Meteo France, so as to improve its use for electric system management at EDF France. The EPS of Meteo France we are working on contains 51 members (forecasts by time-step) and gives temperature predictions for 14 days. The thesis contains three parts: in the first one we present the EPS, implement two statistical methods improving the accuracy or the spread of the EPS, and introduce criteria for comparing results. In the second part we introduce extreme value theory and the mixture models we use to combine the model built in the first part with models for fitting the distribution tails. In the third part we introduce quantile regression as another way of studying the tails of the distribution.
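As a rough illustration of the third part's idea, a tail correction can be derived from the empirical quantile of past forecast errors and judged by the pinball (quantile) loss; the synthetic observations and the low-bias forecast model below are invented for the sketch, not taken from the thesis.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss used to assess a tau-quantile forecast."""
    d = y_true - y_pred
    return np.mean(np.maximum(tau * d, (tau - 1.0) * d))

def quantile_correction(errors, tau):
    """Additive correction so the corrected forecast matches the
    empirical tau-quantile of past forecast errors (a crude tail fix)."""
    return np.quantile(errors, tau)

rng = np.random.default_rng(0)
obs = rng.normal(15.0, 3.0, 1000)           # synthetic "observed" temperatures
fcst = obs - rng.gamma(2.0, 1.0, 1000)      # forecasts biased low
corr = quantile_correction(obs - fcst, 0.9)
loss_before = pinball_loss(obs, fcst, 0.9)
loss_after = pinball_loss(obs, fcst + corr, 0.9)
```

The corrected forecast scores a strictly lower pinball loss at the 0.9 quantile, which is the sense in which a quantile-based correction "fixes" the upper tail.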
APA, Harvard, Vancouver, ISO, and other styles
30

Coyle, Jesse Aaron. "Optimization of nuclear, radiological, biological, and chemical terrorism incidence models through the use of simulated annealing Monte Carlo and iterative methods." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43599.

Full text
Abstract:
A random search optimization method based on an analogous process for the slow cooling of metals is explored and used to find the optimum solution for a number of regression models that analyze nuclear, radiological, biological, and chemical terrorism targets. A non-parametric simulation based on historical data is also explored. Simulated series of 30 years and a 30-year extrapolation of historical data are provided. The inclusion of independent variables used in the regression analysis is based on existing work in the reviewed literature. CBRN terrorism data is collected from both the Monterey Institute's Weapons of Mass Destruction Terrorism Database and the START Global Terrorism Database. Building models similar to those found in the literature and running them against CBRN terrorism incidence data determines whether conventional terrorism indicator variables are also significant predictors of CBRN terrorism targets. The negative binomial model was determined to be the best regression model available for the data analysis. Two general types of models are developed: an economic development model and a political risk model. From the economic development model we find national GDP, GDP per capita, trade openness, and democracy to be significant indicators of CBRN terrorism targets. Additionally, from the political risk model we find corrupt, stable, and democratic regimes more likely to experience a CBRN event. We do not find language/religious fractionalization to be a significant predictive variable. Similarly, we do not find ethnic tensions, involvement in external conflict, or a military government to have significant predictive value.
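A minimal sketch of the simulated-annealing idea the abstract describes; the objective function, Gaussian proposal, and geometric cooling schedule are illustrative assumptions, not the thesis's actual setup.

```python
import math
import random

def simulated_annealing(f, x0, t0=10.0, cooling=0.995, steps=5000, seed=1):
    """Minimise f by the Metropolis rule with geometric cooling,
    analogous to the slow cooling of metals mentioned above."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 1.0)
        fc = f(cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# A bumpy objective whose global minimum sits near x = 2.
bumpy = lambda x: (x - 2.0) ** 2 + 2.0 * math.sin(5.0 * x) + 2.0
x_best, f_best = simulated_annealing(bumpy, x0=-8.0)
```

Early high temperature lets the walker escape local minima; tracking the best visited point makes the final answer insensitive to where the walk happens to end.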
APA, Harvard, Vancouver, ISO, and other styles
31

Hsieh, Chao-Ho. "Implementation of graph manipulation under X Window system environment." Virtual Press, 1992. http://liblink.bsu.edu/uhtbin/catkey/834634.

Full text
Abstract:
In graph theory, graphs are mathematical objects that can be used to model networks, data structures, process scheduling, computations and a variety of other systems where the relations between the objects in the system play a dominant role. We will now consider graphs as mathematically self-contained units with rich structure and comprehensive theory; as models for many phenomena, particularly those arising in computer systems; and as structures which can be processed by a variety of sophisticated and interesting algorithms. For graph theory presentation, we need a very good graphical user interface (GUI) to approach the goal. The X Window system is ideally suited for such a purpose. This package program is based on the X Window system environment. With this package, we can manipulate graphs by special functions which can put nodes, put edges, delete nodes, delete edges, change the whole graph size, move graph location, and modify edge weights.
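The GUI layer is tied to X Window, but the graph operations the abstract lists can be sketched independently of any toolkit; the class and method names below are hypothetical, not taken from the package.

```python
class EditableGraph:
    """Minimal weighted undirected graph supporting the manipulation
    operations listed above (put/delete node, put/delete edge,
    modify edge weight); the drawing layer is omitted."""

    def __init__(self):
        self.adj = {}                      # node -> {neighbour: weight}

    def put_node(self, v):
        self.adj.setdefault(v, {})

    def put_edge(self, u, v, weight=1.0):
        self.put_node(u)
        self.put_node(v)
        self.adj[u][v] = weight
        self.adj[v][u] = weight            # undirected: store both directions

    def delete_edge(self, u, v):
        self.adj[u].pop(v, None)
        self.adj[v].pop(u, None)

    def delete_node(self, v):
        # Remove v and every edge incident to it.
        for nbr in list(self.adj.pop(v, {})):
            self.adj[nbr].pop(v, None)

    def modify_weight(self, u, v, weight):
        self.adj[u][v] = weight
        self.adj[v][u] = weight

g = EditableGraph()
g.put_edge("a", "b", 2.0)
g.put_edge("b", "c", 5.0)
g.modify_weight("b", "c", 7.5)
g.delete_node("a")
```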
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
32

Kist, Milton. "Novas metodologias de simulação do tipo Monte-Carlo via séries de Neumann aplicadas a problemas de flexão de placas." Universidade Tecnológica Federal do Paraná, 2016. http://repositorio.utfpr.edu.br/jspui/handle/1/1928.

Full text
Abstract:
Engineering is a field very rich and wide in problems. Even considering just the structural engineering branch, the amount and variability of problems remain very large. The increase in computational capacity has enabled the development of more complex and robust (stochastic) methods to solve structural problems considering uncertainty. Uncertainty may be due to randomness of material properties, support conditions and loading. Many stochastic methods are based on Monte-Carlo simulation; however, the direct Monte-Carlo method has a high computational cost. Aiming at the development of new methodologies for solving problems in the structures area, this thesis presents three new methodologies applied to stochastic plate bending problems, characterizing the scientific contribution of the thesis. These methodologies, named Monte Carlo-Neumann with adjustment of the bound, Monte Carlo-Neumann mixed 1, and Monte Carlo-Neumann mixed 2, all based on the Neumann series associated with the Monte-Carlo method, were applied to stochastic bending problems of Kirchhoff plates on Winkler and Pasternak foundations, considering randomness in the plate stiffness and in the stiffness coefficients of the supporting base.
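The Neumann-series device common to these methodologies can be sketched in a few lines: a perturbed system (I − A)x = b is solved by the truncated series Σₖ Aᵏ b instead of a direct inverse. The matrix A below is a generic small random perturbation, standing in for the thesis's random fluctuation of the plate stiffness about its mean, which is an assumption of this sketch.

```python
import numpy as np

def neumann_solve(A, b, terms=50):
    """Approximate (I - A)^-1 b by the truncated Neumann series
    sum_k A^k b, valid when the spectral radius of A is below one."""
    x = np.zeros_like(b)
    term = b.copy()
    for _ in range(terms):
        x += term
        term = A @ term      # next power of A applied to b
    return x

rng = np.random.default_rng(42)
A = 0.1 * rng.standard_normal((5, 5))      # small perturbation, rho(A) < 1
b = rng.standard_normal(5)
x_series = neumann_solve(A, b)
x_direct = np.linalg.solve(np.eye(5) - A, b)
```

In a Monte-Carlo loop, only the cheap matrix-vector products change per sample, which is the source of the cost savings over repeated direct solves.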
APA, Harvard, Vancouver, ISO, and other styles
33

Ortman, Robert L. "Sensory input encoding and readout methods for in vitro living neuronal networks." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44856.

Full text
Abstract:
Establishing and maintaining successful communication stands as a critical prerequisite for achieving the goals of inducing and studying advanced computation in small-scale living neuronal networks. The following work establishes a novel and effective method for communicating arbitrary "sensory" input information to cultures of living neurons, living neuronal networks (LNNs), consisting of approximately 20 000 rat cortical neurons plated on microelectrode arrays (MEAs) containing 60 electrodes. The sensory coding algorithm determines a set of effective codes (symbols), comprised of different spatio-temporal patterns of electrical stimulation, such that the LNN consistently produces a unique response to each individual symbol. The algorithm evaluates random sequences of candidate electrical stimulation patterns for evoked-response separability and reliability via a support vector machine (SVM)-based method, and, employing the separability results as a fitness metric, a genetic algorithm subsequently constructs subsets of highly separable symbols (input patterns). Sustainable input/output (I/O) bit rates of 16-20 bits per second with a 10% symbol error rate resulted for time periods of approximately ten minutes to over ten hours. To further evaluate the resulting code sets' performance, I used the system to encode approximately ten hours of sinusoidal input into stimulation patterns that the algorithm selected and was able to recover the original signal with a normalized root-mean-square error of 20-30% using only the recorded LNN responses and trained SVM classifiers. Response variations over the course of several hours observed in the results of the sine wave I/O experiment suggest that the LNNs may retain some short-term memory of the previous input sample and undergo neuroplastic changes in the context of repeated stimulation with sensory coding patterns identified by the algorithm.
APA, Harvard, Vancouver, ISO, and other styles
34

Swinson, Michael D. "Statistical Modeling of High-Dimensional Nonlinear Systems: A Projection Pursuit Solution." Diss., Available online, Georgia Institute of Technology, 2005, 2005. http://etd.gatech.edu/theses/available/etd-11232005-204333/.

Full text
Abstract:
Thesis (Ph. D.)--Mechanical Engineering, Georgia Institute of Technology, 2006.
Shapiro, Alexander, Committee Member ; Vidakovic, Brani, Committee Member ; Ume, Charles, Committee Member ; Sadegh, Nader, Committee Chair ; Liang, Steven, Committee Member. Vita.
APA, Harvard, Vancouver, ISO, and other styles
35

Sandberg, Emma. "Respektfull design: För ökad förståelse och vilja att förändra : FUCK ME, en utställning om den svåra sjukdomen ME/CFS." Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-79356.

Full text
Abstract:
In this essay, I go through the different parts that make up my degree project. I have explored the power of graphic design for educational purposes. During my process, I have found a new area of graphic design: respectful design. This can be briefly described as design created with the greatest reverence for conveying a difficult subject without diminishing the affected person behind it. It is an approach where one works closely with those involved and frequently checks in, so that nothing is portrayed incorrectly. This, along with the constant search for information, makes up the basics of respectful design. The disease ME has finally begun to be talked about in society. But the way the disease is portrayed can be argued to be incorrect in relation to the bigger picture. As a graphic designer I have analyzed the problem and worked out a possible solution. My conclusion is that strong colors mixed with melancholy and straightforward information can make the subject less heavy and easier to handle. This can open the senses to problem solving and understanding instead of stopping at pity. The aim of the project has been to get away from the dark blanket that the media has placed over us. The goal was to create something that does not hide the awfulness of the disease but informs in a more fun and inviting way. I strongly believe that graphic design can reduce the social stigma around a topic if done properly. When life is at its toughest, it has helped me and (after the response to the exhibition) others to learn more about our disease. Above all, if more people know more about the disease, ME patients will get better help. Hopefully society will open its arms so that no more people are forced to choose suicide.
APA, Harvard, Vancouver, ISO, and other styles
36

Dresch, Andrea Alves Guimarães. "Método para reconhecimento de vogais e extração de parâmetros acústicos para analises forenses." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1799.

Full text
Abstract:
Forensic speaker comparison exams have complex characteristics, demanding a long time for manual analysis. A method for automatic recognition of vowels, providing feature extraction for acoustic analysis, is proposed, aiming to contribute a support tool for these exams. The proposal is based on formant measurements by LPC (Linear Predictive Coding), selected by fundamental frequency detection, zero crossing rate, bandwidth and continuity, with the clustering being done by the k-means method. Experiments using samples from three different databases have shown promising results, in which the regions corresponding to five of the Brazilian Portuguese vowels were successfully located, providing visualization of a speaker's vocal tract behavior, as well as the detection of segments corresponding to target vowels.
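The formant-measurement step the method builds on can be sketched as LPC by the autocorrelation method (Levinson-Durbin recursion): the roots of the prediction polynomial give the resonance frequencies. The single-resonance synthetic signal below is an assumption for illustration, not forensic data.

```python
import numpy as np

def lpc_coefficients(signal, order):
    """LPC via the autocorrelation method and Levinson-Durbin recursion;
    returns [1, a1, ..., a_order] of the prediction-error filter."""
    n = len(signal)
    r = np.array([np.dot(signal[:n - k], signal[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err   # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]      # order update
        err *= (1.0 - k * k)
    return a

# A decaying resonance at 1 kHz sampled at 8 kHz: a 2nd-order LPC model
# should place its pole pair near that frequency.
fs, f0 = 8000.0, 1000.0
t = np.arange(400) / fs
x = np.exp(-30.0 * t) * np.sin(2.0 * np.pi * f0 * t)
a = lpc_coefficients(x, 2)
poles = np.roots(a)
freq = np.angle(poles[0]) * fs / (2.0 * np.pi)   # pole angle -> frequency in Hz
```

For speech, the order is chosen higher (roughly fs in kHz plus 2) so that several formants appear as distinct pole pairs.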
APA, Harvard, Vancouver, ISO, and other styles
37

Ellison, Cassandra J. "Recovery From Design." VCU Scholars Compass, 2017. http://scholarscompass.vcu.edu/etd/4884.

Full text
Abstract:
Through research, inquiry, and an evaluation of Recovery By Design, a ‘design therapy’ program that serves people with mental illness, substance use disorders, and developmental disabilities, it is my assertion that the practice of design has therapeutic potential and can aid in the process of recovery. To the novice, the practices of conception, shaping form, and praxis have empowering benefit especially when guided by Conditional and Transformation Design methods together with an emphasis on materiality and vernacular form.
APA, Harvard, Vancouver, ISO, and other styles
38

Hattingh, Johannes Hendrik. "Some aspects of the theory of circulant graphs." Thesis, 2014. http://hdl.handle.net/10210/9738.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Dorfling, Samantha. "Generalized chromatic numbers and invariants of hereditary graph properties." Thesis, 2011. http://hdl.handle.net/10210/4149.

Full text
Abstract:
D. Phil (Mathematics)
In this thesis we investigate generalized chromatic numbers in the context of hereditary graph properties. We also investigate the general topic of invariants of graphs as well as graph properties. In Chapter 1 we give relevant definitions and terminology pertaining to graph properties. In Chapter 2 we investigate generalized chromatic numbers of some well-known additive hereditary graph properties. This problem necessitates the investigation of reducible bounds. One of the results here is an improvement on a known upper bound for the path partition number of the property Wk. We also look at the generalized chromatic number of infinite graphs and hereby establish the connection between the generalized chromatic number of properties and infinite graphs. In Chapter 3 the analogous question of the generalized edge-chromatic number of some well-known additive hereditary properties is investigated. Similarly we find decomposable bounds and are also able to find generalized edge-chromatic numbers of properties using some well-known decomposable bounds. In Chapter 4 we investigate the more general topic of graph invariants and the role they play in chains of graph properties and then conversely the invariants that arise from chains of graph properties. Moreover we investigate the effects on monotonicity of the invariants versus heredity and additivity of graph properties. In Chapter 5 the general topic of invariants of graph properties defined in terms of the set of minimal forbidden subgraphs of the properties is studied. This enables us to investigate invariants so defined on binary operations between graph properties. In Chapter 6 the notion of natural and near-natural invariants are introduced and are also studied on binary operations of graph properties. The set of minimal forbidden subgraphs again plays a role in the definition of invariants here and this then leads us to study the completion number of a property.
APA, Harvard, Vancouver, ISO, and other styles
40

Jonck, Elizabeth. "The path partition number of a graph." Thesis, 2012. http://hdl.handle.net/10210/7118.

Full text
Abstract:
Ph.D.
The induced path number p(G) of a graph G is defined as the minimum number of subsets into which the vertex set V(G) of G can be partitioned such that each subset induces a path. In this thesis we determine the induced path number of a complete k-partite graph. We investigate the induced path number of products of complete graphs, of the complement of such products and of products of cycles. For a graph G, the linear vertex arboricity lva(G) is defined as the minimum number of subsets into which the vertex set of G can be partitioned so that each subset induces a linear forest. Since each path is a linear forest, lva(G) ≤ p(G) for each graph G. A graph G is said to be uniquely m-linear-forest-partitionable if lva(G) = m and there is only one partition of V(G) into m subsets so that each subset induces a linear forest. Furthermore, a graph G is defined to be m-lva-saturated if lva(G) ≤ m and lva(G + e) > m for each edge e of the complement of G. We construct graphs that are uniquely m-linear-forest-partitionable and m-lva-saturated. We characterize those graphs that are uniquely m-linear-forest-partitionable and m-lva-saturated. We also characterize the orders of uniquely m-path-partitionable disconnected, connected and m-p-saturated graphs. We look at the influence of the addition or deletion of a vertex or an edge on the path partition number. If G is a graph such that p(G) = k and p(G − v) = k − 1 for every v ∈ V(G), then we say that G is k-minus-critical. We prove that if G is a connected graph consisting of cyclic blocks B_i with p(B_i) = b_i for i = 1, 2, ..., n, where n ≥ 2 and k = b_1 + b_2 + ... + b_n − n + 1, then G is k-minus-critical if and only if each of the blocks B_i is a b_i-minus-critical graph.
APA, Harvard, Vancouver, ISO, and other styles
41

Garner, Charles R. "Investigations into the ranks of regular graphs." Thesis, 2012. http://hdl.handle.net/10210/6069.

Full text
Abstract:
Ph.D.
In this thesis, the ranks of many types of regular and strongly regular graphs are determined. Also determined are ranks of regular graphs under unary operations: the line graph, the complement, the subdivision graph, the connected cycle, the complete subdivision graph, and the total graph. The binary operations considered are the Cartesian product and the complete product. The ranks of the Cartesian product of regular graphs have been investigated previously in [BBD1]; here, we summarise and extend those results to include more regular graphs. We also examine a special nonregular graph, the path. Ranks of paths and products of graphs involving paths are presented as well.
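Rank computations of this kind are easy to reproduce numerically for small cases. As a sketch (cycles chosen as the illustrative regular family, not a claim about the thesis's scope): the eigenvalues of Cₙ are 2cos(2πj/n), so the adjacency rank drops below n exactly when some eigenvalue vanishes, i.e. when 4 divides n.

```python
import numpy as np

def cycle_adjacency(n):
    """Adjacency matrix of the cycle C_n."""
    a = np.zeros((n, n), dtype=int)
    for i in range(n):
        a[i][(i + 1) % n] = a[(i + 1) % n][i] = 1
    return a

# rank(C_n) = n unless 4 | n, in which case two eigenvalues are zero.
ranks = {n: np.linalg.matrix_rank(cycle_adjacency(n)) for n in range(3, 9)}
```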
APA, Harvard, Vancouver, ISO, and other styles
42

"Prediction of structures and properties of high-pressure solid materials using first principles methods." Thesis, 2016. http://hdl.handle.net/10388/ETD-2016-02-2441.

Full text
Abstract:
The purpose of the research contained in this thesis is to allow for the prediction of new structures and properties of crystalline materials under external pressure by using first-principles numerical computations. The body of the thesis is separated into two primary research projects. The properties of cupric oxide (CuO) have been studied at pressures below 70 GPa, and it has been suggested that it may show room-temperature multiferroics at pressures of 20 to 40 GPa. However, above this range the properties of CuO have yet to be examined thoroughly. The changes in the crystal structure of CuO were examined in these high-pressure ranges. It was predicted that the ambient-pressure monoclinic structure changes to a rocksalt structure and then a CsCl structure at high pressure. Changes in the magnetic ordering were also suggested to occur due to superexchange interactions and Jahn-Teller instabilities arising from the d-orbital electrons. Barium chloride (BaCl), which undergoes a similar structural change due to an s-d transition, was also examined, as its structural changes can offer further insight into the transitions observed in CuO. Ammonia borane (NH3BH3) is known to have a crystal structure which contains the molecules in staggered conformation at low pressure. The crystalline structure of NH3BH3 was examined at high pressure, which revealed that the staggered conformation transforms to an eclipsed conformation stabilized by homopolar B-H(δ−)···(δ−)H-B dihydrogen bonds. These bonds are shown to be covalent in nature, comparable in bond strength to conventional hydrogen bonds, and may allow for easier molecular hydrogen formation in hydrogen fuel storage.
APA, Harvard, Vancouver, ISO, and other styles
43

Harris, Laura Marie. "Aspects of functional variations of domination in graphs." Thesis, 2003. http://hdl.handle.net/10413/7384.

Full text
Abstract:
Let G = (V, E) be a graph. For any real-valued function f : V → R and S ⊆ V, let f(S) = Σ_{u∈S} f(u). The weight of f is defined as f(V). A signed k-subdominating function (signed kSF) of G is defined as a function f : V → {−1, 1} such that f(N[v]) ≥ 1 for at least k vertices of G, where N[v] denotes the closed neighborhood of v. The signed k-subdomination number of a graph G, denoted by γ_ks^{−11}(G), is equal to min{f(V) : f is a signed kSF of G}. If instead of the range {−1, 1} we require the range {−1, 0, 1}, then we obtain the concept of a minus k-subdominating function. Its associated parameter, called the minus k-subdomination number of G, is denoted by γ_ks^{−101}(G). In Chapter 2 we survey recent results on signed and minus k-subdomination in graphs. In Chapter 3, we compute the signed and minus k-subdomination numbers for certain complete multipartite graphs and their complements, generalizing results due to Holm [30]. In Chapter 4, we give a lower bound on the total signed k-subdomination number in terms of the minimum degree, maximum degree and the order of the graph. A lower bound in terms of the degree sequence is also given. We then compute the total signed k-subdomination number of a cycle, and present a characterization of graphs G with equal total signed k-subdomination and total signed l-subdomination numbers. Finally, we establish a sharp upper bound on the total signed k-subdomination number of a tree in terms of its order n and k, where 1 ≤ k ≤ n, and characterize trees attaining these bounds for certain values of k. For this purpose, we first establish the total signed k-subdomination number of simple structures, including paths and spiders. In Chapter 5, we show that the decision problem corresponding to the computation of the total minus domination number of a graph is NP-complete, even when restricted to bipartite graphs or chordal graphs.
For a fixed k, we show that the decision problem corresponding to determining whether a graph has a total minus domination function of weight at most k may be NP-complete, even when restricted to bipartite or chordal graphs. Also in Chapter 5, linear-time algorithms for computing γ_tns^{−11}(T) and γ_tns^{−101}(T) for an arbitrary tree T are presented, where n = n(T). In Chapter 6, we present cubic-time algorithms to compute γ_tks^{−11}(T) and γ_tks^{−101}(T) for a tree T. We show that the decision problem corresponding to the computation of γ_tks^{−11}(G) is NP-complete, and that the decision problem corresponding to the computation of γ_tks^{−101}(T) is NP-complete, even for bipartite graphs. In addition, we present cubic-time algorithms to compute γ_ks^{−11}(T) and γ_ks^{−101}(T) for a tree T, solving problems appearing in [25].
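The definitions above can be checked by brute force on small graphs. The sketch below (exhaustive over all sign assignments, so exponential in |V| and only viable for tiny graphs) computes the signed k-subdomination number of the 5-cycle; the graph choice is an illustration, not an example from the thesis.

```python
from itertools import product

def is_signed_kSF(adj, f, k):
    """Check that f : V -> {-1, +1} satisfies f(N[v]) >= 1 on at
    least k vertices, the defining condition of a signed kSF."""
    good = sum(1 for v in adj if f[v] + sum(f[u] for u in adj[v]) >= 1)
    return good >= k

def signed_k_subdomination_number(adj, k):
    """Minimum weight f(V) over all signed kSFs, by exhaustive search."""
    verts = list(adj)
    best = None
    for signs in product((-1, 1), repeat=len(verts)):
        f = dict(zip(verts, signs))
        if is_signed_kSF(adj, f, k):
            w = sum(signs)
            best = w if best is None else min(best, w)
    return best

# The 5-cycle: with k = |V| this is the classical signed domination number.
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
gamma_s = signed_k_subdomination_number(c5, 5)
```

On C5 at most one vertex can carry −1 when every closed neighborhood must sum to at least 1, so the k = 5 value is 3, while relaxing to k = 3 allows two −1 labels and weight 1.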
Thesis (Ph.D.)-University of Natal, Pietermaritzburg, 2003.
APA, Harvard, Vancouver, ISO, and other styles
44

鄭孟玉. "A New Prediction Model of Public Company Alternative Trading Methods by Using the Fuzzy Neural Network Theory." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/39465811400587630771.

Full text
Abstract:
Master's thesis
Feng Chia University
Graduate Institute of Accounting and Public Finance
89
The causes and conditions of financial deterioration often depend on expert judgment. There is a fuzzy factor in the process of human thinking and inference, but it was ignored in the past literature. In this research, a fuzzy artificial neural network is adopted for the first time to measure this fuzzy factor in the model construction. The model therefore not only fits the qualities of human thinking and judgment, but also improves practical applicability. Furthermore, the research adopts matched-pair comparison, which increases the classification accuracy of the model and at the same time avoids the possibility of oversampling. The following conclusions are drawn from the empirical results: 1. The comparison of homogeneous samples improves the predictive power of the model. 2. The influence of oversampling can be avoided. 3. The expert's experience can be fully expressed in the judgment rules through the weight values. 4. Whether the samples are distributed evenly or not most strongly influences the learning performance of the artificial neural network. The empirical results show that the model can completely identify the unsuccessful companies (the Type I error is 0); for the successful companies, some classification errors remain. Although the model has a Type II error, it reduces the cost of misclassification to a minimum, achieving the final goal of the model construction. Keywords: Fuzzy Theory, Back-propagation Network, Fuzzy Neural Network, Alternative Trading Methods
APA, Harvard, Vancouver, ISO, and other styles
45

"Optimal Bayesian estimators for image segmentation and surface reconstruction." Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/2879.

Full text
Abstract:
J.L. Marroquin.
"May, 1985."
Bibliography: p. 16.
"Advanced Research Projects Agency of the Department of Defense under Office of Naval Research Contract N00014-80-C-0505" "The author was supported by the Army Research Office under contract ARO-DAAG29-84-K-0005."
APA, Harvard, Vancouver, ISO, and other styles
46

"Summarizing static graphs and mining dynamic graphs." Thesis, 2011. http://library.cuhk.edu.hk/record=b6075341.

Full text
Abstract:
Besides finding changing areas based on the number of node and edge evolutions, a more interesting problem is to analyze the impact of these evolutions to graphs and find the regions that exhibit significant changes when these evolutions happen. The more different the relationship between nodes in a certain region is, the more significant this region is. This problem is challenging since it is hard to define the range of changing regions that is closely related to actual evolutions. We formalize the problem by using a similarity measure based on neighborhood random walks, and design an efficient algorithm which is able to identify the significant changing regions without recomputing all similarities. Meaningful examples in experiments demonstrate the effectiveness of our algorithms.
Graph patterns are able to represent the complex structural relations among objects in many applications in various domains. Managing and mining graph data, on which we study in this thesis, are no doubt among the most important tasks. We focus on two challenging problems, namely, graph summarization and graph change detection.
In the area of summarizing a collection of graphs, we study the problem of summarizing frequent subgraphs, since there is little need to summarize a collection of random graphs. The bottleneck for exploring and understanding frequent subgraphs is that they are numerous. A summary can be a solution to this issue, so the goal of frequent subgraph summarization is to minimize the restoration errors of the structure and the frequency information. The unique challenge in frequent subgraph summarization comes from the fact that a subgraph can have multiple embeddings in a summarization template graph. We handle this issue by introducing a partial order between edges to allow accurate structure and frequency estimation based on an independence probabilistic model. The proposed algorithm discovers k summarization templates in a top-down fashion to control the restoration error of frequencies within a bound σ. There is no restoration error of structures. Experiments on both real and synthetic graph datasets show that our framework can control the frequency restoration error within 10% by a compact summarization model.
The objective of graph change detection is to discover the changing areas on graphs as they evolve at high speed. The most changing areas are those having the highest number of evolutions (additions/deletions) of nodes and edges, which are called burst areas. We study the problem of finding the most burst areas in a stream of fast graph evolutions. We propose to use a Haar wavelet tree to monitor the upper bound of the number of evolutions. Our approach monitors all potential changing areas of different sizes and computes incrementally the number of evolutions in those areas. The top-k burst areas are returned as soon as they are detected. Our solution is capable of handling a large amount of evolutions in a short time, which is consistent with the experimental results.
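The monitoring idea can be approximated with a plain dyadic sum tree, which likewise gives O(1) access to the evolution count of any aligned window of dyadic size. This is a simplification of the paper's structure (a true Haar wavelet tree stores averages and differences rather than raw sums), and the toy counts are invented for the sketch.

```python
def dyadic_sum_tree(counts):
    """Build levels of sums over windows of doubling size, so the number
    of evolutions in any aligned dyadic window is available directly."""
    levels = [list(counts)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([prev[i] + prev[i + 1] for i in range(0, len(prev) - 1, 2)])
    return levels

def top_k_windows(levels, level, k):
    """The k aligned windows of size 2**level with the most evolutions,
    as (count, start_timestep) pairs."""
    size = 2 ** level
    scored = [(s, i * size) for i, s in enumerate(levels[level])]
    return sorted(scored, reverse=True)[:k]

# Eight timesteps of node/edge evolution counts with a burst at t = 4..5.
counts = [1, 0, 2, 1, 9, 8, 1, 0]
levels = dyadic_sum_tree(counts)
burst = top_k_windows(levels, 1, 1)   # best window of size 2
```

In a streaming setting only the O(log n) tree nodes on the path above the newest timestep need updating per arrival, which is what keeps the monitoring incremental.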
The objective of graph summarization is to obtain a concise representation of a single large graph or a collection of graphs, which is interpretable and suitable for analysis. A good summary can reveal the hidden relationships between nodes in a graph. The key issue of summarizing a single graph is how to construct a high-quality and representative summary, which is in the form of a super-graph. We propose an entropy-based unified model for measuring the homogeneity of the super-graph. The best summary in terms of homogeneity could be too large to explore. By using the unified model, we relax three summarization criteria to obtain an approximate homogeneous summary of appropriate size. We propose both agglomerative and divisive algorithms for approximate summarization, as well as pruning techniques and heuristics for both algorithms to save computation cost. Experimental results confirm that our approaches can efficiently generate high-quality summaries.
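As a toy illustration of an entropy-based homogeneity measure (a hedged sketch, not the unified model proposed in the thesis): a grouping of nodes into supernodes is more homogeneous when node attributes within each group are less mixed, which a size-weighted Shannon entropy captures. The attribute dictionary and groupings below are invented for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label multiset; 0 means perfectly homogeneous."""
    n = len(labels)
    return sum(-(c / n) * math.log2(c / n) for c in Counter(labels).values())

def summary_entropy(groups, attr):
    """Size-weighted attribute entropy inside each supernode; lower is better."""
    total = sum(len(g) for g in groups)
    return sum(len(g) / total * entropy([attr[v] for v in g]) for g in groups)

attr = {1: 'a', 2: 'a', 3: 'b', 4: 'b'}
print(summary_entropy([[1, 2], [3, 4]], attr))  # 0.0, each supernode is pure
print(summary_entropy([[1, 3], [2, 4]], attr))  # 1.0, maximally mixed
```

An agglomerative summarizer in this spirit would repeatedly merge the pair of supernodes whose merge increases this entropy the least, stopping when the summary reaches the desired size.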
Liu, Zheng.
Advisers: Wai Lam; Jeffrey Xu Yu.
Source: Dissertation Abstracts International, Volume: 73-06, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references (leaves 133-141).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
APA, Harvard, Vancouver, ISO, and other styles
47

Moodley, Lohini. "Two conjectures on 3-domination critical graphs." Diss., 1999. http://hdl.handle.net/10500/17505.

Full text
Abstract:
For a graph G = (V(G), E(G)), a set S ⊆ V(G) dominates G if each vertex in V(G) \ S is adjacent to a vertex in S. The domination number γ(G) (independent domination number i(G)) of G is the minimum cardinality amongst its dominating sets (independent dominating sets). G is k-edge-domination-critical, abbreviated k-γ-critical, if the domination number k decreases whenever an edge is added. Further, G is hamiltonian if it has a cycle that passes through each of its vertices. This dissertation assimilates research generated by two conjectures: Conjecture 1. Every 3-γ-critical graph with minimum degree at least two is hamiltonian. Conjecture 2. If G is k-γ-critical, then γ(G) = i(G). The recent proof of Conjecture 1 is consolidated and presented accessibly. Conjecture 2 remains open for k = 3 and has been disproved for k ≥ 4. The progress is detailed and proofs of new results are presented.
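The definitions above can be made concrete with a small brute-force computation of γ(G) (illustrative only; the representation of graphs as adjacency dictionaries and the example cycle are my assumptions):

```python
from itertools import combinations

def dominates(graph, s):
    """True if every vertex is in s or adjacent to a vertex in s."""
    return all(v in s or graph[v] & s for v in graph)

def domination_number(graph):
    """Brute-force gamma(G): size of a smallest dominating set."""
    verts = list(graph)
    for k in range(1, len(verts) + 1):
        for cand in combinations(verts, k):
            if dominates(graph, set(cand)):
                return k

# The 5-cycle C5: no single vertex dominates, but two opposite vertices do.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(domination_number(c5))  # 2
```

Checking k-γ-criticality the same way would mean adding each missing edge in turn and verifying that the domination number drops; this is exponential in |V| and only suitable for small examples.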
Mathematical Science
M. Sc. (Mathematics)
APA, Harvard, Vancouver, ISO, and other styles
48

"Developing and Testing a Theory of Intentions to Exit Street-level Prostitution: A Mixed Methods Study." Doctoral diss., 2013. http://hdl.handle.net/2286/R.I.18161.

Full text
Abstract:
Exiting prostitution is a process whereby women gradually leave prostitution after a number of environmental, relational, and cognitive changes have taken place. Most women attempting to leave street prostitution reenter five or more times before successfully exiting, if they are able to at all. Prostitution-exiting programs are designed to alleviate barriers to exiting, but several studies indicate only about 20-25% of participants enrolled in such programs are successful. There is little quantitative knowledge on the prostitution exiting process and current literature lacks a testable theory of exiting. This mixed-methods study defined and operationalized key cognitive processes by applying the Integrative Model of Behavioral Prediction (IMBP) to measure intentions to exit street-level prostitution. Intentions are thought to be a determinant of behavior and hypothesized as a function of attitudes, norms, and efficacy beliefs. The primary research objective was to measure and test a theory-driven hypothesis examining intentions to exit prostitution. To accomplish these aims, interviews were conducted with 16 men and women involved in prostitution to better capture the latent nuances of exiting (e.g., attitudinal changes, normative influence). These data informed the design of a quantitative instrument that was pilot-tested with a group of former prostitutes and reviewed by experts in the field. The quantitative phase focused on validating the instrument and testing the theory in a full latent variable structural equation model with a sample of 160 former and active prostitutes. Ultimately, the theory and instrument developed in this study will lay the foundation to test interventions for street prostituted women. Prior research has only been able to describe, but not explain or predict, the prostitution exiting process. This study fills a gap in literature by providing a quantitative examination of women's intentions to leave prostitution.
The results contribute to our understanding of the cognitive changes that occur when a person leaves prostitution, and the validated instrument may be used as an intervention assessment or an exit prediction tool. Success in predicting an individual's passage through the exiting process could have important and far-reaching implications on recidivism policies or interventions for this vulnerable group of women.
Dissertation/Thesis
Ph.D. Social Work 2013
APA, Harvard, Vancouver, ISO, and other styles
49

Skaggs, Robert Duane. "Identifying vertices in graphs and digraphs." Thesis, 2007. http://hdl.handle.net/10500/2226.

Full text
Abstract:
The closed neighbourhood of a vertex in a graph is the vertex together with the set of adjacent vertices. A differentiating-dominating set, or identifying code, is a collection of vertices whose intersection with the closed neighbourhoods of each vertex is distinct and nonempty. A differentiating-dominating set in a graph serves to uniquely identify all the vertices in the graph. Chapter 1 begins with the necessary definitions and background results and provides motivation for the following chapters. Chapter 1 includes a summary of the lower identification parameters, γL and γd. Chapter 2 defines co-distinguishable graphs and determines bounds on the number of edges in graphs which are distinguishable and co-distinguishable, while Chapter 3 describes the maximum number of vertices needed in order to identify vertices in a graph, and includes some Nordhaus-Gaddum type results for the sum and product of the differentiating-domination number of a graph and its complement. Chapter 4 explores criticality, in which any minor modification in the edge or vertex set of a graph causes the differentiating-domination number to change. Chapter 5 extends the identification parameters to allow for orientations of the graphs in question and considers the question of when adding orientation helps reduce the value of the identification parameter. We conclude with a survey of complexity results in Chapter 6 and a collection of interesting new research directions in Chapter 7.
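The identifying-code definition above translates directly into a small check (a sketch under my own conventions: graphs as adjacency dictionaries, with a path on four vertices as the toy example):

```python
def is_identifying_code(graph, code):
    """True if N[v] ∩ code is nonempty and distinct for every vertex v."""
    seen = set()
    for v in graph:
        sig = frozenset((graph[v] | {v}) & code)  # closed neighbourhood ∩ code
        if not sig or sig in seen:
            return False  # empty or duplicated signature: v is not identified
        seen.add(sig)
    return True

# Path P4: 0 - 1 - 2 - 3
p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(is_identifying_code(p4, {0, 1, 2}))  # True: signatures all differ
print(is_identifying_code(p4, {0, 2, 3}))  # False: vertices 2 and 3 clash
```

In the second call, vertices 2 and 3 both see {2, 3} from the code, so they cannot be told apart, which is exactly the failure an identifying code must rule out.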
Mathematical Sciences
PhD (Mathematics)
APA, Harvard, Vancouver, ISO, and other styles
50

Yang, Lili. "Joint models for longitudinal and survival data." Thesis, 2014. http://hdl.handle.net/1805/4666.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Epidemiologic and clinical studies routinely collect longitudinal measures of multiple outcomes. These longitudinal outcomes can be used to establish the temporal order of relevant biological processes and their association with the onset of clinical symptoms. In the first part of this thesis, we proposed to use bivariate change point models for two longitudinal outcomes, with a focus on estimating the correlation between the two change points. We adopted a Bayesian approach for parameter estimation and inference. In the second part, we considered the situation when a time-to-event outcome is also collected along with multiple longitudinal biomarkers measured until the occurrence of the event or censoring. Joint models for longitudinal and time-to-event data can be used to estimate the association between the characteristics of the longitudinal measures over time and survival time. We developed a maximum-likelihood method to jointly model multiple longitudinal biomarkers and a time-to-event outcome. In addition, we focused on predicting conditional survival probabilities and evaluating the predictive accuracy of multiple longitudinal biomarkers in the joint modeling framework. We assessed the performance of the proposed methods in simulation studies and applied the new methods to data sets from two cohort studies.
National Institutes of Health (NIH) Grants R01 AG019181, R24 MH080827, P30 AG10133, R01 AG09956.
APA, Harvard, Vancouver, ISO, and other styles
