
Dissertations / Theses on the topic 'Statistical and GLCM'

Consult the top 22 dissertations / theses for your research on the topic 'Statistical and GLCM.'


1

Nhapi, Raymond T. "A GLMM analysis of data from the Sinovuyo Caring Families Program (SCFP)." Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29507.

Full text
Abstract:
We present an analysis of data from a longitudinal randomized controlled trial that assesses the impact of an intervention program aimed at improving the quality of childcare within families. The SCFP was a group-based program implemented over two separate waves conducted in Khayelitsha and Nyanga. The data were collected at baseline, post-test, and one-year follow-up via questionnaires (self-assessment) and observational video coding. Multiple imputation (using chained equations) was used to impute missing information. The responses were summed questionnaire scores, which were often right-skewed and zero-inflated. Generalized linear mixed models (GLMMs) were used to assess the impact of the intervention program on these responses, adjusted for possible confounding variables. All effects (fixed and random) were estimated by maximum likelihood. Primarily, an intention-to-treat analysis was done, after which a per-protocol analysis was also implemented with participants who attended a specified number of the group sessions. All these GLMMs were fitted within the imputation framework.
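The abstract combines GLMMs with multiply imputed data, which requires pooling the per-imputation estimates into one result. As an illustration of that pooling step only, the sketch below applies Rubin's rules in Python; the numbers are invented, not taken from the thesis.

```python
import statistics

def pool_rubin(estimates, variances):
    """Pool one coefficient's estimates from m imputed datasets via Rubin's rules.

    estimates: per-imputation point estimates
    variances: per-imputation squared standard errors (within-imputation variance)
    Returns (pooled estimate, total variance).
    """
    m = len(estimates)
    q_bar = sum(estimates) / m           # pooled point estimate
    u_bar = sum(variances) / m           # mean within-imputation variance
    b = statistics.variance(estimates)   # between-imputation variance
    total_var = u_bar + (1 + 1 / m) * b  # Rubin's total-variance formula
    return q_bar, total_var

# toy example: one GLMM coefficient estimated on m = 3 imputed datasets
est, var = pool_rubin([0.42, 0.47, 0.40], [0.010, 0.012, 0.011])
```

The pooled estimate is just the mean across imputations, while the total variance inflates the within-imputation variance by the between-imputation spread.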
APA, Harvard, Vancouver, ISO, and other styles
2

Chang, Sheng-Mao. "A Stationary Stochastic Approximation Algorithm for Estimation in the GLMM." NCSU, 2007. http://www.lib.ncsu.edu/theses/available/etd-05172007-164438/unrestricted/etd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hatzopoulos, Peter. "Statistical and mathematical modelling for mortality trends and the comparison of mortality experiences, through generalised linear models and GLIM." Thesis, City University London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364032.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

HUANG, BIN. "STATISTICAL ASSESSMENT OF THE CONTRIBUTION OF A MEDIATOR TO AN EXPOSURE OUTCOME PROCESS." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1005678075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Leodolter, Johannes. "A Statistical Analysis of the Lake Levels at Lake Neusiedl." Austrian Statistical Society, 2008. http://epub.wu.ac.at/5634/1/296%2D1009%2D1%2DSM.pdf.

Full text
Abstract:
A long record of daily data is used to study the lake levels of Lake Neusiedl, a large steppe lake at the eastern border of Austria. Daily lake level changes are modeled as functions of precipitation, temperature, and wind conditions. The occurrence and the amount of daily precipitation are modeled with logistic regressions and generalized linear models.
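The occurrence part of such a model is a logistic regression of rain/no-rain on daily covariates. A minimal sketch of the prediction step, with hypothetical coefficients rather than the thesis's fitted values:

```python
import math

def rain_probability(intercept, coefs, predictors):
    """Occurrence part of the model: P(rain today) from a logistic regression.

    coefs/predictors: matching sequences of fitted coefficients and covariate
    values (e.g. temperature, wind speed). All values here are illustrative.
    """
    eta = intercept + sum(b * x for b, x in zip(coefs, predictors))
    return 1.0 / (1.0 + math.exp(-eta))  # inverse logit link

# hypothetical fitted values: temperature and wind speed as predictors
p = rain_probability(-1.2, [0.05, 0.10], [10.0, 4.0])  # linear predictor = -0.3
```

The amount of precipitation on wet days would then come from a separate GLM, as the abstract describes.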
APA, Harvard, Vancouver, ISO, and other styles
6

Lu, Rong. "Statistical Methods for Functional Genomics Studies Using Observational Data." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1467830759.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lachmann, Jon. "Subsampling Strategies for Bayesian Variable Selection and Model Averaging in GLM and BGNLM." Thesis, Stockholms universitet, Statistiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-194715.

Full text
Abstract:
Bayesian Generalized Nonlinear Models (BGNLM) offer a flexible alternative to GLM while still providing better interpretability than machine learning techniques such as neural networks. In BGNLM, the methods of Bayesian Variable Selection and Model Averaging are applied in an extended GLM setting. Models are fitted to data using MCMC within a genetic framework in an algorithm called GMJMCMC. In this thesis, we present a new implementation of the algorithm as a package in the programming language R. We also present a novel algorithm called S-IRLS-SGD for estimating the MLE of a GLM by subsampling the data. Finally, we present some theory combining the novel algorithm with GMJMCMC/MJMCMC/MCMC and a number of experiments demonstrating the performance of the contributed algorithm.
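As a much-simplified stand-in for the idea of estimating a GLM from data subsamples (not the thesis's S-IRLS-SGD algorithm itself), the sketch below fits a one-feature logistic model by minibatch stochastic gradient descent:

```python
import math
import random

def sgd_logistic(xs, ys, batch_size=8, lr=0.1, epochs=200, seed=0):
    """Fit a one-feature logistic GLM by SGD on random subsamples of the data."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        idx = rng.sample(range(n), batch_size)  # subsample the data
        gw = gb = 0.0
        for i in idx:
            p = 1.0 / (1.0 + math.exp(-(w * xs[i] + b)))
            gw += (p - ys[i]) * xs[i]           # gradient of the negative log-likelihood
            gb += (p - ys[i])
        w -= lr * gw / batch_size
        b -= lr * gb / batch_size
    return w, b

# toy data: y is 1 when x > 0, so the fitted slope should come out positive
xs = [i / 10.0 for i in range(-20, 20)]
ys = [1 if x > 0 else 0 for x in xs]
w, b = sgd_logistic(xs, ys)
```

Each update only touches a small random subset of observations, which is the property that makes subsampling attractive for large datasets.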
APA, Harvard, Vancouver, ISO, and other styles
8

O'Leary, Brian. "A Vertex-Based Approach to the Statistical and Machine Learning Analyses of Brain Structure." University of Toledo / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1576254162111087.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

VACCA, GIANMARCO. "Redundancy Analysis Models with Categorical Endogenous Variables: New Estimation Techniques Based on Vector GLM and Artificial Neural Networks." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2017. http://hdl.handle.net/10281/158304.

Full text
Abstract:
Structural Equation Models with latent variables have developed considerably in recent years. Starting from the pioneers of the two most prominent ways of defining models with latent variables, namely Covariance Structure Analysis and Component Analysis, with LISREL and PLS-PM as the most famous techniques, several extensions and improvements have been put forward. Moreover, for Redundancy Analysis models, which are part of the Component Analysis framework but have only observed endogenous variables, new methods have been proposed in the literature to deal with more than one group of exogenous observed variables, with simple linear equations and a unified optimization problem. One main criticism, addressed recently in new strands of the Structural Equation Modeling literature, is the partial inability of these systems of linear equations to deal with categorical indicators. Several methods have been proposed, in PLS-PM and LISREL respectively, either related to Optimal Scaling or adapting the EM algorithm to the particular case under examination. In the Redundancy Analysis framework, with only observed endogenous variables, the possibility of extending the estimation procedures to a qualitative setting is considerably less hampered by model restrictions, even more so in the Extended Redundancy Analysis model, with more than one block of exogenous variables. This work hence presents a new estimation of Extended Redundancy Analysis models in the presence of binary or categorical endogenous variables, with two main estimation techniques: Iteratively Reweighted Least Squares, and Gradient Descent with backpropagation in an Artificial Neural Network architecture. For the latter, recent developments in Structural Equation Models in the neural network setting are first examined, and the new technique is subsequently introduced.
APA, Harvard, Vancouver, ISO, and other styles
10

Nargis, Suraiya, and n/a. "Robust methods in logistic regression." University of Canberra. Information Sciences & Engineering, 2005. http://erl.canberra.edu.au./public/adt-AUC20051111.141200.

Full text
Abstract:
My Master's research aims to deepen our understanding of the behaviour of robust methods in logistic regression. Logistic regression is a special case of Generalized Linear Modelling (GLM), which is a powerful and popular technique for modelling a large variety of data. Robust methods are useful in reducing the effect of outlying values in the response variable on parameter estimates. A literature survey shows that we are still at the beginning of being able to detect extreme observations in logistic regression analyses, to apply robust methods in logistic regression, and to present the results of logistic regression analyses informatively. In Chapter 1 I give a basic introduction to logistic regression, with an example, and to robust methods in general. In Chapters 2 through 4 I describe traditional methods, and some relatively new ones, for presenting results of logistic regression using powerful visualization techniques, as well as the concept of outliers in binomial data. I use different published data sets for illustration, such as the Prostate Cancer data set, the Damaged Carrots data set, and the Recumbent Cow data set. In Chapter 4 I summarize and report on modern concepts of graphical methods, such as central dimension reduction, and the use of graphics as pioneered by Cook and Weisberg (1999). In Section 4.6 I extend the work of Cook and Weisberg to robust logistic regression. In Chapter 5 I describe simulation studies investigating the effects of outlying observations on logistic regression (robust and non-robust). In Section 5.2 I conclude that, in the case of classical or robust multiple logistic regression, robust methods do not necessarily provide more reasonable parameter estimates for data that contain no strong outliers. In Section 5.4 I look into cases where outliers are present and conclude that either the breakdown method or a sensitivity analysis provides reasonable parameter estimates in that situation. Finally, I identify areas for further study.
APA, Harvard, Vancouver, ISO, and other styles
11

Chardon, Jérémy. "Intérêts de la méthode des analogues pour la génération de scénarios de précipitations à l'échelle de la France métropolitaine : Cohérence spatiale et adaptabilité du lien d'échelle." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENU044/document.

Full text
Abstract:
Hydrological scenarios required for impact studies need unbiased meteorological scenarios adapted to the space and time scales of the considered hydro-systems. Meteorological scenarios obtained as raw output from global climate models and/or numerical weather prediction models are therefore not appropriate. Outputs of these models have to be post-processed, which is often carried out with Statistical Downscaling Methods (SDMs). Since the 2000s, SDMs have been widely used for the generation of scenarios at a single site. The generation of relevant precipitation fields over large regions or hydro-systems is conversely not straightforward, in particular when spatial consistency has to be satisfied. One strategy to fulfill this constraint is to use an SDM based on the search for past analog situations. In this PhD, we evaluate the ability of an Analog Model (AM), where the analogy is applied to the 1000 and 500 hPa geopotential heights, to generate spatially coherent precipitation scenarios over the French metropolitan territory. In a first part, the spatial transferability of the AM is evaluated: the model appears to be usable for the generation of spatially coherent scenarios over territories covering several tens of thousands of square kilometers, provided no climatological barrier is met in between. In a second part, we evaluate the sensitivity of prediction performance to the spatial aggregation of the predictand. Performance increases with the aggregation level as long as the large-scale variables are good predictors of precipitation for the region under consideration; this increase relates to the decrease of predictand variability. We finally explore the possibility of improving the local performance of the AM using additional local-scale predictors. For each prediction day, the prediction is obtained from a parametric regression model whose predictors and parameters are estimated from the analog dates. The resulting combined model noticeably increases prediction performance by adapting the downscaling link for each prediction day. The selected predictors for a given prediction depend on the large-scale situation and on the considered region.
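The core of the analog method described above is a nearest-neighbour search over past large-scale situations. A toy sketch of that selection step, with invented fields and precipitation values:

```python
def analog_scenarios(target, archive, k=3):
    """Analog downscaling sketch: find the k archive days whose large-scale
    fields (here, flat tuples of geopotential values) are closest to the
    target day, and return their observed local precipitation as scenarios."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    ranked = sorted(archive, key=lambda day: dist(day["field"], target))
    return [day["precip"] for day in ranked[:k]]

# toy archive: each past day stores a large-scale field and local precipitation
archive = [
    {"field": (0.0, 1.0), "precip": 0.0},
    {"field": (0.9, 1.1), "precip": 5.0},
    {"field": (1.0, 1.0), "precip": 4.0},
    {"field": (5.0, 5.0), "precip": 20.0},
]
scenarios = analog_scenarios((1.0, 1.05), archive, k=2)  # [4.0, 5.0]
```

Because the scenarios are real observed precipitation fields from the analog dates, spatial consistency across sites comes for free, which is the property the thesis exploits.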
APA, Harvard, Vancouver, ISO, and other styles
12

Mikl, Michal. "Zkoumání vlivu nepřesností v experimentální stimulaci u fMRI." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-233482.

Full text
Abstract:
The aim of this work is to study the impact of inaccuracy in the execution of a required task (inaccuracy in the subject's behavioral response to experimental stimulation) by a person undergoing fMRI examination. The work proceeds in several stages. First, a theoretical analysis of inaccuracy in the fMRI experiment was performed and simulations with synthetic data were created. Several variables in the general linear model, as well as t-statistics, were followed. We found that the estimated effect size depends linearly on the covariance between the corresponding columns of the X and D matrices, or their linear combination. The component of residual variance caused by inaccuracy is negligible at real-life noise levels; in that case, moreover, the dependence of the t-statistic on inaccuracy becomes linear. Next, our theoretical results (dependencies and characteristics of variables) were verified using real data, and all results were confirmed. Last, I focused on possible practical uses of the uncovered characteristics and dependencies. Optimization of experimental design with respect to inaccuracy, correction of inaccurate results, and the reliability of inaccurate results are introduced and discussed. In particular, the calculation of maps of maximal tolerable inaccuracy can be useful for finding robust or weak activation (tending not to be detected, or to differ significantly from the accurate value) in real fMRI experiments.
APA, Harvard, Vancouver, ISO, and other styles
13

Drakenward, Ellinor, and Emelie Zhao. "Modeling risk and price of all risk insurances with General Linear Models." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275696.

Full text
Abstract:
This bachelor's thesis lies within the field of mathematical statistics. In collaboration with the insurance company Hedvig, it explores a new way of handling Hedvig's insurance data by building a pricing model for all-risk insurance with generalized linear models. Two generalized linear models were built: the first predicts the frequency of a claim and the second predicts its severity. The original data were divided into 9 explanatory variables. Both models initially included five explanatory variables and were then reduced; the reduction left four of the five variables significant in the frequency model and only one of the five significant in the severity model. Each model yielded relative risks for the levels of its explanatory variables, which combine into a total risk for each level. By multiplying a constructed base level with a chosen combination of risk parameters, the premium for a given customer can be obtained.
APA, Harvard, Vancouver, ISO, and other styles
14

Andersson, Gustaf. "Generalised linear factor score regression : a comparison of four methods." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412851.

Full text
Abstract:
Factor score regression has recently received growing interest as an alternative to structural equation modelling. Two issues causing uncertainty for researchers are addressed in this thesis. Firstly, more knowledge is needed on how different approaches to calculating factor score estimates compare when estimating factor score regression models. Secondly, many applications are left without guidance because of the literature's focus on normally distributed outcomes. This thesis examines how factor scoring methods compare when estimating regression coefficients in generalised linear factor score regression. An evaluation is made of the regression, correlation-preserving, total-sum, and weighted-sum methods in ordinary, logistic, and Poisson factor score regression. In contrast to previous studies, both the mean and variance of loading coefficients and the degree of inter-factor correlation are varied in the simulations. A meta-analysis demonstrates that the choice of factor scoring method can substantially influence research conclusions. The regression and correlation-preserving methods outperform the other two in terms of coefficient and standard-error bias, accuracy, and empirical Type I error rates. Moreover, the regression method generally has the best performance. It is also noticed that performance can differ notably across the considered regression models.
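Two of the compared scoring approaches, the total-sum and weighted-sum methods, reduce to simple arithmetic on a factor's indicators. A minimal sketch with made-up item responses and loadings:

```python
def sum_score(items):
    """Total-sum factor score: unweighted sum of a factor's indicators."""
    return sum(items)

def weighted_sum_score(items, loadings):
    """Weighted-sum factor score: indicators weighted by their loadings."""
    return sum(l * x for l, x in zip(loadings, items))

# one respondent's three indicators and illustrative loadings
items = [3.0, 4.0, 2.0]
loadings = [0.8, 0.6, 0.7]
ts = sum_score(items)                     # 9.0
ws = weighted_sum_score(items, loadings)  # 0.8*3 + 0.6*4 + 0.7*2 = 6.2
```

The regression and correlation-preserving methods the thesis favours instead derive their weights from the full fitted factor model, which is why they can behave quite differently in the downstream regression.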
APA, Harvard, Vancouver, ISO, and other styles
15

Saigiridharan, Lakshidaa. "Dynamic prediction of repair costs in heavy-duty trucks." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166133.

Full text
Abstract:
Pricing of repair and maintenance (R&M) contracts is among the most important processes carried out at Scania. Predictions of repair costs at Scania are carried out using experience-based prediction methods, which do not involve statistical methods for the computation of average repair costs for contracts terminated in the recent past. This method is difficult to apply to a reference population of rigid Scania trucks. Hence, the purpose of this study is to perform suitable statistical modelling to predict repair costs for four variants of rigid Scania trucks. The study gathers repair data from multiple sources and performs feature selection using the Akaike Information Criterion (AIC) to extract the most significant features that influence repair costs for each truck variant. The study showed that including operational features as factors could further influence the pricing of contracts. The hurdle Gamma model, which is widely used to handle zero inflation in Generalized Linear Models (GLMs), is fitted to the data, which consist of numerous zero and non-zero values. Due to the inherent hierarchical structure within the data, expressed by individual chassis, a hierarchical hurdle Gamma model is also implemented. These two statistical models are found to perform much better than the experience-based prediction method; this evaluation is done using the mean absolute error (MAE) and root mean square error (RMSE) statistics. A final model comparison is conducted using the AIC to draw conclusions based on goodness of fit and the predictive performance of the two statistical models. On assessing the models using these statistics, the hierarchical hurdle Gamma model was found to perform predictions best.
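A hurdle Gamma model's prediction is the product of a binary (hurdle) part and a Gamma part. The sketch below illustrates that structure with hypothetical coefficients; it is not Scania's model or the thesis's fit:

```python
import math

def expected_repair_cost(x, beta_zero, beta_gamma):
    """Hurdle Gamma prediction sketch (all coefficients hypothetical):
    a logistic part models P(cost > 0), a log-link Gamma part models the
    mean cost given that it is nonzero; the expectation is their product."""
    eta_zero = sum(b * v for b, v in zip(beta_zero, x))
    p_nonzero = 1.0 / (1.0 + math.exp(-eta_zero))       # hurdle (binary) part
    mu_nonzero = math.exp(sum(b * v for b, v in zip(beta_gamma, x)))  # Gamma mean
    return p_nonzero * mu_nonzero

# one truck: intercept term plus mileage (in 100,000 km units) as features
x = [1.0, 2.0]
cost = expected_repair_cost(x, beta_zero=[-1.0, 0.5], beta_gamma=[6.0, 0.3])
```

The hierarchical variant in the thesis additionally gives each chassis its own random effect in both parts; the prediction formula keeps the same two-part structure.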
APA, Harvard, Vancouver, ISO, and other styles
16

Kuhnert, Petra Meta. "New methodology and comparisons for the analysis of binary data using Bayesian and tree based methods." Thesis, Queensland University of Technology, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
17

Wang, Xinlei. "Bayesian variable selection for GLM." Thesis, 2002. http://wwwlib.umi.com/cr/utexas/fullcit?p3110701.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Stone, Ryan Alexander. "Modeling Victoria's Injection Drug Users." Thesis, 2013. http://hdl.handle.net/1828/4899.

Full text
Abstract:
The objective of this thesis is to examine random effect models applied to binary data. I will use classical and Bayesian inference to fit generalized linear mixed models to a specific data set. The data analyzed in this thesis come from a study examining the injection practices of needle exchange clientele in Victoria, B.C., focusing on their risk networks. First, I will examine the application of social network analysis to the study of injection drug use, focusing on issues of gender, norms, and the problem of hidden populations. Next the focus will be on random effect models, where I will provide some background and a few examples pertaining to generalized linear mixed models (GLMMs). After GLMMs, I will discuss the nature of the injection drug use study and the data, which will then be analyzed using a GLMM. Lastly, I will provide a discussion of my results of the GLMM analysis along with a summary of the injection practices of the needle exchange clientele.
APA, Harvard, Vancouver, ISO, and other styles
19

"Bayesian D-Optimal Design Issues and Optimal Design Construction Methods for Generalized Linear Models with Random Blocks." Doctoral diss., 2015. http://hdl.handle.net/2286/R.I.36021.

Full text
Abstract:
Optimal experimental design for generalized linear models is often done using a pseudo-Bayesian approach that integrates the design criterion across a prior distribution on the parameter values. This approach ignores the lack of utility of certain models contained in the prior, and a case is demonstrated where the heavy focus on such hopeless models results in a design with poor performance and wild swings in coverage probabilities for Wald-type confidence intervals. Design construction using a utility-based approach is shown to result in much more stable coverage probabilities in the area of greatest concern. The pseudo-Bayesian approach can be applied to the problem of optimal design construction under dependent observations. Often, correlation between observations exists due to restrictions on randomization. Several techniques for optimal design construction are proposed for the case where the conditional response distribution is a natural exponential family member but with a normally distributed block effect. The reviewed pseudo-Bayesian approach is compared to an approach based on substituting the marginal likelihood with the joint likelihood, and to an approach based on projections of the score function (often called quasi-likelihood). These approaches are compared for several models with normal, Poisson, and binomial conditional response distributions via the true determinant of the expected Fisher information matrix, where the dispersion of the random blocks is considered a nuisance parameter. A case study using the developed methods is performed. The joint- and quasi-likelihood methods are then extended to address the case when the magnitude of random block dispersion is of concern. Again, a simulation study over several models is performed, followed by a case study where the conditional response distribution is a Poisson distribution.
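The pseudo-Bayesian criterion described above averages the design criterion over draws from the parameter prior. A scalar sketch of that averaging for a one-parameter logistic model, with an invented prior and candidate designs:

```python
import math

def logistic_info(design, beta):
    """Fisher information for a one-parameter logistic model at design points x."""
    total = 0.0
    for x in design:
        p = 1.0 / (1.0 + math.exp(-beta * x))
        total += x * x * p * (1.0 - p)  # per-point information contribution
    return total

def pseudo_bayes_d(design, prior_draws):
    """Pseudo-Bayesian D-criterion sketch: average the log information
    (the log-determinant, a scalar here) across draws from the prior."""
    return sum(math.log(logistic_info(design, b)) for b in prior_draws) / len(prior_draws)

# compare two candidate two-point designs under a small set of prior draws
draws = [0.5, 1.0, 2.0]
crit_a = pseudo_bayes_d([-1.0, 1.0], draws)
crit_b = pseudo_bayes_d([-3.0, 3.0], draws)
```

The criticism in the abstract is visible even here: a draw where the model is nearly degenerate (e.g. a very large slope) contributes a strongly negative log-information term and can dominate the average.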
APA, Harvard, Vancouver, ISO, and other styles
20

"Analyzing Taguchi's experiments using GLIM with inverse Gaussian distribution." Chinese University of Hong Kong, 1994. http://library.cuhk.edu.hk/record=b5888159.

Full text
Abstract:
by Wong Kwok Keung.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1994. Includes bibliographical references (leaves 50-52).
Contents:
1. Introduction (p. 1)
2. Taguchi's methodology in design of experiments (p. 3): 2.1 System design; 2.2 Parameter design; 2.3 Tolerance design
3. Inverse Gaussian distribution (p. 8): 3.1 Genesis; 3.2 Probability density function; 3.3 Estimation of parameters; 3.4 Applications
4. Iterative procedures and derivation of the GLIM 4 macros (p. 21): 4.1 Generalized linear models with varying dispersion; 4.2 Mean and dispersion models for inverse Gaussian distribution; 4.3 Devising the GLIM 4 macro; 4.4 Model fitting
5. Simulation study (p. 34): 5.1 Generating random variates from the inverse Gaussian distribution; 5.2 Simulation model; 5.3 Results; 5.4 Discussion
Appendix (p. 46)
References (p. 50)
APA, Harvard, Vancouver, ISO, and other styles
21

Bush, Nathan. "Niche-Based Modeling of Japanese Stiltgrass (Microstegium vimineum) Using Presence-Only Information." 2015. https://scholarworks.umass.edu/masters_theses_2/265.

Full text
Abstract:
The Connecticut River watershed is experiencing a rapid invasion of aggressive non-native plant species, which threaten watershed function and structure. Volunteer-based monitoring programs such as the University of Massachusetts' OutSmart Invasive Species Project, the Early Detection and Distribution Mapping System (EDDMapS), and the Invasive Plant Atlas of New England (IPANE) have gathered valuable invasive plant data. These programs provide a unique opportunity for researchers to model invasive plant species utilizing citizen-sourced data. This study took advantage of these large data sources to model invasive plant distribution, to determine the environmental and biophysical predictors that are most influential in dispersion, and to identify a suitable presence-only model for use by conservation biologists and land managers at varying spatial scales. The research focused on an invasive plant species of high interest, Japanese stiltgrass (Microstegium vimineum), identified as a threat by U.S. Fish and Wildlife Service refuge biologists and managers, but for which no practical, systematic, multi-scale approach to detection has yet been developed. Environmental and biophysical variables include factors directly affecting species physiology and locality, such as annual temperatures, growing degree days, soil pH, available water supply, elevation, proximity to hydrology and roads, and NDVI. Spatial scales selected for this study include New England (regional), the Connecticut River watershed (watershed), and the U.S. Fish and Wildlife Service Silvio O. Conte National Fish and Wildlife Refuge, Salmon River Division (local). At each spatial scale, three software programs were implemented: a maximum entropy habitat model by means of the MaxEnt software, ecological niche factor analysis (ENFA) using openModeller software, and a generalized linear model (GLM) employed in the statistical software R.
Results suggest that modeling algorithm performance varies among spatial scales. The best-fit modeling software designated for each scale will be useful for refuge biologists and managers in determining where to allocate resources and what areas are prone to invasion. Utilizing the regional-scale results, managers can understand what areas are at risk of M. vimineum invasion on a broad scale under current climatic variables. The watershed-scale results will be practical for protecting areas designated as most critical for ensuring the persistence of rare and endangered species and their habitats. Furthermore, the local-scale, or fine-scale, analysis will be directly useful for on-the-ground conservation efforts. Managers and biologists can use the results to direct resources to areas where M. vimineum is most likely to occur, to effectively improve early detection and rapid response (EDRR).
APA, Harvard, Vancouver, ISO, and other styles
22

Mader, Felix. "Räumliche, GIS-gestützte Analyse von Linientransektstichproben." Doctoral thesis, 2007. http://hdl.handle.net/11858/00-1735-0000-0006-B626-D.

Full text
APA, Harvard, Vancouver, ISO, and other styles