
Dissertations / Theses on the topic 'Statistical methods of evaluation'



Consult the top 50 dissertations / theses for your research on the topic 'Statistical methods of evaluation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Cameron, Maxwell Hugh 1943. "Statistical evaluation of road trauma countermeasures." Monash University, Dept. of Mathematics and Statistics, 2000. http://arrow.monash.edu.au/hdl/1959.1/7943.

Full text
2

Chung, Yuk-ka, and 鍾玉嘉. "On the evaluation and statistical analysis of forensic evidence in DNA mixtures." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B45983586.

Full text
3

Choy, Yan-tsun, and 蔡恩浚. "Statistical evaluation of mixed DNA stains." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42664287.

Full text
4

Li, Longzhuang. "Statistical methods for performance evaluation and their applications /." free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3060118.

Full text
5

Bujatzeck, Baldur. "Statistical evaluation of water quality measurements." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0017/MQ44134.pdf.

Full text
6

Knopp, Jeremy Scott. "Modern Statistical Methods and Uncertainty Quantification for Evaluating Reliability of Nondestructive Evaluation Systems." Wright State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=wright1395942220.

Full text
7

Ackerberg, Björn. "Application of some statistical methods for evaluation of groundwater observations." Licentiate thesis, KTH, Land and Water Resources Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1531.

Full text
Abstract:
With the objective of reviewing different statistical methods for evaluation of groundwater data and the design of a groundwater observation network, a comprehensive literature survey was performed. The literature survey focuses on spatial statistics (geostatistics) but also includes methods to evaluate time series of groundwater data and the determination of the sampling frequency. A method is developed which provides a means of quantifying the accuracy of an existing groundwater monitoring network with regards to spatial interpolation and the locations of the corresponding observation points. The spatial interpolation method of ordinary kriging was used. A result from ordinary kriging was estimated (interpolated) levels at unmeasured points, but also a kriging variance. The kriging variance can be interpreted as a measure of the estimation accuracy and used as a criterion for network design. Design of a monitoring network for groundwater levels in an area includes the selection of the number of observation points and the spatial locations of the observation points. The method was applied to design a monitoring network in an area in a glaciofluvial deposit, the Nybro esker, which is the main aquifer for the water supply of the Kalmar-Nybro region in the southeast of Sweden. This thesis shows that it is possible to quantify the accuracy of an existing observation network using the average kriging variance as a measure of accuracy. It is also possible to describe how this kriging variance changes (increases) when the observation network is reduced. By using this variance it is possible to rank the different points in the network as to their relative importance. It is thus possible to identify the points which are to be removed when the observation network is reduced, one point at a time. This study shows that a monitoring network in the study area could be reduced by 35% while the increase in average estimation (kriging) variance is only about 10%. Although the method is applied to groundwater levels in a glaciofluvial deposit, it is applicable also to other variables that can be considered regionalized and to other geological environments.
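The network-reduction idea described in this abstract (rank observation points by how little the average ordinary-kriging variance grows when they are dropped) can be illustrated with a short sketch. This is not code from the thesis; the exponential variogram parameters, the synthetic well coordinates and the greedy 35% reduction loop are assumptions for illustration only.

```python
# Sketch: average ordinary-kriging variance as a criterion for thinning a monitoring network.
import numpy as np

def variogram(h, nugget=0.0, sill=1.0, rng=2000.0):
    """Exponential variogram model gamma(h) (assumed parameters)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng))

def ok_variance(obs_xy, grid_xy):
    """Ordinary-kriging variance at every grid node for a given set of observation points."""
    n = len(obs_xy)
    d_oo = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d_oo)
    A[n, n] = 0.0
    d_og = np.linalg.norm(obs_xy[:, None, :] - grid_xy[None, :, :], axis=-1)
    b = np.ones((n + 1, grid_xy.shape[0]))
    b[:n] = variogram(d_og)
    lam = np.linalg.solve(A, b)          # kriging weights plus Lagrange multiplier
    return np.sum(lam * b, axis=0)       # sigma_OK^2 at every grid node

gen = np.random.default_rng(1)
obs = gen.uniform(0, 5000, size=(25, 2))                     # existing wells (m), invented
gx, gy = np.meshgrid(np.linspace(0, 5000, 30), np.linspace(0, 5000, 30))
grid = np.column_stack([gx.ravel(), gy.ravel()])

# Drop the least important well one at a time (smallest increase in average kriging variance).
active = list(range(len(obs)))
base = ok_variance(obs[active], grid).mean()
for step in range(int(0.35 * len(obs))):
    losses = [(ok_variance(obs[[j for j in active if j != i]], grid).mean(), i)
              for i in active]
    new_var, drop = min(losses)
    active.remove(drop)
    print(f"removed well {drop:2d}: avg kriging variance {base:.3f} -> {new_var:.3f}")
    base = new_var
```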
8

Elia, Eleni. "Statistical methods in prognostic factor research : application, development and evaluation." Thesis, University of Birmingham, 2017. http://etheses.bham.ac.uk//id/eprint/7259/.

Full text
Abstract:
In patients with a particular disease or health condition, prognostic factors are characteristics (such as age, biomarkers) that are associated with different risks of a future clinical outcome. Research is needed to identify prognostic factors, but current evidence suggests that primary research is of low quality and poorly/selectively reported, which limits subsequent systematic reviews and meta-analysis. This thesis aims to improve prognostic factor research, through the application, development and evaluation of statistical methods to quantify the effect of potential prognostic factors. Firstly, I conduct a new prognostic factor study in pregnant women. The findings suggest that the albumin/creatinine ratio (ACR) is an independent prognostic factor for neonatal and, in particular, maternal composite adverse outcomes; thus ACR may enhance individualised risk prediction and clinical decision-making. Then, a literature review is performed to flag challenges in conducting meta-analysis of prognostic factor studies in the same clinical area. Many issues are identified, especially between-study heterogeneity and potential bias in the thresholds (cut-off points) used to dichotomise continuous factors, and the set of adjustment factors. Subsequent chapters aim to tackle these issues by proposing novel multivariate meta-analysis methods to ‘borrow strength’ across correlated thresholds and/or adjustment factors. These are applied to a variety of examples, and evaluated through simulation, which show how the approach can reduce bias and improve precision of meta-analysis results, compared to traditional univariate methods. In particular, the percentage reduction in the variance is of a similar magnitude to the percentage of data missing at random.
9

Nanzad, Bolorchimeg. "EVALUATION OF STATISTICAL METHODS FOR MODELING HISTORICAL RESOURCE PRODUCTION AND FORECASTING." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/theses/2192.

Full text
Abstract:
This master's thesis project consists of two parts. Part I of the project compares modeling of historical resource production and forecasting of future production trends using the logit/probit transform advocated by Rutledge (2011) with conventional Hubbert curve fitting, using global coal production as a case study. The conventional Hubbert/Gaussian method fits a curve to historical production data, whereas a logit/probit transform uses a linear fit to a subset of transformed production data. Within the errors and limitations inherent in this type of statistical modeling, these methods provide comparable results. That is, despite the apparent goodness-of-fit achievable using the logit/probit methodology, neither approach provides a significant advantage over the other in either explaining the observed data or in making future projections. For mature production regions, those that have already substantially passed peak production, results obtained by either method are closely comparable and reasonable, and estimates of ultimately recoverable resources obtained by either method are consistent with geologically estimated reserves. In contrast, for immature regions, estimates of ultimately recoverable resources generated by either of these alternative methods are unstable and thus need to be used with caution. Although the logit/probit transform generates a high quality of fit to historical production data, this approach provides no new information compared to conventional Gaussian or Hubbert-type models and may have the effect of masking the noise and/or instability in the data and the derived fits. In particular, production forecasts for immature or marginally mature production systems based on either method need to be regarded with considerable caution. Part II of the project investigates the utility of a novel alternative method for multicyclic Hubbert modeling, tentatively termed "cycle-jumping", wherein overlap of multiple cycles is limited. The model is designed so that each cycle is described by the same three parameters as the conventional multicyclic Hubbert model, and every two neighboring cycles are connected by a transition, described as a weighted coaddition of the two cycles and governed by three parameters: the transition year, the transition width, and a weighting parameter γ. The cycle-jumping method provides a better model than the conventional multicyclic Hubbert model and reflects historical production behavior more reasonably and practically: by explicitly considering the form of the transitions between production cycles, it better captures the effects of technological transitions and socioeconomic factors on historical resource production.
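As a rough illustration of the two fitting strategies compared in Part I (a conventional Hubbert/logistic fit versus a linear fit to logit-transformed cumulative production), here is a minimal sketch. It is not from the thesis; the synthetic production series and the trial-URR grid are assumptions.

```python
# Sketch: Hubbert-style logistic fit vs. a Rutledge-style logit-transform fit to cumulative production.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def logistic(t, urr, k, tm):
    """Cumulative production under a single Hubbert cycle (logistic growth)."""
    return urr / (1.0 + np.exp(-k * (t - tm)))

years = np.arange(1900, 2015)
gen = np.random.default_rng(0)
q = logistic(years, urr=250.0, k=0.07, tm=1985.0) * gen.normal(1.0, 0.02, years.size)

# (1) Conventional Hubbert fit: nonlinear least squares on the cumulative curve.
(urr_h, k_h, tm_h), _ = curve_fit(logistic, years, q, p0=(300.0, 0.05, 1990.0))

# (2) Logit transform: for trial URR values, logit(Q/URR) should be linear in time;
#     keep the URR giving the straightest line (highest R^2).
best = max(
    (linregress(years, np.log(q / (urr - q))).rvalue ** 2, urr)
    for urr in np.linspace(q.max() * 1.01, 600.0, 400)
)
print(f"Hubbert fit URR ~ {urr_h:.0f}, logit-transform URR ~ {best[1]:.0f}")
```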
10

Matysek, Paul Frank. "An evaluation of regional stream sediment data by advanced statistical procedures." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/24860.

Full text
Abstract:
This study was directed towards the development of rigorous, systematic, computer-assisted statistical procedures for the interpretation of quantitative and qualitative data commonly encountered in practical exploration-oriented surveys. A suite of data analysis tools was developed to evaluate the quality of geochemical data sets, to investigate the value and utilization of categorical field data, and to recognize and rank anomalous samples. Data obtained from regional stream sediment surveys as undertaken by the British Columbia Ministry of Energy, Mines and Petroleum Resources in southern British Columbia were examined as a case history. A procedure based on a statistical analysis of field-site duplicates was developed to evaluate the quality of regional geochemical silt data. The technique determines: (1) whether differences in metal concentrations between sample sites reflect a real trend related to geological and geochemical features and not merely a consequence of sampling and analytical error, and (2) absolute precision estimates at any particular accumulation across a metal's concentration range. Results for the metals Zn, Cu, Ni, Co, Fe and Mn indicated that combined variability due to local and procedural error averaged less than 5% of the total error and that precision estimates at the 95th percentile concentration value averaged less than 6.0%. Results presented indicate the duplicates are more in accord with splits of individual samples (analytical duplicates) than with separate field-site duplicates. This type of systematic approach provides a basis for interpreting geochemical trends within the survey area, while simultaneously allowing evaluation of the method of sampling and laboratory analysis. A procedure utilizing Duncan's Multiple Range Test examined the relationships between metal concentrations and class-interval and categorical observations of the drainage catchment, sample site and sediment sample. Results show that many field observations can be systematically related to the metal content of drainage sediments. Some elements are more susceptible than others to environmental factors, and some factors influence few or many elements. For example, in sediments derived from granites there are significant relationships between bank type and the concentration of 8 elements (Zn, Cu, Ni, Pb, Co, Fe, Mn and Hg). In contrast, the texture of these sediments, using estimates of fines content as an index, did not significantly affect the concentration of any of the elements studied. In general, results indicate that groups of environmental factors acting collectively are more important than any single factor in determining background metal contents of drainage sediments. A procedure utilizing both a graphical and a multiple regression approach was developed to identify and characterize anomalous samples. The procedure determines multivariate models based on background metal values which are used to describe very general geochemical relations of no interest for prospecting purposes. These models are then applied to sample subsets selected on the basis of factors known to strongly influence geochemical results. Individual samples are characterized after comparisons with relevant determined threshold levels and background multielement models. One hundred and fifteen anomalous samples for zinc from seven provenance groups draining 1259 sample sites were identified and characterized by this procedure. Forty-three of these samples had zinc concentrations greater than their calculated provenance threshold, while 72 of these anomalous samples were identified solely because their individual metal associations were significantly different from their provenance multivariate background model. The method provides a means to reduce the effects of background variations while simultaneously identifying and characterizing anomalous samples. The data analysis tools described here allow extraction of useful information from regional geochemical data, and as a result provide an effective means of defining problems of geological interest that warrant further investigation.
11

Liberati, Giorgia. "Application of Statistical Methods for the Evaluation of Anaerobic Digestion Process Parameters." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
Two studies were conducted on the applicability of experimental designs and statistical methods to anaerobic digestion (AD) processes. In the first, a screening of 11 trace elements (TE) for biomethane production from the organic fraction of municipal solid waste (OF-MSW) and chicken manure (CM) was carried out using a Plackett-Burman (PB) design. The experiment was conducted under mesophilic conditions in batch micro-reactors. The influence of the TE proved to depend on the type of substrate used. According to the statistical analysis, Fe, Se, W, Cu, Mn and Ni were significant for the mono-digestion of the OF-MSW, whereas none of the TE added to the conditions with CM turned out to be highly influential. In the second study, different proportions of corn silage (CS), grape pomace (GP) and olive mill wastewater (OMW) were tested for biogas production, with the aim of evaluating the effects of such combinations and identifying the mixture that maximises the process. The experiment was conducted under thermophilic conditions in batch micro-reactors, applying a three-factor mixture design with methane yield and productivity as response variables. The highest values of these responses, 163.31 ± 21.05 Nml CH4/g VS and 0.82 Nml CH4/d/ml respectively, were obtained from the conditions with high proportions of CS and low proportions of OMW. The model obtained from the experimental data revealed the synergistic and antagonistic interactions between the substrates. Co-digestion, compared with mono-digestion, led to an increase in methane yield, improving the performance of the process. Through response surface methodology, the two responses were optimised simultaneously and the combination that maximises them was found. The results were validated experimentally. Statistical analysis proved to be a valid tool for evaluating and optimising process parameters of anaerobic digestion.
12

Haddon, Andrew L. "Evaluation of Some Statistical Methods for the Identification of Differentially Expressed Genes." FIU Digital Commons, 2015. http://digitalcommons.fiu.edu/etd/1913.

Full text
Abstract:
Microarray platforms have been around for many years, and while there is a rise of new technologies in laboratories, microarrays are still prevalent. When it comes to the analysis of microarray data to identify differentially expressed (DE) genes, many methods have been proposed and modified for improvement. However, the most popular methods, such as Significance Analysis of Microarrays (SAM), samroc, fold change, and rank product, are far from perfect. Which method is most powerful depends on the characteristics of the sample and the distribution of the gene expressions. The most practiced method is usually SAM or samroc, but when the data tend to be skewed, the power of these methods decreases. With the concept that the median becomes a better measure of central tendency than the mean when the data are skewed, the test statistics of the SAM and fold change methods are modified in this thesis. This study shows that the median-modified fold change method improves the power in many cases when identifying DE genes if the data follow a lognormal distribution.
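A minimal sketch of the idea behind a median-modified fold change (replace the mean with the median when expression data are skewed) follows. It is not the thesis code; the simulated lognormal expression matrix, the number of truly regulated genes and the 2-fold cutoff are assumptions.

```python
# Sketch: mean-based vs. median-based fold change for flagging differentially expressed genes.
import numpy as np

gen = np.random.default_rng(42)
n_genes, n_per_group = 1000, 8
expr = gen.lognormal(mean=5.0, sigma=1.0, size=(n_genes, 2 * n_per_group))
expr[:50, n_per_group:] *= 3.0          # first 50 genes truly up-regulated in group 2

group1, group2 = expr[:, :n_per_group], expr[:, n_per_group:]
fc_mean = group2.mean(axis=1) / group1.mean(axis=1)
fc_median = np.median(group2, axis=1) / np.median(group1, axis=1)   # robust to skew

for name, fc in [("mean fold change", fc_mean), ("median fold change", fc_median)]:
    called = np.where(fc > 2.0)[0]
    true_pos = np.sum(called < 50)
    print(f"{name}: {len(called)} genes called, {true_pos} of the 50 true DE genes found")
```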
13

Orak, Nur H. "Statistical Methods for Evaluating Exposure-Health Relationships." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/738.

Full text
Abstract:
Conventional experimental techniques are sometimes limited in their ability to assess the actual risk of chemical exposures. Therefore, there is a rising awareness of mathematical, computational, and statistical approaches to provide insight into the adverse effects of environmental contaminants. Richard Bach once wrote: “Any powerful idea is absolutely fascinating and absolutely useless until we choose to use it.” Likewise, any data may be viewed as absolutely fascinating and absolutely useless until we choose to understand and use it. Recent advances in science and technology provide alternative paths to develop effective risk-assessment methods for environmental contaminants. Moreover, these methods are more efficient in terms of time and cost. Therefore, I develop three Chapters to show the importance of statistical methods in environmental-health risk assessment, and highlight the potency of data-driven knowledge and multidisciplinary research for the future of environmental science and engineering. In Chapter 1, I review the potential risks of missing chemical data and concentration variability on mixture toxicity by developing 27 occurrence scenarios based on data from the literature. The @RISK software simulates random concentrations, assuming multivariate lognormal distributions for the mixture components. In Chapter 2, I demonstrate how a performance analysis can be implemented for a Bayesian Network (BN) representation of a dose-response relationship. I explore the effect of different sample sizes on predicting the strength of the relationship between true responses and true doses of environmental toxicants. In Chapter 3, I characterize the risk factors of a prenatal arsenic exposure network by using Bayesian Network (BN) modeling as a tool for health risk assessment.
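As a small illustration of the kind of occurrence scenario described for Chapter 1 (random mixture-component concentrations drawn from a multivariate lognormal distribution), here is a hedged sketch in Python rather than the @RISK software; the medians, geometric standard deviations and correlations are invented for illustration.

```python
# Sketch: simulate correlated mixture-component concentrations from a multivariate lognormal.
import numpy as np

gen = np.random.default_rng(8)
medians = np.array([5.0, 0.8, 12.0])          # ug/L, three co-occurring contaminants (assumed)
gsd = np.array([2.5, 3.0, 1.8])               # geometric standard deviations (assumed)
corr = np.array([[1.0, 0.4, 0.2],
                 [0.4, 1.0, 0.3],
                 [0.2, 0.3, 1.0]])

mu = np.log(medians)
sigma = np.log(gsd)
cov = corr * np.outer(sigma, sigma)           # covariance on the log scale
conc = np.exp(gen.multivariate_normal(mu, cov, size=10_000))

print("simulated 95th-percentile concentrations:",
      np.quantile(conc, 0.95, axis=0).round(2))
```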
14

Elsas, Jonathan L. "An Evaluation of Projection Techniques for Document Clustering: Latent Semantic Analysis and Independent Component Analysis." Thesis, School of Information and Library Science, 2005. http://hdl.handle.net/1901/208.

Full text
Abstract:
Dimensionality reduction in the bag-of-words vector space document representation model has been widely studied for the purposes of improving accuracy and reducing computational load of document retrieval tasks. These techniques, however, have not been studied to the same degree with regard to document clustering tasks. This study evaluates the effectiveness of two popular dimensionality reduction techniques for clustering, and their effect on discovering accurate and understandable topical groupings of documents. The two techniques studied are Latent Semantic Analysis and Independent Component Analysis, each of which have been shown to be effective in the past for retrieval purposes.
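The two projection techniques named in this abstract can be sketched in a few lines; this is an illustration, not the study's pipeline, and the toy corpus and the choice of two components and three clusters are assumptions.

```python
# Sketch: LSA (truncated SVD) and ICA projections of a TF-IDF matrix before k-means clustering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD, FastICA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

docs = [
    "groundwater kriging variance observation network",
    "kriging interpolation of groundwater levels",
    "gene expression microarray fold change analysis",
    "differentially expressed genes in microarray data",
    "hubbert curve fitting of coal production",
    "logit transform forecasting of resource production",
]

X = TfidfVectorizer().fit_transform(docs)
for name, reducer in [("LSA", TruncatedSVD(n_components=2, random_state=0)),
                      ("ICA", FastICA(n_components=2, random_state=0))]:
    Z = reducer.fit_transform(X.toarray() if name == "ICA" else X)   # ICA needs dense input
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
    print(name, labels, f"silhouette={silhouette_score(Z, labels):.2f}")
```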
15

Lanhede, Daniel. "Non-parametric Statistical Process Control : Evaluation and Implementation of Methods for Statistical Process Control at GE Healthcare, Umeå." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-104512.

Full text
Abstract:
Statistical process control (SPC) is a toolbox for detecting changes in the output distribution of a process. It can serve as a valuable resource for maintaining high quality in a manufacturing process. This report is based on work evaluating and implementing methods for SPC in the chromatography instrument manufacturing process at GE Healthcare, Umeå. To handle low-volume and non-normally distributed process output data, non-parametric methods are considered. Eight control charts, three for Phase I analysis and five for Phase II analysis, are evaluated in this study. The usability of the charts is assessed based on ease of interpretation and the performance in detecting distributional changes. The latter is evaluated with simulations. The result of the project is the implementation of the RS/P-chart, suggested by Capizzi et al (2013), for Phase I analysis. Of the considered Phase I methods (and simulation scenarios), the RS/P-chart has the highest overall probability of detecting a variety of distributional changes. Further, the RS/P-chart is easily interpreted, facilitating the analysis. For Phase II analysis, two control charts have been implemented: one based on the Mann-Whitney U statistic, suggested by Chakraborti et al (2008), and one based on the Mood test statistic for dispersion, suggested by Ghute et al (2014). These are chosen mainly for their ease of interpretation. To reduce the detection time for changes in the process distribution, the change-point chart based on the Cramer von Mises statistic, suggested by Ross et al (2012), could be used instead. Using single observations instead of larger samples, this chart is updated more frequently. However, this effectively increases the false alarm rate, and the chart is also considered much more difficult for the SPC practitioner to interpret.
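A minimal sketch of a Phase II chart in the spirit of the Mann-Whitney chart mentioned above (compare each incoming subgroup with a Phase I reference sample and signal on a small p-value) follows. It is not the implementation from the report; the gamma-distributed reference data, the subgroup size, the introduced shift and the signalling level are assumptions.

```python
# Sketch: Phase II monitoring with the Mann-Whitney U test against a Phase I reference sample.
import numpy as np
from scipy.stats import mannwhitneyu

gen = np.random.default_rng(7)
reference = gen.gamma(shape=2.0, scale=1.0, size=100)    # Phase I (in-control) data, assumed

for t in range(1, 21):
    shift = 2.5 if t > 12 else 0.0                       # process shifts after subgroup 12
    subgroup = gen.gamma(shape=2.0, scale=1.0, size=5) + shift
    p = mannwhitneyu(subgroup, reference, alternative="two-sided").pvalue
    flag = "  <-- out-of-control signal" if p < 0.005 else ""
    print(f"subgroup {t:2d}: p = {p:.4f}{flag}")
```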
16

Valero, Rafael. "Essays on Sparse-Grids and Statistical-Learning Methods in Economics." Doctoral thesis, Universidad de Alicante, 2017. http://hdl.handle.net/10045/71368.

Full text
Abstract:
The thesis consists of three chapters. The first is a study of the implementation of sparse grid methods for the analysis of economic models with many dimensions, carried out through novel applications of the Smolyak method with the aim of improving tractability and obtaining precise results. The results show efficiency gains in the implementation of models with multiple agents. The second chapter introduces a new methodology for the evaluation of economic policies, called Synthetic Control with Statistical Learning, applied to two particular policies: a) the reduction of the number of working hours in Portugal in 1996 and b) the reduction of dismissal costs in Spain in 2010. The methodology works and stands as an alternative to previous methods. In empirical terms, it is shown that after the implementation of the policy there was an effective reduction of unemployment in Portugal and, in the case of Spain, an increase. The third chapter uses the methodology of the second chapter and applies it, among other methodologies, to evaluate the implementation of the Third European Road Safety Action Programme. The results show that the coordination of road safety at the European level has provided complementary help; for the year 2010, a reduction of between 13,900 and 19,400 road fatalities across Europe is estimated.
17

Hoyle, Simon David. "Statistical methods for assessing and managing wild populations." Queensland University of Technology, 2005. http://eprints.qut.edu.au/16413/.

Full text
Abstract:
This thesis is presented as a collection of five papers and one report, each of which has been either published after peer review or submitted for publication. It covers a broad range of applied statistical methods, from deterministic modelling to integrated Bayesian modelling using MCMC, via bootstrapping and stochastic simulation. It also covers a broad range of subjects, from analysis of recreational fishing diaries, to genetic mark recapture for wombats. However, it focuses on practical applications of statistics to the management of wild populations. The first chapter (Hoyle and Jellyman 2002, published in Marine and Freshwater Research) applies a simple deterministic yield per recruit model to a fishery management problem: possible overexploitation of the New Zealand longfin eel. The chapter has significant implications for longfin eel fishery management. The second chapter (Hoyle and Cameron 2003, published in Fisheries Management and Ecology) focuses on uncertainty in the classical paradigm, by investigating the best way to estimate bootstrap confidence limits on recreational harvest and catch rate using catch diary data. The third chapter (Hoyle et al., in press with Molecular Ecology Notes) takes a different path by looking at genetic mark-recapture in a fisheries management context. Genetic mark-recapture was developed for wildlife abundance estimation but has not previously been applied to fish harvest rate estimation. The fourth chapter (Hoyle and Banks, submitted) addresses genetic mark-recapture, but in the wildlife context for estimates of abundance rather than harvest rate. Our approach uses individual-based modeling and Bayesian analysis to investigate the effect of shadows on abundance estimates and confidence intervals, and to provide guidelines for developing sets of loci for populations of different sizes and levels of relatedness. The fifth chapter (Hoyle and Maunder 2004, Animal Biodiversity and Conservation) applies integrated analysis techniques developed in fisheries to the modeling of protected species population dynamics - specifically the north-eastern spotted dolphin, Stenella attenuata. It combines data from a number of different sources in a single statistical model, and estimates parameters using both maximum likelihood and Bayesian MCMC. The sixth chapter (Hoyle 2002, peer reviewed and published as Queensland Department of Primary Industries Information Series) results directly from a pressing management issue: developing new management procedures for the Queensland east coast Spanish mackerel fishery. It uses an existing stock assessment as a starting point for an integrated Bayesian management strategy evaluation. Possibilities for further research have been identified within the subject areas of each chapter, both within the chapters and in the final discussion chapter.
18

Thomas, Len. "Evaluation of statistical methods for estimating long-term population change from extensive wildlife surveys." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq25174.pdf.

Full text
19

Wang, Bo. "Novel statistical methods for evaluation of metabolic biomarkers applied to human cancer cell lines." Miami University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=miami1399046331.

Full text
20

Lindbom, Lars. "Development, Application and Evaluation of Statistical Tools in Pharmacometric Data Analysis." Doctoral thesis, Uppsala : Acta Universitatis Upaliensis : Universitetsbiblioteket [distributör], 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-6825.

Full text
21

Tsao, Su-Ching 1961. "Evaluation of drug absorption by cubic spline and numerical deconvolution." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/276954.

Full text
Abstract:
A novel approach using smoothing cubic splines and point-area deconvolution to estimate the absorption kinetics of linear systems has been investigated. A smoothing cubic spline is employed as an interpolation function since it is superior to polynomials and other functions commonly used for representation of empirical data in several respects. An advantage of the method is that results obtained from the same data set will be more consistent, irrespective of who runs the program or how many times it is run. In addition, no initial estimates are needed to run the program. Identical sampling times or equally spaced measurements of the unit impulse response and the response of interest are not required. The method is compared with another method using simulated data containing various degrees of random noise.
22

CROPPER, JOHN PHILIP. "TREE-RING RESPONSE FUNCTIONS. AN EVALUATION BY MEANS OF SIMULATIONS (DENDROCHRONOLOGY RIDGE REGRESSION, MULTICOLLINEARITY)." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/187946.

Full text
Abstract:
The problem of determining the response of tree ring width growth to monthly climate is examined in this study. The objective is to document which of the available regression methods are best suited to deciphering the complex link between tree growth variation and climate. Tree-ring response function analysis is used to determine which instrumental climatic variables are best associated with tree-ring width variability. Ideally such a determination would be accomplished, or verified, through detailed physiological monitoring of trees in their natural environment. A statistical approach is required because such biological studies on mature trees are currently too time consuming to perform. The use of lagged climatic data to duplicate a biological, rather than a calendar, year has resulted in an increase in the degree of intercorrelation (multicollinearity) of the independent climate variables. The presence of multicollinearity can greatly affect the sign and magnitude of estimated regression coefficients. Using series of known response, the effectiveness of five different regression methods was objectively assessed in this study. The results from each of the 2000 regressions were compared to the known regression weights and a measure of relative efficiency computed. The results indicate that ridge regression analysis is, on average, four times more efficient (average relative efficiency of 4.57) than unbiased multiple linear regression at producing good coefficient estimates. The results from principal components regression are a slight improvement over those from multiple linear regression, with an average relative efficiency of 1.45.
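The relative-efficiency comparison described here (how much closer ridge estimates are to known coefficients than unbiased least squares under strong multicollinearity) can be mimicked with a small simulation. This is not the dissertation's experiment; the design, the true weights and the fixed ridge penalty are assumptions.

```python
# Sketch: relative efficiency of ridge regression vs. OLS with strongly collinear predictors.
import numpy as np

gen = np.random.default_rng(3)
n, p, n_sim = 60, 12, 2000                   # 60 years, 12 collinear monthly predictors (assumed)
true_beta = np.zeros(p)
true_beta[[3, 4, 5]] = [0.5, 1.0, 0.5]       # only a few months truly matter (assumed)

base = gen.normal(size=(n, 1))
X = 0.9 * base + 0.1 * gen.normal(size=(n, p))   # predictors share a common signal
X = (X - X.mean(0)) / X.std(0)

sse_ols = sse_ridge = 0.0
lam = 5.0                                     # fixed ridge penalty (assumed)
for _ in range(n_sim):
    y = X @ true_beta + gen.normal(scale=1.0, size=n)
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    sse_ols += np.sum((b_ols - true_beta) ** 2)
    sse_ridge += np.sum((b_ridge - true_beta) ** 2)

print(f"relative efficiency of ridge vs OLS: {sse_ols / sse_ridge:.2f}")
```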
23

Zanga, Ambroise 1956. "Sampling efficiency evaluation in Emory oak woodlands of southeastern Arizona." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/291343.

Full text
Abstract:
A forest inventory was made by a two-man team in the Emory oak (Quercus emoryi) woodlands near the Huachuca Mountains in southeastern Arizona. Two plot sizes, 1/10th and 1/25th hectare, and three basal area factors, 2, 4 and 6 (square meters per hectare), were used. Tree tally time was recorded, but the travelling time between plots was not. Total number of trees, total basal area, and total volume of trees per hectare were measured, summarized, and analyzed. Significant differences were noted between plot sampling and point sampling. Results suggested that with plot sampling, the 1/25th hectare plot was more efficient than the 1/10th hectare plot for all measures of forest density. With point sampling, basal area factor 6 had the highest relative sampling efficiency in terms of trees per hectare, while basal area factor 2 had the highest relative sampling efficiency in terms of basal area and volume per hectare. From this information, more efficient forest inventories of the Emory oak woodlands can be designed.
24

Shen, Xia. "Novel Statistical Methods in Quantitative Genetics : Modeling Genetic Variance for Quantitative Trait Loci Mapping and Genomic Evaluation." Doctoral thesis, Uppsala universitet, Beräknings- och systembiologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-170091.

Full text
Abstract:
This thesis develops and evaluates statistical methods for different types of genetic analyses, including quantitative trait loci (QTL) analysis, genome-wide association study (GWAS), and genomic evaluation. The main contribution of the thesis is to provide novel insights in modeling genetic variance, especially via random effects models. In variance component QTL analysis, a full likelihood model accounting for uncertainty in the identity-by-descent (IBD) matrix was developed. It was found to be able to correctly adjust the bias in genetic variance component estimation and gain power in QTL mapping in terms of precision.  Double hierarchical generalized linear models, and a non-iterative simplified version, were implemented and applied to fit data of an entire genome. These whole genome models were shown to have good performance in both QTL mapping and genomic prediction. A re-analysis of a publicly available GWAS data set identified significant loci in Arabidopsis that control phenotypic variance instead of mean, which validated the idea of variance-controlling genes.  The works in the thesis are accompanied by R packages available online, including a general statistical tool for fitting random effects models (hglm), an efficient generalized ridge regression for high-dimensional data (bigRR), a double-layer mixed model for genomic data analysis (iQTL), a stochastic IBD matrix calculator (MCIBD), a computational interface for QTL mapping (qtl.outbred), and a GWAS analysis tool for mapping variance-controlling loci (vGWAS).
25

Krishnamurthy, Raju, Chemical Sciences & Engineering, Faculty of Engineering, UNSW. "Prediction of consumer liking from trained sensory panel information: evaluation of artificial neural networks (ANN)." Awarded by: University of New South Wales. Chemical Sciences & Engineering, 2007. http://handle.unsw.edu.au/1959.4/40746.

Full text
Abstract:
This study set out to establish artificial neural networks (ANN) as an alternative to regression methods (multiple linear, principal components and partial least squares regression) for predicting consumer liking from trained sensory panel data. The study has two parts: 1) a flavour study, evaluating ANNs for predicting consumer flavour preferences from trained sensory panel data, and 2) a fragrance study, evaluating different ANN architectures for predicting consumer fragrance liking from trained sensory panel data. A multi-layer feedforward neural network architecture with input, hidden and output layer(s) was designed, and the back-propagation algorithm was utilised for training the networks. The network learning parameters, such as learning rate and momentum rate, were optimised by grid experiments for a fixed number of learning cycles. In the flavour study, ANNs were trained using the trained sensory panel raw data as well as transformed data. The networks trained with sensory panel raw data achieved 98% correct learning, whereas testing was within the range of 28-35%. Suitable transformation methods were applied to reduce the variation in the trained sensory panel raw data; the networks trained with transformed sensory panel data achieved 80-90% correct learning and 80-95% correct testing. In the fragrance study, ANNs were trained using the trained sensory panel raw data as well as principal component data. The networks trained with sensory panel raw data achieved 100% correct learning, and testing was in the range of 70-94%. Principal component analysis was applied to reduce redundancy in the trained sensory panel data; the networks trained with principal component data achieved about 100% correct learning and 90% correct testing. It was shown that, due to its excellent noise tolerance and its ability to predict more than one type of consumer liking with a single model, the ANN approach promises to be an effective modelling tool.
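A hedged sketch of the general setup (a small feedforward network trained by back-propagation, with learning rate and momentum tuned over a grid) follows, using scikit-learn rather than the author's implementation; the random sensory data, the network size and the parameter grid are assumptions.

```python
# Sketch: grid search over learning rate and momentum for a small back-propagation network.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV

gen = np.random.default_rng(0)
panel = gen.uniform(0, 10, size=(120, 6))            # 6 trained-panel attribute scores (invented)
liking = 2.0 + 0.4 * panel[:, 0] - 0.3 * panel[:, 3] + gen.normal(0, 0.5, 120)

grid = {"learning_rate_init": [0.001, 0.01, 0.1], "momentum": [0.5, 0.7, 0.9]}
net = MLPRegressor(hidden_layer_sizes=(8,), solver="sgd", activation="logistic",
                   max_iter=5000, random_state=0)
search = GridSearchCV(net, grid, cv=5, scoring="r2").fit(panel, liking)
print(search.best_params_, f"CV R^2 = {search.best_score_:.2f}")
```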
26

Carlén, Emma. "Genetic evaluation of clinical mastitis in dairy cattle /." Uppsala : Dept. of Animal Breeding and Genetics, Swedish University of Agricultural Sciences, 2008. http://epsilon.slu.se/200863.pdf.

Full text
27

Meintjes, M. M. (Maria Magdalena). "Evaluating the properties of sensory tests using computer intensive and biplot methodologies." Thesis, Stellenbosch : Stellenbosch University, 2007. http://hdl.handle.net/10019.1/20881.

Full text
Abstract:
Assignment (MComm)--University of Stellenbosch, 2007. This study is the result of part-time work done at a product development centre. The organisation makes extensive use of trained panels in sensory trials designed to assess the quality of its products. Although standard statistical procedures are used for analysing the results arising from these trials, circumstances necessitate deviations from the prescribed protocols. Therefore the validity of conclusions drawn as a result of these testing procedures might be questionable. This assignment deals with these questions. Sensory trials are vital in the development of new products, the control of quality levels and the exploration of improvements in current products. Standard test procedures used to explore such questions exist but are in practice often implemented by investigators who have little or no statistical background. Thus test methods are implemented as black boxes and procedures are used blindly without checking all the appropriate assumptions and other statistical requirements. The specific product under consideration often warrants certain modifications to the standard methodology. These changes may have some unknown effect on the obtained results and therefore should be scrutinized to ensure that the results remain valid. The aim of this study is to investigate the distribution and other characteristics of sensory data, comparing the hypothesised, observed and bootstrap distributions. Furthermore, the standard testing methods used to analyse sensory data sets will be evaluated. After comparing these methods, alternative testing methods may be introduced and then tested using newly generated data sets. Graphical displays are also useful to get an overall impression of the data under consideration. Biplots are especially useful in the investigation of multivariate sensory data: the underlying relationships among attributes and their combined effect on the panellists' decisions can be visually investigated by constructing a biplot. Results obtained by implementing biplot methods are compared to those of sensory tests, i.e. whether a significant difference between objects corresponds to large distances between the points representing those objects in the display. In conclusion, some recommendations are made as to how the organisation under consideration should implement sensory procedures in future trials. However, these proposals are preliminary and further research is necessary before final adoption. Some issues for further investigation are suggested.
28

Tandan, Isabelle, and Erika Goteman. "Bank Customer Churn Prediction : A comparison between classification and evaluation methods." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-411918.

Full text
Abstract:
This study aims to assess which supervised statistical learning method (random forest, logistic regression or K-nearest neighbor) is best at predicting bank customer churn. Additionally, the study evaluates which cross-validation approach, k-fold cross-validation or leave-one-out cross-validation, yields the most reliable results. Predicting customer churn has increased in popularity since new technology, regulation and changed demand have led to an increase in competition for banks. Thus, with greater reason, banks acknowledge the importance of maintaining their customer base. The findings of this study are that an unrestricted random forest model estimated using k-fold cross-validation is preferable in terms of performance measurements, computational efficiency and from a theoretical point of view. Although k-fold cross-validation and leave-one-out cross-validation yield similar results, k-fold cross-validation is preferable due to computational advantages. For future research, methods that generate models with both good interpretability and high predictability would be beneficial, in order to combine knowledge of which customers end their engagement with an understanding of why. Moreover, interesting future research would be to analyse at which data set size leave-one-out cross-validation and k-fold cross-validation yield the same results.
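A minimal sketch of this comparison (random forest, logistic regression and K-nearest neighbors evaluated under k-fold and leave-one-out cross-validation) is shown below. It does not use the study's data; the synthetic churn data set and the model settings are assumptions.

```python
# Sketch: three classifiers compared under k-fold and leave-one-out cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

# Imbalanced synthetic "churn" data: ~20% of customers leave (assumed).
X, y = make_classification(n_samples=200, n_features=10, weights=[0.8], random_state=1)

models = {
    "random forest": RandomForestClassifier(n_estimators=100, random_state=1),
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    for cv_name, cv in [("10-fold", KFold(10, shuffle=True, random_state=1)),
                        ("leave-one-out", LeaveOneOut())]:
        acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy").mean()
        print(f"{name:>20s} | {cv_name:>13s} | accuracy {acc:.3f}")
```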
29

Neylan, Julian, School of History & Philosophy of Science, UNSW. "The sociology of numbers: statistics and social policy in Australia." Awarded by: University of New South Wales. School of History and Philosophy of Science, 2005. http://handle.unsw.edu.au/1959.4/31963.

Full text
Abstract:
This dissertation presents an historical-sociological study of how governments of the modern western state use the language and techniques of quantification in the domain of social policy. The case material has an Australian focus. The thesis argues that by relying on techniques of quantification, governments risk introducing a false legitimacy to their social policy decisions. The thesis takes observed historical phenomena, language and techniques of quantification for signifying the social, and seeks meaningful interpretations in light of the culturally embedded actions of individuals and collective members of Australian bureaucracies. These interpretations are framed by the arguments of a range of scholars on the sociology of mathematics and quantitative technologies. The interpretative framework is in turn grounded in the history and sociology of modernity since the Enlightenment period, with a particular focus on three aspects: the nature and purpose of the administrative bureaucracy, the role of positivism in shaping scientific inquiry and the emergence of a risk consciousness in the late twentieth century. The thesis claim is examined across three case studies, each representative of Australian government action in formulating social policy or providing human services. Key social entities examined include the national census of population, housing needs indicators, welfare program performance and social capital. The analysis of these social statistics reveals a set of recurring characteristics that are shown to reduce their certainty. The analysis provides evidence for a common set of institutional attitudes toward social numbers, essentially that quantification is an objective technical device capable of reducing unstable social entities to stable, reliable significations (numbers). While this appears to strengthen the apparatus of governmentality for developing and implementing state policy, ignoring the many unarticulated and arbitrary judgments that are embedded in social numbers introduces a false legitimacy to these government actions.
30

Magnusson, Mattias. "Evaluation of remote sensing techniques for estimation of forest variables at stand level /." Umeå : Dept. of Forest Resource Management and Geomatics, Swedish University of Agricultural Sciences, 2006. http://epsilon.slu.se/200685.pdf.

Full text
31

Charland, Katia. "Evaluation of fully Bayesian disease mapping models in correctly identifying high-risk areas with an application to multiple sclerosis." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103370.

Full text
Abstract:
Disease maps are geographical maps that display local estimates of disease risk. When the disease is rare, crude risk estimates can be highly variable, leading to extreme estimates in areas with low population density. Bayesian hierarchical models are commonly used to stabilize the disease map, making them more easily interpretable. By exploiting assumptions about the correlation structure in space and time, the statistical model stabilizes the map by shrinking unstable, extreme risk estimates to the risks in surrounding areas (local spatial smoothing) or to the risks at contiguous time points (temporal smoothing). Extreme estimates that are based on smaller populations are subject to a greater degree of shrinkage, particularly when the risks in adjacent areas or at contiguous time points do not support the extreme value and are more stable themselves. A common goal in disease mapping studies is to identify areas of elevated risk. The objective of this thesis is to compare the accuracy of several fully Bayesian hierarchical models in discriminating between high-risk and background-risk areas. These models differ according to the various spatial, temporal and space-time interaction terms that are included in the model, which can greatly affect the smoothing of the risk estimates. This was accomplished with simulations based on the cervical cancer rate and at-risk person-years of the state of Kentucky's 120 counties from 1995 to 2002. High-risk areas were 'planted' in the generated maps that otherwise had background relative risks of one. The various disease mapping models were applied and their accuracy in correctly identifying high- and background-risk areas was compared by means of Receiver Operating Characteristic curve methodology. Using data on Multiple Sclerosis (MS) on the island of Sardinia, Italy, we apply the more successful models to identify areas of elevated MS risk.
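As a simplified illustration of the evaluation logic (score each area, then use ROC analysis to see how well 'planted' high-risk areas are recovered), here is a sketch that substitutes a crude global shrinkage estimator for the fully Bayesian spatial models of the thesis; the Poisson simulation settings are assumptions.

```python
# Sketch: ROC comparison of raw vs. shrunken relative-risk estimates at recovering planted high-risk areas.
import numpy as np
from sklearn.metrics import roc_auc_score

gen = np.random.default_rng(11)
n_areas = 120
expected = gen.uniform(0.5, 20.0, size=n_areas)      # expected cases per area (assumed)
truth = gen.random(n_areas) < 0.1                    # ~10% planted high-risk areas (assumed)
rr_true = np.where(truth, 2.0, 1.0)
cases = gen.poisson(expected * rr_true)

raw_rr = cases / expected
# crude shrinkage toward the overall rate, weighted by the expected count in each area
alpha = expected / (expected + expected.mean())
shrunk_rr = alpha * raw_rr + (1 - alpha) * (cases.sum() / expected.sum())

for name, score in [("raw SMR", raw_rr), ("shrunken SMR", shrunk_rr)]:
    print(f"{name}: AUC = {roc_auc_score(truth, score):.3f}")
```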
32

Abecassis, Judith. "Statistical methods for deciphering intra-tumor hereterogeneity : challenges and opportunities for cancer clinical management." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLM065.

Full text
Abstract:
Accessing the repertoire of cancer somatic alterations has been instrumental in our current understanding of carcinogenesis. However, efforts in genomic characterization of cancers are not sufficient to predict a patient's outcome or response to therapy, which is key to inform their clinical management. This failure is partly attributed to the evolutionary aspect of cancers. Indeed, as any biological population able to acquire heritable transformations, tumor cells are shaped by natural selection and genetic drift, resulting in a mosaic structure where several subclones with distinct genomes and properties coexist. This has important implications for cancer treatment, as those subpopulations can be sensitive or resistant to different therapies, and new resistant phenotypes can keep emerging as the disease progresses further. A large number of mathematical and statistical methods have been developed to detect and quantify intra-tumor heterogeneity (ITH), but no systematic evaluation of their performance and potential for clinical application has been performed. Our first contribution is a survey of existing approaches to deciphering ITH, which makes it easy to navigate the different underlying ideas. We have also proposed a framework to assess the robustness of those approaches and their potential for use in patient stratification. This survey has allowed us to identify an unexploited type of data in the process of ITH reconstruction, and our second contribution remedies this shortfall. Indeed, besides the observed prevalences of somatic mutations within a tumor sample, which allow us to distinguish several clones, the nucleotide context of those mutations reveals the unknown causative mutational processes. We illustrate on both simulated and real data the opportunity to jointly model these two aspects of tumor evolution. In conclusion, we highlight the need to reinforce data integration from several sources or samples to harness the potential of tumor evolution for cancer clinical management.
33

Chen, Jie. "Improvements of statistical downscaling methods and evaluation of their contributions to the uncertainty of hydrologic impacts in a changing climate." Mémoire, École de technologie supérieure, 2011. http://espace.etsmtl.ca/906/1/CHEN_Jie.pdf.

Full text
Abstract:
The most significant impacts of climate change will likely concern water resources. Hydropower producers now recognize that they must take these changes into account. To adequately assess future impacts, there is a need for realistic climate projections that frame the uncertainty associated with climate change. Given the biases in climate model simulations, particularly for precipitation, model outputs must be processed before they can be used for hydrological impact studies or water resources management. The two approaches commonly used for this purpose (dynamical and statistical downscaling) each have advantages and drawbacks, and it is difficult to choose between them. This work aims to couple the outputs of global and regional climate models with a statistical downscaling approach based on a stochastic weather generator, and to quantify the hydrological impacts of climate change on two Quebec watersheds (Manicouagan 5 and Ceizur). The performance of a stochastic weather generator was first improved, and a hybrid approach combining the weather generator with the 'delta change' downscaling method was developed. The uncertainty associated with the choice of a downscaling method was quantified for hydrological studies. A spectral correction approach and an integrated scheme for temperature generation resulted in a weather generator able to reproduce the interannual variability of precipitation and temperature, as well as the autocorrelation and cross-correlations of maximum and minimum temperatures. The improved weather generator was then used as a downscaling tool on the two watersheds and compared with the delta change method for quantifying climate change impacts. Both methods suggest increases in annual and seasonal streamflows for the 2025-2084 period, although the stochastic generator approach suggests larger streamflow increases in spring and smaller ones in summer. A large number of atmospheric predictors was used to assess the skill of the commonly used linear regression approaches. Downscaling precipitation occurrence resulted in a low success rate, and the percentage of variance explained when downscaling precipitation amounts was very low for all stations tested and climate models used. Although generally poor, the performance of these approaches improves as the resolution of the climate models increases (300 km, 45 km, 15 km). An additional weather-type classification step did not significantly improve performance. All downscaling methods used or developed in this study project an increase in winter streamflows and a decrease in summer streamflows for the 2071-2099 period. The increase in winter streamflow is particularly large for the linear-regression-based methods, which also predict the largest temperature increases for autumn and summer. The spring flood peak occurs earlier for all methods tested, but the results vary with the method.
Climate models are a major contributor to the overall uncertainty of climate change impacts, whatever criterion is used. However, other sources of uncertainty, such as the downscaling approach and natural variability (as represented by ensemble simulations of climate models), can contribute to the overall uncertainty to a similar or even greater extent depending on the criterion chosen. For example, the choice of a downscaling method is the main source of uncertainty for the spring flood peak and for low flows, whereas natural variability dominates the uncertainty for the timing of the spring flood and of its end. The uncertainty related to the emission scenario and to the hydrological model is also important, but less so than the sources mentioned above. The uncertainty related to the choice of hydrological model parameters was the least important for all criteria considered. Overall, combining dynamical and statistical downscaling methods offers advantages for quantifying the hydrological impacts of climate change, particularly when characterizing future uncertainty.
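The 'delta change' step mentioned in this abstract is simple enough to sketch: monthly change factors are computed between a climate model's future and baseline runs and then applied to an observed series (multiplicative for precipitation, additive for temperature). The Python sketch below illustrates the idea on synthetic data; all names and values are illustrative assumptions, not code or results from the thesis.

```python
import numpy as np

# Minimal sketch of the 'delta change' downscaling step described above,
# using synthetic monthly data; variable names and values are illustrative only.

def delta_factors(gcm_baseline, gcm_future, months, multiplicative=True):
    """Monthly change factors between a GCM baseline and future period."""
    factors = {}
    for m in range(1, 13):
        base = gcm_baseline[months == m].mean()
        fut = gcm_future[months == m].mean()
        factors[m] = fut / base if multiplicative else fut - base
    return factors

def apply_deltas(observed, obs_months, factors, multiplicative=True):
    """Perturb an observed series with the monthly change factors."""
    out = observed.copy().astype(float)
    for m, f in factors.items():
        idx = obs_months == m
        out[idx] = out[idx] * f if multiplicative else out[idx] + f
    return out

rng = np.random.default_rng(0)
months = np.tile(np.arange(1, 13), 30)            # 30 years of monthly values
obs_precip = rng.gamma(2.0, 40.0, months.size)    # observed precipitation (mm)
gcm_base = rng.gamma(2.0, 35.0, months.size)      # GCM baseline run
gcm_fut = rng.gamma(2.0, 42.0, months.size)       # GCM future run

f = delta_factors(gcm_base, gcm_fut, months, multiplicative=True)
future_precip = apply_deltas(obs_precip, months, f, multiplicative=True)
print(f"mean change factor: {np.mean(list(f.values())):.2f}")
```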
APA, Harvard, Vancouver, ISO, and other styles
34

Hochheimer, Camille J. "Methods for evaluating dropout attrition in survey data." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5735.

Full text
Abstract:
As researchers increasingly use web-based surveys, the ease of dropping out in the online setting is a growing issue in ensuring data quality. One theory is that dropout or attrition occurs in phases that can be generalized to phases of high dropout and phases of stable use. In order to detect these phases, several methods are explored. First, existing methods and user-specified thresholds are applied to survey data, where a significant change in the dropout rate between two questions is interpreted as the start or end of a high dropout phase. Next, survey dropout is considered as a time-to-event outcome and tests within change-point hazard models are introduced. The performance of these change-point hazard models is compared. Finally, all methods are applied to survey data on patient cancer screening preferences, testing the null hypothesis of no phases of attrition (no change-points) against the alternative hypothesis that distinct attrition phases exist (at least one change-point).
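As a rough illustration of the change-point hazard idea described above, the sketch below treats dropout as a time-to-event outcome with a piecewise-constant hazard, profiles the likelihood over candidate change-points, and compares against a single-hazard null. The simulated data and the simple two-piece model are assumptions for illustration; in real change-point testing the likelihood-ratio statistic has a nonstandard reference distribution.

```python
import numpy as np

# Minimal sketch of detecting one change-point in a piecewise-constant dropout
# hazard, in the spirit of the change-point hazard models described above.
# The data below are simulated; nothing here is taken from the dissertation.

def segment_loglik(time, event, tau):
    """Profile log-likelihood of a two-piece exponential hazard with break at tau."""
    ll = 0.0
    for lo, hi in [(0.0, tau), (tau, np.inf)]:
        exposure = np.clip(np.minimum(time, hi) - lo, 0.0, None).sum()
        deaths = np.sum(event & (time > lo) & (time <= hi))
        if deaths > 0 and exposure > 0:
            lam = deaths / exposure          # MLE of the hazard in this segment
            ll += deaths * np.log(lam) - lam * exposure
    return ll

rng = np.random.default_rng(1)
n = 500
# simulate dropout: high hazard over the first 5 questions, lower afterwards
early = rng.exponential(5.0, n)
late = 5.0 + rng.exponential(25.0, n)
time = np.where(early < 5.0, early, late)
event = time < 30.0                  # respondents still active at question 30 are censored
time = np.minimum(time, 30.0)

taus = np.arange(1.0, 29.0)
profile = np.array([segment_loglik(time, event, t) for t in taus])
null_ll = segment_loglik(time, event, 30.0)   # effectively a single-hazard model
lrt = 2 * (profile.max() - null_ll)
print(f"estimated change-point: question {taus[profile.argmax()]:.0f}, LRT statistic {lrt:.1f}")
```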
APA, Harvard, Vancouver, ISO, and other styles
35

Olsen, Andrew Nolan. "Hierarchical Bayesian Methods for Evaluation of Traffic Project Efficacy." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2922.

Full text
Abstract:
A main objective of Departments of Transportation is to improve the safety of the roadways over which they have jurisdiction. Safety projects, such as cable barriers and raised medians, are utilized to reduce both crash frequency and crash severity. The efficacy of these projects must be evaluated in order to use resources in the best way possible. Five models are proposed for the evaluation of traffic projects: (1) a Bayesian Poisson regression model; (2) a hierarchical Poisson regression model building on model (1) by adding hyperpriors; (3) a similar model correcting for overdispersion; (4) a dynamic linear model; and (5) a traditional before-after study model. Evaluation of these models is discussed using various metrics including DIC. Using the models selected for analysis, it was determined that cable barriers are quite effective at reducing severe crashes and cross-median crashes on Utah highways. Raised medians are also largely effective at reducing severe crashes. The results of before and after analyses are highly valuable to Departments of Transportation in identifying effective projects and in determining which roadway segments will benefit most from their implementation.
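A minimal sketch of the first model type listed above, a Bayesian Poisson regression for before/after crash counts, fit here with a simple random-walk Metropolis sampler on simulated data. The hierarchical extensions, overdispersion correction, and DIC comparison in the thesis are not reproduced; all data and priors are invented for illustration.

```python
import numpy as np

# Bayesian Poisson regression for before/after crash counts, fit with a
# random-walk Metropolis sampler. Data and priors are invented for illustration.

rng = np.random.default_rng(2)

# simulated crash counts: 40 segments, before and after a safety project
n_seg = 40
exposure = rng.uniform(0.5, 3.0, 2 * n_seg)      # e.g. million vehicle-miles
after = np.repeat([0, 1], n_seg)
true_beta = np.array([0.7, -0.4])                 # project reduces crashes by about a third
mu = exposure * np.exp(true_beta[0] + true_beta[1] * after)
y = rng.poisson(mu)

X = np.column_stack([np.ones(2 * n_seg), after])

def log_post(beta):
    eta = X @ beta + np.log(exposure)
    loglik = np.sum(y * eta - np.exp(eta))            # Poisson log-likelihood (up to a constant)
    logprior = -0.5 * np.sum(beta ** 2) / 10.0 ** 2   # N(0, 10^2) priors
    return loglik + logprior

beta = np.zeros(2)
draws = []
lp = log_post(beta)
for it in range(20000):
    prop = beta + rng.normal(0.0, 0.05, 2)            # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    if it >= 5000:                                    # discard burn-in
        draws.append(beta.copy())

draws = np.array(draws)
effect = np.exp(draws[:, 1])                          # multiplicative effect of the project
print(f"posterior mean crash-rate ratio: {effect.mean():.2f} "
      f"(95% interval {np.quantile(effect, 0.025):.2f}-{np.quantile(effect, 0.975):.2f})")
```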
APA, Harvard, Vancouver, ISO, and other styles
36

Schulz-Streeck, Torben [Verfasser], and Hans-Peter [Akademischer Betreuer] Piepho. "Evaluation of alternative statistical methods for genomic selection for quantitative traits in hybrid maize / Torben Schulz-Streeck. Betreuer: Hans-Peter Piepho." Hohenheim : Kommunikations-, Informations- und Medienzentrum der Universität Hohenheim, 2013. http://d-nb.info/1037391497/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Kreif, Noemi. "Statistical methods to address selection bias in economic evaluations that use patient-level observational data." Thesis, London School of Hygiene and Tropical Medicine (University of London), 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.590633.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Petzold, Max. "Evaluation of information in longitudinal data." Göteborg : Statistical Research Unit, Göteborg University, 2003. http://catalog.hathitrust.org/api/volumes/oclc/52551306.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Mwandigha, Lazaro Mwakesi. "Evaluating and extending statistical methods for estimating the construct-level predictive validity of selection tests." Thesis, University of York, 2017. http://etheses.whiterose.ac.uk/21267/.

Full text
Abstract:
Background: In this thesis the problem of range restriction was addressed using the United Kingdom Clinical Aptitude Test (UKCAT) and Professional and Linguistic Assessments Board (PLAB) test in the selection of undergraduate medical school entrants and International Medical Graduates (IMGs) in the UK as motivating examples. Methods for correcting for bias in the estimate of predictive validity due to range restriction (particularly Multiple Imputation (MI) and Full Information Maximum Likelihood (FIML)) were evaluated for the predictive validity, single hurdle concurrent and multiple hurdle validity designs under varying degrees of strictness in selection. For MI, the impact of the composition of the imputation model was also investigated. Methods: The performance of MI and FIML was tested through Monte Carlo simulations and validated using PLAB data. Results: Generally, MI and FIML were found to be equivalent in performance and superior to other methods of correcting for range restriction bias for selection ratios of ≤ 20% only in instances where data were multivariate normal. The inclusion of highly predictive variables in the imputation model increased the precision of MI. Conclusion: MI and FIML are viable alternatives for tackling bias in the estimate of predictive validity for direct range restricted data that satisfies the assumption of multivariate normality. Caution should be taken to avoid their application in instances where the assumption of multivariate normality is violated.
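As a stylized illustration of the range-restriction problem and the multiple-imputation style of correction evaluated above, the sketch below simulates direct selection on a test score, shows the attenuated validity coefficient among selected applicants, and recovers it by imputing the unobserved outcomes. It is a bare-bones regression imputation under multivariate normality, not the MI or FIML machinery of the thesis.

```python
import numpy as np

# Stylized sketch: outcomes are observed only for selected applicants, so the
# restricted correlation understates predictive validity; imputing the missing
# outcomes from the test score recovers it. A proper MI would also draw the
# regression parameters from their posterior rather than fixing them.

rng = np.random.default_rng(3)
n, rho = 5000, 0.5
x = rng.normal(size=n)                                       # selection test score
y = rho * x + np.sqrt(1 - rho ** 2) * rng.normal(size=n)     # later performance

selected = x > np.quantile(x, 0.80)                          # 20% selection ratio
naive_r = np.corrcoef(x[selected], y[selected])[0, 1]

xs, ys = x[selected], y[selected]
b1 = np.cov(xs, ys, ddof=0)[0, 1] / np.var(xs)
b0 = ys.mean() - b1 * xs.mean()
resid_sd = np.std(ys - (b0 + b1 * xs))

pooled = []
for m in range(20):                                          # 20 imputations
    y_imp = y.copy()
    miss = ~selected
    y_imp[miss] = b0 + b1 * x[miss] + rng.normal(0.0, resid_sd, miss.sum())
    pooled.append(np.corrcoef(x, y_imp)[0, 1])

print(f"true r = {rho:.2f}, restricted r = {naive_r:.2f}, MI-style estimate = {np.mean(pooled):.2f}")
```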
APA, Harvard, Vancouver, ISO, and other styles
40

Rundin, Patrick. "Evaluation of a statistical method to use prior information in the estimation of combustion parameters." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6255.

Full text
Abstract:
Ion current sensing, where information about the combustion process in an SI engine is gained by applying a voltage over the spark gap, is currently used to detect and avoid knock and misfire. Several researchers have pointed out that information on peak pressure location and air/fuel ratio can be gained from the ion current and have suggested several ways to estimate these parameters. Here a simplified Bayesian approach was taken to construct a lowpass-like filter, or estimator, that makes use of prior information to improve estimates in crucial areas. The algorithm is computationally light and could, if successful, improve estimates enough for production use. The filter was implemented in several variants and evaluated in a number of simulated cases. It was found that the proposed filter requires a number of trade-offs between variance, bias, tracking speed and accuracy that are difficult to balance. For satisfactory estimates and trade-off balance, the prior information must be more accurate than was available. It was also found that a similar task, constructing a general Bayesian estimator, has already been tackled in the area of particle filtering and that there are promising and unexplored possibilities there. However, particle filters require computational power that will not be available to production engines for some years.
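Because the abstract points to particle filtering as the fully general Bayesian estimator, a generic bootstrap particle filter sketch is included below. The random-walk state model, the noise levels, and the 'peak pressure location' interpretation are all assumptions for illustration, not the estimator developed in the thesis.

```python
import numpy as np

# Generic bootstrap particle filter sketch: a slowly drifting combustion
# parameter observed with cycle-to-cycle noise. All values are assumed.

rng = np.random.default_rng(5)

n_cycles, n_particles = 300, 500
true_state = np.cumsum(rng.normal(0.0, 0.1, n_cycles)) + 16.0   # e.g. peak pressure location (deg ATDC)
obs = true_state + rng.normal(0.0, 2.0, n_cycles)               # noisy cycle-by-cycle measurement

particles = rng.normal(16.0, 5.0, n_particles)
estimates = np.empty(n_cycles)
for k in range(n_cycles):
    particles += rng.normal(0.0, 0.1, n_particles)               # propagate: random-walk state model
    w = np.exp(-0.5 * ((obs[k] - particles) / 2.0) ** 2)         # weight by observation likelihood
    w /= w.sum()
    estimates[k] = np.sum(w * particles)                         # posterior-mean estimate
    idx = rng.choice(n_particles, n_particles, p=w)              # multinomial resampling
    particles = particles[idx]

rmse_raw = np.sqrt(np.mean((obs - true_state) ** 2))
rmse_pf = np.sqrt(np.mean((estimates - true_state) ** 2))
print(f"raw observation RMSE {rmse_raw:.2f}, particle filter RMSE {rmse_pf:.2f}")
```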
APA, Harvard, Vancouver, ISO, and other styles
41

Majeke, Lunga. "Preliminary investigation into estimating eye disease incidence rate from age specific prevalence data." Thesis, University of Fort Hare, 2011. http://hdl.handle.net/10353/464.

Full text
Abstract:
This study presents a methodology for estimating the incidence rate from age-specific prevalence data for three different eye diseases. Both situations are considered: where mortality may differ between people with and without the disease, and where it does not. The method used was developed by Marvin J. Podgor for estimating incidence rates from prevalence data. It applies logistic regression to obtain smoothed prevalence rates, from which the incidence rate is then derived. The study concluded that logistic regression can produce a meaningful model, and that the incidence rates of these diseases were not affected by the assumption of differential mortality.
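A minimal sketch of the prevalence-to-incidence idea: age-specific prevalence is smoothed with a logistic model, and adjacent smoothed prevalences are converted to an incidence rate under the no-differential-mortality assumption. The ages, counts, and the simple conversion formula below are illustrative assumptions, not data or code from the study.

```python
import numpy as np
from scipy.optimize import minimize

# Smooth age-specific prevalence with a logistic model, then convert adjacent
# smoothed prevalences into an incidence rate assuming no differential mortality.
# Ages, counts, and starting values are invented for illustration.

ages = np.array([45, 50, 55, 60, 65, 70, 75, 80])
n = np.array([400, 420, 380, 350, 330, 300, 260, 200])     # examined
cases = np.array([8, 13, 17, 25, 33, 41, 47, 52])          # with the eye disease

def neg_loglik(beta):
    p = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * ages)))
    p = np.clip(p, 1e-10, 1 - 1e-10)
    return -np.sum(cases * np.log(p) + (n - cases) * np.log(1 - p))

beta_hat = minimize(neg_loglik, x0=np.array([-5.0, 0.05])).x

def prevalence(age):
    return 1.0 / (1.0 + np.exp(-(beta_hat[0] + beta_hat[1] * age)))

# incidence between ages a1 and a2 among those disease-free at a1
for a1, a2 in [(50, 55), (60, 65), (70, 75)]:
    p1, p2 = prevalence(a1), prevalence(a2)
    incidence = (p2 - p1) / ((1.0 - p1) * (a2 - a1))
    print(f"ages {a1}-{a2}: estimated incidence {1000 * incidence:.1f} per 1000 person-years")
```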
APA, Harvard, Vancouver, ISO, and other styles
42

Wong, Oi-ling Irene, and 黃愛玲. "Understanding and evaluating population preventive strategies for breast cancer using statistical and decision analytic models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B4284163X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Markovic, Alexandra, and Arvid Edforss. "An evaluation of current calculations for safety stock levels." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Maskinteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-36505.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Parker, Denham. "An evaluation of sampling and statistical methods for long-term monitoring of subtidal reef fishes : a case study of Tsitsikamma National Park marine protected area." Thesis, Rhodes University, 2016. http://hdl.handle.net/10962/d1019991.

Full text
Abstract:
Tsitsikamma National Park (TNP) possesses the oldest (established 1954), and one of the largest (350 km2) ‘no-take’ marine protected areas (MPA) in South Africa. A long-term monitoring (LTM) programme to observe the subtidal reef fishes in the TNP MPA was established in 2007. To date, 243 angling replicates have been completed, and a total of 2,751 fish belonging to 41 different species have been caught and released. In an era of unprecedented global biodiversity loss, data that can be used to monitor ecosystems and gauge changes in biodiversity through time are essential. This thesis aims to improve the methodological and statistical processes currently available for LTM of subtidal reef fish by providing an evaluation of the TNP MPA LTM programme. Angling data revealed definitive spatial structuring, in the form of spatial autocorrelation, and a shift in viewing spatial dependency as a statistical obstacle to a source of ecological information created a new avenue of data inference. Species-specific distribution maps identified localized habitat as the main predictor variable for species abundance, emphasizing the need for accurate a priori bathymetric information for subtidal monitoring. ‘Random forest’ analyses confirmed spatial variables are more important than temporal variables in predicting species abundance. The effectiveness of Generalized Linear Mixed Models (GAMMs) to account for spatial autocorrelation was highlighted, and evidence that disregarding spatial dependencies in temporal analyses can produce erroneous results was illustrated in the case of dageraad (Chrysoblephus cristiceps). Correlograms indicated that the current sampling strategy produced spatially redundant data and the sampling unit size (150 m2) could be doubled to optimize sampling. Temporal analyses demonstrated that after 50 years of ‘no take’ protection the TNP MPA ichthyofauna exhibits a high level of stability. Species-specific size structure was also found to be highly stable. Dageraad was the only species to exhibit a definitive temporal trend in their size structure, which was attributed to recruitment variation and the possibility that large individuals may migrate out of the study area. The inadequacy of angling as a method for monitoring a broad spectrum of the fish species was highlighted, particularly due to its selectivity towards large predators. As a result, a new sampling technique known as Stereo Baited Remote Underwater Videos (stereo-BRUVs) was introduced to the LTM programme in 2013. Stereo-BRUVs enabled sampling of 2640 fish belonging to 52 different species, from 57 samples collected in less than two years. A comparison of the sampling methods concluded that, compared to angling, stereo-BRUVs provide a superior technique that can survey a significantly larger proportion of the ichthyofauna with minimal length-selectivity biases. In addition, stereo-BRUVs possess a higher statistical power to detect changes in population abundance. However, a potential bias in the form of ‘hyperstability’ in sites with unusually high fish densities was identified as a possible flaw when using stereo-BRUVs. In an attempt to provide a more rigorous method evaluation, simulation testing was employed to assess the ability of angling and stereo-BRUVs to accurately describe a decreasing population. The advantage of this approach is that the simulated population abundances are known, so that each sampling method can be tested in terms of how well it tracks known abundance trends. 
The study established that stereo-BRUVs provided more accurate data when describing a distinct population decline of roman (Chrysoblephus laticeps) over 10- and 20-year periods. In addition, spawner-biomass was found to be a more accurate population estimate than relative abundance estimates (CPUE and MaxN) due to the inclusion of population size structure information, highlighting the importance of length-frequency data. The study illustrated that an evaluation framework that utilizes simulation testing has the potential to optimize LTM sampling procedures by addressing a number of methodological questions. This includes developing a procedure that aligns data collected from different sampling methods by applying correction factors, thus ensuring LTM programmes are able to adapt sampling strategies without losing data continuity.
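A small sketch of the correlogram idea used above to judge spatial autocorrelation and sampling redundancy: Moran's I is computed over distance classes for simulated site abundances. The coordinates, abundances, and lag classes are invented for illustration and are unrelated to the TNP data.

```python
import numpy as np

# Moran's I correlogram over distance classes for simulated reef-fish abundances;
# positive values at short lags indicate spatial autocorrelation (and potential
# redundancy between nearby sampling sites). All values are assumed.

def morans_i(values, weights):
    z = values - values.mean()
    num = (weights * np.outer(z, z)).sum()
    return len(values) / weights.sum() * num / (z ** 2).sum()

rng = np.random.default_rng(4)
n = 120
coords = rng.uniform(0, 10, (n, 2))                      # site positions (km)
# abundance with a smooth spatial trend plus noise -> positive autocorrelation
abundance = 5 + 2 * np.sin(coords[:, 0]) + rng.normal(0, 0.5, n)

dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

for lo, hi in [(0, 1), (1, 2), (2, 4), (4, 8)]:
    w = ((dist > lo) & (dist <= hi)).astype(float)
    np.fill_diagonal(w, 0.0)
    print(f"lag {lo}-{hi} km: Moran's I = {morans_i(abundance, w):.2f}")
```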
APA, Harvard, Vancouver, ISO, and other styles
45

Kleingeld, Wynand. "La geostatistique pour des variables discretes." Paris, ENMP, 1987. http://www.theses.fr/1987ENMP0064.

Full text
Abstract:
This work concerns the development of geostatistical estimation techniques for deposits in which the ore occurs as discrete particles. Given the usual size of the samples, the distribution of the particles can be extremely skewed, and the particular distribution laws developed for this type of deposit are examined in this thesis. The research is directed towards the estimation of local and global reserves, statistical techniques for parameter estimation, confidence interval calculations, isofactorial estimation of bivariate distributions, the influence of the support effect, and the application of various kriging techniques.
APA, Harvard, Vancouver, ISO, and other styles
46

Sabat, Philippe Jacques. "Evaluation of fiber-matrix interfacial shear strength in fiber reinforced plastics." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/77733.

Full text
Abstract:
The role of the interphase in fiberglass reinforced composites was studied by a combination of theoretical analysis, mechanical tests, and several high-resolution analytical techniques. The interphase was varied in composition by using epoxy and polyester matrix polymers with and without added coupling agents, as well as four fiber surface modifications. Different coupling agents on the fibers were shown to change the fiber tensile strength markedly. Filament wound unidirectional composites were tested in short beam "shear." Corresponding samples were fabricated by embedding one to seven fibers in the center of polymer dogbone specimens that were tested in tension to determine critical fiber lengths. Those values were used in a new theoretical treatment (that combines stress gradient shear-lag theory with Weibull statistics) to evaluate "interfacial shear strengths". The fact that results did not correlate with the short beam data was examined in detail via a combination of polarized light microscopy, electron microscopy (SEM) and spectroscopy (XPS or ESCA) and mass spectroscopy (SIMS). When the single fiber specimens were unloaded, a residual birefringent zone was measured and correlated with composite properties, as well as with SIMS and SEM analysis that identified changes in the locus of interphase failure. Variations in the interphase had dramatic effects upon composite properties, but it appears that there may be an optimum level of fiber-matrix adhesion depending upon the properties of both fiber and matrix. Fiber-fiber interactions were elucidated by combining tensile tests on multiple fiber dogbone specimens with high-resolution analytical techniques. In general, this work exemplifies a multidisciplinary approach that promises to help understand and characterize the structure and properties of the fiber-matrix interphase, and to optimize the properties of composite materials.
Master of Science
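For orientation only, the sketch below estimates an interfacial shear strength from fragment lengths with the classical Kelly-Tyson relation; the thesis itself uses a stress-gradient shear-lag treatment combined with Weibull statistics, which is not reproduced here. All numbers are assumed values.

```python
import numpy as np

# Back-of-the-envelope estimate of interfacial shear strength from single-fibre
# fragmentation data using the classical Kelly-Tyson relation
# tau = sigma_f * d / (2 * l_c). Numbers are illustrative assumptions only.

fragment_lengths_mm = np.array([0.41, 0.38, 0.45, 0.52, 0.36, 0.48, 0.43, 0.40])
fibre_diameter_um = 14.0
fibre_strength_MPa = 2200.0      # fibre strength at the critical length (assumed)

# the critical length is often taken as 4/3 of the mean fragment length at saturation
l_c_mm = (4.0 / 3.0) * fragment_lengths_mm.mean()
d_mm = fibre_diameter_um / 1000.0

tau_MPa = fibre_strength_MPa * d_mm / (2.0 * l_c_mm)
print(f"critical length ~ {l_c_mm:.2f} mm, interfacial shear strength ~ {tau_MPa:.1f} MPa")
```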
APA, Harvard, Vancouver, ISO, and other styles
47

Backman, Emil, and David Petersson. "Evaluation of methods for quantifying returns within the premium pension." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288499.

Full text
Abstract:
Pensionsmyndigheten's (the Swedish Pensions Agency) current calculation of the internal rate of return for 7.7 million premium pension savers is both time and resource consuming. This rate of return mirrors the overall performance of the funded part of the pension system and is analyzed internally, but also reported to the public monthly and yearly based on differently sized data samples. This thesis investigates the possibility of utilizing other approaches in order to improve the performance of these calculations. Further, the study aims to verify the results stemming from said calculations and investigate their robustness. In order to investigate competitive matrix methods, a sample of approaches is compared to the more classical numerical methods. The approaches are compared in different scenarios designed to mirror real practice. The robustness of the results is then analyzed with a stochastic modeling approach, where a small error term is introduced to mimic possible errors which could arise in data management. It is concluded that a combination of Halley's method and the Jacobi-Davidson algorithm is the most robust and best performing method. The proposed method combines the speed of numerical methods with the robustness of matrix methods. The results show a performance improvement of 550% in time, while maintaining the accuracy of the current server computations. The analysis of error propagation suggests the output error to be less than 0.12 percentage points in 99 percent of cases, considering an introduced error term of large proportions. In this extreme case, the expected number of individuals with an error exceeding 1 percentage point is estimated to be 212 out of the whole population.
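The numerical half of the combination described above can be sketched directly: Halley's method applied to the net-present-value equation that defines an internal rate of return (the Jacobi-Davidson matrix step is not reproduced). The cash flows and starting guess below are illustrative assumptions.

```python
import numpy as np

# Halley's method for the internal rate of return: solve NPV(r) = 0 where
# NPV(r) = sum_t c_t (1+r)^(-t), using first and second derivatives of NPV.
# Cash flows are invented for illustration.

def npv_and_derivs(rate, times, cashflows):
    d = (1.0 + rate) ** (-times)
    f = np.sum(cashflows * d)
    f1 = np.sum(-times * cashflows * d / (1.0 + rate))
    f2 = np.sum(times * (times + 1) * cashflows * d / (1.0 + rate) ** 2)
    return f, f1, f2

def irr_halley(times, cashflows, r0=0.05, tol=1e-12, max_iter=50):
    r = r0
    for _ in range(max_iter):
        f, f1, f2 = npv_and_derivs(r, times, cashflows)
        step = 2 * f * f1 / (2 * f1 ** 2 - f * f2)     # Halley update
        r -= step
        if abs(step) < tol:
            break
    return r

# yearly contributions of 10 000 for 15 years, account worth 220 000 today
times = np.arange(16, dtype=float)
cashflows = np.full(16, -10_000.0)
cashflows[-1] = 220_000.0

r = irr_halley(times, cashflows)
print(f"internal rate of return: {100 * r:.2f}% per year")
```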
APA, Harvard, Vancouver, ISO, and other styles
48

Kim, Hyunsun-Sunny. "Evaluation of the distributions of cost-effectiveness ratios and comparison of methods to constructing confidence intervals for such ratios /." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488195633519149.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Zhao, Meng John. "Analysis and Evaluation of Social Network Anomaly Detection." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/79849.

Full text
Abstract:
As social networks become more prevalent, there is significant interest in studying these network data, the focus often being on detecting anomalous events. This area of research is referred to as social network surveillance or social network change detection. While there are a variety of proposed methods suitable for different monitoring situations, two important issues have yet to be completely addressed in the network surveillance literature: first, performance assessments that use simulated data to evaluate the statistical performance of a particular method; and second, the study of aggregated data in social network surveillance. The research presented tackles these issues in two parts: evaluation of a popular anomaly detection method, and investigation of the effects of different aggregation levels on network anomaly detection.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
50

Garabedian, Laura Faden. "Quasi-Experimental Health Policy Research: Evaluation of Universal Health Insurance and Methods for Comparative Effectiveness Research." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:10764.

Full text
Abstract:
This dissertation consists of two empirical papers and one methods paper. The first two papers use quasi-experimental methods to evaluate the impact of universal health insurance reform in Massachusetts (MA) and Thailand and the third paper evaluates the validity of a quasi-experimental method used in comparative effectiveness research (CER).
APA, Harvard, Vancouver, ISO, and other styles
