
Dissertations / Theses on the topic 'Maximum likelihood method - MMV'


Consult the top 50 dissertations / theses for your research on the topic 'Maximum likelihood method - MMV.'


1

Costa, Sidney Tadeu Santiago. "Teoria de resposta ao item aplicada no ENEM." Universidade Federal de Goiás, 2017. http://repositorio.bc.ufg.br/tede/handle/tede/6944.

Full text
Abstract:
With the score obtained in the Exame Nacional do Ensino Médio (ENEM), students can apply for places at many public institutions of higher education and for government programs such as the Universidade para Todos program (Prouni) and the Fundo de Financiamento Estudantil (Fies). ENEM scores its objective questions with a methodology called Item Response Theory (Teoria de Resposta ao Item - TRI), which differs in several respects from Classical Test Theory (TCT). The main factor determining a candidate's result in an assessment scored with TCT is the number of correct answers, whereas in TRI, beyond the number of correct answers, it is essential to analyse which answers are correct. The objective of this work is to explain what TRI is and how this methodology is applied in large-scale assessments. A historical overview of the logistic models used by TRI is given, together with the justification for each parameter that composes the main equation of the model. To determine each parameter of the TRI model and to calculate each candidate's final score, an optimisation procedure called the Maximum Likelihood Method (Método da Máxima Verossimilhança - MMV) is used. The computational tools used in the work were the R software, with packages developed for applying TRI, and the Visual Basic programming language, used to program functions, known as macros, in spreadsheets.
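The abstract does not spell out the model itself, but the logistic item response function most commonly associated with ENEM-style TRI scoring is the three-parameter logistic (3PL) model; the LaTeX sketch below states that form and the response-pattern likelihood that an MMV-type procedure maximises, under the assumption that this is the parameterisation the thesis refers to (the symbols a_i, b_i, c_i and θ are generic, not taken from the thesis).

```latex
% Three-parameter logistic (3PL) item response function: probability that a
% candidate with ability \theta answers item i correctly, with discrimination
% a_i, difficulty b_i and pseudo-guessing parameter c_i.
P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}}

% Likelihood maximised (over \theta, or over the item parameters) by a
% maximum likelihood procedure, for a response pattern u_i \in \{0,1\}:
L(\theta \mid u) = \prod_{i=1}^{n} P_i(\theta)^{u_i}\,\bigl[1 - P_i(\theta)\bigr]^{1-u_i}
```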
APA, Harvard, Vancouver, ISO, and other styles
2

Al-Nashi, Hamid Rasheed. "A maximum likelihood method to estimate EEG evoked potentials /." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=72016.

Full text
Abstract:
A new method for the estimation of the EEG evoked potential (EP) is presented in this thesis. This method is based on a new model of the EEG response which is assumed to be the sum of the EP and independent correlated Gaussian noise representing the spontaneous EEG activity. The EP is assumed to vary in both shape and latency, with the shape variation represented by correlated Gaussian noise which is modulated by the EP. The latency of the EP is also assumed to vary over the ensemble of responses in a random manner governed by some unspecified probability density. No assumption on stationarity is needed for the noise.
With the model described in state-space form, a Kalman filter is constructed, and the variance of the innovation process of the response measurements is derived. A maximum likelihood solution to the EP estimation problem is then obtained via this innovation process.
Tests using simulated responses show that the method is effective in estimating the EP signal at signal-to-noise ratios as low as -6 dB. Other tests using real normal visual response data yield reasonably consistent EP estimates whose main components are narrower and larger than the ensemble average. In addition, the likelihood function obtained by our method can be used as a discriminant between normal and abnormal responses, and it requires smaller ensembles than other methods.
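As a rough illustration of the innovation-based likelihood idea (not the author's EP-specific model, whose latency and shape variations are not reproduced here), the sketch below assumes a generic linear-Gaussian state-space model and accumulates the Gaussian log-likelihood of the Kalman-filter innovations; all names and matrices are hypothetical.

```python
import numpy as np

def innovation_log_likelihood(y, F, H, Q, R, x0, P0):
    """Gaussian log-likelihood of observations y via Kalman-filter innovations.

    Minimal sketch for a generic linear state-space model
        x_t = F x_{t-1} + w_t,  w_t ~ N(0, Q)
        y_t = H x_t + v_t,      v_t ~ N(0, R)
    It only illustrates the innovation-based likelihood such an approach maximises.
    """
    x, P = x0, P0
    loglik = 0.0
    for yt in y:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # innovation and its covariance
        e = yt - H @ x
        S = H @ P @ H.T + R
        loglik += -0.5 * (e @ np.linalg.solve(S, e)
                          + np.linalg.slogdet(2 * np.pi * S)[1])
        # measurement update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ e
        P = (np.eye(len(x)) - K @ H) @ P
    return float(loglik)
```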
APA, Harvard, Vancouver, ISO, and other styles
3

Montpellier, Pierre Robert. "The maximum likelihood method of estimating dynamic properties of structures." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq21050.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Khiabanian, Hossein. "A maximum-likelihood multi-resolution weak lensing mass reconstruction method." View abstract/electronic edition; access limited to Brown University users, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3318339.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Donmez, Ayca. "Adaptive Estimation And Hypothesis Testing Methods." Phd thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611724/index.pdf.

Full text
Abstract:
For statistical estimation of population parameters, Fisher's maximum likelihood estimators (MLEs) are commonly used. They are consistent, unbiased and efficient, at any rate for large n. In most situations, however, MLEs are elusive because of computational difficulties. To alleviate these difficulties, Tiku's modified maximum likelihood estimators (MMLEs) are used. They are explicit functions of sample observations and easy to compute. They are asymptotically equivalent to MLEs and, for small n, are equally efficient. Moreover, MLEs and MMLEs are numerically very close to one another. For calculating MLEs and MMLEs, the functional form of the underlying distribution has to be known. For machine data processing, however, such is not the case. Instead, what is reasonable to assume for machine data processing is that the underlying distribution is a member of a broad class of distributions. Huber assumed that the underlying distribution is long-tailed symmetric and developed the so-called M-estimators. It is very desirable for an estimator to be robust and have a bounded influence function. M-estimators, however, implicitly censor certain sample observations, which most practitioners do not appreciate. Tiku and Surucu suggested a modification to Tiku's MMLEs. The new MMLEs are robust and have bounded influence functions. In fact, these new estimators are overall more efficient than M-estimators for long-tailed symmetric distributions. In this thesis, we have proposed a new modification to MMLEs. The resulting estimators are robust and have bounded influence functions. We have also shown that they can be used not only for long-tailed symmetric distributions but for skew distributions as well. We have used the proposed modification in the context of experimental design and linear regression. We have shown that the resulting estimators and the hypothesis testing procedures based on them are indeed superior to earlier such estimators and tests.
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Ka Lok. "A Strategy for Earthquake Catalog Relocations Using a Maximum Likelihood Method." Thesis, Uppsala universitet, Geofysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-188826.

Full text
Abstract:
A strategy for relocating earthquakes in a catalog is presented. The strategy is based on the argument that the distribution of the earthquake events in a catalog is reasonable a priori information for earthquake relocation in that region. This argument can be implemented using the method of maximum likelihood for arrival time data inversion, where the a priori probability distribution of the event locations is defined as the sum of the probability densities of all events in the catalog. This a priori distribution is then added to the standard misfit criterion in earthquake location to form the likelihood function. The probability density of an event in the catalog is described by a Gaussian probability density. The a priori probability distribution is, therefore, defined as the normalized sum of the Gaussian probability densities of all events in the catalog, excluding the event being relocated. For a linear problem, the likelihood function can be approximated by the joint probability density of the a priori distribution and the distribution of an unconstrained location due to the misfit alone. After relocating the events according to the maximum of the likelihood function, a modified distribution of events is generated. This distribution should be more densely clustered than before in general since the events are moved towards the maximum of the posterior distribution. The a priori distribution is updated and the process is iterated. The strategy is applied to the aftershock sequence in southwest Iceland after a pair of earthquakes on 29th May 2008. The relocated events reveal the fault systems in that area. Three synthetic data sets are used to test the general behaviour of the strategy. It is observed that the synthetic data give significantly different behaviour from the real data.
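A minimal sketch of the relocation objective described in the abstract, assuming an isotropic Gaussian kernel on each catalogue event and a user-supplied travel-time misfit term; the function names and arguments are hypothetical, not taken from the thesis.

```python
import numpy as np

def log_posterior(xyz, arrival_misfit, catalog, sigma_prior):
    """Sketch of the likelihood maximised when relocating one event.

    xyz            : candidate hypocentre (3-vector)
    arrival_misfit : hypothetical callable returning the standard travel-time
                     misfit term (negative half sum of squared, weighted
                     residuals) at xyz; the real one needs picks and a
                     velocity model
    catalog        : (N, 3) array with the other events in the catalogue
                     (the event being relocated is already excluded)
    sigma_prior    : width of the Gaussian kernel placed on each event
    """
    # a priori density: normalised sum of isotropic Gaussians on the events
    d2 = np.sum((catalog - xyz) ** 2, axis=1)
    kernel = np.exp(-0.5 * d2 / sigma_prior**2)
    prior = kernel.sum() / (len(catalog) * (2 * np.pi * sigma_prior**2) ** 1.5)
    # likelihood to maximise = misfit criterion + log prior
    return arrival_misfit(xyz) + np.log(prior + 1e-300)
```

Maximising this over xyz for each event, updating the catalogue and re-forming the prior gives the iteration the abstract outlines.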
APA, Harvard, Vancouver, ISO, and other styles
7

Kraay, Andrea L. (Andrea Lorraine) 1976. "Physically constrained maximum likelihood method for snapshot deficient adaptive array processing." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87331.

Full text
Abstract:
Thesis (Elec.E. and S.M. in Electrical Engineering)--Joint Program in Applied Ocean Physics and Engineering (Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2003.
"February 2003."
Includes bibliographical references (leaves 139-141).
by Andrea L. Kraay.
Elec.E. and S.M. in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
8

Stamatakis, Alexandros. "Distributed and parallel algorithms and systems for inference of huge phylogenetic trees based on the maximum likelihood method." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=973053380.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ishakova, Gulmira. "On the use of Quasi-Maximum Likelihood Estimation and Indirect Method for Stochastic Volatility models." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-1641.

Full text
Abstract:

Stochastic volatility models have been a focus of research in recent years. One interesting and important topic has been the estimation procedure. For a given stochastic volatility model, this project aims to compare two methods of parameter estimation.

APA, Harvard, Vancouver, ISO, and other styles
10

Li, Xiangfei. "Reliability Assessment for Complex Systems Using Multi-level, Multi-type Reliability Data and Maximum Likelihood Method." Ohio University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1402483535.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Hu, Mike. "A Collapsing Method for Efficient Recovery of Optimal Edges." Thesis, University of Waterloo, 2002. http://hdl.handle.net/10012/1144.

Full text
Abstract:
In this thesis we present a novel algorithm, HyperCleaning*, for effectively inferring phylogenetic trees. The method is based on the quartet method paradigm and is guaranteed to recover the best supported edges of the underlying phylogeny based on the witness quartet set. This is performed efficiently using a collapsing mechanism that employs memory/time tradeoff to ensure no loss of information. This enables HyperCleaning* to solve the relaxed version of the Maximum-Quartet-Consistency problem feasibly, thus providing a valuable tool for inferring phylogenies using quartet based analysis.
APA, Harvard, Vancouver, ISO, and other styles
12

Güimil, Fernando. "Comparing the Maximum Likelihood Method and a Modified Moment Method to fit a Weibull distribution to aircraft engine failure time data." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1997. http://handle.dtic.mil/100.2/ADA337364.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Güimil, Fernando. "Comparing the Maximum Likelihood Method and a Modified Moment Method to fit a Weibull distribution to aircraft engine failure time data." Thesis, Monterey, California. Naval Postgraduate School, 1997. http://hdl.handle.net/10945/8112.

Full text
Abstract:
Approved for public release; distribution is unlimited.
This thesis provides a comparison of the accuracies of two methods for fitting a Weibull distribution to a set of aircraft engine time-between-failure data. One method is the Maximum Likelihood Method and assumes that the engine failure times are independent. The other method is a Modified Method of Moments procedure and uses the fact that if the time to failure T has a Weibull distribution with scale parameter λ and shape parameter β, then T^β has an exponential distribution with scale parameter λ^β. The latter method makes no assumption about independent failure times. A comparison is made using times that are randomly generated by a program. The program generates times in a manner that resembles the way in which engine failures occur in the real world for an engine with three subsystems. These generated operating times between failures for the same engine are not statistically independent. The comparison was extended to real data. Although both methods gave good fits, the Maximum Likelihood Method produced a better fit than the Modified Method of Moments. Explanations for this fact are analysed and presented in the conclusions.
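A small SciPy sketch of the two ingredients the abstract compares: a maximum likelihood Weibull fit and the T^β-to-exponential transformation underlying the modified method of moments. The toy sample below is hypothetical; the thesis uses simulated and real engine data with dependent failure times, which this sketch does not reproduce.

```python
import numpy as np
from scipy import stats

# hypothetical times between failures (the thesis uses dependent,
# simulator-generated and real engine data instead of this i.i.d. toy sample)
t = stats.weibull_min.rvs(c=1.8, scale=1200.0, size=200, random_state=42)

# maximum likelihood fit of a two-parameter Weibull (location fixed at 0)
beta_hat, _, lambda_hat = stats.weibull_min.fit(t, floc=0)
print(f"MLE: shape beta = {beta_hat:.3f}, scale lambda = {lambda_hat:.1f}")

# fact used by the modified method of moments: if T ~ Weibull(lambda, beta),
# then T**beta is exponential with scale (mean) lambda**beta
z = t ** beta_hat
print("mean of T^beta:", z.mean(), "  lambda^beta:", lambda_hat ** beta_hat)
```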
APA, Harvard, Vancouver, ISO, and other styles
14

Ikeda, Mitsuru, Kazuhiro Shimamoto, Takeo Ishigaki, Kazunobu Yamauchi, 充. 池田, and 一信 山内. "Statistical method in a comparative study in which the standard treatment is superior to others." Nagoya University School of Medicine, 2002. http://hdl.handle.net/2237/5385.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Hehn, Lukas [Verfasser], and J. [Akademischer Betreuer] Blümer. "Search for dark matter with EDELWEISS-III using a multidimensional maximum likelihood method / Lukas Hehn ; Betreuer: J. Blümer." Karlsruhe : KIT-Bibliothek, 2016. http://d-nb.info/1114273473/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Smith, Gary Douglas. "Measurements of spin asymmetries for deeply virtual compton scattering off the proton using the extended maximum likelihood method." Thesis, University of Glasgow, 2013. http://theses.gla.ac.uk/5042/.

Full text
Abstract:
Generalised Parton Distributions (GPDs) provide a theoretical framework that promises to deliver new information about proton structure. In the impact parameter interpretation, they describe the substructure of the proton in terms of its quark (and gluon) constituents in three dimensions: two transverse spatial dimensions and one longitudinal momentum dimension. Through Ji's sum rule, they offer a means by which to determine the total angular momentum contribution of quarks to the proton's spin of ħ/2. GPDs are directly related to Compton Form Factors (CFFs), which are distributions that are measurable using Deep Exclusive Scattering (DES) processes such as deeply virtual Compton scattering (DVCS). DVCS is characterised by the scattering of a single photon with large virtuality off a single quark (or gluon) inside the proton, resulting in the production of a hard photon: γ*p → γp′. In this work, data recorded with the CLAS detector at Jefferson Laboratory during the EG1DVCS experimental run were analysed. This experiment ran for over 80 days with a longitudinally polarised electron beam and a solid NH3 target containing longitudinally polarised protons. The resulting data set accommodated precise measurements of DVCS beam, target and double spin asymmetries, each of which is sensitive to different combinations of CFFs. The final results presented here are fits to these three asymmetries, which were performed using the Extended Maximum Likelihood method. It is intended that these results will be used in the future, along with other DES asymmetry and cross-section measurements, to constrain CFFs and thus move towards a more complete understanding of the structure of the proton.
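For reference, the generic form of the extended maximum likelihood, used when the number of observed events is itself Poisson-distributed, is sketched below; the event density p and expected yield ν would be built from the analysis-specific asymmetry model, which is not reproduced here.

```latex
% Extended maximum likelihood: N observed events, expected yield \nu(\alpha),
% normalised event density p(x \mid \alpha); up to an additive constant,
\ln L(\alpha) \;=\; -\,\nu(\alpha) \;+\; \sum_{i=1}^{N} \ln\!\bigl[\nu(\alpha)\, p(x_i \mid \alpha)\bigr]
```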
APA, Harvard, Vancouver, ISO, and other styles
17

He, Bin. "APPLICATION OF THE EMPIRICAL LIKELIHOOD METHOD IN PROPORTIONAL HAZARDS MODEL." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4384.

Full text
Abstract:
In survival analysis, proportional hazards model is the most commonly used and the Cox model is the most popular. These models are developed to facilitate statistical analysis frequently encountered in medical research or reliability studies. In analyzing real data sets, checking the validity of the model assumptions is a key component. However, the presence of complicated types of censoring such as double censoring and partly interval-censoring in survival data makes model assessment difficult, and the existing tests for goodness-of-fit do not have direct extension to these complicated types of censored data. In this work, we use empirical likelihood (Owen, 1988) approach to construct goodness-of-fit test and provide estimates for the Cox model with various types of censored data. Specifically, the problems under consideration are the two-sample Cox model and stratified Cox model with right censored data, doubly censored data and partly interval-censored data. Related computational issues are discussed, and some simulation results are presented. The procedures developed in the work are applied to several real data sets with some discussion.
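For orientation, the general empirical likelihood ratio of Owen (1988) on which such procedures are built can be written as below; the estimating function g and the Cox-model-specific constraints used in the dissertation are not reproduced here.

```latex
% Empirical likelihood ratio for a parameter \theta defined through an
% estimating equation E[g(X,\theta)] = 0, with observation weights p_i:
R(\theta) \;=\; \max\Bigl\{\, \prod_{i=1}^{n} n p_i \;:\; p_i \ge 0,\;
\sum_{i=1}^{n} p_i = 1,\; \sum_{i=1}^{n} p_i\, g(X_i,\theta) = 0 \Bigr\}
```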
Ph.D.
Department of Mathematics
Sciences
Mathematics
APA, Harvard, Vancouver, ISO, and other styles
18

Papp, Joseph C. "Physically constrained maximum likelihood (PCML) mode filtering and its application as a pre-processing method for underwater acoustic communication." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54649.

Full text
Abstract:
Thesis (S.M.)--Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 85-87).
Mode filtering is most commonly implemented using the sampled mode shape or pseudoinverse algorithms. Buck et al [1] placed these techniques in the context of a broader maximum a posteriori (MAP) framework. However, the MAP algorithm requires that the signal and noise statistics be known a priori. Adaptive array processing algorithms are candidates for improving performance without the need for a priori signal and noise statistics. A variant of the physically constrained, maximum likelihood (PCML) algorithm [2] is developed for mode filtering that achieves the same performance as the MAP mode filter yet does not need a priori knowledge of the signal and noise statistics. The central innovation of this adaptive mode filter is that the received signal's sample covariance matrix, as estimated by the algorithm, is constrained to be that which can be physically realized given a modal propagation model and an appropriate noise model. The first simulation presented in this thesis models the acoustic pressure field as a complex Gaussian random vector and compares the performance of the pseudoinverse, reduced rank pseudoinverse, sampled mode shape, PCML minimum power distortionless response (MPDR), PCML-MAP, and MAP mode filters. The PCML-MAP filter performs as well as the MAP filter without the need for a priori data statistics. The PCML-MPDR filter performs nearly as well as the MAP filter as well, and avoids a sawtooth pattern that occurs with the reduced rank pseudoinverse filter. The second simulation presented models the underwater environment and broadband communication setup of the Shallow Water 2006 (SW06) experiment.
Data processing results are presented from the Shallow Water 2006 experiment, showing the reduced sensitivity of the PCML-MPDR filter to white noise compared with the reduced rank pseudoinverse filter. Lastly, a linear, decision-directed, RLS equalizer is used to combine the response of several modes and its performance is compared with an equalizer applied directly to the data received on each hydrophone.
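A compact sketch of the three classical mode filters the thesis compares against (sampled mode shape, pseudoinverse, and MPDR built from the sample covariance); this is generic textbook processing, not the PCML algorithm itself, and the diagonal loading is an added assumption for numerical stability.

```python
import numpy as np

def mode_filters(E, P):
    """Classical mode-filter estimates for a set of snapshots.

    E : (n_phones, n_modes) matrix of mode shapes sampled at the array
    P : (n_phones, n_snapshots) received pressure snapshots
    Returns modal-amplitude estimates from the sampled-mode-shape,
    pseudoinverse and MPDR filters (a generic sketch; the PCML approach
    additionally constrains the estimated covariance to physically
    realisable forms).
    """
    n_phones, n_snap = P.shape
    # sampled mode shape filter: project onto the mode shapes
    a_sms = E.conj().T @ P
    # pseudoinverse filter
    a_pinv = np.linalg.pinv(E) @ P
    # MPDR filter built from the (diagonally loaded) sample covariance
    R = P @ P.conj().T / n_snap
    Rinv = np.linalg.inv(R + 1e-6 * np.trace(R).real / n_phones * np.eye(n_phones))
    W = np.column_stack([
        Rinv @ E[:, m] / (E[:, m].conj() @ Rinv @ E[:, m])
        for m in range(E.shape[1])
    ])
    a_mpdr = W.conj().T @ P
    return a_sms, a_pinv, a_mpdr
```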
by Joseph C. Papp.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
19

Luo, Li [Verfasser], Florian [Akademischer Betreuer] Jarre, and Björn [Akademischer Betreuer] Scheuermann. "Beating the Clock - An Offline Clock Synchronization Method Inspired by Maximum Likelihood Techniques / Li Luo. Gutachter: Florian Jarre ; Björn Scheuermann." Düsseldorf : Universitäts- und Landesbibliothek der Heinrich-Heine-Universität Düsseldorf, 2014. http://d-nb.info/1064379818/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Luo, Li [Verfasser], Florian [Akademischer Betreuer] Jarre, and Björn [Akademischer Betreuer] Scheuermann. "Beating the Clock - An Offline Clock Synchronization Method Inspired by Maximum Likelihood Techniques / Li Luo. Gutachter: Florian Jarre ; Björn Scheuermann." Düsseldorf : Universitäts- und Landesbibliothek der Heinrich-Heine-Universität Düsseldorf, 2014. http://nbn-resolving.de/urn:nbn:de:hbz:061-20141218-144630-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Orozco, M. Catalina (Maria Catalina). "Inversion Method for Spectral Analysis of Surface Waves (SASW)." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5124.

Full text
Abstract:
This research focuses on estimating the shear wave velocity (Vs) profile based on the dispersion curve obtained from SASW field test data (i.e., inversion of SASW data). It is common for the person performing the inversion to assume the prior information required to constrain the problem based on his/her own judgment. Additionally, the Vs profile is usually shown as unique without giving a range of possible solutions. For these reasons, this work focuses on: (i) studying the non-uniqueness of the solution to the inverse problem; (ii) implementing an inversion procedure that presents the estimated model parameters in a way that reflects their uncertainties; and (iii) evaluating tools that help choose the appropriate prior information. One global and one local search procedures were chosen to accomplish these purposes: a pure Monte Carlo method and the maximum likelihood method, respectively. The pure Monte Carlo method was chosen to study the non-uniqueness by looking at the range of acceptable solutions (i.e., Vs profiles) obtained with as few constraints as possible. The maximum likelihood method was chosen because it is a statistical approach, which enables us to estimate the uncertainties of the resulting model parameters and to apply tools such as the Bayesian criterion to help select the prior information objectively. The above inversion methods were implemented for synthetic data, which was produced with the same forward algorithm used during inversion. This implies that all uncertainties were caused by the nature of the SASW inversion problem (i.e., there were no uncertainties added by experimental errors in data collection, analysis of the data to create the dispersion curve, layered model to represent a real 3-D soil stratification, or wave propagation theory). At the end of the research, the maximum likelihood method of inversion and the tools for the selection of prior information were successfully used with real experimental data obtained in Memphis, Tennessee.
APA, Harvard, Vancouver, ISO, and other styles
22

Tarawneh, Monther. "A Novel Quartet-Based Method for Inferring Evolutionary Trees from Molecular Data." University of Sydney, 2008. http://hdl.handle.net/2123/2301.

Full text
Abstract:
Doctor of Philosophy (PhD)
Molecular evolution is the key to explaining the divergence of species and the origin of life on earth. The main task in the study of molecular evolution is the reconstruction of evolutionary trees from sequence data of the current species. This thesis introduces a novel algorithm for inferring evolutionary trees from genetic data using a quartet-based approach. The new method recursively merges sub-trees based on a global statistic provided by the global quartet weight matrix. The quartet weights can be computed using several methods. Since the quartet weight computation is the most expensive procedure in this approach, the new method enables the parallel inference of large evolutionary trees. Several techniques were developed to deal with quartet inaccuracies. In addition, the new method is flexible in that it can combine morphological and molecular phylogenetic analyses to yield more accurate trees. We also introduce the concept of a critical point, where more than one merge is possible for the same sub-tree. The critical-point concept can provide more detailed information about the relationships between species and show how close they are. This enables us to detect other reasonable trees. We evaluated the algorithm on both synthetic and real data sets. Experimental results showed that the new method achieved significantly better accuracy in comparison with existing methods.
APA, Harvard, Vancouver, ISO, and other styles
23

Ginos, Brenda Faith. "Parameter Estimation for the Lognormal Distribution." Diss., CLICK HERE for online access, 2009. http://contentdm.lib.byu.edu/ETD/image/etd3205.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Owen, Claire Elayne Bangerter. "Parameter Estimation for the Beta Distribution." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2670.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Taylor, Simon. "Dalitz Plot Analysis of η'→ηπ+π-." Thesis, Uppsala universitet, Kärnfysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-421196.

Full text
Abstract:
Chiral Perturbation Theory (ChPT) is a tool for studying the strong interaction at low energies. The perturbation theory is developed around the limit where the light quarks u, d, s are approximated to be massless. In this approximation the isospin symmetry, one of the main features of the strong interaction, is fulfilled automatically. The study of the light quark masses and isospin violation can be done with the η'→πππ and η'→ηππ decay channels by analyzing the kinematic distribution using so-called Dalitz plots. A Dalitz plot analysis of the η'→ηπ+π- decay mode is conducted by the BESIII collaboration. The unbinned maximum likelihood method is used to fit the parameters that describe the Dalitz plot distribution. In this fit a polynomial expansion of the matrix element squared is used. However, in order to study light quark masses, it is better to use a parameterization which includes the description of the final-state interaction based on a dispersion relation. Hence, it is desirable to use a representation of the Dalitz plot as a two-dimensional histogram with acceptance-corrected data as input to extract the subtraction constants. Therefore, the goal of this thesis is to make a consistency check between the unbinned and binned representations of the data. In this thesis, Monte Carlo data for the η'→ηπ+π- decay channel are generated based on the BESIII analysis. An unbinned maximum likelihood fit is performed to find the Dalitz plot parameters, repeating the BESIII analysis method. The Monte Carlo data are then used for a binned maximum likelihood and a χ2 fit. Finally, the prepared binned, acceptance-corrected experimental data from BESIII are used to fit the Dalitz plot parameters using the same statistical methods. The results based on the binned maximum likelihood and the χ2 methods are consistent with the fit using the unbinned maximum likelihood method applied in the original BESIII publication.
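For context, a commonly used general parameterisation of the η'→ηπ+π- Dalitz plot and the corresponding unbinned likelihood are sketched below; the exact convention (signs, higher-order terms, efficiency treatment) of the BESIII analysis may differ.

```latex
% Polynomial expansion of the squared matrix element in the Dalitz
% variables X, Y (general form):
|M(X,Y)|^{2} \;\propto\; 1 + aY + bY^{2} + cX + dX^{2}

% Unbinned maximum likelihood: with events (X_i, Y_i) and detection
% efficiency \varepsilon, the parameters (a,b,c,d) maximise
\ln L \;=\; \sum_{i=1}^{N} \ln
\frac{|M(X_i,Y_i)|^{2}\,\varepsilon(X_i,Y_i)}
{\int_{\mathrm{DP}} |M(X,Y)|^{2}\,\varepsilon(X,Y)\,\mathrm{d}X\,\mathrm{d}Y}
```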
APA, Harvard, Vancouver, ISO, and other styles
26

Hattaway, James T. "Parameter Estimation and Hypothesis Testing for the Truncated Normal Distribution with Applications to Introductory Statistics Grades." Diss., CLICK HERE for online access, 2010. http://contentdm.lib.byu.edu/ETD/image/etd3412.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Sarlak, Nermin. "Evaluation And Modeling Of Streamflow Data: Entropy Method, Autoregressive Models With Asymmetric Innovations And Artificial Neural Networks." Phd thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/3/12606135/index.pdf.

Full text
Abstract:
In the first part of this study, two entropy methods under different distribution assumptions are examined on a network of stream gauging stations located in the Kizilirmak Basin to rank the stations according to their level of importance. The stations are ranked by using two different entropy methods under different distributions, with the aim of showing the effect of the distribution type on both entropy methods. In the second part of this study, autoregressive models with asymmetric innovations and an artificial neural network model are introduced. Autoregressive (AR) models developed in hydrology are based on several assumptions; the normality assumption for the innovations of AR models is investigated in this study. The main reason for making this assumption in the autoregressive models is the difficulty of finding the model parameters under distributions other than the normal distribution. From this point of view, the aim is to introduce the modified maximum likelihood procedure developed by Tiku et al. (1996) into hydrology for estimating autoregressive model parameters when the residual series is non-normally distributed. It is also important to consider how the parameters of autoregressive models with skewed distributions could be estimated. Besides these autoregressive models, an artificial neural network (ANN) model was also constructed for annual and monthly hydrologic time series because of its advantages, such as requiring no distributional or linearity assumptions. The models considered are applied to annual and monthly streamflow data obtained from five streamflow gauging stations in the Kizilirmak Basin. It is shown that the AR(1) model with Weibull innovations provides the best solutions for the annual series and the AR(1) model with generalized logistic innovations provides the best solution for the monthly series, as compared with the results of the artificial neural network models.
APA, Harvard, Vancouver, ISO, and other styles
28

Cloyd, James Dale. "Data mining with Newton's method." [Johnson City, Tenn. : East Tennessee State University], 2002. http://etd-submit.etsu.edu/etd/theses/available/etd-1101102-081311/unrestricted/CloydJ111302a.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Mtotywa, Busisiwe Percelia, and G. J. Lyman. "A systems engineering approach to metallurgical accounting of integrated smelter complexes." Thesis, Stellenbosch : Stellenbosch University, 2008. http://hdl.handle.net/10019.1/4846.

Full text
Abstract:
Thesis (PhD)--Stellenbosch University, 2008.
ENGLISH ABSTRACT: The growing need to improve accounting accuracy, precision and to standardise generally accepted measurement methods in the mining and processing industries has led to the joining of a number of organisations under the AMIRA International umbrella, with the purpose of fulfilling these objectives. As part of this venture, Anglo Platinum undertook a project on the material balancing around its largest smelter, the Waterval Smelter. The primary objective of the project was to perform a statistical material balance around the Waterval Smelter using the Maximum Likelihood method with respect to platinum, rhodium, nickel, sulphur and chrome (III) oxide. Pt, Rh and Ni were selected for their significant contribution to the company’s profit margin, whilst S was included because of its environmental importance. Cr2O3 was included for its importance in as far as the difficulties its presence poses in smelting of PGMs. The objective was achieved by performing a series of statistical computations. These include; quantification of total and analytical uncertainties, detection of outliers, estimation and modelling of daily and monthly measurement uncertainties, parameter estimation and data reconciliation. Comparisons were made between the Maximum Likelihood and Least Squares methods. Total uncertainties associated with the daily grades were determined by use of variographic studies. The estimated Pt standard deviations were within 10% relative to the respective average grades with a few exceptions. The total uncertainties were split into their respective components by determining analytical variances from analytical replicates. The results indicated that the sampling components of the total uncertainty were generally larger as compared to their analytical counterparts. WCM, the platinum rich Waterval smelter product, has an uncertainty that is worth ~R2 103 000 in its daily Pt grade. This estimated figure shows that the quality of measurements do not only affect the accuracy of metal accounting, but can have considerable implications if not quantified and managed. The daily uncertainties were estimated using Kriging and bootstrapped to obtain estimates for the monthly uncertainties. Distributions were fitted using MLE on the distribution fitting tool of the JMP6.0 programme and goodness of fit tests were performed. The data were fitted with normal and beta distributions, and there was a notable decrease in the skewness from the daily to the monthly data. The reconciliation of the data was performed using the Maximum Likelihood and comparing that with the widely used Least Squares. The Maximum Likelihood and Least Squares adjustments were performed on simulated data in order to conduct a test of accuracy and to determine the extent of error reduction after the reconciliation exercise. The test showed that the two methods had comparable accuracies and error reduction capabilities. However, it was shown that modelling of uncertainties with the unbounded normal distribution does lead to the estimation of adjustments so large that negative adjusted values are the result. The benefit of modelling the uncertainties with a bounded distribution, which is the beta distribution in this case, is that the possibility of obtaining negative adjusted values is annihilated. ML-adjusted values (beta) will always be non-negative, therefore feasible. 
In a further comparison of the ML (bounded model) and the LS methods in the material balancing of the Waterval smelter complex, it was found that for all those streams whose uncertainties were modelled with a beta distribution, i.e. those whose distribution possessed some degree of skewness, the ML adjustments were significantly smaller than the LS counterparts. It is therefore concluded that the Maximum Likelihood (bounded models) is a rigorous alternative method of data reconciliation to the LS method, with the benefits of: -- better estimates, because the nature of the data (distribution) is not assumed but determined through distribution fitting and parameter estimation; -- adjusted values that can never be negative, owing to the bounded nature of the distribution. The novel contributions made in this thesis are as follows: -- the Maximum Likelihood method was for the first time employed in the material balancing of non-normally distributed data and compared with the well-known Least Squares method; -- geostatistical methods were integrated with data reconciliation in an original way to quantify and predict measurement uncertainties; -- for the first time, measurement uncertainties were modelled with a distribution that is non-normal and bounded in nature, leading to smaller adjustments.
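For reference, the least-squares reconciliation against which the maximum likelihood adjustments are compared has, in the linear-constraint case, the familiar closed form sketched below; the beta-distributed ML adjustment has no such closed form and is obtained numerically.

```latex
% Least-squares reconciliation of measurements y with covariance V subject
% to linear balance constraints A x = 0 (general textbook form):
\hat{x}_{LS} \;=\; \arg\min_{x:\,Ax=0}\; (y - x)^{\top} V^{-1} (y - x)
\;=\; y - V A^{\top}\bigl(A V A^{\top}\bigr)^{-1} A\, y
```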
APA, Harvard, Vancouver, ISO, and other styles
30

Sozen, Serkan. "A Viterbi Decoder Using System C For Area Efficient Vlsi Implementation." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607567/index.pdf.

Full text
Abstract:
In this thesis, the VLSI implementation of the Viterbi decoder using a design and simulation platform called SystemC is studied. For this purpose, the architecture of the Viterbi decoder is optimized for VLSI implementation, and two novel area-efficient structures for reconfigurable Viterbi decoders are suggested. The traditional and SystemC design cycles are compared to show the advantages of SystemC, and the C++ platforms supporting SystemC are listed; installation issues and examples are discussed. The Viterbi decoder is widely used to estimate the message encoded by a convolutional encoder. In implementations reported in the literature, special structures called trellises are formed to decrease the complexity and the area. In this thesis, two new area-efficient reconfigurable Viterbi decoder approaches are suggested, based on rearranging the states of the trellis structures to eliminate switching and memory-addressing complexity. The first suggested architecture, based on a reconfigurable Viterbi decoder, reduces switching and memory-addressing complexity. In these architectures, the states are reorganized and the trellis structures are realized by reusing the same structures in subsequent instances. As a result, the area is minimized and power consumption is reduced. Since the addressing complexity is reduced, the speed is expected to increase. The second area-efficient Viterbi decoder is an improved version of the first and has the ability to configure the parameters of constraint length, code rate, transition probabilities, trace-back depth and generator polynomials.
APA, Harvard, Vancouver, ISO, and other styles
31

Chabičovský, Martin. "Statistická analýza rozdělení extrémních hodnot pro cenzorovaná data." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2011. http://www.nusl.cz/ntk/nusl-229487.

Full text
Abstract:
The thesis deals with extreme value distributions and censored samples. The theoretical part describes the maximum likelihood method and the types of censored samples, and introduces the extreme value distributions. Likelihood equations are derived for censored samples from the exponential, Weibull, lognormal, Gumbel and generalized extreme value distributions. For these distributions, asymptotic interval estimates are also derived, and a simulation study is carried out on the dependence of the parameter estimates on the percentage of censoring.
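As a reminder of the object being maximised, the likelihood for a right-censored sample (one of the censoring schemes for which such equations are derived) has the generic form below, with f the density and S = 1 - F the survival function.

```latex
% Right-censored sample: \delta_i = 1 for an observed failure time t_i,
% \delta_i = 0 for a time censored at t_i.
L(\theta) \;=\; \prod_{i=1}^{n} f(t_i;\theta)^{\delta_i}\, S(t_i;\theta)^{1-\delta_i}
```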
APA, Harvard, Vancouver, ISO, and other styles
32

Goto, Daniela Bento Fonsechi. "Estimação de maxima verossimilhança para processo de nascimento puro espaço-temporal com dados parcialmente observados." [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306192.

Full text
Abstract:
Advisor: Nancy Lopes Garcia
Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: The goal of this work is to study the maximum likelihood estimation of a spatial pure birth process under two different sampling schemes: a) permanent observation in a fixed time interval [0, T]; b) observation of the process only after a fixed time T. Under scheme b) we don't know the birth times; we have a problem of missing variables. We can write the likelihood function for the nonhomogeneous pure birth process on a compact set through the method of projection described by Garcia and Kurtz (2008), as the projection of the likelihood function. The fact that the projected likelihood can be interpreted as an expectation suggests that Monte Carlo methods can be used to compute estimators. Results of convergence almost surely and in distribution are obtained for the approximants to the maximum likelihood estimator. Simulation studies show that the approximants are appropriate.
Master's degree
Inference in Stochastic Processes
Master in Statistics
APA, Harvard, Vancouver, ISO, and other styles
33

Nguyen, Ngoc B. "Estimation of Technical Efficiency in Stochastic Frontier Analysis." Bowling Green State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1275444079.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Kondlo, Lwando Orbet. "Estimation of Pareto distribution functions from samples contaminated by measurement errors." Thesis, University of the Western Cape, 2010. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_6141_1297831463.

Full text
Abstract:

The intention is to draw more specific connections between certain deconvolution methods and also to demonstrate the application of the statistical theory of estimation in the presence of measurement error. A parametric methodology for deconvolution when the underlying distribution is of the Pareto form is developed. Maximum likelihood estimation (MLE) of the parameters of the convolved distributions is considered. Standard errors of the estimated parameters are calculated from the inverse Fisher's information matrix and a jackknife method. Probability-probability (P-P) plots and Kolmogorov-Smirnov (K-S) goodness-of-fit tests are used to evaluate the fit of the posited distribution. A bootstrapping method is used to calculate the critical values of the K-S test statistic, which are not available.
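A small SciPy sketch of the bootstrap idea mentioned at the end of the abstract, shown here for a plain Pareto fit without measurement error (the thesis works with the convolved distribution, which is not implemented here): because the parameters are estimated from the data, the null distribution of the K-S statistic is simulated by resampling from the fitted model and refitting.

```python
import numpy as np
from scipy import stats

def ks_test_pareto_bootstrap(data, alpha=0.05, n_boot=2000, seed=1):
    """Parametric-bootstrap K-S goodness-of-fit test for a Pareto model.

    Sketch only: tabulated K-S critical values do not apply when the
    parameters are fitted, so the statistic's null distribution is simulated.
    """
    rng = np.random.default_rng(seed)
    b, loc, scale = stats.pareto.fit(data, floc=0)
    d_obs = stats.kstest(data, stats.pareto(b, loc, scale).cdf).statistic

    d_boot = np.empty(n_boot)
    for i in range(n_boot):
        sim = stats.pareto.rvs(b, loc, scale, size=len(data), random_state=rng)
        b_s, loc_s, scale_s = stats.pareto.fit(sim, floc=0)
        d_boot[i] = stats.kstest(sim, stats.pareto(b_s, loc_s, scale_s).cdf).statistic

    crit = float(np.quantile(d_boot, 1.0 - alpha))
    return d_obs, crit, d_obs > crit
```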

APA, Harvard, Vancouver, ISO, and other styles
35

Horimoto, Andréa Roselí Vançan Russo. "Estimativa do valor da taxa de penetrância em doenças autossômicas dominantes: estudo teórico de modelos e desenvolvimento de um programa computacional." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/41/41131/tde-15102009-161545/.

Full text
Abstract:
The main objective of this dissertation was the development of a computer program, in Microsoft® Visual Basic® 6.0, for estimating the penetrance rate of autosomal dominant diseases by means of the information contained on genealogies. Some of the algorithms we used in the program were based on ideas already published in the literature by researchers and (post-) graduate students of the Laboratory of Human Genetics, Department of Genetics and Evolutionary Biology, Institute of Biosciences, University of São Paulo. We developed several other methods to deal with particular structures found frequently in the genealogies published in the literature, such as: a) the absence of information on the phenotype of the individual generating of the genealogy; b) the grouping of trees of normal individuals without the separate description of the offspring number per individual; c) the analysis of structures containing consanguineous unions; d) the determination of general solutions in simple analytic form for the likelihood functions of trees of normal individuals with regular branching and for the heterozygosis probabilities of any individual belonging to these trees. In addition to the executable version of the program summarized above, we also prepared, in collaboration with the dissertation supervisor and the undergraduate student Marcio T. Onodera (main author of this particular version), another program, represented by a web version (PenCalc Web). It enables the calculation of heterozygosis probabilities and the offspring risk for all individuals of the genealogy, two details we did not include in the present version of our program. The program PenCalc Web can be accessed freely at the home-page address http://www.ib.usp.br/~otto/pencalcweb. Another important contribution of this dissertation was the development of a model of estimation with generationdependent penetrance rate, as suggested by the inspection of families with some autosomal dominant diseases, such as the ectrodactyly-tibial hemimelia syndrome (ETH), a condition which exhibits a phenomenon similar to anticipation in relation to the penetrance rate. The models with constant and variable penetrance rates, as well as practically all the methods developed in this dissertation, were applied to 21 individual genealogies from the literature with cases of ETH and to the set of all these genealogies (meta-analysis). The corresponding results of all these analysis are comprehensively presented.
APA, Harvard, Vancouver, ISO, and other styles
36

El, Matouat Abdelaziz. "Sélection du nombre de paramètres d'un modèle comparaison avec le critère d'Akaike." Rouen, 1987. http://www.theses.fr/1987ROUES054.

Full text
Abstract:
In this work, we consider a statistical structure (Ω, A, f(θ)) on which we have n observations, and we first study the Akaike criterion for a Gaussian structure. When the sample is of size n, this criterion makes it possible to define an optimal order k smaller than m, the number of parameters of the correct model. The order k, a function of n, must be small enough to bring out statistical redundancy when the k parameters are estimated. Next, for every θ in an arbitrary index set I, let f(θ) be a probability density with respect to a probability measure μ representing the prior knowledge. The probabilities are defined on a σ-algebra A arising from a finite partition M of Ω. We are then led to modify the loss function by using the Hellinger distance. The original criterion presented makes it possible to define the estimated density. Since probability densities are involved, their estimation is improved by using the kernel method, which leads to a second modification of the Akaike criterion. The preceding results are applied to the determination of the parameters p and q of an ARMA model. Beforehand, the use of the internal model provides the maximum likelihood estimator of the ARMA model coefficients when the parameters p and q are known.
APA, Harvard, Vancouver, ISO, and other styles
37

Kunz, Lukas Brad. "A New Method for Melt Detection on Antarctic Ice-Shelves and Scatterometer Calibration Verification." Diss., Brigham Young University, 2004. http://contentdm.lib.byu.edu/ETD/image/etd527.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Nourmohammadi, Mohammad. "Statistical inference with randomized nomination sampling." Elsevier B.V, 2014. http://hdl.handle.net/1993/30150.

Full text
Abstract:
In this dissertation, we develop several new inference procedures that are based on randomized nomination sampling (RNS). The first problem we consider is that of constructing distribution-free confidence intervals for quantiles in finite populations. The required algorithms for computing coverage probabilities of the proposed confidence intervals are presented. The second problem we address is that of constructing nonparametric confidence intervals for infinite populations. We describe the procedures for constructing these confidence intervals and compare the intervals obtained in the RNS setting, in both the perfect and imperfect ranking scenarios, with their simple random sampling (SRS) counterparts. Recommendations for choosing the design parameters are made to achieve shorter confidence intervals than the SRS counterparts. The third problem we investigate is the construction of tolerance intervals using the RNS technique. We describe the procedures for constructing one- and two-sided RNS tolerance intervals and investigate the sample sizes required to achieve tolerance intervals that contain the specified proportions of the underlying population. We also investigate the efficiency of RNS-based tolerance intervals compared with the corresponding intervals based on SRS. A new method for estimating ranking error probabilities is proposed. The final problem we consider is that of parametric inference based on RNS. We introduce the different data types associated with the different situations one might encounter using the RNS design and provide the maximum likelihood (ML) and method of moments (MM) estimators of the parameters in two classes of distributions: proportional hazard rate (PHR) and proportional reverse hazard rate (PRHR) models.
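The first problem above concerns distribution-free confidence intervals for quantiles. As a point of comparison only, the sketch below computes the classical coverage probability of an order-statistic interval [X_(r), X_(s)] for the p-th quantile under simple random sampling; the RNS-based algorithms of the thesis are not reproduced, and the chosen n, r, s values are arbitrary.

```python
# SRS baseline: coverage probability of the distribution-free confidence
# interval [X_(r), X_(s)] for the p-th population quantile,
# P(X_(r) <= x_p <= X_(s)) = sum_{k=r}^{s-1} C(n,k) p^k (1-p)^(n-k).
from scipy.stats import binom

def coverage(n, r, s, p):
    # Probability that at least r and fewer than s observations fall below x_p.
    return binom.cdf(s - 1, n, p) - binom.cdf(r - 1, n, p)

n, p = 30, 0.5
for r, s in [(10, 21), (12, 19), (14, 17)]:
    print(f"n={n}, interval [X_({r}), X_({s})]: coverage = {coverage(n, r, s, p):.3f}")
```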
APA, Harvard, Vancouver, ISO, and other styles
39

Macková, Simona. "Makroekonomická analýza s využitím postupů prostorové ekonometrie." Master's thesis, Vysoká škola ekonomická v Praze, 2017. http://www.nusl.cz/ntk/nusl-359198.

Full text
Abstract:
Spatial econometrics offers a useful approach to the macroeconomic analysis of regional data. This thesis identifies suitable models for cross-sectional data that take the geographical location of the regions into account. The analysis relies on neighbourhood relations among the regions, which are expressed through a spatial weight matrix. We focus on tests of spatial autocorrelation and introduce procedures for finding a suitable spatial model. We then describe estimators of the regression coefficients and of the spatial dependence coefficients, in particular maximum likelihood estimation. Besides illustrative examples, we apply selected basic spatial models to real macroeconomic data and examine how they describe the relationship between household income, GDP and the unemployment rate in western Europe. The results are compared with a linear regression model.
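Since the abstract centres on spatial weight matrices and spatial autocorrelation tests, the sketch below builds a row-standardized contiguity matrix and computes Moran's I. It is a generic illustration with made-up regions and values, not the thesis's data or model.

```python
# Minimal sketch: row-standardized spatial weight matrix and Moran's I
# statistic for a small set of regions with a made-up contiguity structure.
import numpy as np

# Hypothetical binary contiguity matrix for 5 regions (1 = neighbours).
W = np.array([
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)
W = W / W.sum(axis=1, keepdims=True)            # row standardization

y = np.array([21.0, 24.0, 30.0, 28.0, 22.0])    # e.g. regional household income
z = y - y.mean()

n = len(y)
moran_I = (n / W.sum()) * (z @ W @ z) / (z @ z)
print(f"Moran's I = {moran_I:.3f}")             # > 0 suggests positive spatial autocorrelation
```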
APA, Harvard, Vancouver, ISO, and other styles
40

Svoboda, Ondřej. "Využití Poissonova rozdělení pro předpovědi výsledků sportovních utkání." Master's thesis, Vysoká škola ekonomická v Praze, 2017. http://www.nusl.cz/ntk/nusl-359303.

Full text
Abstract:
The aim of this master thesis is to verify whether the Poisson distribution can be used for predicting soccer matches. The analysis first applies the original model proposed by the English statisticians Mark J. Dixon and Stuart G. Coles in 1997; the model is then extended in the thesis. All models are based on the maximum likelihood method. Conclusions are drawn for the top English league, the Premier League, using matches played from season 2004/2005 to the first half of season 2015/2016. Model performance is assessed mainly against market odds from the American bookmaker Pinnacle. The theoretical part describes the models and statistical methods used in the practical part, where the calculations are carried out. Performance is measured as the profit obtained against the market odds: optimal model parameters are estimated over the ex-post period and then used in the ex-ante period, where the performance of the model is evaluated. The thesis answers the question of whether such models, built from a public database, are still effective today.
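To make the maximum likelihood machinery concrete, the sketch below fits a stripped-down independent-Poisson goal model with attack, defence and home-advantage parameters by minimizing the negative log-likelihood. The match results are made up, and the Dixon-Coles low-score correction and time weighting used in the thesis are omitted.

```python
# Minimal sketch of an independent-Poisson goal model: home goals
# ~ Poisson(exp(home + att_i - def_j)), away goals ~ Poisson(exp(att_j - def_i)).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

teams = ["Arsenal", "Chelsea", "Liverpool"]
idx = {t: i for i, t in enumerate(teams)}
# Hypothetical results: (home team, away team, home goals, away goals).
matches = [("Arsenal", "Chelsea", 2, 1), ("Chelsea", "Liverpool", 0, 0),
           ("Liverpool", "Arsenal", 3, 1), ("Chelsea", "Arsenal", 1, 1),
           ("Liverpool", "Chelsea", 2, 0), ("Arsenal", "Liverpool", 1, 2)]

n = len(teams)

def neg_log_lik(params):
    att, deff, home = params[:n], params[n:2 * n], params[2 * n]
    ll = 0.0
    for h, a, gh, ga in matches:
        lam = np.exp(home + att[idx[h]] - deff[idx[a]])   # home scoring rate
        mu = np.exp(att[idx[a]] - deff[idx[h]])           # away scoring rate
        ll += poisson.logpmf(gh, lam) + poisson.logpmf(ga, mu)
    return -ll + att.sum() ** 2      # soft identifiability constraint: sum(att) = 0

res = minimize(neg_log_lik, np.zeros(2 * n + 1), method="BFGS")
att, deff, home = res.x[:n], res.x[n:2 * n], res.x[2 * n]
print("home advantage:", round(home, 3))
print({t: round(att[idx[t]], 3) for t in teams})
```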
APA, Harvard, Vancouver, ISO, and other styles
41

Zhang, Tianyu. "Problème inverse statistique multi-échelle pour l'identification des champs aléatoires de propriétés élastiques." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC2068.

Full text
Abstract:
Within the framework of linear elasticity theory, the numerical modeling and simulation of the mechanical behavior of heterogeneous materials with complex random microstructure give rise to many scientific challenges at different scales. Although at the macroscale such materials are usually modeled as homogeneous and deterministic elastic media, they are not only heterogeneous and random at the microscale, but often also cannot be properly described by the local morphological and mechanical properties of their constituents. Consequently, a mesoscale is introduced between the macroscale and the microscale, at which the mechanical properties of such a random linear elastic medium are represented by a prior non-Gaussian stochastic model parameterized by a small or moderate number of unknown hyperparameters. In order to identify these hyperparameters, an innovative methodology has recently been proposed that solves a multiscale statistical inverse problem using only partial and limited experimental data at both the macroscale and the mesoscale. It is formulated as a multi-objective optimization problem, which consists in minimizing a (vector-valued) multi-objective cost function defined by three numerical indicators corresponding to (scalar-valued) single-objective cost functions that quantify and minimize distances between multiscale experimental data, measured simultaneously at both scales on a single specimen subjected to a static test, and the numerical solutions of the deterministic and stochastic computational models used for simulating the multiscale experimental test configuration under uncertainties. This research work aims to contribute to the improvement of the multiscale statistical inverse identification method in terms of computational efficiency, accuracy and robustness by introducing (i) an additional mesoscopic numerical indicator that quantifies, at the mesoscale, the distance between the spatial correlation length(s) of the measured experimental fields and those of the computed numerical fields, so that each hyperparameter of the prior stochastic model has its own dedicated single-objective cost function, thus allowing the time-consuming global optimization algorithm (genetic algorithm) used previously to be replaced with a more efficient algorithm, such as a fixed-point iterative algorithm, for solving the underlying multi-objective optimization problem at a lower computational cost, and (ii) an ad hoc stochastic representation of the hyperparameters involved in the prior stochastic model of the random elasticity field at the mesoscale, modeling them as random variables whose probability distributions are constructed using the maximum entropy principle under a set of constraints defined by the available and objective information, and whose own hyperparameters can be determined using the maximum likelihood estimation method with the available data, in order to enhance both the robustness and the accuracy of the statistical inverse identification of the prior stochastic model. In parallel, we also propose to solve the multi-objective optimization problem using machine learning based on artificial neural networks. Finally, the improved methodology is first validated on a fictitious virtual material within the framework of 2D plane stress and 3D linear elasticity, and then illustrated on a real heterogeneous biological material (beef cortical bone) in 2D plane stress linear elasticity.
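As a narrow illustration of one ingredient mentioned above, namely the maximum likelihood fitting of a probability model for a positive hyperparameter, the sketch below fits a gamma distribution (the maximum entropy distribution on the positive half-line under constraints on E[X] and E[ln X]) to a sample of values. The gamma family and the simulated data are assumptions for illustration, not the specification used in the thesis.

```python
# Minimal sketch: maximum likelihood fit of a gamma distribution to a sample
# of positive "hyperparameter" values. Purely illustrative data.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(1)
sample = rng.gamma(shape=3.0, scale=2.0, size=200)   # stand-in observations

# floc=0 keeps the location fixed so only shape and scale are estimated by ML.
shape_hat, loc_hat, scale_hat = gamma.fit(sample, floc=0)
print(f"ML estimates: shape = {shape_hat:.2f}, scale = {scale_hat:.2f}")
```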
APA, Harvard, Vancouver, ISO, and other styles
42

Lucci, Lisa. "Valutazione della resistenza a fatica di provini in Maraging Steel realizzati in Additive Manufacturing." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/19826/.

Full text
Abstract:
The purpose of this work is to fatigue-test a series of specimens produced with Additive Manufacturing techniques and designed to have fatigue-damage-tolerant structures. These structures are characterized by being "Impossible Designs", i.e. they cannot be created with traditional manufacturing techniques. The focus is on rotating bending tests in which the traditionally shaped specimen is replaced by a hollow tube of the same weight with a hierarchical internal geometry. In this way, when a crack originates from the surface or from the most stressed point of the specimen, its propagation can be stopped or slowed down by an extrinsic mechanism.
APA, Harvard, Vancouver, ISO, and other styles
43

Chabot, John Alva. "VALIDATING STEADY TURBULENT FLOW SIMULATIONS USING STOCHASTIC MODELS." Miami University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=miami1443188391.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Sonono, Masimba Energy. "Applications of conic finance on the South African financial markets / by Masimba Energy Sonono." Thesis, North-West University, 2012. http://hdl.handle.net/10394/9206.

Full text
Abstract:
Conic finance is a new quantitative finance theory. This thesis is on the applications of conic finance to South African financial markets. Conic finance offers a new perspective on the way financial markets should be perceived. Particularly in incomplete markets, where prices are non-unique and residual risk is rampant, conic finance plays a crucial role in providing prices that are acceptable at a given stress level. The theory assumes that price depends on the direction of trade, so there are two prices: one for buying from the market, the ask price, and one for selling to the market, the bid price. The bid-ask spread reflects the substantial cost of the unhedgeable risk present in the market. The hypothesis considered in this thesis is whether conic finance can reduce this residual risk. Conic finance models the bid and ask prices of cash flows by applying the theory of acceptability indices to them. The theory of acceptability combines elements of arbitrage pricing theory and expected utility theory: the set of arbitrage opportunities is extended to the set of all opportunities that a wide range of market participants are prepared to accept. The preferences of the market participants are captured by utility functions, which lead to the concepts of acceptance sets and the associated coherent risk measures. The acceptance sets (market preferences) are modelled using sets of probability measures; the set accepted by all market participants is the intersection of all these sets, which is convex. The size of this set is characterized by an index of acceptability, which allows one to speak of cash flows acceptable at a level known as the stress level. The relevant set of probability measures that can value the cash flows properly is found through the use of distortion functions. In the first chapter, we introduce the theory of conic finance and build a foundation that leads to the problem and objectives of the thesis. In chapter two, we build on this foundation and explain in depth the theory of acceptability indices and coherent risk measures; coherent risk measures are discussed briefly here since the theory of acceptability indices builds on them, and some new acceptability indices are also introduced. In chapter three, the focus shifts to mathematical tools for financial applications. The chapter can be seen as a prerequisite, as it bridges the gap from mathematical tools in complete markets to those in incomplete markets, which conic finance theory seeks to exploit. The chapter ends with the models used for continuous-time modelling and simulation of stochastic processes. In chapter four, attention is focused on the numerical methods relevant to the thesis: obtaining parameters using the maximum likelihood method and calibrating them to market prices, option pricing by Fourier transform methods, and finally the bid-ask formulas relevant to the thesis. Most of the numerical implementations were carried out in Matlab. Chapter five gives an introduction to option trading strategies, with illustrations and explanations of the possible scenarios at the expiration date for the different strategies.
Chapter six is the apex of the thesis, where results from possible real market scenarios are presented and discussed. Only numerical results are reported, as empirical experiments could not be carried out due to the limited availability of real market data. The findings from the numerical experiments show that the spreads obtained from conic finance are reduced, which translates into reduced residual risk and a lower cost of entering the trading strategies. The thesis ends, in chapter seven, with a formal discussion of the findings and some possible directions for further research.
Thesis (MSc (Risk Analysis))--North-West University, Potchefstroom Campus, 2013.
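To give a flavour of the bid-ask formulas referred to in chapter four, the sketch below computes conic bid and ask prices of a simulated cash flow as distorted expectations at a given stress level. The MINMAXVAR distortion of Cherny and Madan is used here as one common choice, and the payoff is made up; neither is necessarily the exact specification used in the thesis.

```python
# Minimal sketch of conic bid/ask prices as distorted expectations of a
# simulated cash flow, using the MINMAXVAR distortion at stress level gamma.
import numpy as np

def minmaxvar(u, gamma):
    # Concave distortion: psi(u) = 1 - (1 - u**(1/(1+g)))**(1+g); psi(u) = u when g = 0.
    return 1.0 - (1.0 - u ** (1.0 / (1.0 + gamma))) ** (1.0 + gamma)

def bid_price(cash_flow, gamma):
    # Distorted expectation: sorted outcomes weighted by distortion increments,
    # which overweights losses and underweights gains.
    x = np.sort(cash_flow)
    u = np.arange(len(x) + 1) / len(x)
    weights = np.diff(minmaxvar(u, gamma))
    return float(np.sum(x * weights))

def ask_price(cash_flow, gamma):
    return -bid_price(-cash_flow, gamma)

rng = np.random.default_rng(42)
payoff = np.maximum(rng.normal(100, 15, 100_000) - 100, 0.0)   # call-like payoff
for g in (0.0, 0.25, 0.5):
    print(f"gamma={g:.2f}: bid={bid_price(payoff, g):.3f}, ask={ask_price(payoff, g):.3f}")
```

At stress level zero both prices collapse to the expected value; as the stress level rises the bid falls and the ask rises, which is the bid-ask spread behaviour the abstract describes.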
APA, Harvard, Vancouver, ISO, and other styles
45

Ulgen, Burcin Emre. "Estimation In The Simple Linear Regression Model With One-fold Nested Error." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/3/12606171/index.pdf.

Full text
Abstract:
In this thesis, estimation in the simple linear regression model with one-fold nested error is studied. To estimate the fixed effect parameters, generalized least squares and maximum likelihood estimation procedures are reviewed. Moreover, the Minimum Norm Quadratic Estimator (MINQE), the Almost Unbiased Estimator (AUE) and the Restricted Maximum Likelihood Estimator (REML) of the variance of the primary units are derived. Confidence intervals for the fixed effect parameters and the variance components are also studied. Finally, the aforesaid estimation techniques and confidence intervals are applied to real-life data and the results are presented.
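As an approximate modern counterpart to the estimation procedures reviewed here, the sketch below fits a random-intercept model by REML with statsmodels, using the random intercept per primary unit to play the role of the one-fold nested error. The simulated data and variable names are assumptions for illustration, not the thesis's data.

```python
# Minimal sketch: simple linear regression with a one-fold nested error
# structure, fitted as a random-intercept model by REML with statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_primary, n_sub = 20, 5                      # primary units, subunits each
unit = np.repeat(np.arange(n_primary), n_sub)
x = rng.normal(size=n_primary * n_sub)
u = rng.normal(scale=1.0, size=n_primary)     # primary-unit (nested) error
e = rng.normal(scale=0.5, size=n_primary * n_sub)
y = 2.0 + 1.5 * x + u[unit] + e

df = pd.DataFrame({"y": y, "x": x, "unit": unit})
model = smf.mixedlm("y ~ x", df, groups=df["unit"])
fit = model.fit(reml=True)                    # set reml=False for ML estimates
print(fit.summary())
```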
APA, Harvard, Vancouver, ISO, and other styles
46

Luo, Hao. "Some Aspects on Confirmatory Factor Analysis of Ordinal Variables and Generating Non-normal Data." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-149423.

Full text
Abstract:
This thesis, which consists of five papers, is concerned with various aspects of confirmatory factor analysis (CFA) of ordinal variables and the generation of non-normal data. The first paper studies the performance of different estimation methods used in CFA when ordinal data are encountered. To take ordinality into account, four estimation methods, i.e., maximum likelihood (ML), unweighted least squares, diagonally weighted least squares, and weighted least squares (WLS), are used in combination with polychoric correlations. The effect of model size and number of categories on the parameter estimates, their standard errors, and the common chi-square measure of fit is examined for both correctly specified and misspecified models. The second paper focuses on the appropriate estimator of the polychoric correlation when fitting a CFA model. A non-parametric polychoric correlation coefficient based on the discrete version of Spearman's rank correlation is proposed to contend with non-normal underlying distributions. The simulation study shows the benefits of using the non-parametric polychoric correlation under conditions of non-normality. The third paper raises the issue of simultaneous factor analysis. We study the effect of pooling multi-group data on the estimation of factor loadings. Given the same factor loadings but different factor means and correlations, we investigate how much information is lost by pooling the groups together and estimating only the combined data set using the WLS method. The parameter estimates and their standard errors are compared with results obtained by multi-group analysis using ML. The fourth paper uses a Monte Carlo simulation to assess the reliability of Fleishman's power method under various conditions of skewness, kurtosis, and sample size. Based on the generated non-normal samples, the power of D'Agostino's (1986) normality test is studied. The fifth paper extends the evaluation of algorithms to the generation of multivariate non-normal data. Apart from the requirement of generating reliable skewness and kurtosis, the generated data also need to possess the desired correlation matrices. Four algorithms are investigated in terms of simplicity, generality, and reliability of the technique.
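The fourth and fifth papers rely on Fleishman's power method. The sketch below is illustrative only: it solves for the polynomial coefficients of Y = a + bZ + cZ^2 + dZ^3 for a target skewness and excess kurtosis, using the moment equations as they are commonly stated in the literature, and checks the result by simulation; the targets and starting values are arbitrary.

```python
# Minimal sketch of Fleishman's power method: Y = a + bZ + cZ^2 + dZ^3 with
# Z ~ N(0,1) and a = -c; the coefficients are solved numerically from the
# moment equations for a target skewness g1 and excess kurtosis g2.
import numpy as np
from scipy.optimize import fsolve

def fleishman_equations(coef, g1, g2):
    b, c, d = coef
    v = b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1.0
    s = 2*c*(b**2 + 24*b*d + 105*d**2 + 2) - g1
    k = 24*(b*d + c**2*(1 + b**2 + 28*b*d)
            + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - g2
    return [v, s, k]

g1, g2 = 1.0, 1.5                                   # target skewness, excess kurtosis
b, c, d = fsolve(fleishman_equations, x0=[1.0, 0.1, 0.0], args=(g1, g2))
a = -c

rng = np.random.default_rng(3)
z = rng.standard_normal(1_000_000)
y = a + b*z + c*z**2 + d*z**3

skew = np.mean((y - y.mean())**3) / y.std()**3
exkurt = np.mean((y - y.mean())**4) / y.std()**4 - 3
print(f"coefficients: a={a:.4f} b={b:.4f} c={c:.4f} d={d:.4f}")
print(f"simulated skewness={skew:.2f}, excess kurtosis={exkurt:.2f}")
```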
APA, Harvard, Vancouver, ISO, and other styles
47

Rendas, Luís Manuel Pinto. "Estimação da densidade populacional em amostragem por transectos lineares com recurso ao modelo logspline." Master's thesis, Universidade de Évora, 2001. http://hdl.handle.net/10174/15256.

Full text
Abstract:
We present a brief historical overview of line transect sampling and its underlying theory. We describe the estimation method most commonly used at present, proposed by Buckland (1992a), and discuss its most relevant features in terms of the assumptions used and the choice of an adequate detection function. We review a recent theory called logspline density estimation, developed by Koo & Stone (1986a, 1986b) and Stone (1990), which estimates the logarithm of a probability density function using cubic splines, maximum likelihood estimation, and the addition and deletion of knots selected by the Rao and Wald statistics, respectively. We make a small adaptation that allows this theory to be applied to the estimation of the detection probability f(0) in the line transect setting, and hence to the estimation of animal population density. Two practical examples are analysed: the wooden stake data and the African ungulate data described and studied by Burnham et al. (1980). We compare the results obtained with the logspline methodology to those produced by the program DISTANCE. The logspline methodology applied to line transects is then evaluated through a set of simulations of animal populations, using six detection functions and six different population sizes. The calculations were carried out with the programs DISTANCE and POLSPLINE, together with small programs we developed to generate and format the data, compute the measures used, and generate bootstrap samples for the confidence intervals in the logspline estimation. Finally, the results are discussed and perspectives for future development are pointed out.
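As a simpler baseline than the logspline approach, the sketch below illustrates the standard line transect estimator D = n f(0) / (2L) with a half-normal detection function fitted by maximum likelihood. The perpendicular distances, the transect length and the half-normal choice are assumptions for illustration, not the dissertation's data or method.

```python
# Minimal sketch of the classical line transect estimator D = n * f(0) / (2L)
# with a half-normal detection function fitted by maximum likelihood
# (hypothetical perpendicular distances in metres; L is total transect length).
import numpy as np

rng = np.random.default_rng(11)
distances = np.abs(rng.normal(0.0, 12.0, size=80))   # stand-in field data
L = 5000.0                                           # total transect length (m)
n = len(distances)

# Half-normal detection: g(x) = exp(-x^2 / (2 sigma^2)); the ML estimate of
# sigma^2 is simply the mean of the squared perpendicular distances.
sigma2_hat = np.mean(distances**2)
f0_hat = np.sqrt(2.0 / (np.pi * sigma2_hat))         # estimated pdf at distance 0

density_hat = n * f0_hat / (2.0 * L)                 # animals per square metre
print(f"sigma_hat = {np.sqrt(sigma2_hat):.1f} m, f(0) = {f0_hat:.4f} 1/m")
print(f"estimated density = {density_hat * 1e6:.1f} animals per km^2")
```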
APA, Harvard, Vancouver, ISO, and other styles
48

Reichmanová, Barbora. "Užití modelů diskrétních dat." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2018. http://www.nusl.cz/ntk/nusl-392846.

Full text
Abstract:
When analysing data on plant growth in a row of a given length, we should consider both the probability that a seed grows successfully and the random number of seeds that were sown. The whole thesis is therefore devoted to the analysis of random sums, where the number of independent identically distributed summands is a random number independent of them. The first part of the thesis covers the theoretical background, defines the notion of a random sum and presents its properties, such as numerical measures of location and the functional characteristics describing the distribution. Subsequently, parameter estimation by the maximum likelihood method and generalized linear models are discussed; the quasi-likelihood method is also briefly mentioned. This part is illustrated with examples related to the motivating problem. The last chapter is devoted to an application to real data and the subsequent analysis.
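The motivating plant-growth example lends itself to a small worked illustration on simulated data (not the thesis's dataset): if the number of seeds sown in a row is Poisson(lambda) and each seed emerges independently with probability p, the number of plants is a random sum that is again Poisson with mean lambda * p, and p has a simple maximum likelihood estimate when the seed counts are observed.

```python
# Minimal sketch: a random sum X = sum of N i.i.d. Bernoulli(p) indicators with
# N ~ Poisson(lam) independent of them; by Poisson thinning X ~ Poisson(lam * p).
# When the seed counts N are observed, the ML estimate of p is sum(X) / sum(N).
import numpy as np

rng = np.random.default_rng(2024)
lam, p, rows = 40.0, 0.7, 500

N = rng.poisson(lam, size=rows)            # seeds sown per row
X = rng.binomial(N, p)                     # plants that emerged per row

print("theoretical mean of X:", lam * p)
print("empirical mean of X:  ", round(X.mean(), 2))
print("ML estimate of p given N:", round(X.sum() / N.sum(), 3))
```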
APA, Harvard, Vancouver, ISO, and other styles
49

Xu, Xingbai Xu. "Asymptotic Analysis for Nonlinear Spatial and Network Econometric Models." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1461249529.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Mishchenko, Kateryna. "Numerical Algorithms for Optimization Problems in Genetical Analysis." Doctoral thesis, Västerås : Scool of education, Culture and Communication, Mälardalen University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-650.

Full text
APA, Harvard, Vancouver, ISO, and other styles
