To see the other types of publications on this topic, follow the link: Estimation theory – Simulation methods.

Dissertations / Theses on the topic 'Estimation theory – Simulation methods'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Estimation theory – Simulation methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Laughlin, Trevor William. "A parametric and physics-based approach to structural weight estimation of the hybrid wing body aircraft." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45829.

Full text
Abstract:
Estimating the structural weight of a Hybrid Wing Body (HWB) aircraft during conceptual design has proven to be a significant challenge due to its unconventional configuration. Aircraft structural weight estimation is critical during the early phases of design because inaccurate estimations could result in costly design changes or jeopardize the mission requirements and thus degrade the concept's overall viability. The tools and methods typically employed for this task are inadequate since they are derived from historical data generated by decades of tube-and-wing style construction. In addition to the limited applicability of these empirical models, the conceptual design phase requires that any new tools and methods be flexible enough to enable design space exploration without consuming a significant amount of time and computational resources. This thesis addresses these challenges by developing a parametric and physics-based modeling and simulation (M&S) environment for the purpose of HWB structural weight estimation. The tools in the M&S environment are selected based on their ability to represent the unique HWB geometry and model the physical phenomena present in the centerbody section. The new M&S environment is used to identify key design parameters that significantly contribute to the variability of the HWB centerbody structural weight and also used to generate surrogate models. These surrogate models can augment traditional aircraft sizing routines and provide improved structural weight estimations.
APA, Harvard, Vancouver, ISO, and other styles
2

Tang, Qi. "Comparison of Different Methods for Estimating Log-normal Means." Digital Commons @ East Tennessee State University, 2014. https://dc.etsu.edu/etd/2338.

Full text
Abstract:
The log-normal distribution is a popular model in many areas, especially in biostatistics and survival analysis, where the data tend to be right skewed. In our research, a total of ten different estimators of log-normal means are compared theoretically. Simulations are done using different values of the parameters and sample size. As a result of the comparison, a "degree of freedom adjusted" maximum likelihood estimator and a Bayesian estimator under quadratic loss are the best when using the mean square error (MSE) as a criterion. The ten estimators are applied to a real dataset, an environmental study from the Naval Construction Battalion Center (NCBC) Superfund site in Rhode Island.
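As a rough, hypothetical illustration of the kind of simulation comparison the abstract describes, the sketch below contrasts the naive sample mean with the plug-in maximum likelihood estimator of a log-normal mean under mean square error. The parameter values, sample size, and the two estimators compared are assumptions for illustration, not the ten estimators studied in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n, reps = 1.0, 1.0, 30, 20_000      # assumed values, not from the thesis
    true_mean = np.exp(mu + sigma**2 / 2)          # mean of a log-normal(mu, sigma^2)

    est_sample, est_mle = np.empty(reps), np.empty(reps)
    for r in range(reps):
        x = rng.lognormal(mean=mu, sigma=sigma, size=n)
        est_sample[r] = x.mean()                   # naive estimator: arithmetic sample mean
        logx = np.log(x)
        mu_hat, s2_hat = logx.mean(), logx.var(ddof=0)   # MLEs of mu and sigma^2
        est_mle[r] = np.exp(mu_hat + s2_hat / 2)   # plug-in MLE of the log-normal mean

    for name, est in [("sample mean", est_sample), ("plug-in MLE", est_mle)]:
        mse = np.mean((est - true_mean) ** 2)
        print(f"{name:12s}  MSE = {mse:.4f}")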
APA, Harvard, Vancouver, ISO, and other styles
3

Alcantara, Adeilton Pedro de 1973. "Inferência não paramétrica baseada no método H-splines para a intensidade de processos de Poisson não-homogêneos." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306513.

Full text
Abstract:
Advisors: Ronaldo Dias, Nancy Lopes Garcia. Doctoral thesis (Doutorado em Estatística), Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica, 2012. Abstract: The main goal of this thesis is to propose a new methodology, based on expansion in B-spline bases and their variants, for non-parametric estimation of the intensity function of non-homogeneous Poisson processes. Note: the complete abstract is available in the full electronic document.
APA, Harvard, Vancouver, ISO, and other styles
4

Choy, Vivian K. Y. 1971. "Estimating the inevitability of fast oscillations in model systems with two timescales." Monash University, Dept. of Mathematics and Statistics, 2001. http://arrow.monash.edu.au/hdl/1959.1/9068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wallman, Kaj Mikael Joakim. "Computational methods for the estimation of cardiac electrophysiological conduction parameters in a patient specific setting." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:2d5573b9-5115-4434-b9c8-60f8d0531f86.

Full text
Abstract:
Cardiovascular disease is the primary cause of death globally. Although this group encompasses a heterogeneous range of conditions, many of these diseases are associated with abnormalities in the cardiac electrical propagation. In these conditions, structural abnormalities in the form of scars and fibrotic tissue are known to play an important role, leading to a high individual variability in the exact disease mechanisms. Because of this, clinical interventions such as ablation therapy and CRT that work by modifying the electrical propagation should ideally be optimized on a patient specific basis. As a tool for optimizing these interventions, computational modelling and simulation of the heart have become increasingly important. However, in order to construct these models, a crucial step is the estimation of tissue conduction properties, which have a profound impact on the cardiac activation sequence predicted by simulations. Information about the conduction properties of the cardiac tissue can be gained from electrophysiological data, obtained using electroanatomical mapping systems. However, as in other clinical modalities, electrophysiological data are often sparse and noisy, and this results in high levels of uncertainty in the estimated quantities. In this dissertation, we develop a methodology based on Bayesian inference, together with a computationally efficient model of electrical propagation to achieve two main aims: 1) to quantify values and associated uncertainty for different tissue conduction properties inferred from electroanatomical data, and 2) to design strategies to optimise the location and number of measurements required to maximise information and reduce uncertainty. The methodology is validated in several studies performed using simulated data obtained from image-based ventricular models, including realistic fibre orientation and conduction heterogeneities. Subsequently, by using the developed methodology to investigate how the uncertainty decreases in response to added measurements, we derive an a priori index for placing electrophysiological measurements in order to optimise the information content of the collected data. Results show that the derived index has a clear benefit in minimising the uncertainty of inferred conduction properties compared to a random distribution of measurements, suggesting that the methodology presented in this dissertation provides an important step towards improving the quality of the spatiotemporal information obtained using electroanatomical mapping.
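The abstract describes Bayesian inference of conduction properties from sparse, noisy electroanatomical measurements. The toy sketch below is not the thesis's image-based propagation model; it infers a single scalar conduction velocity from noisy activation times under an assumed straight-line propagation model, using a random-walk Metropolis sampler to quantify posterior uncertainty. All numerical values are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy forward model (an assumption): activation time at distance d from a
    # pacing site is d / v for a scalar conduction velocity v, plus Gaussian noise.
    dist = np.linspace(5.0, 50.0, 10)          # mm, hypothetical electrode positions
    v_true, noise_sd = 0.6, 2.0                # mm/ms and ms, assumed values
    t_obs = dist / v_true + rng.normal(0.0, noise_sd, size=dist.size)

    def log_post(v):
        if v <= 0:
            return -np.inf                     # flat prior on v > 0
        resid = t_obs - dist / v
        return -0.5 * np.sum((resid / noise_sd) ** 2)

    # Random-walk Metropolis sampler for the posterior over v
    samples, v = [], 1.0
    for _ in range(20_000):
        prop = v + rng.normal(0.0, 0.05)
        if np.log(rng.uniform()) < log_post(prop) - log_post(v):
            v = prop
        samples.append(v)

    post = np.array(samples[5_000:])           # discard burn-in
    print(f"posterior mean v = {post.mean():.3f}, 95% interval = "
          f"({np.percentile(post, 2.5):.3f}, {np.percentile(post, 97.5):.3f})")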
APA, Harvard, Vancouver, ISO, and other styles
6

野口, 裕之, and Hiroyuki NOGUCHI. "<原著>項目困難度の分布の偏りが IRT 項目パラメタの発見的推定値に与える影響" [The effect of skew in the item difficulty distribution on heuristic estimates of IRT item parameters]. 名古屋大学教育学部 (Nagoya University, School of Education), 1992. http://hdl.handle.net/2237/3870.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Moravej, Mohammadtaghi. "Investigating Scale Effects on Analytical Methods of Predicting Peak Wind Loads on Buildings." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3799.

Full text
Abstract:
Large-scale testing of low-rise buildings or components of tall buildings is essential, as it provides more representative information about realistic wind effects than typical small-scale studies, but as the model size increases, relatively less large-scale turbulence in the oncoming flow can be generated. This results in a turbulence power spectrum lacking low-frequency turbulence content. This deficiency is known to have significant effects on the estimated peak wind loads. To overcome these limitations, the method of Partial Turbulence Simulation (PTS) has recently been developed in the FIU Wall of Wind lab to analytically compensate for the effects of the missing low-frequency content of the spectrum. This method requires post-test analysis procedures and is based on quasi-steady assumptions. The current study was an effort to enhance that technique by investigating the effect of scaling and the range of applicability of the method, considering the limitations arising from the underlying theory, and to simplify the 2DPTS (which includes both in-plane components of the turbulence) by proposing a weighted average method. Investigating the effect of Reynolds number on peak aerodynamic pressures was another objective of the study. The results from five tested building models show that as the model size was increased, the PTS results were in better agreement with the available field data from the TTU building. Although for the smaller models (i.e., 1:100 and 1:50) almost the full range of the turbulence spectrum was present, the highest peaks observed at full scale were not reproduced, which apparently was because of the Reynolds number effect. The most accurate results were obtained when the PTS was used in the case with the highest Reynolds number, which was the 1:6 scale model with less than 5% blockage and a xLum/bm ratio of 0.78. Besides that, the results showed that the weighted average PTS method can be used in lieu of the 2DPTS approach. To achieve the most accurate results, a large-scale test followed by a PTS peak estimation method therefore appears to be the desirable approach, which also allows xLum/bm values much smaller than the ASCE-recommended values.
APA, Harvard, Vancouver, ISO, and other styles
8

King, David R. "A bayesian solution for the law of categorical judgment with category boundary variability and examination of robustness to model violations." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/52960.

Full text
Abstract:
Previous solutions for the Law of Categorical Judgment with category boundary variability have either constrained the standard deviations of the category boundaries in some way or have violated the assumptions of the scaling model. In the current work, a fully Bayesian Markov chain Monte Carlo solution for the Law of Categorical Judgment is given that estimates all model parameters (i.e., scale values, category boundaries, and the associated standard deviations). The importance of measuring category boundary standard deviations is discussed in the context of previous research in signal detection theory, which gives evidence of interindividual variability in how respondents perceive category boundaries and even intraindividual variability in how a respondent perceives category boundaries across trials. Although the measurement of category boundary standard deviations appears to be important for describing the way respondents perceive category boundaries on the latent scale, the inclusion of category boundary standard deviations in the scaling model exposes an inconsistency between the model and the rating method. Namely, with category boundary variability, the scaling model suggests that a respondent could experience disordinal category boundaries on a given trial. However, the idea that a respondent actually experiences disordinal category boundaries seems unlikely. The discrepancy between the assumptions of the scaling model and the way responses are made at the individual level indicates that the assumptions of the model will likely not be met. Therefore, the current work examined how well model parameters could be estimated when the assumptions of the model were violated in various ways as a consequence of disordinal category boundary perceptions. A parameter recovery study examined the effect of model violations on estimation accuracy by comparing estimates obtained from three response processes that violated the assumptions of the model with estimates obtained from a novel response process that did not violate the assumptions of the model. Results suggest all parameters in the Law of Categorical Judgment can be estimated reasonably well when these particular model violations occur, albeit to a lesser degree of accuracy than when the assumptions of the model are met.
APA, Harvard, Vancouver, ISO, and other styles
9

Pettersson, Tobias. "Global optimization methods for estimation of descriptive models." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11781.

Full text
Abstract:
Using mathematical models to understand and store knowledge about a system is not a new field in science, with early contributions dating back to, e.g., Kepler's laws of planetary motion. The aim is to obtain such comprehensive predictive and quantitative knowledge about a phenomenon that mathematical expressions or models can be used to forecast every relevant detail about that phenomenon. Such models can be used for reducing pollution from car engines, preventing aviation incidents, or developing new therapeutic drugs. Models used to forecast, or predict, the behavior of a system are referred to as predictive models. For such models, the estimation problem aims to find one model; it is well known and can be handled using standard methods for global nonlinear optimization. Descriptive models are used to obtain and store quantitative knowledge of a system. Estimation of descriptive models has not been described much in the literature so far; instead, the methods used for predictive models have been applied. Rather than finding one particular model, parameter estimation for descriptive models aims to find every model that contains descriptive information about the system. Thus, the parameter estimation problem for descriptive models cannot be stated as a standard optimization problem. The main objective of this thesis is to propose methods for the estimation of descriptive models. This is done using methods for nonlinear optimization, including both new and existing theory.
APA, Harvard, Vancouver, ISO, and other styles
10

Davis, J. Wade. "Wavelet-based methods for estimation and discrimination /." free to MU campus, to others for purchase, 2003. http://wwwlib.umi.com/cr/mo/fullcit?p3099616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Azzam, M. E. H. "Developments in decomposition methods for power system state estimation." Thesis, Brunel University, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.355486.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Peng, Linghua. "Normalizing constant estimation for discrete distribution simulation /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Wong, Oi-ling Irene, and 黃愛玲. "A study of Saddlepoint-based resampling methods." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31223667.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Wong, Oi-ling Irene. "A study of Saddlepoint-based resampling methods /." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B21629912.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Kelly, P. J. "A comparison of classical and robust methods of parameter estimation." Thesis, University of Newcastle Upon Tyne, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383998.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Adolfsson, David, and Tom Claesson. "Estimation methods for Asian Quanto Basket options." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160920.

Full text
Abstract:
All financial institutions that provide options to counterparties will in most cases get involved with Monte Carlo simulations. Options with a payoff function that depends on the asset's value at different time points over its lifespan are so-called path-dependent options. This path dependency implies that there exists no parametric solution and the price must hence be estimated; it is here that Monte Carlo methods come into the picture. The problem with this fundamental option pricing method is the computational time. Prices fluctuate continuously on the open market with respect to different risk factors, and since it is impossible to re-evaluate the option for all shifts due to its computationally intensive nature, estimations of the option price must be used. Estimating the price from known points will of course never produce the same result as a full re-evaluation, but an estimation method that produces reliable results and greatly reduces computing time is desirable. This thesis will evaluate different approaches and try to minimize the estimation error with respect to a certain number of risk factors. This is the background for our master thesis at Swedbank. The goal is to create multiple estimation methods and compare them to Swedbank's current estimation model. By doing this we could potentially provide Swedbank with improvement ideas regarding some of its option products and risk measurements. This thesis is primarily based on two estimation methods that estimate option prices with respect to two variable risk factors, the value of the underlying assets and volatility. The first method is a grid that uses a second-order Taylor expansion and the sensitivities delta, gamma and vega. The other method uses a grid of pre-simulated option prices for different shifts in risk factors. The interpolation technique that is used in this method is called Piecewise Cubic Hermite interpolation. The methods (referred to as approaches in the report) are implemented to handle a relative change of 50 percent in the underlying asset's index value, which is the first risk factor. Concerning the second risk factor, volatility, both methods estimate prices for a 50 percent relative downward change and an upward change of 400 percent from the initial volatility. Should there emerge even more extreme market conditions, both methods use linear extrapolation to estimate a new option price.
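A minimal sketch of the first estimation approach described above: a second-order Taylor expansion in the underlying and first-order in volatility, using delta, gamma and vega. A European call under Black-Scholes is used here as a stand-in for the path-dependent Asian quanto basket price, so the model and all numbers are illustrative assumptions only.

    import numpy as np
    from scipy.stats import norm

    def bs_call(S, K, T, r, vol):
        d1 = (np.log(S / K) + (r + 0.5 * vol**2) * T) / (vol * np.sqrt(T))
        d2 = d1 - vol * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    S0, K, T, r, vol0 = 100.0, 100.0, 1.0, 0.01, 0.2   # assumed market data
    h_S, h_v = 1e-2, 1e-4                              # finite-difference steps

    price0 = bs_call(S0, K, T, r, vol0)
    delta = (bs_call(S0 + h_S, K, T, r, vol0) - bs_call(S0 - h_S, K, T, r, vol0)) / (2 * h_S)
    gamma = (bs_call(S0 + h_S, K, T, r, vol0) - 2 * price0 + bs_call(S0 - h_S, K, T, r, vol0)) / h_S**2
    vega  = (bs_call(S0, K, T, r, vol0 + h_v) - bs_call(S0, K, T, r, vol0 - h_v)) / (2 * h_v)

    def taylor_estimate(dS, dvol):
        # Second order in the underlying, first order in volatility
        return price0 + delta * dS + 0.5 * gamma * dS**2 + vega * dvol

    for dS, dvol in [(5.0, 0.0), (-10.0, 0.05), (20.0, -0.05)]:
        exact = bs_call(S0 + dS, K, T, r, vol0 + dvol)
        approx = taylor_estimate(dS, dvol)
        print(f"shift (dS={dS:+.0f}, dvol={dvol:+.2f}): exact={exact:.3f}  taylor={approx:.3f}")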
APA, Harvard, Vancouver, ISO, and other styles
17

Kuha, Jouni. "Simulation-based estimation methods for regression models with covariate measurement error." Thesis, University of Southampton, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389854.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Patel, Pansy. "SIMULATION OF PHOTOCHROMIC COMPOUNDS USING DENSITY FUNCTIONAL THEORY METHODS." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2208.

Full text
Abstract:
This thesis describes a systematic theoretical study aimed at predicting the essential properties of functional organic molecules that belong to the diarylethene (DA) family of compounds. Diarylethenes present the distinct ability to change color under the influence of light, known as photochromism. This change is due to an ultrafast chemical transition from open to closed ring isomers (photocyclization). It can be used for optical data storage, photoswitching, and other photonic applications. In this work we apply Density Functional Theory methods to predict six related properties: (i) molecular geometry; (ii) resonant wavelength; (iii) thermal stability; (iv) fatigue resistance; (v) quantum yield and (vi) nanoscale organization of the material. In order to study sensitivity at diode laser wavelengths, we optimized geometry and calculated vertical absorption spectra for a benchmark set of 28 diarylethenes. Bond length alternation (BLA) parameters and maximum absorption wavelengths (λmax) are compared to the data presently available from X-ray diffraction and spectroscopy experiments. We conclude that the TD-M05/6-31G*/PCM//M05-2X/6-31G*/PCM level of theory gives the best agreement for both parameters. For our predictions the root mean square deviations (RMSD) are below 0.014 Å for the BLAs and 25 nm for λmax. The polarization functions in the basis set and solvent effects are both important for this agreement. Next we consider thermal stability. Our results suggest that the UB3LYP and UM05-2X functionals predict the activation barrier for the cycloreversion reaction within 3-4 kcal/mol of the experimental value for a set of 7 photochromic compounds. We also study thermal fatigue, defined as the rate of undesirable photochemical side reactions. In order to predict the kinetics of photochemical fatigue, we investigate the mechanism of by-product formation. It has been established experimentally that the by-product is formed from the closed isomer; however, the mechanism was not known. We found that the thermal by-product pathway involves bicyclohexane (BCH) ring formation as a stable intermediate, while the photochemical by-product formation pathway may involve the methylcyclopentene diradical (MCPD) intermediate. At the UM05-2X/6-31G* level, the calculated barrier between the closed form and the BCH intermediate is 51.2 kcal/mol and the barrier between the BCH intermediate and the by-product is 16.2 kcal/mol. Next we investigate two theoretical approaches to the prediction of quantum yield (QY) for a set of 14 diarylethene derivatives at the validated M05-2X/6-31G* theory level. These include the population of ground-state conformers and the location of the pericyclic minimum on the 2-A state potential energy surface. Finally, we investigate the possibility of nanoscale organization of the photochromic material based on a DNA template, as an alternative to the amorphous polymer matrix. Here we demonstrate that Molecular Dynamics methods are capable of describing the intercalation of π-conjugated systems between DNA base pairs and accurately reproduce the available photophysical properties of these nanocomposites.
In summary, our results are in good agreement with the experimental data for the benchmark set of molecules, and we conclude that Density Functional Theory methods can be successfully used as an important component of a material design strategy in the prediction of accurate molecular geometry, absorption spectra, thermal stability of isomers, fatigue resistance, quantum yield of photocyclization and photophysical properties of nanocomposites. Ph.D., Department of Chemistry.
APA, Harvard, Vancouver, ISO, and other styles
19

Meterelliyoz, Kuyzu Melike. "Variance parameter estimation methods with re-use of data." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26490.

Full text
Abstract:
Thesis (Ph.D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2009. Committee Co-Chair: Alexopoulos, Christos; Committee Co-Chair: Goldsman, David; Committee Member: Kim, Seong-Hee; Committee Member: Shapiro, Alexander; Committee Member: Spruill, Carl. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
20

Kim, Heeyoung. "Statistical methods for function estimation and classification." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/44806.

Full text
Abstract:
This thesis consists of three chapters. The first chapter focuses on adaptive smoothing splines for fitting functions with varying roughness. In the first part of the first chapter, we study an asymptotically optimal procedure to choose the value of a discretized version of the variable smoothing parameter in adaptive smoothing splines. With the choice given by the multivariate version of the generalized cross validation, the resulting adaptive smoothing spline estimator is shown to be consistent and asymptotically optimal under some general conditions. In the second part, we derive the asymptotically optimal local penalty function, which is subsequently used for the derivation of the locally optimal smoothing spline estimator. In the second chapter, we propose a Lipschitz regularity based statistical model, and apply it to coordinate measuring machine (CMM) data to estimate the form error of a manufactured product and to determine the optimal sampling positions of CMM measurements. Our proposed wavelet-based model takes advantage of the fact that the Lipschitz regularity holds for the CMM data. The third chapter focuses on the classification of functional data which are known to be well separable within a particular interval. We propose an interval based classifier. We first estimate a baseline of each class via convex optimization, and then identify an optimal interval that maximizes the difference among the baselines. Our interval based classifier is constructed based on the identified optimal interval. The derived classifier can be implemented via a low-order-of-complexity algorithm.
APA, Harvard, Vancouver, ISO, and other styles
21

Martins, Raphael dos Santos Veloso. "Cost-push channel of monetary policy: estimation and simulation." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/12/12138/tde-29032011-203758/.

Full text
Abstract:
This dissertation investigates the existence of a cost-push channel of monetary policy, using both econometric and simulation methods. In the econometric part, three different methods are used to estimate four specifications with Brazilian data, three of the specifications going beyond those estimated in the literature. The results obtained indicate the presence of the cost-push channel when these other specifications are considered; hence the existing empirical literature may have underestimated the relevance of the cost-push channel of monetary transmission. In addition, a stock-flow model was adapted and simulated, and its behavior was studied when exogenous changes in the interest rate are imposed. This exercise also illustrated the possibility of the cost-push channel operating.
APA, Harvard, Vancouver, ISO, and other styles
22

Armstrong, Helen School of Mathematics UNSW. "Bayesian estimation of decomposable Gaussian graphical models." Awarded by:University of New South Wales. School of Mathematics, 2005. http://handle.unsw.edu.au/1959.4/24295.

Full text
Abstract:
This thesis explains to statisticians what graphical models are and how to use them for statistical inference; in particular, how to use decomposable graphical models for efficient inference in covariance selection and multivariate regression problems. The first aim of the thesis is to show that decomposable graphical models are worth using within a Bayesian framework. The second aim is to make the techniques of graphical models fully accessible to statisticians. To achieve these aims the thesis makes a number of statistical contributions. First, it proposes a new prior for decomposable graphs and a simulation methodology for estimating this prior. Second, it proposes a number of Markov chain Monte Carlo sampling schemes based on graphical techniques. The thesis also presents some new graphical results, and some existing results are re-proved to make them more readily understood. Appendix 8.1 contains all the programs written to carry out the inference discussed in the thesis, together with both a summary of the theory on which they are based and a line-by-line description of how each routine works.
APA, Harvard, Vancouver, ISO, and other styles
23

Chen, Chiao-En. "Theory and applications of parametric estimation methods for sensor array signal processing." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1666392601&sid=24&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Amartey, Philomina. "A COMPARISON OF SOME ESTIMATION METHODS FOR HANDLING OMITTED VARIABLES : A SIMULATION STUDY." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412896.

Full text
Abstract:
The omitted variable problem is a primary statistical challenge in various observational studies. Failure to control for omitted variable bias in a regression analysis can affect the efficiency of the results obtained. The purpose of this study is to compare the performance of four estimation methods (Proxy variable, Instrumental Variable, Fixed Effects, First Difference) in controlling for the omitted variable when it is varying with time, constant over time, or slightly varying with time. Results from the Monte Carlo study showed that the perfect proxy variable estimator performed better than the other models in all three cases. The Instrumental Variable estimator performed better than the Fixed Effects and First Difference estimators except in the case when the omitted variable is constant over time. Also, the Fixed Effects estimator performed better than the First Difference estimator when the omitted variable is time-invariant, and vice versa when the omitted variable is slightly time-varying.
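A small Monte Carlo sketch in the spirit of the study: it compares the bias of a regression that omits a confounder, one that uses a noisy proxy for it, and one that includes it. The data-generating process, noise levels, and the restriction to cross-sectional OLS (no Fixed Effects or First Difference panel estimators) are assumptions for illustration, not the thesis's design.

    import numpy as np

    rng = np.random.default_rng(2)
    n, reps, beta_x = 500, 2_000, 1.0

    bias = {"omit z": [], "proxy for z": [], "include z": []}
    for _ in range(reps):
        z = rng.normal(size=n)                    # unobserved confounder
        x = 0.8 * z + rng.normal(size=n)          # regressor correlated with z
        proxy = z + rng.normal(scale=0.3, size=n) # imperfect proxy (assumed noise level)
        y = beta_x * x + 1.5 * z + rng.normal(size=n)

        def ols_slope(design):
            X = np.column_stack([np.ones(n)] + design)
            return np.linalg.lstsq(X, y, rcond=None)[0][1]   # coefficient on x

        bias["omit z"].append(ols_slope([x]) - beta_x)
        bias["proxy for z"].append(ols_slope([x, proxy]) - beta_x)
        bias["include z"].append(ols_slope([x, z]) - beta_x)

    for name, b in bias.items():
        print(f"{name:12s} mean bias = {np.mean(b):+.3f}")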
APA, Harvard, Vancouver, ISO, and other styles
25

Nelson, Kerrie P. "Generalized linear mixed models : development and comparison of different estimation methods /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/8960.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Fang, Fang. "A simulation study for Bayesian hierarchical model selection methods." View electronic thesis (PDF), 2009. http://dl.uncw.edu/etd/2009-2/fangf/fangfang.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Das, Mitali 1971. "Non-parametric estimation methods for instrumental variables and sample selection : theory and applications." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9828.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Brännström, Anton. "A Comparison of Three Methods of Estimation Applied to Contaminated Circular Data." Thesis, Umeå universitet, Statistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-149426.

Full text
Abstract:
This study compares the performance of the Maximum Likelihood estimator (MLE), estimators based on spacings, called Generalized Maximum Spacing estimators (GSEs), and the One Step Minimum Hellinger Distance estimator (OSMHD), on data originating from a circular distribution. The purpose of the study is to investigate the different estimators' performance on directional data. More specifically, we compare the estimators' ability to estimate parameters of the von Mises distribution, which is determined by a location parameter and a scale parameter. For this study, we only look at the scenario in which one of the parameters is unknown. The main part of the study is concerned with estimating the parameters under conditions in which the data contain outliers, but a small part is also dedicated to estimation at the true model. When estimating the location parameter under contaminated conditions, the results indicate that some versions of the GSEs tend to outperform the other estimators. It should be noted that these seemingly more robust estimators appear comparatively less optimal at the true model, but this is a tradeoff that must be made on a case-by-case basis. Under the same contaminated conditions, all included estimators appear to have greater difficulties estimating the scale parameter. However, for this case, some of the GSEs are able to handle the contamination somewhat better than the rest. In addition, there might exist other versions of GSEs, not included in this study, which perform better.
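A brief sketch of location estimation for the von Mises distribution under contamination. The maximum likelihood estimate of the location parameter is the circular mean direction; the crude trimmed variant shown is only an illustration of robustification and is not one of the GSE or OSMHD estimators studied in the thesis. All parameter values are assumed.

    import numpy as np

    rng = np.random.default_rng(3)
    mu_true, kappa, n = 0.5, 4.0, 200            # assumed parameter values

    theta = rng.vonmises(mu_true, kappa, size=n)
    n_out = 20                                   # contaminate 10% of the sample
    theta[:n_out] = rng.uniform(-np.pi, np.pi, size=n_out)

    # MLE of the location parameter: the circular mean direction
    mu_hat = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())

    # A crude trimmed variant (illustration only, not a thesis estimator): drop the
    # observations farthest from the provisional mean direction and re-estimate.
    resid = np.angle(np.exp(1j * (theta - mu_hat)))        # signed angular residuals
    keep = np.abs(resid) <= np.quantile(np.abs(resid), 0.9)
    mu_trim = np.arctan2(np.sin(theta[keep]).mean(), np.cos(theta[keep]).mean())

    print(f"true mu = {mu_true:.3f}, plain MLE = {mu_hat:.3f}, trimmed = {mu_trim:.3f}")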
APA, Harvard, Vancouver, ISO, and other styles
29

Enqvist, Per. "Spectral Estimation by Geometric, Topological and Optimization Methods." Doctoral thesis, Stockholm, 2001. http://media.lib.kth.se:8080/kthdisseng.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Prudius, Andrei A. "Adaptive Random Search Methods for Simulation Optimization." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16318.

Full text
Abstract:
This thesis is concerned with identifying the best decision among a set of possible decisions in the presence of uncertainty. We are primarily interested in situations where the objective function value at any feasible solution needs to be estimated, for example via a 'black-box' simulation procedure. We develop adaptive random search methods for solving such simulation optimization problems. The methods are adaptive in the sense that they use information gathered during previous iterations to decide how simulation effort is expended in the current iteration. We consider random search because such methods assume very little about the structure of the underlying problem, and hence can be applied to solve complex simulation optimization problems with little expertise required from an end-user. Consequently, such methods are suitable for inclusion in simulation software. We first identify desirable features that algorithms for discrete simulation optimization need to possess to exhibit attractive empirical performance. Our approach emphasizes maintaining an appropriate balance between exploration, exploitation, and estimation. We also present two new and almost surely convergent random search methods that possess these desirable features and demonstrate their empirical attractiveness. Second, we develop two frameworks for designing adaptive and almost surely convergent random search methods for discrete simulation optimization. Our frameworks involve averaging, in that all decisions that require estimates of the objective function values at various feasible solutions are based on the averages of all observations collected at these solutions so far. We present two new and almost surely convergent variants of simulated annealing and demonstrate the empirical effectiveness of averaging and adaptivity in the context of simulated annealing. Finally, we present three random search methods for solving simulation optimization problems with uncountable feasible regions. One of the approaches is adaptive, while the other two are based on pure random search. We provide conditions under which the three methods are convergent, both in probability and almost surely. Lastly, we include a computational study that demonstrates the effectiveness of the methods when compared to some other approaches available in the literature.
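A toy sketch of the two ideas emphasised above: balancing exploration against exploitation, and basing decisions on averages of all observations collected at each solution. The test problem, sampling probabilities, and neighbourhood rule are assumptions for illustration and do not reproduce the algorithms developed in the thesis.

    import numpy as np

    rng = np.random.default_rng(4)

    def noisy_objective(x):
        # Assumed test problem: minimize (x - 7)^2 observed through simulation noise
        return (x - 7) ** 2 + rng.normal(scale=4.0)

    solutions = np.arange(0, 21)                 # discrete feasible region {0, ..., 20}
    sums = np.zeros(solutions.size)              # running sums of observations
    counts = np.zeros(solutions.size)

    best = rng.integers(solutions.size)
    for it in range(2_000):
        # Exploration vs exploitation: sample uniformly with probability 0.3,
        # otherwise sample near the current sample-mean-best solution.
        if rng.uniform() < 0.3:
            j = rng.integers(solutions.size)
        else:
            j = np.clip(best + rng.integers(-2, 3), 0, solutions.size - 1)
        sums[j] += noisy_objective(solutions[j])
        counts[j] += 1
        means = np.where(counts > 0, sums / np.maximum(counts, 1), np.inf)
        best = int(np.argmin(means))             # decisions based on averaged observations

    print("estimated best solution:", solutions[best])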
APA, Harvard, Vancouver, ISO, and other styles
31

Svärd, Karl. "Developing new methods for estimating population divergence times from sequence data." Thesis, Uppsala universitet, Institutionen för medicinsk biokemi och mikrobiologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-450123.

Full text
Abstract:
Methods for estimating past demographic events of populations are powerful tools for gaining insights into otherwise hidden pasts. The genetic data of people are a valuable resource for these purposes, as patterns of variation reflect the past evolutionary forces and historical events that generated them. There is, however, a lack of methods within the field that use this information to its full extent. That is why this project has looked at developing a set of new alternatives for estimating demographic events. The work has been based on modifying the purely sequence-based method TTo (Two-Two-outgroup) for estimating divergence times of two populations. The modifications consisted of using beta distributions to model the polymorphic diversity of the ancestral population in order to increase the maximum possible sample size. The finished project resulted in two implemented methods: TT-beta and a partial variant of MM. TT-beta was able to produce estimates in the same region as TTo and showed that the use of beta distributions has real potential. For MM, only a partial implementation was possible, but it also showed promise and the ability to use varying sample sizes to estimate demographic values.
APA, Harvard, Vancouver, ISO, and other styles
32

Rydén, Patrik. "Statistical analysis and simulation methods related to load-sharing models." Doctoral thesis, Umeå universitet, Matematisk statistik, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-46772.

Full text
Abstract:
We consider the problem of estimating the reliability of bundles constructed of several fibres, given a particular kind of censored data. The bundles consist of several fibres which have their own independent identically distributed failure stresses (i.e., the forces that destroy the fibres). The force applied to a bundle is distributed between the fibres in the bundle, according to a load-sharing model. A bundle with these properties is an example of a load-sharing system. Ropes constructed of twisted threads, composite materials constructed of parallel carbon fibres, and suspension cables constructed of steel wires are all examples of load-sharing systems. In particular, we consider bundles where load-sharing is described by either the Equal load-sharing model or the more general Local load-sharing model. In order to estimate the cumulative distribution function of failure stresses of bundles, we need some observed data. This data is obtained either by testing bundles or by testing individual fibres. In this thesis, we develop several theoretical testing methods for both fibres and bundles, and related methods of statistical inference. Non-parametric and parametric estimators of the cumulative distribution functions of failure stresses of fibres and bundles are obtained from different kinds of observed data. It is proved that most of these estimators are consistent, and that some are strongly consistent estimators. We show that resampling, in this case random sampling with replacement from statistically independent portions of data, can be used to assess the accuracy of these estimators. Several numerical examples illustrate the behavior of the obtained estimators. These examples suggest that the obtained estimators usually perform well when the number of observations is moderate.
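A minimal sketch of the equal load-sharing case: under Daniels' equal load-sharing rule the bundle strength equals max_k (n - k + 1) x_(k) for the ascending order statistics of the fibre strengths, and resampling can be used to gauge the accuracy of the resulting empirical distribution. The Weibull fibre-strength law and the uncensored data are illustrative assumptions, not the censored-data setting or local load-sharing model of the thesis.

    import numpy as np

    rng = np.random.default_rng(5)

    def bundle_strength(fibre_strengths):
        # Equal load-sharing (Daniels) bundle: the applied load is shared equally
        # among surviving fibres, so the bundle strength is max_k (n - k + 1) * x_(k)
        # for the ascending order statistics x_(1) <= ... <= x_(n).
        x = np.sort(fibre_strengths)
        n = x.size
        return np.max((n - np.arange(n)) * x)

    n_fibres, n_bundles = 20, 5_000
    strengths = rng.weibull(a=3.0, size=(n_bundles, n_fibres))   # assumed fibre-strength law
    q = np.array([bundle_strength(row) for row in strengths])

    # Empirical CDF of bundle strength, with a simple bootstrap (resampling with
    # replacement) to indicate its accuracy at a few quantiles
    grid = np.quantile(q, [0.1, 0.5, 0.9])
    boot = np.array([np.mean(rng.choice(q, size=q.size)[:, None] <= grid, axis=0)
                     for _ in range(200)])
    los, his = np.percentile(boot, 2.5, axis=0), np.percentile(boot, 97.5, axis=0)
    for g, lo, hi in zip(grid, los, his):
        print(f"P(strength <= {g:.2f}) is roughly in ({lo:.3f}, {hi:.3f})")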
APA, Harvard, Vancouver, ISO, and other styles
33

Magnevall, Martin. "Methods for Simulation and Characterization of Nonlinear Mechanical Structures." Licentiate thesis, Karlskrona : Blekinge Institute of Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00430.

Full text
Abstract:
Trial and error and the use of highly time-consuming methods are often necessary for modeling, simulating and characterizing nonlinear dynamical systems. However, for the rather common special case when a nonlinear system has linear relations between many of its degrees of freedom there are particularly interesting opportunities for more efficient approaches. The aim of this thesis is to develop and validate new efficient methods for the theoretical and experimental study of mechanical systems that include significant zero-memory or hysteretic nonlinearities related to only small parts of the whole system. The basic idea is to take advantage of the fact that most of the system is linear and to use much of the linear theories behind forced response simulations. This is made possible by modeling the nonlinearities as external forces acting on the underlying linear system. The result is very fast simulation routines where the model is based on the residues and poles of the underlying linear system. These residues and poles can be obtained analytically, from finite element models or from experimental measurements, making these forced response routines very versatile. Using this approach, a complete nonlinear model contains both linear and nonlinear parts. Thus, it is also important to have robust and accurate methods for estimating both the linear and nonlinear system parameters from experimental data. The results of this work include robust and user-friendly routines based on sinusoidal and random noise excitation signals for characterization and description of nonlinearities from experimental measurements. These routines are used to create models of the studied systems. When combined with efficient simulation routines, complete tools are created which are both versatile and computationally inexpensive. The developed methods have been tested both by simulations and with experimental test rigs with promising results. This indicates that they are useful in practice and can provide a basis for future research and development of methods capable of handling more complex nonlinear systems.
APA, Harvard, Vancouver, ISO, and other styles
34

Dube, Ntuthuko Marcus. "Development of methods for modelling, parameter and state estimation for nonlinear processes." Thesis, Cape Peninsula University of Technology, 2017. http://hdl.handle.net/20.500.11838/2619.

Full text
Abstract:
Thesis (DTech (Electrical Engineering))--Cape Peninsula University of Technology, 2018. Industrial processes tend to have very complex mathematical models that in most instances result in very model-specific optimal estimation and control strategy designs. Such models have many composition components, energy compartments and energy inventories, which result in many process variables that are intertwined and too complex to separate from one another. Most of the derived mathematical process models, based on the application of first principles, are nonlinear and incorporate unknown parameters and unmeasurable states. This fact results in difficulties in the design and implementation of controllers for a majority of industrial processes. The existing parameter and state estimation methods need to be developed further, and new methods need to be developed, in order to simplify the calculation of parameters or states and to be applicable for real-time implementation of various controllers for nonlinear systems. The thesis describes the research work done on developing new parameter and state estimation methods and algorithms for bilinear and nonlinear processes. The continuous countercurrent ion exchange (CCIX) process for desalination of water is considered as a case study of a process that can be modelled as a bilinear system with affine parameters or as a purely nonlinear system. Many models of industrial processes can be presented in such a way. The ion exchange process model is developed from the mass balance principle as a state-space bilinear model in the state and control variables. The developed model is restructured according to its parameters in order to formulate two types of parameter estimation problem, with process models that are linear or nonlinear in the parameters. The two models developed are a bilinear model that is affine in the parameters and a model that is nonlinear in the parameters. Four different methods are proposed for the first case: a gradient-based optimization method that uses the process output measurements, a gradient-based optimization method that uses the full state vector measurements, a direct solution using the state vector measurements, and Lagrange's optimization technique. Two methods are proposed for the second case: a direct solution of the model equation using MATLAB software and Lagrange's optimization technique. National Research Foundation (NRF).
APA, Harvard, Vancouver, ISO, and other styles
35

Narayanamurthi, Mahesh. "Advanced Time Integration Methods with Applications to Simulation, Inverse Problems, and Uncertainty Quantification." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/104357.

Full text
Abstract:
Simulation and optimization of complex physical systems are an integral part of modern science and engineering. The systems of interest in many fields have a multiphysics nature, with complex interactions between physical, chemical and in some cases even biological processes. This dissertation seeks to advance forward and adjoint numerical time integration methodologies for the simulation and optimization of semi-discretized multiphysics partial differential equations (PDEs), and to estimate and control numerical errors via a goal-oriented a posteriori error framework. We extend exponential propagation iterative methods of Runge-Kutta type (EPIRK) by [Tokman, JCP 2011] to build EPIRK-W and EPIRK-K time integration methods that admit approximate Jacobians in the matrix-exponential-like operations. EPIRK-W methods extend the W-method theory by [Steihaug and Wofbrandt, Math. Comp. 1979] to preserve their order of accuracy under arbitrary Jacobian approximations. EPIRK-K methods extend the theory of K-methods by [Tranquilli and Sandu, JCP 2014] to EPIRK and use a Krylov-subspace based approximation of Jacobians to gain computational efficiency. New families of partitioned exponential methods for multiphysics problems are developed using the classical order condition theory via particular variants of T-trees and corresponding B-series. The new partitioned methods are found to perform better than traditional unpartitioned exponential methods for some problems in mild-medium stiffness regimes. Subsequently, partitioned stiff exponential Runge-Kutta (PEXPRK) methods, which extend stiffly accurate exponential Runge-Kutta methods from [Hochbruck and Ostermann, SINUM 2005] to a multiphysics context, are constructed and analyzed. PEXPRK methods show full convergence under various splittings of a diffusion-reaction system. We address the problem of estimation of numerical errors in a multiphysics discretization by developing a goal-oriented a posteriori error framework. Discrete adjoints of GARK methods are derived from their forward formulation [Sandu and Guenther, SINUM 2015]. Based on these, we build a posteriori estimators for both spatial and temporal discretization errors. We validate the estimators on a number of reaction-diffusion systems and use them to simultaneously refine spatial and temporal grids. Doctor of Philosophy.
The study of modern science and engineering begins with descriptions of a system of mathematical equations (a model). Different models require different techniques to both accurately and effectively solve them on a computer. In this dissertation, we focus on developing novel mathematical solvers for models expressed as a system of equations, where only the initial state and the rate of change of state as a function are known. The solvers we develop can be used both to forecast the behavior of the system and to optimize its characteristics to achieve specific goals. We also build methodologies to estimate and control errors introduced by mathematical solvers in obtaining a solution for models involving multiple interacting physical, chemical, or biological phenomena. Our solvers build on the state of the art in the research community by introducing new approximations that exploit the underlying mathematical structure of a model. Where it is necessary, we provide concrete mathematical proofs to validate theoretically the correctness of the approximations we introduce and correlate with follow-up experiments. We also present detailed descriptions of the procedure for implementing each mathematical solver that we develop throughout the dissertation while emphasizing means to obtain maximal performance from the solver. We demonstrate significant performance improvements on a range of models that serve as running examples, describing chemical reactions among distinct species as they diffuse over a surface medium. Also provided are results and procedures that a curious researcher can use to advance the ideas presented in the dissertation to other types of solvers that we have not considered. Research on mathematical solvers for different mathematical models is rich and rewarding with numerous open-ended questions and is a critical component in the progress of modern science and engineering.
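As a rough illustration of the matrix-exponential-like operations that exponential integrators build on, the sketch below applies a plain exponential Euler step, y <- y + h*phi1(h*J)*f(y), to an assumed small stiff test problem. This is a generic exponential integrator evaluated with dense linear algebra, not the EPIRK-W/K or partitioned methods developed in the dissertation; the test problem and step size are assumptions.

    import numpy as np
    from scipy.linalg import expm, solve

    def f(y):
        # Assumed stiff test problem: fast linear decay plus a mild nonlinear coupling
        return np.array([-100.0 * y[0] + y[1] ** 2, -2.0 * y[1]])

    def jac(y):
        # Jacobian of f at y
        return np.array([[-100.0, 2.0 * y[1]], [0.0, -2.0]])

    def phi1(A):
        # phi1(A) = A^{-1} (e^A - I), evaluated densely for this small example
        return solve(A, expm(A) - np.eye(A.shape[0]))

    y, h = np.array([1.0, 1.0]), 0.05
    for _ in range(40):                 # exponential Euler: y <- y + h * phi1(h*J(y)) @ f(y)
        y = y + h * phi1(h * jac(y)) @ f(y)
    print("state after t = 2.0:", y)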
APA, Harvard, Vancouver, ISO, and other styles
36

Chen, S. "Integrated system optimisation and parameter estimation methods for on-line control of industrial processes." Thesis, City University London, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.373326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Yanagi, Takahide. "Essays on Nonparametric Methods in Econometrics." Kyoto University, 2015. http://hdl.handle.net/2433/200427.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Srinivasan, Raghuram. "Monte Carlo Alternate Approaches to Statistical Performance Estimation in VLSI Circuits." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1396531763.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Li, Longzhuang. "Statistical methods for performance evaluation and their applications /." free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3060118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Poole, David. "Bayesian inference for noninvertible deterministic simulation models, with application to bowhead whale assessment /." Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/8981.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Bengtsson, Fanny, and Klara Lindblad. "Methods for handling missing values : A simulation study comparing imputation methods for missing values on a Poisson distributed explanatory variable." Thesis, Uppsala universitet, Statistiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-432467.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Xie, Yingfu. "Maximum likelihood estimation and forecasting for GARCH, Markov switching, and locally stationary wavelet processes /." Umeå : Dept. of Forest Economics, Swedish University of Agricultural Sciences, 2007. http://epsilon.slu.se/2007107.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Elamin, Obbey Ahmed. "Nonparametric kernel estimation methods for discrete conditional functions in econometrics." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/nonparametric-kernel-estimation-methods-for-discrete-conditional-functions-in-econometrics(d443e56a-dfb8-4f23-bfbe-ec98ecac030b).html.

Full text
Abstract:
This thesis studies the mixed-data-type kernel estimation framework for models of discrete dependent variables, which are known as kernel discrete conditional functions. The conventional parametric multinomial logit (MNL) model is compared with the mixed-data-type kernel conditional density estimator in Chapter 2. A new kernel estimator for discrete-time single-state hazard models is developed in Chapter 3 and named the discrete-time "external kernel hazard" estimator. The discrete-time (mixed) proportional hazard estimators are then compared empirically with the discrete-time external kernel hazard estimator in Chapter 4. The work in Chapter 2 attempts to estimate a labour force participation decision model using cross-section data from the UK labour force survey in 2007. The work in Chapter 4 estimates a hazard rate for job vacancies in weeks, using data from the Lancashire Careers Service (LCS) for the period from March 1988 to June 1992. Evidence from the vast literature on female labour force participation and job-market random matching theory is used to examine the empirical results of the estimators. The parametric estimators are constrained by the restrictive assumptions regarding the link function of the discrete dependent variable and the dummy variables of the discrete covariates. Adding interaction terms improves the performance of the parametric models but brings other risks, such as generating multicollinearity, increasing the singularity of the data matrix, and complicating the computation of the ML function. On the other hand, the mixed-data-type kernel estimation framework shows outstanding performance compared with the conventional parametric estimation methods. The kernel functions that are used for the discrete variables, including the dependent variable, substantially improve the performance of the kernel estimators. The kernel framework uses very few assumptions about the functional form of the variables in the model and relies on the right choice of kernel functions in the estimator. The outcomes of the kernel conditional density estimator show that female education level and fertility have a high impact on females' propensity to work and be in the labour force. The kernel conditional density estimator captures more heterogeneity among the females in the sample than the MNL model, due to the restrictive parametric assumptions in the latter. The (mixed) proportional hazard framework, on the other hand, fails to capture the effect of job-market tightness on the job-vacancy hazard rate and produces inconsistent results when the assumptions regarding the distribution of the unobserved heterogeneity are changed. The external kernel hazard estimator overcomes those problems and produces results that are consistent with job-market random matching theory. The results of this thesis are useful for nonparametric estimation research in econometrics and for labour economics research.
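A minimal sketch of the mixed-data-type kernel idea mentioned above: a Nadaraya-Watson conditional mean with a Gaussian kernel for a continuous covariate and an Aitchison-Aitken kernel for an unordered discrete covariate. The data-generating process, bandwidths, and binary outcome are illustrative assumptions and do not reproduce the estimators or data of the thesis.

    import numpy as np

    rng = np.random.default_rng(9)
    n = 500
    # Assumed toy data: a binary outcome depends on one continuous and one discrete covariate
    xc = rng.normal(size=n)                          # continuous covariate
    xd = rng.integers(0, 3, size=n)                  # unordered discrete covariate with 3 levels
    p = 1 / (1 + np.exp(-(1.5 * xc + 0.8 * (xd == 2) - 0.5)))
    y = (rng.uniform(size=n) < p).astype(float)

    def kernel_cond_mean(x0c, x0d, h=0.4, lam=0.2):
        # Nadaraya-Watson estimate of E[y | xc, xd] with a Gaussian kernel for the
        # continuous covariate and an Aitchison-Aitken kernel for the discrete one
        kc = np.exp(-0.5 * ((xc - x0c) / h) ** 2)
        kd = np.where(xd == x0d, 1 - lam, lam / (3 - 1))   # c = 3 categories
        w = kc * kd
        return np.sum(w * y) / np.sum(w)

    print(kernel_cond_mean(0.0, 2), kernel_cond_mean(0.0, 0))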
APA, Harvard, Vancouver, ISO, and other styles
44

Långström, Christoffer. "Comparing Multivariate Regression Methods For Compositional Data : Through Simulation Studies & Applications." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-138463.

Full text
Abstract:
Compositional data, where measurements are vectors whose components each constitute a percentage of a whole, are abundant throughout many disciplines of science. Consequently, there is a strong need to establish valid statistical procedures for this type of data. In this work the basic theory of the compositional sample space is presented, and through simulation studies and a case study on data from industrial applications, the currently available methods for regression applied to compositional data are evaluated. The main focus of this work is to establish linear regression in a way compatible with compositional data sets and to compare this approach with the alternative of applying standard multivariate regression methods to raw compositional data. It is found that for several data sets the difference between 'naive' multivariate linear regression and compositional linear regression is negligible, while for others (in particular where the dependence on the covariates is not strictly linear) the compositional regression methods are shown to be stronger.
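A small sketch of the contrast discussed above: regressing log-ratio-transformed compositions on a covariate versus 'naive' regression on the raw parts. The additive log-ratio transform with the last part as reference, the toy data-generating process, and the back-transformation step are assumptions for illustration, not the thesis's methods or data.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 200

    # Simulate a 3-part composition that depends on a covariate t (assumed toy model)
    t = rng.uniform(0, 1, size=n)
    logits = np.column_stack([1.0 + 2.0 * t, 0.5 - 1.0 * t]) + rng.normal(scale=0.2, size=(n, 2))
    parts = np.column_stack([np.exp(logits), np.ones(n)])
    comp = parts / parts.sum(axis=1, keepdims=True)        # rows sum to one

    # Additive log-ratio (alr) transform with the last part as reference
    alr = np.log(comp[:, :2] / comp[:, [2]])

    X = np.column_stack([np.ones(n), t])
    coef_alr, *_ = np.linalg.lstsq(X, alr, rcond=None)     # regression in alr coordinates
    coef_raw, *_ = np.linalg.lstsq(X, comp, rcond=None)    # 'naive' regression on raw parts

    x_new = np.array([[1.0, 0.5]])                         # predict at t = 0.5
    pred_alr = x_new @ coef_alr
    pred_comp = np.exp(np.column_stack([pred_alr, [[0.0]]]))
    pred_comp /= pred_comp.sum()                           # back-transform: a valid composition
    print("alr-based prediction :", pred_comp.round(3))
    print("naive prediction     :", (x_new @ coef_raw).round(3), "(need not stay in the simplex)")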
APA, Harvard, Vancouver, ISO, and other styles
45

Wood, Andrew Charles. "Methods for rainfall-runoff continuous simulation and flood frequency estimation on an ungauged river catchment with uncertainty." Thesis, Lancaster University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.547969.

Full text
Abstract:
Historic methods for time series predictions on ungauged sites in the UK have tended to focus on the regionalisation and regression of model parameters against catchment characteristics. Owing to wide variations in catchment characteristics and the (often) poor identification of model parameters, this approach has resulted in highly uncertain predictions at the ungauged site. However, only very few studies have sought to assess uncertainties in the predicted hydrograph. Methods from the UK Flood Estimation Handbook, which are normally applied for an event design hydrograph, are adopted to choose a pooling group of gauged catchments hydrologically similar to an ungauged application site on the River Tyne. Model simulations are derived for each pooling group catchment with a BETA rainfall-runoff model structure conditioned for the catchment. The BETA rainfall-runoff model simulations are developed using a Monte Carlo approach. For the estimation of uncertainty, a modification of the GLUE methodology is applied. Gauging station errors are used to develop limits of acceptability for selecting behavioural model simulations, and the final uncertainty limits are obtained with a set of performance thresholds. Prediction limits are derived from a set of calibration and validation simulations for each catchment. Methods are investigated for the carry-over of data from the pooled group of models to the ungauged site to develop a weighted model-set prediction with pooled prediction limits. Further development of this methodology may offer some interesting approaches for cross-validation of models and further improvements in uncertainty estimation in hydrological regionalisation.
APA, Harvard, Vancouver, ISO, and other styles
46

Barlow, Paige Fithian. "Sea turtle bycatch by the U.S. Atlantic pelagic longline fishery: A simulation modeling analysis of estimation methods." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/34342.

Full text
Abstract:
The U.S. pelagic longline fishery catches 98% of domestic swordfish landings but is also one of the three fisheries most affecting federally protected sea turtles (Crowder and Myers 2001, Witherington et al. 2009). Bycatch by fisheries is considered the main anthropogenic threat to sea turtles (NRC 1990). Accurate and precise bycatch estimates are imperative for sea turtle conservation and appropriate fishery management. However, estimation is complicated by only 8% observer coverage of fishing and data that are hierarchical in structure (i.e., multiple sets per trip), zero-heavy (i.e., bycatch is rare), and often overdispersed (i.e., larger variance than expected). Therefore, I evaluated two predominant bycatch estimation methods, the delta-lognormal method and generalized linear models, and investigated improvements in uncertainty incorporation. I constructed a simulation model to evaluate bycatch estimation at two spatial scales under ten spatial models of sea turtle, fishing set, and observer distributions. Results indicated that distributing observers relative to fishing effort and using the delta-lognormal-strata method was most appropriate. The delta-lognormal-strata 95% confidence interval (CI) was wider than statistically appropriate. The delta-lognormal-all-sets-pooled 95% CI was narrower, but simulated bycatch was above the CI too frequently. Thus, I developed a bycatch estimate risk distribution to incorporate uncertainty in bycatch estimates. It gives managers access to the entire distribution of bycatch estimates and their choice of any risk level. Results support the management agency's observer distribution and estimation method but suggest a new procedure to incorporate uncertainty. This study is also informative for many similar datasets.
Master of Science
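A minimal sketch of the delta-lognormal idea, assuming simulated observer data rather than the stratified fishery data analysed in the thesis, is given below: the estimate combines the proportion of positive sets with a lognormal mean for the positive catches, scaled up to total effort.

```python
# A minimal sketch of a delta-lognormal estimate of total bycatch from zero-heavy
# observer data. The observer coverage, catch probability and lognormal parameters
# are illustrative assumptions, not values from the thesis.
import numpy as np

rng = np.random.default_rng(3)

n_observed = 800                                   # observed sets (~8% coverage)
n_total = 10_000                                   # total fishing sets
p_true = 0.04                                      # true probability a set has bycatch

# Simulated observer data: most sets record zero bycatch
positive = rng.uniform(size=n_observed) < p_true
counts = np.where(positive, rng.lognormal(mean=0.2, sigma=0.6, size=n_observed), 0.0)

pos = counts[counts > 0]
p_hat = pos.size / n_observed                      # delta part: proportion of positive sets
log_pos = np.log(pos)
mu, s2 = log_pos.mean(), log_pos.var(ddof=1)
mean_pos = np.exp(mu + s2 / 2)                     # lognormal mean of the positive sets

bycatch_per_set = p_hat * mean_pos                 # delta-lognormal mean per set
total_bycatch_est = n_total * bycatch_per_set
print(p_hat, mean_pos, total_bycatch_est)
```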
APA, Harvard, Vancouver, ISO, and other styles
47

Assali, Mehdi. "A macroeconomic model for a developing country : estimation and simulation of a macroeconometric model for Iran (1959-1993)." Thesis, Durham University, 1996. http://etheses.dur.ac.uk/1513/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Jukic, Boris. "Demand estimation techniques and investment incentives for the digital economy infrastructure : an econometric and simulation-based investigation /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Antelo, Junior Ernesto Willams Molina. "Estimação conjunta de atraso de tempo subamostral e eco de referência para sinais de ultrassom." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2616.

Full text
Abstract:
In non-destructive testing (NDT) with ultrasound, the signal obtained from a real data acquisition system may be contaminated by noise and the echoes may have sub-sample time delays. In some cases, these aspects may compromise the information obtained from a signal by an acquisition system. To deal with these situations, time delay estimation (TDE) techniques and signal reconstruction techniques can be used to perform approximations and to obtain more information about the data set. TDE techniques serve a number of purposes in defectoscopy, for example the accurate location of defects in parts, monitoring of the corrosion rate in parts, and measurement of the thickness of a given material. Data reconstruction methods have a wide range of applications, such as NDT, medical imaging and telecommunications. In general, most time delay estimation techniques require a high-precision signal model; otherwise the quality of the estimated location may be reduced. In this work, an alternating scheme is proposed that jointly estimates an echo reference and the time delays of several echoes from noisy measurements. In addition, by reinterpreting the techniques used from a probabilistic perspective, their functionality is extended through the joint application of a maximum likelihood estimator (MLE) and a maximum a posteriori (MAP) estimator. Finally, simulation results are presented to demonstrate the superiority of the proposed method over conventional methods.
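An alternating joint estimation of a reference echo and sub-sample delays can be sketched as follows. The cross-correlation with parabolic peak interpolation and Fourier-domain alignment used here is an illustrative stand-in for the MLE/MAP formulation developed in the thesis, and all signal parameters are assumed.

```python
# A minimal sketch of alternating joint estimation of a reference echo and sub-sample
# time delays from noisy ultrasound-like measurements: delays are refined by parabolic
# interpolation of the cross-correlation peak, then the reference is updated by
# averaging the re-aligned echoes. Not the thesis's exact estimator.
import numpy as np

rng = np.random.default_rng(4)
n = 256
t = np.arange(n)

def gauss_echo(delay, fc=0.12, bw=0.02):
    """Gaussian-modulated sinusoid delayed by a (possibly fractional) number of samples."""
    return np.exp(-bw * (t - 60 - delay) ** 2) * np.cos(2 * np.pi * fc * (t - 60 - delay))

def frac_shift(x, d):
    """Delay signal x by d samples (sub-sample) via the Fourier shift theorem."""
    f = np.fft.rfftfreq(x.size)
    return np.fft.irfft(np.fft.rfft(x) * np.exp(-2j * np.pi * f * d), n=x.size)

true_delays = np.array([0.0, 3.3, 7.8, 12.4])
echoes = np.array([gauss_echo(d) + 0.05 * rng.standard_normal(n) for d in true_delays])

# Initialise the reference with the first noisy echo, then alternate
ref = echoes[0].copy()
delays = np.zeros(len(echoes))
for _ in range(10):
    # Step 1: given the reference, estimate each delay with sub-sample precision
    for i, y in enumerate(echoes):
        xc = np.correlate(y, ref, mode='full')
        k = np.argmax(xc)
        a, b, c = xc[k - 1], xc[k], xc[k + 1]
        frac = 0.5 * (a - c) / (a - 2 * b + c)        # parabolic peak interpolation
        delays[i] = (k - (n - 1)) + frac
    # Step 2: given the delays, update the reference by averaging the aligned echoes
    ref = np.mean([frac_shift(y, -d) for y, d in zip(echoes, delays)], axis=0)

print(np.round(delays - delays[0], 2))                # relative delays ~ [0, 3.3, 7.8, 12.4]
```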
APA, Harvard, Vancouver, ISO, and other styles
50

Kaya, Egemen Tangut. "Estimation Of Expected Monetary Values Of Selected Turkish Oil Fields Using Two Different Risk Assessment Methods." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/1091495/index.pdf.

Full text
Abstract:
Most investments in the oil and gas industry involve considerable risk, with a wide range of potential outcomes for a particular project. However, many economic evaluations are based on the “most likely” values of the variables, without sufficient consideration given to other possible outcomes, even though it is well known that the initial estimates of all these variables are uncertain. The data are usually obtained during drilling of the initial oil well, and the sources are geophysical (seismic surveys) for formation depths and the areal extent of the reservoir trap, well logs for formation tops and bottoms, formation porosity, water saturation and possible permeable strata, core analysis for porosity and saturation data, and DST (drill-stem test) for possible oil production rates and for samples for PVT (pressure-volume-temperature) analysis to obtain the FVF (formation volume factor), among others. The question is how certain the values of these variables are and what the probability of these values occurring in the reservoir is, so that the possible risks can be evaluated. One of the most appreciated applications of risk assessment is the estimation of volumetric reserves of hydrocarbon reservoirs. The Monte Carlo and moment techniques consider the entire ranges of the variables in the Original Oil in Place (OOIP) formula rather than deterministic figures. In the present work, predictions were made about how the statistical distributions and descriptive statistics of porosity, thickness, area, water saturation, recovery factor and oil formation volume factor affect the simulated OOIP values. The current work presents the case of two different oil fields in Turkey. It was found that both techniques produce similar results at the 95% probability level. The difference between the estimated values increases at the lower probability levels of 50% and 5%.
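A minimal Monte Carlo sketch of the volumetric OOIP calculation is given below; the distributions and parameter values are illustrative assumptions, not the data from the two Turkish fields studied in the thesis.

```python
# A minimal sketch of a Monte Carlo volumetric OOIP estimate. Distributions and
# parameter values are illustrative assumptions only.
# OOIP (STB) = 7758 * A (acres) * h (ft) * phi * (1 - Sw) / Boi
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

area = rng.triangular(800, 1200, 1800, n)           # drainage area, acres
h = rng.triangular(20, 35, 60, n)                   # net pay thickness, ft
phi = rng.normal(0.18, 0.02, n).clip(0.05, 0.35)    # porosity, fraction
sw = rng.normal(0.30, 0.05, n).clip(0.05, 0.80)     # water saturation, fraction
boi = rng.normal(1.25, 0.05, n).clip(1.0, None)     # oil formation volume factor, rb/STB

ooip = 7758 * area * h * phi * (1 - sw) / boi       # stock-tank barrels

# Exceedance convention: P95 is the value exceeded with 95% probability,
# i.e. the 5th percentile of the simulated OOIP distribution.
p95, p50, p5 = np.percentile(ooip, [5, 50, 95])
print(f"P95 = {p95:,.0f} STB, P50 = {p50:,.0f} STB, P5 = {p5:,.0f} STB")
```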
APA, Harvard, Vancouver, ISO, and other styles