
Journal articles on the topic 'Estimation theory – Simulation methods'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Estimation theory – Simulation methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Andersen, Torben G. "SIMULATION-BASED ECONOMETRIC METHODS." Econometric Theory 16, no. 1 (2000): 131–38. http://dx.doi.org/10.1017/s0266466600001080.

Abstract:
The accessibility of high-performance computing power has always influenced theoretical and applied econometrics. Gouriéroux and Monfort begin their recent offering, Simulation-Based Econometric Methods, with a stylized three-stage classification of the history of statistical econometrics. In the first stage, lasting through the 1960's, models and estimation methods were designed to produce closed-form expressions for the estimators. This spurred thorough investigation of the standard linear model, linear simultaneous equations with the associated instrumental variable techniques, and maximum likelihood estimation within the exponential family. During the 1970's and 1980's the development of powerful numerical optimization routines led to the exploration of procedures without closed-form solutions for the estimators. During this period the general theory of nonlinear statistical inference was developed, and nonlinear micro models such as limited dependent variable models and nonlinear time series models, e.g., ARCH, were explored. The associated estimation principles included maximum likelihood (beyond the exponential family), pseudo-maximum likelihood, nonlinear least squares, and generalized method of moments. Finally, the third stage considers problems without a tractable analytic criterion function. Such problems almost invariably arise from the need to evaluate high-dimensional integrals. The idea is to circumvent the associated numerical problems by a simulation-based approach. The main requirement is therefore that the model may be simulated given the parameters and the exogenous variables. The approach delivers simulated counterparts to standard estimation procedures and has inspired the development of entirely new procedures based on the principle of indirect inference.
2

Maksimović, D. M., and V. B. Litovski. "Logic simulation methods for longest path delay estimation." IEE Proceedings - Computers and Digital Techniques 149, no. 2 (2002): 53. http://dx.doi.org/10.1049/ip-cdt:20020201.

3

Almongy, Hisham M., Fatma Y. Alshenawy, Ehab M. Almetwally, and Doaa A. Abdo. "Applying Transformer Insulation Using Weibull Extended Distribution Based on Progressive Censoring Scheme." Axioms 10, no. 2 (2021): 100. http://dx.doi.org/10.3390/axioms10020100.

Abstract:
In this paper, the Weibull extension distribution parameters are estimated under a progressive type-II censoring scheme with random removal. The parameters of the model are estimated using the maximum likelihood, maximum product spacing, and Bayesian estimation methods. For classical estimation (maximum likelihood and maximum product spacing), we use the Newton–Raphson algorithm. The Bayesian estimation is carried out using the Metropolis–Hastings algorithm under the squared error loss function. The proposed estimation methods are compared using Monte Carlo simulations under a progressive type-II censoring scheme. An empirical study using a real data set of transformer insulation and a simulation study are performed to validate the introduced methods of inference. Based on the results of our study, it can be concluded that the Bayesian method outperforms the maximum likelihood and maximum product spacing methods for estimating the Weibull extension parameters under a progressive type-II censoring scheme in both the simulation and the empirical study.
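A minimal sketch of the Metropolis–Hastings machinery the abstract invokes, with the Weibull-extension posterior replaced by a toy normal-mean target (the data, prior, and step size are invented); under squared error loss, the Bayes estimate is the posterior mean of the retained draws.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=20_000, step=0.2, seed=1):
    """Random-walk Metropolis-Hastings returning the chain of draws."""
    rng = np.random.default_rng(seed)
    theta, lp = theta0, log_post(theta0)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        draws[i] = theta
    return draws

# Toy target (normal likelihood, standard normal prior), not the paper's model:
data = np.random.default_rng(0).normal(1.0, 1.0, size=50)
log_post = lambda t: -0.5 * np.sum((data - t) ** 2) - 0.5 * t ** 2
draws = metropolis_hastings(log_post, theta0=0.0)
bayes_est = draws[5_000:].mean()   # posterior mean = Bayes rule under squared error loss
```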
4

Hamad, Alaa M., and Bareq B. Salman. "Different Estimation Methods of the Stress-Strength Reliability Restricted Exponentiated Lomax Distribution." Mathematical Modelling of Engineering Problems 8, no. 3 (2021): 477–84. http://dx.doi.org/10.18280/mmep.080319.

Abstract:
The Lomax distribution, a heavy-tailed probability distribution used in industry, economics, actuarial science, queueing theory, and Internet traffic modeling, is among the most important distributions in reliability theory. In this paper, the reliability of the restricted exponentiated Lomax distribution is estimated in two cases: a one-component system with strength X and stress Y, where R = P(Y < X), and a system containing two strength components in series under a common stress Y, using different estimation methods such as maximum likelihood, least squares, and shrinkage methods. A comparison between the results of the applied methods is carried out on the basis of mean square error (MSE) to identify the best method, and the obtained results are displayed via the MATLAB software package.
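Whatever the strength and stress laws, the single-component quantity R = P(Y < X) is easy to check by Monte Carlo; the sketch below uses ordinary Lomax (Pareto II) draws as stand-ins for the paper's restricted exponentiated Lomax distributions, with invented shape parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
x = rng.pareto(3.0, n)   # strength X: heavier tail, typically larger values
y = rng.pareto(5.0, n)   # stress Y: lighter tail
R_hat = np.mean(y < x)   # Monte Carlo estimate of R = P(Y < X)
print(f"estimated reliability R = {R_hat:.4f}")
```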
5

Liu, Weiwei, Shezhou Luo, Xiaoliang Lu, Jon Atherton, and Jean-Philippe Gastellu-Etchegorry. "Simulation-Based Evaluation of the Estimation Methods of Far-Red Solar-Induced Chlorophyll Fluorescence Escape Probability in Discontinuous Forest Canopies." Remote Sensing 12, no. 23 (2020): 3962. http://dx.doi.org/10.3390/rs12233962.

Abstract:
The escape probability of solar-induced chlorophyll fluorescence (SIF) can be remotely estimated using reflectance measurements based on spectral invariants theory. This can then be used to correct the effects of canopy structure on canopy-leaving SIF. However, the feasibility of these estimation methods is untested in heterogeneous vegetation such as the discontinuous forest canopy layer under evaluation here. In this study, the Discrete Anisotropic Radiative Transfer (DART) model is used to simulate canopy-leaving SIF, canopy total emitted SIF, canopy interceptance, and the fraction of absorbed photosynthetically active radiation (fAPAR) in order to evaluate the estimation methods of SIF escape probability in discontinuous forest canopies. Our simulation results show that the normalized difference vegetation index (NDVI) can be used to partly eliminate the effects of background reflectance on the estimation of SIF escape probability in most cases, but fails to produce accurate estimations if the background is partly or totally covered by vegetation. We also found that SIF escape probabilities estimated at a high solar zenith angle have better estimation accuracy than those estimated at a lower solar zenith angle. Our results show that additional errors will be introduced to the estimation of SIF escape probability with the use of satellite products, especially when the product of leaf area index (LAI) and clumping index (CI) is underestimated. In addition, fAPAR gives estimation accuracy of SIF escape probability comparable to that of canopy interceptance, and fAPAR for the entire canopy has better estimation accuracy of SIF escape probability than fAPAR for leaves only in sparse forest canopies. These results help us to better understand the current estimation results of SIF escape probability based on spectral invariants theory, and to improve its estimation accuracy in discontinuous forest canopies.
6

Rodríguez-García, Marco A., Isaac Pérez Castillo, and P. Barberis-Blostein. "Efficient qubit phase estimation using adaptive measurements." Quantum 5 (June 4, 2021): 467. http://dx.doi.org/10.22331/q-2021-06-04-467.

Abstract:
Estimating correctly the quantum phase of a physical system is a central problem in quantum parameter estimation theory due to its wide range of applications from quantum metrology to cryptography. Ideally, the optimal quantum estimator is given by the so-called quantum Cramér-Rao bound, so any measurement strategy aims to obtain estimations as close as possible to it. However, more often than not, the current state-of-the-art methods to estimate quantum phases fail to reach this bound as they rely on maximum likelihood estimators of non-identifiable likelihood functions. In this work we thoroughly review various schemes for estimating the phase of a qubit, identifying the underlying problem which prohibits these methods to reach the quantum Cramér-Rao bound, and propose a new adaptive scheme based on covariant measurements to circumvent this problem. Our findings are carefully checked by Monte Carlo simulations, showing that the method we propose is both mathematically and experimentally more realistic and more efficient than the methods currently available.
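For reference, the bound the abstract refers to: for an unbiased estimator \hat{\theta} built from n independent probes, the quantum Cramér–Rao bound limits the achievable variance through the quantum Fisher information F_Q:

```latex
\operatorname{Var}(\hat{\theta}) \;\geq\; \frac{1}{n \, F_Q(\theta)}
```

Measurement schemes are judged by how closely their estimators approach this limit, which is exactly where non-identifiable likelihoods cause maximum likelihood strategies to fall short.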
7

Schmittfull, Marcel. "Large-scale structure non-Gaussianities with modal methods." Proceedings of the International Astronomical Union 11, S308 (2014): 67–68. http://dx.doi.org/10.1017/s1743921316009649.

Abstract:
Relying on a separable modal expansion of the bispectrum, the implementation of a fast estimator for the full bispectrum of a 3D particle distribution is presented. The computational cost of accurate bispectrum estimation is negligible relative to simulation evolution, so the bispectrum can be used as a standard diagnostic whenever the power spectrum is evaluated. As an application, the time evolution of gravitational and primordial dark matter bispectra was measured in a large suite of N-body simulations. The bispectrum shape changes characteristically when the cosmic web becomes dominated by filaments and halos, therefore providing a quantitative probe of 3D structure formation. Our measured bispectra are determined by ∼ 50 coefficients, which can be used as fitting formulae in the nonlinear regime and for non-Gaussian initial conditions. We also compare the measured bispectra with predictions from the Effective Field Theory of Large Scale Structures (EFTofLSS).
8

Rossberg, A. G. "On the Limits of Spectral Methods for Frequency Estimation." International Journal of Bifurcation and Chaos 14, no. 06 (2004): 2115–23. http://dx.doi.org/10.1142/s0218127404010503.

Abstract:
An algorithm is presented which generates pairs of oscillatory random time series which have identical periodograms but differ in the number of oscillations. This result indicates the intrinsic limitations of spectral methods when it comes to the task of measuring frequencies. Other examples, one from medicine and one from bifurcation theory, are given, which also exhibit these limitations of spectral methods. For two methods of spectral estimation it is verified that the particular way end points are treated, which is specific to each method, is, for long enough time series, not relevant for the main result.
9

Salah, Mukhtar M., M. El-Morshedy, M. S. Eliwa, and Haitham M. Yousof. "Expanded Fréchet Model: Mathematical Properties, Copula, Different Estimation Methods, Applications and Validation Testing." Mathematics 8, no. 11 (2020): 1949. http://dx.doi.org/10.3390/math8111949.

Abstract:
The extreme value theory is expanded by proposing and studying a new version of the Fréchet model. Some new bivariate extensions using the Farlie–Gumbel–Morgenstern copula, the modified Farlie–Gumbel–Morgenstern copula, the Clayton copula, and Rényi's entropy copula are derived. After a brief study of its properties, different non-Bayesian estimation methods under uncensored schemes are considered, such as the maximum likelihood estimation method, the Anderson–Darling estimation method, the ordinary least squares estimation method, the Cramér–von Mises estimation method, the weighted least squares estimation method, the left-tail Anderson–Darling estimation method, and the right-tail Anderson–Darling estimation method. Numerical simulations were performed to compare the estimation methods using different sample sizes for three different combinations of parameters. The Barzilai–Borwein algorithm was employed via a simulation study. Three applications are presented to measure the flexibility and importance of the new model and to compare it with competing distributions under the uncensored scheme. Using the Bagdonavicius–Nikulin goodness-of-fit test for validation under right-censored data, we propose a modified chi-square goodness-of-fit test for the new model. The modified goodness-of-fit test statistic is applied to a right-censored real data set of leukemia-free survival times for autologous transplants. Based on the maximum likelihood estimators for the initial data, the modified goodness-of-fit test recovers the information lost through grouping the data and follows a chi-square distribution. All elements of the modified goodness-of-fit test criteria are explicitly derived and given.
10

Edwards, Julianne M., and W. Holmes Finch. "Recursive Partitioning Methods for Data Imputation in the Context of Item Response Theory: A Monte Carlo Simulation." Psicológica Journal 39, no. 1 (2018): 88–117. http://dx.doi.org/10.2478/psicolj-2018-0005.

Abstract:
Missing data is a common problem faced by psychometricians and measurement professionals. To address this issue, a number of techniques have been proposed to handle missing data in the context of item response theory (IRT). These methods include several types of data imputation methods (corrected item mean substitution imputation, response function imputation, multiple imputation, and the EM algorithm), as well as approaches that do not rely on the imputation of missing values: treating the item as not presented, or coding missing responses as incorrect or as fractionally correct. Of these methods, multiple imputation has demonstrated the best performance in prior research, but it still produced elevated mean absolute error (MAE). Given this elevated model parameter estimation MAE for even the best performing missing data methods, the goal of this simulation study was to explore the performance of a set of potentially promising data imputation methods based on recursive partitioning. Results of this study demonstrated that approaches that combine multivariate imputation by chained equations and recursive partitioning algorithms yield data with relatively low estimation MAE for both item difficulty and item discrimination. Implications of these findings are discussed.
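The favored combination, chained-equation imputation driven by a recursive partitioning learner, can be prototyped roughly as below with scikit-learn's IterativeImputer and extremely randomized trees; this is an assumed analogue of the mice-plus-trees setups such studies use, not the authors' code, and the data are synthetic.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[:, 3] += X[:, 0] - 0.5 * X[:, 1]           # give the last column structure
mask = rng.uniform(size=X.shape) < 0.15      # ~15% missing completely at random
X_missing = np.where(mask, np.nan, X)

imputer = IterativeImputer(
    estimator=ExtraTreesRegressor(n_estimators=50, random_state=0),
    max_iter=10, random_state=0)
X_imputed = imputer.fit_transform(X_missing)  # chained-equation style imputation
```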
11

Takami Narita, Taynara, Caio Henrique Alberconi, Fernando De Souza, and Lucas Ikeziri. "Comparison of PERT/CPM and CCPM Methods in Project Time Management." Revista Gestão da Produção Operações e Sistemas 16, no. 03 (2021): 01–20. http://dx.doi.org/10.15675/gepros.v16i3.2815.

Abstract:
Purpose: Evaluate and compare PERT/CPM and Critical Chain Project Management (CCPM) techniques, from the Theory of Constraints (TOC), in relation to indicators of delivery time estimation and reliability in meeting established deadlines. Theoretical framework: The research is based on the time management theory established by the PERT/CPM and CCPM methods. Design/methodology/approach: This work is experimental in character, using computer simulation with the Promodel software. A fictitious project environment managed by PERT/CPM and CCPM techniques was modeled in order to evaluate and compare their performances in terms of estimation of, and compliance with, project completion deadlines. Findings: The results obtained showed that the CCPM method proved to be more effective in reducing project completion time and meeting established deadlines. Conversely, the PERT/CPM method increased planned project completion time by 189%. Research, Practical & Social implications: Many managers assume that the best approach to project planning, especially when aiming for short and reliable deadlines, is to allocate margins of safety to each scheduled activity. This research reinforced the already widely held perception of TOC that, due to certain ordinary human behaviors, local optimizations do not guarantee, and usually adversely affect, good global results. Originality/value: There is a lack of research comparing PERT/CPM and CCPM techniques through modeling and computer simulations of project environments subjected to certain degrees of uncertainty, particularly in terms of performance variables such as those studied here. The results of this research, therefore, address this opportunity, bringing to light comparative scenarios and explanations for the different behaviors observed. Keywords: Computational Simulation; Project Management; Goldratt; Critical Chain; CCPM; PERT/CPM.
12

Jiao, Zhi. "Speed Sensorless Control System Based on MRAS." Applied Mechanics and Materials 742 (March 2015): 586–89. http://dx.doi.org/10.4028/www.scientific.net/amm.742.586.

Abstract:
In this paper, a strategy for estimating the induction motor's rotor speed is proposed. The proposed rotor speed estimation strategy is based on model reference adaptive identification theory. By applying the proposed strategy, the induction motor control system can estimate the rotor speed precisely. To improve the rotor speed estimation performance of the system, two methods have been adopted. The speed sensorless control system based on the proposed strategy was built with Simulink blocks on the Matlab platform. The corresponding simulation results demonstrate that the proposed method can operate stably over the whole speed range, with good estimation precision for the stator resistance and rotor speed.
13

Lei, Tao, Weiwei Tan, Guangsi Chen, and Delin Kong. "A Novel Robust Model Predictive Controller for Aerospace Three-Phase PWM Rectifiers." Energies 11, no. 9 (2018): 2490. http://dx.doi.org/10.3390/en11092490.

Abstract:
This paper presents a novel Model Predictive Direct Power Control (MPDPC) approach for the pulse width modulation (PWM) rectifiers in the Aircraft Alternating Current Variable Frequency (ACVF) power system. The control performance of rectifiers may be largely affected by variations in the AC-side impedance, especially for systems with limited power capacity. A novel method for estimating the impedance variation based on Bayesian estimation, using an algorithm embedded in the MPDPC, is presented in this paper. The input filter inductance and its equivalent series resistance (ESR) of the PWM rectifiers are estimated in this algorithm by measuring the input current and input voltage in each cycle and applying Bayesian estimation theory. This estimation method overcomes the shortcomings of traditional data-based estimation methods, such as least squares estimation (LSE), which achieve poor results on small sample sets. In ACVF systems, the effect of the number of sampling points per cycle on the parameter estimation accuracy is also analyzed in detail by simulation. The validity of this method is verified by digital and hardware-in-the-loop simulations, in comparison with other estimation methods such as least squares estimation. The experimental results show that the proposed estimation algorithm can improve the robustness and control performance of the MPDPC under uncertainty in the AC-side parameters of the three-phase PWM rectifiers in the aircraft electrical power system.
14

Shi, Xiaolin, Yimin Shi, and Kuang Zhou. "Estimation for Entropy and Parameters of Generalized Bilal Distribution under Adaptive Type II Progressive Hybrid Censoring Scheme." Entropy 23, no. 2 (2021): 206. http://dx.doi.org/10.3390/e23020206.

Abstract:
Entropy measures the uncertainty associated with a random variable. It has important applications in cybernetics, probability theory, astrophysics, the life sciences, and other fields. Recently, many authors have focused on the estimation of entropy for different lifetime distributions. However, the estimation of entropy for the generalized Bilal (GB) distribution has not yet been addressed. In this paper, we consider the estimation of the entropy and the parameters of the GB distribution based on adaptive Type-II progressive hybrid censored data. Maximum likelihood estimates of the entropy and the parameters are obtained using the Newton–Raphson iteration method. Bayesian estimates under different loss functions are provided with the help of Lindley's approximation. The approximate confidence intervals and the Bayesian credible intervals of the parameters and entropy are obtained using the delta and Markov chain Monte Carlo (MCMC) methods, respectively. Monte Carlo simulation studies are carried out to observe the performance of the different point and interval estimates. Finally, a real data set is analyzed for illustrative purposes.
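The Newton–Raphson step used for the maximum likelihood part has the standard generic form (a sketch, not the paper's model-specific equations), with ℓ the log-likelihood, U the score, and H the Hessian:

```latex
\theta^{(k+1)} = \theta^{(k)} - H\!\left(\theta^{(k)}\right)^{-1} U\!\left(\theta^{(k)}\right),
\qquad
U(\theta) = \frac{\partial \ell(\theta)}{\partial \theta}, \quad
H(\theta) = \frac{\partial^{2} \ell(\theta)}{\partial \theta \, \partial \theta^{\top}}
```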
15

Neamah, Mahdi Wahhab, et al. "Statistical Properties & Different Methods of Estimation of a New Extended Weighted Frechet Distribution." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 6 (2021): 1011–29. http://dx.doi.org/10.17762/turcomat.v12i6.2414.

Abstract:
In this paper, we introduce a new distribution, called the extended weighted Fréchet distribution, obtained by applying the Azzalini method, and we derive some of its statistical properties, such as the mean, variance, coefficient of variation, coefficient of skewness, and coefficient of kurtosis. The parameters of the new distribution are estimated by two methods: maximum likelihood estimation (MLE) and the percentile method. We use Monte Carlo simulation to compare the performance of the estimators obtained by the two methods of estimation.
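The comparison machinery, simulate repeatedly, apply each estimator, tabulate bias and MSE, looks like the sketch below; for brevity it uses an exponential model, where the MLE and a median-based percentile estimator have closed forms, rather than the extended weighted Fréchet distribution.

```python
import numpy as np

rng = np.random.default_rng(7)
true_rate, n, reps = 2.0, 50, 5_000
mle = np.empty(reps)
pct = np.empty(reps)
for r in range(reps):
    x = rng.exponential(1.0 / true_rate, size=n)
    mle[r] = 1.0 / x.mean()               # maximum likelihood estimator
    pct[r] = np.log(2.0) / np.median(x)   # percentile estimator from the median

for name, est in [("MLE", mle), ("percentile", pct)]:
    print(f"{name}: bias = {est.mean() - true_rate:+.4f}, "
          f"MSE = {np.mean((est - true_rate) ** 2):.4f}")
```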
16

Van den Nest, Maarten. "Simulating quantum computers with probabilistic methods." Quantum Information and Computation 11, no. 9&10 (2011): 784–812. http://dx.doi.org/10.26421/qic11.9-10-5.

Abstract:
We investigate the boundary between classical and quantum computational power. This work consists of two parts. First we develop new classical simulation algorithms that are centered on sampling methods. Using these techniques we generate new classes of classically simulatable quantum circuits where standard techniques relying on the exact computation of measurement probabilities fail to provide efficient simulations. For example, we show how various concatenations of matchgate, Toffoli, Clifford, bounded-depth, Fourier transform and other circuits are classically simulatable. We also prove that sparse quantum circuits as well as circuits composed of CNOT and $\exp[{i\theta X}]$ gates can be simulated classically. In a second part, we apply our results to the simulation of quantum algorithms. It is shown that a recent quantum algorithm, concerned with the estimation of Potts model partition functions, can be simulated efficiently classically. Finally, we show that the exponential speed-ups of Simon's and Shor's algorithms crucially depend on the very last stage in these algorithms, dealing with the classical postprocessing of the measurement outcomes. Specifically, we prove that both algorithms would be classically simulatable if the function classically computed in this step had a sufficiently peaked Fourier spectrum.
17

Wu, Yan Jun, Gang Fu, and Peng Yu. "Performance Analysis on Three Methods for Chirp Signal Parameters Estimation Based on FRFT." Advanced Materials Research 989-994 (July 2014): 3942–45. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.3942.

Abstract:
Chirp signals are widely used in radar, and the fractional Fourier transform (FRFT) is one of the most effective tools for analyzing them. In this paper, the concept of the FRFT and the estimation theory of chirp signals are introduced first. Then, we study three chirp signal detection algorithms based on the property that the energy of a chirp signal is concentrated in a certain FRFT domain. Finally, in order to test the ability of the FRFT to estimate the frequency modulation rate and the central frequency of a chirp signal, and to compare the computation times of parameter estimation under different SNRs, we simulate the performance of the three methods. The final simulation results show that all three methods have remarkable capabilities for detecting chirp signals at low SNR. In particular, the two-stage search method does not need a planar search, considerably reducing the computational cost at the same precision.
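A simplified stand-in for the FRFT-domain search those algorithms perform: dechirping with a grid of candidate chirp rates and keeping the rate that best concentrates the spectrum plays the same role as scanning FRFT orders for the peak-energy domain. All signal parameters below are invented.

```python
import numpy as np

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
f0, k_true = 50.0, 120.0                      # start frequency (Hz), chirp rate (Hz/s)
sig = np.exp(2j * np.pi * (f0 * t + 0.5 * k_true * t**2))
noise = np.random.default_rng(3)
sig = sig + 0.5 * (noise.standard_normal(t.size) + 1j * noise.standard_normal(t.size))

candidates = np.linspace(0.0, 300.0, 601)     # candidate chirp rates (Hz/s)
peaks = [np.abs(np.fft.fft(sig * np.exp(-1j * np.pi * k * t**2))).max()
         for k in candidates]                 # dechirp, then measure spectral peak
k_hat = candidates[int(np.argmax(peaks))]
print(f"estimated chirp rate: {k_hat:.1f} Hz/s")
```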
18

Chacón, Edixon, Jesús M. Alvarado, and Carmen Santisteban. "A Simulation Procedure for the Generation of Samples to Evaluate Goodness of Fit Indices in Item Response Theory Models." Methodology 7, no. 2 (2011): 56–62. http://dx.doi.org/10.1027/1614-2241/a000022.

Abstract:
The LISREL 8.8/PRELIS 2.81 program can carry out ordinal factorial analysis (OFA command), with full-information maximum likelihood methods, in a data set containing n samples obtained by simulation. Nevertheless, when the replication number is greater than 1, an error is produced, which prevents reaching solutions that can execute the normal (NOR) and logistic (POM) functions. This paper proposes a new data simulation procedure in PRELIS-LISREL. This procedure permits the generation of n replications and the calculation of goodness-of-fit (GOF) indices in item response theory (IRT) models for each replication, thus allowing the execution of the OFA command for Monte Carlo simulations. Solutions using the underlying-variable approach (weighted least squares (WLS) estimation method) and the IRT approach are compared.
19

Li, Ziyi, Zhenxing Guo, Ying Cheng, Peng Jin, and Hao Wu. "Robust partial reference-free cell composition estimation from tissue expression." Bioinformatics 36, no. 11 (2020): 3431–38. http://dx.doi.org/10.1093/bioinformatics/btaa184.

Abstract:
Motivation: In the analysis of high-throughput omics data from tissue samples, estimating and accounting for cell composition have been recognized as important steps. High cost, intensive labor requirements and technical limitations hinder the cell composition quantification using cell-sorting or single-cell technologies. Computational methods for cell composition estimation are available, but they are either limited by the availability of a reference panel or suffer from low accuracy.
Results: We introduce TOols for the Analysis of heterogeneouS Tissues TOAST/-P and TOAST/+P, two partial reference-free algorithms for estimating cell composition of heterogeneous tissues based on their gene expression profiles. TOAST/-P and TOAST/+P incorporate additional biological information, including cell-type-specific markers and prior knowledge of compositions, in the estimation procedure. Extensive simulation studies and real data analyses demonstrate that the proposed methods provide more accurate and robust cell composition estimation than existing methods.
Availability and implementation: The proposed methods TOAST/-P and TOAST/+P are implemented as part of the R/Bioconductor package TOAST at https://bioconductor.org/packages/TOAST.
Contact: ziyi.li@emory.edu or hao.wu@emory.edu
Supplementary information: Supplementary data are available at Bioinformatics online.
20

Berman, Mark. "Improved Estimation of the Intrinsic Dimension of a Hyperspectral Image Using Random Matrix Theory." Remote Sensing 11, no. 9 (2019): 1049. http://dx.doi.org/10.3390/rs11091049.

Abstract:
Many methods have been proposed in the literature for estimating the number of materials/endmembers in a hyperspectral image. This is sometimes called the “intrinsic” dimension (ID) of the image. A number of recent papers have proposed ID estimation methods based on various aspects of random matrix theory (RMT), under the assumption that the errors are uncorrelated, but with possibly unequal variances. A recent paper, which reviewed a number of the better known methods (including one RMT-based method), has shown that they are all biased, especially when the true ID is greater than about 20 or 30, even when the error structure is known. I introduce two RMT-based estimators (RMT_G, which is new, and RMT_KN, which is a modification of an existing estimator), which are approximately unbiased when the error variances are known. However, they are biased when the error variance is unknown and needs to be estimated. This bias increases as ID increases. I show how this bias can be reduced. The results use semi-realistic simulations based on three real hyperspectral scenes. Despite this, when applied to the real scenes, RMT_G and RMT_KN are larger than expected. Possible reasons for this are discussed, including the presence of errors which are either deterministic, spectrally and/or spatially correlated, or signal-dependent. Possible future research into ID estimation in the presence of such errors is outlined.
21

Abdibekova, Aigerim, Dauren Zhakebayev, Akmaral Abdigaliyeva, and Kuanysh Zhubat. "Modelling of turbulence energy decay based on hybrid methods." Engineering Computations 35, no. 5 (2018): 1965–77. http://dx.doi.org/10.1108/ec-11-2016-0395.

Abstract:
Purpose The purpose of this study is to present an exact and fast algorithm for the modelling of turbulent energy decay based on two different methods: finite-difference and spectral methods. Design/methodology/approach The filtered three-dimensional non-stationary Navier–Stokes equation is used for simulating the turbulent process. The problem is solved using hybrid methods: the equation of motion is solved using finite difference methods in combination with a cyclic penta-diagonal matrix, which allows a high order of accuracy to be reached, and the Poisson equation is solved using spectral methods, which is proposed to speed up the solution procedure. For validation of the given algorithm, the turbulence characteristics were compared with the exact solution of the classical Taylor–Green vortex problem, showing good agreement. Findings The proposed method shows high computational efficiency and good estimation quality. A numerical algorithm for solving the non-stationary three-dimensional Navier–Stokes equations for modelling isotropic turbulence decay using hybrid methods was developed. The simulated turbulence characteristics show good agreement with the analytical solution. The developed numerical algorithm can be used for simulation of turbulence decay with different values of viscosity. Originality/value An efficient algorithm for simulation of turbulence processes depending on the properties of the viscosity was developed.
22

Fu, Zhi-Jun, Wei-Dong Xie, and Xiao-Bin Ning. "Adaptive Nonlinear Tire-Road Friction Force Estimation for Vehicular Systems Based on a Novel Differentiable Friction Model." Mathematical Problems in Engineering 2015 (2015): 1–7. http://dx.doi.org/10.1155/2015/201062.

Abstract:
A novel adaptive nonlinear observer-based parameter estimation scheme using a new continuously differentiable friction model has been developed to estimate the tire-road friction force. The differentiable friction model is more flexible and suitable for online adaptive identification and control, with the advantage of a more explicitly parameterizable form. Unlike conventional estimation methods, a filtered regression parameter is introduced in the novel adaptive laws, which guarantees that both the observer error and the parameter error converge to zero exponentially. Lyapunov theory is used to prove the stability of the proposed methods. The effectiveness of the estimation algorithm is illustrated via a bus simulation model in the Trucksim software environment. A relatively accurate tire-road friction force is estimated using only readily available sensor signals, namely the wheel rotational speed and vehicle speed, and the proposed method also displays strong robustness against bounded disturbances.
23

Xia, Shiwei, Qian Zhang, Jiangping Jing, et al. "Distributed State Estimation of Multi-region Power System based on Consensus Theory." Energies 12, no. 5 (2019): 900. http://dx.doi.org/10.3390/en12050900.

Abstract:
Effective state estimation is critical to the security operation of power systems. With the rapid expansion of interconnected power grids, there are limitations of conventional centralized state estimation methods in terms of heavy and unbalanced communication and computation burdens for the control center. To address these limitations, this paper presents a multi-area state estimation model and afterwards proposes a consensus theory based distributed state estimation solution method. Firstly, considering the nonlinearity of state estimation, the original power system is divided into several non-overlapped subsystems. Correspondingly, the Lagrange multiplier method is adopted to decouple the state estimation equations into a multi-area state estimation model. Secondly, a fully distributed state estimation method based on the consensus algorithm is designed to solve the proposed model. The solution method does not need a centralized coordination system operator, but only requires a simple communication network for exchanging the limited data of boundary state variables and consensus variables among adjacent regions, thus it is quite flexible in terms of communication and computation for state estimation. In the end, the proposed method is tested by the IEEE 14-bus system and the IEEE 118-bus system, and the simulation results verify that the proposed multi-area state estimation model and the distributed solution method are effective for the state estimation of multi-area interconnected power systems.
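The averaging primitive behind such consensus-based solvers is compact: each area repeatedly mixes its boundary/consensus variables with its neighbors' values and converges to the network-wide value without any coordinator. The four-area ring and weight matrix below are illustrative, not the paper's IEEE test systems.

```python
import numpy as np

# Doubly stochastic mixing matrix for a 4-node ring (Metropolis-style weights).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x = np.array([1.0, 3.0, 5.0, 7.0])   # each area's local value of a shared variable
for _ in range(100):
    x = W @ x                        # one round of neighbor-to-neighbor exchange
print(x)                             # every entry -> 4.0, the network-wide average
```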
24

Han, Chirok, Peter C. B. Phillips, and Donggyu Sul. "X-DIFFERENCING AND DYNAMIC PANEL MODEL ESTIMATION." Econometric Theory 30, no. 1 (2013): 201–51. http://dx.doi.org/10.1017/s0266466613000170.

Abstract:
This paper introduces a new estimation method for dynamic panel models with fixed effects and AR(p) idiosyncratic errors. The proposed estimator uses a novel form of systematic differencing, called X-differencing, that eliminates fixed effects and retains information and signal strength in cases where there is a root at or near unity. The resulting “panel fully aggregated” estimator (PFAE) is obtained by pooled least squares on the system of X-differenced equations. The method is simple to implement, consistent for all parameter values, including unit root cases, and has strong asymptotic and finite sample performance characteristics that dominate other procedures, such as bias corrected least squares, generalized method of moments (GMM), and system GMM methods. The asymptotic theory holds as long as the cross section (n) or time series (T) sample size is large, regardless of the n/T ratio, which makes the approach appealing for practical work. In the time series AR(1) case (n = 1), the FAE estimator has a limit distribution with smaller bias and variance than the maximum likelihood estimator (MLE) when the autoregressive coefficient is at or near unity and the same limit distribution as the MLE in the stationary case, so the advantages of the approach continue to hold for fixed and even small n. Some simulation results are reported, giving comparisons with other dynamic panel estimation methods.
25

Zhou, Xiaobing, Zhangbiao Fan, Dongming Zhou, and Xiaomei Cai. "Passivity-Based Adaptive Hybrid Synchronization of a New Hyperchaotic System with Uncertain Parameters." Scientific World Journal 2012 (2012): 1–6. http://dx.doi.org/10.1100/2012/920170.

Abstract:
We investigate the adaptive hybrid synchronization problem for a new hyperchaotic system with uncertain parameters. Based on the passivity theory and the adaptive control theory, corresponding controllers and parameter estimation update laws are proposed to achieve hybrid synchronization between two identical uncertain hyperchaotic systems with different initial values, respectively. Numerical simulation indicates that the presented methods work effectively.
26

Wang, Bai He, Dai Zhi Liu, and Shi Qi Huang. "Research on Multi-Target Scales Estimation Based on the Theory of Fuzzy Clustering." Advanced Engineering Forum 6-7 (September 2012): 561–65. http://dx.doi.org/10.4028/www.scientific.net/aef.6-7.561.

Abstract:
Scale recognition is an effective technology in underwater target identification. Existing methods can only solve the problem of single-target scale estimation and are not suited to multi-target scale estimation and recognition in complex countermeasure environments. A method named FC, based on fuzzy clustering analysis, is put forward in this paper to analyze and estimate multi-target scales. In this method, all the highlights collected by the homing system are analyzed by dynamic fuzzy clustering under the shortest-distance principle, and scales are estimated separately from the highlights belonging to different clusters. Simulation results show that the problem of multi-target scale recognition can be solved effectively with the FC method.
27

Gamba-Santamaria, Santiago, Oscar Fernando Jaulin-Mendez, Luis Fernando Melo-Velandia, and Carlos Andrés Quicazán-Moreno. "Comparison of methods for estimating the uncertainty of value at risk." Studies in Economics and Finance 33, no. 4 (2016): 595–624. http://dx.doi.org/10.1108/sef-03-2016-0055.

Abstract:
Purpose Value at risk (VaR) is a market risk measure widely used by risk managers and market regulatory authorities, and various methods are proposed in the literature for its estimation. However, limited studies discuss its distribution or its confidence intervals. The purpose of this paper is to compare different techniques for computing such intervals to identify the scenarios under which such confidence interval techniques perform properly. Design/methodology/approach The methods that are included in the comparison are based on asymptotic normality, extreme value theory and subsample bootstrap. The evaluation is done by computing the coverage rates for each method through Monte Carlo simulations under certain scenarios. The scenarios consider different persistence degrees in mean and variance, sample sizes, VaR probability levels, confidence levels of the intervals and distributions of the standardized errors. Additionally, an empirical application for the stock market index returns of G7 countries is presented. Findings The simulation exercises show that the methods that were considered in the study are only valid for high quantiles. In particular, in terms of coverage rates, there is a good performance for VaR(99 per cent) and bad performance for VaR(95 per cent) and VaR(90 per cent). The results are confirmed by an empirical application for the stock market index returns of G7 countries. Practical implications The findings of the study suggest that the methods that were considered to estimate VaR confidence interval are appropriated when considering high quantiles such as VaR(99 per cent). However, using these methods for smaller quantiles, such as VaR(95 per cent) and VaR(90 per cent), is not recommended. Originality/value This study is the first one, as far as it is known, to identify the scenarios under which the methods for estimating the VaR confidence intervals perform properly. The findings are supported by simulation and empirical exercises.
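As a concrete baseline for the interval techniques compared, a plain i.i.d. bootstrap confidence interval for a historical VaR(99 per cent) estimate looks like this (the paper's subsample bootstrap additionally handles the serial dependence that this sketch ignores; the return series is synthetic).

```python
import numpy as np

rng = np.random.default_rng(11)
returns = rng.standard_t(df=5, size=1000) * 0.01     # fat-tailed toy returns
alpha = 0.99
var_hat = -np.quantile(returns, 1 - alpha)           # historical VaR(99%)

boot = np.array([
    -np.quantile(rng.choice(returns, returns.size, replace=True), 1 - alpha)
    for _ in range(2000)])
lo, hi = np.quantile(boot, [0.025, 0.975])           # 95% interval for VaR
print(f"VaR(99%) = {var_hat:.4f}, 95% CI = [{lo:.4f}, {hi:.4f}]")
```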
28

Filipiak, Katarzyna, Daniel Klein, and Monika Mokrzycka. "Estimators Comparison of Separable Covariance Structure with One Component as Compound Symmetry Matrix." Electronic Journal of Linear Algebra 33 (May 16, 2018): 83–98. http://dx.doi.org/10.13001/1081-3810.3740.

Abstract:
The maximum likelihood estimation (MLE) of a separable covariance structure with one component as a compound symmetry matrix has been widely studied in the literature. Nevertheless, the proposed estimates are not given in explicit form and can be determined only numerically. In this paper we give an alternative form of the MLE, and we show that this new algorithm is much quicker than the algorithms given in the literature. Another estimator of the covariance structure can be found by minimizing the entropy loss function; we give three methods of finding the best approximation of a separable covariance structure with one component as a compound symmetry matrix and compare the speed of the proposed algorithms. Finally, we conduct simulation studies to compare statistical properties of the MLEs and the entropy loss estimators (ELEs), such as bias, variability, and loss.
29

Wei, Li, and Teng Yun. "Real Estate Project Financial Evaluation Based on Cash Flow Estimation." Open Construction and Building Technology Journal 9, no. 1 (2015): 135–41. http://dx.doi.org/10.2174/1874836801509010135.

Abstract:
This paper analyzes the factors influencing cash flows and divides them into certainty factors and uncertainty factors; it mainly discusses the uncertainty factors causing changes in cash flow. Given the uncertain character of cash flows in real estate projects, we adopt probability theory and mathematical statistics to estimate the cash flows. A computer simulation method for the uncertainty factors, based on the Beta and normal distributions, is then proposed, together with a prediction method for cash inflows and outflows. Illustrated by the case of actual real estate projects, we provide financial evaluations using both traditional evaluation methods and the simulation method for uncertainty factors, and compare the evaluation results. The experimental results show that the improved method performs more rigorously and accurately on basic data than the traditional method, and it can produce more reliable evaluation results.
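The Beta/normal simulation scheme described can be sketched as follows; the cash-flow magnitudes, Beta shape parameters, and discount rate are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n, rate, years = 10_000, 0.08, 5
# Uncertainty factors: construction cost ~ scaled Beta, yearly sales ~ normal.
cost = 80.0 + 40.0 * rng.beta(2.0, 5.0, size=n)    # initial outflow
sales = rng.normal(35.0, 6.0, size=(n, years))     # yearly net inflows
disc = (1.0 + rate) ** -np.arange(1, years + 1)    # discount factors
npv = sales @ disc - cost                          # simulated NPV distribution
print(f"mean NPV = {npv.mean():.1f}, P(NPV < 0) = {(npv < 0).mean():.3f}")
```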
30

LeBeau, Brandon. "Ability and Prior Distribution Mismatch: An Exploration of Common-Item Linking Methods." Applied Psychological Measurement 41, no. 7 (2017): 545–60. http://dx.doi.org/10.1177/0146621617707508.

Abstract:
Linking of two forms is an important task when using item response theory, particularly when two forms are administered to nonequivalent groups. When linking with characteristic curve methods, the ability distribution and weights associated with that distribution can be used to weight observations differently. These are commonly specified as equally spaced intervals from −4 to 4, but other options or distributional forms can be specified. The use of these different distributions and weights of the ability distributions will be explored with a Monte Carlo simulation. Primary simulation conditions will include sample size, number of items, number of common items, ability distribution, and randomly varying population transformation constants. Study results show that the linking weights have little impact on the estimation of the linking constants; however, the underlying ability distribution of examinees does have significant impact. Implications for applied researchers will be discussed.
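To make the weighting choice concrete: characteristic-curve linking evaluates a weighted discrepancy between test characteristic curves over a grid of ability points, so the grid and its weights must be supplied. A sketch of the two specifications mentioned, an equally spaced grid from −4 to 4 with uniform weights versus standard normal weights (illustrative, not the study's code):

```python
import numpy as np
from scipy.stats import norm

theta = np.linspace(-4.0, 4.0, 41)                  # equally spaced ability grid

w_uniform = np.full(theta.size, 1.0 / theta.size)   # equal weights
w_normal = norm.pdf(theta)                          # standard normal density weights
w_normal /= w_normal.sum()                          # normalize to sum to one

# Either (theta, w_uniform) or (theta, w_normal) is then fed to a
# characteristic-curve (e.g., Haebara or Stocking-Lord) linking criterion.
```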
31

Pentikäinen, Teivo, and Jukka Rantala. "A Simulation Procedure for Comparing Different Claims Reserving Methods." ASTIN Bulletin 22, no. 2 (1992): 191–216. http://dx.doi.org/10.2143/ast.22.2.2005115.

Abstract:
The estimation of outstanding claims is one of the important aspects in the management of the insurance business. Various methods have been widely dealt with in the actuarial literature. Exploration of the inaccuracies involved is traditionally based on a post-facto comparison of the estimates against the actual outcomes of the settled claims. However, until recent years it has not been usual to consider the inaccuracies inherent in claims reserving in the context of more comprehensive (risk theoretical) models, the purpose of which is to analyse the insurer as a whole. Important parts of the technique which will be outlined in this paper can be incorporated into over-all risk theory models to introduce the uncertainty involved with technical reserves as one of the components in solvency and other analyses (Pentikäinen et al. (1989)). The idea in this paper is to describe a procedure by which one can explore how various reserving methods react to fictitious variations, fluctuations, trends, etc. which might influence the claims process, and, what is most important, how they reflect on the variables indicating the financial position of the insurer. For this purpose, a claims process is first postulated and claims are simulated and ordered to correspond to an actual handling of the observed claims of a fictitious insurer. Next, the simulation program will ‘mime’ an actuary who is calculating the claims reserve on the basis of these ‘observed’ claims data. Finally, the simulation is further continued thus generating the settlement of the reserved claims. The difference between reserved amounts and settled amounts gives the reserving (run-off) error in this particular simulated case. By repeating the simulation numerous times (Monte Carlo method) the distribution of the error can be estimated as well as its effect on the total outcome of the insurer.
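The loop described, postulate a claims process, 'mime' the reserving actuary, settle the claims, and record the run-off error, can be caricatured in a few lines; the lognormal claims process and the naive case-average reserving rule below are placeholders, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(9)

def runoff_error():
    """One cycle: observe settled claims, set a reserve, then settle."""
    history = rng.lognormal(0.0, 1.0, size=1000)    # already-settled claims
    n_open = 200                                    # claims still outstanding
    reserve = n_open * history.mean()               # naive case-average reserve
    settled = rng.lognormal(0.0, 1.0, size=n_open).sum()  # eventual payments
    return reserve - settled                        # run-off error

errors = np.array([runoff_error() for _ in range(5_000)])  # Monte Carlo repetition
print(f"mean error = {errors.mean():.1f}, sd = {errors.std():.1f}")
```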
32

Ma, Jingxin, Haisen Li, Jianjun Zhu, and Baowei Chen. "Sound Velocity Estimation of Seabed Sediment Based on Parametric Array Sonar." Mathematical Problems in Engineering 2020 (August 14, 2020): 1–6. http://dx.doi.org/10.1155/2020/9810215.

Abstract:
Backscattered sound waves of seabed sediments are important information carriers in seafloor detection and in the inversion of acoustic characteristic parameters. Most of the existing methods for estimating geoacoustic parameters are based on multiangle seabed backscattered signal processing and are suitable for flat seafloor conditions with uniform sediment thickness. This usually deviates from real field conditions and affects the accuracy of parameter estimation. In this paper, sound ray propagation theory is studied and analysed under the condition of a sloping seabed and uneven sediment thickness. Based on a phased parametric array sonar system, a method for estimating the acoustic parameters of the sediment under inclined seabed conditions is proposed. The simulation results show that the new method adapts well to different inclination angles of the seabed and solves the accuracy problem of acoustic parameter estimation for inclined seabed sediments. The model will greatly reduce the seafloor topography requirements in sediment acoustic parameter inversion for quantities such as velocity, layer thickness, and acoustic impedance.
33

Eckenhoff, Kevin, Patrick Geneva, and Guoquan Huang. "Closed-form preintegration methods for graph-based visual–inertial navigation." International Journal of Robotics Research 38, no. 5 (2019): 563–86. http://dx.doi.org/10.1177/0278364919835021.

Abstract:
In this paper, we propose a new analytical preintegration theory for graph-based sensor fusion with an inertial measurement unit (IMU) and a camera (or other aiding sensors). Rather than using discrete sampling of the measurement dynamics as in current methods, we derive the closed-form solutions to the preintegration equations, yielding improved accuracy in state estimation. We advocate two new different inertial models for preintegration: (i) the model that assumes piecewise constant measurements; and (ii) the model that assumes piecewise constant local true acceleration. Through extensive Monte Carlo simulations, we show the effect that the choice of preintegration model has on estimation performance. To validate the proposed preintegration theory, we develop both direct and indirect visual–inertial navigation systems (VINSs) that leverage our preintegration. In the first, within a tightly coupled, sliding-window optimization framework, we jointly estimate the features in the window and the IMU states while performing marginalization to bound the computational cost. In the second, we loosely couple the IMU preintegration with a direct image alignment that estimates relative camera motion by minimizing the photometric errors (i.e., image intensity difference), allowing for efficient and informative loop closures. Both systems are extensively validated in real-world experiments and are shown to offer competitive performance to state-of-the-art methods.
34

Céspedes, I., Y. Huang, J. Ophir, and S. Spratt. "Methods for Estimation of Subsample Time Delays of Digitized Echo Signals." Ultrasonic Imaging 17, no. 2 (1995): 142–71. http://dx.doi.org/10.1177/016173469501700204.

Abstract:
Time delay estimation (TDE) is commonly performed in practice by crosscorrelation of digitized echo signals. Since time delays are generally not integral multiples of the sampling period, the location of the largest sample of the crosscorrelation function (ccf) is an inexact estimator of the location of the peak. Therefore, one must interpolate between the samples of the ccf to improve the estimation precision. Using theory and simulations, we review and compare the performance of several methods for interpolation of the ccf. The maximum likelihood approach to interpolation is the application of a reconstruction filter to the discrete ccf. However, this method can only be approximated in practice and can be computationally intensive. For these reasons, a simple method is widely used that involves fitting a parabola (or other curve) to samples of the ccf in the neighborhood of its peak. We describe and compare two curve-fitting methods: parabolic and cosine interpolation. Curve-fitting interpolation can yield biased time-delay estimates, which may preclude the use of these methods in some applications. The artifactual effect of these bias errors on elasticity imaging by elastography is discussed. We demonstrate that reconstructive interpolation is unbiased. An iterative implementation of the reconstruction procedure is proposed that can reduce the computation time significantly.
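A minimal sketch of the parabolic method the authors analyze: locate the largest ccf sample, fit a parabola through it and its two neighbors, and take the vertex as the subsample delay. The test signal is invented.

```python
import numpy as np

def subsample_delay(x, y):
    """Delay of y relative to x: crosscorrelation peak + parabolic interpolation."""
    ccf = np.correlate(y, x, mode="full")
    k = int(np.argmax(ccf))                          # integer-sample peak index
    ym, y0, yp = ccf[k - 1], ccf[k], ccf[k + 1]      # peak and its two neighbors
    delta = 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)   # vertex offset in (-0.5, 0.5)
    return (k - (len(x) - 1)) + delta                # delay in samples

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 13 * t)
y = np.roll(x, 25)                    # known shift for the demo
print(subsample_delay(x, y))          # ~25.0 samples
```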
35

Alaghbari, Khaled Abdulaziz, Lim Heng Siong, and Alan W. C. Tan. "Robust correntropy ICA based blind channel estimation for MIMO-OFDM systems." COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering 34, no. 3 (2015): 962–78. http://dx.doi.org/10.1108/compel-08-2014-0199.

Abstract:
Purpose – The purpose of this paper is to propose a robust correntropy-assisted blind channel estimator for multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM), for improved estimation of channel gains and resolution of channel ordering and sign ambiguities in non-Gaussian noise channels. Design/methodology/approach – Correntropy independent component analysis with an L1-norm cost function is used for blind channel estimation. A correntropy-based method is then formulated to resolve the sign and order ambiguities of the channel estimates. Findings – A simulation study in the Gaussian noise scenario shows that the proposed method achieves almost the same performance as the conventional L2-norm based method. However, in non-Gaussian noise scenarios the proposed method significantly outperforms the conventional and other popular estimators in terms of mean square error (MSE). To solve the ordering and sign ambiguity problems, an auto-correntropy-based method is proposed and compared with the extended cross-correlation-based method. The simulation study shows improved performance of the proposed method in terms of MSE. Originality/value – This paper presents, for the first time, a correntropy-based blind channel estimator for MIMO-OFDM, together with simulated comparisons against traditional correlation-based methods in a non-Gaussian noise environment.
36

Addabbo, Tommaso, Ada Fort, Duccio Papini, Santina Rocchi, and Valerio Vignoli. "An Efficient and Accurate Method for the Estimation of Entropy and Other Dynamical Invariants for Piecewise Affine Chaotic Maps." International Journal of Bifurcation and Chaos 19, no. 12 (2009): 4175–95. http://dx.doi.org/10.1142/s0218127409025286.

Abstract:
In this paper, we discuss an efficient iterative method for the estimation of the chief dynamical invariants of chaotic systems based on stochastically stable piecewise affine maps (e.g. the invariant measure, the Lyapunov exponent as well as the Kolmogorov–Sinai entropy). The proposed method represents an alternative to the Monte-Carlo methods and to other methods based on the discretization of the Frobenius–Perron operator, such as the well known Ulam's method. The proposed estimation method converges not slower than exponentially and it requires a computation complexity that grows linearly with the iterations. Referring to the theory developed by C. Liverani, we discuss a theoretical tool for calculating a conservative estimation of the convergence rate of the proposed method. The proposed approach can be used to efficiently estimate any order statistics of a symbolic source based on a piecewise affine mixing map.
37

Bjørne, Elias, Edmund F. Brekke, Torleiv H. Bryne, Jeff Delaune, and Tor Arne Johansen. "Globally stable velocity estimation using normalized velocity measurement." International Journal of Robotics Research 39, no. 1 (2019): 143–57. http://dx.doi.org/10.1177/0278364919887436.

Abstract:
The problem of estimating velocity from a monocular camera and calibrated inertial measurement unit (IMU) measurements is revisited. For the presented setup, it is assumed that normalized velocity measurements are available from the camera. By applying results from nonlinear observer theory, we present velocity estimators with proven global stability under defined conditions, and without the need to observe features across several camera frames. Several nonlinear methods are compared with each other and against an extended Kalman filter (EKF), and the robustness of the nonlinear methods relative to the EKF is demonstrated in simulations and experiments.
38

Pop, Florin. "High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science." Advances in High Energy Physics 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/507690.

Abstract:
Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and low absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of making simulations at large scale and in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
APA, Harvard, Vancouver, ISO, and other styles
39

Cabauatan, Madeline D., et al. "Statistical Evaluation of Item Nonresponse Methods Using the World Bank's 2015 Philippines Enterprise Survey." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (2021): 4077–88. http://dx.doi.org/10.17762/turcomat.v12i3.1698.

Full text
Abstract:
The main objective of the study was to evaluate item nonresponse procedures through a simulation study of different nonresponse levels (missing rates). The simulation explored how each procedure performs under a variety of circumstances and variable trends. The imputation methods considered were cell-mean imputation, random hot-deck, nearest neighbor, and simple regression. The variables imputed, the number of workers and the total cost of labor per establishment, are major indicators for measuring productive labor and decent work in the country; the data come from the World Bank's 2015 Enterprise Survey for the Philippines.
 The performance of the imputation techniques was evaluated in terms of bias (accuracy) and coefficient of variation (precision). Based on the results, cell-mean imputation was the most appropriate for imputing missing values of the total number of workers and total cost of labor per establishment. Since the study was limited to the variables cited, it is recommended to explore other labor indicators. Exploring other choices of clustering groups is also highly recommended, since the clustering groups strongly affect the resulting imputation estimates, as is exploring other imputation techniques such as multiple regression and parametric nonresponse models such as Bayes estimation. For regression-based imputation, which here used only the cluster groupings, it is recommended to consider other variables related to the variable of interest to verify the results of this study.
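A minimal sketch of the cell-mean imputation the study favors, assuming a pandas DataFrame with a clustering (cell) column; the column names and data below are illustrative.

```python
import numpy as np
import pandas as pd

def cell_mean_impute(df, value_col, cell_col):
    """Replace missing values with the mean of their imputation cell;
    fall back to the overall mean for cells with no observed values."""
    out = df.copy()
    cell_means = out.groupby(cell_col)[value_col].transform("mean")
    out[value_col] = out[value_col].fillna(cell_means)
    out[value_col] = out[value_col].fillna(out[value_col].mean())
    return out

df = pd.DataFrame({
    "industry": ["food", "food", "textile", "textile", "textile"],
    "workers": [12.0, np.nan, 30.0, 28.0, np.nan],
})
print(cell_mean_impute(df, "workers", "industry"))
```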
APA, Harvard, Vancouver, ISO, and other styles
40

Monroe, Scott. "Estimation of Expected Fisher Information for IRT Models." Journal of Educational and Behavioral Statistics 44, no. 4 (2019): 431–47. http://dx.doi.org/10.3102/1076998619838240.

Full text
Abstract:
In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures, such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. In practice, however, the expected information is not typically used, as it often requires a large amount of computation. In the present research, two methods to approximate the expected information by Monte Carlo are proposed. The first method is suitable for less complex IRT models such as unidimensional models. The second method is generally applicable but is designed for use with more complex models such as high-dimensional IRT models. The proposed methods are compared to existing methods using real data sets and a simulation study. The comparisons are based on simple-structure multidimensional IRT models with two-parameter logistic item models.
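The general idea of approximating expected information by Monte Carlo can be sketched for a single two-parameter logistic (2PL) item with an assumed N(0,1) ability distribution. This toy version is not either of Monroe's two proposed methods; it simply averages score outer products over simulated abilities and responses.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_expected_info_2pl(a, b, n_draws=200_000):
    """Monte Carlo approximation of E[s s^T] for a single 2PL item,
    where s is the score vector for (a, b), theta ~ N(0, 1), and the
    responses are simulated from the model."""
    theta = rng.standard_normal(n_draws)
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    y = rng.random(n_draws) < p
    resid = y - p                              # Bernoulli residual (y - p)
    score = np.stack([resid * (theta - b),     # d loglik / d a
                      -resid * a])             # d loglik / d b
    return score @ score.T / n_draws

print(mc_expected_info_2pl(a=1.2, b=0.3))
```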
APA, Harvard, Vancouver, ISO, and other styles
41

Jang, Yoonsun, and Allan S. Cohen. "The Impact of Markov Chain Convergence on Estimation of Mixture IRT Model Parameters." Educational and Psychological Measurement 80, no. 5 (2020): 975–94. http://dx.doi.org/10.1177/0013164419898228.

Full text
Abstract:
A nonconverged Markov chain can potentially lead to invalid inferences about model parameters. The purpose of this study was to assess the effect of a nonconverged Markov chain on the estimation of parameters for mixture item response theory (IRT) models using a Markov chain Monte Carlo algorithm. A simulation study was conducted to investigate the accuracy of model parameters estimated with different degrees of convergence. Results indicated that the accuracy of the estimated parameters for the mixture IRT models decreased as the number of iterations of the Markov chain decreased. In particular, increasing the number of burn-in iterations resulted in more accurate estimation of mixture IRT model parameters. In addition, different methods for monitoring convergence of a Markov chain indicated different degrees of convergence despite almost identical estimation accuracy.
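One standard convergence monitor of the kind compared in such studies is the Gelman–Rubin potential scale reduction factor; a minimal sketch (this is one common diagnostic, not necessarily the exact set used by the authors):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for an (m, n) array of m
    chains of length n; values near 1 suggest convergence."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_plus = (n - 1) / n * W + B / n
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(7)
converged = rng.normal(0, 1, size=(4, 1000))
stuck = converged + np.arange(4)[:, None]   # chains stuck at different levels
print(gelman_rubin(converged), gelman_rubin(stuck))  # ~1.0 vs. >> 1
```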
APA, Harvard, Vancouver, ISO, and other styles
42

Sun, Hui, Xianyu Wang, Kaixin Yang, and Tongrui Peng. "Analysis of Distributed Wireless Sensor Systems with a Switched Quantizer." Complexity 2021 (July 29, 2021): 1–14. http://dx.doi.org/10.1155/2021/6690761.

Full text
Abstract:
In this article, a switched quantizer is proposed to address the bandwidth limitations of distributed wireless sensor networks (WSNs). The proposed estimator, based on a switched quantized event-triggered Kalman consensus filtering (KCF) algorithm, is used to monitor aircraft cabin environmental parameters in the presence of packet loss and path loss during WSN communication. The quantization error of the novel switched quantizer structure is bounded, and the stability of the corresponding quantized estimation approach is proved. Compared with other methods, simulation results verify that the environmental parameters can be estimated accurately and in a timely manner while reducing the burden on network communication bandwidth.
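The bounded-error property that such stability arguments rest on can be illustrated with a toy switched uniform quantizer; this is not the paper's construction, and the step sizes and threshold are arbitrary.

```python
import numpy as np

def switched_quantize(x, coarse=0.5, fine=0.05, threshold=1.0):
    """Toy switched uniform quantizer: a fine step near zero, a coarse step
    for large signals. The error is bounded by step/2 in either mode, which
    is exactly what bounded-quantization stability arguments require."""
    step = np.where(np.abs(x) < threshold, fine, coarse)
    return step * np.round(x / step), step / 2.0   # quantized value, bound

x = np.array([0.03, 0.4, 2.7, -5.2])
q, bound = switched_quantize(x)
assert np.all(np.abs(q - x) <= bound)
print(q, bound)
```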
APA, Harvard, Vancouver, ISO, and other styles
43

van Opheusden, Bas, Luigi Acerbi, and Wei Ji Ma. "Unbiased and efficient log-likelihood estimation with inverse binomial sampling." PLOS Computational Biology 16, no. 12 (2020): e1008483. http://dx.doi.org/10.1371/journal.pcbi.1008483.

Full text
Abstract:
The fate of scientific hypotheses often relies on the ability of a computational model to explain the data, quantified in modern statistical approaches by the likelihood function. The log-likelihood is the key element for parameter estimation and model evaluation. However, the log-likelihood of complex models in fields such as computational biology and neuroscience is often intractable to compute analytically or numerically. In those cases, researchers can often only estimate the log-likelihood by comparing observed data with synthetic observations generated by model simulations. Standard techniques to approximate the likelihood via simulation either use summary statistics of the data or are at risk of producing substantial biases in the estimate. Here, we explore another method, inverse binomial sampling (IBS), which can estimate the log-likelihood of an entire data set efficiently and without bias. For each observation, IBS draws samples from the simulator model until one matches the observation. The log-likelihood estimate is then a function of the number of samples drawn. The variance of this estimator is uniformly bounded and achieves the minimum possible variance for an unbiased estimator, and calibrated estimates of the variance can be computed. We provide theoretical arguments in favor of IBS and an empirical assessment of the method for maximum-likelihood estimation with simulation-based models. As case studies, we take three model-fitting problems of increasing complexity from computational and cognitive neuroscience. In all problems, IBS generally produces lower error in the estimated parameters and maximum log-likelihood values than alternative sampling methods with the same average number of samples. Our results demonstrate the potential of IBS as a practical, robust, and easy-to-implement method for log-likelihood evaluation when exact techniques are not available.
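The core of IBS fits in a few lines. The sketch below assumes a simulator interface `simulate(stimulus, rng)` returning one synthetic response (an illustrative interface, not the authors' published code), and checks the estimator on a Bernoulli toy model with known log-likelihood.

```python
import numpy as np

def ibs_logp(simulate, stimulus, response, rng):
    """IBS estimate of log p(response | stimulus): draw from the simulator
    until a sample matches the observation; with K draws, the unbiased
    estimate is -sum_{k=1}^{K-1} 1/k (van Opheusden et al., 2020)."""
    k = 1
    while simulate(stimulus, rng) != response:
        k += 1
    return -np.sum(1.0 / np.arange(1, k))

def ibs_loglik(simulate, stimuli, responses, seed=0):
    rng = np.random.default_rng(seed)
    return float(sum(ibs_logp(simulate, s, r, rng)
                     for s, r in zip(stimuli, responses)))

# Toy check: a Bernoulli "model" whose true log-likelihood is known.
p = 0.3
simulate = lambda stim, rng: int(rng.random() < p)
print(ibs_loglik(simulate, [None] * 500, [1] * 500) / 500, np.log(p))
```

Note that when the first draw matches (K = 1), the sum is empty and the estimate is 0, consistent with log p = 0 for a deterministic match.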
APA, Harvard, Vancouver, ISO, and other styles
44

Van Deusen, Paul C. "Forest inventory estimation with mapped plots." Canadian Journal of Forest Research 34, no. 2 (2004): 493–97. http://dx.doi.org/10.1139/x03-209.

Full text
Abstract:
Procedures are developed for estimating means and variances with a mapped-plot design. The focus is on fixed-area plots, and simulations are used to validate the proposed estimators. The mapped-plot estimators for means and variances are compared with simple random sampling estimators that utilize only full plots. As expected, the mapped-plot estimates have smaller mean squared errors than the simple random sampling estimates. The theory for fixed-area plots is easy to apply, although additional work is required to map plots in the field. Corresponding theory for variable plots is developed but not tested with simulations. The difficulty of applying these methods to variable plots is greater, but not prohibitive.
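One common ratio-of-means form for partially mapped fixed-area plots can be sketched as follows; this is a generic textbook version with a linearized variance, not necessarily the paper's exact estimators, and the data are invented.

```python
import numpy as np

def mapped_plot_mean(y, a):
    """Ratio-of-means estimate for mapped plots: y[i] is the total observed
    on the included portion of plot i, a[i] in (0, 1] the included fraction.
    The variance uses the standard linearized ratio-estimator approximation."""
    y, a = np.asarray(y, float), np.asarray(a, float)
    n = len(y)
    R = y.sum() / a.sum()                 # per-full-plot mean
    resid = y - R * a
    var = resid @ resid / (n * (n - 1) * a.mean() ** 2)
    return R, var

R, var = mapped_plot_mean(y=[4, 0, 7, 2, 5], a=[1.0, 0.4, 1.0, 0.6, 1.0])
print(R, var)
```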
APA, Harvard, Vancouver, ISO, and other styles
45

Song, Yunquan, and Lu Lin. "Sublinear Expectation Nonlinear Regression for the Financial Risk Measurement and Management." Discrete Dynamics in Nature and Society 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/398750.

Full text
Abstract:
Financial risk is an objective feature of modern financial activity. The management and measurement of financial risk have become key competencies for financial institutions in competition and form a major part of financial engineering and modern financial theory, so it is important and necessary to model and forecast financial risk. Nonlinear expectation, which includes sublinear expectation as a special case, is a new and original framework of probability theory with potential applications in several scientific fields, especially financial risk measurement and management. Under the nonlinear expectation framework, however, the related statistical models and statistical inference have not yet been well established. In this paper, a sublinear expectation nonlinear regression is defined and its identifiability is obtained. Several parameter estimators and model predictions are suggested, and the asymptotic normality of the estimation and the minimax property of the prediction are obtained. Finally, a simulation study and real data analysis are carried out to illustrate the new model and methods. The notions and methodological developments in this paper are nonclassical and original, and the proposed modeling and inference methods establish foundations for nonlinear expectation statistics.
APA, Harvard, Vancouver, ISO, and other styles
46

McIntyre, Neil, Howard Wheater, and Matthew Lees. "Estimation and propagation of parametric uncertainty in environmental models." Journal of Hydroinformatics 4, no. 3 (2002): 177–98. http://dx.doi.org/10.2166/hydro.2002.0018.

Full text
Abstract:
It is proposed that a numerical environmental model cannot be justified for predictive tasks without an explicit uncertainty analysis that uses reliable and transparent methods. Various methods of uncertainty-based model calibration are reviewed and demonstrated. Monte Carlo simulation of data, Generalised Likelihood Uncertainty Estimation (GLUE), the Metropolis algorithm, and a set-based approach are compared using the Streeter–Phelps model of dissolved oxygen in a stream. Using idealised data, the first three of these calibration methods are shown to converge to the same parameter distributions. However, in practice, when the properties of the data and model structural errors are less well defined, GLUE and the set-based approach are proposed as more versatile for robust estimation of parametric uncertainty. Methods of propagation of parametric uncertainty are also reviewed. Rosenblueth's two-point method, first-order variance propagation, Monte Carlo sampling, and set theory are applied to the Streeter–Phelps example. The methods are shown to be equally successful in this application, and their relative merits for more complex modelling problems are discussed.
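A compact GLUE sketch on the classical Streeter–Phelps deficit model conveys the workflow; the parameter ranges, noise level, and behavioral threshold below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def do_deficit(t, kd, ka, L0=10.0, D0=1.0):
    """Streeter-Phelps dissolved-oxygen deficit (classical sag equation)."""
    return kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t)) \
        + D0 * np.exp(-ka * t)

# Synthetic observations from "true" parameters plus noise.
t = np.linspace(0.1, 10.0, 20)
obs = do_deficit(t, kd=0.3, ka=0.7) + rng.normal(0, 0.05, t.size)

# GLUE: sample parameters, keep the "behavioral" set, weight by likelihood.
kd = rng.uniform(0.05, 0.6, 20_000)
ka = rng.uniform(0.2, 1.2, 20_000)
sims = do_deficit(t, kd[:, None], ka[:, None])
nse = 1 - ((sims - obs) ** 2).sum(1) / ((obs - obs.mean()) ** 2).sum()
behavioral = nse > 0.7            # subjective cutoff, as GLUE acknowledges
w = nse[behavioral] / nse[behavioral].sum()
print(behavioral.sum(), (w * kd[behavioral]).sum(), (w * ka[behavioral]).sum())
```

The subjectivity of the likelihood measure and threshold is precisely the versatility (and the critique) of GLUE discussed in the paper.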
APA, Harvard, Vancouver, ISO, and other styles
47

Song, Xiang, Xu Li, and Weigong Zhang. "Key Parameters Estimation and Adaptive Warning Strategy for Rear-End Collision of Vehicle." Mathematical Problems in Engineering 2015 (2015): 1–20. http://dx.doi.org/10.1155/2015/328029.

Full text
Abstract:
A rear-end collision warning system requires a reliable warning decision mechanism adapted to the actual driving situation. To overcome the shortcomings of existing warning methods, an adaptive strategy is proposed that addresses the practical aspects of the collision warning problem. The proposed strategy is based on parameter-adaptive and variable-threshold approaches. First, several key parameter estimation algorithms are developed to provide more accurate and reliable information for the subsequent warning method: a two-stage algorithm combining a Kalman filter and a Luenberger observer for relative acceleration estimation, a Bayesian algorithm for estimating the road friction coefficient, and an artificial neural network for estimating the driver's reaction time. A variable-threshold warning method is then designed to make the global warning decision. In this method, a safety distance is used to judge whether the current state is dangerous, and its calculation adapts to the driving conditions of the leading vehicle. Owing to the real-time estimation of the key parameters and the adaptive calculation of the warning threshold, the strategy can adapt to various road and driving conditions. Finally, the proposed strategy is evaluated through simulation and field tests, and the experimental results validate its feasibility and effectiveness.
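A generic braking-based safety distance of the kind such systems start from can be sketched as follows; the formula and parameter values are a common textbook formulation, not the paper's adaptive method, whose threshold also depends on the estimated reaction time and lead-vehicle state.

```python
def safety_distance(v_follow, v_lead, t_react, mu, g=9.81, d_min=2.0):
    """Generic warning distance: the follower travels at v_follow during the
    reaction time, then both vehicles brake at deceleration mu * g; d_min is
    a residual standstill gap."""
    a = mu * g
    return v_follow * t_react + v_follow**2 / (2 * a) - v_lead**2 / (2 * a) + d_min

# 25 m/s following a 20 m/s leader, 0.9 s reaction time, dry asphalt (mu ~ 0.8).
print(safety_distance(25.0, 20.0, t_react=0.9, mu=0.8))
```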
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Wenhao, and Neal Kingston. "Adaptive Testing With a Hierarchical Item Response Theory Model." Applied Psychological Measurement 43, no. 1 (2018): 51–67. http://dx.doi.org/10.1177/0146621618765714.

Full text
Abstract:
The hierarchical item response theory (H-IRT) model is very flexible and allows a general factor and subfactors within an overall structure of two or more levels. When an H-IRT model with a large number of dimensions is used for an adaptive test, the computational burden associated with interim scoring and selection of subsequent items is heavy. An alternative approach for any high-dimensional adaptive test is to reduce dimensionality for interim scoring and item selection and then revert to full dimensionality for final score reporting, thereby significantly reducing the computational burden. This study compared the accuracy and efficiency of final scoring for multidimensional, local multidimensional, and unidimensional item selection and interim scoring methods, using both simulated and real item pools. The simulation study was conducted under 10 conditions (five test lengths and two H-IRT models) with a simulated sample of 10,000 students. The real-item-pool study used item parameters from an actual 45-item adaptive test, again with a simulated sample of 10,000 students. Results indicate that the theta estimates provided by the local multidimensional and unidimensional item selection and interim scoring methods were nearly as accurate as those provided by the multidimensional method, especially in the real-item-pool study. In addition, the multidimensional method required the longest computation time and the unidimensional method the shortest.
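In its unidimensional form, interim item selection reduces to classical maximum-information selection, which can be sketched for 2PL items; the item pool and interim theta below are invented for illustration.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL response probability."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def next_item(theta_hat, a, b, administered):
    """Maximum-information selection: Fisher information for a 2PL item
    is a^2 * P * (1 - P) at the interim theta estimate."""
    p = p_2pl(theta_hat, a, b)
    info = a**2 * p * (1 - p)
    info[list(administered)] = -np.inf     # never reuse an item
    return int(np.argmax(info))

a = np.array([0.8, 1.5, 1.1, 2.0])
b = np.array([-1.0, 0.2, 0.5, 1.3])
print(next_item(theta_hat=0.4, a=a, b=b, administered={1}))
```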
APA, Harvard, Vancouver, ISO, and other styles
49

Shi, Juan, Qunfei Zhang, Weijie Tan, Linlin Mao, Lihuan Huang, and Wentao Shi. "Underdetermined DOA Estimation for Wideband Signals via Focused Atomic Norm Minimization." Entropy 22, no. 3 (2020): 359. http://dx.doi.org/10.3390/e22030359.

Full text
Abstract:
In underwater acoustic signal processing, direction of arrival (DOA) estimation can provide important information for target tracking and localization. To address underdetermined wideband signal processing in underwater passive detection systems, this paper proposes a novel underdetermined wideband DOA estimation method for the nested array (NA) using focused atomic norm minimization (ANM), in which the number of signal sources is detected by information-theoretic criteria. Specifically, after vectorizing the covariance matrix of each frequency bin, each resulting vector is focused into a predefined reference frequency bin by a focusing matrix. The averaged vector is then treated as a virtual array model whose steering vector exhibits a Vandermonde structure determined by the virtual array geometry. A new covariance matrix is recovered via ANM by semidefinite programming (SDP), exploiting its Toeplitz structure. Finally, the Root-MUSIC algorithm is applied to estimate the DOAs. Simulation results show that the proposed method outperforms other underdetermined DOA estimation methods based on information theory in terms of estimation accuracy.
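The final Root-MUSIC step of the pipeline can be sketched for a plain half-wavelength uniform linear array; this omits the nested-array virtualization and the focused-ANM covariance recovery, and the array size, angles, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def root_music(R, k, d_over_lambda=0.5):
    """Root-MUSIC DOA estimation (degrees) from an m x m sample covariance
    of a uniform linear array, assuming k sources."""
    m = R.shape[0]
    _, vecs = np.linalg.eigh(R)          # ascending eigenvalues
    En = vecs[:, : m - k]                # noise subspace
    C = En @ En.conj().T
    # Polynomial a^H(z) C a(z): coefficient of z^l is the l-th diagonal sum.
    coeffs = np.array([np.trace(C, offset=l) for l in range(m - 1, -m, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1]     # keep one of each reciprocal pair
    roots = roots[np.argsort(1 - np.abs(roots))][:k]  # closest to unit circle
    return np.degrees(np.arcsin(np.angle(roots) / (2 * np.pi * d_over_lambda)))

# Two narrowband sources at -10 and 25 degrees on an 8-element ULA.
m, n = 8, 400
doas = np.radians([-10.0, 25.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(doas)))
S = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
R = X @ X.conj().T / n
print(np.sort(root_music(R, k=2)))      # ~ [-10, 25]
```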
APA, Harvard, Vancouver, ISO, and other styles
50

Li, Xiao Peng, Xing Ju, Guang Hui Zhao, and Ya Min Liang. "Estimation and Analysis for the Fatigue Life of Rail Joint Bolt." Advanced Materials Research 683 (April 2013): 783–86. http://dx.doi.org/10.4028/www.scientific.net/amr.683.783.

Full text
Abstract:
Fatigue failure of the rail joint bolt is one of the most important causes of railway traffic accidents, so research on the fatigue properties of rail joint bolts has practical significance. This paper analyzed the influence of different pre-tightening torques on the fatigue life of the rail joint bolt, and the optimal pre-tightening torque was obtained through theoretical calculation. Finally, the fatigue life of the rail joint bolt was predicted using the basic theory and methods of fatigue life analysis together with numerical simulation.
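A minimal stress-life sketch shows how the mean stress induced by preload enters such a prediction, using the textbook Goodman correction and Basquin relation; the material constants are invented, not measured bolt properties, and this is not the paper's model.

```python
def basquin_life(sigma_a, sigma_m, sigma_f=900.0, b=-0.09, sigma_u=1000.0):
    """Stress-life estimate (cycles): Goodman mean-stress correction,
    then Basquin's relation sigma_ar = sigma_f * (2N)^b, inverted for N."""
    sigma_ar = sigma_a / (1.0 - sigma_m / sigma_u)   # Goodman correction
    return 0.5 * (sigma_ar / sigma_f) ** (1.0 / b)   # reversals -> cycles

# In this toy setting with a fixed alternating stress, a higher preload
# (mean stress) shortens the estimated fatigue life.
for preload_stress in (300.0, 500.0, 700.0):
    print(preload_stress, basquin_life(sigma_a=60.0, sigma_m=preload_stress))
```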
APA, Harvard, Vancouver, ISO, and other styles