Academic literature on the topic 'Bayesian estimate'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Bayesian estimate.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Bayesian estimate"

1

Chang, Jack, Dhara Patel, Kimberly C. Claeys, Marc H. Scheetz, and Emily Heil. "1090. Does calculation method matter for targeting vancomycin AUC?" Open Forum Infectious Diseases 8, Supplement 1 (2021): S636. http://dx.doi.org/10.1093/ofid/ofab466.1284.

Abstract:
Background: Recent vancomycin (VAN) guidelines recommend targeting an area under the curve (AUC) of 400-600 for treatment of methicillin-resistant Staphylococcus aureus infections. Multiple strategies for calculating AUC exist, including first-order pharmacokinetic (foPK) equations and Bayesian models. Most clinical applications of foPK assume unchanged patient status and project ideal administration times to estimate exposure. Bayesian modeling provides the best estimate of true drug exposure and can incorporate changing patient covariates and exact doses. We compared two commonly used foPK methods to Bayesian estimates of VAN AUC, calculated three ways: 1) population PK estimated (foPOPPK); 2) two-level first-dose estimated (foFDPK); 3) Bayesian estimated.

Methods: First-order equations were applied with population PK estimates (foPOPPK) to estimate steady-state (SS) AUC and initial doses. Two concentrations after the first dose were used to estimate SS AUC (foFDPK). A two-compartment Bayesian model, allometrically scaled for weight and adjusted for creatinine clearance, was used to determine 24-48 hour AUCs. Differences between AUCs were compared using a mixed-effects analysis, and correlation of the foPK equations to Bayesian estimates was described using Spearman's correlation. Patient results from each method were classified as below (<400), within (400-600), or above (>600) target.

Results: 65 adult patients were included. The median calculated AUCs using the foPOPPK, foFDPK, and Bayesian methods were 495.6 (IQR: 76.6), 498.2 (IQR: 107.4), and 472.1 (IQR: 177.9), respectively, with p >0.65 for both foPK methods vs. the Bayesian method. AUCs predicted by foPOPPK were poorly correlated with Bayesian AUCs (Spearman's rho = -0.08, p = 0.55), while AUCs from foFDPK correlated better with Bayesian AUCs (Spearman's rho = 0.48, p < 0.01). AUCs were within, above, and below target for 54%, 20%, and 26% of patients with the Bayesian model; 95%, 5%, and 0% with foPOPPK; and 74%, 12%, and 14% with foFDPK. foPK AUC estimates concurred with Bayesian estimates only 52% of the time.

Conclusion: AUCs calculated by the three methods did not differ on average, but patient-level dosing recommendations from foPK varied substantially compared to the Bayesian method. This difference arises because Bayesian estimation incorporates actual patient exposures while foPK equations rely on idealized dose timing to predict AUCs.

Disclosures: Kimberly C. Claeys, PharmD, GenMark (Speaker's Bureau); Marc H. Scheetz, PharmD, MSc, Nevakar (Grant/Research Support), SuperTrans Medical (Consultant), US Patent #10688195B2 (Patent holder).
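To make the contrast concrete, here is a minimal sketch of a two-level first-dose (foFDPK-style) AUC calculation under an assumed one-compartment IV model with first-order elimination; the function name and the example numbers are hypothetical, and real vancomycin dosing uses validated tools rather than this toy:

```python
import numpy as np

def fo_two_level_auc24(dose_mg, daily_dose_mg, c1, t1, c2, t2):
    """Steady-state AUC0-24 from two post-dose levels, assuming
    one-compartment, first-order (log-linear) elimination.

    c1, c2: concentrations (mg/L) drawn at t1 < t2 hours after an
    IV dose given over a negligible infusion time.
    """
    ke = np.log(c1 / c2) / (t2 - t1)   # elimination rate constant (1/h)
    c0 = c1 * np.exp(ke * t1)          # back-extrapolated peak (mg/L)
    v = dose_mg / c0                   # volume of distribution (L)
    cl = ke * v                        # clearance (L/h)
    return daily_dose_mg / cl          # steady-state AUC0-24 (mg*h/L)

# hypothetical levels: 1000 mg dose, 30 and 15 mg/L at 1 h and 8 h post-dose
print(fo_two_level_auc24(1000, 2000, c1=30.0, t1=1.0, c2=15.0, t2=8.0))
```

A Bayesian estimator would instead update a population model with these same levels at their actual draw times, which is why the two approaches diverge at the patient level.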
2

Rahman, Mohammad Lutfor, Steven G. Gilmour, Peter J. Zemroch, and Pauline R. Ziman. "Bayesian analysis of fuel economy experiments." Journal of Statistical Research 54, no. 1 (2020): 43–63. http://dx.doi.org/10.47302/jsr.2020540103.

Abstract:
Statistical analysts can encounter difficulties in obtaining point and interval estimates for fixed effects when sample sizes are small and there are two or more error strata to consider. Standard methods can lead to certain variance components being estimated as zero which often seems contrary to engineering experience and judgement. Shell Global Solutions (UK) has encountered such challenges and is always looking for ways to make its statistical techniques as robust as possible. In this instance, the challenge was to estimate fuel effects and confidence limits from small-sample fuel economy experiments where both test-to-test and day-to-day variation had to be taken into account. Using likelihood-based methods, the experimenters estimated the day-to-day variance component to be zero which was unrealistic. The reason behind this zero estimate is that the data set is not large enough to estimate it reliably. The experimenters were also unsure about the fixed parameter estimates obtained by likelihood methods in linear mixed models. In this paper, we looked for an alternative to compare the likelihood estimates against and found the Bayesian platform to be appropriate. Bayesian methods assuming some non-informative and weakly informative priors enable us to compare the parameter estimates and the variance components. Profile likelihood and bootstrap based methods verified that the Bayesian point and interval estimates were not unreasonable. Also, simulation studies have assessed the quality of likelihood and Bayesian estimates in this study.
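The two-stratum model at issue can be written compactly; the notation below is a generic reconstruction, not the paper's own:

```latex
y_{ijk} = \mu + f_i + d_j + \varepsilon_{ijk}, \qquad
d_j \sim \mathcal{N}(0, \sigma_d^2), \quad
\varepsilon_{ijk} \sim \mathcal{N}(0, \sigma_e^2),
```

where $f_i$ is the fixed effect of fuel $i$, $d_j$ the random day effect, and $\varepsilon_{ijk}$ the test-to-test error. With few days, the REML estimate of $\sigma_d^2$ can sit exactly on the zero boundary; a weakly informative prior (for example, a half-normal on $\sigma_d$) keeps the posterior away from that physically implausible value.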
3

Sanger, Terence D. "Bayesian Filtering of Myoelectric Signals." Journal of Neurophysiology 97, no. 2 (2007): 1839–45. http://dx.doi.org/10.1152/jn.00936.2006.

Abstract:
Surface electromyography is used in research, to estimate the activity of muscle, in prosthetic design, to provide a control signal, and in biofeedback, to provide subjects with a visual or auditory indication of muscle contraction. Unfortunately, successful applications are limited by the variability in the signal and the consequent poor quality of estimates. I propose to use a nonlinear recursive filter based on Bayesian estimation. The desired filtered signal is modeled as a combined diffusion and jump process and the measured electromyographic (EMG) signal is modeled as a random process with a density in the exponential family and rate given by the desired signal. The rate is estimated on-line by calculating the full conditional density given all past measurements from a single electrode. The Bayesian estimate gives the filtered signal that best describes the observed EMG signal. This estimate yields results with very low short-time variability but also with the capability of very rapid response to change. The estimate approximates isometric joint torque with lower error and higher signal-to-noise ratio than current linear methods. Use of the nonlinear filter significantly reduces noise compared with current algorithms, and it may therefore permit more effective use of the EMG signal for prosthetic control, biofeedback, and neurophysiology research.
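A minimal grid-based sketch of this kind of recursive filter, in the spirit of the abstract (the grid size and the diffusion and jump constants are illustrative choices, not values from the article):

```python
import numpy as np

def sanger_emg_filter(emg, n_grid=100, diffusion=1e-3, jump_rate=1e-4):
    """Grid-based recursive Bayesian EMG envelope estimator.

    The latent drive x lives on a grid in (0, 1]; each rectified EMG
    sample e is modeled as exponential with mean x (an exponential-family
    density, as in the abstract). All constants here are illustrative.
    """
    x = np.linspace(1e-3, 1.0, n_grid)            # latent rate grid
    p = np.ones(n_grid) / n_grid                  # uniform prior
    kernel = np.array([diffusion, 1.0 - 2.0 * diffusion, diffusion])
    envelope = np.empty(len(emg))
    for i, e in enumerate(np.abs(emg)):           # rectify, then filter
        p = np.convolve(p, kernel, mode="same")   # local diffusion step
        p = (1.0 - jump_rate) * p + jump_rate / n_grid  # rare jump step
        p *= np.exp(-e / x) / x                   # exponential likelihood
        p /= p.sum()                              # normalize posterior
        envelope[i] = x[np.argmax(p)]             # MAP estimate
    return envelope
```

Run on rectified EMG, this produces an envelope with low short-time variability that can still jump quickly when the drive changes, which is the property the abstract highlights.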
4

Roy, Himadri Shekhar, Amrit Kumar Paul, Ranjit Kumar Paul, Ramesh Kumar Singh, Md Yeasin, and Prakash Kumar. "Estimation of Heritability of Karan Fries Cattle using Bayesian Procedure." Indian Journal of Animal Sciences 92, no. 5 (2022): 645–48. http://dx.doi.org/10.56093/ijans.v92i5.117167.

Abstract:
A Bayesian model was applied to analyze first-lactation records of Karan Fries cattle. First-lactation production data (305-day or less milk yield and daily milk yield) were collected from the history-cum-pedigree sheets and daily milk yield registers of the Dairy Cattle Breeding (DCB) division, National Dairy Research Institute (NDRI), Karnal. In the Bayesian paradigm, MCMC (Markov Chain Monte Carlo) methods are applied to solve complex estimation problems involving large numbers of unknown parameters. Assuming a linear mixed model and using different prior set-ups, MCMC diagnostics were carried out graphically as well as by the Heidelberger and Welch stationarity test. Variance estimates of the random effects (VA), the residual variance (VR), and the location (fixed) effects were calculated along with effective sample sizes. Finally, the heritability (h²) estimate for first-lactation 305-day-or-less milk yield (FL305DMY) was obtained along with its credible interval.
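As a small illustration of the final step, once MCMC draws of the variance components are available, the posterior for heritability and its credible interval follow directly; the Gamma draws below are stand-ins for real MCMC output:

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-ins for MCMC draws of the additive-genetic and residual variances
va_draws = rng.gamma(shape=40.0, scale=0.5, size=10_000)   # V_A samples
vr_draws = rng.gamma(shape=160.0, scale=0.5, size=10_000)  # V_R samples

h2_draws = va_draws / (va_draws + vr_draws)     # heritability per draw
point = h2_draws.mean()
lo, hi = np.quantile(h2_draws, [0.025, 0.975])  # 95% credible interval
print(f"h2 = {point:.3f} (95% CrI: {lo:.3f}-{hi:.3f})")
```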
5

Christ, Theodore J., and Christopher David Desjardins. "Curriculum-Based Measurement of Reading: An Evaluation of Frequentist and Bayesian Methods to Model Progress Monitoring Data." Journal of Psychoeducational Assessment 36, no. 1 (2017): 55–73. http://dx.doi.org/10.1177/0734282917712174.

Abstract:
Curriculum-Based Measurement of Oral Reading (CBM-R) is often used to monitor student progress and guide educational decisions. Ordinary least squares regression (OLSR) is the most widely used method to estimate the slope, or rate of improvement (ROI), even though published research demonstrates OLSR's lack of validity and reliability, and the imprecision of ROI estimates, especially after brief durations of monitoring (6-10 weeks). This study illustrates and examines the use of Bayesian methods to estimate ROI. Conditions included four progress monitoring durations (6, 8, 10, and 30 weeks), two schedules of data collection (weekly, biweekly), and two ROI growth distributions that broadly corresponded with ROIs for general and special education populations. A Bayesian approach with alternate prior distributions for the ROIs is presented and explored. Results demonstrate that Bayesian estimates of ROI were more precise than OLSR with comparable reliabilities, and Bayesian estimates were consistently within the plausible range of ROIs, in contrast to OLSR, which often provided unrealistic estimates. Results also showcase the influence the priors had on estimated ROIs and the potential dangers of prior distribution misspecification.
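A sketch of the core contrast, assuming a known noise SD and a conjugate normal prior on the weekly slope; all numbers are illustrative, not the study's priors:

```python
import numpy as np

def roi_estimates(weeks, scores, prior_mean=1.0, prior_sd=0.5, noise_sd=8.0):
    """OLS slope vs. conjugate Bayesian posterior for the weekly ROI.

    Assumes a known noise SD and a N(prior_mean, prior_sd**2) prior on
    the slope; centering removes the intercept from the slope posterior.
    """
    t = weeks - weeks.mean()
    y = scores - scores.mean()
    ols_slope = (t @ y) / (t @ t)
    prior_prec = 1.0 / prior_sd**2
    data_prec = (t @ t) / noise_sd**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * ols_slope)
    return ols_slope, post_mean, np.sqrt(post_var)

weeks = np.arange(8.0)                     # 8 weekly CBM-R probes
scores = np.array([41.0, 44, 42, 47, 45, 49, 50, 48])
print(roi_estimates(weeks, scores))        # OLS vs. shrunken Bayesian slope
```

With few probes, the posterior mean is pulled toward the plausible prior range, which is exactly why short-duration Bayesian ROI estimates avoid the unrealistic slopes OLSR can produce.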
6

Fässler, Sascha M. M., Andrew S. Brierley, and Paul G. Fernandes. "A Bayesian approach to estimating target strength." ICES Journal of Marine Science 66, no. 6 (2009): 1197–204. http://dx.doi.org/10.1093/icesjms/fsp008.

Abstract:
Currently, conventional models of target strength (TS) vs. fish length, based on empirical measurements, are used to estimate fish density from integrated acoustic data. These models estimate a mean TS, averaged over variables that modulate fish TS (tilt angle, physiology, and morphology); they do not include information about the uncertainty of the mean TS, which could be propagated through to estimates of fish abundance. We use Bayesian methods, together with theoretical TS models and in situ TS data, to determine the uncertainty in TS estimates of Atlantic herring (Clupea harengus). Priors for model parameters (surface swimbladder volume, tilt angle, and s.d. of the mean TS) were used to estimate posterior parameter distributions and subsequently build a probabilistic TS model. The sensitivity of herring abundance estimates to variation in the Bayesian TS model was also evaluated. The abundance of North Sea herring from the area covered by the Scottish acoustic survey component was estimated using both the conventional TS–length formula (5.34×10⁹ fish) and the Bayesian TS model (mean = 3.17×10⁹ fish); this difference was probably because of the particular scattering model employed and the data used in the Bayesian model. The study demonstrates the relative importance of potential bias and precision of TS estimation, and how the latter can be so much less important than the former.
7

Alharbi, Yasser S., and Amr R. Kamel. "Fuzzy System Reliability Analysis for Kumaraswamy Distribution: Bayesian and Non-Bayesian Estimation with Simulation and an Application on Cancer Data Set." WSEAS TRANSACTIONS ON BIOLOGY AND BIOMEDICINE 19 (June 7, 2022): 118–39. http://dx.doi.org/10.37394/23208.2022.19.14.

Abstract:
This paper proposes fuzzy Bayesian (FB) estimation to obtain the best estimates of the unknown parameters of a two-parameter Kumaraswamy distribution. These parameter estimates are used to estimate the fuzzy reliability function of the Kumaraswamy distribution and to select the best estimate of the parameters and of the fuzzy reliability function. To achieve this goal, we investigate the efficiency of seven classical estimators and compare them with the proposed FB estimation. Monte Carlo simulations and a cancer data set application are performed to compare the performance of the estimators for both small and large samples. The Tierney and Kadane approximation is used to obtain FB estimates of traditional and fuzzy reliability for the Kumaraswamy distribution. The results show that the fuzzy approach outperforms the crisp one for all sample sizes, and that the fuzzy reliability evaluated at the proposed FB estimates is better than the other estimators, giving the lowest bias and root mean squared error.
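For reference, the crisp reliability function being estimated is simple to write down; the parameter values below are placeholders rather than the paper's estimates:

```python
import numpy as np

def kumaraswamy_reliability(t, a, b):
    """Reliability R(t) = 1 - F(t) for the two-parameter Kumaraswamy
    distribution, with CDF F(t) = 1 - (1 - t**a)**b on 0 < t < 1."""
    t = np.asarray(t, dtype=float)
    return (1.0 - t**a) ** b

# illustrative parameter values standing in for fitted FB estimates
print(kumaraswamy_reliability(0.3, a=2.0, b=3.0))
```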
8

Ambrose, Paul G., Jeffrey P. Hammel, Sujata M. Bhavnani, Christopher M. Rubino, Evelyn J. Ellis-Grosse, and George L. Drusano. "Frequentist and Bayesian Pharmacometric-Based Approaches To Facilitate Critically Needed New Antibiotic Development: Overcoming Lies, Damn Lies, and Statistics." Antimicrobial Agents and Chemotherapy 56, no. 3 (2011): 1466–70. http://dx.doi.org/10.1128/aac.01743-10.

Abstract:
Antimicrobial drug development has greatly diminished due to regulatory uncertainty about the magnitude of the antibiotic treatment effect. Herein we evaluate the utility of pharmacometric-based analyses for determining the magnitude of the treatment effect. Frequentist and Bayesian pharmacometric-based logistic regression analyses were conducted by using data from a phase 3 clinical trial of tigecycline-treated patients with hospital-acquired pneumonia (HAP) to evaluate relationships between the probability of microbiological or clinical success and the free-drug area under the concentration-time curve from time zero to 24 h (AUC0-24)/MIC ratio. By using both the frequentist and Bayesian approaches, the magnitude of the treatment effect was determined using three different methods based on the probability of success at free-drug AUC0-24/MIC ratios of 0.01 and 25. Differences in point estimates of the treatment effect for microbiological response (method 1) were larger using the frequentist approach than using the Bayesian approach (Bayesian estimate, 0.395; frequentist estimate, 0.637). However, the Bayesian credible intervals were tighter than the frequentist confidence intervals, demonstrating increased certainty with the former approach. The treatment effect determined by taking the difference in the probabilities of success between the upper limit of a 95% interval for the minimal exposure and the lower limit of a 95% interval at the maximal exposure (method 2) was greater for the Bayesian analysis (Bayesian estimate, 0.074; frequentist estimate, 0.004). After utilizing bootstrapping to determine the lower 95% bounds for the treatment effect (method 3), treatment effect estimates were still higher for the Bayesian analysis (Bayesian estimate, 0.301; frequentist estimate, 0.166). These results demonstrate the utility of frequentist and Bayesian pharmacometric-based analyses for the determination of the treatment effect using contemporary trial endpoints. Additionally, as demonstrated by using pharmacokinetic-pharmacodynamic data, the magnitude of the treatment effect for patients with HAP is large.
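A toy version of the method-1 calculation, with hypothetical coefficients standing in for the fitted posterior means:

```python
import numpy as np

def p_success(auc_mic, b0=0.2, b1=1.1):
    """Logistic exposure-response model: P(success) given the free-drug
    AUC0-24/MIC ratio; b0 and b1 are hypothetical fitted coefficients."""
    eta = b0 + b1 * np.log10(auc_mic)
    return 1.0 / (1.0 + np.exp(-eta))

# point estimate of the treatment effect: success at adequate vs. near-zero exposure
effect = p_success(25.0) - p_success(0.01)
print(f"treatment effect = {effect:.3f}")
```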
9

Ben Zaabza, Hafedh, Abderrahmen Ben Gara, Hedi Hammami, Mohamed Amine Ferchichi, and Boulbaba Rekik. "Estimation of variance components of milk, fat, and protein yields of Tunisian Holstein dairy cattle using Bayesian and REML methods." Archives Animal Breeding 59, no. 2 (2016): 243–48. http://dx.doi.org/10.5194/aab-59-243-2016.

Abstract:
A multi-trait repeatability animal model under restricted maximum likelihood (REML) and Bayesian methods was used to estimate genetic parameters of milk, fat, and protein yields in Tunisian Holstein cows. The estimates of heritability for milk, fat, and protein yields from the REML procedure were 0.21 ± 0.05, 0.159 ± 0.04, and 0.158 ± 0.04, respectively. The corresponding results from the Bayesian procedure were 0.273 ± 0.02, 0.198 ± 0.01, and 0.187 ± 0.01. Heritability estimates tended to be larger under the Bayesian method than those obtained by REML. Genetic and permanent environmental variances estimated by REML were smaller than those obtained by the Bayesian analysis; inversely, REML estimates of the residual variances were larger than the Bayesian estimates. Genetic and permanent environmental correlation estimates were, on the other hand, comparable between the REML and Bayesian methods, with permanent environmental correlations being larger than genetic ones. Results from this study confirm previous reports on genetic parameters for milk traits in Tunisian Holsteins and suggest that a multi-trait approach can be an alternative for implementing a routine genetic evaluation of the Tunisian dairy cattle population.
10

Chen, Ziqi, Cameron Fackler, and Ning Xiang. "Bayesian Parameter estimation of microphone positions, sound speed and dissipation for impedance tube measurements." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, no. 7 (2023): 503–7. http://dx.doi.org/10.3397/in_2022_0070.

Abstract:
With impedance tubes widely used for acoustic measurements, calibration plays an important role in verifying and validating the measurement. This work applies a Bayesian method based on an air-layer reflectance model to estimate the microphone positions and sound speed, in consideration of environmental effects on the uncertainties of normal-incidence impedance tube measurements. Bayes' theorem is applied to estimate the microphone positions and sound speed given the experimental data obtained from the transfer function method (TFM). With a hypothetical air layer treated as the material under test in front of a rigid backing in the tube, a parametric model is established for the TFM measurement, and the microphone positions are estimated using Bayesian inference. With the microphone positions accurately estimated, the sound speed and the losses due to tube interior boundary effects are also estimated within the same Bayesian framework. The results show that Bayesian parameter estimation based on the air-layer model is well suited to estimating the sound speed, microphone positions, and other parameters to ensure highly accurate tube measurements.

Dissertations / Theses on the topic "Bayesian estimate"

1

OLIVEIRA, ANA CRISTINA BERNARDO DE. "BAYESIAN MODEL TO ESTIMATE ADVERTISING RECALL IN MARKETING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1997. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=7528@1.

Abstract:
The importance of systems that continuously monitor consumer response to advertising is well recognized by the market research community. Collecting this kind of information systematically is important because it allows past campaigns to be reviewed, trends detected in pre-tests to be corrected, and decision making in advertising to be better guided. The awareness, within a consumer population, of a particular advertisement is one such quantity. This work defines and implements a model, based on dynamic generalised linear models, to measure this response.
2

James, Peter Welbury. "Design and analysis of studies to estimate cerebral blood flow." Thesis, University of Newcastle Upon Tyne, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.251020.

3

Rodewald, Oliver Russell. "Use of Bayesian inference to estimate diversion likelihood in a PUREX facility." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/76951.

Abstract:
Nuclear fuel reprocessing is done today with the PUREX process, which has been demonstrated to work at industrial scale at several facilities around the world. Use of the PUREX process results in the creation of a stream of pure plutonium, which allows the process to be potentially used by a proliferator. Safeguards have been put in place by the IAEA and other agencies to guard against the possibility of diversion and misuse, but the cost of these safeguards, and the intrusion into a facility they represent, could cause a fuel reprocessing facility operator to consider forgoing standard safeguards in favor of diversion detection that is less intrusive. Use of subjective expertise in a Bayesian network offers a unique opportunity to monitor a fuel reprocessing facility while collecting limited information compared to traditional safeguards. This work focuses on the preliminary creation of a proof-of-concept Bayesian network and its application to a model nuclear fuel reprocessing facility.
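At its core, such a network updates a prior belief of diversion from indirect indicators; a single-indicator toy version, with all probabilities hypothetical rather than taken from the thesis:

```python
# Single-indicator toy network: prior belief of diversion and likelihoods
# of seeing an anomalous process indicator. All numbers are hypothetical.
p_diversion = 0.01                 # prior P(diversion)
p_ind_given_div = 0.80             # P(indicator | diversion)
p_ind_given_none = 0.05            # P(indicator | no diversion)

joint_div = p_ind_given_div * p_diversion
joint_none = p_ind_given_none * (1.0 - p_diversion)
posterior = joint_div / (joint_div + joint_none)   # Bayes' rule
print(f"P(diversion | indicator observed) = {posterior:.3f}")
```

A full network chains many such conditional tables over correlated indicators, which is what lets it trade intrusiveness against detection power.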
4

SOUZA, MARCUS VINICIUS PEREIRA DE. "A BAYESIAN APPROACH TO ESTIMATE THE EFFICIENT OPERATIONAL COSTS OF ELECTRICAL ENERGY UTILITIES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=12361@1.

Abstract:
This thesis presents cost-efficiency results for 60 Brazilian electricity distribution utilities. Based on a yardstick competition scheme, a Kohonen neural network (KNN) was applied to identify and group similar utilities. The KNN results are not deterministic, since the estimated weights are randomly initialized, so a Monte Carlo simulation was used to find the most frequent clusters. Efficiency scores were then obtained from DEA models (input oriented, with and without weight constraints) and from Bayesian and non-Bayesian stochastic frontier analysis (based on Cobb-Douglas and translog cost functions). In all models, the only input variable is operational cost (OPEX), and the efficiency measures reflect the potential reduction of the operational costs of each utility. The outputs are the cost drivers of OPEX: the number of customers (a proxy for the amount of service), the total electric power supplied (a proxy for the amount of product delivered), and the distribution network size (a proxy for customer scattering in the operating territory of each utility). Finally, these techniques can reduce information asymmetry and improve the regulator's ability to compare the performance of utilities in incentive regulation environments.
5

Xiao, Yuqing. "Estimate the True Pass Probability for Near-Real-Time Monitor Challenge Data Using Bayesian Analysis." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/math_theses/20.

Abstract:
The U.S. Army's Chemical Demilitarization facilities are designed to store, treat, and destroy the nation's aging chemical weapons. They operate Near-Real-Time Monitors and Depot Area Monitoring Systems to detect chemical agent at concentrations before they become dangerous to workers, public health, and the environment. The CDC recommends that the sampling and analytical methods measure within 25% of the true concentration 95% of the time; if this criterion is not met, the alarm set point or reportable level should be adjusted. Two methods were provided by the Army's Programmatic Laboratory and Monitoring Quality Assurance Plan to evaluate the monitoring systems against the CDC recommendations. This thesis addresses the potential problems associated with these two methods and proposes a Bayesian method in an effort to improve the assessment. Comparison of simulation results indicates that the Bayesian method produces a relatively better estimate for verifying monitoring system performance as long as the prior given is correct.
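The Bayesian calculation in question is essentially a Beta-Binomial update; a sketch with a made-up prior and challenge counts, not the thesis's actual numbers:

```python
from scipy import stats

# Beta prior on the monitor's true pass probability, centered near 0.95;
# the hyperparameters and challenge counts below are illustrative.
a, b = 19.0, 1.0
passes, trials = 46, 48
posterior = stats.beta(a + passes, b + (trials - passes))
print("P(true pass prob >= 0.95) =", 1.0 - posterior.cdf(0.95))
print("95% credible interval =", posterior.ppf([0.025, 0.975]))
```

The abstract's caveat shows up directly here: shift the prior (a, b) and the posterior verdict about meeting the 95% criterion shifts with it.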
6

HUAMANI, LUIS ALBERTO NAVARRO. "A BAYESIAN PROCEDURE TO ESTIMATE THE INDIVIDUAL CONTRIBUTION OF INDIVIDUAL END USES IN RESIDENTIAL ELECTRICAL ENERGY CONSUMPTION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1997. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8691@1.

Abstract:
This dissertation investigates the use of seemingly unrelated multivariate regression models, from a Bayesian point of view, to estimate the load curves of the main household appliances. A conditional demand analysis (CDA) structure is used, of special interest to the commercial and residential sectors for demand-side management of residential consumers' habits. The work has three main parts: a presentation of the classical statistical methodologies used to estimate load curves; a study of seemingly unrelated multivariate regression models using a Bayesian approach; and the development of the model in a case study. The presentation of classical methodologies surveys the CDA structure for the univariate case using multiple regression and for the multivariate case using seemingly unrelated regression, where performance depends on the correlation structure among the hourly consumption errors within a given day. The Bayesian study highlights a factor that is important to the performance of the estimation methodology: prior information. In the case study, the load curves of the main household appliances were estimated with the Bayesian approach, showing how the methodology captures both types of information: engineering estimates and CDA estimates. The results outperformed the classical models in explaining the data.
7

Bergström, David. "Bayesian optimization for selecting training and validation data for supervised machine learning : using Gaussian processes both to learn the relationship between sets of training data and model performance, and to estimate model performance over the entire problem domain." Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157327.

Abstract:
Validation and verification in machine learning is an open problem that becomes increasingly important as its applications become more critical, among them autonomous vehicles and medical diagnostics. These systems all need to be validated before being put into use, or else the consequences might be fatal. This master's thesis focuses on improving both the learning and the validation of machine learning models in cases where data can be generated or collected based on a chosen position, for example by taking and labeling photos at that position or by running a simulation that generates data from the chosen positions. The approach is twofold. The first part concerns modeling the relationship between any fixed-size set of positions and some real-valued performance measure. The second part involves calculating such a performance measure by estimating the performance over a region of positions. The result is two different algorithms, both variations of Bayesian optimization. The first models the relationship between a set of points and some performance measure while also optimizing the function, thus finding the set of points that yields the highest performance. The second uses Bayesian optimization to approximate the integral of performance over the region of interest. Both algorithms are validated in two different simulated environments. They are applicable not only to machine learning but to optimizing any function that takes a set of positions and returns a value, and are most suitable when the function is expensive to evaluate.
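Both algorithms rest on a Gaussian-process surrogate. A minimal numpy sketch of such a GP posterior follows; the unit signal variance, fixed length-scale, and function name are assumptions, not the thesis's code:

```python
import numpy as np

def gp_posterior(X_train, y_train, X_test, length=1.0, noise=1e-3):
    """Gaussian-process posterior mean and variance with an RBF kernel;
    the surrogate at the heart of a Bayesian-optimization loop."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = k(X_train, X_test)
    sol = np.linalg.solve(K, Ks)          # K^{-1} k(X_train, X_test)
    mean = sol.T @ y_train                # posterior mean at test points
    var = 1.0 - (Ks * sol).sum(axis=0)    # posterior variance (unit prior)
    return mean, var
```

An acquisition function such as expected improvement would then be evaluated on this mean and variance to pick the next set of positions to label or simulate.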
8

Li, Qing. "Recurrent-Event Models for Change-Points Detection." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/78207.

Abstract:
The driving risk of novice teenagers is the highest during the initial period after licensure but decreases rapidly. This dissertation develops recurrent-event change-point models to detect the time when driving risk decreases significantly for novice teenage drivers. The dissertation consists of three major parts: the first part applies recurrent-event change-point models with identical change-points for all subjects; the second part proposes models that allow change-points to vary among drivers via a hierarchical Bayesian finite mixture model; the third part develops a non-parametric Bayesian model with a Dirichlet process prior.

In the first part, two recurrent-event change-point models to detect the time of change in driving risk are developed. The models are based on a non-homogeneous Poisson process with piecewise constant intensity functions. It is shown that the change-points only occur at the event times and that the maximum likelihood estimators are consistent. The proposed models are applied to the Naturalistic Teenage Driving Study, which continuously recorded in situ driving behaviour of 42 novice teenage drivers for the first 18 months after licensure using sophisticated in-vehicle instrumentation. The results indicate that the crash and near-crash rate decreases significantly after 73 hours of independent driving after licensure.

The models in part one assume identical change-points for all drivers. However, several studies showed that different patterns of risk change over time might exist among teenagers, which implies that the change-points might not be identical among drivers. In the second part, change-points are allowed to vary among drivers through a hierarchical Bayesian finite mixture model, considering that clusters exist among the teenagers. The prior for the mixture proportions is a Dirichlet distribution, and a Markov chain Monte Carlo algorithm is developed to sample from the posterior distributions. DIC is used to determine the best number of clusters. In the simulation study, the model performs well under different scenarios. For the Naturalistic Teenage Driving Study data, three clusters exist among the teenagers: the change-points are 52.30, 108.99, and 150.20 hours of driving after first licensure, respectively; the intensity rate increases for the first cluster and decreases for the other two; and the change-point of the first cluster is the earliest while its average intensity rate is the highest.

In the second part, model selection is conducted to determine the number of clusters. An alternative is the Bayesian non-parametric approach. In the third part, a Dirichlet process mixture model is proposed, where the change-points are assigned a Dirichlet process prior, and a Markov chain Monte Carlo algorithm is developed to sample from the posterior distributions. Automatic clustering based on change-points is obtained without specifying the number of latent clusters. Under the Dirichlet process mixture model, three clusters exist among the teenage drivers, with change-points at 96.31, 163.83, and 279.19 hours. The results provide critical information for safety education, safety countermeasure development, and Graduated Driver Licensing policy making.
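The first part's key observation, that maximum-likelihood change-points lie at event times, suggests a direct profile search; here is a sketch for the single change-point, two-rate case (an illustration of the idea, not the dissertation's code):

```python
import numpy as np

def best_changepoint(event_times, T):
    """Profile log-likelihood for a non-homogeneous Poisson process with
    piecewise constant rates (lambda1 on [0, tau], lambda2 on (tau, T]);
    candidate change-points tau are the observed event times."""
    t = np.sort(np.asarray(event_times, dtype=float))
    n = len(t)
    best = (-np.inf, None)
    for i in range(1, n):              # change-point at the i-th event
        tau = t[i - 1]
        n1, n2 = i, n - i              # event counts in each segment
        r1, r2 = n1 / tau, n2 / (T - tau)   # segment-wise MLE rates
        ll = n1 * np.log(r1) - r1 * tau + n2 * np.log(r2) - r2 * (T - tau)
        best = max(best, (ll, tau), key=lambda pair: pair[0])
    return best                         # (max log-likelihood, change-point)
```

The hierarchical and Dirichlet-process versions in parts two and three replace this single shared change-point with driver-specific ones drawn from a mixture.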
9

Benko, Matej. "Hledaní modelů pohybu a jejich parametrů pro identifikaci trajektorie cílů." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-445467.

Abstract:
This thesis deals with removing the noise that arises in so-called multilateration measurements of airborne targets. Bayesian estimation theory is the main tool used for this purpose. The posterior density of the true (exact) position of the aircraft is derived. Along with the aircraft's position (and velocity), the thesis also estimates the geometry of the trajectory the aircraft is currently following, and the so-called process noise, which characterizes how much the true trajectory can deviate from it. Estimation of this process noise is the most important part of the work. A maximum likelihood approach and a Bayesian approach are derived, together with various refinements and modifications of both, which improve the estimate when, for example, the target changes its maneuver, or which address the initial inaccuracy of the maximum likelihood estimate. Finally, the possibility of combining the approaches, i.e., jointly estimating both the geometry and the process noise, is demonstrated.
10

Marković, Dimitrije, and Stefan J. Kiebel. "Comparative Analysis of Behavioral Models for Adaptive Learning in Changing Environments." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-214867.

Abstract:
Probabilistic models of decision making under various forms of uncertainty have been applied in recent years to numerous behavioral and model-based fMRI studies. These studies were highly successful in enabling a better understanding of behavior and delineating the functional properties of brain areas involved in decision making under uncertainty. However, as different studies considered different models of decision making under uncertainty, it is unclear which of these computational models provides the best account of the observed behavioral and neuroimaging data. This is an important issue, as not performing model comparison may tempt researchers to over-interpret results based on a single model. Here we describe how in practice one can compare different behavioral models and test the accuracy of model comparison and parameter estimation of Bayesian and maximum-likelihood based methods. We focus our analysis on two well-established hierarchical probabilistic models that aim at capturing the evolution of beliefs in changing environments: Hierarchical Gaussian Filters and Change Point Models. To our knowledge, these two, well-established models have never been compared on the same data. We demonstrate, using simulated behavioral experiments, that one can accurately disambiguate between these two models, and accurately infer free model parameters and hidden belief trajectories (e.g., posterior expectations, posterior uncertainties, and prediction errors) even when using noisy and highly correlated behavioral measurements. Importantly, we found several advantages of Bayesian inference and Bayesian model comparison compared to often-used Maximum-Likelihood schemes combined with the Bayesian Information Criterion. These results stress the relevance of Bayesian data analysis for model-based neuroimaging studies that investigate human decision making under uncertainty.

Books on the topic "Bayesian estimate"

1

Houston, Walter M. Empirical Bayes estimates of parameters from the logistic regression model. ACT, Inc., 1997.

2

Raymer, James, and Frans Willekens, eds. International migration in Europe: Data, models and estimates. Wiley, 2008.

3

Doppelhofer, Gernot. Determinants of long-term growth: A Bayesian averaging of classical estimates (BACE) approach. National Bureau of Economic Research, 2000.

4

Tanabe, Kunio. BNDE, FORTRAN subroutines for computing Bayesian nonparametric univariate and bivariate density estimator. Institute of Statistical Mathematics, 1988.

5

Torres, Maura Acevedo. Reduction of Uncertainty in Post-Event Seismic Loss Estimates Using Observation Data and Bayesian Updating. [publisher not identified], 2017.

6

Banḳ Yiśraʼel, Maḥleḳet ha-meḥḳar, ed. The choice of a foreign price measure in a Bayesian estimated New-Keynesian model for Israel. Research Department, Bank of Israel, 2009.

7

Walsh, Bruce, and Michael Lynch. Analysis of Short-term Selection Experiments: 2. Mixed-model and Bayesian Approaches. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198830870.003.0019.

Abstract:
When the full pedigree of individuals whose values (records) were used in the selection decisions during an experiment (or breeding program) is known, LS analysis can be replaced by mixed models and their Bayesian extensions. In this setting, REML can be used to estimate genetic variances and BLUP can be used to estimate the mean breeding value in any given generation. The latter allows for genetic trends to be separated from environmental trends without the need for a control population. Under the infinitesimal model setting (wherein selection-induced allele-frequency changes are small during the course of the experiment), the use of the relationship matrix in a BLUP analysis accounts for drift, nonrandom mating, and linkage disequilibrium.
8

Quintana, José Mario, Carlos Carvalho, James Scott, and Thomas Costigliola. Extracting S&P500 and NASDAQ Volatility: The Credit Crisis of 2007–2008. Edited by Anthony O'Hagan and Mike West. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198703174.013.13.

Abstract:
This article demonstrates the utility of Bayesian modelling and inference in financial market volatility analysis, using the 2007-2008 credit crisis as a case study. It first describes the applied problem and goal of the Bayesian analysis before introducing the sequential estimation models. It then discusses the simulation-based methodology for inference, including Markov chain Monte Carlo (MCMC) and particle filtering methods for filtering and parameter learning. In the study, Bayesian sequential model choice techniques are used to estimate volatility and volatility dynamics for daily data for the year 2007 for three market indices: the Standard and Poor's S&P500, the NASDAQ NDX100 and the financial equity index called XLF. Three models of financial time series are estimated: a model with stochastic volatility, a model with stochastic volatility that also incorporates jumps in volatility, and a GARCH model.
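As a flavor of the particle-filtering machinery the chapter describes, here is a minimal bootstrap filter for a basic stochastic-volatility model; the model form and parameter values are illustrative simplifications, not the chapter's estimates:

```python
import numpy as np

def sv_particle_filter(returns, n_particles=2_000, mu=-1.0, phi=0.97,
                       sigma_eta=0.2, seed=0):
    """Bootstrap particle filter for a basic stochastic-volatility model:
    h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eta_t,  r_t ~ N(0, exp(h_t))."""
    rng = np.random.default_rng(seed)
    h = rng.normal(mu, sigma_eta / np.sqrt(1 - phi**2), n_particles)
    vol_path = []
    for r in returns:
        # propagate each particle's log-variance one step
        h = mu + phi * (h - mu) + sigma_eta * rng.standard_normal(n_particles)
        # weight by the N(0, exp(h)) likelihood of the observed return
        logw = -0.5 * (h + r**2 * np.exp(-h))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        vol_path.append(np.exp(0.5 * (h @ w)))     # filtered volatility
        h = rng.choice(h, size=n_particles, p=w)   # multinomial resampling
    return np.array(vol_path)
```

Jump and GARCH variants change the propagation and likelihood steps, which is where the chapter's model-choice machinery comes in.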
9

Higdon, Dave, Katrin Heitmann, Charles Nakhleh, and Salman Habib. Combining simulations and physical observations to estimate cosmological parameters. Edited by Anthony O'Hagan and Mike West. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198703174.013.26.

Abstract:
This article focuses on the use of a Bayesian approach that combines simulations and physical observations to estimate cosmological parameters. It begins with an overview of the Λ-cold dark matter (CDM) model, the simplest cosmological model in agreement with the cosmic microwave background (CMB) and large-scale structure analysis. The CDM model is determined by a small number of parameters which control the composition, expansion and fluctuations of the universe. The present study aims to learn about the values of these parameters using measurements from the Sloan Digital Sky Survey (SDSS). Computationally intensive simulation results are combined with measurements from the SDSS to draw inferences about a subset of the parameters that control the CDM model. The article also describes a statistical framework used to determine a posterior distribution for these cosmological parameters and concludes by showing how it can be extended to include data from diverse data sources.

Book chapters on the topic "Bayesian estimate"

1

Ghosh, J. K. "The Horvitz-Thompson Estimate and Basu’s Circus Revisited." In Bayesian Analysis in Statistics and Econometrics. Springer New York, 1992. http://dx.doi.org/10.1007/978-1-4612-2944-5_14.

2

Pillonetto, Gianluigi, Tianshi Chen, Alessandro Chiuso, Giuseppe De Nicolao, and Lennart Ljung. "Bayesian Interpretation of Regularization." In Regularized System Identification. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95860-2_4.

Abstract:
In the previous chapter, it has been shown that the regularization approach is particularly useful when information contained in the data is not sufficient to obtain a precise estimate of the unknown parameter vector and standard methods, such as least squares, yield poor solutions. The fact itself that an estimate is regarded as poor suggests the existence of some form of prior knowledge on the degree of acceptability of candidate solutions. It is this knowledge that guides the choice of the regularization penalty that is added as a corrective term to the usual sum of squared residuals. In the previous chapters, this design process has been described in a deterministic setting where only the measurement noises are random. In this chapter, we will see that an alternative formalization of prior information is obtained if a subjective/Bayesian estimation paradigm is adopted. The major difference is that the parameters, rather than being regarded as deterministic, are now treated as a random vector. This stochastic setting permits the definition of new powerful tools for both priors selection, e.g., through the maximum entropy principle, and for regularization parameters tuning, e.g., through the empirical Bayes approach and its connection with the concept of equivalent degrees of freedom.
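The chapter's central equivalence is easy to state in code: with Gaussian noise and a Gaussian prior on the weights, the MAP estimate coincides with ridge regression. A generic sketch, not the book's notation:

```python
import numpy as np

def map_ridge(X, y, noise_var=1.0, prior_var=10.0):
    """MAP estimate under y = X w + e, with e ~ N(0, noise_var * I) and
    prior w ~ N(0, prior_var * I): this is exactly ridge regression with
    regularization weight lam = noise_var / prior_var."""
    lam = noise_var / prior_var
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

Shrinking the prior variance tightens the penalty, which is the Bayesian reading of tuning the regularization parameter.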
3

O’Hagan, Anthony, and Frank S. Wells. "Use of Prior Information to Estimate Costs in a Sewerage Operation." In Case Studies in Bayesian Statistics. Springer New York, 1993. http://dx.doi.org/10.1007/978-1-4612-2714-4_3.

4

Weichert, Dorina, Elena Haedecke, Gunar Ernis, Sebastian Houben, Alexander Kister, and Stefan Wrobel. "Bayesian Inference for Fatigue Strength Estimation." In Cognitive Technologies. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-83097-6_6.

Abstract:
A vital material property of metals is long-life fatigue strength. It describes the maximum load that can be cyclically applied to a defined specimen for a number of cycles that is thought to represent an infinite lifetime. The experimental measurement of long-life fatigue strength is costly, justifying the need to create a precise estimate with as few experiments as possible. We propose a new approach for estimating long-life fatigue strength that defines a ready-to-use experimental and analysis procedure. It relies on probabilistic machine learning methods, efficiently connecting expert knowledge about the material behavior and the test setup with historical and newly generated data. A comparison to state-of-the-art standard experimental procedures shows that our approach requires fewer experiments to produce an estimate at the same precision, massively reducing experimental costs.
5

Henzinger, Thomas A., Mahyar Karimi, Konstantin Kueffner, and Kaushik Mallik. "Monitoring Algorithmic Fairness." In Computer Aided Verification. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37703-7_17.

Abstract:
Machine-learned systems are in widespread use for making decisions about humans, and it is important that they are fair, i.e., not biased against individuals based on sensitive attributes. We present runtime verification of algorithmic fairness for systems whose models are unknown, but are assumed to have a Markov chain structure. We introduce a specification language that can model many common algorithmic fairness properties, such as demographic parity, equal opportunity, and social burden. We build monitors that observe a long sequence of events as generated by a given system, and output, after each observation, a quantitative estimate of how fair or biased the system was on that run until that point in time. The estimate is proven to be correct modulo a variable error bound and a given confidence level, where the error bound gets tighter as the observed sequence gets longer. Our monitors are of two types, and use, respectively, frequentist and Bayesian statistical inference techniques. While the frequentist monitors compute estimates that are objectively correct with respect to the ground truth, the Bayesian monitors compute estimates that are correct subject to a given prior belief about the system’s model. Using a prototype implementation, we show how we can monitor if a bank is fair in giving loans to applicants from different social backgrounds, and if a college is fair in admitting students while maintaining a reasonable financial burden on the society. Although they exhibit different theoretical complexities in certain cases, in our experiments, both frequentist and Bayesian monitors took less than a millisecond to update their verdicts after each observation.
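A stripped-down sketch of the Bayesian flavor of such a monitor for demographic parity, using Beta posteriors over two groups' acceptance rates; the uniform priors and the counts are hypothetical, and the paper's monitors are considerably more general:

```python
import numpy as np

def parity_gap_interval(acc_a, n_a, acc_b, n_b, draws=20_000, seed=0):
    """Beta(1,1) posteriors over each group's acceptance rate, then a
    Monte Carlo credible interval for the demographic-parity gap."""
    rng = np.random.default_rng(seed)
    rate_a = rng.beta(1 + acc_a, 1 + n_a - acc_a, size=draws)
    rate_b = rng.beta(1 + acc_b, 1 + n_b - acc_b, size=draws)
    return np.quantile(rate_a - rate_b, [0.025, 0.975])

# hypothetical running counts of accepted applicants per group
print(parity_gap_interval(acc_a=120, n_a=400, acc_b=90, n_b=380))
```

Updating the counts after each observation and recomputing the interval mirrors how the monitor's verdict tightens as the observed sequence grows.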
6

Peña, José M., Víctor Robles, Óscar Marbán, and María S. Pérez. "Bayesian Methods to Estimate Future Load in Web Farms." In Advances in Web Intelligence. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24681-7_24.

7

Evensen, Geir, Femke C. Vossepoel, and Peter Jan van Leeuwen. "Maximum a Posteriori Solution." In Springer Textbooks in Earth Sciences, Geography and Environment. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96709-3_3.

Abstract:
We will now introduce a fundamental approximation used in most practical data-assimilation methods, namely the definition of Gaussian priors. This approximation simplifies the Bayesian posterior, which allows us to compute the maximum a posteriori (MAP) estimate and sample from the posterior pdf. This chapter will introduce the Gaussian approximation and then discuss the Gauss–Newton method for finding the MAP estimate. This method is the starting point for many of the data-assimilation algorithms discussed in the following chapters.
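With Gaussian priors and Gaussian observation errors, the negative log-posterior reduces to the familiar quadratic cost whose minimizer is the MAP estimate; the notation below is standard in data assimilation and is assumed here rather than quoted from the chapter:

```latex
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}^{\mathrm{f}})^{\top}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}^{\mathrm{f}})
+ \tfrac{1}{2}\,\bigl(\mathbf{y}-\mathcal{H}(\mathbf{x})\bigr)^{\top}\mathbf{R}^{-1}\bigl(\mathbf{y}-\mathcal{H}(\mathbf{x})\bigr),
\qquad
\mathbf{x}^{\mathrm{MAP}} = \arg\min_{\mathbf{x}} J(\mathbf{x}),
```

where $\mathbf{x}^{\mathrm{f}}$ and $\mathbf{B}$ are the prior mean and covariance, $\mathcal{H}$ the observation operator, and $\mathbf{R}$ the observation-error covariance; Gauss–Newton iterates on successive linearizations of $\mathcal{H}$.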
8

Wickramasuriya, Dilranjan S., and Rose T. Faghih. "Introduction." In Bayesian Filter Design for Computational Medicine. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-47104-9_1.

Abstract:
The human body is an intricate network of multiple functioning sub-systems. Many unobserved processes quietly keep running within the body even while we remain largely unconscious of them. For decades, scientists have sought to understand how different physiological systems work and how they can be mathematically modeled. Mathematical models of biological systems provide key scientific insights and also help guide the development of technologies for treating disorders when proper functioning no longer occurs. One of the challenges encountered with physiological systems is that, in a number of instances, the quantities we are interested in are difficult to observe directly or remain completely inaccessible. This could be either because they are located deep within the body or simply because they are more abstract (e.g., emotion). Consider the heart, for instance. The left ventricle pumps out blood through the aorta to the rest of the body. Blood pressure inside the aorta (known as central aortic pressure) has been considered a useful predictor of the future risk of developing cardiovascular disease, perhaps even more useful than the conventional blood pressure measurements taken from the upper arm (McEniery et al. (Eur Heart J 35(26):1719–1725, 2014)). However, measuring blood pressure inside the aorta is difficult. Consequently, researchers have had to rely on developing mathematical models with which to estimate central aortic pressure using other peripheral measurements (e.g., Ghasemi et al. (J Dyn Syst Measur Control 139(6):061003, 2017)). The same could be said regarding the recovery of CRH (corticotropin-releasing hormone) secretion timings within the hypothalamus, a largely inaccessible structure deep within the brain, using cortisol measurements in the blood based on mathematical relationships (Faghih (System identification of cortisol secretion: Characterizing pulsatile dynamics, Ph.D. dissertation, Massachusetts Institute of Technology, 2014)). Emotions could also be placed in this same category. They are difficult to measure because of their inherently abstract nature. Emotions, however, do cause changes in heart rate, sweating, and blood pressure that can be measured and with which someone’s feelings can be estimated. What we have described so far, in a sense, captures the big picture underlying this book. We have physiological quantities that are difficult to observe directly, we have measurements that are easier to acquire, and we have the ability to build mathematical models to estimate those inaccessible quantities.
9

Joanes, Derrick N., Christine A. Gill, and Andrew J. Baczkowski. "Simulation of a Bayesian Interval Estimate for a Heterogeneity Measure." In Compstat. Physica-Verlag HD, 1994. http://dx.doi.org/10.1007/978-3-642-52463-9_20.

10

Mira, Antonietta, and Paolo Tenconi. "Bayesian Estimate of Default Probabilities via MCMC with Delayed Rejection." In Seminar on Stochastic Analysis, Random Fields and Applications IV. Birkhäuser Basel, 2004. http://dx.doi.org/10.1007/978-3-0348-7943-9_17.


Conference papers on the topic "Bayesian estimate"

1

Mertenskötter, Lutz, and Markus Kantner. "Bayesian Estimation of Frequency Noise in Narrow-Linewidth Lasers." In CLEO: Applications and Technology. Optica Publishing Group, 2024. http://dx.doi.org/10.1364/cleo_at.2024.jtu2a.45.

Abstract:
We present a statistical inference approach to estimate the frequency noise power spectral density of narrow-linewidth lasers from delayed self-heterodyne beat note experiments in the presence of considerable measurement noise.
2

Scharzenberger, Cody, Shay Snyder, Sumedh R. Risbud, Joe Hays, and Maryam Parsa. "Learning to Estimate Regions of Attraction of Autonomous Dynamical Systems Using Bayesian Physics Informed Neural Networks." In 2024 International Conference on Neuromorphic Systems (ICONS). IEEE, 2024. https://doi.org/10.1109/icons62911.2024.00036.

3

Patel, Ishan, and Gheorghe Bota. "Mechanistic Model as a Bias to Machine Learning Algorithm for Confident Prediction of Corrosion." In CONFERENCE 2023. AMPP, 2023. https://doi.org/10.5006/c2023-19108.

Abstract:
A Bayesian network can be employed to estimate a risk-based life-cycle cost of corrosion for assets. It is widely recognized that including mechanistic models in a Bayesian network can increase confidence in the estimation of corrosion rates. However, the coefficients of mechanistic models are often unknown, especially when complex rate processes are involved, which discourages the use of such models. A methodology is proposed here to introduce a mechanistic model as a bias to a regressive machine learning (ML) algorithm. No attempt is made to obtain the phenomenological coefficients of the mechanistic model. Instead, a methodology is proposed to obtain a highly tuned parameter vector for an ML algorithm from a learning set of corrosion rate data.
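One plausible reading of "mechanistic model as a bias" is a residual-learning setup in which an ML regressor corrects a physics baseline; the Arrhenius form, all constants, and the function names below are hypothetical stand-ins, not the paper's model:

```python
import numpy as np

R_GAS = 8.314  # universal gas constant, J/(mol*K)

def mechanistic_rate(temp_kelvin, k0=1.0e5, ea=50_000.0):
    """Hypothetical Arrhenius-type baseline for the corrosion rate."""
    return k0 * np.exp(-ea / (R_GAS * temp_kelvin))

def biased_prediction(features, temp_kelvin, residual_model):
    """Mechanistic baseline plus an ML correction trained on residuals
    (any regressor exposing a scikit-learn-style .predict would do)."""
    return mechanistic_rate(temp_kelvin) + residual_model.predict(features)
```

The baseline keeps predictions physically sensible outside the training data, while the residual model absorbs whatever the unknown coefficients would otherwise have to explain.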
APA, Harvard, Vancouver, ISO, and other styles
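
A minimal sketch of the idea summarized in the entry above, under assumed forms and data: a mechanistic model acts as a fixed bias (baseline), and a regression learner is fitted only to the residual. The Arrhenius-like baseline and synthetic data are hypothetical stand-ins, not the authors' model.

```python
# Sketch: mechanistic model as a bias for a regression ML algorithm.
# The learner corrects the baseline instead of fitting rates from scratch.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: temperature (C) and flow velocity (m/s)
X = np.column_stack([rng.uniform(20, 80, 200), rng.uniform(0.1, 3.0, 200)])

def mechanistic_rate(X):
    # Arrhenius-like baseline with assumed (not fitted) coefficients
    T, v = X[:, 0], X[:, 1]
    return 0.5 * np.exp(-1500.0 / (T + 273.15)) * np.sqrt(v) * 1e3

# Synthetic "observed" rates = mechanism + unmodeled effect + noise
y = mechanistic_rate(X) + 0.05 * X[:, 1] ** 2 + rng.normal(0, 0.02, 200)

def features(X):
    T, v = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(T), T, v, T * v, v**2])

# Ridge regression on the residual; the mechanistic model is the bias term
resid = y - mechanistic_rate(X)
Phi = features(X)
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ resid)

y_hat = mechanistic_rate(X) + features(X) @ w
print("RMSE with mechanistic bias:", np.sqrt(np.mean((y - y_hat) ** 2)))
```
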
4

de Melo, Brian A. R., Raony C. C. Cesar, and Carlos A. B. Pereira. "Sample sizes to estimate proportions and correlation." In XI BRAZILIAN MEETING ON BAYESIAN STATISTICS: EBEB 2012. AIP, 2012. http://dx.doi.org/10.1063/1.4759606.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Iseki, Toshio. "An Improved Stochastic Modeling for Bayesian Wave Estimation." In ASME 2012 31st International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/omae2012-83740.

Full text
Abstract:
A modified Bayesian modeling procedure for wave estimation is proposed. The method takes errors in the estimates of the ship response functions into account. To examine the relationship between the minimum ABIC and the accuracy of the estimated wave parameters, the ABIC surfaces and the optimal region for wave estimation are shown with respect to the two hyperparameters. The modified Bayesian modeling makes the ABIC surface smoother and provides stable wave estimation, indicating that it is reliable, within a certain accuracy, for estimating the wave parameters.
APA, Harvard, Vancouver, ISO, and other styles
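
As a toy illustration of ABIC-based hyperparameter selection in the spirit of the entry above: the sketch below fits a generic 1-D smoothing problem with synthetic data, using one smoothness hyperparameter (the noise variance is profiled out), rather than the paper's two-hyperparameter ship-response model.

```python
# Sketch: choose a smoothness hyperparameter by minimizing an ABIC-type
# criterion (-2 * log marginal likelihood, up to additive constants).
import numpy as np

rng = np.random.default_rng(1)

n = 80
t = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.3, n)  # noisy observations

D = np.diff(np.eye(n), 2, axis=0)   # second-difference prior, rank n-2

def abic(lam):
    # Posterior mode: x_hat = argmin ||y - x||^2 + lam * ||D x||^2
    G = np.eye(n) + lam * D.T @ D
    x_hat = np.linalg.solve(G, y)
    rss = np.sum((y - x_hat) ** 2) + lam * np.sum((D @ x_hat) ** 2)
    _, logdet_G = np.linalg.slogdet(G)
    # Profiled Gaussian evidence; constants independent of lam dropped
    return (n - 2) * (np.log(rss) - np.log(lam)) + logdet_G

lams = np.exp(np.linspace(-6, 10, 81))
best = lams[np.argmin([abic(l) for l in lams])]
print("smoothness hyperparameter minimizing ABIC:", best)
```
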
6

Gu, L., G. Li, J. Abramczyk, and J. Prybylski. "A Bayesian Estimate of Vehicle Safety Performance." In SAE 2005 World Congress & Exhibition. SAE International, 2005. http://dx.doi.org/10.4271/2005-01-0822.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Torres-Avilés, F., C. Molina, and M. J. Muñoz. "Bayesian approaches for Poisson models to estimate bivariate relative risks." In XI BRAZILIAN MEETING ON BAYESIAN STATISTICS: EBEB 2012. AIP, 2012. http://dx.doi.org/10.1063/1.4759618.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dan, Zhiping, Xi Chen, Haitao Gan, and Changxin Gao. "Locally Adaptive Shearlet Denoising Based on Bayesian MAP Estimate." In 2011 International Conference on Image and Graphics (ICIG). IEEE, 2011. http://dx.doi.org/10.1109/icig.2011.134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Nagel, Joseph B., and Bruno Sudret. "A Bayesian Multilevel Approach to Optimally Estimate Material Properties." In Second International Conference on Vulnerability and Risk Analysis and Management (ICVRAM) and the Sixth International Symposium on Uncertainty, Modeling, and Analysis (ISUMA). American Society of Civil Engineers, 2014. http://dx.doi.org/10.1061/9780784413609.151.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hu, Feng, Wei Li, Jorma Lilleberg, and Matti Latva-aho. "On the approximate noise modeling for the Estimate-and-Forward relay with the Bayesian estimator." In 2013 IEEE Wireless Communications and Networking Conference (WCNC). IEEE, 2013. http://dx.doi.org/10.1109/wcnc.2013.6555206.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Bayesian estimate"

1

Mulcahy, Garrett, Dusty Brooks, and Brian Ehrhart. Using Bayesian Methodology to Estimate Liquefied Natural Gas Leak Frequencies. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1782412.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Read, Matthew. Estimating the Effects of Monetary Policy in Australia Using Sign-restricted Structural Vector Autoregressions. Reserve Bank of Australia, 2023. http://dx.doi.org/10.47688/rdp2022-09.

Full text
Abstract:
Existing estimates of the macroeconomic effects of Australian monetary policy tend to be based on strong, potentially contentious, assumptions. I estimate these effects under weaker assumptions. Specifically, I estimate a structural vector autoregression identified using a variety of sign restrictions, including restrictions on impulse responses to a monetary policy shock, the monetary policy reaction function, and the relationship between the monetary policy shock and a proxy for this shock. I use an approach to Bayesian inference that accounts for the problem of posterior sensitivity to the choice of prior that arises in this setting, which turns out to be important. Some sets of identifying restrictions are not particularly informative about the effects of monetary policy. However, combining the restrictions allows us to draw some useful inferences. There is robust evidence that an increase in the cash rate lowers output and consumer prices at horizons beyond a year or so. The results are consistent with the macroeconomic effects of a 100 basis point increase in the cash rate lying towards the upper end of the range of existing estimates.
APA, Harvard, Vancouver, ISO, and other styles
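
A minimal sketch of sign-restriction identification for a structural VAR, as described in the entry above: draw orthogonal rotations of the Cholesky factor and keep those whose impact responses satisfy the assumed signs. Data, variable names, and sign choices here are synthetic and hypothetical; a full Bayesian treatment would also sample the reduced-form parameters from their posterior.

```python
# Sketch: accept/reject sign-restriction algorithm for a SVAR impact matrix.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic reduced-form VAR(1) for (output, prices, cash rate)
n_obs, k = 300, 3
A_true = 0.5 * np.eye(k)
u = rng.normal(0, 1, (n_obs, k)) @ np.diag([1.0, 0.8, 0.5])
Y = np.zeros((n_obs, k))
for t in range(1, n_obs):
    Y[t] = Y[t - 1] @ A_true.T + u[t]

# OLS reduced form: Y_t = A Y_{t-1} + e_t
X, Z = Y[1:], Y[:-1]
A_hat = np.linalg.lstsq(Z, X, rcond=None)[0].T
E = X - Z @ A_hat.T
Sigma = E.T @ E / (len(X) - k)
P = np.linalg.cholesky(Sigma)

# Assumed restriction: a policy shock raises the rate (variable 3) and
# lowers output and prices (variables 1 and 2) on impact.
accepted = []
for _ in range(5000):
    Q, R = np.linalg.qr(rng.normal(size=(k, k)))
    Q = Q @ np.diag(np.sign(np.diag(R)))    # Haar-distributed rotation
    B0 = P @ Q                              # candidate impact matrix
    for j in range(k):
        col = B0[:, j] * np.sign(B0[2, j])  # normalize shock to raise rate
        if col[0] < 0 and col[1] < 0 and col[2] > 0:
            accepted.append(col)
            break

accepted = np.array(accepted)
print(f"{len(accepted)} accepted draws; median impact responses:",
      np.median(accepted, axis=0))
```

The spread of impact responses across accepted draws is exactly the set-identification uncertainty the report's prior-sensitivity analysis is concerned with.
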
3

Granados, Camilo, and Daniel Parra-Amado. Output Gap Measurement after COVID for Colombia: Lessons from a Permanent-Transitory Approach. Banco de la República, 2025. https://doi.org/10.32468/be.1295.

Full text
Abstract:
We estimate the output gap for the Colombian economy, explicitly accounting for the COVID-19 period. Our estimates reveal a significant 20% decline in the output gap, but with a faster recovery compared to previous crises. Our empirical strategy follows a two-stage Bayesian structural vector autoregressive (BSVAR) model in which i) a scaling factor in the reduced-form VAR is used to model extreme observations, such as those around the COVID-19 period, and ii) permanent and transitory shocks are structurally identified. As a result, a single structural shock drives potential GDP, while the remaining shocks in the model are transitory in nature and can therefore be used to estimate the output gap. We elaborate on the relative strengths of our method for drawing policy lessons and show that its improved approximation accuracy allows for inflation forecasting gains through the use of Phillips curves, as well as for rule-based policy diagnostics that align more closely with the observed behavior of the Central Bank.
APA, Harvard, Vancouver, ISO, and other styles
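
A minimal sketch of a permanent-transitory decomposition via a long-run (Blanchard-Quah-type) restriction, loosely in the spirit of the entry above: only one shock moves output in the long run, so the remaining shock is transitory and traces out a gap. Bivariate synthetic data; the paper's two-stage Bayesian treatment and COVID scaling factor are not reproduced here.

```python
# Sketch: identify permanent vs. transitory shocks with a long-run
# restriction, making the long-run impact matrix lower-triangular.
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: y1 = output growth, y2 = an indicator (e.g., inflation)
T, k = 400, 2
Y = np.zeros((T, k))
eps = rng.normal(size=(T, k))
for t in range(1, T):
    Y[t] = np.array([[0.3, 0.1], [0.2, 0.5]]) @ Y[t - 1] + eps[t]

# Reduced-form VAR(1)
X, Z = Y[1:], Y[:-1]
A = np.linalg.lstsq(Z, X, rcond=None)[0].T
E = X - Z @ A.T
Sigma = E.T @ E / (len(X) - k)

# Long-run restriction: C(1) B0 lower-triangular, so shock 2 has no
# long-run effect on output growth (variable 1) and is transitory.
C1 = np.linalg.inv(np.eye(k) - A)
S = np.linalg.cholesky(C1 @ Sigma @ C1.T)    # long-run impact matrix
B0 = np.linalg.solve(C1, S)                  # impact matrix

# Recover structural shocks; column 2 drives the output gap
eta = E @ np.linalg.inv(B0).T
print("corr of structural shocks:", np.corrcoef(eta.T)[0, 1].round(3))
```
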
4

López, Lucía, Florens Odendahl, Susana Párraga, and Edgar Silgado-Gómez. The pass-through to inflation of gas price shocks. Banco de España, 2025. https://doi.org/10.53479/39118.

Full text
Abstract:
This paper uses a Bayesian structural vector autoregressive (BSVAR) framework to estimate the pass-through of unexpected gas supply shocks to HICP inflation in the euro area and its four largest economies. Compared with oil price shocks, gas price shocks have an approximately one-third smaller pass-through to headline inflation. Country-specific results indicate that gas price increases matter more for German, Spanish, and Italian inflation than for French inflation, reflecting differences in the reliance on energy commodities in consumption and production and in electricity price regulation. Consistent with gas becoming a prominent energy commodity in the euro area, allowing for time variation through a time-varying-parameter BVAR reveals a substantially larger impact of gas price shocks on HICP inflation in recent years. The empirical estimates are then rationalised using a New Keynesian Dynamic Stochastic General Equilibrium (NK-DSGE) model augmented with energy. In the model, the elasticity of substitution between gas and non-energy inputs plays a critical role in explaining the inflationary effects of gas shocks. A decomposition of recent inflation dynamics into the model's structural shocks reveals a larger contribution of gas shocks compared with oil shocks.
APA, Harvard, Vancouver, ISO, and other styles
5

Galindo, Arturo, and Victoria Nuguer. Fuel-Price Shocks and Inflation in Latin America and the Caribbean. Inter-American Development Bank, 2023. http://dx.doi.org/10.18235/0004724.

Full text
Abstract:
We estimate the impact of fuel-commodity price shocks on inflation and inflation expectations for eight Latin American countries in which monetary policy follows inflation-targeting frameworks. We use Bayesian vector autoregressive (BVAR) models and data from 2005 through 2022 to quantify these impacts. We find that the fuel-price shocks are significant in all cases, with inflation responses ranging between 0.01 and 0.04 percentage points following a 1 percentage point shock to fuel prices. A variance decomposition exercise shows that more than 50% of the surge in inflation that these countries experienced in 2021 and 2022 can be attributed to the shock in global fuel prices. These results are robust to changes in the specification, including additional controls, different commodity price measures, different lag structures, and alternative orderings.
APA, Harvard, Vancouver, ISO, and other styles
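
A minimal sketch of the kind of variance decomposition mentioned in the entry above: a VAR identified recursively with the fuel price ordered first, and the share of inflation's forecast-error variance attributed to the fuel shock. Synthetic bivariate data; the countries, controls, and Bayesian estimation of the paper are not reproduced.

```python
# Sketch: forecast-error variance decomposition (FEVD) from a VAR(1)
# identified by a Cholesky ordering with the fuel price first.
import numpy as np

rng = np.random.default_rng(4)

T, k, H = 400, 2, 12                     # sample size, variables, horizon
B_true = np.array([[0.6, 0.0], [0.15, 0.4]])
Y = np.zeros((T, k))                     # (fuel price growth, inflation)
for t in range(1, T):
    Y[t] = B_true @ Y[t - 1] + rng.normal(0, [1.0, 0.3])

X, Z = Y[1:], Y[:-1]
A = np.linalg.lstsq(Z, X, rcond=None)[0].T
E = X - Z @ A.T
Sigma = E.T @ E / (len(X) - k)
P = np.linalg.cholesky(Sigma)            # fuel shock ordered first

# Impulse responses Psi_h = A^h P and FEVD at horizon H
Psi = [np.linalg.matrix_power(A, h) @ P for h in range(H)]
mse = sum(p @ p.T for p in Psi)          # total forecast-error variance
fuel_share = sum(p[:, 0] ** 2 for p in Psi) / np.diag(mse)
print("share of inflation FEV due to fuel shock:", fuel_share[1].round(2))
```
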
6

Albis, Manuel Leonard, Mara Claire Tayag, and Jong Woo Kang. Estimating Regional Integration Using the Bayesian State-Space Approach. Asian Development Bank, 2024. http://dx.doi.org/10.22617/wps230622-2.

Full text
Abstract:
Estimating regional integration is challenging because of incomplete data. This paper addresses the problem with a dynamic factor model estimated using a Bayesian state-space approach. Bilateral economic integration (BEI) indexes are estimated across four dimensions: trade, foreign direct investment, finance, and migration. The regional integration index (RII) for Asia and the Pacific is then calculated by applying network density to the BEI estimates. The RII has declined slightly in recent years, with the network centering increasingly on the People's Republic of China.
APA, Harvard, Vancouver, ISO, and other styles
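
A minimal sketch of a dynamic factor model handled with the Kalman filter, in the spirit of the entry above: several indicators load on one common factor, and missing observations (the incomplete-data problem) are simply skipped in the update step. The loadings, variances, and data are synthetic assumptions, not the ADB model.

```python
# Sketch: one-factor dynamic factor model; Kalman filter skips NaNs.
import numpy as np

rng = np.random.default_rng(5)

T, n_series = 120, 4
phi, lam = 0.8, np.array([1.0, 0.7, 0.5, 0.9])   # factor AR(1), loadings
f = np.zeros(T)
for t in range(1, T):
    f[t] = phi * f[t - 1] + rng.normal(0, 0.5)
Y = np.outer(f, lam) + rng.normal(0, 0.3, (T, n_series))
Y[rng.random((T, n_series)) < 0.25] = np.nan      # 25% missing entries

# Scalar-state Kalman filter with sequential updates over series
q, r = 0.25, 0.09                                 # assumed variances
m, P = 0.0, 1.0
f_hat = np.zeros(T)
for t in range(T):
    m, P = phi * m, phi**2 * P + q                # predict
    for i in range(n_series):
        if np.isnan(Y[t, i]):
            continue                              # skip missing data
        S = lam[i] ** 2 * P + r
        K = P * lam[i] / S
        m += K * (Y[t, i] - lam[i] * m)
        P *= 1 - K * lam[i]
    f_hat[t] = m

print("corr(filtered factor, true factor):",
      np.corrcoef(f_hat, f)[0, 1].round(3))
```
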
7

Gungor, Osman, Imad Al-Qadi, and Navneet Garg. Pavement Data Analytics for Collected Sensor Data. Illinois Center for Transportation, 2021. http://dx.doi.org/10.36501/0197-9191/21-034.

Full text
Abstract:
The Federal Aviation Administration instrumented four concrete slabs of a taxiway at John F. Kennedy International Airport to collect pavement responses under aircraft and environmental loading. The study began with the development of preprocessing scripts to organize, structure, and clean the collected data, making the data easier and more intuitive for pavement engineers and researchers to transform and process. Once the data were cleaned and organized, they were used to develop two prediction models. The first employs a Bayesian calibration framework to estimate the unknown material parameters of the concrete pavement; the posterior distributions resulting from the calibration also provide a sensitivity analysis by indicating the significance of each parameter for the temperature distribution. The second model uses a machine learning (ML) algorithm to predict pavement responses under aircraft and environmental loading. The results demonstrated that ML can predict the responses with high accuracy at low computational cost. This project highlighted the potential of using ML in future pavement design guidelines as more instrumentation data are collected from future projects, incorporating various material properties and pavement structures.
APA, Harvard, Vancouver, ISO, and other styles
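
A minimal sketch of Bayesian calibration via random-walk Metropolis, in the spirit of the entry above: infer an unknown material parameter of a simple forward model from noisy responses, then read the posterior spread as a crude sensitivity indicator. The forward model and data are synthetic stand-ins, not the pavement model.

```python
# Sketch: random-walk Metropolis calibration of one forward-model parameter.
import numpy as np

rng = np.random.default_rng(6)

def forward(alpha, depth):
    # Hypothetical temperature-response model: exponential decay with depth
    return 25.0 * np.exp(-alpha * depth)

depth = np.linspace(0.05, 0.5, 10)                 # sensor depths (m)
alpha_true, sigma = 3.0, 0.5
y_obs = forward(alpha_true, depth) + rng.normal(0, sigma, depth.size)

def log_post(alpha):
    if alpha <= 0:
        return -np.inf                              # positivity constraint
    resid = y_obs - forward(alpha, depth)
    # Gaussian likelihood plus a weakly informative N(2.5, 1) prior
    return -0.5 * np.sum(resid**2) / sigma**2 - 0.5 * (alpha - 2.5) ** 2

alpha, lp = 2.5, log_post(2.5)
draws = []
for _ in range(20000):
    prop = alpha + rng.normal(0, 0.2)               # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        alpha, lp = prop, lp_prop
    draws.append(alpha)

post = np.array(draws[5000:])                       # discard burn-in
print(f"posterior alpha: {post.mean():.2f} +/- {post.std():.2f}")
```

A narrow posterior signals a parameter the data pin down tightly (high sensitivity of the fit to that parameter); a wide one signals a parameter the measurements barely constrain.
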
8

Cheng, Benny N., and Lap S. Tam. Bayesian Missile System Reliability from Point Estimates. Defense Technical Information Center, 2014. http://dx.doi.org/10.21236/ada611099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kurozumi, Takushi, Ryohei Oishi, and Willem Van Zandweghe. Sticky Information Versus Sticky Prices Revisited: A Bayesian VAR-GMM Approach. Federal Reserve Bank of Cleveland, 2022. http://dx.doi.org/10.26509/frbc-wp-202234.

Full text
Abstract:
Several Phillips curves based on sticky information and sticky prices are estimated and compared using Bayesian VAR-GMM. This method derives expectations in each Phillips curve from a VAR and estimates the Phillips curve parameters and the VAR coefficients simultaneously. Quasi-marginal likelihood-based model comparison selects a dual stickiness Phillips curve in which, each period, some prices remain unchanged, consistent with micro evidence. Moreover, sticky information is a more plausible source of inflation inertia in the Phillips curve than other sources proposed in previous studies. Sticky information, sticky prices, and unchanged prices in each period are all needed to better describe inflation dynamics.
APA, Harvard, Vancouver, ISO, and other styles
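
A minimal sketch of the VAR-GMM idea in the entry above, reduced to a two-step caricature: fit a VAR, form model-implied expected inflation, then estimate a hybrid Phillips curve by least squares. The paper estimates both blocks jointly with Bayesian methods and compares models by quasi-marginal likelihood; none of that is reproduced here, and the data are synthetic.

```python
# Sketch: VAR-implied expectations feeding a hybrid Phillips curve,
# pi_t = gf * E_t[pi_{t+1}] + gb * pi_{t-1} + kappa * x_t + error.
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data: inflation pi and output gap x from an assumed VAR(1)
T = 400
A = np.array([[0.6, 0.2], [0.0, 0.8]])
Z = np.zeros((T, 2))
for t in range(1, T):
    Z[t] = A @ Z[t - 1] + rng.normal(0, [0.3, 0.5])
pi, x = Z[:, 0], Z[:, 1]

# Step 1: estimate the VAR and form E_t[pi_{t+1}] = (A_hat Z_t)[0]
A_hat = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)[0].T
E_pi_next = (Z @ A_hat.T)[:, 0]

# Step 2: least-squares fit of the hybrid Phillips curve coefficients
R = np.column_stack([E_pi_next[1:-1], pi[:-2], x[1:-1]])
coef, *_ = np.linalg.lstsq(R, pi[1:-1], rcond=None)
print("gamma_f, gamma_b, kappa:", coef.round(3))
```
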
10

Lee, Donggyu. Quantitative Easing and Inequality. Federal Reserve Bank of New York, 2024. http://dx.doi.org/10.59576/sr.1108.

Full text
Abstract:
This paper studies how quantitative easing (QE) affects household welfare across the wealth distribution. I build a Heterogeneous Agent New Keynesian (HANK) model with household portfolio choice, wage and price rigidities, endogenous unemployment, frictional financial intermediation, an effective lower bound (ELB) on the policy rate, forward guidance, and QE. To quantify the contribution of the various channels through which monetary policy affects inequality, I estimate the model using Bayesian methods, explicitly taking into account the occasionally binding ELB constraint and the QE operations undertaken by the Federal Reserve during the 2009-15 period. I find that the QE program unambiguously benefited all households by stimulating economic activity. However, it had nonlinear distributional effects. On the one hand, it widened the income and consumption gap between the top 10 percent and the rest of the wealth distribution by boosting profits and equity prices. On the other hand, QE shrank inequality within the lower 90 percent of the wealth distribution, primarily by lowering unemployment. On net, it reduced overall wealth and income inequality, as measured by the Gini index. Surprisingly, QE has weaker distributional consequences compared with conventional monetary policy. Lastly, forward guidance and an extended period of zero policy rates amplified both the aggregate and the distributional effects of QE.
APA, Harvard, Vancouver, ISO, and other styles