Academic literature on the topic 'Bayesian approach. eng'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Bayesian approach. eng.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Bayesian approach. eng"

1

Adiputra, Dimas, Mohd Azizi Abdul Rahman, Irfan Bahiuddin, Ubaidillah, Fitrian Imaduddin, and Nurhazimah Nazmi. "Sensor Number Optimization Using Neural Network for Ankle Foot Orthosis Equipped with Magnetorheological Brake." Open Engineering 11, no. 1 (November 19, 2020): 91–101. http://dx.doi.org/10.1515/eng-2021-0010.

Abstract:
A passive controlled ankle foot orthosis (PICAFO) uses a passive actuator, such as a magnetorheological (MR) brake, to control ankle stiffness. The PICAFO used two kinds of sensor input, an electromyography (EMG) signal and the ankle position (two inputs), to determine the amount of stiffness (one output) to be generated by the MR brake. Because the overall weight and design of an orthotic device must be optimized, the number of sensors on the PICAFO needed to be reduced. To do that, a machine learning approach was implemented to simplify the previous stiffness function. In this paper, a Non-linear Autoregressive Exogenous (NARX) neural network was used to generate the simplified function. A total of 2060 data points were used to build the network: 1309 training data, 281 validation data, 281 testing data 1, and 189 testing data 2. Three training algorithms were compared: Levenberg-Marquardt, Bayesian Regularization, and Scaled Conjugate Gradient. The results show that the function can be simplified to one input (ankle position) and one output (stiffness). The best result was obtained by the NARX neural network with 15 hidden layers, trained with Bayesian Regularization and a delay of 2. In this case, the testing data show an R-value of 0.992 and an MSE of 19.16.
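As a concrete illustration of the simplified one-input, one-output function described above, here is a minimal sketch (not the authors' code): it builds a delay-2 lagged design matrix from a synthetic ankle-position signal and fits scikit-learn's BayesianRidge as a stand-in for Bayesian-regularized NARX training. The signal, the split size, and the model choice are all assumptions for illustration.

```python
# Hedged sketch: BayesianRidge on delay-2 lagged features, standing in for a
# NARX network trained with Bayesian Regularization; data are synthetic.
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
angle = np.cumsum(rng.normal(size=2060)) * 0.01    # synthetic ankle position
stiffness = 5.0 * np.sin(angle) + rng.normal(scale=0.5, size=angle.size)

delay = 2
# Lagged design matrix: [x(t), x(t-1), x(t-2)] -> y(t)
X = np.column_stack([angle[delay - k:angle.size - k] for k in range(delay + 1)])
y = stiffness[delay:]

n_train = 1309                                     # mirrors the split above
model = BayesianRidge().fit(X[:n_train], y[:n_train])
pred = model.predict(X[n_train:])
print("test MSE:", mean_squared_error(y[n_train:], pred))
print("test R  :", np.corrcoef(y[n_train:], pred)[0, 1])
```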
2

Obesnyuk, V. F. "Group health risk parameters in a heterogeneous cohort. Indirect assessment as per events taken in dynamics." Health Risk Analysis, no. 2 (June 2021): 17–32. http://dx.doi.org/10.21668/health.risk/2021.2.02.eng.

Abstract:
The present work describes a procedure for assessing intensive and cumulative parameters of specific risk when observing cohorts under combined exposure to several external or internal factors. The research goal was to show how the well-known heuristic-descriptive parameters accepted in the epidemiology of remote consequences can be used to analyze the dynamics of countable events in a cohort on reasonably strict statistical-probabilistic grounds, based on a Bayesian treatment of the conditional probabilities that such countable events occur. The work does not introduce any new or previously unknown epidemiologic concepts or parameters; despite that, it is not a simple literature review. It is the suggested procedure itself that is comparatively new, as it combines techniques used to process conventional epidemiologic information with a correct metrological approach based on process description. The basic result is showing the reader that all the basic descriptive epidemiologic parameters within a cohort-description framework turn out to be quantitatively interlinked when they are considered as conditional group processes. This allows simultaneous, mutually consistent assessment of annual risk parameters, the Kaplan-Meier (Fleming-Harrington) and Nelson-Aalen cumulative parameters, and other conditional risk parameters or their analogues. It is shown that when a basic descriptive characteristic of the cumulative parameters is chosen as a measure of measurable long-term external exposure, it is natural to apply the concept of a dose of this risk factor, which is surrogate in its essence. The operability of the procedure was confirmed with an example. The suggested procedure was shown to differ from its prototype, which previously yielded only substantially biased estimates, up to ~100%, even under normal operating conditions. Application requires creating specific but readily available PC software.
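The Kaplan-Meier and Nelson-Aalen cumulative parameters named in this abstract have compact textbook definitions; the sketch below computes both from those definitions on made-up right-censored follow-up data.

```python
# Hedged sketch: Kaplan-Meier survival and Nelson-Aalen cumulative hazard
# on invented follow-up times; 1 = event observed, 0 = censored.
import numpy as np

time = np.array([2, 3, 3, 5, 5, 7, 8, 8, 9, 12], dtype=float)
event = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 1])

km, na = 1.0, 0.0
for t in np.unique(time[event == 1]):        # distinct event times, ascending
    d = np.sum((time == t) & (event == 1))   # events at time t
    n = np.sum(time >= t)                    # subjects still at risk at t
    km *= 1.0 - d / n                        # Kaplan-Meier product term
    na += d / n                              # Nelson-Aalen sum term
    print(f"t={t:4.0f}  at-risk={n:2d}  events={d}  KM S(t)={km:.3f}  NA H(t)={na:.3f}")
```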
3

de Bragança Pereira, Carlos Alberto, and Julio Michael Stern. "Model selection: Full Bayesian approach." Environmetrics 12, no. 6 (September 2001): 559–68. http://dx.doi.org/10.1002/env.482.

4

Bartolucci, Alfred A., Charles R. Katholi, and Robert Birch. "Interim analysis of failure time data — A Bayesian approach." Environmetrics 3, no. 4 (1992): 465–77. http://dx.doi.org/10.1002/env.3170030407.

5

Bauwens, Luc, Denzil G. Fiebig, and Mark F. J. Steel. "Estimating End-Use Demand: A Bayesian Approach." Journal of Business & Economic Statistics 12, no. 2 (April 1994): 221. http://dx.doi.org/10.2307/1391485.

6

Bauwens, Luc, Denzil G. Fiebig, and Mark F. J. Steel. "Estimating End-use Demand: A Bayesian Approach." Journal of Business & Economic Statistics 12, no. 2 (April 1994): 221–31. http://dx.doi.org/10.1080/07350015.1994.10510009.

7

Neeley, E. S., W. F. Christensen, and S. R. Sain. "A Bayesian spatial factor analysis approach for combining climate model ensembles." Environmetrics 25, no. 7 (June 27, 2014): 483–97. http://dx.doi.org/10.1002/env.2277.

8

Sansó, Bruno, and Lelys Guenni. "A Bayesian approach to compare observed rainfall data to deterministic simulations." Environmetrics 15, no. 6 (August 19, 2004): 597–612. http://dx.doi.org/10.1002/env.660.

9

Oleson, Jacob J., Diane Hope, Corinna Gries, and Jason Kaye. "Estimating soil properties in heterogeneous land-use patches: a Bayesian approach." Environmetrics 17, no. 5 (2006): 517–25. http://dx.doi.org/10.1002/env.789.

10

Oikonomou, Vangelis P., and Ioannis Kompatsiaris. "A Novel Bayesian Approach for EEG Source Localization." Computational Intelligence and Neuroscience 2020 (October 30, 2020): 1–12. http://dx.doi.org/10.1155/2020/8837954.

Abstract:
We propose a new method for EEG source localization. An efficient solution to this problem requires choosing an appropriate regularization term in order to constrain the original problem. In our work, we adopt the Bayesian framework to place constraints; hence, the regularization term is closely connected to the prior distribution. More specifically, we propose a new sparse prior for the localization of EEG sources. The proposed prior distribution has sparse properties favoring focal EEG sources. In order to obtain an efficient algorithm, we use the variational Bayesian (VB) framework, which provides us with a tractable iterative algorithm of closed-form equations. Additionally, we provide extensions of our method for cases where we observe group structures and spatially extended EEG sources. We have performed experiments using synthetic EEG data and real EEG data from three publicly available datasets. The real EEG data were produced by the presentation of auditory and visual stimuli. We compare the proposed method with well-known approaches to EEG source localization, and the results show that our method presents state-of-the-art performance, especially in cases where few activated brain regions are expected. The proposed method can effectively detect EEG sources in various circumstances. Overall, the proposed sparse prior for EEG source localization results in more accurate localization of EEG sources than state-of-the-art approaches.

Dissertations / Theses on the topic "Bayesian approach. eng"

1

Gonçalves, Tarcísio de Moraes. "Genes de efeito principal e locos de características quantitativas (QTL) em suínos." Botucatu, [s.n.], 2003. http://hdl.handle.net/11449/104151.

Abstract:
Advisor: Henrique Nunes Gonçalves
A Bayesian marker-free segregation analysis was applied to search for evidence of segregating major genes affecting two carcass traits, intramuscular fat in % (IMF) and backfat thickness in mm (BF), and one growth trait, liveweight gain from approximately 25 to 90 kg liveweight, in g/day (LG). For this study, 1257 animals from an experimental cross between Meishan pigs (males) and Dutch Large White and Landrace lines (females) were used. In animal breeding, Finite Polygenic Models (FPM) may be an alternative to the Infinitesimal Polygenic Model (IPM) for the genetic evaluation of multiple-generation pedigreed populations for multiple quantitative traits. FPM, IPM, and FPM combined with IPM were empirically tested for the estimation of variance components and of the number of genes in the FPM. Estimation of marginal posterior means of variance components and parameters was performed with Markov Chain Monte Carlo techniques, using the Gibbs sampler and the reversible-jump sampler (Metropolis-Hastings). The results showed evidence for four major genes (MG): two for IMF and two for BF. For BF, the MG explained almost all of the genetic variance, while for IMF the MG reduced the polygenic variance significantly. LG was not found to be likely influenced by an MG. The polygenic heritability estimates for IMF, BF and LG were 0.37, 0.24 and 0.37, respectively. The Bayesian methodology was satisfactorily implemented in the software package FlexQTL™. Further molecular genetic research based on the same experimental data, aiming to map single genes affecting mainly IMF and BF, has a high probability of success.
Doctorate
2

Onal, Murat. "Evaluation Of Spatial And Spatio-temporal Regularization Approaches In Inverse Problem Of Electrocardiography." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12610045/index.pdf.

Abstract:
Conventional electrocardiography (ECG) is an essential tool for investigating cardiac disorders such as arrhythmias or myocardial infarction. It consists of interpretation of potentials recorded at the body surface that occur due to the electrical activity of the heart. However, electrical signals originating at the heart suffer from attenuation and smoothing within the thorax, so the ECG signal measured on the body surface lacks some important details. The goal of the forward and inverse ECG problems is to recover these lost details by estimating the heart's electrical activity non-invasively from body surface potential measurements. In the forward problem, one calculates the body surface potential distribution (i.e. torso potentials) using an appropriate source model for the equivalent cardiac sources. In the inverse problem of ECG, one estimates cardiac electrical activity based on measured torso potentials and a geometric model of the torso. Due to the attenuation and spatial smoothing that occur within the thorax, the inverse ECG problem is ill-posed and the forward model matrix is badly conditioned. Thus, small disturbances in the measurements lead to amplified errors in inverse solutions. It is difficult to solve this problem for effective cardiac imaging due to the ill-posed nature and high dimensionality of the problem. Tikhonov regularization, Truncated Singular Value Decomposition (TSVD) and Bayesian MAP estimation are some of the methods proposed in the literature to cope with the ill-posedness of the problem. The most common approach in these methods is to ignore temporal relations of epicardial potentials and to solve the inverse problem at every time instant independently (the column-sequential approach). This is the fastest and easiest approach; however, it does not include temporal correlations. The goal of this thesis is to include temporal as well as spatial constraints in solving the inverse ECG problem. For this purpose, two methods are used. In the first method, we solve the augmented problem directly. Alternatively, we solve the problem with the column-sequential approach after applying temporal whitening. The performance of each method is evaluated.
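As a rough illustration of two of the regularizers named above, the sketch below applies zeroth-order Tikhonov regularization and truncated SVD to a generic badly conditioned linear problem y = A x + noise; the forward matrix, noise level, and regularization settings are invented stand-ins for the torso model.

```python
# Hedged sketch: Tikhonov and TSVD solutions of an ill-posed linear inverse
# problem, standing in for the inverse ECG problem described above.
import numpy as np

rng = np.random.default_rng(1)
A = np.vander(np.linspace(0, 1, 40), 20, increasing=True)  # badly conditioned forward matrix
x_true = np.sin(np.linspace(0, 3, 20))
y = A @ x_true + rng.normal(scale=1e-3, size=40)

lam = 1e-4                                                  # Tikhonov parameter (assumed)
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ y)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 8                                                       # truncation level (assumed)
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

for name, x_hat in [("Tikhonov", x_tik), ("TSVD", x_tsvd)]:
    print(name, "relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```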
3

Moutoussis, Michael. "Defensive avoidance in paranoid delusions : experimental and computational approaches." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/defensive-avoidance-in-paranoid-delusions-experimental-and-computational-approaches(e36dbfcf-9341-43a0-be41-087f9b22d994).html.

Abstract:
This abstract summarises the thesis entitled Defensive Avoidance in Paranoid Delusions: Experimental and Computational Approaches, submitted by Michael Moutoussis to The University of Manchester for the degree of Doctor of Philosophy (PhD) in the Faculty of Medical and Human Sciences, in 2011. The possible aetiological role of defensive avoidance in paranoia was investigated in this work. First, the psychological significance of the Conditioned Avoidance Response (CAR) was reappraised. The CAR activates normal threat-processing mechanisms that may be pathologically over-activated in the anticipation of threats in paranoia. This may apply both to external threats and to threats to self-esteem. A temporal-difference computational model of the CAR suggested that a dopamine-independent process may signal that a particular state has led to a worse-than-expected outcome. On the contrary, learning about actions is likely to involve dopamine in signalling both worse-than-expected and better-than-expected outcomes. The psychological mode of action of dopamine-blocking drugs may involve dampening (1) the vigour of the avoidance response and (2) the prediction-error signals that drive action learning. Excessive anticipation of negative events might lead to inappropriately perceived high costs of delaying decisions. Efforts to avoid such costs might explain the jumping-to-conclusions (JTC) bias found in paranoid patients. Two decision-theoretic models were used to analyse data from the 'beads-in-a-jar' task. One model employed an ideal-observer Bayesian approach; a control model made decisions by weighing evidence against a fixed threshold of certainty. We found no support for our 'high cost' hypothesis. According to both models, the JTC bias was better explained by higher levels of 'cognitive noise' (relative to motivation) in paranoid patients. This 'noise' appears to limit the ability of paranoid patients to be influenced by cognitively distant possibilities. It was further hypothesised that excessive avoidance of negative aspects of the self may fuel paranoia. This was investigated empirically. Important self-attributes were elicited in paranoid patients and controls. Conscious and non-conscious avoidance were assessed while negative thoughts about the self were presented. Both 'deserved' and 'undeserved' persecutory beliefs were associated with high avoidance/control strategies in general, but not with increased avoidance of negative thoughts about the self. On the basis of the present studies, the former is therefore considerably more likely than the latter to play an aetiological role in paranoia. This work has introduced novel computational methods, especially useful in the study of 'hidden' psychological variables. It supported and deepened some key hypotheses about paranoia and provided consistent evidence against other important aetiological hypotheses. These contributions have substantial implications for research and for some aspects of clinical practice.
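The two 'beads-in-a-jar' observers compared in the thesis can be sketched compactly. Assuming the common 85/15 jar composition (a convention of the task, not necessarily the thesis's exact settings), the snippet updates an ideal Bayesian observer's posterior bead by bead and flags where a fixed-threshold observer would declare.

```python
# Hedged sketch of the beads task: ideal Bayesian observer vs a fixed
# certainty threshold; proportions, draws, and threshold are assumptions.
import numpy as np

p = 0.85                       # majority-colour proportion in each jar (assumed)
beads = [1, 1, 0, 1, 1]        # 1 = red bead drawn, 0 = blue (invented sequence)
log_odds = 0.0                 # log P(red jar)/P(blue jar), flat prior
threshold = 0.90               # fixed-threshold observer's certainty level

for i, b in enumerate(beads, 1):
    # Each red bead adds log(p/(1-p)) of evidence for the red-majority jar.
    log_odds += np.log(p / (1 - p)) if b else np.log((1 - p) / p)
    posterior = 1.0 / (1.0 + np.exp(-log_odds))
    decided = posterior > threshold or posterior < 1 - threshold
    print(f"draw {i}: P(red jar) = {posterior:.3f}" + ("  -> decide" if decided else ""))
```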
4

Wendling, Thierry. "Hierarchical mechanistic modelling of clinical pharmacokinetic data." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/hierarchical-mechanistic-modelling-of-clinical-pharmacokinetic-data(573652c9-d3fb-4233-bea7-7abd7ef48d4b).html.

Abstract:
Pharmacokinetic and pharmacodynamic models can be applied to clinical study data using various modelling approaches, depending on the aim of the analysis. In population pharmacokinetics, for instance, simple compartmental models can be employed to describe concentration-time data, identify prognostic factors, and interpolate within well-defined experimental conditions. The first objective of this thesis was to illustrate such a 'semi-mechanistic' pharmacokinetic modelling approach using mavoglurant as an example of a compound under clinical development. In particular, methods to accurately characterise complex oral pharmacokinetic profiles and evaluate the impact of absorption factors were investigated. When the purpose of the model-based analysis is to extrapolate beyond the experimental conditions in order to guide the design of subsequent clinical trials, physiologically based pharmacokinetic (PBPK) models are more valuable, as they incorporate information not only on the drug but also on the system, i.e. on mammalian anatomy and physiology. The combination of such mechanistic models with statistical modelling techniques to analyse clinical data has been widely applied in toxicokinetics but has only recently received increasing interest in pharmacokinetics. This is probably because, owing to the higher complexity of PBPK models compared with conventional pharmacokinetic models, additional efforts are required for adequate population data analysis. Hence, the second objective of this thesis was to explore methods that allow the application of PBPK models to clinical study data, such as the Bayesian approach or model order reduction techniques, and to propose a general mechanistic modelling workflow for population data analysis. In pharmacodynamics, mechanistic modelling of clinical data is even less common than in pharmacokinetics. This is probably because our understanding of the interaction between therapeutic drugs and biological processes is limited, and also because the types of data to be analysed are often more complex than pharmacokinetic data. In oncology, for instance, the most widely used clinical endpoint to evaluate the benefit of an experimental treatment is patient survival. Survival data are typically censored due to logistic constraints associated with patient follow-up; hence, the analysis of survival data requires specific statistical techniques. Longitudinal tumour size data have been increasingly used to assess treatment response for solid tumours. In particular, the survival prognostic value of measures derived from such data has recently been evaluated for various types of cancer, although not for pancreatic cancer. The last objective of this thesis was therefore to investigate different modelling approaches to analyse survival data of pancreatic cancer patients treated with gemcitabine, and to compare tumour burden measures with other patient clinical characteristics and established risk factors in terms of predictive value for survival.

Books on the topic "Bayesian approach. eng"

1

Yu, Angela J. Bayesian Models of Attention. Edited by Anna C. (Kia) Nobre and Sabine Kastner. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199675111.013.025.

Abstract:
Traditionally, attentional selection has been thought of as arising naturally from resource limitations, with a focus on what might be the most apt metaphor, e.g. whether it is a ‘bottleneck’ or ‘spotlight’. However, these simple metaphors cannot account for the specificity, flexibility, and heterogeneity of the way attentional selection manifests itself in different behavioural contexts. A recent body of theoretical work has taken a different approach, focusing on the computational needs of selective processing, relative to environmental constraints and behavioural goals. They typically adopt a normative computational framework, incorporating Bayes-optimal algorithms for information processing and action selection. This chapter reviews some of this recent modelling work, specifically in the context of attention for learning, covert spatial attention, and overt spatial attention.
2

Brazier, John, Julie Ratcliffe, Joshua A. Salomon, and Aki Tsuchiya. Modelling health state valuation data. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780198725923.003.0005.

Abstract:
This chapter examines the technical issues in modelling health state valuation data. Most measures of health define too many states to value all of them directly (e.g. SF-6D defines 18,000 health states). The solution has been to value a subset of states and to use modelling to predict the values of all states. This chapter reviews two approaches to modelling: one uses multiattribute utility theory to determine health values given an assumed functional form; the other uses statistical modelling of SF-6D preference data that are skewed, bimodal, and clustered by respondent. This chapter examines the selection of health states for valuation, data preparation, model specification, and techniques for modelling the data, starting with ordinary least squares (OLS) and moving on to more complex techniques, including Bayesian non-parametric and semi-parametric approaches and a hybrid approach that combines cardinal preference data with the results of paired data from a discrete choice experiment.
3

Westheimer, Gerald. The Shifted-Chessboard Pattern as Paradigm of the Exegesis of Geometrical-Optical Illusions. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780199794607.003.0036.

Abstract:
The shifted chessboard or café wall illusion yields to analysis at the two poles of the practice of vision science: bottom-up, pursuing its course from the visual stimulus into the front end of the visual apparatus, and top-down, figuring how the rules governing perception might lead to it. Following the first approach, examination of the effects of light spread in the eye and of nonlinearity and center-surround antagonism in the retina has made some inroads and provided partial explanations; with respect to the second, principles of perspective and of continuity and smoothness of contours can be evoked, and arguments about perception as Bayesian inference can be joined. Insights from these two directions are helping neurophysiologists in their struggle to identify a neural substrate of the phenomenon Münsterberg described in 1897.
4

Tir, Jaroslav, and Johannes Karreth. The Logic of Institutional Influence: Conceptual and Methodological Implications. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190699512.003.0005.

Abstract:
This chapter further probes the finding that countries belonging to a larger number of highly structured intergovernmental organizations (IGOs) face a significantly lower risk that an emerging low-level armed conflict on their territories will escalate to full-scale civil war. Various empirical approaches show that the finding is robust. For example, we establish that the finding holds when we account for (a) the determinants of memberships in highly structured IGOs (i.e. endogeneity concerns); (b) mediations and interventions; (c) natural resources; (d) government-rebel relative power; and (e) spatial, temporal, and transnational trends. Further, (f) we isolate highly structured IGOs' use of costs and benefits as the key drivers of our finding, (g) establish that nonescalated conflicts end in settlements, as opposed to one side simply defeating the other militarily, and (h) use Bayesian model averaging (BMA) to demonstrate the added value of accounting for highly structured IGO memberships in analyses of conflict escalation patterns.

Book chapters on the topic "Bayesian approach. eng"

1

Golzan, S. Mojtaba, Farzaneh Hakimpour, and Alireza Toolou. "Fetal ECG Extraction Using Multi-Layer Perceptron Neural Networks with Bayesian Approach." In IFMBE Proceedings, 311–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-540-89208-3_74.

2

Golzan, S. Mojtaba, Farzaneh Hakimpour, Mohammad Mikaili, and Alireza Toolou. "Fetal ECG Extraction Using Multi-Layer Perceptron Neural Networks with Bayesian Approach." In IFMBE Proceedings, 1378–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-540-89208-3_327.

3

Daniel, Ben K., Juan-Diego Zapata-Rivera, and Gordon I. McCalla. "A Bayesian Belief Network Approach for Modeling Complex Domains." In Bayesian Network Technologies, 13–41. IGI Global, 2007. http://dx.doi.org/10.4018/978-1-59904-141-4.ch002.

Abstract:
Bayesian belief networks (BBNs) are increasingly used for understanding and simulating computational models in many domains. Though BBN techniques are elegant ways of capturing uncertainties, the knowledge engineering effort required to create and initialize the network has prevented many researchers from using them. Even though the structure of the network and its conditional and initial probabilities could be learned from data, data are not always available or are too costly to obtain. In addition, current algorithms that can be used to learn relationships among variables and initial and conditional probabilities from data are often complex and cumbersome to employ. Qualitative approaches to the creation of graphical models can be used to create initial computational models that help researchers analyze complex problems and provide guidance and support for decision-making. Initial BBN models can be refined once appropriate data are obtained. This chapter extends the use of BBNs to help experts make sense of complex social systems (e.g., social capital in virtual learning communities) using a Bayesian model as an interactive simulation tool. Scenarios are used to find out whether the model is consistent with the experts' beliefs. A sensitivity analysis was conducted to help explain how the model reacted to different sets of evidence. Currently, we are in the process of refining the initial probability values presented in the model using empirical data and developing more authentic scenarios to further validate the model.
4

D'Agostino, Susan. "Update your understanding, with Bayesian statistics." In How to Free Your Inner Mathematician, 233–36. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198843597.003.0039.

Abstract:
“Update your understanding, with Bayesian statistics” provides an accessible introduction to Bayesian statistics, in which one begins with a belief, understanding, or prediction that is based on data and then, upon receiving new information, updates the prediction. Readers consider a Bayesian approach to a hypothetical woman's risk levels for breast cancer upon receiving notice of a positive mammogram. Mathematics students and enthusiasts are encouraged to consider a Bayesian approach whenever they are in a position to manage uncertainty in mathematics or life. At the chapter's end, readers may check their understanding by working on a problem. A solution is provided.
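The mammogram example is a one-line application of Bayes' rule; the sketch below uses illustrative numbers that are assumptions, not the book's figures.

```python
# Hedged sketch: posterior risk after a positive mammogram via Bayes' rule.
prior = 0.01          # assumed prevalence of breast cancer
sensitivity = 0.90    # assumed P(positive | cancer)
false_pos = 0.09      # assumed P(positive | no cancer)

p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive
print(f"P(cancer | positive mammogram) = {posterior:.3f}")   # about 0.092
```

Even with a fairly accurate test, the posterior stays below 10% because the prior is small, which is the chapter's central point about updating.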
5

Huang, Kaizhu, Zenglin Xu, Irwin King, Michael R. Lyu, and Zhangbing Zhou. "A Novel Discriminative Naive Bayesian Network for Classification." In Bayesian Network Technologies, 1–12. IGI Global, 2007. http://dx.doi.org/10.4018/978-1-59904-141-4.ch001.

Abstract:
The naive Bayesian network (NB) is a simple yet powerful Bayesian network. Even with a strong independence assumption among the features, it demonstrates competitive performance against other state-of-the-art classifiers, such as support vector machines (SVM). In this chapter, we propose a novel discriminative training approach, originating from SVM, for deriving the parameters of NB. This new model, called the discriminative naive Bayesian network (DNB), combines the merits of discriminative methods (e.g., SVM) and Bayesian networks. We provide theoretical justifications, outline the algorithm, and perform a series of experiments on benchmark real-world datasets to demonstrate our model's advantages. It outperforms NB in classification tasks and outperforms SVM in handling tasks with missing information.
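For reference, here is the generative naive Bayes baseline that the DNB is measured against; this is a minimal sketch with toy data, not the chapter's discriminative training procedure.

```python
# Hedged sketch: plain Gaussian naive Bayes (the NB baseline, not DNB).
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = GaussianNB().fit(X, y)           # features treated as independent per class
print("training accuracy:", clf.score(X, y))
print("P(class | x=[1, 1]):", clf.predict_proba([[1.0, 1.0]]))
```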
6

Donovan, Therese M., and Ruth M. Mickey. "The Shark Attack Problem Revisited: MCMC with the Metropolis Algorithm." In Bayesian Statistics for Beginners, 193–211. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198841296.003.0013.

Abstract:
In this chapter, the “Shark Attack Problem” (Chapter 11) is revisited. Markov Chain Monte Carlo (MCMC) is introduced as another way to determine a posterior distribution of λ, the mean number of shark attacks per year. The MCMC approach is so versatile that it can be used to solve almost any kind of parameter estimation problem. The chapter highlights the Metropolis algorithm in detail and illustrates its application, step by step, for the “Shark Attack Problem.” The posterior distribution generated in Chapter 11 using the gamma-Poisson conjugate is compared with the MCMC posterior distribution to show how successful the MCMC method can be. By the end of the chapter, the reader should also understand the following concepts: tuning parameter, MCMC inference, traceplot, and moment matching.
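The chapter's Metropolis walk-through fits in a short script. The sketch below uses invented yearly counts and an assumed Gamma(2, 1) prior, and checks the MCMC posterior mean against the exact gamma-Poisson conjugate answer, mirroring the comparison described above.

```python
# Hedged sketch: random-walk Metropolis for a Poisson rate with a Gamma prior.
import numpy as np
from scipy import stats

attacks = np.array([5, 3, 4, 6, 2])     # hypothetical yearly shark-attack counts
alpha, beta = 2.0, 1.0                  # assumed Gamma prior (shape, rate)

def log_post(lam):
    if lam <= 0:
        return -np.inf
    return (stats.gamma.logpdf(lam, alpha, scale=1 / beta)
            + stats.poisson.logpmf(attacks, lam).sum())

rng = np.random.default_rng(3)
lam, chain = 3.0, []
for _ in range(20000):
    prop = lam + rng.normal(scale=0.5)  # proposal s.d. is the tuning parameter
    if np.log(rng.uniform()) < log_post(prop) - log_post(lam):
        lam = prop                      # accept the proposed move
    chain.append(lam)

post = np.array(chain[2000:])           # drop burn-in
exact_mean = (alpha + attacks.sum()) / (beta + len(attacks))  # conjugate result
print("MCMC mean:", post.mean(), " exact mean:", exact_mean)
```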
7

Dhaka, Vinti, Chandra K. Jaggi, Sarla Pareek, and Piyush Kant Rai. "A Gentle Introduction to the Bayesian Paradigm for Some Inventory Models." In Advances in Logistics, Operations, and Management Science, 340–59. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9888-8.ch016.

Abstract:
The recent era has seen growing demand for inventory systems governed by random cause-effect phenomena, which strongly motivates the use of stochastic models in this area. Bayesian probability models serve this need in such inventory systems. The present study deals with the use of basic Bayesian theory in the development of some inventory models, e.g., the inventory model for deteriorating items and the design of the classical (s, Q) models. Here the motivation for using Bayes' theorem is to test the efficacy of the optimal design of the above models when demand is assumed to be random, following some basic probability distributions. In this regard, we discuss the inventory model for deteriorating items and the (s, Q) model and their mathematical solution under the Bayesian approach.
8

Sprenger, Jan, and Stephan Hartmann. "Learning Conditional Evidence." In Bayesian Philosophy of Science, 107–30. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780199672110.003.0004.

Abstract:
Learning indicative conditionals and learning relative frequencies have one thing in common: they are examples of conditional evidence, that is, evidence that includes a suppositional element. Standard Bayesian theory does not describe how such evidence affects rational degrees of belief, and natural solutions run into major problems. We propose that conditional evidence is best modeled by a combination of two strategies: First, by generalizing Bayesian Conditionalization to minimizing an appropriate divergence between prior and posterior probability distribution. Second, by representing the relevant causal relations and the implied conditional independence relations in a Bayesian network that constrains both prior and posterior. We show that this approach solves several well-known puzzles about learning conditional evidence (e.g., the notorious Judy Benjamin problem) and that learning an indicative conditional can often be described adequately by conditionalizing on the associated material conditional.
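The first strategy has a standard compact formulation, stated here as a sketch rather than the book's exact notation (P is the prior, Q the posterior, E an evidence event): minimizing Kullback-Leibler divergence under the constraint that the evidence is certain recovers ordinary Conditionalization, and conditional evidence simply swaps in a suppositional constraint.

```latex
% Conditionalization as divergence minimization (standard result; notation
% is assumed here, not taken from the book).
\[
  Q^{*} \;=\; \arg\min_{Q \,:\, Q(E)=1} D_{\mathrm{KL}}(Q \,\|\, P)
  \quad\Longrightarrow\quad
  Q^{*}(\cdot) \;=\; P(\cdot \mid E).
\]
% Conditional evidence replaces Q(E)=1 with a suppositional constraint such
% as Q(B \mid A) = q, and the same divergence is minimized subject to it.
```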
9

Sucar, Luis Enrique. "Introduction to Bayesian Networks and Influence Diagrams." In Decision Theory Models for Applications in Artificial Intelligence, 9–32. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-60960-165-2.ch002.

Abstract:
In this chapter we will cover the fundamentals of probabilistic graphical models, in particular Bayesian networks and influence diagrams, which are the basis for some of the techniques and applications that are described in the rest of the book. First we will give a general introduction to probabilistic graphical models, including the motivation for using these models, and a brief history and general description of the main types of models. We will also include a brief review of the basis of probability theory. The core of the chapter will be the next three sections devoted to: (i) Bayesian networks, (ii) Dynamic Bayesian networks and (iii) Influence diagrams. For each we will introduce the models, their properties and give some examples. We will briefly describe the main inference techniques for the three types of models. For Bayesian and dynamic Bayesian nets we will talk about learning, including structure and parameter learning, describing the main types of approaches. At the end of the section on influence diagrams we will briefly introduce sequential decision problems as a link to the chapter on MDPs and POMDPs. We conclude the chapter with a summary and pointers for further reading for each topic.
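A two-node network makes the inference task concrete; the probabilities below are invented for the example.

```python
# Hedged sketch: exact inference in a tiny Rain -> WetGrass Bayesian network
# by enumeration; all numbers are made up.
p_rain = 0.2
p_wet_given = {True: 0.9, False: 0.1}   # P(WetGrass = true | Rain)

joint_rain = p_rain * p_wet_given[True]            # P(Rain, wet)
joint_no_rain = (1 - p_rain) * p_wet_given[False]  # P(no Rain, wet)
posterior = joint_rain / (joint_rain + joint_no_rain)
print(f"P(Rain | wet grass) = {posterior:.3f}")    # 0.18 / 0.26 ≈ 0.692
```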
10

Mršić, Leo. "Widely Applicable Multi-Variate Decision Support Model for Market Trend Analysis and Prediction with Case Study in Retail." In Handbook of Research on Novel Soft Computing Intelligent Algorithms, 989–1018. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4450-2.ch032.

Abstract:
This chapter explains efficient ways of dealing with the business problem of analyzing market environments and market trends under complex circumstances using heterogeneous data sources. Under the assumption that the data used can be expressed as time series, a widely applicable multivariate model is explained, together with a case study in textile retail. The chapter includes an overview of the research conducted, with a brief explanation of the approaches and models available today. A widely applicable multivariate decision support model is presented with its advantages, limitations, and several variations for development. The explanation is based on a textile retail case study, with a wide range of possible applications for the model in perspective. Complex business environment issues are simulated, with an explanation of several important global trends in textile retail in past seasons. Non-traditional approaches are reviewed as tools for a better understanding of modern market trends, along with references to the relevant literature. The decision support model and its usage are presented through its build stages and simulated. The model concept is based on a specific time-series transformation method in combination with Bayesian logic and a Bayesian network as the final business-logic layer, with a front-end interface built with an open-source Bayesian network tool. The case study addresses one of the most challenging issues in textile retail: the seasonal/weather dependence of market trends. Separate outcomes for different scenario-analysis approaches are presented on real-life data from a textile retail chain located in Zagreb, Croatia. The chapter ends with a discussion of similar research and the wide applicability of the presented model, with references for future research.

Conference papers on the topic "Bayesian approach. eng"

1

Anugolu, Madhavi, Anish Sebastian, Parmod Kumar, Marco P. Schoen, Alex Urfer, and D. Subbaram Naidu. "Surface EMG Array Sensor Based Model Fusion Using Bayesian Approaches for Prosthetic Hands." In ASME 2009 Dynamic Systems and Control Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/dscc2009-2690.

Abstract:
Traditional electromyographic (EMG) measurements are based on single-sensor information. Due to the arrangement of skeletal muscle fibers for hand motions, cross-talk is an inherent problem when inferring motion/force potentials from EMG data. This paper studies means of using sensor arrays to infer better motion/force potentials for prosthetic hands. In particular, a surface electromyographic (sEMG) sensor array is used to investigate multiple model fusion techniques. This paper provides a comparison between three statistical model selection criteria. The sEMG signals are pre-processed using four filters: Butterworth and Chebyshev type-II, as well as Bayesian filters such as the Exponential and Half-Gaussian filters. Output Error (OE) models were extracted from sEMG data and hand force data and compared using a Bayesian-based fusion model. The effects of the four different filters were quantified based on the OE models' performance in matching the actual measured data. The comparison indicates a preference for the sensor fusion technique with EMG data pre-processed using the Half-Gaussian Bayesian filter and the Kullback Information Criterion (KIC).
2

Kido, Hiroyuki, and Keishi Okamoto. "A Bayesian Approach to Argument-Based Reasoning for Attack Estimation." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/36.

Abstract:
The web is a source of a large number of arguments and their acceptability statuses (e.g., votes for and against the arguments). However, the relations existing between these arguments are typically not available. This study investigates the use of acceptability semantics to statistically estimate an attack relation between arguments when the acceptability statuses of the arguments are provided. A Bayesian network model of argument-based reasoning is defined, in which Dung's theory of abstract argumentation gives the substance of Bayesian inference. The model's correctness is demonstrated by analysing properties of estimated attack relations and illustrating its applicability to online forums.
3

Boughariou, Jihene, Wassim Zouch, and Ahmed Ben Hamida. "A bayesian approach for EEG inverse problem: Spatio-temporal regularization." In 2014 World Symposium on Computer Applications & Research (WSCAR). IEEE, 2014. http://dx.doi.org/10.1109/wscar.2014.6916829.

4

Azarkhail, M., and M. Modarres. "A Novel Bayesian Framework for Uncertainty Management in Physics-Based Reliability Models." In ASME 2007 International Mechanical Engineering Congress and Exposition. ASMEDC, 2007. http://dx.doi.org/10.1115/imece2007-41333.

Abstract:
The physics-of-failure (POF) modeling approach is a proven and powerful method for predicting the reliability of mechanical components and systems. Most POF models were originally developed from empirical data across a wide range of applications (e.g., the fracture mechanics approach to fatigue life). Available curve-fitting methods, such as least squares, calculate the best estimate of parameters by minimizing a distance function. Such point-estimate approaches basically overlook the other possibilities for the parameters and fail to incorporate the real uncertainty of empirical data into the process. The other important issue with traditional methods arises when new data points become available. In such conditions, the best-estimate methods need to be recalculated using the new and old data sets all together, but the original data sets used to develop POF models may no longer be available to be combined with new data in a point-estimate framework. In this research, a powerful Bayesian framework is proposed for efficient uncertainty management in POF models. The Bayesian approach provides many practical features, such as fair coverage of uncertainty and the updating concept, which provides a powerful means of knowledge management: Bayesian models allow the available information to be stored as a probability density over the model parameters. These distributions may be considered priors to be updated in the light of new data when they become available. In the first part of this article, a brief review of the classical and probabilistic approaches to regression is presented. In this part, the accuracy of the traditional normal-distribution assumption for the error is examined and a new, flexible likelihood function is proposed. The Bayesian approach to regression and its bonds with the classical and probabilistic methods are explained next. In the Bayesian section we discuss how the likelihood functions introduced in the probabilistic approach can be combined with prior information using the conditional probability concept. To highlight its advantages, the Bayesian approach is further clarified with case studies in which the results are compared with those of other traditional methods such as least squares and maximum likelihood estimation (MLE). In this research, the mathematical complexity of the Bayesian inference equations was overcome using the Markov Chain Monte Carlo simulation technique.
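The updating idea stressed in this abstract, that a posterior can stand in for the old raw data when new points arrive, can be sketched with conjugate Bayesian linear regression. Everything below (data, prior, known noise variance) is an assumption for illustration, not the authors' POF framework.

```python
# Hedged sketch: sequential conjugate Bayesian linear regression; the
# posterior from each batch becomes the prior for the next.
import numpy as np

rng = np.random.default_rng(4)
w_true, sigma2 = np.array([1.0, 2.5]), 0.25      # true weights, known noise var

def update(mean, cov, X, y):
    # Gaussian prior N(mean, cov) -> Gaussian posterior (exact for linear models).
    prec = np.linalg.inv(cov) + X.T @ X / sigma2
    cov_new = np.linalg.inv(prec)
    mean_new = cov_new @ (np.linalg.inv(cov) @ mean + X.T @ y / sigma2)
    return mean_new, cov_new

mean, cov = np.zeros(2), np.eye(2) * 10.0        # vague initial prior
for batch in range(3):                           # data arrive in batches
    x = rng.uniform(-1, 1, 20)
    X = np.column_stack([np.ones(20), x])
    y = X @ w_true + rng.normal(scale=np.sqrt(sigma2), size=20)
    mean, cov = update(mean, cov, X, y)          # old raw data never needed again
    print(f"after batch {batch + 1}: w estimate = {mean.round(3)}")
```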
5

Armstrong, Derek E. "Bayesian Approach to Estimating Fireball Parameters From Remote Sensing Data." In ASME 2019 Verification and Validation Symposium. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/vvs2019-5112.

Abstract:
Remote sensors in the infrared region can be used to study the progression of fireballs generated from experiments involving high explosives (HE). Developing an improved understanding of HE fireballs can be used to validate and improve computational physics codes that simulate such events. In this paper, Bayesian approaches are studied to estimate time-dependent optimal fireball parameters and their uncertainties using Fourier transform infrared (FTIR) spectroscopy. The optical signal measured by an FTIR sensor provides information on the fireball due to thermal emission, particulate emission/absorption, and HE gas product emission/absorption from the fireball. FTIR sensors have the advantage of being able to capture and measure the radiance in a large part of the infrared spectrum. The parameters to be estimated for the fireball include temperature and size, soot quantity, gas species concentrations (e.g., H2O, CO2, CO), and information on the presence of metals. In general, this inverse optimization problem is difficult because the estimated quantities are correlated, the spectral resolution of the FTIR sensor is low, and the intervening atmosphere absorbs the radiation emitted from the fireball. Bayesian calibration and Bayesian model averaging are applied to address these difficulties and to quantify the uncertainty in the estimated optimal parameter values. The fireball parameter settings are evaluated by the fit of a simplified spectral model to FTIR data. The overall problem is presented together with a description of the Bayesian approaches. In this paper, the Bayesian approaches are applied to artificially generated FTIR data to illustrate the approach.
6

Rakshit, Arnab, Anwesha Khasnobish, and D. N. Tibarewala. "A Naïve Bayesian approach to lower limb classification from EEG signals." In 2016 2nd International Conference on Control, Instrumentation, Energy & Communication (CIEC). IEEE, 2016. http://dx.doi.org/10.1109/ciec.2016.7513812.

7

Hoppe, Fred M., and Lin Fang. "Bayesian Prediction for the Gumbel Distribution Applied to Feeder Pipe Thicknesses." In 16th International Conference on Nuclear Engineering. ASMEDC, 2008. http://dx.doi.org/10.1115/icone16-48871.

Abstract:
This paper develops Bayesian prediction intervals for the minimum of any specified number of future measurements from a Gumbel distribution, based on previous observations. The need for such intervals arises in the analysis of data from outlet-side feeder pipes at Ontario nuclear power plants. The issue is how best to use these measurements to arrive at a statistically sound conclusion concerning the minimum thickness of all remaining uninspected pipes; in particular, with what confidence can it be asserted that the remaining wall thicknesses are above an acceptable minimum, ensuring a sufficiently high thickness up to the end of the next operating interval? The result gives a probabilistic measure of the potential benefit of performing additional inspections when weighed against the additional radiation exposure and the cost of performing those inspections. Previously, this problem was approached by adapting a classical prediction interval that was originally derived for normal data. Here we examine both a hybrid Bayesian method that combines Bayesian ideas with maximum likelihood and a full Bayesian approach using Markov Chain Monte Carlo. We show that the latter gives larger lower prediction limits and therefore more margin to fitness for service.
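A full-Bayesian lower prediction limit of the kind compared in this paper can be sketched as follows; the synthetic thickness data, flat priors, and Metropolis tuning are all assumptions, not the paper's data or method details.

```python
# Hedged sketch: Bayesian lower prediction limit for the minimum of m future
# measurements under a Gumbel-for-minima model, via random-walk Metropolis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = stats.gumbel_l.rvs(loc=6.0, scale=0.4, size=30, random_state=rng)  # mm, synthetic

def log_post(mu, beta):
    if beta <= 0:
        return -np.inf
    return stats.gumbel_l.logpdf(data, loc=mu, scale=beta).sum()  # flat priors

theta, draws = np.array([6.0, 0.4]), []
for _ in range(20000):
    prop = theta + rng.normal(scale=[0.05, 0.03])
    if np.log(rng.uniform()) < log_post(*prop) - log_post(*theta):
        theta = prop
    draws.append(theta)
draws = np.array(draws[4000:])                   # drop burn-in

m = 50                                           # future measurements (assumed)
mins = np.array([stats.gumbel_l.rvs(loc=mu, scale=b, size=m, random_state=rng).min()
                 for mu, b in draws[::20]])      # posterior predictive minima
print("95% lower prediction limit for the minimum:", np.quantile(mins, 0.05))
```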
8

Aughenbaugh, Jason Matthew, and Jeffrey W. Herrmann. "Updating Uncertainty Assessments: A Comparison of Statistical Approaches." In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/detc2007-35158.

Abstract:
The performance of a product that is being designed is affected by variations in material, manufacturing process, use, and environmental variables. As a consequence of uncertainties in these factors, some items may fail. Failure is taken very generally, but we assume that it is a random event that occurs at most once in the lifetime of an item. The designer wants the probability of failure to be less than a given threshold. In this paper, we consider three approaches for modeling the uncertainty in whether or not the failure probability meets this threshold: a classical approach, a precise Bayesian approach, and a robust Bayesian (or imprecise probability) approach. In some scenarios, the designer may have some initial beliefs about the failure probability. The designer also has the opportunity to obtain more information about product performance (e.g. from either experiments with actual items or runs of a simulation program that provides an acceptable surrogate for actual performance). The different approaches for forming and updating the designer’s beliefs about the failure probability are illustrated and compared under different assumptions of available information. The goal is to gain insight into the relative strengths and weaknesses of the approaches. Examples are presented for illustrating the conclusions.
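The precise Bayesian approach in particular admits a very small sketch: a Beta prior on the failure probability is updated with (invented) test outcomes, and the posterior directly gives the probability that the threshold requirement is met.

```python
# Hedged sketch: Beta-Binomial updating of a failure probability; the prior,
# test counts, and threshold are illustrative assumptions.
from scipy import stats

a, b = 1.0, 19.0            # assumed Beta prior (prior mean 0.05)
failures, trials = 2, 100   # hypothetical test campaign
threshold = 0.05            # design requirement on failure probability

posterior = stats.beta(a + failures, b + trials - failures)
print("posterior mean failure probability:", posterior.mean())
print("P(p < threshold):", posterior.cdf(threshold))
```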
9

Kaya, Mine, and Shima Hajimirza. "Using Bayesian Optimization With Knowledge Transfer for High Computational Cost Design: A Case Study in Photovoltaics." In ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/detc2019-98111.

Abstract:
Engineering design is usually an iterative procedure in which many different configurations are tested to yield a desirable end performance. When the design objective can only be measured by costly operations such as experiments or cumbersome computer simulations, a thorough design procedure can be limited. The design problem in these cases is a high-cost optimization problem. Meta-model-based approaches (e.g. Bayesian optimization) and transfer optimization are methods that can be used to facilitate more efficient designs. Transfer optimization is a technique that enables using previous design knowledge instead of starting from scratch in a new task. In this work, we study a transfer optimization framework based on Bayesian optimization using Gaussian processes. The similarity among tasks is determined via a similarity metric. The framework is applied to a particular design problem for thin-film solar cells. Planar multilayer solar cells with different sets of materials are optimized to obtain the best opto-electrical efficiency. Solar cells with amorphous silicon and organic absorber layers are studied and the results are presented.
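One plain Bayesian-optimization step (without the transfer component) can be sketched with a Gaussian-process surrogate and expected improvement; the 1-D toy objective below is an assumed stand-in for the costly solar-cell simulation.

```python
# Hedged sketch: GP surrogate + expected improvement on a toy 1-D objective.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):                       # placeholder for the expensive simulation
    return -np.sin(3 * x) - x**2 + 0.7 * x

rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, (4, 1))          # a few initial costly evaluations
y = objective(X).ravel()

for step in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6).fit(X, y)
    grid = np.linspace(-2, 2, 400).reshape(-1, 1)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, [x_next]])        # evaluate the most promising design next
    y = np.append(y, objective(x_next))

print("best design found:", X[np.argmax(y)].item(), "value:", y.max())
```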
10

Wang, Pingfeng, Byeng D. Youn, and Lee J. Wells. "Bayesian Reliability Based Design Optimization Using Eigenvector Dimension Reduction (EDR) Method." In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/detc2007-35524.

Abstract:
In the last decade, considerable advances have been made in Reliability-Based Design Optimization (RBDO). RBDO assumes that the statistical information of the input uncertainties is completely known (aleatory uncertainty), such as the distribution type and its parameters (e.g., mean, deviation). However, this assumption is not valid in practical engineering applications, since the amount of uncertainty data is restricted, mainly due to limited resources (e.g., manpower, expense, time). In practical engineering design, most data sets for system uncertainties are insufficiently sampled from unknown statistical distributions; this is known as epistemic uncertainty. Existing methods in uncertainty-based design optimization have difficulty handling both aleatory and epistemic uncertainties. To tackle design problems involving both epistemic and aleatory uncertainties, this paper proposes an integration of RBDO with Bayes' theorem, referred to as Bayesian Reliability-Based Design Optimization (Bayesian RBDO). However, when a design problem involves a large number of epistemic variables, Bayesian RBDO becomes extremely expensive. Thus, this paper presents a more efficient and accurate numerical method for the reliability analysis demanded in the process of Bayesian RBDO. The Eigenvector Dimension Reduction (EDR) method is found to be very efficient and accurate for reliability analysis, since it takes a sensitivity-free approach with only 2n+1 analyses, where n is the number of aleatory random parameters. One mathematical example and an engineering design example (a vehicle suspension system) are used to demonstrate the feasibility of Bayesian RBDO. In Bayesian RBDO using the EDR method, random parameters associated with manufacturing variability are considered the aleatory random parameters, whereas random parameters associated with load variability are regarded as the epistemic random parameters. Moreover, a distributed computing system is used for this study.