Journal articles on the topic 'Phylogeny Bayesian statistical decision theory'

Consult the top 49 journal articles for your research on the topic 'Phylogeny Bayesian statistical decision theory.'


1

de la Horra, Julián. "Bayesian robustness of the quantile loss in statistical decision theory." Revista de la Real Academia de Ciencias Exactas, Fisicas y Naturales. Serie A. Matematicas 107, no. 2 (May 16, 2012): 451–58. http://dx.doi.org/10.1007/s13398-012-0070-x.

2

Procaccia, H., R. Cordier, and S. Muller. "Application of Bayesian statistical decision theory for a maintenance optimization problem." Reliability Engineering & System Safety 55, no. 2 (February 1997): 143–49. http://dx.doi.org/10.1016/s0951-8320(96)00006-3.

3

Reinhardt, Howard E. "Statistical Decision Theory and Bayesian Analysis. Second Edition (James O. Berger)." SIAM Review 29, no. 3 (September 1987): 487–89. http://dx.doi.org/10.1137/1029095.

4

Laedermann, Jean-Pascal, Jean-François Valley, and François O. Bochud. "Measurement of radioactive samples: application of the Bayesian statistical decision theory." Metrologia 42, no. 5 (September 13, 2005): 442–48. http://dx.doi.org/10.1088/0026-1394/42/5/015.

5

Luce, Bryan R., Ya-Chen Tina Shih, and Karl Claxton. "INTRODUCTION." International Journal of Technology Assessment in Health Care 17, no. 1 (January 2001): 1–5. http://dx.doi.org/10.1017/s0266462301104010.

Abstract:
Until the mid-1980s, most economic analyses of healthcare technologies were based on decision theory and used decision-analytic models. The goal was to synthesize all relevant clinical and economic evidence for the purpose of assisting decision makers to efficiently allocate society's scarce resources. This was true of virtually all the early cost-effectiveness evaluations sponsored and/or published by the U.S. Congressional Office of Technology Assessment (OTA) (15), Centers for Disease Control and Prevention (CDC), the National Cancer Institute, other elements of the U.S. Public Health Service, and of healthcare technology assessors in Europe and elsewhere around the world. Methodologists routinely espoused, or at minimum assumed, that these economic analyses were based on decision theory (8;24;25). Since decision theory is rooted in—in fact, an informal application of—Bayesian statistical theory, these analysts were conducting studies to assist healthcare decision making by appealing to a Bayesian rather than a classical, or frequentist, inference approach. But their efforts were not so labeled. Oddly, the statistical training of these decision analysts was invariably classical, not Bayesian. Many were not—and still are not—conversant with Bayesian statistical approaches.
6

Geisler, Wilson S., and Randy L. Diehl. "Bayesian natural selection and the evolution of perceptual systems." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 357, no. 1420 (April 29, 2002): 419–48. http://dx.doi.org/10.1098/rstb.2001.1055.

Abstract:
In recent years, there has been much interest in characterizing statistical properties of natural stimuli in order to better understand the design of perceptual systems. A fruitful approach has been to compare the processing of natural stimuli in real perceptual systems with that of ideal observers derived within the framework of Bayesian statistical decision theory. While this form of optimization theory has provided a deeper understanding of the information contained in natural stimuli as well as of the computational principles employed in perceptual systems, it does not directly consider the process of natural selection, which is ultimately responsible for design. Here we propose a formal framework for analysing how the statistics of natural stimuli and the process of natural selection interact to determine the design of perceptual systems. The framework consists of two complementary components. The first is a maximum fitness ideal observer, a standard Bayesian ideal observer with a utility function appropriate for natural selection. The second component is a formal version of natural selection based upon Bayesian statistical decision theory. Maximum fitness ideal observers and Bayesian natural selection are demonstrated in several examples. We suggest that the Bayesian approach is appropriate not only for the study of perceptual systems but also for the study of many other systems in biology.
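A maximum fitness ideal observer is, at its core, a Bayes rule whose utility is fitness. A minimal sketch over discrete states (the priors, likelihoods, and utilities below are invented for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical discrete setup: 3 world states, 2 possible responses.
prior = np.array([0.5, 0.3, 0.2])          # p(state)
likelihood = np.array([[0.7, 0.2, 0.1],    # p(obs | state); rows index the
                       [0.3, 0.8, 0.9]])   # two possible observations
utility = np.array([[1.0, 0.0, 0.0],       # u(response, state); for natural
                    [0.0, 0.5, 1.0]])      # selection, utility = fitness

def ideal_observer(obs):
    """Return the response that maximizes posterior expected utility."""
    posterior = likelihood[obs] * prior
    posterior /= posterior.sum()            # Bayes' rule
    return int(np.argmax(utility @ posterior))

print(ideal_observer(0), ideal_observer(1))  # -> 0 1
```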
7

Galvani, Marta, Chiara Bardelli, Silvia Figini, and Pietro Muliere. "A Bayesian Nonparametric Learning Approach to Ensemble Models Using the Proper Bayesian Bootstrap." Algorithms 14, no. 1 (January 3, 2021): 11. http://dx.doi.org/10.3390/a14010011.

Abstract:
Bootstrap resampling techniques, introduced by Efron and Rubin, can be presented in a general Bayesian framework, approximating the statistical distribution of a statistical functional ϕ(F), where F is a random distribution function. Efron’s and Rubin’s bootstrap procedures can be extended, introducing an informative prior through the Proper Bayesian bootstrap. In this paper different bootstrap techniques are used and compared in predictive classification and regression models based on ensemble approaches, i.e., bagging models involving decision trees. Proper Bayesian bootstrap, proposed by Muliere and Secchi, is used to sample the posterior distribution over trees, introducing prior distributions on the covariates and the target variable. The results obtained are compared with respect to other competitive procedures employing different bootstrap techniques. The empirical analysis reports the results obtained on simulated and real data.
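For orientation, Rubin's Bayesian bootstrap, the flat-prior special case that the Proper Bayesian bootstrap generalizes by mixing an informative prior measure into the weights, reweights the data rather than resampling it. A minimal sketch with a toy sample and the mean as the functional ϕ(F):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=2.0, scale=1.0, size=100)       # toy sample

# Each replicate draws observation weights from a flat Dirichlet(1, ..., 1)
# and evaluates the functional phi(F) on the reweighted empirical
# distribution; the replicates approximate the posterior of phi(F).
B = 2000
w = rng.dirichlet(np.ones(len(x)), size=B)         # (B, n) weight vectors
posterior_of_mean = w @ x

print(posterior_of_mean.mean(), np.percentile(posterior_of_mean, [2.5, 97.5]))
```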
8

Moore, Brian R., Sebastian Höhna, Michael R. May, Bruce Rannala, and John P. Huelsenbeck. "Critically evaluating the theory and performance of Bayesian analysis of macroevolutionary mixtures." Proceedings of the National Academy of Sciences 113, no. 34 (August 10, 2016): 9569–74. http://dx.doi.org/10.1073/pnas.1518659113.

Abstract:
Bayesian analysis of macroevolutionary mixtures (BAMM) has recently taken the study of lineage diversification by storm. BAMM estimates the diversification-rate parameters (speciation and extinction) for every branch of a study phylogeny and infers the number and location of diversification-rate shifts across branches of a tree. Our evaluation of BAMM reveals two major theoretical errors: (i) the likelihood function (which estimates the model parameters from the data) is incorrect, and (ii) the compound Poisson process prior model (which describes the prior distribution of diversification-rate shifts across branches) is incoherent. Using simulation, we demonstrate that these theoretical issues cause statistical pathologies; posterior estimates of the number of diversification-rate shifts are strongly influenced by the assumed prior, and estimates of diversification-rate parameters are unreliable. Moreover, the inability to correctly compute the likelihood or to correctly specify the prior for rate-variable trees precludes the use of Bayesian approaches for testing hypotheses regarding the number and location of diversification-rate shifts using BAMM.
9

Borysova, Valentyna I., and Bohdan P. Karnaukh. "Standard of proof in common law: Mathematical explication and probative value of statistical data." Journal of the National Academy of Legal Sciences of Ukraine 28, no. 2 (June 25, 2021): 171–80. http://dx.doi.org/10.37635/jnalsu.28(2).2021.171-180.

Abstract:
As a result of recent amendments to the procedural legislation of Ukraine, one may observe a tendency in judicial practice to differentiate the standards of proof depending on the type of litigation. Thus, in commercial litigation the so-called standard of “probability of evidence” applies, while in criminal proceedings the “beyond a reasonable doubt” standard applies. The purpose of this study was to find the rational justification for the differentiation of the standards of proof applied in civil (commercial) and criminal cases and to explain how the same fact is considered proven for the purposes of a civil lawsuit and not proven for the purposes of a criminal charge. The study is based on the methodology of Bayesian decision theory. The paper demonstrated how the principles of Bayesian decision theory can be applied to judicial fact-finding. According to Bayesian theory, the standard of proof applied depends on the ratio of the false positive error disutility to the false negative error disutility. Since both types of error have the same disutility in civil litigation, the threshold value of conviction is 50+ percent. In a criminal case, on the other hand, the disutility of false positive error considerably exceeds the disutility of the false negative one, and therefore the threshold value of conviction shall be much higher, amounting to 90 percent. Bayesian decision theory is premised on probabilistic assessments. And since the concept of probability has many meanings, the results of the application of Bayesian theory to judicial fact-finding can be interpreted in a variety of ways. When dealing with statistical evidence, it is crucial to distinguish between subjective and objective probability. Statistics indicate objective probability, while the standard of proof refers to subjective probability. Yet, in some cases, especially when statistical data is the only available evidence, the subjective probability may be roughly equivalent to the objective probability. In such cases, statistics cannot be ignored.
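The thresholds quoted here follow from the standard expected-loss comparison. In our notation (not the authors'), with p the fact-finder's probability that the claim is true and L_FP, L_FN the disutilities of false positive and false negative findings:

```latex
\text{find for the claimant} \iff (1-p)\,L_{FP} < p\,L_{FN}
\iff p > \frac{L_{FP}}{L_{FP}+L_{FN}} .
```

Equal disutilities give the civil threshold of just over 50 percent, while L_FP = 9·L_FN gives the criminal threshold of 90 percent.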
10

De Waal, D. J. "Summary on Bayes estimation and hypothesis testing." Suid-Afrikaanse Tydskrif vir Natuurwetenskap en Tegnologie 7, no. 1 (March 17, 1988): 28–32. http://dx.doi.org/10.4102/satnt.v7i1.896.

Abstract:
Although Bayes’ theorem was published in 1764, it is only recently that Bayesian procedures were used in practice in statistical analyses. Many developments have taken place and are still taking place in the areas of decision theory and group decision making. Two aspects, namely that of estimation and tests of hypotheses, will be looked into. This is the area of statistical inference mainly concerned with Mathematical Statistics.
11

Wijayanti, Rina. "PENAKSIRAN PARAMETER ANALISIS REGRESI COX DAN ANALISIS SURVIVAL BAYESIAN" [Parameter estimation for Cox regression analysis and Bayesian survival analysis]. PRISMATIKA: Jurnal Pendidikan dan Riset Matematika 1, no. 2 (June 1, 2019): 16–26. http://dx.doi.org/10.33503/prismatika.v1i2.427.

Abstract:
In estimation theory there are two approaches: the classical statistical approach and the Bayesian approach. In classical statistics, decisions are based only on the sample data taken from the population, whereas Bayesian statistics bases decisions on new information from the observed data (the sample) together with prior knowledge. In this paper, Cox regression analysis is taken as an example of parameter estimation under the classical statistical approach, and Bayesian survival analysis as an example of the Bayesian approach. Bayesian survival parameter estimation uses MCMC algorithms for models that are complex and difficult to solve, while the Cox regression model is fitted by the method of partial likelihood. Because the resulting parameter estimates have no closed form, they must be computed by Newton-Raphson iteration.
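The closing remark refers to the generic Newton-Raphson iteration β ← β − H(β)⁻¹s(β), used when the likelihood equations have no closed-form solution. A sketch on a toy likelihood (our example, not the paper's Cox model):

```python
import numpy as np

def newton_raphson(score, hessian, beta0, tol=1e-8, max_iter=50):
    """Maximize a log-likelihood by iterating beta <- beta - H^{-1} s."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hessian(beta), score(beta))
        beta = beta - step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Toy check: Poisson log-likelihood parameterized by b = log(rate);
# the maximizer satisfies exp(b) = sample mean.
y = np.array([3.0, 5.0, 4.0, 6.0])
score = lambda b: np.array([y.sum() - len(y) * np.exp(b[0])])
hessian = lambda b: np.array([[-len(y) * np.exp(b[0])]])
print(np.exp(newton_raphson(score, hessian, [0.0])))  # -> [4.5]
```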
12

Mukha, V. S., and N. F. Kako. "The integrals and integral transformations connected with the joint vector Gaussian distribution." Proceedings of the National Academy of Sciences of Belarus. Physics and Mathematics Series 57, no. 2 (July 16, 2021): 206–16. http://dx.doi.org/10.29235/1561-2430-2021-57-2-206-216.

Abstract:
In many applications it is desirable to consider not one random vector but a number of random vectors with the joint distribution. This paper is devoted to the integral and integral transformations connected with the joint vector Gaussian probability density function. Such integrals and transformations arise in the statistical decision theory, particularly, in the dual control theory based on the statistical decision theory. One of the results represented in the paper is the integral of the joint Gaussian probability density function. The other results are the total probability formula and Bayes formula formulated in terms of the joint vector Gaussian probability density function. As an example, the Bayesian estimations of the coefficients of the multiple regression function are obtained. The proposed integrals can be used as table integrals in various fields of research.
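In the standard conjugate setting, the Bayesian estimation of regression coefficients that such Gaussian integrals support has a familiar closed form. A sketch with simulated data, assuming a known noise variance for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 3
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.5, size=n)
sigma2 = 0.25                                    # known noise variance

# Gaussian prior beta ~ N(mu0, V0) gives a Gaussian posterior with
#   V_n = (V0^{-1} + X'X/sigma2)^{-1},  mu_n = V_n (V0^{-1} mu0 + X'y/sigma2).
mu0, V0 = np.zeros(d), np.eye(d)
Vn = np.linalg.inv(np.linalg.inv(V0) + X.T @ X / sigma2)
mun = Vn @ (np.linalg.inv(V0) @ mu0 + X.T @ y / sigma2)
print(mun)    # posterior-mean (Bayesian) estimate of the coefficients
```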
13

North, D. W. "Analysis of Uncertainty and Reaching Broad Conclusions." Journal of the American College of Toxicology 7, no. 5 (September 1988): 583–90. http://dx.doi.org/10.3109/10915818809019535.

Abstract:
Probability theory can provide a general way of reasoning about uncertainty, even when data are sparse or absent. The idea that probabilities can represent judgment is a basic principle for decision analysis and for the Bayesian school of statistics. The use of judgmental probabilities and Bayesian statistical methods for the analysis of toxicological data appears to be promising in reaching broad conclusions for policy and for research planning. Illustrative examples are given using quantal dose-response data from carcinogenicity bioassays for two chemicals, perchloroethylene and alachlor.
14

Brown, Joseph W., Caroline Parins-Fukuchi, Gregory W. Stull, Oscar M. Vargas, and Stephen A. Smith. "Bayesian and likelihood phylogenetic reconstructions of morphological traits are not discordant when taking uncertainty into consideration: a comment on Puttick et al." Proceedings of the Royal Society B: Biological Sciences 284, no. 1864 (October 11, 2017): 20170986. http://dx.doi.org/10.1098/rspb.2017.0986.

Abstract:
Puttick et al. (2017 Proc. R. Soc. B 284, 20162290 (doi:10.1098/rspb.2016.2290)) performed a simulation study to compare accuracy among methods of inferring phylogeny from discrete morphological characters. They report that a Bayesian implementation of the Mk model (Lewis 2001 Syst. Biol. 50, 913–925 (doi:10.1080/106351501753462876)) was most accurate (but with low resolution), while a maximum-likelihood (ML) implementation of the same model was least accurate. They conclude by strongly advocating that Bayesian implementations of the Mk model should be the default method of analysis for such data. While we appreciate the authors' attempt to investigate the accuracy of alternative methods of analysis, their conclusion is based on an inappropriate comparison of the ML point estimate, which does not consider confidence, with the Bayesian consensus, which incorporates estimation credibility into the summary tree. Using simulation, we demonstrate that ML and Bayesian estimates are concordant when confidence and credibility are comparably reflected in summary trees, a result expected from statistical theory. We therefore disagree with the conclusions of Puttick et al. and consider their prescription of any default method to be poorly founded. Instead, we recommend caution and thoughtful consideration of the model or method being applied to a morphological dataset.
15

Girtler, Jerzy. "Limiting Distribution of the Three-State Semi-Markov Model of Technical State Transitions of Ship Power Plant Machines and its Applicability in Operational Decision-Making." Polish Maritime Research 27, no. 2 (June 1, 2020): 136–44. http://dx.doi.org/10.2478/pomr-2020-0035.

Abstract:
The article presents the three-state semi-Markov model of the process {W(t): t ≥ 0} of state transitions of a ship power plant machine, with the following interpretation of these states: s1 – state of full serviceability, s2 – state of partial serviceability, and s3 – state of unserviceability. These states are precisely defined for the ship main engine (ME). A hypothesis is proposed which explains the possibility of application of this model to examine models of real state transitions of ship power plant machines. Empirical data concerning ME were used for calculating limiting probabilities for the process {W(t): t ≥ 0}. The applicability of these probabilities in decision making with the assistance of the Bayesian statistical theory is demonstrated. The probabilities were calculated using a procedure included in the computational software MATHEMATICA, taking into consideration the fact that the random variables representing state transition times of the process {W(t): t ≥ 0} have gamma distributions. The usefulness of the Bayesian statistical theory in operational decision-making concerning ship power plants is shown using a decision dendrite which maps ME states and consequences of particular decisions, thus making it possible to choose between the following two decisions: d1 – first perform a relevant preventive service of the engine to restore its state and then perform the commissioned task within the time limit determined by the customer, and d2 – omit the preventive service and start performing the commissioned task.
16

Lepora, Nathan F., and Kevin N. Gurney. "The Basal Ganglia Optimize Decision Making over General Perceptual Hypotheses." Neural Computation 24, no. 11 (November 2012): 2924–45. http://dx.doi.org/10.1162/neco_a_00360.

Abstract:
The basal ganglia are a subcortical group of interconnected nuclei involved in mediating action selection within cortex. A recent proposal is that this selection leads to optimal decision making over multiple alternatives because the basal ganglia anatomy maps onto a network implementation of an optimal statistical method for hypothesis testing, assuming that cortical activity encodes evidence for constrained gaussian-distributed alternatives. This letter demonstrates that this model of the basal ganglia extends naturally to encompass general Bayesian sequential analysis over arbitrary probability distributions, which raises the proposal to a practically realizable theory over generic perceptual hypotheses. We also show that the evidence in this model can represent either log likelihoods, log-likelihood ratios, or log odds, all leading proposals for the cortical processing of sensory data. For these reasons, we claim that the basal ganglia optimize decision making over general perceptual hypotheses represented in cortex. The relation of this theory to cortical encoding, cortico-basal ganglia anatomy, and reinforcement learning is discussed.
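For two hypotheses, the Bayesian sequential analysis at the heart of this proposal reduces to Wald's sequential probability ratio test: accumulate the log-likelihood ratio of the evidence until it crosses a decision bound. A minimal sketch for two Gaussian alternatives (parameters ours):

```python
import numpy as np

rng = np.random.default_rng(7)
mu0, mu1, sigma = 0.0, 0.5, 1.0
A, B = np.log(19), -np.log(19)   # Wald bounds for ~5% error rates each way

def sprt(samples):
    """Accumulate log p1(x)/p0(x) until a bound is crossed."""
    llr = 0.0
    for t, x in enumerate(samples, start=1):
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= A:
            return "H1", t
        if llr <= B:
            return "H0", t
    return "undecided", len(samples)

print(sprt(rng.normal(mu1, sigma, size=200)))   # typically ("H1", ~20 samples)
```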
17

Garrett, K. A., L. V. Madden, G. Hughes, and W. F. Pfender. "New Applications of Statistical Tools in Plant Pathology." Phytopathology® 94, no. 9 (September 2004): 999–1003. http://dx.doi.org/10.1094/phyto.2004.94.9.999.

Abstract:
The series of papers introduced by this one address a range of statistical applications in plant pathology, including survival analysis, nonparametric analysis of disease associations, multivariate analyses, neural networks, meta-analysis, and Bayesian statistics. Here we present an overview of additional applications of statistics in plant pathology. An analysis of variance based on the assumption of normally distributed responses with equal variances has been a standard approach in biology for decades. Advances in statistical theory and computation now make it convenient to appropriately deal with discrete responses using generalized linear models, with adjustments for overdispersion as needed. New nonparametric approaches are available for analysis of ordinal data such as disease ratings. Many experiments require the use of models with fixed and random effects for data analysis. New or expanded computing packages, such as SAS PROC MIXED, coupled with extensive advances in statistical theory, allow for appropriate analyses of normally distributed data using linear mixed models, and discrete data with generalized linear mixed models. Decision theory offers a framework in plant pathology for contexts such as the decision about whether to apply or withhold a treatment. Model selection can be performed using Akaike's information criterion. Plant pathologists studying pathogens at the population level have traditionally been the main consumers of statistical approaches in plant pathology, but new technologies such as microarrays supply estimates of gene expression for thousands of genes simultaneously and present challenges for statistical analysis. Applications to the study of the landscape of the field and of the genome share the risk of pseudoreplication, the problem of determining the appropriate scale of the experimental unit and of obtaining sufficient replication at that scale.
18

STAUFFER, HOWARD B. "APPLICATION OF BAYESIAN STATISTICAL INFERENCE AND DECISION THEORY TO A FUNDAMENTAL PROBLEM IN NATURAL RESOURCE SCIENCE: THE ADAPTIVE MANAGEMENT OF AN ENDANGERED SPECIES." Natural Resource Modeling 21, no. 2 (April 29, 2008): 264–84. http://dx.doi.org/10.1111/j.1939-7445.2008.00007.x.

19

Yang, Jen-Jen, Yen-Ching Chuang, Huai-Wei Lo, and Ting-I. Lee. "A Two-Stage MCDM Model for Exploring the Influential Relationships of Sustainable Sports Tourism Criteria in Taichung City." International Journal of Environmental Research and Public Health 17, no. 7 (March 30, 2020): 2319. http://dx.doi.org/10.3390/ijerph17072319.

Abstract:
Many countries advocate sports for all to cultivate people's interest in sports. In cities, cross-industry alliances between sports and tourism are one of the common practices. The following two important issues need to be discussed, namely, what factors should be paid attention to in the development of sports tourism, and what are the mutual influential relationships among these factors. This study proposes a novel two-stage multi-criteria decision-making (MCDM) model to incorporate the concept of sustainable development into sports tourism. First, the Bayesian best–worst method (Bayesian BWM) is used to screen out important criteria. Bayesian BWM solves the expert-opinion integration problem of conventional BWM, using statistical probability to estimate the optimal group criteria weights. Secondly, the rough decision making trial and evaluation laboratory (rough DEMATEL) technique is used to map out complex influential relationships. Introducing rough set theory into DEMATEL makes the method more practical. In the calculation program, interval types are used to replace crisp values in order to retain more expert information. A city in central Taiwan was used to demonstrate the effectiveness of the model. The results show that the quality of urban security, government marketing, business sponsorship and mass transit planning are the most important criteria. In addition, coordination with local festivals is the most influential factor in the overall evaluation system.
20

Zhang, Zhewei, Huzi Cheng, and Tianming Yang. "A recurrent neural network framework for flexible and adaptive decision making based on sequence learning." PLOS Computational Biology 16, no. 11 (November 3, 2020): e1008342. http://dx.doi.org/10.1371/journal.pcbi.1008342.

Abstract:
The brain makes flexible and adaptive responses in a complicated and ever-changing environment for an organism’s survival. To achieve this, the brain needs to understand the contingencies between its sensory inputs, actions, and rewards. This is analogous to the statistical inference that has been extensively studied in the natural language processing field, where recent developments of recurrent neural networks have found many successes. We wonder whether these neural networks, the gated recurrent unit (GRU) networks in particular, reflect how the brain solves the contingency problem. Therefore, we build a GRU network framework inspired by the statistical learning approach of NLP and test it with four exemplar behavior tasks previously used in empirical studies. The network models are trained to predict future events based on past events, both comprising sensory, action, and reward events. We show the networks can successfully reproduce animal and human behavior. The networks generalize the training, perform Bayesian inference in novel conditions, and adapt their choices when event contingencies vary. Importantly, units in the network encode task variables and exhibit activity patterns that match previous neurophysiology findings. Our results suggest that the neural network approach based on statistical sequence learning may reflect the brain’s computational principle underlying flexible and adaptive behaviors and serve as a useful approach to understand the brain.
21

Scales, John A., and Luis Tenorio. "Prior information and uncertainty in inverse problems." GEOPHYSICS 66, no. 2 (March 2001): 389–97. http://dx.doi.org/10.1190/1.1444930.

Abstract:
Solving any inverse problem requires understanding the uncertainties in the data to know what it means to fit the data. We also need methods to incorporate data‐independent prior information to eliminate unreasonable models that fit the data. Both of these issues involve subtle choices that may significantly influence the results of inverse calculations. The specification of prior information is especially controversial. How does one quantify information? What does it mean to know something about a parameter a priori? In this tutorial we discuss Bayesian and frequentist methodologies that can be used to incorporate information into inverse calculations. In particular we show that apparently conservative Bayesian choices, such as representing interval constraints by uniform probabilities (as is commonly done when using genetic algorithms, for example) may lead to artificially small uncertainties. We also describe tools from statistical decision theory that can be used to characterize the performance of inversion algorithms.
22

Huang, Yi Hu, Jin Li Wang, and Xi Mei Jia. "Research of Soccer Robot Target Tracking Algorithm Based on Improved CAMShift." Advanced Materials Research 221 (March 2011): 610–14. http://dx.doi.org/10.4028/www.scientific.net/amr.221.610.

Abstract:
To meet the vision needs of robot soccer, where CAMShift tracking is inefficient against a dynamic background, this paper puts forward a new tracking algorithm that improves on CAMShift. A background model updated in real time is built by traversing all target pixels in the search area to compute the color probability distribution of the target; statistical principles and the minimum-error-rate rule of Bayesian decision theory are then used to achieve a more accurate distinction between the target and the background. Compared with CAMShift, the new algorithm is more robust in the soccer robot game and meets the requirements of fast and accurate tracking.
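The minimum-error-rate Bayes rule invoked here labels a pixel as target exactly when its posterior probability exceeds that of the background. A toy sketch with an 8-bin hue histogram (all numbers invented for illustration):

```python
import numpy as np

p_hue_given_target = np.array([.01, .02, .05, .40, .35, .10, .05, .02])
p_hue_given_bg     = np.array([.20, .25, .20, .10, .05, .05, .05, .10])
p_target = 0.3                      # prior fraction of target pixels

def is_target(hue_bin):
    """Minimum-error-rate rule: compare the two unnormalized posteriors."""
    return (p_hue_given_target[hue_bin] * p_target
            > p_hue_given_bg[hue_bin] * (1 - p_target))

print([is_target(b) for b in range(8)])   # only hue bins 3 and 4 are target
```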
23

FELGAER, PABLO, PAOLA BRITOS, and RAMÓN GARCÍA-MARTÍNEZ. "PREDICTION IN HEALTH DOMAIN USING BAYESIAN NETWORKS OPTIMIZATION BASED ON INDUCTION LEARNING TECHNIQUES." International Journal of Modern Physics C 17, no. 03 (March 2006): 447–55. http://dx.doi.org/10.1142/s0129183106008558.

Abstract:
A Bayesian network is a directed acyclic graph in which each node represents a variable and each arc a probabilistic dependency; they are used to provide: a compact form to represent the knowledge and flexible methods of reasoning. Obtaining it from data is a learning process that is divided in two steps: structural learning and parametric learning. In this paper we define an automatic learning method that optimizes the Bayesian networks applied to classification, using a hybrid method of learning that combines the advantages of the induction techniques of the decision trees (TDIDT-C4.5) with those of the Bayesian networks. The resulting method is applied to prediction in health domain.
24

Liu, Shun, Qin Xu, and Pengfei Zhang. "Identifying Doppler Velocity Contamination Caused by Migrating Birds. Part II: Bayes Identification and Probability Tests." Journal of Atmospheric and Oceanic Technology 22, no. 8 (August 1, 2005): 1114–21. http://dx.doi.org/10.1175/jtech1758.1.

Abstract:
Based on the Bayesian statistical decision theory, a probabilistic quality control (QC) technique is developed to identify and flag migrating-bird-contaminated sweeps of level II velocity scans at the lowest elevation angle using the QC parameters presented in Part I. The QC technique can use either each single QC parameter or all three in combination. The single-parameter QC technique is shown to be useful for evaluating the effectiveness of each QC parameter based on the smallness of the tested percentages of wrong decision by using the ground truth information (if available) or based on the smallness of the estimated probabilities of wrong decision (if there is no ground truth information). The multiparameter QC technique is demonstrated to be much better than any of the three single-parameter QC techniques, as indicated by the very small value of the tested percentages of wrong decision for no-flag decisions (not contaminated by migrating birds). Since the averages of the estimated probabilities of wrong decision are quite close to the tested percentages of wrong decision, they can provide useful information about the probability of wrong decision when the multiparameter QC technique is used for real applications (with no ground truth information).
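To illustrate how several QC parameters can combine into a single flagging decision, suppose (purely as a simplification; the paper's actual likelihood models are not given in the abstract) that the three parameters are conditionally independent given the class, so the posterior odds multiply:

```python
import numpy as np

prior_bird = 0.2                          # hypothetical prior
lik_bird  = np.array([0.8, 0.6, 0.7])     # p(observed param value | birds)
lik_clean = np.array([0.1, 0.3, 0.2])     # p(observed param value | clean)

post_odds = prior_bird / (1 - prior_bird) * np.prod(lik_bird / lik_clean)
p_bird = post_odds / (1 + post_odds)
print(p_bird)   # ~0.93; flagging here has ~0.07 probability of wrong decision
```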
25

Daniel, Lucky O., Caston Sigauke, Colin Chibaya, and Rendani Mbuvha. "Short-Term Wind Speed Forecasting Using Statistical and Machine Learning Methods." Algorithms 13, no. 6 (May 26, 2020): 132. http://dx.doi.org/10.3390/a13060132.

Abstract:
Wind offers an environmentally sustainable energy resource that has seen increasing global adoption in recent years. However, its intermittent, unstable and stochastic nature hampers its representation among other renewable energy sources. This work addresses the forecasting of wind speed, a primary input needed for wind energy generation, using data obtained from the South African Wind Atlas Project. Forecasting is carried out on a two days ahead time horizon. We investigate the predictive performance of artificial neural networks (ANN) trained with Bayesian regularisation, decision-tree-based stochastic gradient boosting (SGB) and generalised additive models (GAMs). The results of the comparative analysis suggest that ANN displays superior predictive performance based on root mean square error (RMSE). In contrast, SGB performs best in terms of mean average error (MAE) and the related mean average percentage error (MAPE). A further comparison of two forecast combination methods involving the linear and additive quantile regression averaging shows the latter forecast combination method as yielding lower prediction accuracy. The additive quantile regression averaging based prediction intervals also perform best in terms of validity, reliability, quality and accuracy. Interval combination methods show the median method as better than its pure average counterpart. Point forecasts combination and interval forecasting methods are found to improve forecast performance.
26

Domanov, Aleksey. "The Basics of Bayesian Approach to Quantitative Analysis (at the Example of Euroscepticism)." Political Science (RU), no. 1 (2021): 301–21. http://dx.doi.org/10.31249/poln/2021.01.13.

Abstract:
This article attempts to identify the main assumptions, prerequisites and techniques of the methods developed by some modern statisticians on the basis of T. Bayes' theorem for the purposes of social variables interactions assessment. The author underlined several advantages of the given approach as compared to more traditional quantitative methods and highlighted key research areas subject to evaluation by Bayesian estimates. First of all, this approach is compatible with game and decision theory, event analysis, hidden Markov chains, prediction using neural networks and other predictive algorithms of artificial intelligence. The Bayesian approach differs significantly from traditional statistical methods (first of all, it is focused on finding the most probable, rather than the only true value of the feature coupling coefficient), hence a graphical interpretation was provided for such basic concepts and techniques as probabilistic inference, maximum likelihood estimation and Bayesian confidence network. The described tools were used to test the hypothesis about the impact of life quality decrease on rise in Euroscepticism of EU citizens. ANOVA and correlation analysis of 27 thousand people’s responses to Eurobarometer questions addressed in November-December 2019 attributed strong likelihood to this assumption. Moreover, Bayesian approach allowed for a probabilistic conclusion that this hypothesis is more plausible than the link between Euroscepticism and respondents’ current financial situation (explanatory power of comparison to the past is relatively greater).
27

Mukha, V. S., and N. F. Kako. "Integrals and integral transformations related to the vector Gaussian distribution." Proceedings of the National Academy of Sciences of Belarus. Physics and Mathematics Series 55, no. 4 (January 7, 2020): 457–66. http://dx.doi.org/10.29235/1561-2430-2019-55-4-457-466.

Abstract:
This paper is dedicated to the integrals and integral transformations related to the probability density function of the vector Gaussian distribution and arising in probability applications. Herein, we present three integrals that permit the calculation of the moments of the multivariate Gaussian distribution. Moreover, the total probability formula and Bayes formula for the vector Gaussian distribution are given. The obtained results are proven. The deduction of the integrals is performed on the basis of the Gauss elimination method. The total probability formula and Bayes formula are obtained on the basis of the proven integrals. These integrals and integral transformations could be used, for example, in the statistical decision theory, particularly, in the dual control theory, and as table integrals in various areas of research. On the basis of the obtained results, Bayesian estimations of the coefficients of the multiple regression function are calculated.
28

Zhang, Zhihao, Saksham Chandra, Andrew Kayser, Ming Hsu, and Joshua L. Warren. "A Hierarchical Bayesian Implementation of the Experience-Weighted Attraction Model." Computational Psychiatry 4 (August 2020): 40–60. http://dx.doi.org/10.1162/cpsy_a_00028.

Abstract:
Social and decision-making deficits are often the first symptoms of neuropsychiatric disorders. In recent years, economic games, together with computational models of strategic learning, have been increasingly applied to the characterization of individual differences in social behavior, as well as their changes across time due to disease progression, treatment, or other factors. At the same time, the high dimensionality of these data poses an important challenge to statistical estimation of these models, potentially limiting the adoption of such approaches in patients and special populations. We introduce a hierarchical Bayesian implementation of a class of strategic learning models, experience-weighted attraction (EWA), that is widely used in behavioral game theory. Importantly, this approach provides a unified framework for capturing between- and within-participant variation, including changes associated with disease progression, comorbidity, and treatment status. We show using simulated data that our hierarchical Bayesian approach outperforms representative agent and individual-level estimation methods that are commonly used in extant literature, with respect to parameter estimation and uncertainty quantification. Furthermore, using an empirical dataset, we demonstrate the value of our approach over competing methods with respect to balancing model fit and complexity. Consistent with the success of hierarchical Bayesian approaches in other areas of behavioral science, our hierarchical Bayesian EWA model represents a powerful and flexible tool to apply to a wide range of behavioral paradigms for studying the interplay between complex human behavior and biological factors.
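For reference, the attraction update at the core of EWA in the usual Camerer-Ho parameterization (the paper's hierarchical priors sit on parameters such as φ, ρ, δ and the logit precision λ; π^j(t) denotes the payoff strategy j earned, or would have earned, in round t):

```latex
N(t) = \rho\,N(t-1) + 1, \qquad
A^{j}(t) = \frac{\phi\,N(t-1)\,A^{j}(t-1)
  + \bigl[\delta + (1-\delta)\,\mathbf{1}\{s^{j} = s(t)\}\bigr]\,\pi^{j}(t)}{N(t)},
\qquad
P^{j}(t+1) = \frac{e^{\lambda A^{j}(t)}}{\sum_{k} e^{\lambda A^{k}(t)}} .
```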
29

Magee, Andrew F., Sebastian Höhna, Tetyana I. Vasylyeva, Adam D. Leaché, and Vladimir N. Minin. "Locally adaptive Bayesian birth-death model successfully detects slow and rapid rate shifts." PLOS Computational Biology 16, no. 10 (October 28, 2020): e1007999. http://dx.doi.org/10.1371/journal.pcbi.1007999.

Abstract:
Birth-death processes have given biologists a model-based framework to answer questions about changes in the birth and death rates of lineages in a phylogenetic tree. Therefore birth-death models are central to macroevolutionary as well as phylodynamic analyses. Early approaches to studying temporal variation in birth and death rates using birth-death models faced difficulties due to the restrictive choices of birth and death rate curves through time. Sufficiently flexible time-varying birth-death models are still lacking. We use a piecewise-constant birth-death model, combined with both Gaussian Markov random field (GMRF) and horseshoe Markov random field (HSMRF) prior distributions, to approximate arbitrary changes in birth rate through time. We implement these models in the widely used statistical phylogenetic software platform RevBayes, allowing us to jointly estimate birth-death process parameters, phylogeny, and nuisance parameters in a Bayesian framework. We test both GMRF-based and HSMRF-based models on a variety of simulated diversification scenarios, and then apply them to both a macroevolutionary and an epidemiological dataset. We find that both models are capable of inferring variable birth rates and correctly rejecting variable models in favor of effectively constant models. In general the HSMRF-based model has higher precision than its GMRF counterpart, with little to no loss of accuracy. Applied to a macroevolutionary dataset of the Australian gecko family Pygopodidae (where birth rates are interpretable as speciation rates), the GMRF-based model detects a slow decrease whereas the HSMRF-based model detects a rapid speciation-rate decrease in the last 12 million years. Applied to an infectious disease phylodynamic dataset of sequences from HIV subtype A in Russia and Ukraine (where birth rates are interpretable as the rate of accumulation of new infections), our models detect a strongly elevated rate of infection in the 1990s.
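Schematically, the two priors differ only in the local scales placed on the increments of the piecewise-constant log birth rate (notation ours):

```latex
\text{GMRF:}\quad \log\lambda_{i} - \log\lambda_{i-1} \sim \mathcal{N}(0,\,\sigma^{2});
\qquad
\text{HSMRF:}\quad \log\lambda_{i} - \log\lambda_{i-1} \sim \mathcal{N}(0,\,\sigma^{2}\gamma_{i}^{2}),
\;\; \gamma_{i} \sim \mathrm{C}^{+}(0,1).
```

The half-Cauchy local scales γ_i let the HSMRF keep most increments near zero while permitting occasional large jumps, which is consistent with its ability to detect rapid shifts with little loss of precision elsewhere.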
30

Reichert, Peter. "Towards a comprehensive uncertainty assessment in environmental research and decision support." Water Science and Technology 81, no. 8 (January 29, 2020): 1588–96. http://dx.doi.org/10.2166/wst.2020.032.

Abstract:
Uncertainty quantification is very important in environmental management to allow decision makers to consider the reliability of predictions of the consequences of decision alternatives and relate them to their risk attitudes and the uncertainty about their preferences. Nevertheless, uncertainty quantification in environmental decision support is often incomplete and the robustness of the results regarding assumptions made for uncertainty quantification is often not investigated. In this article, an attempt is made to demonstrate how uncertainty can be considered more comprehensively in environmental research and decision support by combining well-established with rarely applied statistical techniques. In particular, the following elements of uncertainty quantification are discussed: (i) using stochastic, mechanistic models that consider and propagate uncertainties from their origin to the output; (ii) profiting from the support of modern techniques of data science to increase the diversity of the exploration process, to benchmark mechanistic models, and to find new relationships; (iii) analysing structural alternatives by multi-model and non-parametric approaches; (iv) quantitatively formulating and using societal preferences in decision support; (v) explicitly considering the uncertainty of elicited preferences in addition to the uncertainty of predictions in decision support; and (vi) explicitly considering the ambiguity about prior distributions for predictions and preferences by using imprecise probabilities. In particular, (v) and (vi) have mostly been ignored in the past and a guideline is provided on how these uncertainties can be considered without significantly increasing the computational burden. The methodological approach to (v) and (vi) is based on expected expected utility theory, which extends expected utility theory to the consideration of uncertain preferences, and on imprecise, intersubjective Bayesian probabilities.
31

Parag, Kris V., and Christl A. Donnelly. "Adaptive Estimation for Epidemic Renewal and Phylogenetic Skyline Models." Systematic Biology 69, no. 6 (April 25, 2020): 1163–79. http://dx.doi.org/10.1093/sysbio/syaa035.

Abstract:
Estimating temporal changes in a target population from phylogenetic or count data is an important problem in ecology and epidemiology. Reliable estimates can provide key insights into the climatic and biological drivers influencing the diversity or structure of that population and evidence hypotheses concerning its future growth or decline. In infectious disease applications, the individuals infected across an epidemic form the target population. The renewal model estimates the effective reproduction number, R, of the epidemic from counts of observed incident cases. The skyline model infers the effective population size, N, underlying a phylogeny of sequences sampled from that epidemic. Practically, R measures ongoing epidemic growth while N informs on historical caseload. While both models solve distinct problems, the reliability of their estimates depends on p-dimensional piecewise-constant functions. If p is misspecified, the model might underfit significant changes or overfit noise and promote a spurious understanding of the epidemic, which might misguide intervention policies or misinform forecasts. Surprisingly, no transparent yet principled approach for optimizing p exists. Usually, p is heuristically set, or obscurely controlled via complex algorithms. We present a computable and interpretable p-selection method based on the minimum description length (MDL) formalism of information theory. Unlike many standard model selection techniques, MDL accounts for the additional statistical complexity induced by how parameters interact. As a result, our method optimizes p so that R and N estimates properly and meaningfully adapt to available data. It also outperforms comparable Akaike and Bayesian information criteria on several classification problems, given minimal knowledge of the parameter space, and exposes statistical similarities among renewal, skyline, and other models in biology. Rigorous and interpretable model selection is necessary if trustworthy and justifiable conclusions are to be drawn from piecewise models. [Coalescent processes; epidemiology; information theory; model selection; phylodynamics; renewal models; skyline plots]
32

Nemykin, O. I. "ALGORITHM FOR SELECTION OF LAUNCH ELEMENTS IN THE PRESENCE OF A PRIORI INFORMATION ABOUT ITS COMPOSITION AND STRUCTURE." Issues of radio electronics, no. 3 (March 20, 2018): 114–19. http://dx.doi.org/10.21778/2218-5453-2018-3-114-119.

Abstract:
Traditional methods of statistical decision theory are designed for making unambiguous two-alternative or multi-alternative decisions about the class of an object. If ambiguous multi-alternative decisions (three-alternative, in the case of the selection of space objects) on the classification of space objects are allowed at the stages of the selection process, a modification of the traditional statistical decision-making algorithm is required. Such a modification can be carried out by an appropriate choice of the loss function. Within the Bayesian approach, an additive loss function is proposed whose structure takes into account a priori information on the structure and composition of launch elements in relation to the classes «Launch vehicle» and «spacecraft». The decision-making algorithm is synthesized under conditions of a priori certainty regarding the probabilistic description of the analyzed situation. It is shown that the problem of testing three-alternative hypotheses can be reduced to the independent testing of three two-alternative hypotheses, which makes it possible to take partial decisions during the selection process and to use different sets of selection features when forming decisions for individual classes of space objects. The peculiarities of implementing the selection algorithm are discussed for the case where a priori information and measurement information on launches are available only in a limited volume. The synthesized Bayesian decision-making algorithm has the properties needed to solve the problem of selecting space objects at launch under real conditions, with measurement information specified in the form of a training sample. Its architecture allows both unambiguous and ambiguous decisions to be formed about each space object in the launch.
33

Manski, Charles F., and Aleksey Tetenov. "Sufficient trial size to inform clinical practice." Proceedings of the National Academy of Sciences 113, no. 38 (September 6, 2016): 10518–23. http://dx.doi.org/10.1073/pnas.1612174113.

Abstract:
Medical research has evolved conventions for choosing sample size in randomized clinical trials that rest on the theory of hypothesis testing. Bayesian statisticians have argued that trials should be designed to maximize subjective expected utility in settings of clinical interest. This perspective is compelling given a credible prior distribution on treatment response, but there is rarely consensus on what the subjective prior beliefs should be. We use Wald’s frequentist statistical decision theory to study design of trials under ambiguity. We show that ε-optimal rules exist when trials have large enough sample size. An ε-optimal rule has expected welfare within ε of the welfare of the best treatment in every state of nature. Equivalently, it has maximum regret no larger than ε. We consider trials that draw predetermined numbers of subjects at random within groups stratified by covariates and treatments. We report exact results for the special case of two treatments and binary outcomes. We give simple sufficient conditions on sample sizes that ensure existence of ε-optimal treatment rules when there are multiple treatments and outcomes are bounded. These conditions are obtained by application of Hoeffding large deviations inequalities to evaluate the performance of empirical success rules.
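A back-of-envelope version of how such sufficient conditions arise in the two-treatment, binary-outcome case with n subjects per arm (crude Hoeffding constants of our own, not the paper's exact results):

```python
import math

# If the success probabilities differ by delta, Hoeffding gives
# P(empirical success rule picks the worse arm) <= exp(-n * delta**2),
# so regret <= delta * exp(-n * delta**2) <= 1 / sqrt(2 * e * n).
def max_regret_bound(n):
    return 1.0 / math.sqrt(2 * math.e * n)

def sample_size_for(eps):
    """Smallest n per arm that makes the bound <= eps: n >= 1/(2*e*eps^2)."""
    return math.ceil(1.0 / (2 * math.e * eps ** 2))

print(max_regret_bound(100), sample_size_for(0.05))   # ~0.043, 74
```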
34

Prateepasen, Asa, Pakorn Kaewtrakulpong, and Chalermkiat Jirarungsatean. "Semi-Parametric Learning for Classification of Pitting Corrosion Detected by Acoustic Emission." Key Engineering Materials 321-323 (October 2006): 549–52. http://dx.doi.org/10.4028/www.scientific.net/kem.321-323.549.

Abstract:
This paper presents a Non-Destructive Testing (NDT) technique, Acoustic Emission (AE), to classify pitting corrosion severity in austenitic stainless steel 304 (SS304). The corrosion severity is graded roughly into five levels based on the depth of corrosion. A number of time-domain AE parameters were extracted and used as features in our classification methods. In this work, we present practical classification techniques based on Bayesian Statistical Decision Theory, namely Maximum A Posteriori (MAP) and Maximum Likelihood (ML) classifiers. A mixture of Gaussian distributions is used as the class-conditional probability density function for the classifiers. The mixture model has several appealing attributes such as the ability to model any probability density function (pdf) with any precision and the efficiency of the parameter-estimation algorithm. However, the model still suffers from model-order-selection and initialization problems which greatly limit its applications. In this work, we introduce a semi-parametric scheme for learning the mixture model which can solve the mentioned difficulties. The method was compared with a conventional Feed-Forward Neural Network (FFNN) and a Probabilistic Neural Network (PNN) to evaluate its performance. We found that our proposed methods gave a much lower classification-error rate and also far smaller variance of the classifiers.
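A MAP classifier with Gaussian-mixture class-conditional densities fits one mixture per severity class and picks the class maximizing log p(x | class) + log p(class). A compact sketch on synthetic two-feature data, with scikit-learn's EM fitting standing in for the paper's semi-parametric learning scheme:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
X0 = rng.normal([0.0, 0.0], 1.0, size=(200, 2))   # toy class-0 AE features
X1 = rng.normal([3.0, 2.0], 1.0, size=(200, 2))   # toy class-1 AE features
log_priors = np.log([0.5, 0.5])

models = [GaussianMixture(n_components=2, random_state=0).fit(X)
          for X in (X0, X1)]

def classify(x):
    """MAP rule: argmax over classes of log-likelihood + log-prior."""
    scores = [m.score_samples(x.reshape(1, -1))[0] + lp
              for m, lp in zip(models, log_priors)]
    return int(np.argmax(scores))

print(classify(np.array([2.5, 1.5])))   # -> 1
```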
35

Mousavi, Seyed Pezhman, Saeid Atashrouz, Menad Nait Amar, Abdolhossein Hemmati-Sarapardeh, Ahmad Mohaddespour, and Amir Mosavi. "Viscosity of Ionic Liquids: Application of the Eyring’s Theory and a Committee Machine Intelligent System." Molecules 26, no. 1 (December 31, 2020): 156. http://dx.doi.org/10.3390/molecules26010156.

Abstract:
Accurate determination of the physicochemical characteristics of ionic liquids (ILs), especially viscosity, at widespread operating conditions plays a vital role in various fields. In this study, the viscosity of pure ILs is modeled using three approaches: (I) a simple group contribution method based on temperature, pressure, boiling temperature, acentric factor, molecular weight, critical temperature, critical pressure, and critical volume; (II) a model based on thermodynamic properties, pressure, and temperature; and (III) a model based on chemical structure, pressure, and temperature. Furthermore, Eyring’s absolute rate theory is used to predict viscosity based on boiling temperature and temperature. To develop Model (I), a simple correlation was applied, while for Models (II) and (III), smart approaches such as multilayer perceptron networks optimized by a Levenberg–Marquardt algorithm (MLP-LMA) and Bayesian Regularization (MLP-BR), decision tree (DT), and least square support vector machine optimized by bat algorithm (BAT-LSSVM) were utilized to establish robust and accurate predictive paradigms. These approaches were implemented using a large database consisting of 2813 experimental viscosity points from 45 different ILs under an extensive range of pressure and temperature. Afterward, the four most accurate models were selected to construct a committee machine intelligent system (CMIS). The results of Eyring’s theory for predicting viscosity demonstrated that, although the theory is not precise, its simplicity is still beneficial. The proposed CMIS model provides the most precise responses with an absolute average relative deviation (AARD) of less than 4% for predicting the viscosity of ILs based on Model (II) and (III). Lastly, the applicability domain of the CMIS model and the quality of experimental data were assessed through the Leverage statistical method. It is concluded that intelligent-based predictive models are powerful alternatives for time-consuming and expensive experimental processes of the ILs viscosity measurement.
36

Solodov, A. A. "Mathematical Formalization and Algorithmization of the Main Modules of Organizational and Technical Systems." Statistics and Economics 17, no. 4 (September 6, 2020): 96–104. http://dx.doi.org/10.21686/2500-3925-2020-4-96-104.

Abstract:
The purpose of the research is to develop a generalized structural scheme of organizational and technical systems, based on the general theory of management, that contains the necessary and sufficient set of modules, and to formalize on this basis the main management tasks that act as goals of the behavior of the managed object. The modules that directly implement the management process are the state assessment module and the control module. It is shown that in traditional organizational and technical systems that include a decision-maker, the key module is the state assessment module. Accordingly, the central aspect of the work is the study of optimal algorithms for estimating the state of processes occurring in organizational and technical systems, and the development on this basis of principles for the mathematical formalization and algorithmization of the state assessment module. The research method is the application of the theory of statistical estimation of random processes observed against a background of interference, and the synthesis of algorithms for the state assessment module on this basis. A characteristic feature of random processes occurring in organizational and technical systems is their essentially discrete nature and Poisson statistics. A mathematical description of the statistical characteristics of point random processes is formulated that is suitable for solving the main problems of process estimation and control in such systems. The main results are the definition of the state space of organizational and technical systems and the development of a generalized structural scheme in state space that includes the modules forming the state variable: the assessment module and the control module. This mathematical interpretation of the system structure allowed the main problems solved by typical organizational and technical systems to be formalized, and optimal algorithms for solving them to be considered. In the synthesis of optimal algorithms, the state assessment module and the control module are optimized separately, with the main attention paid to optimal estimation algorithms. The formalization and algorithmization of system behavior is undertaken mainly in terms of the Bayesian criterion of optimal statistical estimation. Various methods of overcoming the a priori uncertainty typical of real organizational and technical systems are indicated, and methods of adaptation are discussed, including Bayesian adaptation of the decision-making procedure under a priori uncertainty. Using a special case of the central limit theorem, an asymptotic statistical relationship between the point processes mentioned and traditional Gaussian processes is established. As an example, the nontrivial problem of optimal detection of a Poisson signal against a background of Poisson noise is considered; graphs of the potential noise immunity of this algorithm are calculated and presented, with references to previously obtained results on the estimation of Poisson processes. For automatic organizational and technical systems, generally accepted criteria for the quality of control are specified. The review concludes with a classification of methods for the formalization and algorithmization of problems describing the behavior of organizational and technical systems.
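The example mentioned at the end of the abstract, optimal detection of a Poisson signal against a background of Poisson noise, reduces to a likelihood-ratio (Bayes-criterion) test on the observed event count, since superposed Poisson streams are again Poisson with summed intensities. A minimal sketch follows, assuming known intensities and equal error costs; the function name and numbers are illustrative, not taken from the paper.

```python
from scipy.stats import poisson

def detect_poisson_signal(k, lam_noise, lam_signal, prior_signal=0.5):
    """Bayes-criterion decision for 'signal + noise' vs 'noise only'.

    k          : observed event count in the observation window
    lam_noise  : expected noise count
    lam_signal : expected signal count (added to the noise when present)
    Returns True if 'signal present' has the higher posterior probability.
    """
    # Likelihoods under the two hypotheses: merged Poisson streams are
    # Poisson with summed intensities.
    lik_h1 = poisson.pmf(k, lam_noise + lam_signal)
    lik_h0 = poisson.pmf(k, lam_noise)
    # Posterior odds = likelihood ratio * prior odds.
    posterior_odds = (lik_h1 / lik_h0) * (prior_signal / (1 - prior_signal))
    return posterior_odds > 1.0

print(detect_poisson_signal(k=12, lam_noise=5.0, lam_signal=4.0))  # -> True
```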
APA, Harvard, Vancouver, ISO, and other styles
37

Khrennikova, Polina, and Emmanuel Haven. "Quantum generalized observables framework for psychological data: a case of preference reversals in US elections." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 375, no. 2106 (October 2, 2017): 20160391. http://dx.doi.org/10.1098/rsta.2016.0391.

Full text
Abstract:
Politics is regarded as a vital area of public choice theory, and it relies strongly on the assumptions of voter rationality and, hence, stability of preferences. However, recent opinion polls and real election outcomes in the USA have shown that voters often engage in ‘ticket splitting’, exhibiting contrasting party support in Congressional and Presidential elections (cf. Khrennikova 2014 Phys. Scripta T163, 014010 (doi:10.1088/0031-8949/2014/T163/014010); Khrennikova & Haven 2016 Phil. Trans. R. Soc. A 374, 20150106 (doi:10.1098/rsta.2015.0106); Smith et al. 1999 Am. J. Polit. Sci. 43, 737–764 (doi:10.2307/2991833)). Such preference reversals cannot be captured mathematically via the formula of total probability, showing that voters’ decision making is at variance with the classical probabilistic information-processing framework. In recent work, we have shown that quantum probability describes well the violation of Bayesian rationality in statistical data of voting in US elections, through the so-called interference effects of probability amplitudes. This paper proposes a novel generalized-observables framework for voting behaviour, using the statistical data collected and analysed in previous studies by Khrennikova (Khrennikova 2015 Lect. Notes Comput. Sci. 8951, 196–209) and Khrennikova & Haven (Khrennikova & Haven 2016 Phil. Trans. R. Soc. A 374, 20150106 (doi:10.1098/rsta.2015.0106)). The framework aims to overcome the main problem associated with the quantum probabilistic representation of psychological data, namely the non-double stochasticity of transition probability matrices. We develop a simplified construction of generalized positive operator valued measures by formulating special non-orthonormal bases with respect to these operators. This article is part of the themed issue ‘Second quantum revolution: foundational questions’.
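The ‘interference effects of probability amplitudes’ cited above enter as an extra cosine term in the quantum analogue of the law of total probability. A standard form of this formula from the quantum cognition literature (a generic identity, not this paper's specific POVM construction) is:

```latex
% Quantum analogue of the law of total probability for a dichotomous
% context A = {1,2} followed by a question B with outcome b:
\[
  p(B=b) \;=\; \sum_{a=1}^{2} p(A=a)\, p(B=b \mid A=a)
  \;+\; 2\cos\theta_b \,\sqrt{\prod_{a=1}^{2} p(A=a)\, p(B=b \mid A=a)} .
\]
% The classical (Bayesian) formula is recovered when cos(theta_b) = 0;
% a nonzero interference term is what the voting data exhibit.
```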
APA, Harvard, Vancouver, ISO, and other styles
38

Germano, J. D. "Ecology, statistics, and the art of misdiagnosis: The need for a paradigm shift." Environmental Reviews 7, no. 4 (December 1, 1999): 167–90. http://dx.doi.org/10.1139/a99-014.

Full text
Abstract:
This paper approaches ecological data analysis from a different vantage point and has implications for ecological risk assessment. Despite all the advances in theoretical ecology over the past four decades and the huge amounts of data collected in various marine monitoring programs, we still do not know enough about how marine ecosystems function to make valid predictions of impacts before they occur, accurately assess ecosystem "health," or perform valid risk assessments. Comparisons are made among the fields of psychology, social science, and ecology in terms of how each applies decision theory and approaches problem diagnosis. In all of these disciplines, researchers deal with phenomena whose mechanisms are poorly understood. One of the biggest impediments to the interpretation of ecological data and the advancement of our understanding of ecosystem function is the desire of marine scientists and policy regulators to cling to the ritual of null hypothesis significance testing (NHST), with mechanical dichotomous decisions around a sacred 0.05 criterion. The paper is divided into three main sections: first, a brief overview of common misunderstandings about NHST; second, why diagnosis of ecosystem health is and will remain such a difficult task; and finally, some suggestions of alternative approaches by which ecologists can improve our "diagnostic accuracy" by taking heed of lessons learned in clinical psychology and medical epidemiology. Key words: statistical significance, Bayesian statistics, risk assessment.
APA, Harvard, Vancouver, ISO, and other styles
39

Teng, Jimmy. "A Bayesian Theory of Games: An Analysis of Strategic Interactions with Statistical Decision Theoretic Foundation." SSRN Electronic Journal, 2012. http://dx.doi.org/10.2139/ssrn.2014459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Villar-Hernández, Bartolo de Jesús, Sergio Pérez-Elizalde, Johannes W. R. Martini, Fernando Toledo, P. Perez-Rodriguez, Margaret Krause, Irma Delia García-Calvillo, Giovanny Covarrubias-Pazaran, and José Crossa. "Application of multi-trait Bayesian decision theory for parental genomic selection." G3 Genes|Genomes|Genetics 11, no. 2 (January 20, 2021). http://dx.doi.org/10.1093/g3journal/jkab012.

Full text
Abstract:
In all breeding programs, the decision about which individuals to select and intermate to form the next selection cycle is crucial. The improvement of genetic stocks requires considering multiple traits simultaneously, given that economic value and net genetic merit depend on many traits; therefore, with the advance of computational and statistical tools and genomic selection (GS), researchers are focusing on multi-trait selection. Selecting the best individuals is difficult, especially for antagonistically correlated traits, where improvement in one trait may imply a reduction in others. Several approaches facilitate multi-trait selection, and recently an approach based on Bayesian decision theory (BDT) has been proposed. Parental selection using BDT has the potential to be effective in multi-trait selection, given that it summarizes all relevant quantitative genetic concepts such as heritability, response to selection, and the structure of dependence between traits (correlation). In this study, we applied BDT to the complexity of multi-trait parental selection using three multivariate loss functions (LF), Kullback–Leibler (KL), Energy Score, and Multivariate Asymmetric Loss (MALF), to select the best-performing parents for the next breeding cycle in two extensive real wheat datasets. Results show that lines ranking high in genomic estimated breeding value (GEBV) for certain traits did not always have low values for the posterior expected loss (PEL). For both datasets, the KL LF gave similar importance to all traits, including grain yield. In contrast, the Energy Score and MALF gave better performance in three of the four traits, those different from grain yield. The BDT approach should help breeders base decisions not only on the GEBV per se of the parent to be selected, but also on the level of uncertainty, in accordance with the Bayesian paradigm.
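The core computation in such a BDT scheme is ranking candidate parents by posterior expected loss rather than by point GEBVs. A minimal sketch, assuming MCMC draws of each candidate's multi-trait breeding values are available and using a simple squared-error loss against a target ideotype; the paper's KL, Energy Score and MALF losses would slot into the same place.

```python
import numpy as np

def posterior_expected_loss(samples, target):
    """samples: (n_mcmc, n_traits) posterior draws of one candidate's
    breeding values; target: (n_traits,) ideotype. Returns the Monte
    Carlo estimate of E[L(g, target) | data] under squared-error loss."""
    return np.mean(np.sum((samples - target) ** 2, axis=1))

rng = np.random.default_rng(1)
target = np.array([8.0, 1.0])  # e.g. high yield, low disease score
candidates = {
    "lineA": rng.normal([7.5, 1.2], [0.2, 0.1], size=(2000, 2)),  # precise
    "lineB": rng.normal([7.8, 1.1], [1.5, 0.9], size=(2000, 2)),  # uncertain
}
ranked = sorted(candidates, key=lambda k: posterior_expected_loss(candidates[k], target))
print(ranked)  # lineB's higher point GEBV does not survive its higher uncertainty
```

Note the design point the abstract makes: lineB has the better posterior mean, but its wider posterior inflates the expected loss, so the precise lineA ranks first.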
APA, Harvard, Vancouver, ISO, and other styles
41

Hatzikirou, Haralampos. "Statistical mechanics of cell decision-making: the cell migration force distribution." Journal of the Mechanical Behavior of Materials 27, no. 1-2 (June 1, 2018). http://dx.doi.org/10.1515/jmbm-2018-0001.

Full text
Abstract:
Cell decision-making is the cellular process of responding to microenvironmental cues; it can be regarded as the regulation of a cell's intrinsic variables in response to extrinsic stimuli. Currently, little is known about the principles dictating cell decision-making. Regarding cells as Bayesian decision-makers under energetic constraints, I postulate the least microenvironmental uncertainty principle (LEUP). This is translated into a free-energy principle, and I develop a statistical mechanics theory for cell decision-making. I exhibit the potential of LEUP in the case of cell migration; in particular, I calculate the dependence of the cell locomotion force on the steady-state distribution of adhesion receptors. Finally, the associated migration velocity allows for the reproduction of the anomalous diffusion of cells observed in cell culture experiments.
APA, Harvard, Vancouver, ISO, and other styles
42

Gupta, Maya, Paul Zaharias, and Tandy Warnow. "Accurate large-scale phylogeny-aware alignment using BAli-Phy." Bioinformatics, July 28, 2021. http://dx.doi.org/10.1093/bioinformatics/btab555.

Full text
Abstract:
Motivation: BAli-Phy, a popular Bayesian method that co-estimates multiple sequence alignments and phylogenetic trees, is a rigorous statistical method, but because of its computational requirements it has generally been limited to relatively small datasets (at most about 100 sequences). Here, we repurpose BAli-Phy as a ‘phylogeny-aware’ alignment method: we estimate the phylogeny from the input of unaligned sequences, and then use that as a fixed tree within BAli-Phy. Results: We show that this approach achieves high accuracy, greatly superior to Prank, currently the most popular phylogeny-aware alignment method, and is even more accurate than MAFFT, one of the top-performing alignment methods in common use. Furthermore, this approach can be used to align very large datasets (up to 1000 sequences in this study). Availability and implementation: See https://doi.org/10.13012/B2IDB-7863273_V1 for the datasets used in this study. Supplementary information: Supplementary data are available at Bioinformatics online.
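The repurposing described here is a two-step pipeline: estimate a guide tree from the unaligned sequences, then run BAli-Phy so that the alignment and model parameters are sampled on that topology. A minimal sketch under stated assumptions: the MAFFT and FastTree invocations are standard, BAli-Phy's -t option supplies a starting tree, and the option that holds the topology fixed during MCMC is version-dependent, so it is left as a comment to be checked against the BAli-Phy manual rather than guessed here.

```python
import subprocess

def phylogeny_aware_align(unaligned_fasta):
    # Step 1: quick initial alignment and guide tree (standard usage).
    with open("initial.aln", "w") as out:
        subprocess.run(["mafft", "--auto", unaligned_fasta], stdout=out, check=True)
    with open("guide.tree", "w") as out:
        subprocess.run(["fasttree", "-nt", "initial.aln"], stdout=out, check=True)
    # Step 2: co-estimate alignment and parameters with BAli-Phy,
    # starting from the guide tree. The flag that additionally *fixes*
    # the topology during MCMC varies by version and is intentionally
    # omitted; consult `bali-phy --help` before relying on this sketch.
    subprocess.run(["bali-phy", unaligned_fasta, "-t", "guide.tree"], check=True)

phylogeny_aware_align("seqs.fasta")
```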
APA, Harvard, Vancouver, ISO, and other styles
43

"A Novel Intelligent Technique of Invariant Statistical Embedding and Averaging via Pivotal Quantities for Optimization or Improvement of Statistical Decision Rules under Parametric Uncertainty." WSEAS TRANSACTIONS ON MATHEMATICS 19 (March 3, 2020). http://dx.doi.org/10.37394/23206.2020.19.3.

Full text
Abstract:
In the present paper, for the intelligent construction of efficient (optimal, uniformly non-dominated, unbiased, improved) statistical decisions under parametric uncertainty, a new technique is proposed that invariantly embeds sample statistics in a decision criterion and averages this criterion over the probability distributions of pivotal quantities. The technique represents a simple and computationally attractive statistical method based on the constructive use of the invariance principle in mathematical statistics. Unlike the Bayesian approach, the technique of invariant statistical embedding and averaging via pivotal quantities (ISE&APQ) is independent of the choice of priors and represents a novelty in the theory of statistical decisions. It allows one to eliminate unknown parameters from the problem and to find efficient statistical decision rules, which often have smaller risk than any of the well-known decision rules. The aim of the present paper is to show how the technique of ISE&APQ may be employed in the particular cases of optimization, estimation, or improvement of statistical decisions under parametric uncertainty. To illustrate the proposed technique, examples of constructing exact statistical tolerance limits for the prediction of future outcomes from log-location-scale distributions under parametric uncertainty are given.
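To make the role of pivotal quantities concrete: for a normal sample with unknown mean and variance, the quantity (X_new - x_bar) / (s * sqrt(1 + 1/n)) has a t(n-1) distribution that does not depend on the unknown parameters, which yields an exact upper prediction limit without choosing any prior. A minimal sketch of this classical construction (not the paper's log-location-scale examples):

```python
import numpy as np
from scipy import stats

def upper_prediction_limit(x, conf=0.95):
    """Exact upper limit for one future observation from the same
    normal population, built from the pivotal quantity
    (X_new - mean) / (s * sqrt(1 + 1/n)) ~ t(n - 1)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = stats.t.ppf(conf, df=n - 1)
    return x.mean() + t * x.std(ddof=1) * np.sqrt(1.0 + 1.0 / n)

sample = np.array([9.8, 10.4, 10.1, 9.9, 10.6, 10.2])
print(round(upper_prediction_limit(sample), 3))
```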
APA, Harvard, Vancouver, ISO, and other styles
44

Zhou, Chao, Hongyu Zhao, and Tao Wang. "Transformation and differential abundance analysis of microbiome data incorporating phylogeny." Bioinformatics, July 24, 2021. http://dx.doi.org/10.1093/bioinformatics/btab543.

Full text
Abstract:
Motivation: Microbiome data have proven extremely useful for understanding microbial communities and their impacts in health and disease. Although microbiome analysis methods and standards are evolving rapidly, obtaining meaningful and interpretable results from microbiome studies still requires careful statistical treatment. In particular, many existing and emerging methods for differential abundance (DA) analysis fail to account for the fact that microbiome data are high-dimensional and sparse, compositional, negatively and positively correlated, and phylogenetically structured. To better describe microbiome data and improve the power of DA testing, there is still a great need for the continued development of appropriate statistical methodology. Results: In this article, we propose a model-based approach to microbiome data transformation, and a phylogenetically informed procedure for DA testing based on the transformed data. First, we extend the Dirichlet-tree multinomial (DTM) to a zero-inflated DTM for multivariate modeling of microbial counts, addressing data sparsity as well as the correlation and phylogenetic structure among bacterial taxa. Then, within this framework and using a Bayesian formulation, we introduce a posterior mean transformation to convert raw counts into non-zero relative abundances that sum to one, accounting for the compositional nature of microbiome data. Second, using the transformed data, we propose adaptive analysis of composition of microbiomes (adaANCOM) for DA testing, constructing log-ratios adaptively on the tree for each taxon and thereby greatly reducing the computational complexity of ANCOM in high dimensions. Finally, we present extensive simulation studies, an analysis of HMP data across 18 body sites and 2 visits, and an application to a gut microbiome and malnutrition study, to investigate the performance of the posterior mean transformation and adaANCOM. Comparisons with ANCOM and other DA testing procedures show that adaANCOM controls the false discovery rate well, allows for easy interpretation of the results, and is computationally efficient for high-dimensional problems. Availability and implementation: The developed R package is available at https://github.com/ZRChao/adaANCOM. For replicability purposes, scripts for our simulations and data analysis are available at https://github.com/ZRChao/Papers_supplementary. Supplementary information: Supplementary data are available at Bioinformatics online.
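The posterior mean transformation can be illustrated in its simplest, non-tree form: with a Dirichlet prior on multinomial proportions, the posterior mean maps raw counts to smoothed, strictly positive relative abundances that sum to one. A minimal sketch, assuming a plain Dirichlet-multinomial rather than the paper's zero-inflated Dirichlet-tree model:

```python
import numpy as np

def posterior_mean_abundance(counts, alpha=0.5):
    """Posterior mean of multinomial proportions under a symmetric
    Dirichlet(alpha) prior: (y_j + alpha) / (sum(y) + K * alpha).
    Zero counts map to small positive abundances; the result sums to 1."""
    counts = np.asarray(counts, dtype=float)
    k = counts.size
    return (counts + alpha) / (counts.sum() + k * alpha)

taxa_counts = np.array([30, 0, 5, 0, 65])     # sparse raw counts for 5 taxa
print(posterior_mean_abundance(taxa_counts))  # strictly positive, sums to one
```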
APA, Harvard, Vancouver, ISO, and other styles
45

Liu, Shengjie, Jun Gao, Yuling Zheng, Lei Huang, and Fangrong Yan. "Bayesian Two-Stage Adaptive Design in Bioequivalence." International Journal of Biostatistics 16, no. 1 (July 16, 2019). http://dx.doi.org/10.1515/ijb-2018-0105.

Full text
Abstract:
Bioequivalence (BE) studies are an integral component of the new drug development process and play an important role in the approval and marketing of generic drug products. However, existing design and evaluation methods are basically framed within frequentist theory, and few implement Bayesian ideas. Based on a bioequivalence predictive probability model and a sample re-estimation strategy, we propose a new Bayesian two-stage adaptive design and explore its application in bioequivalence testing. The new design differs from existing two-stage designs (such as Potvin's methods B and C) in the following respects. First, it not only incorporates historical information and expert information, but also combines experimental data flexibly to aid decision-making. Second, its sample re-estimation strategy is based on the ratio of the information at the interim analysis to the total information, which is simpler to calculate than Potvin's method. Simulation results showed that the two-stage design can be combined with various stopping boundary functions, with differing results. Moreover, the proposed method saves sample size compared with Potvin's method while keeping the type I error rate below 0.05 and statistical power at 80% or above.
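The Bayesian quantity driving such a design is the probability that the true ratio of geometric means lies inside the standard 0.80 to 1.25 equivalence window. A minimal sketch of the posterior version, assuming a normal approximation for the log-ratio under a flat prior (the paper's interim predictive machinery is more elaborate than this):

```python
import numpy as np
from scipy import stats

def posterior_prob_bioequivalence(logratio_hat, se, low=0.80, upp=1.25):
    """P(low < true geometric-mean ratio < upp | data) when the
    posterior of the log-ratio is approximately Normal(logratio_hat, se^2)."""
    post = stats.norm(loc=logratio_hat, scale=se)
    return post.cdf(np.log(upp)) - post.cdf(np.log(low))

# Observed log geometric-mean ratio 0.05 with standard error 0.08:
print(round(posterior_prob_bioequivalence(0.05, 0.08), 3))  # ~0.985
```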
APA, Harvard, Vancouver, ISO, and other styles
46

Stanton, Jeffrey M. "Evaluating Equivalence and Confirming the Null in the Organizational Sciences." Organizational Research Methods, May 11, 2020, 109442812092193. http://dx.doi.org/10.1177/1094428120921934.

Full text
Abstract:
Testing and rejecting the null hypothesis is a routine part of quantitative research, but relatively few organizational researchers prepare for confirming the null or, similarly, testing a hypothesis of equivalence (e.g., that two group means are practically identical). Both theory and practice could benefit from greater attention to this capability. Planning ahead for equivalence testing also provides helpful input on assuring sufficient statistical power in a study. This article provides background on these ideas plus guidance on the use of two frequentist and two Bayesian techniques for testing a hypothesis of no nontrivial effect. The guidance highlights some faulty strategies and how to avoid them. An organizationally relevant example illustrates how to put these techniques into practice. A simulation compares the four techniques to support recommendations of when and how to use each one. A nine-step process table describes separate analytical tracks for frequentist and Bayesian equivalence techniques.
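Among frequentist techniques for testing 'no nontrivial effect', the most common is the two one-sided tests (TOST) procedure: equivalence is declared when both one-sided tests against the edges of a pre-specified indifference interval reject. A minimal sketch for two independent groups, assuming equal variances; the equivalence margin below is illustrative and would be set on substantive grounds in practice.

```python
import numpy as np
from scipy import stats

def tost_independent(x1, x2, margin):
    """TOST for mean equivalence: the null is |mu1 - mu2| >= margin.
    Returns the TOST p-value (the larger of the two one-sided p-values)."""
    n1, n2 = len(x1), len(x2)
    diff = np.mean(x1) - np.mean(x2)
    # Pooled variance and standard error of the mean difference.
    sp2 = ((n1 - 1) * np.var(x1, ddof=1) + (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    p_lower = stats.t.sf((diff + margin) / se, df)   # tests diff > -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)  # tests diff < +margin
    return max(p_lower, p_upper)

rng = np.random.default_rng(7)
g1, g2 = rng.normal(50, 5, 120), rng.normal(50.3, 5, 120)
print(round(tost_independent(g1, g2, margin=2.0), 4))  # small p -> equivalent
```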
APA, Harvard, Vancouver, ISO, and other styles
47

Majoros, William H., Young-Sook Kim, Alejandro Barrera, Fan Li, Xingyan Wang, Sarah J. Cunningham, Graham D. Johnson, et al. "Bayesian estimation of genetic regulatory effects in high-throughput reporter assays." Bioinformatics, August 1, 2019. http://dx.doi.org/10.1093/bioinformatics/btz545.

Full text
Abstract:
Motivation: High-throughput reporter assays dramatically improve our ability to assign function to noncoding genetic variants by measuring allelic effects on gene expression in the controlled setting of a reporter gene. Unlike genetic association tests, such assays are not confounded by linkage disequilibrium when loci are independently assayed. These methods can thus improve the identification of causal disease mutations. While work continues on improving experimental aspects of these assays, less effort has gone into developing methods for assessing the statistical significance of assay results, particularly in the case of rare variants captured from patient DNA. Results: We describe a Bayesian hierarchical model, called Bayesian Inference of Regulatory Differences, which integrates prior information and explicitly accounts for variability between experimental replicates. The model produces substantially more accurate predictions than existing methods when allele frequencies are low, which is of clear advantage in the search for disease-causing variants in DNA captured from patient cohorts. Using the model, we demonstrate a clear tradeoff between variant sequencing coverage and the number of biological replicates, and we show that the use of additional biological replicates decreases the variance of effect-size estimates, due to the properties of the Poisson-binomial distribution. We also provide a power and sample size calculator, which facilitates decision making in the choice of experimental design parameters. Availability and implementation: The software is freely available from www.geneprediction.org/bird. The experimental design web tool can be accessed at http://67.159.92.22:8080. Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
48

Pinotsis, Dimitris A., and Earl K. Miller. "Differences in visually induced MEG oscillations reflect differences in deep cortical layer activity." Communications Biology 3, no. 1 (November 25, 2020). http://dx.doi.org/10.1038/s42003-020-01438-7.

Full text
Abstract:
Neural activity is organized at multiple scales, ranging from the cellular to the whole-brain level. Connecting neural dynamics at different scales is important for understanding brain pathology. Neurological diseases and disorders arise from interactions between factors that are expressed in multiple scales. Here, we suggest a new way to link microscopic and macroscopic dynamics through combinations of computational models. This exploits results from statistical decision theory and Bayesian inference. To validate our approach, we used two independent MEG datasets. In both, we found that variability in visually induced oscillations recorded from different people in simple visual perception tasks resulted from differences in the level of inhibition specific to deep cortical layers. This suggests differences in feedback to sensory areas and each subject's hypotheses about sensations due to differences in their prior experience. Our approach provides a new link between non-invasive brain imaging data, laminar dynamics and top-down control.
APA, Harvard, Vancouver, ISO, and other styles
49

Gómez-Déniz, Emilio, José Boza-Chirino, and Nancy Dávila-Cárdenes. "Tourist tax to promote rentals of low-emission vehicles." Tourism Economics, August 6, 2020, 135481662094650. http://dx.doi.org/10.1177/1354816620946508.

Full text
Abstract:
In the Canary Islands (Spain), the tourism boom has been paralleled by sharp growth in the car rental sector. However, this economic activity is associated with problems such as rising levels of vehicle emissions. In this article, we discuss, on the one hand, the introduction of a tax to internalise the costs of emissions from car rental fleets and, on the other, measures to reward users who rent environmentally friendly cars. For this purpose, we propose a model based on statistical decision theory, from which a Bayesian rule is derived. Under this model, the tax increases with the number of days the car is rented but decreases in line with the environmental efficiency of the vehicle. A data sample of visitors to the Canary Islands is used to compare the covariates involved in computing the number of car rental days and the corresponding tax payable.
APA, Harvard, Vancouver, ISO, and other styles