Academic literature on the topic 'Predicted probabilities'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Predicted probabilities.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Predicted probabilities"

1

Fang, Di, Jenny Chong, and Jeffrey R. Wilson. "Predicted Probabilities' Relationship to Inclusion Probabilities." American Journal of Public Health 105, no. 5 (May 2015): 837–39. http://dx.doi.org/10.2105/ajph.2015.302592.

2

Long, J. Scott, and Jeremy Freese. "Predicted Probabilities for Count Models." Stata Journal: Promoting communications on statistics and Stata 1, no. 1 (November 2001): 51–57. http://dx.doi.org/10.1177/1536867x0100100103.

3

Mitchell, Michael N., and Xiao Chen. "Visualizing Main Effects and Interactions for Binary Logit Models." Stata Journal: Promoting communications on statistics and Stata 5, no. 1 (February 2005): 64–82. http://dx.doi.org/10.1177/1536867x0500500111.

Abstract:
This paper considers the role of covariates when using predicted probabilities to interpret main effects and interactions in logit models. While predicted probabilities are very intuitive for interpreting main effects and interactions, the pattern of results depends on the contribution of covariates. We introduce a concept called the covariate contribution, which reflects the aggregate contribution of all of the remaining predictors (covariates) in the model and a family of tools to help visualize the relationship between predictors and the predicted probabilities across a variety of covariate contributions. We believe this strategy and the accompanying tools can help researchers who wish to use predicted probabilities as an interpretive framework for logit models acquire and present a more comprehensive interpretation of their results. These visualization tools could be extended to other models (such as binary probit, multinomial logistic, ordinal logistic models, and other nonlinear models).
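The interpretive strategy described above is easy to illustrate: for a fitted logit model, compute the predicted probability across values of a focal predictor while holding the combined contribution of the remaining covariates at a few representative levels. The coefficients and covariate levels in this sketch are assumptions for illustration, not values from the article.

```python
import numpy as np

# Assumed logit coefficients: intercept, focal predictor x, plus a combined
# covariate contribution c = sum(b_k * z_k) over the remaining predictors.
b0, b1 = -1.0, 0.8

x = np.linspace(-3, 3, 121)                       # grid for the focal predictor
covariate_contributions = {"low": -2.0, "median": 0.0, "high": 2.0}

for label, c in covariate_contributions.items():
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x + c)))  # predicted probabilities
    print(f"covariate contribution {label:>6}: "
          f"P(y=1) ranges from {p.min():.3f} to {p.max():.3f}")
```

Plotting the three curves against x gives exactly the kind of visualization the authors advocate: the same main effect looks very different depending on where the covariate contribution places the curve on the logistic scale.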
4

Newell, P. T., K. Liou, J. W. Gjerloev, T. Sotirelis, S. Wing, and E. J. Mitchell. "Substorm probabilities are best predicted from solar wind speed." Journal of Atmospheric and Solar-Terrestrial Physics 146 (August 2016): 28–37. http://dx.doi.org/10.1016/j.jastp.2016.04.019.

5

Herron, Michael C. "Postestimation Uncertainty in Limited Dependent Variable Models." Political Analysis 8, no. 1 (1999): 83–98. http://dx.doi.org/10.1093/oxfordjournals.pan.a029806.

Abstract:
Many political science research articles that use limited dependent variable models report estimated quantities, in particular, fitted probabilities, predicted probabilities, and functions of such probabilities, without indicating that such estimates are subject to uncertainty. This practice, along with the reporting of “percentage correctly predicted,” can overstate the precision of reported results. In light of this, the present article describes a variety of measures of uncertainty that authors can include alongside estimates generated by limited dependent variable models. It also proposes an alternative to “percentage correctly predicted” and illustrates its calculations with congressional cosponsorship data from Krehbiel (1995).
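One common way to attach the kind of uncertainty Herron describes to a predicted probability is to simulate coefficient vectors from their estimated sampling distribution and recompute the probability for each draw. Everything numeric below (coefficients, covariance matrix, covariate profile) is an illustrative assumption, not output from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

beta_hat = np.array([-0.5, 1.2])        # assumed logit estimates (intercept, slope)
vcov = np.array([[0.04, -0.01],
                 [-0.01, 0.09]])        # assumed estimated covariance matrix
x_profile = np.array([1.0, 0.5])        # covariate profile of interest (1 = intercept)

# Draw coefficients from the asymptotic normal distribution and push each
# draw through the inverse logit to get a distribution of predicted probabilities.
draws = rng.multivariate_normal(beta_hat, vcov, size=10_000)
probs = 1.0 / (1.0 + np.exp(-draws @ x_profile))

lo, hi = np.percentile(probs, [2.5, 97.5])
print(f"predicted probability: {probs.mean():.3f} (95% interval {lo:.3f}-{hi:.3f})")
```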
6

Whitaker, T. B., J. W. Dickens, and V. Chew. "Development of Statistical Models to Simulate the Testing of Farmers Stock Peanuts for Aflatoxin Using Visual, Thin Layer Chromatography, and Minicolumn Methods." Peanut Science 12, no. 2 (July 1, 1985): 94–98. http://dx.doi.org/10.3146/pnut.12.2.0012.

Abstract:
The negative binomial probability function was used to model the distribution of sample aflatoxin test results when replicated grade samples from farmers stock peanuts are analyzed by thin layer chromatography and minicolumn methods. The Poisson probability function was used to model the distribution of the number of kernels with visible Aspergillus flavus growth found in replicated grade samples of farmers stock peanuts when the visible A. flavus method is used. The probabilities of accepting a lot of farmers stock peanuts with given aflatoxin concentrations when using a 465-g grade sample and 2 different accept/reject levels were predicted with the models and compared to observed acceptance probabilities computed from previously published data for each of the 3 methods. The comparisons showed good agreement between the predicted acceptance probabilities and the observed acceptance probabilities.
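The operating-characteristic calculation sketched in the abstract — the probability that a sample test result falls at or below an accept/reject level under an assumed count model — can be illustrated with a Monte Carlo draw from a negative binomial distribution. The parameterization and all numbers below are assumptions for illustration only, not the fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed negative binomial model of sample aflatoxin test results (ppb),
# parameterized by a lot mean and a shape parameter; numpy wants (n, p).
mean, shape_n = 40.0, 2
p = shape_n / (shape_n + mean)

accept_level = 25.0                        # illustrative accept/reject level (ppb)
samples = rng.negative_binomial(shape_n, p, size=100_000)

accept_prob = np.mean(samples <= accept_level)
print(f"predicted probability of accepting the lot: {accept_prob:.3f}")
```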
7

Freese, Jeremy. "Least Likely Observations in Regression Models for Categorical Outcomes." Stata Journal: Promoting communications on statistics and Stata 2, no. 3 (September 2002): 296–300. http://dx.doi.org/10.1177/1536867x0200200306.

Abstract:
This article presents a method and program for identifying poorly fitting observations for maximum-likelihood regression models for categorical dependent variables. After estimating a model, the program leastlikely will list the observations that have the lowest predicted probabilities of observing the value of the outcome category that was actually observed. For example, when run after estimating a binary logistic regression model, leastlikely will list the observations with a positive outcome that had the lowest predicted probabilities of a positive outcome and the observations with a negative outcome that had the lowest predicted probabilities of a negative outcome. These can be considered the observations in which the outcome is most surprising given the values of the independent variables and the parameter estimates and, like observations with large residuals in ordinary least squares regression, may warrant individual inspection. Use of the program is illustrated with examples using binary and ordered logistic regression.
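The idea behind leastlikely carries over directly outside Stata: after fitting a binary model, compute each observation's predicted probability of the outcome it actually had and list the smallest values. A minimal sketch on synthetic data (all variable names and data are assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))                      # synthetic predictors
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ [1.0, -0.5, 0.3]))))

model = LogisticRegression().fit(X, y)
p1 = model.predict_proba(X)[:, 1]                  # P(y = 1 | x)
p_observed = np.where(y == 1, p1, 1 - p1)          # probability of the observed outcome

least_likely = np.argsort(p_observed)[:10]         # ten most "surprising" observations
for i in least_likely:
    print(f"obs {i:3d}: y={y[i]}, P(observed outcome)={p_observed[i]:.3f}")
```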
8

Engels, Eric A., Gregory Haber, Allyson Hart, Charles F. Lynch, Jie Li, Karen S. Pawlish, Baozhen Qiao, Kelly J. Yu, and Ruth M. Pfeiffer. "Predicted Cure and Survival Among Transplant Recipients With a Previous Cancer Diagnosis." Journal of Clinical Oncology 39, no. 36 (December 20, 2021): 4039–48. http://dx.doi.org/10.1200/jco.21.01195.

Abstract:
PURPOSE A previous cancer diagnosis is a negative consideration in evaluating patients for possible solid organ transplantation. Statistical models may improve selection of patients with cancer evaluated for transplantation. METHODS We fitted statistical cure models for patients with cancer in the US general population using data from 13 cancer registries. Patients subsequently undergoing solid organ transplantation were identified through the Scientific Registry of Transplant Recipients. We estimated cure probabilities at diagnosis (for all patients with cancer) and transplantation (transplanted patients). We used Cox regression to assess associations of cure probability at transplantation with subsequent cancer-specific mortality. RESULTS Among 10,524,326 patients with 17 cancer types in the general population, the median cure probability at diagnosis was 62%. Of these patients, 5,425 (0.05%) subsequently underwent solid organ transplantation and their median cure probability at transplantation was 94% (interquartile range, 86%-98%). Compared with the tertile of transplanted patients with highest cure probability, those in the lowest tertile more frequently had lung or breast cancers and less frequently colorectal, testicular, or thyroid cancers; more frequently had advanced-stage cancer; were older (median 57 v 51 years); and were transplanted sooner after cancer diagnosis (median 3.6 v 8.6 years). Patients in the low-cure probability tertile had increased cancer-specific mortality after transplantation (adjusted hazard ratio, 2.08; 95% CI, 1.48 to 2.93; v the high tertile), whereas those in the middle tertile did not differ. CONCLUSION Patients with cancer who underwent solid organ transplantation exhibited high cure probabilities, reflecting selection on the basis of existing guidelines and clinical judgment. Nonetheless, there was a range of cure probabilities among transplanted patients and low probability predicted increased cancer-specific mortality after transplantation. Cure probabilities may facilitate guideline development and evaluating individual patients for transplantation.
9

Ackermann, John F., and Michael S. Landy. "Suboptimal decision criteria are predicted by subjectively weighted probabilities and rewards." Attention, Perception, & Psychophysics 77, no. 2 (November 4, 2014): 638–58. http://dx.doi.org/10.3758/s13414-014-0779-z.

10

Holzer, Thomas L., J. Luke Blair, Thomas E. Noce, and Michael J. Bennett. "Predicted Liquefaction of East Bay Fills during a Repeat of the 1906 San Francisco Earthquake." Earthquake Spectra 22, no. 2_suppl (April 2006): 261–77. http://dx.doi.org/10.1193/1.2188018.

Abstract:
Predicted conditional probabilities of surface manifestations of liquefaction during a repeat of the 1906 San Francisco (M7.8) earthquake range from 0.54 to 0.79 in the area underlain by the sandy artificial fills along the eastern shore of San Francisco Bay near Oakland, California. Despite widespread liquefaction in 1906 of sandy fills in San Francisco, most of the East Bay fills were emplaced after 1906 without soil improvement to increase their liquefaction resistance. They have yet to be shaken strongly. Probabilities are based on the liquefaction potential index computed from 82 CPT soundings using median (50 th percentile) estimates of PGA based on a ground-motion prediction equation. Shaking estimates consider both distance from the San Andreas Fault and local site conditions. The high probabilities indicate extensive and damaging liquefaction will occur in East Bay fills during the next M∼7.8 earthquake on the northern San Andreas Fault.

Dissertations / Theses on the topic "Predicted probabilities"

1

Rebguns, Antons. "Using scouts to predict swarm success rate." Laramie, Wyo. : University of Wyoming, 2008. http://proquest.umi.com/pqdweb?did=1798481081&sid=3&Fmt=2&clientId=18949&RQT=309&VName=PQD.

2

Alkhalaf, Arwa A. "The impact of predictor variable(s) with skewed cell probabilities on the Wald test in binary logistic regression." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/61232.

Abstract:
What happens to the parameter estimates and test operating characteristics when the predictor variables in a logistic regression are skewed? The statistics literature provides relatively few answers to this question. A series of simulation studies is reported that investigated the impact of skewed predictor(s) on the Type I error rate and power of the Wald test in a logistic regression model. Five simulations were conducted for three different models: a simple logistic regression with a binary predictor, a simple logistic regression with a continuous predictor, and a multiple logistic regression with two dichotomous predictors. The results show that the Type I error rate and power were affected by severe predictor skewness, but that the effect was moderated by sample size. The Type I error rate was consistently deflated for all three models. Also, power improved with less skewness. A detailed description of the impact of skewed cell predictor probabilities and sample size provides guidelines for practitioners as to where to expect the greatest problems. These findings highlight the importance of the effects of predictor characteristics on statistical analysis of a logistic regression.
Faculty of Education; Department of Educational and Counselling Psychology, and Special Education (ECPS); Graduate.
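The simulation design described in the abstract can be sketched roughly as follows: generate a rare (skewed) binary predictor, keep its true coefficient at zero, fit a logistic regression, and record how often the Wald test rejects. The sample size, skew level, and use of statsmodels below are illustrative assumptions, not the thesis's actual settings.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, p_x1, n_reps, alpha = 500, 0.05, 500, 0.05   # skewed cell probability P(x=1) = 0.05

rejections = 0
valid = 0
for _ in range(n_reps):
    x = rng.binomial(1, p_x1, size=n)
    y = rng.binomial(1, 0.5, size=n)            # null model: x has no effect on y
    try:
        res = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    except Exception:                            # e.g. separation in an unlucky draw
        continue
    valid += 1
    rejections += res.pvalues[1] < alpha         # Wald p-value for the x coefficient

print(f"empirical Type I error rate: {rejections / valid:.3f} over {valid} valid fits")
```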
3

Stevenson, Clint W. "A Logistic Regression Analysis of Utah Colleges Exit Poll Response Rates Using SAS Software." BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/1116.

Abstract:
In this study I examine voter response at an interview level using a dataset of 7562 voter contacts (including responses and nonresponses) in the 2004 Utah Colleges Exit Poll. In 2004, 4908 of the 7562 voters approached responded to the exit poll for an overall response rate of 65 percent. Logistic regression is used to estimate factors that contribute to a success or failure of each interview attempt. This logistic regression model uses interviewer characteristics, voter characteristics (both respondents and nonrespondents), and exogenous factors as independent variables. Voter characteristics such as race, gender, and age are strongly associated with response. An interviewer's prior retail sales experience is associated with whether a voter will decide to respond to a questionnaire or not. The only exogenous factor that is associated with voter response is whether the interview occurred in the morning or afternoon.
4

Neves, Tiago Filipe Mendes. "A data mining approach to predict probabilities of football matches." Master's thesis, 2019. https://hdl.handle.net/10216/121217.

Abstract:
With the increasing growth of the amount of money invested in sports betting markets, it is important to verify how far machine learning techniques can bring value to this area. The performance of state-of-the-art algorithms is evaluated according to several metrics, within the CRISP-DM methodology, which runs from data acquisition via web scraping through feature generation and selection. The universe of ensemble techniques is also explored in an attempt to improve the models from the bias-variance trade-off point of view, with a special focus on neural network ensembles.
5

Chung, Chuang Yi, and 莊宜娟. "Using Rough Sets Theory to Predict The Probabilities Of Credit Clients Going Bankrupt." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/33641395728935631713.

Abstract:
Master's thesis. Southern Taiwan University of Science and Technology, Department of International Business, ROC year 94.
The purpose of this study is to predict the probabilities of credit clients going bankrupt by using clients' financial and other relevant information, with the aim of decreasing credit risk and increasing profits and performance. Most previous research adopted statistical prediction models for credit risk, including factor analysis, regression, and discriminant analysis. This study applies rough set theory as the research method. It can systematically narrow down the information needed and derive decision rules, and it can build a prediction model without the distributional assumptions of general statistical analysis. We compare this prediction model with discriminant analysis and logit regression. The empirical results show that rough set theory predicts credit risk better than discriminant analysis and logit regression: applying it, 94% of defaults can be detected one year before clients break their contracts, and 86% two years before. Rough set theory therefore does not suffer from the problems of traditional statistical models when only limited information is available. In conclusion, this method is appropriate for predicting credit risk.
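A minimal illustration of the rough-set machinery the thesis relies on — indiscernibility classes and the lower and upper approximations of the "default" class — on a toy decision table. The objects and attributes are invented for the example.

```python
from collections import defaultdict

# Toy decision table: object -> (condition attributes, decision)
table = {
    "c1": (("high_debt", "late_payments"), "default"),
    "c2": (("high_debt", "late_payments"), "no_default"),
    "c3": (("low_debt", "on_time"), "no_default"),
    "c4": (("high_debt", "on_time"), "default"),
    "c5": (("low_debt", "on_time"), "no_default"),
}

# Indiscernibility classes: objects with identical condition attributes.
classes = defaultdict(set)
for obj, (attrs, _) in table.items():
    classes[attrs].add(obj)

target = {obj for obj, (_, d) in table.items() if d == "default"}

lower = set().union(*(c for c in classes.values() if c <= target))  # certainly default
upper = set().union(*(c for c in classes.values() if c & target))   # possibly default

print("lower approximation:", sorted(lower))
print("upper approximation:", sorted(upper))
print("boundary region:", sorted(upper - lower))
```

Decision rules are then read off the lower approximation (certain rules) and the boundary region (possible rules), which is the step the thesis uses to flag risky clients.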
6

Law, Helen. "Gender and mathematics: pathways to mathematically intensive fields of study in Australia." Phd thesis, 2017. http://hdl.handle.net/1885/125139.

Abstract:
Women in Australia have gone from being under-represented to being over-represented in university education, but they are still far less likely than men to engage in mathematically intensive science fields including engineering, information technology and the physical sciences. With a rapid growth of employment opportunities in these fields, women need quantitative skills to become competitive in technologically and science-oriented niches of the labour market. The persisting gender gap in mathematically intensive fields is important also because it may reinforce the stereotypical belief that males are naturally more talented in mathematics, abstract thinking and technical problem solving. The prevalence of such a belief drives adolescents to aspire to gender-typical occupations and thus reproduces gender inequality. Given this, there is an urgent need to systematically examine the extent to which socialisation influences and educational experiences in adolescence affect the participation in advanced high school mathematics and mathematically intensive university qualifications. The key question to consider is why engagement in advanced mathematics and cognate disciplines remains so strongly segregated by gender. This thesis offers a comprehensive examination of this issue in Australia by drawing on the theories of gender stratification and educational psychology. The scope of this examination is broader than any other Australian study of this issue to date. I adopt a life course perspective to study the impact of teenage educational experiences and occupational expectations on the gender differences in later pursuits of advanced mathematics subjects in Year 12 and mathematically intensive fields at university. To achieve this, I use multilevel logistic regression models to analyse the data from the 2003 cohort of the Longitudinal Survey of Australian Youth. The data comprise a nationally representative sample of adolescents who turned 15 around 2003 and entered the labour market in the following decade. Occupational expectations are crucial in explaining why boys are considerably more likely than girls to enrol in advanced mathematics subjects in Year 12. These expectations, however, are less influential than the combined effect of self-assessed mathematical competence of students and their achievement in mathematics. The gender gap in Year 12 advanced mathematics enrolment would disappear completely should we succeed in generating the same levels of self-assessed mathematical competence and in fostering similar levels of early achievement in mathematics across both genders. To achieve gender parity in the choice of a mathematically intensive university major, we would also have to persuade teenagers of both genders to aspire to similar careers and have similar confidence in their mathematical abilities. Apart from individual micro-social characteristics of students, single-sex schooling enhances the participation of girls in advanced high school mathematics and related fields of study at university. The advantage of all-girls education is evident in these analyses even after considering the pre-existing differences between single-sex and coeducational schools in school resources, teacher quality and the policy of selectivity in student admissions. These results suggest that all-girls secondary education provides an environment that somewhat counters gender stereotypes and fosters mathematically intensive studies, not only in high school but also at university.

Books on the topic "Predicted probabilities"

1

Palmeri, Thomas J., Jeffrey D. Schall, and Gordon D. Logan. Neurocognitive Modeling of Perceptual Decision Making. Edited by Jerome R. Busemeyer, Zheng Wang, James T. Townsend, and Ami Eidels. Oxford University Press, 2015. http://dx.doi.org/10.1093/oxfordhb/9780199957996.013.15.

Abstract:
Mathematical psychology and systems neuroscience have converged on stochastic accumulator models to explain decision making. We examined saccade decisions in monkeys while neurophysiological recordings were made within their frontal eye field. Accumulator models were tested on how well they fit response probabilities and distributions of response times to make saccades. We connected these models with neurophysiology. To test the hypothesis that visually responsive neurons represented perceptual evidence driving accumulation, we replaced perceptual processing time and drift rate parameters with recorded neurophysiology from those neurons. To test the hypothesis that movement related neurons instantiated the accumulator, we compared measures of neural dynamics with predicted measures of accumulator dynamics. Thus, neurophysiology both provides a constraint on model assumptions and data for model selection. We highlight a gated accumulator model that accounts for saccade behavior during visual search, predicts neurophysiology during search, and provides insights into the locus of cognitive control over decisions.
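The model class discussed in the chapter can be illustrated with a bare-bones two-accumulator race: evidence for each response accumulates noisily until one accumulator crosses a threshold, which jointly yields response probabilities and response-time distributions. All parameter values below are arbitrary assumptions, not the chapter's fitted values.

```python
import numpy as np

rng = np.random.default_rng(4)

drift = np.array([0.30, 0.20])    # mean evidence per step for the two responses
noise, threshold, dt = 0.50, 10.0, 0.001
n_trials, max_steps = 5_000, 5_000

choices, rts = [], []
for _ in range(n_trials):
    acc = np.zeros(2)
    for step in range(1, max_steps + 1):
        acc += drift + noise * rng.normal(size=2)   # noisy accumulation
        if acc.max() >= threshold:                  # first accumulator to cross wins
            choices.append(int(acc.argmax()))
            rts.append(step * dt)
            break

choices, rts = np.array(choices), np.array(rts)
for r in (0, 1):
    sel = choices == r
    print(f"response {r}: probability {sel.mean():.3f}, mean RT {rts[sel].mean():.3f} s")
```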
2

Fudin, Jeffrey, Jacqueline Cleary, Courtney Kominek, Abigail Brooks, and Thien C. Pham. Screening Patients for Opioid Risk (DRAFT). Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780190265366.003.0010.

Abstract:
The narrow therapeutic index associated with most analgesic opioids creates a high degree of risk, obliging caution in patient selection; this chapter describes screening practices. More attention is due to the use of opioid therapy for chronic non-cancer pain as episodes of respiratory arrest increase. Universal precautions are recommended for all patients. Before and throughout treatment, selected tools assessing risk and misuse should be employed. Increased access to the opioid antagonist naloxone has followed the increase in opioid poisoning deaths; the Risk Index for Overdose or Serious Opioid-induced Respiratory Depression (RIOSORD) helps determine the likelihood of an opioid overdose, serving to select candidates for at-home naloxone access. Urine drug testing aids treatment plan adherence. Tables include: a comparison of screening tools for risk stratification; a comparison of tools for assessment of opioid misuse; the scoring procedure for the RIOSORD; and a list of risk classes with predicted probabilities.
3

Wolf, E. L. Fusion in the Sun. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198769804.003.0004.

Abstract:
Protons in the Sun’s core are a dense plasma allowing fusion events where two protons initially join to produce a deuteron. Eventually this leads to alpha particles, the mass-four nucleus of helium, releasing kinetic energy. Schrödinger’s equation allows particles to penetrate classically forbidden Coulomb barriers with small but important probabilities. The approximation known as Wentzel–Kramers–Brillouin (WKB) is used by Gamow to predict the rate of proton–proton fusion in the Sun, shown to be in agreement with measurements. A simplified formula is given for the power density due to fusion in the plasma constituting the Sun’s core. The properties of atomic nuclei are briefly summarized.
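The barrier-penetration probability attributed to Gamow can be reproduced with the standard WKB result for energies far below the Coulomb barrier, P ≈ exp(−2πη), where η = Z₁Z₂αc/v is the Sommerfeld parameter. The sketch below evaluates it for two protons at a thermal energy typical of the solar core; it is an order-of-magnitude illustration under assumed conditions, not the book's own calculation.

```python
import math

alpha = 1 / 137.036          # fine-structure constant
c = 2.998e8                  # speed of light, m/s
k_B = 1.381e-23              # Boltzmann constant, J/K
m_p = 1.673e-27              # proton mass, kg

T = 1.5e7                    # approximate solar-core temperature, K (assumed)
E = k_B * T                  # characteristic thermal energy, J (~1.3 keV)
mu = m_p / 2                 # reduced mass of the proton-proton pair
v = math.sqrt(2 * E / mu)    # relative speed at that energy

eta = alpha * c / v          # Sommerfeld parameter for Z1 = Z2 = 1
P = math.exp(-2 * math.pi * eta)
print(f"Gamow tunnelling probability ~ {P:.1e}")   # of order 1e-9
```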
4

Sherman, Mila Getmansky, and Rachel (Kyungyeon) Koh. The Life Cycle of Hedge Funds. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190607371.003.0003.

Abstract:
This chapter analyzes the life cycle of hedge funds. Analysis using the Thomson Reuters Lipper TASS database reveals industry-related and fund-specific factors affecting the survival probabilities of hedge funds. Analysis of hedge fund flows and asset sizes can offer insights into a fund’s future survival. Fund performance is a nonlinear function of a fund’s asset size. A fund can obtain an optimal asset size by balancing the effects of past returns, fund flows, market impact, and competition. Competition among hedge funds using similar strategies presents challenges. To survive, funds employ dynamic strategies, move nimbly from market to market, and develop unique strengths. Being an effective market and strategy timer is critical because funds using the right strategy at the right time are more likely to survive. The chapter also analyzes the last stage of the hedge fund life cycle—liquidation or closure. Fund characteristics, risk measures, and style-related factors can help predict fund liquidation.
5

Khoo, Justin. The Meaning of If. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780190096700.001.0001.

Abstract:
Conditional sentences remain a puzzling source of philosophical speculation in large part because there seems to be nothing they could possibly mean that would vindicate the roles they play in language and thought. Bringing together work from philosophy and linguistics, Justin Khoo articulates a theory of what conditionals mean that captures their varied and complex behavior. According to the theory, conditionals form a unified class of expressions that share a common semantic core that encodes inferential dispositions. Thus, rather than represent the world, conditionals are devices used to communicate how we are disposed to infer. Khoo shows that this core theory can be extended to predict the correct probabilities of conditionals, as well as the semantic and pragmatic differences between different kinds of conditionals. The resulting theory has broad implications beyond debates about the meaning of conditionals, including upshots about the nature of metaphysical and epistemic possibility, the cognitive roles of non-factual contents, and the relationship between counterfactuals and causation.
6

Rosenzweig, Cynthia, and Daniel Hillel. Climate Variability and the Global Harvest. Oxford University Press, 2008. http://dx.doi.org/10.1093/oso/9780195137637.001.0001.

Abstract:
The Earth's climate is constantly changing. Some of the changes are progressive, while others fluctuate at various time scales. The El Niño-La Niña cycle is one such fluctuation that recurs every few years and has far-reaching impacts. It generally appears at least once per decade, but this may vary with our changing climate. The exact frequency, sequence, duration and intensity of El Niño's manifestations, as well as its effects and geographic distributions, are highly variable. The El Niño-La Niña cycle is particularly challenging to study due to its many interlinked phenomena that occur in various locations around the globe. These worldwide teleconnections are precisely what makes studying El Niño-La Niña so important. Cynthia Rosenzweig and Daniel Hillel describe the current efforts to develop and apply a global-to-regional approach to climate-risk management. They explain how atmospheric and social scientists are cooperating with agricultural practitioners in various regions around the world to determine how farmers may benefit most from new climate predictions. Specifically, the emerging ability to predict the El Niño-Southern Oscillation (ENSO) cycle offers the potential to transform agricultural planning worldwide. Biophysical scientists are only now beginning to recognize the large-scale, globally distributed impacts of ENSO on the probabilities of seasonal precipitation and temperature regimes. Meanwhile, social scientists have been researching how to disseminate forecasts more effectively within rural communities. Consequently, as the quality of climatic predictions has improved, the dissemination and presentation of forecasts have become more effective as well. This book explores the growing understanding of the interconnectedness of climate predictions and productive agriculture for sustainable development, as well as methods and models used to study this relationship.

Book chapters on the topic "Predicted probabilities"

1

Carpita, Maurizio, and Silvia Golia. "Prediction of wine sensorial quality: a classification problem." In Proceedings e report, 235–38. Florence: Firenze University Press, 2021. http://dx.doi.org/10.36253/978-88-5518-461-8.44.

Abstract:
When dealing with a wine, it is of interest to be able to predict its quality based on chemical and/or sensory variables. There is no agreement on what wine quality means or how it should be assessed, and it is often viewed in intrinsic (physicochemical, sensory) or extrinsic (price, prestige, context) terms (Jackson, 2017). In this paper, wine quality was evaluated by experienced judges who scored the wine on a 0-10 scale, with 0 meaning very bad and 10 excellent, so the resulting variable was categorical. The models applied to predict this variable provide predicted occurrence probabilities for each of its categories. Nevertheless, alongside this record of probabilities, practitioners need the predicted value (category) of the variable, so the statistical problem addressed here concerns the way in which the record of probabilities is transformed into a single value. In this paper we compare the predictive performance of the default method (Bayes Classifier - BC), which assigns a unit to the most likely category, with two other methods (Maximum Difference Classifier and Maximum Ratio Classifier). The BC is the optimal criterion if one is interested in classification accuracy, but, because it most favors the prevalent category, it may not be the best choice when there is no single category of interest. The data under study concern the quality of the red variant of the Portuguese "Vinho Verde" wine (Cortez et al., 2009), measured on a 0-10 scale. In practice only 6 scores were observed, 2 of them with very few observations, so this is the right context for predictive performance comparisons. In the study, we investigated different ways of merging categories and used 11 explanatory variables to estimate the record of probabilities for the wine quality variable.
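One plausible reading of the three rules compared in the chapter is that the Bayes Classifier picks the category with the highest predicted probability, while the Maximum Difference and Maximum Ratio Classifiers pick the category whose predicted probability most exceeds its marginal (prior) frequency, by difference or by ratio respectively. The sketch below encodes that reading on a made-up probability record; the authors' exact definitions may differ.

```python
import numpy as np

priors = np.array([0.05, 0.40, 0.40, 0.15])      # assumed category frequencies
p = np.array([0.10, 0.45, 0.35, 0.10])           # predicted probabilities for one wine

bayes = int(np.argmax(p))                         # most likely category
max_diff = int(np.argmax(p - priors))             # largest absolute gain over the prior
max_ratio = int(np.argmax(p / priors))            # largest relative gain over the prior

print(f"Bayes: {bayes}, Max-Difference: {max_diff}, Max-Ratio: {max_ratio}")
# Output: Bayes: 1, Max-Difference: 0, Max-Ratio: 0  -> the rules can disagree
```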
2

Bacharoudis, Konstantinos, Atanas Popov, and Svetan Ratchev. "Application of Advanced Simulation Methods for the Tolerance Analysis of Mechanical Assemblies." In IFIP Advances in Information and Communication Technology, 153–67. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72632-4_11.

Abstract:
In the frame of a statistical tolerance analysis of complex assemblies, for example an aircraft wing, the capability to predict specified, very small quantiles of the distribution of the assembly key characteristic accurately and quickly becomes crucial. The problem is significantly magnified when the tolerance synthesis problem is considered, in which several tolerance analyses are performed and thus a reliability analysis problem is nested inside an optimisation one in a fully probabilistic approach. The need to reduce the computational time and accurately estimate the specified probabilities is critical. Therefore, a systematic study of several state-of-the-art simulation methods is performed herein, and they are critically evaluated with respect to their efficiency in dealing with tolerance analysis problems. It is demonstrated that tolerance analysis problems are characterised by high dimensionality, high non-linearity of the state functions, disconnected failure domains, implicit state functions and small probability estimations. Therefore, the successful implementation of reliability methods becomes a formidable task. Herein, advanced simulation methods are combined with in-house developed assembly models based on the Homogeneous Transformation Matrix method as well as off-the-shelf Computer Aided Tolerance tools. The main outcome of the work is that by using an appropriate reliability method, computational time can be reduced whilst the probability of defective products can be accurately predicted. Furthermore, the connection of advanced mathematical toolboxes with off-the-shelf 3D tolerance tools in a process integration framework introduces benefits for successfully dealing with the tolerance allocation problem in the future using dedicated and powerful computational tools.
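Brute-force Monte Carlo, the baseline against which such advanced simulation methods are usually compared, is easy to state for a tolerance stack-up: sample the dimensional deviations, compute the assembly key characteristic, and count how often it leaves specification. The toy linear stack below (dimensions, tolerances, spec limit) is entirely assumed and mainly illustrates why very small failure probabilities are expensive to estimate this way.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2_000_000                                     # many samples needed for small probabilities

# Assumed stack of four dimensions, each normal with sigma = tolerance / 3.
nominals = np.array([10.0, 25.0, 25.0, 40.0])     # mm
tolerances = np.array([0.05, 0.08, 0.08, 0.10])   # mm

deviations = rng.normal(0.0, tolerances / 3.0, size=(n, 4))
key_characteristic = (nominals + deviations).sum(axis=1)

upper_spec = nominals.sum() + 0.20                # assumed one-sided spec limit, mm
p_fail = np.mean(key_characteristic > upper_spec)
std_err = np.sqrt(p_fail * (1 - p_fail) / n)
print(f"estimated P(out of spec) = {p_fail:.2e} +/- {std_err:.1e}")
```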
3

Nguyen, Huy H., Junichi Yamagishi, and Isao Echizen. "Capsule-Forensics Networks for Deepfake Detection." In Handbook of Digital Face Manipulation and Detection, 275–301. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_13.

Abstract:
Several sophisticated convolutional neural network (CNN) architectures have been devised that have achieved impressive results in various domains. One downside of this success is the advent of attacks using deepfakes, a family of tools that enable anyone to use a personal computer to easily create fake videos of someone from a short video found online. Several detectors have been introduced to deal with such attacks. To achieve state-of-the-art performance, CNN-based detectors have usually been upgraded by increasing their depth and/or their width, adding more internal connections, or fusing several features or predicted probabilities from multiple CNNs. As a result, CNN-based detectors have become bigger, consume more memory and computation power, and require more training data. Moreover, there is concern about their generalizability to deal with unseen manipulation methods. In this chapter, we argue that our forensic-oriented capsule network overcomes these limitations and is more suitable than conventional CNNs to detect deepfakes. The superiority of our “Capsule-Forensics” network is due to the use of a pretrained feature extractor, statistical pooling layers, and a dynamic routing algorithm. This design enables the Capsule-Forensics network to outperform a CNN with a similar design and to be from 5 to 11 times smaller than a CNN with similar performance.
4

Moncayo, Steven, and Guillermo Ávila. "Landslide Travel Distances in Colombia from National Landslide Database Analysis." In Progress in Landslide Research and Technology, Volume 1 Issue 1, 2022, 315–25. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-16898-7_24.

Abstract:
The analysis of landslide processes and consequent damages constitutes an important aspect in risk assessment. The potential reach zones of a landslide can be estimated by analyzing the behavior of past events under particular geological, geomorphological, and climatic conditions. Although landslide risk models have been developed for temperate zones, little information is available for tropical countries, so empirical equations are used without validation. In this study, a dataset comprising characteristic parameters for 123 landslides from the Andean region of Colombia was compiled from the digital inventory of the Colombian Geological Survey Mass Movement Information System (SIMMA). Empirical landslide travel-distance models were developed using simple and multiple regression techniques. The results revealed that the volume of the displaced mass, the slope angle, the maximum landslide height, and geomorphological environment were the predominant factors controlling the landslide travel distances in the study area. Similarly, a strong correlation was found between the planimetric area and landslide volume, validating the model of Iverson et al. (Geol Soc Am Bull 110:972–984, 1998). The proposed models show a reasonable fit between the observed and predicted values, and exhibited higher prediction capacity than other models in the literature. An example of application of the prediction equations developed here illustrates the procedure to delineate landslide hazard zones for different exceedance probabilities.
5

Atzeni, Gianfranco, Luca G. Deidda, Marco Delogu, and Dimitri Paolini. "Drop-Out Decisions in a Cohort of Italian Universities." In Teaching, Research and Academic Careers, 71–103. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-07438-7_4.

Abstract:
In this chapter, we study the determinants of student drop-out decisions using data on a cohort of over 230,000 students enrolled in the Italian university system. The empirical analysis reveals that the probability of dropping out of university negatively correlates with high school grades and student age, controlling for the course of study and university fixed effects. The benchmark estimation suggests a negative correlation between high school final grade and drop-out probability. We also find that enrolling late at the university increases the likelihood of dropping out. In line with the literature, our results suggest that women have a lower propensity to drop out. Our dataset allows differentiating between students who leave their homes to enroll at university (off-site students) and on-site students. We find that off-site students drop out significantly less than those who study in their hometowns. We provide significant evidence that off-site students are a self-selected sample of the total population. Accordingly, we use an instrumental variable (IV) approach to identify the causal relationship. The IV estimation shows that studying off-site negatively affects drop-out decisions and more so for students growing up in the south of Italy who typically study off-site in the Center-North of Italy. Taking advantage of a more detailed dataset concerning students enrolled at the Università di Sassari, we show that the choice of the degree is also important to predict the magnitude of drop-out. Specifically, we resort to a bivariate probit specification to account for self-selection into the course of study, finding that the estimates of the determinants of drop-out and the predicted probabilities are heavily affected. Accounting for self-selection, we show that an unconditional comparison among degrees is misleading, as some degrees attract more heterogeneous students than others, as far as skills and motivation are concerned. For instance, regarding the effect of gender, we show that while the estimation without selection suggests that women drop out less, once we account for selection, the contribution of women to drop-out becomes either positive or negative, depending on which course of study they choose. In line with these results, policymakers should tailor drop-out reducing policy interventions to the specificities of each course of study.
6

Hartmanns, Arnd. "Correct Probabilistic Model Checking with Floating-Point Arithmetic." In Tools and Algorithms for the Construction and Analysis of Systems, 41–59. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99527-0_3.

Abstract:
Probabilistic model checking computes probabilities and expected values related to designated behaviours of interest in Markov models. As a formal verification approach, it is applied to critical systems; thus we trust that probabilistic model checkers deliver correct results. To achieve scalability and performance, however, these tools use finite-precision floating-point numbers to represent and calculate probabilities and other values. As a consequence, their results are affected by rounding errors that may accumulate and interact in hard-to-predict ways. In this paper, we show how to implement fast and correct probabilistic model checking by exploiting the ability of current hardware to control the direction of rounding in floating-point calculations. We outline the complications in achieving correct rounding from higher-level programming languages, describe our implementation as part of the Modest Toolset’s model checker, and exemplify the tradeoffs between performance and correctness in an extensive experimental evaluation across different operating systems and CPU architectures.
7

Fedorenko, Yuriy S. "Using a Sparse Neural Network to Predict Clicks Probabilities in Online Advertising." In Advances in Neural Computation, Machine Learning, and Cognitive Research IV, 276–82. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60577-3_33.

8

Gariano, Stefano Luigi, Massimo Melillo, Maria Teresa Brunetti, Sumit Kumar, Rajkumar Mathiyalagan, and Silvia Peruccacci. "Challenges in Defining Frequentist Rainfall Thresholds to Be Implemented in a Landslide Early Warning System in India." In Progress in Landslide Research and Technology, Volume 1 Issue 1, 2022, 409–16. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-16898-7_27.

Abstract:
In India, rainfall-induced landslides cause a high toll in terms of fatalities and damages. Therefore, the adoption of tools to predict the occurrence of such phenomena is urgent. For this purpose, the LANDSLIP project aimed at developing a landslide early warning system (LEWS) to forecast the occurrence of rainfall-induced landslides in two Indian pilot areas: Darjeeling and Nilgiris. Rainfall thresholds are a widely used tool to define critical probability levels for the possible occurrence of landslides in large areas, and are particularly suitable to be implemented in LEWSs. In this work, we exploited two catalogues of 84 and 116 rainfall conditions likely responsible for landslide triggering in Darjeeling and Nilgiris, respectively. Adopting a frequentist statistical method and using an automatic tool, we determined rainfall thresholds at different non-exceedance probabilities for the two pilot areas. Despite the daily temporal resolution of rainfall data and the spatial and temporal distribution of the documented landslides, the thresholds calculated for the two areas have acceptable uncertainties and were implemented in the LANDSLIP LEWS prototype. We expect that the new thresholds and the whole system will contribute to mitigate the landslide risk in the study areas.
9

Flarend, Alice, and Bob Hilborn. "Quantum Measurements." In Quantum Computing: From Alice to Bob, 44–56. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780192857972.003.0005.

Abstract:
Measurements of quantum states are a key feature of QIS and QC. Polarized light, optical polarization devices, and light detectors illustrate how we describe quantum measurements and how the results of measurements are related to the properties of the quantum states describing the photon qubits. The key distinction between quantum physics and classical physics is that the results of individual quantum measurements cannot be predicted; quantum mechanics gives us only the probabilities of observing each of the possible measurement outcomes. The famous Born rule expresses the probability interpretation of quantum states. Those states carry all the information about the probabilities of all possible measurements on the corresponding qubit. In particular, the probabilities are proportional to the squares of the amplitudes of the basis states appearing in the overall quantum state. Bob points out that state preparation and state measurement are two sides of the same coin.
10

Pabreja, Kavita. "Artificial Neural Network for Markov Chaining of Rainfall Over India." In Research Anthology on Artificial Neural Network Applications, 1130–45. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-2408-7.ch053.

Abstract:
Rainfall forecasting plays a significant role in water management for agriculture in a country like India, where the economy depends heavily upon agriculture. In this paper, a feed-forward artificial neural network (ANN) and a multiple linear regression model have been applied to lagged time series data of monthly rainfall. Data for 23 years, from 1990 to 2012, over the Indian region have been used in this study. Convincing values of the root mean squared error between actual monthly rainfall and that predicted by the ANN have been found. During monsoon months, rainfall of every (n+3)rd month can be predicted using the last three months' (n, n+1, n+2) rainfall data, with an excellent correlation coefficient of more than 0.9 between actual and predicted rainfall. The probabilities of dry and wet seasonal months for monsoon and non-monsoon months have also been found.

Conference papers on the topic "Predicted probabilities"

1

Bezembinder, Erwin M., Luc J. J. Wismans, and Eric C. Van Berkum. "Constructing multi-labelled decision trees for junction design using the predicted probabilities." In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2017. http://dx.doi.org/10.1109/itsc.2017.8317699.

2

Zhang, Xiaodong, Ying Min Low, and Chan Ghee Koh. "Prediction of Low Failure Probabilities With Application to Marine Risers." In ASME 2017 36th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/omae2017-61574.

Abstract:
Offshore riser systems are subjected to wind, wave and current loadings, which are random in nature. Nevertheless, current deterministic design and analysis practice cannot quantitatively evaluate the safety of structures under random environmental loadings, due to high computational costs. Structural reliability methods, as analysis tools to quantify the probability of failure of components or systems, can account for uncertainties in environmental conditions and system parameters. They are particularly useful in cases where limited experience exists or a risk-based evaluation of design is required. The Monte Carlo Simulation (MCS) method is the most widely accepted method and is usually used to benchmark other proposed reliability methods. However, MCS is computationally demanding for predicting low failure probabilities, especially for offshore dynamic problems involving many types of uncertainties. Innovative structural reliability methods are desired to perform reliability analysis so as to predict the low failure probabilities associated with extreme values. A variety of structural reliability methods have been proposed in the literature to reduce the computational burden of MCS. Post-processing methods, which recover the PDF or tail distribution of a random variable from sample data to perform structural reliability analysis, have great advantages over methods from other categories in solving engineering problems. Thus the main focus of our study is on post-processing structural reliability methods. In this paper, four post-processing reliability methods are compared on the prediction of low failure probabilities with applications to a drilling riser system and a steel catenary riser (SCR) system: Enhanced Monte Carlo Simulation (EMCS) assumes the failure probability follows asymptotic behavior and uses high failure probabilities to predict low failure probabilities; the Multi-Gaussian Maximum Entropy Method (MGMEM) assumes the probability density function (PDF) is a summation of Gaussian density functions and adopts maximum entropy methods to obtain the model parameters; the Shifted Generalized Lognormal Distribution (SGLD) method proposes a distribution that specializes to the normal distribution for zero skewness and can assume any finite value of skewness for versatility; and the Generalized Extreme-Value Distribution (GEV) method comprises three distribution families: the Gumbel-type, Fréchet-type and Weibull-type distributions. The study compares the bias errors (the difference between the predicted values and the exact values) and variance errors (the variability of the predicted values) of these methods on the prediction of low failure probabilities with applications to two riser systems. This study could provide offshore engineers and researchers with feasible options for marine riser system structural reliability analysis.
3

Naess, Arvid, Bernt J. Leira, and Olexandr Batsevych. "Efficient Reliability Analysis of Structural Systems With a High Number of Limit States." In ASME 2010 29th International Conference on Ocean, Offshore and Arctic Engineering. ASMEDC, 2010. http://dx.doi.org/10.1115/omae2010-21179.

Abstract:
In principle, the reliability of complicated structural systems can be accurately predicted by standard Monte Carlo simulation methods, but the computational burden may be prohibitive. A new Monte Carlo based method for estimating system reliability that aims at reducing the computational cost is therefore proposed. It exploits the regularity of tail probabilities to set up an approximation procedure for the prediction of the far tail failure probabilities based on the estimates of the failure probabilities obtained by Monte Carlo simulation at more moderate levels. In the paper the usefulness and accuracy of the estimation method is illustrated by application to some particular examples of structures with several thousand potentially critical limit state functions. The effect of varying the correlation of the load components is also investigated.
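The general idea — estimating exceedance probabilities reliably at moderate levels by Monte Carlo and extrapolating them to the far tail — can be sketched as follows. For illustration the tail is taken to be close to exponential, so that log p is roughly linear in the threshold; this is a simplification of the parametric tail behaviour exploited in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
samples = rng.gumbel(loc=0.0, scale=1.0, size=200_000)   # stand-in response samples

# Exceedance probabilities estimated reliably at moderate thresholds...
moderate = np.linspace(3.0, 6.0, 7)
p_moderate = np.array([np.mean(samples > t) for t in moderate])

# ...then extrapolated to a high threshold by a straight-line fit in log space.
slope, intercept = np.polyfit(moderate, np.log(p_moderate), 1)
target = 12.0
p_extrapolated = np.exp(intercept + slope * target)

p_exact = 1.0 - np.exp(-np.exp(-target))                 # known Gumbel tail, for comparison
print(f"extrapolated P(exceed {target}) = {p_extrapolated:.2e}, exact = {p_exact:.2e}")
```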
4

Enright, Michael P., and R. Craig McClung. "A Probabilistic Framework for Gas Turbine Engine Materials With Multiple Types of Anomalies." In ASME Turbo Expo 2010: Power for Land, Sea, and Air. ASMEDC, 2010. http://dx.doi.org/10.1115/gt2010-23618.

Abstract:
Some rotor-grade gas turbine engine materials may contain multiple types of anomalies such as voids and inclusions that can be introduced during the manufacturing process. The number and size of anomalies can be very different for the various anomaly types, each of which may lead to premature fracture. The probability of failure of a component with multiple anomaly types can be predicted using established system reliability methods provided that the failure probabilities associated with individual anomaly types are known. Unfortunately, these failure probabilities are often difficult to obtain in practice. In this paper, an approach is presented that provides treatment for engine materials with multiple anomalies of multiple types. It is based on previous work that has been extended to address the overlap among anomaly-type failure modes using the Kaplan-Meier method, and is illustrated for risk prediction of a nickel-based superalloy. The results can be used to predict the risk of general materials with multiple types of anomalies.
5

Jones, Oliver, Kevin Ewans, and Stanley Chuah. "A Monte Carlo Approach for Estimating Extreme Currents in the Singapore Straits." In ASME 2013 32nd International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/omae2013-11147.

Abstract:
Utilizing the independence of tide, through-flow, surge and high-frequency currents in the Singapore Straits, a Monte Carlo simulation method of combining the different components is proposed, expanding the horizon of available measured and modelled data and facilitating the definition of design current speeds. The statistical model proceeds by, first, making N number of random picks from the non-exceedence probability distributions of the surge, through-flow and high-frequency components. The number of random picks made in a given year for each component, N, is defined by assuming its occurrence rate is Poisson-distributed around a known annual mean value. N number of random start times are then chosen from each year and the maximum value of tidal current predicted over an ensuing 3-day window is combined with the randomly sampled component (either surge, through-flow or high-frequency current). Assuming an intended design life of 50 years, this process is repeated N number of times in each of the 50 years and for each current component, yielding 50 annual maximum values. For random 3-day windows that overlap, the model takes the vector sum of the maximum tidal current and the 2 (or 3) concurrent components. The process is repeated 1000 times, producing 1000 * 50 values of annual maxima which are then assigned non-exceedence probabilities. Return Period levels are obtained directly from the non-exceedence probabilities. The method provides a reduction in design current when compared to values derived by multiplying the exceedence probabilities of the varying independent contributions directly.
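The simulation procedure in the abstract can be paraphrased in code: draw a Poisson number of events per year for each non-tidal component, combine each event with the largest tidal current in a surrounding window, keep the annual maxima over a 50-year life, repeat many times, and read return levels off the pooled annual maxima. Everything below — the distributions, event rates, and the scalar (rather than vector) combination — is a simplified assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

years, lifetimes = 50, 1000
rates = {"surge": 6.0, "through_flow": 12.0, "high_freq": 20.0}    # mean events/year (assumed)
scales = {"surge": 0.15, "through_flow": 0.25, "high_freq": 0.10}  # exponential scales, m/s (assumed)

annual_maxima = np.empty(lifetimes * years)
i = 0
for _ in range(lifetimes):
    for _ in range(years):
        year_max = 0.0
        for name, rate in rates.items():
            k = rng.poisson(rate)                  # Poisson number of events this year
            if k == 0:
                continue
            # Stand-in for "max predicted tidal current over a 3-day window", m/s.
            tides = 0.8 + 0.3 * rng.random(k)
            combined = tides + rng.exponential(scales[name], size=k)
            year_max = max(year_max, combined.max())
        annual_maxima[i] = year_max
        i += 1

for rp in (10, 50, 100):
    level = np.quantile(annual_maxima, 1.0 - 1.0 / rp)
    print(f"{rp:>3}-year return level ~ {level:.2f} m/s")
```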
6

Rudland, David L., Heqin Xu, Gery Wilkowski, Paul Scott, Nu Ghadiali, and Frederick Brust. "Development of a New Generation Computer Code (PRO-LOCA) for the Prediction of Break Probabilities for Commercial Nuclear Power Plants Loss-of-Coolant Accidents." In ASME 2006 Pressure Vessels and Piping/ICPVT-11 Conference. ASMEDC, 2006. http://dx.doi.org/10.1115/pvp2006-icpvt-11-93802.

Abstract:
In order to aid the Nuclear Regulatory Commission (NRC) in the process of making a risk-informed revision of the design-basis break size requirements for operating commercial nuclear power plants, a new probabilistic computer code (PRO-LOCA) is being developed. This code will aid the NRC in predicting leakage and rupture probabilities for critical piping systems as a function of break sizes. The PRO-LOCA code incorporates many enhancements in technology developed since some of the earlier probabilistic codes (e.g., PRAISE) were developed. These enhancements include improved crack stability analyses, improved leak rate models, improved weld residual stress solutions, improved crack initiation models, and new material property data. In addition, new degradation mechanisms, not previously considered in codes such as PRAISE, were added. This included the addition of primary water stress corrosion cracking (PWSCC) for dissimilar welds in pressurized water reactors (PWRs). This paper reviews the development of the PRO-LOCA code by giving a brief description of its capabilities and the models embedded in the code. The current version of the PRO-LOCA code was used for a series of sensitivity analyses in order to test the robustness of the code for a full range of input parameters and to identify which input parameters have the greatest effect on the predicted leakage probabilities. The results from these sensitivity analyses are presented, as well as comparisons with other PFM codes (PRAISE). Details of which parameters appear to affect the leakage probabilities and the plans for future improvements to the code conclude this paper.
7

Schneider, Ronald, David J. Sanderson, and Simon D. Thurlbeck. "The Impact of Stress Redistribution on Structural Reliability Predictions of Deepwater Jackets." In ASME 2007 26th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2007. http://dx.doi.org/10.1115/omae2007-29617.

Abstract:
Structural reliability predictions for fixed jacket structures often involve calculating the probabilities of structural members failing and the overall collapse probabilities of the jackets existing in the corresponding damaged condition. It is current best practice to consider only single member failure conditions and to ignore multi member failure conditions since the probability of more than one member failing has been considered to be negligibly small. This approach assumes that the failure of a member is in no way related to the failure of another member, which implies that the two events are statistically independent. However, in reality the two events are not statistically independent since the failure of one member changes the stress state and therefore the failure probability of the remaining members. Two studies conducted to investigate the effects of stress redistribution on platform reliability predictions concluded that, by ignoring the effects of stress redistribution, platform reliability predictions could be significantly overestimated. However, they both concentrated on shallow water jackets and consequently a further study was performed to extend the existing work on stress redistribution to consider jacket structures in greater water depths. The study considered single and dual member failure conditions and compared the effects of including and excluding stress redistribution in reliability calculations. The findings of this work suggest that including stress redistribution reduces the predicted platform reliability. From the results, generic correction factors were derived for different inspection intervals, which account for stress redistribution and dual member failure. These correction factors can be applied to the probabilities derived using current best practice to determine a more realistic estimate of a structure’s reliability.
APA, Harvard, Vancouver, ISO, and other styles
8

Simonen, F. A., S. R. Gosselin, B. O. Y. Lydell, D. L. Rudland, and G. M. Wilkowski. "Application of Failure Event Data to Benchmark Probabilistic Fracture Mechanics Computer Codes." In ASME 2007 Pressure Vessels and Piping Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/pvp2007-26373.

Full text
Abstract:
This paper describes an application of data on cracking, leak, and rupture events from nuclear power plant operating experience to estimate failure frequencies for piping components that had previously been evaluated using the PRO-LOCA and PRAISE probabilistic fracture mechanics (PFM) computer codes. The calculations addressed the failure mechanisms of stress corrosion cracking, intergranular stress corrosion cracking, and fatigue for materials and operating conditions known to have produced failed components. The first objective was to benchmark the calculations against field experience. A second objective was to review uncertainties in the treatment of the data from observed failures and in the structural mechanics models. The database PIPExp-2006 was used to estimate failure frequencies; because the number of reported failure events was small, there were also statistical uncertainties in the frequency estimates. Comparisons of predicted and observed failure frequencies showed that the PFM codes correctly predicted relatively high failure probabilities for components that had experienced field failures. However, the predicted frequencies tended to be significantly greater than those estimated from plant operating experience. A review of the PFM models and their inputs showed that uncertainties in the calculations were sufficiently large to explain the differences between the predicted and observed failure frequencies.
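One step the abstract mentions, estimating failure frequencies from a small number of reported events, can be sketched with a standard exact Poisson confidence interval. The counts and exposure below are hypothetical; they are not figures from the PIPExp-2006 database.

```python
from scipy.stats import chi2

def poisson_rate_interval(events, exposure_years, confidence=0.95):
    """Point estimate and exact (chi-square) confidence bounds for a failure
    frequency inferred from a small number of observed events."""
    alpha = 1.0 - confidence
    point = events / exposure_years
    lower = chi2.ppf(alpha / 2.0, 2 * events) / (2.0 * exposure_years) if events > 0 else 0.0
    upper = chi2.ppf(1.0 - alpha / 2.0, 2 * (events + 1)) / (2.0 * exposure_years)
    return point, lower, upper

# Hypothetical example: 3 reported leaks over 20,000 weld-years of service.
point, lo, hi = poisson_rate_interval(events=3, exposure_years=20_000)
print(f"{point:.1e} per weld-year (95% CI {lo:.1e} to {hi:.1e})")
```

The width of such intervals is one reason comparisons of predicted and observed frequencies need care: with only a handful of events, the observed rate is itself quite uncertain.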
APA, Harvard, Vancouver, ISO, and other styles
9

Taapopi, E. E., H. Wang, and J. Zhou. "Equal Forced Time Step Approach to PSA for a Dynamic System – A Case of the Holdup Tank." In 2021 28th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/icone28-64081.

Full text
Abstract:
This work presents a methodology of equal forced time steps for generating accident scenarios in a time-dependent dynamic sequence. The aim is to model the dynamic probabilistic safety assessment (DPSA) and system reliability of a simple hold-up tank, which is a good representative of a steam generator, a component in the secondary loop of a nuclear power plant. This was achieved based on predicted theoretical calculations of the node-generating conditions and the cumulative probability of the partial failure modes. Python 3.0 was used to establish a dynamic event tree of 20 equal time steps (in hours), with a total of 7367 nodes, of which 4882 were failed nodes representing various types of potential accidents. Various types of failures and a cumulative failure probability of 0.13 were generated, and the program modeled the cumulative probabilities and their associated errors. However, the methodology still needs to be improved and refined to consider physical parameters such as temperature and pressure, as these are factors that can lead to calculation termination. Doing so would boost the program's efficiency and make better use of the information extracted from the dynamic event tree in the analysis of the system's dynamic characteristics. Although extra effort is still needed to refine the produced accident scenario results, the presented methodology handled the example satisfactorily.
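A minimal sketch of the equal-forced-time-step idea is given below, assuming a single failure mode with a constant per-step failure probability. The per-step probability is a placeholder chosen only so that the 20-step cumulative probability lands near the 0.13 reported above; it is not a value taken from the paper, whose full model branches over many more conditions (hence its thousands of nodes).

```python
def expand_tree(n_steps=20, p_fail=0.007):
    """Walk an equal-time-step dynamic event tree for one failure mode.

    At every step each surviving branch splits into a failed node and a
    surviving node; the probability mass reaching failed nodes is
    accumulated.  For a single mode this reduces to 1 - (1 - p)**n.
    """
    surviving = 1.0
    cumulative_failure = 0.0
    history = []
    for step in range(1, n_steps + 1):
        cumulative_failure += surviving * p_fail   # new failed branch
        surviving *= (1.0 - p_fail)                # branch that survives
        history.append((step, round(cumulative_failure, 4)))
    return history

print(expand_tree()[-1])   # e.g. (20, 0.1311) with the assumed inputs
```

A full DPSA tree would branch on several components and failure modes at every step, which is what drives node counts like the 7367 nodes (4882 failed) reported in the abstract.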
APA, Harvard, Vancouver, ISO, and other styles
10

Zhang, Ri, and Sheng Dong. "A Probability Analysis of Atomization Rate for Fully Developed Annular Flow in Vertical Pipes." In ASME 2017 36th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/omae2017-61581.

Full text
Abstract:
The phase distributions and mechanical properties of annular flow fluctuate constantly, so they can be regarded as random states. Probability analysis of annular flow is therefore an appropriate method for studying the formation, development, and evolution of the flow pattern. In the present work, the atomization and deposition rates of fully developed annular flow are investigated in detail by means of a probability analysis. First, the basic equations of the probability model are applied to solve for some important intermediate parameters of annular flow. Second, the atomization and deposition rates of droplets of any size are closely related to the probabilities of droplet generation and disappearance. Third, the interchange rate of the whole liquid phase can be obtained by summing the generation and disappearance probabilities of droplets of arbitrary size. The predictions of atomization rate are verified by comparison with experimental data from 71 cases drawn from three sets of tests. It is demonstrated that the probability model can accurately calculate the atomization rate of fully developed annular flow in most cases; the deviation predicted for some cases may be caused by neglecting the droplet breakup process. Furthermore, the effects of seven annular-flow parameters on the atomization rate are discussed in detail.
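The summation step described above, building a whole-liquid interchange rate out of per-size generation and disappearance probabilities, can be illustrated with a small numerical sketch. The size bins, probabilities, and event rate below are invented for illustration and are not the probability model of the paper.

```python
import numpy as np

# Hypothetical droplet-size bins and per-bin probabilities of generation
# (atomization) and disappearance (deposition).
diameters_um = np.array([20.0, 50.0, 100.0, 200.0, 400.0])   # micrometres
p_generation = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
p_deposition = np.array([0.10, 0.15, 0.20, 0.25, 0.30])

rho_liquid = 1000.0                                  # kg/m^3
volumes = np.pi / 6.0 * (diameters_um * 1e-6) ** 3   # droplet volume, m^3
masses = rho_liquid * volumes                        # kg per droplet

events_per_second = 1.0e9   # hypothetical interchange events per second

# Mass atomized and deposited per second, summed over all droplet sizes.
atomization_rate = events_per_second * np.sum(p_generation * masses)
deposition_rate = events_per_second * np.sum(p_deposition * masses)
print(f"atomization {atomization_rate:.2e} kg/s, deposition {deposition_rate:.2e} kg/s")
```

The point of the sketch is only the structure of the calculation: per-size probabilities weighted by droplet mass and summed over all sizes yield the liquid-phase interchange rates.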
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Predicted probabilities"

1

Perkins, Mike S. Predicted Effect of Projectile Dispersion on Target Hit Probabilities and Dispersion-Zone Sizes for the 25-mm Gun of the Bradley Fighting Vehicle. Fort Belvoir, VA: Defense Technical Information Center, April 1988. http://dx.doi.org/10.21236/ada193618.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Montalvo-Bartolomei, Axel, Bryant Robbins, and Jamie López-Soto. Backward erosion progression rates from small-scale flume tests. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/42135.

Full text
Abstract:
Backward erosion piping (BEP) is an internal erosion mechanism by which erosion channels progress upstream, typically through cohesionless or highly erodible foundation materials of dams and levees. Because BEP is one of the primary causes of embankment failures, usually during high pool events, the probability of BEP-induced failure is commonly evaluated by the U.S. Army Corps of Engineers for existing dams and levees. In current practice, BEP failure probability is quantitatively assessed assuming steady-state conditions, with qualitative adjustments for temporal aspects of the process. In cases with short-term hydraulic loads, the progression rate of the erosion pipe may control the failure probability, so a more quantitative treatment of the temporal development of erosion is necessary to arrive at meaningful probabilities of failure. This report builds on the current state of practice by investigating BEP progression rates through a series of laboratory experiments. BEP progression rates were measured for nine uniform sands in a series of 55 small-scale flume tests. Results indicate that pipe progression rates are proportional to the seepage velocity and can be predicted using equations recently proposed in the literature.
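The proportionality reported in the abstract lends itself to a one-line predictive sketch. The proportionality constant below is purely illustrative and is not the coefficient fitted from the flume tests or taken from the equations cited in the report.

```python
def pipe_progression_rate(seepage_velocity, alpha=50.0):
    """Backward-erosion pipe progression rate under a simple proportionality
    to seepage velocity; `alpha` is an invented dimensionless constant."""
    return alpha * seepage_velocity

# Hypothetical seepage velocity of 1e-4 m/s:
print(f"{pipe_progression_rate(1.0e-4):.1e} m/s")
```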
APA, Harvard, Vancouver, ISO, and other styles
