
Journal articles on the topic 'Predicted probabilities'


Consult the top 50 journal articles for your research on the topic 'Predicted probabilities.'


1

Fang, Di, Jenny Chong, and Jeffrey R. Wilson. "Predicted Probabilities' Relationship to Inclusion Probabilities." American Journal of Public Health 105, no. 5 (May 2015): 837–39. http://dx.doi.org/10.2105/ajph.2015.302592.

2

Long, J. Scott, and Jeremy Freese. "Predicted Probabilities for Count Models." Stata Journal: Promoting communications on statistics and Stata 1, no. 1 (November 2001): 51–57. http://dx.doi.org/10.1177/1536867x0100100103.

3

Mitchell, Michael N., and Xiao Chen. "Visualizing Main Effects and Interactions for Binary Logit Models." Stata Journal: Promoting communications on statistics and Stata 5, no. 1 (February 2005): 64–82. http://dx.doi.org/10.1177/1536867x0500500111.

Abstract:
This paper considers the role of covariates when using predicted probabilities to interpret main effects and interactions in logit models. While predicted probabilities are very intuitive for interpreting main effects and interactions, the pattern of results depends on the contribution of covariates. We introduce a concept called the covariate contribution, which reflects the aggregate contribution of all of the remaining predictors (covariates) in the model and a family of tools to help visualize the relationship between predictors and the predicted probabilities across a variety of covariate contributions. We believe this strategy and the accompanying tools can help researchers who wish to use predicted probabilities as an interpretive framework for logit models acquire and present a more comprehensive interpretation of their results. These visualization tools could be extended to other models (such as binary probit, multinomial logistic, ordinal logistic models, and other nonlinear models).
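The covariate-contribution idea can be made concrete in a few lines: as the focal predictor varies, the shape of the predicted-probability curve depends on where the aggregate contribution of the remaining covariates places it on the logit curve. A minimal sketch, with invented coefficients (not values from the paper):

```python
import math

def inv_logit(xb):
    """Convert a linear predictor into a probability."""
    return 1.0 / (1.0 + math.exp(-xb))

# Hypothetical logit coefficients: intercept, focal predictor x, covariate z
b0, b1, b2 = -2.0, 0.8, 1.5

# Predicted-probability curve over x at three covariate contributions (b2 * z)
curves = {}
for z in (0.0, 1.0, 2.0):
    curves[z] = [inv_logit(b0 + b1 * x + b2 * z) for x in range(5)]

for z, probs in curves.items():
    print(f"z={z}: " + " ".join(f"{p:.3f}" for p in probs))
```

The same coefficient b1 produces visibly different effects on the probability scale depending on the covariate contribution, which is the pattern the paper's visualization tools are designed to expose.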
4

Newell, P. T., K. Liou, J. W. Gjerloev, T. Sotirelis, S. Wing, and E. J. Mitchell. "Substorm probabilities are best predicted from solar wind speed." Journal of Atmospheric and Solar-Terrestrial Physics 146 (August 2016): 28–37. http://dx.doi.org/10.1016/j.jastp.2016.04.019.

5

Herron, Michael C. "Postestimation Uncertainty in Limited Dependent Variable Models." Political Analysis 8, no. 1 (1999): 83–98. http://dx.doi.org/10.1093/oxfordjournals.pan.a029806.

Abstract:
Many political science research articles that use limited dependent variable models report estimated quantities, in particular, fitted probabilities, predicted probabilities, and functions of such probabilities, without indicating that such estimates are subject to uncertainty. This practice, along with the reporting of “percentage correctly predicted,” can overstate the precision of reported results. In light of this, the present article describes a variety of measures of uncertainty that authors can include alongside estimates generated by limited dependent variable models. It also proposes an alternative to “percentage correctly predicted” and illustrates its calculations with congressional cosponsorship data from Krehbiel (1995).
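One way to attach the uncertainty Herron advocates to a fitted probability is a simple parametric simulation: draw coefficients from their estimated sampling distribution and summarize the spread of the implied probabilities. A sketch under invented estimates (the coefficient, standard error, and covariate value are hypothetical, not from the article):

```python
import math
import random

def inv_logit(xb):
    return 1.0 / (1.0 + math.exp(-xb))

# Hypothetical logit estimate and standard error for a single predictor
beta_hat, se, x = 1.2, 0.4, 1.0

random.seed(42)
# Simulate from the coefficient's sampling distribution, map to probabilities
draws = sorted(inv_logit(random.gauss(beta_hat, se) * x) for _ in range(10_000))

point = inv_logit(beta_hat * x)
lo, hi = draws[249], draws[9749]  # approximate central 95% interval
print(f"predicted probability {point:.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
```

Reporting the interval alongside the point estimate is exactly the practice the article argues should replace bare fitted probabilities.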
6

Whitaker, T. B., J. W. Dickens, and V. Chew. "Development of Statistical Models to Simulate the Testing of Farmers Stock Peanuts for Aflatoxin Using Visual, Thin Layer Chromatography, and Minicolumn Methods1." Peanut Science 12, no. 2 (July 1, 1985): 94–98. http://dx.doi.org/10.3146/pnut.12.2.0012.

Abstract:
The negative binomial probability function was used to model the distribution of sample aflatoxin test results when replicated grade samples from farmers stock peanuts are analyzed by thin layer chromatography and minicolumn methods. The Poisson probability function was used to model the distribution of the number of kernels with visible Aspergillus flavus growth found in replicated grade samples of farmers stock peanuts when the visible A. flavus method is used. The probabilities of accepting a lot of farmers stock peanuts with given aflatoxin concentrations when using a 465-g grade sample and 2 different accept/reject levels were predicted with the models and compared to observed acceptance probabilities computed from previously published data for each of the 3 methods. The comparisons showed good agreement between the predicted acceptance probabilities and the observed acceptance probabilities.
7

Freese, Jeremy. "Least Likely Observations in Regression Models for Categorical Outcomes." Stata Journal: Promoting communications on statistics and Stata 2, no. 3 (September 2002): 296–300. http://dx.doi.org/10.1177/1536867x0200200306.

Abstract:
This article presents a method and program for identifying poorly fitting observations for maximum-likelihood regression models for categorical dependent variables. After estimating a model, the program leastlikely will list the observations that have the lowest predicted probabilities of observing the value of the outcome category that was actually observed. For example, when run after estimating a binary logistic regression model, leastlikely will list the observations with a positive outcome that had the lowest predicted probabilities of a positive outcome and the observations with a negative outcome that had the lowest predicted probabilities of a negative outcome. These can be considered the observations in which the outcome is most surprising given the values of the independent variables and the parameter estimates and, like observations with large residuals in ordinary least squares regression, may warrant individual inspection. Use of the program is illustrated with examples using binary and ordered logistic regression.
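The core of leastlikely is easy to state outside Stata: score each observation by the predicted probability of the outcome it actually had, then sort ascending. A toy reimplementation of that logic (the fitted model and data are invented for illustration):

```python
import math

def inv_logit(xb):
    return 1.0 / (1.0 + math.exp(-xb))

# Toy fitted binary logit and data: (x, observed y)
b0, b1 = -1.0, 2.0
data = [(0.0, 1), (0.5, 0), (1.0, 1), (2.5, 0), (3.0, 1)]

# Probability of the outcome that was actually observed, per observation
scored = []
for i, (x, y) in enumerate(data):
    p1 = inv_logit(b0 + b1 * x)
    p_observed = p1 if y == 1 else 1.0 - p1
    scored.append((p_observed, i, x, y))

# Least likely observations come first: candidates for individual inspection
for p_observed, i, x, y in sorted(scored)[:2]:
    print(f"obs {i}: x={x}, y={y}, P(observed outcome)={p_observed:.3f}")
```

Here observation 3 (a negative outcome where the model strongly predicts a positive one) surfaces first, playing the role a large residual plays in OLS.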
8

Engels, Eric A., Gregory Haber, Allyson Hart, Charles F. Lynch, Jie Li, Karen S. Pawlish, Baozhen Qiao, Kelly J. Yu, and Ruth M. Pfeiffer. "Predicted Cure and Survival Among Transplant Recipients With a Previous Cancer Diagnosis." Journal of Clinical Oncology 39, no. 36 (December 20, 2021): 4039–48. http://dx.doi.org/10.1200/jco.21.01195.

Abstract:
PURPOSE A previous cancer diagnosis is a negative consideration in evaluating patients for possible solid organ transplantation. Statistical models may improve selection of patients with cancer evaluated for transplantation. METHODS We fitted statistical cure models for patients with cancer in the US general population using data from 13 cancer registries. Patients subsequently undergoing solid organ transplantation were identified through the Scientific Registry of Transplant Recipients. We estimated cure probabilities at diagnosis (for all patients with cancer) and transplantation (transplanted patients). We used Cox regression to assess associations of cure probability at transplantation with subsequent cancer-specific mortality. RESULTS Among 10,524,326 patients with 17 cancer types in the general population, the median cure probability at diagnosis was 62%. Of these patients, 5,425 (0.05%) subsequently underwent solid organ transplantation and their median cure probability at transplantation was 94% (interquartile range, 86%-98%). Compared with the tertile of transplanted patients with highest cure probability, those in the lowest tertile more frequently had lung or breast cancers and less frequently colorectal, testicular, or thyroid cancers; more frequently had advanced-stage cancer; were older (median 57 v 51 years); and were transplanted sooner after cancer diagnosis (median 3.6 v 8.6 years). Patients in the low-cure probability tertile had increased cancer-specific mortality after transplantation (adjusted hazard ratio, 2.08; 95% CI, 1.48 to 2.93; v the high tertile), whereas those in the middle tertile did not differ. CONCLUSION Patients with cancer who underwent solid organ transplantation exhibited high cure probabilities, reflecting selection on the basis of existing guidelines and clinical judgment. Nonetheless, there was a range of cure probabilities among transplanted patients, and low probability predicted increased cancer-specific mortality after transplantation. Cure probabilities may facilitate guideline development and evaluating individual patients for transplantation.
9

Ackermann, John F., and Michael S. Landy. "Suboptimal decision criteria are predicted by subjectively weighted probabilities and rewards." Attention, Perception, & Psychophysics 77, no. 2 (November 4, 2014): 638–58. http://dx.doi.org/10.3758/s13414-014-0779-z.

10

Holzer, Thomas L., J. Luke Blair, Thomas E. Noce, and Michael J. Bennett. "Predicted Liquefaction of East Bay Fills during a Repeat of the 1906 San Francisco Earthquake." Earthquake Spectra 22, no. 2_suppl (April 2006): 261–77. http://dx.doi.org/10.1193/1.2188018.

Abstract:
Predicted conditional probabilities of surface manifestations of liquefaction during a repeat of the 1906 San Francisco (M7.8) earthquake range from 0.54 to 0.79 in the area underlain by the sandy artificial fills along the eastern shore of San Francisco Bay near Oakland, California. Despite widespread liquefaction in 1906 of sandy fills in San Francisco, most of the East Bay fills were emplaced after 1906 without soil improvement to increase their liquefaction resistance. They have yet to be shaken strongly. Probabilities are based on the liquefaction potential index computed from 82 CPT soundings using median (50th percentile) estimates of PGA based on a ground-motion prediction equation. Shaking estimates consider both distance from the San Andreas Fault and local site conditions. The high probabilities indicate extensive and damaging liquefaction will occur in East Bay fills during the next M∼7.8 earthquake on the northern San Andreas Fault.
11

Biró, Anikó. "Differences between Subjective and Predicted Survival Probabilities and Their Relation to Preventive Care Use." B.E. Journal of Economic Analysis & Policy 16, no. 2 (April 1, 2016): 807–35. http://dx.doi.org/10.1515/bejeap-2015-0154.

Abstract:
I analyse how differences between subjective and predicted survival probabilities are related to preventive healthcare use. Based on the Health and Retirement Study, I find that private information inherent in subjective survival probability affects the decisions on preventive care use: positive and negative deviations between the subjective and predicted survival probabilities both imply lower likelihood of use, the relations with negative deviations being stronger. These results are driven by perceptions verified by later survival and health outcomes. A theoretical model provides explanation for the empirical results, in which preventive care increases the chances of survival, but the benefits of preventive care also vary with the survival probability.
12

Pillai, Rajiv, Duolao Wang, Erik K. Mayer, and Paul Abel. "Do Standardised Prognostic Algorithms Reflect Local Practice? Application of EORTC Risk Tables for Non-Muscle Invasive (pTa/pT1) Bladder Cancer Recurrence and Progression in a Local Cohort." Scientific World JOURNAL 11 (2011): 751–59. http://dx.doi.org/10.1100/tsw.2011.77.

Abstract:
A risk calculator algorithm to allow prediction of probabilities of 1- and 5-year recurrence and progression rates in individuals with pTa/pT1 bladder cancer has been proposed by the European Organisation for Research and Treatment of Cancer (EORTC) and was incorporated into the European Association of Urology guidelines in 2006. We attempted to validate this algorithm in a cohort of patients with known outcome. Prognostic data were collected from a consecutively presenting cohort of 109 patients with non-muscle invasive (pTa/pT1) transitional cell cancer (TCC) at a single institution between 1983 and 1985. Using the same statistical models as in the EORTC original paper, predicted probabilities of 1- and 5-year recurrence and progression were calculated. Patients were divided into four risk groups for recurrence (Ir-IVr) and progression (Ip-IVp), respectively, using six prognostic criteria. These were then compared to the probabilities predicted in the EORTC algorithm. The predicted 1- and 5-year probabilities of recurrence were significantly higher in the study population as compared to the original EORTC algorithm for all four risk groups. The predicted 1-year probabilities for progression in groups Ip/IIIp and at 5 years for groups Ip/IIp were in accordance with the original algorithm, but were higher for the other progression groups. The concordance for the model of prediction using the study group for recurrence at 1 and 5 years was 62 and 63%, respectively, and for progression was 65 and 67%, respectively. We were unable to validate the proposed algorithm in our group of patients. Although our study has limitations that prevent firm conclusions on the validity of the algorithm, it does expose some of the drawbacks of standardised nomograms when applied to local clinical practice.
13

PALMERI, PATRICK, and JEAN-FRANÇOIS WYART. "INTERPRETATION OF ENERGY LEVELS AND PREDICTED TRANSITION PROBABILITIES IN NEUTRAL TECHNETIUM (TcI)." Journal of Quantitative Spectroscopy and Radiative Transfer 61, no. 5 (March 1999): 603–16. http://dx.doi.org/10.1016/s0022-4073(98)00048-x.

14

Steyerberg, E. W., A. Gerl, S. D. Fossá, D. T. Sleijfer, R. de Wit, W. J. Kirkels, N. Schmeller, C. Clemm, J. D. Habbema, and H. J. Keizer. "Validity of predictions of residual retroperitoneal mass histology in nonseminomatous testicular cancer." Journal of Clinical Oncology 16, no. 1 (January 1998): 269–74. http://dx.doi.org/10.1200/jco.1998.16.1.269.

Abstract:
PURPOSE To validate predictions of the histology (necrosis, mature teratoma, or cancer) of residual retroperitoneal masses in patients treated with chemotherapy for metastatic nonseminomatous testicular germ cell tumor. PATIENTS AND METHODS We studied 172 testicular cancer patients who underwent resection while tumor markers were normal. Predictive characteristics for the residual histology were registered, including the presence of teratoma elements in the primary tumor, the prechemotherapy level of tumor markers (alpha-fetoprotein [AFP], human chorionic gonadotropin [HCG], lactate dehydrogenase [LDH]), the size of the residual mass, and the percentage of shrinkage in mass diameter. We calculated the predicted probability of necrosis and the ratio of cancer and mature teratoma with previously published logistic regression formulas. RESULTS The distribution of the residual histology was necrosis in 77 (45%), mature teratoma in 72 (42%), and cancer in 23 (13%). Necrosis could be well distinguished from other tissue, with an area under the receiver operating characteristic (ROC) curve of 82%. No tumor was found in 15 patients with a predicted probability of necrosis over 90%. The predicted probabilities corresponded reliably with the observed probabilities (goodness-of-fit tests, P > .20), although a somewhat higher probability of necrosis was observed in patients treated with chemotherapy containing etoposide. Conversely, cancer could not reliably be predicted or adequately discriminated from mature teratoma. CONCLUSION The predicted probabilities of necrosis have adequate reliability and discriminative power. These predictions may validly support the decision-making process regarding the need and extent of retroperitoneal lymph node dissection.
15

Muller, Clemma J., and Richard F. MacLehose. "Estimating predicted probabilities from logistic regression: different methods correspond to different target populations." International Journal of Epidemiology 43, no. 3 (March 4, 2014): 962–70. http://dx.doi.org/10.1093/ije/dyu029.

16

Traczyk, Jakub, Jakub Kus, and Agata Sobkow. "Affective response to a lottery prize moderates processing of payoffs and probabilities: An eye-tracking study." Polish Journal of Applied Psychology 14, no. 4 (December 1, 2016): 35–46. http://dx.doi.org/10.1515/pjap-2015-0066.

Abstract:
Expected utility theory posits that our preferences for gambles result from the weighting of utilities of monetary payoffs by their probabilities. However, recent studies have shown that combining payoffs and probabilities is often distorted by affective responses. In the current study, we hypothesized that affective response to a lottery prize moderates processing of payoffs and probabilities. Attentional engagement (measured by the number of fixations in the eye tracking experiment) was predicted by probability, value of an outcome, and their interaction, but only for affect-poor lottery tickets. A corresponding pattern of results was not observed in affect-rich lottery tickets, suggesting more simplified processing of such lotteries.
17

Pleskac, Timothy J., Peter D. Kvam, and Shuli Yu. "What's the predicted outcome? Explanatory and predictive properties of the quantum probability framework." Behavioral and Brain Sciences 36, no. 3 (May 14, 2013): 303–4. http://dx.doi.org/10.1017/s0140525x12003093.

Abstract:
Quantum probability (QP) provides a new perspective for cognitive science. However, one must be clear about the outcome the QP model is predicting. We discuss this concern in reference to modeling the subjective probabilities given by people as opposed to modeling the choice proportions of people. These two models would appear to have different cognitive assumptions.
18

Gong, Chengyue, Xu Tan, Di He, and Tao Qin. "Sentence-Wise Smooth Regularization for Sequence to Sequence Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6449–56. http://dx.doi.org/10.1609/aaai.v33i01.33016449.

Abstract:
Maximum-likelihood estimation (MLE) is widely used for model training in sequence-to-sequence tasks. It uniformly treats the generation/prediction of each target token as multiclass classification, and yields non-smooth prediction probabilities: in a target sequence, some tokens are predicted with small probabilities while other tokens are predicted with large probabilities. According to our empirical study, we find that the non-smoothness of the probabilities results in low quality of generated sequences. In this paper, we propose a sentence-wise regularization method which aims to output smooth prediction probabilities for all the tokens in the target sequence. Our proposed method can automatically adjust the weights and gradients of each token in one sentence to ensure that the predictions in a sequence are uniformly good. Experiments on three neural machine translation tasks and one text summarization task show that our method outperforms the conventional MLE loss on all these tasks and achieves promising BLEU scores on the WMT14 English-German and WMT17 Chinese-English translation tasks.
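The non-smoothness being penalized can be made concrete: given the probabilities a model assigns to each target token, a sentence-wise penalty can be written as the variance of those probabilities, which vanishes when every token is predicted equally well. This is a schematic stand-in, not the paper's exact loss:

```python
# Schematic sentence-wise smoothness penalty: the variance of the
# probabilities assigned to the target tokens (zero when uniform).
def smoothness_penalty(token_probs):
    mean = sum(token_probs) / len(token_probs)
    return sum((p - mean) ** 2 for p in token_probs) / len(token_probs)

smooth = [0.6, 0.6, 0.6, 0.6]   # every token predicted equally well
spiky = [0.95, 0.1, 0.95, 0.4]  # same average quality, very uneven

print(smoothness_penalty(smooth), smoothness_penalty(spiky))
```

Adding such a penalty to the MLE loss pushes the model away from sequences where a few tokens carry very low probability, which is the failure mode the abstract describes.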
19

Beltzer, Miranda L., Stephen Adams, Peter A. Beling, and Bethany A. Teachman. "Social Anxiety and Dynamic Social Reinforcement Learning in a Volatile Environment." Clinical Psychological Science 7, no. 6 (September 20, 2019): 1372–88. http://dx.doi.org/10.1177/2167702619858425.

Abstract:
Adaptive social behavior requires learning probabilities of social reward and punishment and updating these probabilities when they change. Given prior research on aberrant reinforcement learning in affective disorders, this study examines how social anxiety affects probabilistic social reinforcement learning and dynamic updating of learned probabilities in a volatile environment. Two hundred and twenty-two online participants completed questionnaires and a computerized ball-catching game with changing probabilities of reward and punishment. Dynamic learning rates were estimated to assess the relative importance ascribed to new information in response to volatility. Mixed-effects regression was used to analyze throw patterns as a function of social anxiety symptoms. Higher social anxiety predicted fewer throws to the previously punishing avatar and different learning rates after certain role changes, suggesting that social anxiety may be characterized by difficulty updating learned social probabilities. Socially anxious individuals may miss the chance to learn that a once-punishing situation no longer poses a threat.
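The dynamic updating of learned probabilities described here is commonly modeled with a delta-rule update, where the learning rate sets how much weight new outcomes receive; a higher rate tracks a volatile environment faster. A minimal sketch with illustrative values (not data from the study):

```python
# Delta-rule update of a learned reward probability:
# p_new = p_old + lr * (outcome - p_old)
def update(p_hat, outcome, lr):
    return p_hat + lr * (outcome - p_hat)

# After a role change, the true reward probability jumps from 0.2 to 0.8.
outcomes = [1, 1, 0, 1, 1, 1, 1, 1]  # mostly rewarded now

slow, fast = 0.2, 0.2
for o in outcomes:
    slow = update(slow, o, lr=0.1)  # low learning rate: sluggish updating
    fast = update(fast, o, lr=0.5)  # high learning rate: tracks the change

print(round(slow, 3), round(fast, 3))
```

Difficulty raising the learning rate after such a change would leave the learned probability stuck near its old value, consistent with the paper's account of socially anxious participants failing to update after role changes.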
20

ZHANG, TONG-JIE, ZHI-LIANG YANG, and XIANG-TAO HE. "STRONG LENSING PROBABILITIES IN A COSMOLOGICAL MODEL WITH A RUNNING PRIMORDIAL POWER SPECTRUM." Modern Physics Letters A 20, no. 11 (April 10, 2005): 851–59. http://dx.doi.org/10.1142/s0217732305016142.

Abstract:
The combination of the first-year Wilkinson Microwave Anisotropy Probe (WMAP) data with other finer scale cosmic microwave background (CMB) experiments (CBI and ACBAR) and two structure formation measurements (2dFGRS and Lyman α forest) suggests a ΛCDM cosmological model with a running spectral power index of primordial density fluctuations. Motivated by this new result on the index of the primordial power spectrum, we present the first study of the predicted lensing probabilities of image separation in a spatially flat ΛCDM model with a running spectral index (RSI-ΛCDM model). It is shown that the RSI-ΛCDM model suppresses the predicted lensing probabilities at small splitting angles of less than about 4″ compared with the standard power-law ΛCDM (PL-ΛCDM) model.
21

Buis, Maarten L. "Predict and Adjust with Logistic Regression." Stata Journal: Promoting communications on statistics and Stata 7, no. 2 (June 2007): 221–26. http://dx.doi.org/10.1177/1536867x0700700206.

Abstract:
Within Stata there are two ways of getting average predicted values for different groups after an estimation command: adjust and predict. After OLS regression (regress), these two ways give the same answer. However, after logistic regression, the average predicted probabilities differ. This article discusses where that difference comes from and the consequent subtle difference in interpretation.
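The difference Buis describes comes from the nonlinearity of the inverse logit: the average of the individual predicted probabilities (what predict averages) is generally not the probability evaluated at the average covariate value (what adjust reports). A self-contained illustration with invented coefficients and covariate values:

```python
import math

def inv_logit(xb):
    return 1.0 / (1.0 + math.exp(-xb))

# Invented logit fit and covariate values for one group
b0, b1 = -1.0, 1.0
xs = [0.0, 1.0, 2.0, 5.0]

# predict-style: average the individual predicted probabilities
avg_of_probs = sum(inv_logit(b0 + b1 * x) for x in xs) / len(xs)

# adjust-style: evaluate the probability at the average covariate value
prob_at_mean = inv_logit(b0 + b1 * sum(xs) / len(xs))

print(f"average of probabilities: {avg_of_probs:.3f}")
print(f"probability at the mean:  {prob_at_mean:.3f}")
```

For OLS the two coincide because the prediction function is linear; for logit they differ whenever the covariates vary (Jensen's inequality), which is the subtle interpretation difference the article traces.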
22

Wang, Z. L., and J. M. Cowley. "Generation of surface plasmons in a supported metal particle with an external electron beam: I. quantum theory." Proceedings, annual meeting, Electron Microscopy Society of America 45 (August 1987): 404–5. http://dx.doi.org/10.1017/s0424820100126810.

Abstract:
The theoretical prediction of surface plasmon frequencies for a supported metal particle system is based on the model shown in fig. 1. A spherical particle covered with a thin shell of oxide is half embedded in the support. Two surface plasmons (ωH, ωLO) and four surface plasmons (ω1 to ω4) have been predicted by using dielectric theory for the cases of a metal particle sitting on (A) a dielectric insulator and (B) a metal plate, respectively [1-2]. The relative excitation probabilities of those predicted surface plasmons have been calculated using the so-called coherent-state theory [3-4]. Two directions of incidence for the external electron beam are assumed, as indicated in fig. 1, P(||) and P(⊥): one is parallel to the Z axis and the other is perpendicular to it. The curves marked with H and L are the calculated excitation probabilities for ωH and ωLO as predicted in [1-2].
23

Shah, Sami Ullah, Marta Yebra, Albert I. J. M. Van Dijk, and Geoffrey J. Cary. "A New Fire Danger Index Developed by Random Forest Analysis of Remote Sensing Derived Fire Sizes." Fire 5, no. 5 (September 29, 2022): 152. http://dx.doi.org/10.3390/fire5050152.

Abstract:
Studies using remote sensing data for fire danger prediction have primarily relied on fire ignitions data to develop fire danger indices (FDIs). However, these data may only represent conditions suitable for ignition but may not represent fire danger conditions causing escalating fire size. The fire-related response variable’s scalability is a key factor that forms a basis for an FDI to include a broader range of fire danger conditions. Remote sensing derived fire size is a scalable fire characteristic encapsulating all possible fire sizes that previously occurred in the landscape, including extreme fire events. Consequently, we propose a new FDI that uses remote sensing derived fire size as a response variable. We computed fire sizes from the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite instrument burned area. We applied random forest (RF) and logistic regression (LR) to develop the FDI for Australia. RF models performed better than LR, and the higher predicted probabilities demonstrated higher chances for ignited fires to be escalated to larger fire sizes at a regional scale across Australia. However, the predicted probabilities cannot be related to the specific range of fire sizes due to data limitations. Further research with higher temporal and spatial resolution data of both the response and predictor variables can help establish a better relationship between a specific range of fire sizes and the predicted probabilities.
24

Preisler, Haiganoush K., David R. Brillinger, Robert E. Burgan, and J. W. Benoit. "Probability based models for estimation of wildfire risk." International Journal of Wildland Fire 13, no. 2 (2004): 133. http://dx.doi.org/10.1071/wf02061.

Abstract:
We present a probability-based model for estimating fire risk. Risk is defined using three probabilities: the probability of fire occurrence; the conditional probability of a large fire given ignition; and the unconditional probability of a large fire. The model is based on grouped data at the 1 km2-day cell level. We fit a spatially and temporally explicit non-parametric logistic regression to the grouped data. The probability framework is particularly useful for assessing the utility of explanatory variables, such as fire weather and danger indices for predicting fire risk. The model may also be used to produce maps of predicted probabilities and to estimate the total number of expected fires, or large fires, in a given region and time period. As an example we use historic data from the State of Oregon to study the significance and the forms of relationships between some of the commonly used weather and danger variables on the probabilities of fire. We also produce maps of predicted probabilities for the State of Oregon. Graphs of monthly total numbers of fires are also produced for a small region in Oregon, as an example, and expected numbers are compared to actual numbers of fires for the period 1989–1996. The fits appear to be reasonable; however, the standard errors are large indicating the need for additional weather or topographic variables.
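The three probabilities in this framework compose simply: the unconditional probability of a large fire in a cell-day is the occurrence probability times the conditional probability of growing large given ignition. A back-of-envelope sketch with made-up numbers (not estimates from the Oregon data):

```python
# Made-up cell-day probabilities for illustration
p_ignition = 0.02          # P(fire occurrence) in a 1 km^2-day cell
p_large_given_ign = 0.10   # P(large fire | ignition)

# Unconditional probability of a large fire in the cell-day
p_large = p_ignition * p_large_given_ign

# Expected number of large fires over many similar cell-days
n_cell_days = 50_000
expected_large = n_cell_days * p_large

print(round(p_large, 4), round(expected_large, 1))
```

Summing such per-cell probabilities over a region and time period is how the model produces the expected fire counts that the paper compares to actual numbers for 1989-1996.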
25

Becker, Nico, Henning W. Rust, and Uwe Ulbrich. "Predictive modeling of hourly probabilities for weather-related road accidents." Natural Hazards and Earth System Sciences 20, no. 10 (October 29, 2020): 2857–71. http://dx.doi.org/10.5194/nhess-20-2857-2020.

Abstract:
Impacts of weather on road accidents have been identified in several studies with a focus mainly on monthly or daily accident counts. This study investigates hourly probabilities of road accidents caused by adverse weather conditions in Germany on the spatial scale of administrative districts using logistic regression models. Including meteorological predictor variables from radar-based precipitation estimates, high-resolution reanalysis and weather forecasts improves the prediction of accident probability compared to models without weather information. For example, the percentage of correctly predicted accidents (hit rate) is increased from 30 % to 70 %, while keeping the percentage of wrongly predicted accidents (false-alarm rate) constant at 20 %. When using ensemble weather forecasts up to 21 h instead of radar and reanalysis data, the decline in model performance is negligible. Accident probability has a nonlinear relationship with precipitation. Given an hourly precipitation sum of 1 mm, accident probabilities are approximately 5 times larger at negative temperatures compared to positive temperatures. The findings are relevant in the context of impact-based warnings for road users, road maintenance, traffic management and rescue forces.
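The hit rate and false-alarm rate quoted here are standard contingency-table quantities, obtained by thresholding the predicted accident probability to issue a warning. A minimal version of the computation (the probabilities and outcomes are invented):

```python
# Hit rate = fraction of accident hours correctly warned;
# false-alarm rate = fraction of accident-free hours wrongly warned.
def hit_false_alarm(y_true, warn):
    hits = sum(t and w for t, w in zip(y_true, warn))
    misses = sum(t and not w for t, w in zip(y_true, warn))
    fas = sum((not t) and w for t, w in zip(y_true, warn))
    crs = sum((not t) and not w for t, w in zip(y_true, warn))
    return hits / (hits + misses), fas / (fas + crs)

# Threshold invented predicted probabilities at 0.5 to issue warnings
y_true = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]
probs  = [0.9, 0.2, 0.6, 0.4, 0.7, 0.1, 0.3, 0.8, 0.2, 0.6]
warn = [p >= 0.5 for p in probs]

hr, far = hit_false_alarm(y_true, warn)
print(f"hit rate {hr:.2f}, false-alarm rate {far:.2f}")
```

Sweeping the threshold trades hit rate against false-alarm rate; the paper's 70 % vs 30 % comparison is made at a fixed 20 % false-alarm rate.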
26

MacPherson, Leigh, Ivan David Haigh, Matthew Mason, Sarath Wijeratne, Charitha Pattiaratchi, and Steve George. "EXTREME WATER LEVEL EXCEEDANCE PROBABILITIES AROUND AUSTRALIA." Coastal Engineering Proceedings 1, no. 33 (December 15, 2012): 53. http://dx.doi.org/10.9753/icce.v33.currents.53.

Abstract:
The potential impacts of extreme water level events on our coasts are increasing as populations grow and sea levels rise. To better prepare for the future, coastal engineers and managers need accurate estimates of average exceedance probabilities for extreme water levels. In this paper, we estimate present day probabilities of extreme water levels around the entire coastline of Australia. Tides and storm surges generated by extra-tropical storms were included by creating a 61-year (1949-2009) hindcast of water levels using a high resolution depth averaged hydrodynamic model driven with meteorological data from a global reanalysis. Tropical cyclone-induced surges were included through numerical modelling of a database of synthetic tropical cyclones equivalent to 10,000 years of cyclone activity around Australia. Predicted water level data was analysed using extreme value theory to construct return period curves for both the water level hindcast and synthetic tropical cyclone modelling. These return period curves were then combined by taking the highest water level at each return period.
27

Dehbi, Y., N. Gojayeva, A. Pickert, J. H. Haunert, and L. Plümer. "ROOM SHAPES AND FUNCTIONAL USES PREDICTED FROM SPARSE DATA." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-4 (September 19, 2018): 33–40. http://dx.doi.org/10.5194/isprs-annals-iv-4-33-2018.

Abstract:
Many researchers used expensive 3D laser scanning techniques to derive indoor models. Few papers describe the derivation of indoor models based on sparse data such as footprints. They assume that floorplans and rooms are rather rectangular and that information on functional use is given. This paper addresses the automatic learning of a classifier which predicts the functional use of housing rooms. The classification is based on features which are widely available, such as room areas and orientation. These features are extracted from an extensive database of annotated rooms. A Bayesian classifier is applied which delivers probabilities of competing class hypotheses. In a second step, functional uses are used to predict the shape of the rooms in a further classification.
APA, Harvard, Vancouver, ISO, and other styles
28

Hasan, A. K. "Shell Model Calculations for 18,19,20O Isotopes by Using USDA and USDB Interactions." Ukrainian Journal of Physics 63, no. 3 (April 20, 2018): 189. http://dx.doi.org/10.15407/ujpe63.3.189.

Full text
Abstract:
The shell model (SM) is used to calculate the energy levels and transition probabilities B(E2) for the 18,19,20O isotopes. Two interactions (USDA and USDB) are used in the SDPN model space. We assume that all possible many-nucleon configurations are defined by the 0d5/2, 1s1/2, and 0d3/2 states above the 16O doubly magic nucleus. The available empirical data are in good agreement with the theoretical energy level predictions. Spins and parities were affirmed for new levels, and the transition probabilities B(E2; ↓) are predicted.
APA, Harvard, Vancouver, ISO, and other styles
29

Tuovinen, Santeri, Runlong Cai, Veli-Matti Kerminen, Jingkun Jiang, Chao Yan, Markku Kulmala, and Jenni Kontkanen. "Survival probabilities of atmospheric particles: comparison based on theory, cluster population simulations, and observations in Beijing." Atmospheric Chemistry and Physics 22, no. 22 (November 25, 2022): 15071–91. http://dx.doi.org/10.5194/acp-22-15071-2022.

Full text
Abstract:
Atmospheric new particle formation (NPF) events are regularly observed in urban Beijing, despite high concentrations of background particles which, based on theory, should inhibit NPF due to high values of coagulation sink (CoagS). The survival probability, which depends on both CoagS and particle growth rate (GR), is a key parameter in determining the occurrence of NPF events, as it describes the fraction of newly formed particles that survive from a smaller diameter to a larger diameter. In this study, we investigate and compare survival probabilities from 1.5 to 3 nm (J3/J1.5), from 3 to 6 nm (J6/J3), and from 6 to 10 nm (J10/J6) based on analytical formulae, cluster population simulations, and atmospheric observations from Beijing. We find that survival probabilities based on the cluster population simulations and one of the analytical formulae are in good agreement. However, at low ratios between the background condensation sink (CS) and GR, and at high concentrations of sub-3 nm clusters, cluster–cluster collisions efficiently lower survival probabilities in the cluster population simulations. Due to the large concentrations of clusters and small particles required to considerably affect the survival probabilities, we consider it unlikely that cluster–cluster collisions significantly affect atmospheric survival probabilities. The values of J10/J6 observed in Beijing show high variability, most likely due to influences of primary particle emissions, but are on average in relatively good agreement with the values based on the simulations and the analytical formulae. The observed values of J6/J3 are mostly lower than those predicted based on the simulations and the analytical formulae, which could be explained by uncertainties in CS and GR. The observed values of J3/J1.5 at high CS/GR are much higher than predicted based on the simulations and the analytical formulae. We argue that uncertainties in GR or CS are unlikely to solely explain the observed values of J3/J1.5 under high CS conditions. Thus, further work is needed to better understand the factors influencing survival probabilities of sub-3 nm atmospheric particles in polluted environments.
APA, Harvard, Vancouver, ISO, and other styles
30

Jiang, Zhengbao, Jun Araki, Haibo Ding, and Graham Neubig. "How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering." Transactions of the Association for Computational Linguistics 9 (2021): 962–77. http://dx.doi.org/10.1162/tacl_a_00407.

Full text
Abstract:
Recent works have shown that language models (LMs) capture different types of knowledge regarding facts or common sense. However, because no model is perfect, they still fail to provide appropriate answers in many cases. In this paper, we ask the question, “How can we know when language models know, with confidence, the answer to a particular query?” We examine this question from the point of view of calibration, the property that a probabilistic model’s predicted probabilities are actually well correlated with the probabilities of correctness. We examine three strong generative models—T5, BART, and GPT-2—and study whether their probabilities on QA tasks are well calibrated, finding the answer to be a relatively emphatic no. We then examine methods to calibrate such models so that their confidence scores correlate better with the likelihood of correctness, through fine-tuning, post-hoc probability modification, or adjustment of the predicted outputs or inputs. Experiments on a diverse range of datasets demonstrate the effectiveness of our methods. We also perform analysis to study the strengths and limitations of these methods, shedding light on further improvements that may be made in methods for calibrating LMs. We have released the code at https://github.com/jzbjyb/lm-calibration.
APA, Harvard, Vancouver, ISO, and other styles
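The calibration property examined in the abstract above — predicted probabilities tracking the empirical rate of correctness — is commonly quantified with expected calibration error (ECE). The following is a generic sketch of that metric, not the authors' released code; the equal-width binning scheme and example numbers are assumptions.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then take the weighted average
    of the gap between mean confidence and empirical accuracy per bin."""
    bins = [[] for _ in range(n_bins)]
    for conf, y in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into top bin
        bins[idx].append((conf, y))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(y for _, y in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece

# A well-calibrated model has ECE near 0; here the model is underconfident
# on the 0.9-confidence answers and overconfident on the 0.6 ones (ECE ~ 0.1).
ece = expected_calibration_error([0.9, 0.9, 0.6, 0.6], [1, 1, 1, 0])
```

A low ECE is what "well calibrated" means operationally: within each confidence band, the stated probability matches the fraction of answers that are actually correct.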
31

Yoo, Shi Yong. "The Valuation of the Electricity Future Contract Under Weather Uncertainty." Journal of Derivatives and Quantitative Studies 12, no. 2 (November 30, 2004): 127–55. http://dx.doi.org/10.1108/jdqs-02-2004-b0006.

Full text
Abstract:
This paper is concerned with the effects of weather uncertainty on the electricity future curve. Following the approach used by Lucia and Schwartz (2002), the behavior of the underlying spot price is assumed to consist of two components: a totally predictable deterministic component that accounts for regularities in the evolution of prices, and a stochastic component that accounts for the behavior of residuals from the deterministic part. The weather uncertainty is modeled consistently with seasonal probabilities from the CPC (Climate Prediction Center) outlook. For a given realization of temperature, the electricity load can be predicted very accurately by a time series model using temperature and other explanatory variables. Furthermore, if temperature and electricity load are known, the spot price can be predicted as well using the regime switching model with time-varying transition probabilities. The electricity future price can be calculated for the given seasonal probabilities from the CPC outlook. Then the electricity future price can be obtained as the arithmetic average of the one-day electricity future price. The future price clearly reflects the response of the spot price to different weather patterns. As the summer gets warmer, the high price regime is more likely to be realized, and as a result, the future price increases.
APA, Harvard, Vancouver, ISO, and other styles
32

Brickman, David, Gudrun Marteinsdottir, Kai Logemann, and Ingo H. Harms. "Drift probabilities for Icelandic cod larvae." ICES Journal of Marine Science 64, no. 1 (November 10, 2006): 49–59. http://dx.doi.org/10.1093/icesjms/fsl019.

Full text
Abstract:
Brickman, D., Marteinsdottir, G., Logemann, K., and Harms, I. H. 2007. Drift probabilities for Icelandic cod larvae – ICES Journal of Marine Science, 64, 49–59. The climatological distribution of juvenile Icelandic cod is characterized by a negative spatial age gradient, with a fairly abrupt decrease in age near the northwest corner of Iceland, and a spatial abundance gradient with higher concentrations of 0-group fish inshore. Flowfields from a high-resolution circulation model developed for Icelandic waters were used to investigate larval drift from the various spawning grounds in Icelandic coastal waters to understand the distribution of 0-group fish. To present the results clearly, drift probability density functions (pdfs) are derived describing the probability of drifting from a given spawning ground to a given spatial region over a specified time interval. These pdfs are used to determine the spawning grounds most probably contributing to the observed age distribution. The observed spatial gradient in age is likely due to differences in the spawning location of larvae, with older larvae originating in spawning grounds in the southwest and younger larvae from farther north. In general, the contribution from the main spawning grounds in the southwest is predicted to decrease with clockwise distance from the source region. The pdf technique was also used to investigate drift from regions on the south coast of Iceland corresponding to known or possible subpopulation spawning grounds, to see whether these spawning areas are associated with distinct drift patterns. This technique is a useful way to present larval drift results and to facilitate comparison with real data.
APA, Harvard, Vancouver, ISO, and other styles
33

Durnford, Andrew J., Fenella J. Kirkham, Nijaguna Mathad, and Owen C. E. Sparrow. "Endoscopic third ventriculostomy in the treatment of childhood hydrocephalus: validation of a success score that predicts long-term outcome." Journal of Neurosurgery: Pediatrics 8, no. 5 (November 2011): 489–93. http://dx.doi.org/10.3171/2011.8.peds1166.

Full text
Abstract:
Object The goal of this study was to externally validate the proposed Endoscopic Third Ventriculostomy Success Score (ETVSS), which predicts successful treatment for hydrocephalus on the basis of a child's individual characteristics. Methods The authors retrospectively identified 181 cases of consecutive endoscopic third ventriculostomy (ETV) performed in children at a single neurosurgery center in the United Kingdom. They compared actual success at both 6 and 36 months, with mean predicted probabilities for low, moderate, and high chance of success strata based on the ETVSS. Long-term success was calculated using Kaplan-Meier methods and comparisons were made by means of unpaired t-tests. Results Overall, 166 primary ETVs were performed; ETV success was 72.9% at 6 and 64.5% at 36 months. At long-term follow-up, the mean predicted probability of success was significantly higher in those with a successful ETV (99 patients) than in those with a failed ETV (67 patients) (p = 0.001). The ETVSS accurately predicted outcome at 36 months; the low, medium, and high chance of success strata had mean predicted probabilities of success of 82%, 63%, and 36%, and actual success of 76%, 66%, and 42%, respectively. The overall complication rate was 6%. Conclusions The ETVSS closely predicted the overall long-term success rates in high-, moderate-, and low-risk groups. The results of this study suggest that the ETVSS will aid clinical decision making in predicting outcome of ETV.
APA, Harvard, Vancouver, ISO, and other styles
34

Hasan, A. K., F. H. Obeed, and A. N. Rahim. "Positive Parity Levels of 21,23Na Isotopes by Using the Nuclear Shell." Ukrainian Journal of Physics 65, no. 1 (February 3, 2020): 3. http://dx.doi.org/10.15407/ujpe65.1.3.

Full text
Abstract:
The energy levels and transition probabilities B(E2; ↓) and B(M1; ↓) have been investigated for the 21,23Na isotopes by using the USDA and USDB interactions in the sd-shell model space. In the calculations of the shell model, it has been assumed that all possible many-nucleon configurations are specified by the 0d5/2, 1s1/2, and 0d3/2 states above the 16O doubly magic nucleus. The available empirical data are in good agreement with the predicted theoretical energy levels. Spins and parities are affirmed for new levels, and the transition probabilities B(E2; ↓) and B(M1; ↓) are predicted as well.
APA, Harvard, Vancouver, ISO, and other styles
35

Sekuła, Przemysław, Zachary Vander Laan, Kaveh Farokhi Sadabadi, and Mirosław J. Skibniewski. "Predicting Work Zone Collision Probabilities via Clustering: Application in Optimal Deployment of Highway Response Teams." Journal of Advanced Transportation 2018 (September 18, 2018): 1–16. http://dx.doi.org/10.1155/2018/3179207.

Full text
Abstract:
This paper proposes a clustering approach to predict the probability of a collision occurring in the proximity of planned road maintenance operations (i.e., work zones). The proposed method is applied to over 54,000 short-term work zones in the state of Maryland and demonstrates an ability to predict work zone collision probabilities. One of the key applications of this work is using the predicted probabilities at the operational level to help allocate highway response teams. To this end, a two-stage stochastic program is used to locate response vehicles on the Maryland highway network in order to minimize expected response times.
APA, Harvard, Vancouver, ISO, and other styles
36

Campbell, Heather M., Allison Murata, Gerald A. Charlton, and Glen H. Murata. "Development and validation of prediction rules to target care intensification in veteran patients with diabetes." BMJ Health & Care Informatics 27, no. 1 (June 2020): e100130. http://dx.doi.org/10.1136/bmjhci-2020-100130.

Full text
Abstract:
Background: Diabetes affects 30.3 million people in the USA. Among these people, a major risk factor for microvascular complications is having a glycated haemoglobin (HbA1c) value of ≥75 mmol/mol; therefore, it would be helpful to identify patients who will obtain future HbA1c values of <75 mmol/mol. Objectives: To develop and validate two prediction rules among patients with diabetes having a baseline HbA1c value of ≥75 mmol/mol: (1) HbA1c measurement ever <75 mmol/mol and (2) final HbA1c measurement of <75 mmol/mol. Methods: Retrospective cohort study using a registry extracting data from the Department of Veterans Affairs’ (VA’s) electronic health records system. Baseline was 1 July 2013–30 June 2014; patients were followed up until 31 July 2016. Results: Our population consisted of 145 659 patients. Across models, predictors were age, sex, minority status, baseline HbA1c value, time, HbA1c ≥75 mmol/mol, receiving insulin treatment and consecutive number of HbA1c values of 75 mmol/mol. The overall likelihood of a patient ever having an HbA1c <75 mmol/mol was 73.65%; with the rule, predicted probabilities were 38.94%, 50.75% and 78.88%. The overall likelihood of patients having a final HbA1c measurement of <75 mmol/mol was 55.35%; the rule provided predicted probabilities of 29.93%, 50.17% and 68.58%. Conclusions: Within each rule, there were similar observed and predicted tertile probabilities; maintaining HbA1c values of <75 mmol/mol resulted in probability shifts in the majority of patients. We recommend psychosocial screening for the 15% of patients for whom there is less than a one-third chance of maintaining HbA1c <75 mmol/mol. We plan to conduct additional research to see whether this approach helps.
APA, Harvard, Vancouver, ISO, and other styles
37

Mishalani, Rabi G., Abdollah Shafieezadeh, and Zequn Li. "Updating Bridge Deck Condition Transition Probabilities as New Inspection Data Are Collected: Methodology and Empirical Evaluation." Transportation Research Record: Journal of the Transportation Research Board 2672, no. 12 (December 2018): 93–102. http://dx.doi.org/10.1177/0361198118796003.

Full text
Abstract:
A Bayesian updating method is proposed to estimate a Markov chain based concrete deck deterioration model in a manner that combines condition data collected over two consecutive inspections and the deterioration information available prior to the collection of these data. A dataset of bridge deck condition assessments based on AASHTO condition state definitions collected by a state infrastructure agency spanning two years is used to evaluate the performance of this method. Training and validation datasets are selected from the original dataset where the former is used for estimation and the latter for prediction and evaluation. Single period transition probabilities are estimated using Bayesian updating, where prior deterioration information is combined with the condition data, and maximum likelihood estimation where only the collected condition data over two consecutive inspections are used. The evaluation is based on measuring the degree of similarity between reported condition states and those predicted based on the estimated transition probabilities using the two estimation methods. While updating transition probabilities as new data are collected is found to be advantageous for many cases, this advantage is highly dependent on the extent to which the training dataset is representative of the deterioration nature of the bridge decks for which condition is to be predicted. The less representative the training dataset, the more value is derived from Bayesian updating based predictions where prior deterioration information is considered.
APA, Harvard, Vancouver, ISO, and other styles
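The Bayesian updating described in the abstract above — combining prior deterioration information with transition counts observed between two consecutive inspections — is commonly realised as a conjugate Dirichlet-multinomial update applied row by row to the transition matrix. The sketch below illustrates that standard construction, not the authors' exact formulation; the pseudo-count encoding of the prior and the example numbers are assumptions.

```python
def update_transition_probs(prior_pseudocounts, observed_counts):
    """Posterior-mean single-period transition probabilities for a Markov
    deterioration model: each row has a Dirichlet prior (expressed as
    pseudo-counts) that is updated with observed state-to-state transition
    counts from consecutive inspections."""
    posterior = []
    for prior_row, obs_row in zip(prior_pseudocounts, observed_counts):
        combined = [p + o for p, o in zip(prior_row, obs_row)]
        total = sum(combined)
        posterior.append([c / total for c in combined])
    return posterior

# Prior information says decks mostly stay in condition state 0; the new
# inspection data suggest faster deterioration, pulling the posterior
# estimate toward the data in proportion to how much data was collected.
prior = [[8.0, 2.0], [0.0, 10.0]]   # pseudo-counts per current state (rows)
observed = [[2, 8], [0, 5]]         # observed transitions between inspections
posterior = update_transition_probs(prior, observed)
```

The weight of the prior relative to the data is set by the magnitude of the pseudo-counts, which mirrors the paper's finding that the value of updating depends on how representative the prior (training) information is.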
38

Wang, Yang. "Dynamic Implications of Subjective Expectations: Evidence from Adult Smokers." American Economic Journal: Applied Economics 6, no. 1 (January 1, 2014): 1–37. http://dx.doi.org/10.1257/app.6.1.1.

Full text
Abstract:
We set up a dynamic discrete choice model with subjective expectations data to explain adult smokers’ smoking decisions. We find important differences between subjective survival probabilities and those estimated using observed mortality data. Subjectively, individuals attach less weight to their health conditions and smoking choices and more weight to such factors as age, race, and parents’ longevity. Moreover, adult smokers are found to care more about their health and to be more forward-looking than predicted by a rational expectations framework. We further show the importance of unobserved heterogeneity in agents’ subjective survival probabilities, and discuss policy implications of subjective expectations. (JEL D12, D84, I12)
APA, Harvard, Vancouver, ISO, and other styles
39

Rodríguez, Lissette Guzmán, Vagner Anabor, Franciano Scremin Puhales, and Everson Dal Piva. "ESTIMATIVA DA PROBABILIDADE DE OCORRÊNCIA DE PRECIPITAÇÃO, A PARTIR DE TÉCNICAS ESTATÍSTICAS NÃO PARAMÉTRICAS APLICADAS A SIMULAÇÕES NUMÉRICAS DE WRF. UM CASO DE ESTUDO." Ciência e Natura 38 (July 20, 2016): 491. http://dx.doi.org/10.5902/2179460x20193.

Full text
Abstract:
In this paper, kernel density estimation (KDE), a nonparametric method for estimating the probability density function of a random variable, was used to obtain a probabilistic precipitation forecast from an ensemble prediction with the WRF model. The nine members of the ensemble were obtained by varying the convective parameterization of the model, for a heavy precipitation event in southern Brazil. The estimated probabilities obtained for periods of 3 and 24 hours, and for various precipitation thresholds, were compared with the precipitation estimated by TRMM, without showing a clear morphological correspondence between them. For 24-hour accumulations, it was possible to compare against specific values from the INMET observations, finding better coherence between the observations and the predicted probabilities. Skill scores were calculated from contingency tables for different ranks of probabilities; the forecast of heavy rain had a higher proportion correct in all ranks of probabilities, and precipitation forecast with a probability of 75%, for any threshold, did not produce false alarms. Furthermore, lower-intensity precipitation with marginal probability was over-forecast, also showing a higher false-alarm index.
APA, Harvard, Vancouver, ISO, and other styles
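The KDE-based probabilistic forecast described in the abstract above amounts to placing a kernel on each ensemble member's predicted precipitation and integrating the density beyond a threshold. A minimal pure-Python sketch with a Gaussian kernel follows; the bandwidth and example values are assumptions for illustration, not taken from the paper.

```python
import math

def kde_exceedance_prob(members, threshold, bandwidth):
    """P(precip > threshold) under a Gaussian KDE fitted to the ensemble
    members: the average of each kernel's survival function at the
    threshold (closed form, so no numerical integration is needed)."""
    def normal_sf(z):  # standard normal survival function via erf
        return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))
    return sum(normal_sf((threshold - x) / bandwidth) for x in members) / len(members)

# Nine hypothetical ensemble members' 24 h accumulations (mm); the
# exceedance probability decreases as the threshold rises.
members = [12.0, 15.0, 18.0, 20.0, 22.0, 25.0, 30.0, 35.0, 50.0]
p10 = kde_exceedance_prob(members, 10.0, bandwidth=5.0)
p40 = kde_exceedance_prob(members, 40.0, bandwidth=5.0)
```

Evaluating this at each grid point and threshold yields probability maps of the kind the paper verifies against TRMM estimates and INMET station observations.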
40

Metzler, Adam, and Alexandre Scott. "Importance Sampling in the Presence of PD-LGD Correlation." Risks 8, no. 1 (March 10, 2020): 25. http://dx.doi.org/10.3390/risks8010025.

Full text
Abstract:
This paper seeks to identify computationally efficient importance sampling (IS) algorithms for estimating large deviation probabilities for the loss on a portfolio of loans. Related literature typically assumes that realised losses on defaulted loans can be predicted with certainty, i.e., that loss given default (LGD) is non-random. In practice, however, LGD is impossible to predict and tends to be positively correlated with the default rate; the latter phenomenon is typically referred to as PD-LGD correlation (here PD refers to probability of default, which is often used synonymously with default rate). There is a large literature on modelling stochastic LGD and PD-LGD correlation, but there is a dearth of literature on using importance sampling to estimate large deviation probabilities in those models. Numerical evidence indicates that the proposed algorithms are extremely effective at reducing the computational burden associated with obtaining accurate estimates of large deviation probabilities across a wide variety of PD-LGD correlation models that have been proposed in the literature.
APA, Harvard, Vancouver, ISO, and other styles
41

Skylark, William J., and Sidharth Prabhu-Naik. "A new test of the risk-reward heuristic." Judgment and Decision Making 13, no. 1 (January 2018): 73–78. http://dx.doi.org/10.1017/s1930297500008834.

Full text
Abstract:
Risk and reward are negatively correlated in a wide variety of environments, and in many cases this trade-off approximates a fair bet. Pleskac and Hertwig (2014) recently proposed that people have internalized this relationship and use it as the basis for probability estimation and subsequent choice under conditions of uncertainty. Specifically, they showed that risky options with high-value outcomes are inferred to have lower probability than options offering a less valuable reward. We report two experiments that test a simple corollary of this idea. In both studies, participants estimated the magnitude of prizes offered by lotteries with known win-probabilities. The relationship between estimates and probabilities followed the power relationship predicted by the risk-reward heuristic, albeit with a tendency to overestimate outcome magnitude. In addition, people’s estimates predicted their willingness to take the gamble. Our results provide further evidence that people have internalized the ecological relationship between risk and reward in financial lotteries, and we suggest that this relationship exerts a wide-ranging influence on decision-making.
APA, Harvard, Vancouver, ISO, and other styles
42

Querejeta, Miguel. "On the Eclipse of Thales, Cycles and Probabilities." Culture and Cosmos 15, no. 01 (June 2011): 5–16. http://dx.doi.org/10.46472/cc.0115.0203.

Full text
Abstract:
According to classical tradition, Thales of Miletus predicted the total solar eclipse that took place on 28 May 585 bce. Even if some authors have flatly denied the possibility of such a prediction, others have struggled to find cycles which would justify the achievement of the philosopher. Some of the proposed cycles have already been refuted, but two of them, namely those of Willy Hartner and Dirk Couprie, remain unchallenged. This paper presents some important objections to these two possibilities, based on the fact that these authors do not list all the eclipses potentially visible by their criteria. In addition, any explanation based on cycles will need to face the complex problem of visibility (smallest observable eclipse, weather…). The present article also includes a statistical study on the predictability of solar eclipses for a variety of periods, similar to that performed by Willy Hartner for lunar eclipses, resulting in lower probabilities in the solar case (and percentages depend on the chosen magnitude limit). The conclusion is that none of the cycles proposed so far provides a satisfactory explanation of the prediction, and, on statistical grounds, none of the periods studied leads to a significant probability of success with solar eclipse cycles.
APA, Harvard, Vancouver, ISO, and other styles
43

Vaidogas, Egidijus Rytas. "EXPLOSIVE DAMAGE TO INDUSTRIAL BUILDINGS: ASSESSMENT BY RESAMPLING LIMITED EXPERIMENTAL DATA ON BLAST LOADING." JOURNAL OF CIVIL ENGINEERING AND MANAGEMENT 11, no. 4 (December 31, 2005): 251–66. http://dx.doi.org/10.3846/13923730.2005.9636356.

Full text
Abstract:
Assessment of damage to industrial buildings due to accidental explosions in air is considered. It is suggested to formulate the result of the damage assessment in the form of risk. The expression of risk embraces probabilities of foreseeable damage events (damage probabilities) and outcomes (consequences) of suffering these events. The situation is examined when blast loading imposed by an accidental explosion is predicted by a series of experiments, which yield a small-size sample of blast loading characteristics. It is suggested to idealise the formation of explosive damage to industrial buildings by means of event tree diagrams. A quantitative analysis of these diagrams can be carried out by developing fragility functions for their branching points. Each branching point is used to represent a structural failure contributing to the final explosive damage. The fragility functions are applied to expressing the conditional probabilities of explosive damage. With these probabilities, a technique of frequentist (Fisherian) inference is applied to assessing the explosive damage. This technique is called statistical resampling (Efron's bootstrap) and is applied as a practical, albeit not equivalent, alternative to the Bayesian approaches. It is shown that statistical resampling is capable of yielding confidence intervals of damage probabilities and can be applied almost automatically. It operates without using cumbersome methods of statistical inference developed in classical statistics. The bootstrap confidence intervals do not contain any subjective information except the degree of confidence for which these intervals are computed; the degree of confidence must be chosen by the engineer. The bootstrap confidence intervals are applied to estimating damage probabilities on the basis of the small-size sample of blast loading characteristics. An estimate of the risk of explosive damage is expressed as a set of bootstrap confidence intervals computed for damage probabilities and related outcomes of this damage.
APA, Harvard, Vancouver, ISO, and other styles
44

Hultgren, Gustav, Leo Myrén, Zuheir Barsoum, and Rami Mansour. "Digital Scanning of Welds and Influence of Sampling Resolution on the Predicted Fatigue Performance: Modelling, Experiment and Simulation." Metals 11, no. 5 (May 18, 2021): 822. http://dx.doi.org/10.3390/met11050822.

Full text
Abstract:
Digital weld quality assurance systems are increasingly used to capture local geometrical variations that can be detrimental for the fatigue strength of welded components. In this study, a method is proposed to determine the required scanning sampling resolution for proper fatigue assessment. Based on FE analysis of laser-scanned welded joints, fatigue failure probabilities are computed using a Weakest-link fatigue model with experimentally determined parameters. By down-sampling of the scanning data in the FE simulations, it is shown that the uncertainty and error in the fatigue failure probability prediction increases with decreased sampling resolution. The required sampling resolution is thereafter determined by setting an allowable error in the predicted failure probability. A sampling resolution of 200 to 250 μm has been shown to be adequate for the fatigue-loaded welded joints investigated in the current study. The resolution requirements can be directly incorporated in production for continuous quality assurance of welded structures. The proposed probabilistic model used to derive the resolution requirement accurately captures the experimental fatigue strength distribution, with a correlation coefficient of 0.9 between model and experimental failure probabilities. This work therefore brings novelty by deriving sampling resolution requirements based on the influence of stochastic topographical variations on the fatigue strength distribution.
APA, Harvard, Vancouver, ISO, and other styles
45

Prakasam, Geetha Rani, Mukesh, and Gopinathan R. "Enrolment by academic discipline in higher education: differential and determinants." Journal of Asian Business and Economic Studies 26, no. 2 (December 2, 2019): 265–85. http://dx.doi.org/10.1108/jabes-12-2018-0104.

Full text
Abstract:
Purpose Enrolling in an academic discipline or selecting the college major choice is a dynamic process. Very few studies examine this aspect in India. This paper makes a humble attempt to fill this gap using NSSO 71st round data on social consumption on education. The purpose of this paper is to use multinomial regression model to study the different factors that influence course choice in higher education. The different factors (given the availability of information) considered relate to ability, gender, cost of higher education, socio-economic and geographical location. The results indicate that gender polarization is apparent between humanities and engineering. The predicated probabilities bring out the dichotomy between the choice of courses and levels of living expressed through consumption expenditures in terms of professional and non-professional courses. Predicted probabilities of course choices bring in a clear distinction between south and west regions preferring engineering and other professional courses, whereas north, east and NES prefer humanities. Design/methodology/approach The present paper follows the same approach as that of Turner and Bowen (1999). The Multinomial regression is specified as P ( M i = j ) = ( exp ( β j × X i ) / ∑ j − 1 5 exp ( β j × X i ) ) , where P (Mi=j) denotes the probability of choosing outcome j, the particular course/major choice that categorizes different disciplines. This response variable is specified with five categories: such as medicine, engineering, other professional courses, science and humanities. The authors’ primary interest is to determine the factors governing an individual’s decision to choose a particular subject field as compared to humanities. In other words, to make the system identifiable in the MLR, humanities is treated as a reference category. The vector Xi includes the set of explanatory variables and βj refers to the corresponding coefficients for each of the outcome j. 
From an aggregate perspective, the distribution of course choices is an important input to the skill (technical skills) composition of future workforce. In that sense, except humanities, the rest of the courses are technical-intensive courses; hence, humanities is treated as a reference category. Findings The results indicate that gender polarization is apparent between humanities and engineering. The predicated probabilities bring out the dichotomy between the choice of courses and levels of living expressed through consumption expenditures in terms of professional and non-professional courses. Predicted probabilities of course choices bring in a clear distinction between south and west regions preferring engineering and other professional courses, whereas north, east and NES prefer humanities. Research limitations/implications Predicted probabilities of course choices bring in a clear distinction between south and west regions preferring engineering and other professional courses, whereas north, east and NES prefer humanities. This course and regional imbalance need to be worked with multi-pronged strategies of providing both access to education and employment opportunities in other states. But the predicted probabilities of medicine and science remain similar across the board. Very few research studies on the determinants of field choice in higher education prevail in India. Research studies on returns to education by field or course choices hardly exist in India. These evidences are particularly important to know which course choices can support student loans, which can be the future area of work. Practical implications The research evidence is particularly important to know which course choices can support student loans, which can be the future area of work, as well as how to address the gender bias in the course choices. Social implications The paper has social implications in terms of giving insights into the course choices of students. 
These findings carry implications for practice in their ability to predict the demand for course choices and the share of that demand, not only in the labor market but also across regions. India has 36 states/UTs, each with a huge population and a large geographical area. The choice of course has state-specific influences because of the nature of each state's economy, society, culture and inherent education system. Further, within states, rural-urban variation also has a serious influence on the choice of courses.
Originality/value
The present study adds value on three counts. First, the choice of courses includes the recent trend of preference for market-oriented/technical courses such as medicine, engineering and other professional courses (chartered accountancy and similar courses, courses from Industrial Training Institutes, recognized vocational training institutes, etc.), examined in relation to the choice of conventional subjects. Second, the socio-economic background of students plays a significant role in the choice of courses. Third, the present paper uses the latest data on Social Consumption on Education.
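The multinomial logit specification above can be sketched as follows. The coefficient values and covariates are purely illustrative placeholders, not the paper's estimates; the structure (a softmax over five course categories with humanities as the zero-coefficient reference) is what the abstract describes.

```python
import numpy as np

COURSES = ["humanities", "medicine", "engineering", "other_professional", "science"]

def predicted_probabilities(x, betas):
    """P(M_i = j) = exp(b_j . x_i) / sum_k exp(b_k . x_i).
    x: covariate vector (ability, gender, expenditure, ...).
    betas: dict mapping each non-reference course to its coefficient vector;
    humanities is the reference category with coefficients fixed at zero."""
    scores = {"humanities": 0.0}                      # reference category
    for course, b in betas.items():
        scores[course] = float(np.dot(b, x))
    m = max(scores.values())                          # stabilise the softmax
    exps = {c: np.exp(s - m) for c, s in scores.items()}
    total = sum(exps.values())
    return {c: exps[c] / total for c in COURSES}

# Illustrative covariates: [ability, female, log(consumption expenditure)]
x = np.array([0.6, 1.0, 0.8])
betas = {
    "medicine": np.array([1.2, 0.1, 0.9]),
    "engineering": np.array([1.0, -0.8, 1.1]),
    "other_professional": np.array([0.4, -0.2, 0.7]),
    "science": np.array([0.8, 0.0, 0.3]),
}
probs = predicted_probabilities(x, betas)
```

The predicted probabilities necessarily sum to one across the five categories, which is what makes them convenient for comparing course-choice patterns across regions or expenditure levels.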
APA, Harvard, Vancouver, ISO, and other styles
46

Aylmore, L. A. G., and H. J. Di. "Predicting the probabilities of groundwater contamination by pesticides under variable recharge." Soil Research 38, no. 3 (2000): 591. http://dx.doi.org/10.1071/sr99060.

Full text
Abstract:
A major deficiency in many models used to assess the groundwater contamination potential of organic pesticides used in agricultural and horticultural systems has been the assumption of a constant average groundwater recharge rate. This paper describes an enhanced version of a simple model designed to screen out or identify pesticides with a high probability of causing groundwater contamination in a region (e.g. a catchment, farm, or cropping area). The enhanced model allows the temporal variability of recharge to be taken into account and illustrates how a seasonal pattern of leaching shifts predicted mobilities and persistence away from the values obtained by averaging the recharge rate. The model is available as a user-friendly software package (PESTSCRN 3). It was formulated on the following assumptions: (1) linear, equilibrium, and reversible sorption; (2) first-order breakdown or degradation; (3) pesticide leaching by steady convective flow; and (4) a recharge rate that varies with time. Random values of the required input parameters are generated from probability distributions specified by their means (m) and standard deviations (s), assuming normal distributions. Outputs provide a statistical analysis of travel time and of the fraction of pesticide remaining at different soil depths. Most travel-time distributions are near normal with slight positive skewness and can be approximated as normal distributions. The frequency distributions of residue fractions for all pesticides examined to date show significant positive skewness (Beta distributions), with the peak frequency towards the lower bound of 0. Simulations using metalaxyl as an example demonstrate that, depending on recharge conditions, using daily data instead of mean recharge data can produce important differences in the predicted travel times and residue percentages.
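The four model assumptions can be combined into a minimal Monte Carlo sketch. All parameter values, distributions, and the recharge series below are illustrative, not taken from PESTSCRN 3: sorption enters as a retardation factor R, degradation is first-order, and the pesticide front advances through daily (time-varying) recharge rather than an annual average.

```python
import math, random

def simulate(depth_m, theta, bulk_density, kd_mean, kd_sd,
             half_life_mean, half_life_sd, daily_recharge_m, n=1000, seed=1):
    """Monte Carlo over Kd and half-life; returns (travel_time_days, residue) pairs."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        kd = max(rng.gauss(kd_mean, kd_sd), 0.0)                # sorption coeff., L/kg
        hl = max(rng.gauss(half_life_mean, half_life_sd), 1.0)  # half-life, days
        R = 1.0 + bulk_density * kd / theta                     # retardation factor
        needed = depth_m * theta * R                            # water needed to move the front
        drained, day = 0.0, 0
        while drained < needed:                                 # march through daily recharge
            drained += daily_recharge_m[day % len(daily_recharge_m)]
            day += 1
        k = math.log(2.0) / hl                                  # first-order decay rate
        residue = math.exp(-k * day)                            # fraction remaining at depth
        out.append((day, residue))
    return out

# Seasonal recharge: a wet half-year followed by a dry half-year (illustrative)
recharge = [0.004] * 180 + [0.0005] * 185
results = simulate(1.0, 0.35, 1.4, kd_mean=0.5, kd_sd=0.1,
                   half_life_mean=40.0, half_life_sd=10.0,
                   daily_recharge_m=recharge)
```

Replacing `recharge` with a constant equal to its mean changes the computed travel times, which is the point the metalaxyl simulations make: the seasonal timing of recharge, not just its annual total, drives the predicted residues.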
APA, Harvard, Vancouver, ISO, and other styles
47

Park, Kyung-Sun. "Analysis of Worldwide Carrier Frequency and Predicted Genetic Prevalence of Autosomal Recessive Congenital Hypothyroidism Based on a General Population Database." Genes 12, no. 6 (June 4, 2021): 863. http://dx.doi.org/10.3390/genes12060863.

Full text
Abstract:
To assess how genomic information of the general population reflects probabilities of developing diseases and the differences in those probabilities among ethnic groups, a general population database was analyzed with an example of congenital hypothyroidism. Twelve candidate genes that follow an autosomal recessive inheritance pattern in congenital hypothyroidism (SLC5A5, TPO, TG, IYD, DUOXA2, DUOX2, TSHR, SLC26A7, GLIS3, FOXE1, TSHB, TRHR) in the gnomAD database (v2.1.1) were analyzed. The carrier frequency (CF) and predicted genetic prevalence (pGP) were estimated. The total CF in the overall population was 3.6%. DUOX2 showed the highest CF (1.8%), followed by TG (0.46%), TPO (0.44%), TSHR (0.31%), SLC26A7 (0.144%), DUOXA2 (0.141%), IYD (0.08%), SLC5A5 (0.06%), TRHR (0.059%), GLIS3 (0.059%), TSHB (0.04%), and FOXE1 (0%). The pGP in the overall population was 10.01 individuals per 100,000 births (1:9992). The highest pGP was in the East Asian population at 52.48 per 100,000 births (1:1905), followed by Finnish (35.96), Non-Finnish European (9.56), African/African American (4.0), Latino/Admixed American (3.89), South Asian (3.56), and Ashkenazi Jewish (1.81) groups. Comparing the pGP with the real incidence of congenital hypothyroidism, the pGP in East Asian populations was highly consistent with the real incidence.
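A minimal sketch of how a predicted genetic prevalence (pGP) can be derived from per-gene carrier frequencies under Hardy-Weinberg equilibrium: for small q, the carrier frequency CF ≈ 2q, so the pathogenic allele frequency is q ≈ CF/2 and the affected-birth rate contributed by each gene is q², summed across independent loci. The figures are the overall-population carrier frequencies quoted above; this back-of-envelope route is an assumption, since the paper works from allele frequencies directly.

```python
# Per-gene carrier frequencies from the abstract (fraction of the population)
carrier_freq = {
    "DUOX2": 0.018, "TG": 0.0046, "TPO": 0.0044, "TSHR": 0.0031,
    "SLC26A7": 0.00144, "DUOXA2": 0.00141, "IYD": 0.0008,
    "SLC5A5": 0.0006, "TRHR": 0.00059, "GLIS3": 0.00059,
    "TSHB": 0.0004, "FOXE1": 0.0,
}

def predicted_genetic_prevalence(cf_by_gene):
    """Sum of q^2 over genes (q ~= CF/2), expressed per 100,000 births."""
    total = sum((cf / 2.0) ** 2 for cf in cf_by_gene.values())
    return total * 100_000

pgp = predicted_genetic_prevalence(carrier_freq)
```

This lands close to (though not exactly at) the reported 10.01 per 100,000, with DUOX2 alone contributing roughly 8 of those; the gap reflects rounding of the quoted carrier frequencies.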
APA, Harvard, Vancouver, ISO, and other styles
48

Xu, Weng Ming, Gang Cheng Wang, and Chong Liu. "A Method Based on Neural Network for Risk Prediction of the Typical Moraine-Dammed Lake Outburst in the Himalayan Region." Advanced Materials Research 535-537 (June 2012): 1799–802. http://dx.doi.org/10.4028/www.scientific.net/amr.535-537.1799.

Full text
Abstract:
In this research, thirty typical moraine-dammed lakes were selected as the training set for a neural network. In accordance with the defined rules, ten evaluation indexes were made dimensionless and used to train the model, yielding a model applicable to evaluating the probabilities of moraine-dammed lake outburst in the Himalayan region of Tibet, China. The outburst probability of the Laqu Lake was then predicted with the developed model; the predicted value was 0.538. According to the defined risk-level standards, the Laqu Lake is high-risk, which is consistent with the field survey and demonstrates the applicability of neural networks for assessing the probabilities of moraine-dammed lake outburst.
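The pipeline described (dimensionless indexes fed to a small network that outputs an outburst probability) can be sketched as below. The index values, ranges, and network weights are illustrative placeholders only; the paper does not publish its trained weights, and min-max scaling is one common way to make indexes dimensionless.

```python
import math

def min_max_scale(values, lo, hi):
    """Dimensionless indexes in [0, 1]; assumes hi > lo for each index."""
    return [(v - l) / (h - l) for v, l, h in zip(values, lo, hi)]

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One hidden layer, sigmoid activations; sigmoid output read as a probability."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sig(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sig(sum(wi * hi for wi, hi in zip(w_out, hidden)) + b_out)

# Ten evaluation indexes for one lake (illustrative values and plausible ranges)
raw = [120, 0.3, 45, 2.1, 800, 15, 0.6, 30, 5, 0.9]
lo  = [0,   0.0, 0,  0.0, 0,    0,  0.0, 0,   0, 0.0]
hi  = [500, 1.0, 90, 5.0, 2000, 60, 1.0, 100, 20, 1.0]
x = min_max_scale(raw, lo, hi)

w_hidden = [[0.5] * 10, [-0.3] * 10, [0.2] * 10]   # placeholder weights
prob = forward(x, w_hidden, [0.1, 0.0, -0.1], [0.8, -0.4, 0.6], -0.2)
```

A score above a chosen risk threshold (the paper's standards put 0.538 in the high-risk band) would flag a lake for field verification.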
APA, Harvard, Vancouver, ISO, and other styles
49

Rachlin, Howard, Eric Siegel, and David Cross. "Lotteries and the Time Horizon." Psychological Science 5, no. 6 (November 1994): 390–93. http://dx.doi.org/10.1111/j.1467-9280.1994.tb00291.x.

Full text
Abstract:
People chose between hypothetical alternatives of (a) a million-dollar lottery prize and (b) a much smaller but certain amount of money. When probabilities of winning the lottery were above about 1/100,000, subjects avoided risk; for example, a 1/100,000 probability of the million-dollar prize was chosen about as often as $700 for sure. But at probabilities below 1/100,000, subjects sought risk; for example, a 1/1,000,000 probability of the million-dollar prize was chosen about as often as $9 for sure. This crossover from risk aversion to risk seeking is predicted by Mazur's (1987) hyperbolic delay discount function with probability expressed as average delay due to strings of losses followed by a win, together with a time horizon limiting subjective delay.
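The mechanism can be illustrated with Mazur's (1987) hyperbolic discount function, V = A / (1 + kD), treating a probability as the average delay implied by repeated plays (odds against winning times the interval per play) and capping that delay at a time horizon. The values of k, the per-play interval, and the horizon below are illustrative assumptions, not the paper's fitted parameters.

```python
def discounted_value(amount, p, k=0.01, days_per_play=1.0, horizon_days=20_000):
    """Value of a p-chance at `amount`, treated as a delayed reward.
    Delay = odds against winning * interval per play, capped by the horizon."""
    odds_against = (1.0 - p) / p
    delay = min(odds_against * days_per_play, horizon_days)
    return amount / (1.0 + k * delay)          # Mazur's hyperbolic form

v_hi = discounted_value(1_000_000, 1 / 100_000)
v_lo = discounted_value(1_000_000, 1 / 1_000_000)
```

Without the cap, value would keep falling as p shrinks; once the implied delays for both probabilities exceed the horizon, the two lottery values converge, so the certain-money equivalent no longer tracks expected value, producing the crossover in risk attitude.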
APA, Harvard, Vancouver, ISO, and other styles
50

Bailey, Sarah A., Luis Antonio Vélez-Espino, Ora E. Johannsson, Marten A. Koops, and Chris J. Wiley. "Estimating establishment probabilities of Cladocera introduced at low density: an evaluation of the proposed ballast water discharge standards." Canadian Journal of Fisheries and Aquatic Sciences 66, no. 2 (February 2009): 261–76. http://dx.doi.org/10.1139/f08-200.

Full text
Abstract:
The International Maritime Organization has proposed ballast water discharge standards to reduce densities of taxa transported in ballast water. While reduced propagule pressure will almost certainly lower the risk of species introductions, we ask if the proposed standards will be effective against taxa capable of rapid population growth from small inocula, such as parthenogenetic taxa. We modified a diffusion approximation to calculate establishment probabilities as the probability of reaching a critical threshold density necessary to initiate sexual reproduction (and production of diapausing eggs for long-term persistence) before the onset of adverse environmental conditions. We parameterize our model for six Cladocera using growth rates measured during enclosure experiments conducted under favourable environmental conditions (i.e., using a precautionary approach). We then examine predicted establishment probabilities at different inoculum densities to determine the efficacy of the proposed ballast water discharge standards. Our results indicate that future technologies in compliance with the proposed standards could reduce establishment probabilities for some parthenogenetic zooplankton threefold; however, there is still a risk of establishment for some taxa.
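One standard way to cast establishment as a first-passage problem, in the spirit of the diffusion approximation described: let log density follow Brownian motion with drift mu (growth rate) and variance sigma^2, and compute the probability of reaching the log of the critical density for sexual reproduction before the favourable season of length T ends. The closed form and parameter values below are a textbook sketch under these assumptions, not the paper's enclosure-fitted model.

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def establishment_probability(n0, n_crit, mu, sigma, T):
    """P(sup_{0<=t<=T} X_t >= a) for Brownian motion with drift mu, X_0 = 0,
    where a = ln(n_crit / n0) is the log-density gap the inoculum must close."""
    a = math.log(n_crit / n0)
    if a <= 0:
        return 1.0  # already at or above the critical density
    s = sigma * math.sqrt(T)
    return (norm_cdf((mu * T - a) / s)
            + math.exp(2.0 * mu * a / sigma ** 2) * norm_cdf((-a - mu * T) / s))

# Establishment probability rises with inoculum density, which is why
# discharge standards that cut densities reduce (but do not eliminate) risk.
p_low  = establishment_probability(n0=0.1, n_crit=100.0, mu=0.2, sigma=0.5, T=60)
p_high = establishment_probability(n0=10.0, n_crit=100.0, mu=0.2, sigma=0.5, T=60)
```

For fast-growing parthenogens (large mu), the probability stays non-trivial even at very small n0, matching the abstract's caution that the proposed standards lower but do not remove establishment risk.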
APA, Harvard, Vancouver, ISO, and other styles