To see the other types of publications on this topic, follow the link: Zero-truncated binomial distribution.

Journal articles on the topic 'Zero-truncated binomial distribution'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 43 journal articles for your research on the topic 'Zero-truncated binomial distribution.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Sinha, Arun K., and Rajiv Kumar. "The Zero-Truncated Symmetrical Bivariate Negative Binomial Distribution." American Journal of Mathematical and Management Sciences 21, no. 1-2 (2001): 57–68. http://dx.doi.org/10.1080/01966324.2001.10737537.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mecha, Peter, Isaac Kipchirchir, George Muhua, and Joseph Ottieno. "Lifetime Distribution Based on Generators for Discrete Mixtures with Application to Lomax Distribution." American Journal of Theoretical and Applied Statistics 14, no. 2 (2025): 51–60. https://doi.org/10.11648/j.ajtas.20251402.11.

Full text
Abstract:
In various application areas, data is frequently collected and analyzed using basic statistical distributions such as exponential, Poisson, and gamma distributions. However, these traditional distributions often fail to adequately capture the inherent heterogeneity present in real-world data. This limitation highlights the need for more flexible distributions that can address these complexities. Such distributions can be generated through techniques like reparameterization, generalization, compounding, and mixing. This paper focuses on deriving generators for survival functions of discrete mixtures using minimum and maximum order statistic distributions. The approach leverages the probability generating function (PGF) techniques of mixing distributions, including zero-truncated Poisson, shifted geometric, zero-truncated binomial, zero-truncated negative binomial, and logarithmic series distributions. Specifically, the derived generator was applied to Lomax distributions to construct survival functions. Consequently, the probability density function (PDF) and failure rate of the resulting discrete mixtures were also obtained. Furthermore, the paper examines the shapes of the PDF and failure rate for discrete mixtures derived from the zero-truncated Poisson distribution. Notably, the failure rates of discrete mixtures generated using minimum and maximum order statistics from the Lomax distribution exhibited distinct behaviors. The failure rate for the minimum order statistic was observed to decrease, while the failure rate for the maximum order statistic showed a combination of non-decreasing and bathtub-shaped patterns.
APA, Harvard, Vancouver, ISO, and other styles
3

Irshad, Muhammed Rasheed, Christophe Chesneau, Damodaran Santhamani Shibu, Mohanan Monisha, and Radhakumari Maya. "A Novel Generalization of Zero-Truncated Binomial Distribution by Lagrangian Approach with Applications for the COVID-19 Pandemic." Stats 5, no. 4 (2022): 1004–28. http://dx.doi.org/10.3390/stats5040060.

Full text
Abstract:
The importance of Lagrangian distributions and their applicability in real-world events have been highlighted in several studies. In light of this, we create a new zero-truncated Lagrangian distribution. It is presented as a generalization of the zero-truncated binomial distribution (ZTBD) and hence named the Lagrangian zero-truncated binomial distribution (LZTBD). The moments, probability generating function, factorial moments, as well as skewness and kurtosis measures of the LZTBD are discussed. We also show that the new model’s finite mixture is identifiable. The unknown parameters of the LZTBD are estimated using the maximum likelihood method. A broad simulation study is executed as an evaluation of the well-established performance of the maximum likelihood estimates. The likelihood ratio test is used to assess the effectiveness of the third parameter in the new model. Six COVID-19 datasets are used to demonstrate the LZTBD’s applicability, and we conclude that the LZTBD is very competitive on the fitting objective.
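For readers who want a concrete reference point for the baseline model that the LZTBD generalizes, the sketch below fits a plain zero-truncated binomial by maximum likelihood in Python/SciPy. It is an illustrative, assumption-laden example (simulated data, fixed n = 10, invented variable names), not code from the article.

```python
import numpy as np
from scipy.stats import binom
from scipy.optimize import minimize_scalar

def ztb_pmf(k, n, p):
    """Zero-truncated binomial pmf: P(X = k | X > 0) for k = 1, ..., n."""
    return binom.pmf(k, n, p) / (1.0 - binom.pmf(0, n, p))

def ztb_neg_loglik(p, data, n):
    """Negative log-likelihood of a ZTB(n, p) sample, with n treated as known."""
    return -np.sum(np.log(ztb_pmf(data, n, p)))

# Illustrative data: draw binomial counts and discard the zeros.
rng = np.random.default_rng(1)
raw = rng.binomial(n=10, p=0.15, size=2000)
data = raw[raw > 0]

fit = minimize_scalar(ztb_neg_loglik, bounds=(1e-6, 1 - 1e-6),
                      args=(data, 10), method="bounded")
print("ML estimate of p:", round(fit.x, 4))
```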
APA, Harvard, Vancouver, ISO, and other styles
4

Jolayemi, E. Teju, and S. A. Aderoju. "On Zero-Truncated Negative Binomial with Excess Ones." Asian Journal of Probability and Statistics 22, no. 3 (2023): 45–50. http://dx.doi.org/10.9734/ajpas/2023/v22i3487.

Full text
Abstract:
In this paper, the zero-truncated negative binomial distribution is modified to include excess ones in order to improve goodness-of-fit. This is necessary when data are over-dispersed and zeros have been eliminated from the data structurally. However, when the number of ones is unduly large, the proportion of this excess must be recognized and estimated to improve the fit. The development is applied using real data from a national survey.
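The abstract does not give the paper's exact parameterization, so the following is only a generic sketch of the idea of adding excess ones to a zero-truncated negative binomial: a two-component mixture of a point mass at one and a ZTNB. The weight omega and the NB parameters r and p are illustrative assumptions.

```python
import numpy as np
from scipy.stats import nbinom

def ztnb_pmf(k, r, p):
    """Zero-truncated negative binomial pmf on k = 1, 2, ..."""
    return nbinom.pmf(k, r, p) / (1.0 - nbinom.pmf(0, r, p))

def one_inflated_ztnb_pmf(k, omega, r, p):
    """Mix a point mass at 1 (weight omega) with a ZTNB(r, p)."""
    k = np.asarray(k)
    return omega * (k == 1) + (1.0 - omega) * ztnb_pmf(k, r, p)

# Quick sanity check: the probabilities should sum to ~1 over a long support.
ks = np.arange(1, 200)
print(one_inflated_ztnb_pmf(ks, omega=0.2, r=1.5, p=0.4).sum())
```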
APA, Harvard, Vancouver, ISO, and other styles
5

Ahmad, Zahoor, and Adil Rashid. "Maximum Entropy Formalism for Zero Truncated Poisson and Binomial Distribution." Journal of Statistics Applications & Probability 6, no. 2 (2017): 441–44. http://dx.doi.org/10.18576/jsap/060218.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sitho, Surang, Sunthree Denthet, and Hira Nadeem. "Zero Truncated Negative Binomial Weighted Weibull Distribution and Its Application." Lobachevskii Journal of Mathematics 42, no. 13 (2021): 3241–52. http://dx.doi.org/10.1134/s1995080222010206.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bodhisuwan, Rujira, Sunthree Denthet, and Tannen Acoose. "Zero-Truncated Negative Binomial Weighted-Lindley Distribution and Its Application." Lobachevskii Journal of Mathematics 42, no. 13 (2021): 3105–11. http://dx.doi.org/10.1134/s1995080222010061.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tarami, Bahram, Mohsen Avaji, and Nahid Sanjari Farsipour. "Distributions Family of Extended Weibull Combined with Negative Binomial Distribution Truncated at Zero." Journal of Statistical Sciences 15, no. 1 (2021): 165–91. http://dx.doi.org/10.52547/jss.15.1.9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Neal, Allison T. "Distribution of clones among hosts for the lizard malaria parasite Plasmodium mexicanum." PeerJ 9 (November 2, 2021): e12448. http://dx.doi.org/10.7717/peerj.12448.

Full text
Abstract:
Background Malaria parasites reproduce asexually, leading to the production of large numbers of genetically identical parasites, here termed a clonal line or clone. Infected hosts may harbor one or more clones, and the number of clones in a host is termed multiplicity of infection (MOI). Understanding the distribution of parasite clones among hosts can shed light on the processes shaping this distribution and is important for modeling MOI. Here, I determine whether the distribution of clones of the lizard malaria parasite Plasmodium mexicanum differs significantly from statistical distributions commonly used to model MOI and logical extensions of these models. Methods The number of clones per infection was assessed using four microsatellite loci with the maximum number of alleles at any one locus used as a simple estimate of MOI for each infection. I fit statistical models (Poisson, negative binomial, zero-inflated models) to data from four individual sites to determine a best fit model. I also simulated the number of alleles per locus using an unbiased estimate of MOI to determine whether the simple (but potentially biased) method I used to estimate MOI influenced model fit. Results The distribution of clones among hosts at individual sites differed significantly from traditional Poisson and negative binomial distributions, but not from zero-inflated modifications of these distributions. A consistent excess of two-clone infections and shortage of one-clone infections relative to all fit distributions was also observed. Any bias introduced by the simple method for estimating MOI did not appear to qualitatively alter the results. Conclusions The statistical distributions used to model MOI are typically zero-truncated; truncating the Poisson or zero-inflated Poisson yields the same distribution, so the reasonable fit of the zero-inflated Poisson to the data suggests that the use of the zero-truncated Poisson in modeling is adequate. The improved fit of zero-inflated distributions relative to standard distributions may suggest that only a portion of the host population is located in areas suitable for transmission even at small sites (<1 ha). Collective transmission of clones and premunition may also contribute to deviations from standard distributions.
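One claim in the conclusions, that zero-truncating a zero-inflated Poisson gives back the zero-truncated Poisson, can be checked numerically. The sketch below verifies the identity for arbitrary parameter values; it is not a re-analysis of the P. mexicanum data.

```python
import numpy as np
from scipy.stats import poisson

def zt_poisson_pmf(k, lam):
    """Zero-truncated Poisson: P(X = k | X > 0), k >= 1."""
    return poisson.pmf(k, lam) / (1.0 - poisson.pmf(0, lam))

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson: extra mass pi at zero."""
    k = np.asarray(k)
    return pi * (k == 0) + (1.0 - pi) * poisson.pmf(k, lam)

def zt_zip_pmf(k, pi, lam):
    """Zero-truncating the ZIP removes the inflation parameter entirely."""
    return zip_pmf(k, pi, lam) / (1.0 - zip_pmf(0, pi, lam))

ks = np.arange(1, 10)
print(np.allclose(zt_zip_pmf(ks, pi=0.3, lam=2.0), zt_poisson_pmf(ks, 2.0)))  # True
```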
APA, Harvard, Vancouver, ISO, and other styles
10

Kyriakoussis, A., and Alex S. Papadopoulos. "The zero-truncated negative binomial distribution as a failure model from the Bayesian approach." Microelectronics Reliability 32, no. 1-2 (1992): 259–64. http://dx.doi.org/10.1016/0026-2714(92)90104-s.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Mishra, A. "A new generalization of the logarithmic series distribution." Studia Scientiarum Mathematicarum Hungarica 51, no. 1 (2014): 41–49. http://dx.doi.org/10.1556/sscmath.51.2014.1.1257.

Full text
Abstract:
A new generalization of the logarithmic series distribution has been obtained as a limiting case of the zero-truncated Mishra’s [10] generalized negative binomial distribution (GNBD). This distribution has an advantage over the Mishra’s [9] quasi logarithmic series distribution (QLSD) as its moments appear in compact forms unlike the QLSD. This makes the estimation of parameters easier by the method of moments. The first four moments of this distribution have been obtained and the distribution has been fitted to some well known data-sets to test its goodness of fit.
APA, Harvard, Vancouver, ISO, and other styles
12

Wilson, Paul, and Jochen Einbeck. "A new and intuitive test for zero modification." Statistical Modelling 19, no. 4 (2018): 341–61. http://dx.doi.org/10.1177/1471082x18762277.

Full text
Abstract:
While there do exist several statistical tests for detecting zero modification in count data regression models, these rely on asymptotical results and do not transparently distinguish between zero inflation and zero deflation. In this manuscript, a novel non-asymptotic test is introduced which makes direct use of the fact that the distribution of the number of zeros under the null hypothesis of no zero modification can be described by a Poisson-binomial distribution. The computation of critical values from this distribution requires estimation of the mean parameter under the null hypothesis, for which a hybrid estimator involving a zero-truncated mean estimator is proposed. Power and nominal level attainment rates of the new test are studied, which turn out to be very competitive to those of the likelihood ratio test. Illustrative data examples are provided.
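A rough sketch of the central idea follows: under the null hypothesis of no zero modification, the number of zeros is Poisson-binomial with per-observation zero probabilities exp(-mu_i). The fitted means below are made up, and the paper's hybrid zero-truncated mean estimator is not implemented, so this is an assumption-laden illustration only.

```python
import numpy as np

def poisson_binomial_pmf(probs):
    """pmf of the number of successes among independent Bernoulli(probs[i]),
    computed by iterative convolution."""
    pmf = np.array([1.0])
    for p in probs:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf

# Illustrative: under a fitted Poisson model, observation i is zero with
# probability exp(-mu_i). The mu_i below are invented fitted means.
mu = np.array([0.4, 1.2, 2.5, 0.8, 3.0, 0.6, 1.1, 0.9])
p_zero = np.exp(-mu)

pmf = poisson_binomial_pmf(p_zero)
observed_zeros = 5
# One-sided tail probability of seeing at least this many zeros under H0.
print("P(#zeros >= %d) = %.4f" % (observed_zeros, pmf[observed_zeros:].sum()))
```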
APA, Harvard, Vancouver, ISO, and other styles
13

Simon, Thorsten, Georg J. Mayr, Nikolaus Umlauf, and Achim Zeileis. "NWP-based lightning prediction using flexible count data regression." Advances in Statistical Climatology, Meteorology and Oceanography 5, no. 1 (2019): 1–16. http://dx.doi.org/10.5194/ascmo-5-1-2019.

Full text
Abstract:
A method to predict lightning by postprocessing numerical weather prediction (NWP) output is developed for the region of the European Eastern Alps. Cloud-to-ground (CG) flashes – detected by the ground-based Austrian Lightning Detection & Information System (ALDIS) network – are counted on the 18×18 km2 grid of the 51-member NWP ensemble of the European Centre for Medium-Range Weather Forecasts (ECMWF). These counts serve as the target quantity in count data regression models for the occurrence of lightning events and flash counts of CG. The probability of lightning occurrence is modelled by a Bernoulli distribution. The flash counts are modelled with a hurdle approach where the Bernoulli distribution is combined with a zero-truncated negative binomial. In the statistical models the parameters of the distributions are described by additive predictors, which are assembled using potentially nonlinear functions of NWP covariates. Measures of location and spread of 100 direct and derived NWP covariates provide a pool of candidates for the nonlinear terms. A combination of stability selection and gradient boosting identifies the nine (three) most influential terms for the parameters of the Bernoulli (zero-truncated negative binomial) distribution, most of which turn out to be associated with either convective available potential energy (CAPE) or convective precipitation. Markov chain Monte Carlo (MCMC) sampling estimates the final model to provide credible inference of effects, scores, and predictions. The selection of terms and MCMC sampling are applied for data of the year 2016, and out-of-sample performance is evaluated for 2017. The occurrence model outperforms a reference climatology – based on 7 years of data – up to a forecast horizon of 5 days. The flash count model is calibrated and also outperforms climatology for exceedance probabilities, quantiles, and full predictive distributions.
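The hurdle structure described here can be written compactly as a log-likelihood with a Bernoulli occurrence part and a zero-truncated negative binomial for the positive counts. The sketch below evaluates that likelihood for fixed parameters and invented counts; it omits the additive predictors, boosting, and MCMC used in the paper.

```python
import numpy as np
from scipy.stats import nbinom

def hurdle_nb_loglik(y, pi, r, p):
    """Log-likelihood of a hurdle model: Bernoulli(pi) occurrence plus a
    zero-truncated negative binomial NB(r, p) for the positive counts."""
    y = np.asarray(y)
    zero = (y == 0)
    ll_zero = np.sum(zero) * np.log(1.0 - pi)
    ll_pos = np.sum(np.log(pi)
                    + nbinom.logpmf(y[~zero], r, p)
                    - np.log(1.0 - nbinom.pmf(0, r, p)))
    return ll_zero + ll_pos

# Illustrative counts (not the ALDIS lightning data).
y = np.array([0, 0, 3, 0, 1, 7, 0, 2, 0, 0, 5, 1])
print(hurdle_nb_loglik(y, pi=0.5, r=1.2, p=0.35))
```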
APA, Harvard, Vancouver, ISO, and other styles
14

Chakraborty, A. B., A. Khurshid, and R. Acharjee. "Measurement error effect on the power of control chart for Zero truncated Negative Binomial Distribution (ZTNBD)." Yugoslav Journal of Operations Research 27, no. 4 (2017): 451–62. http://dx.doi.org/10.2298/yjor161028002c.

Full text
Abstract:
In the present article, the effect of measurement error on the power of the control chart for the ZTNBD is investigated based on a standardized normal variate. Numerical calculations are presented so as to enable an appreciation of the consequences of measurement errors on the power curve. To examine the sensitivity of the monitoring procedure, the average run length is also considered.
APA, Harvard, Vancouver, ISO, and other styles
15

Walters, Glenn D. "Using Poisson Class Regression To Analyze Count Data in Correctional and Forensic Psychology." Criminal Justice and Behavior 34, no. 12 (2007): 1659–74. http://dx.doi.org/10.1177/0093854807307030.

Full text
Abstract:
The benchmark model for count data is the Poisson distribution, and the standard statistical procedure for analyzing count data is Poisson regression. However, highly restrictive assumptions lead to frequent misspecification of the Poisson model. Alternate approaches, such as negative binomial regression, zero modified procedures, and truncated and censored models are consequently required to handle count data in many social science contexts. Empirical examples from correctional and forensic psychology are provided to illustrate the importance of replacing ordinary least squares regression with Poisson class procedures in situations when count data are analyzed.
APA, Harvard, Vancouver, ISO, and other styles
16

Das, P. K., A. Manoharan, A. Srividya, B. T. Grenfell, D. A. P. Bundy, and P. Vanamail. "Frequency distribution of Wuchereria bancrofti microfilariae in human populations and its relationships with age and sex." Parasitology 101, no. 3 (1990): 429–34. http://dx.doi.org/10.1017/s0031182000060625.

Full text
Abstract:
This paper examines the effects of host age and sex on the frequency distribution of Wuchereria bancrofti infections in the human host. Microfilarial counts from a large data base on the epidemiology of bancroftian filariasis in Pondicherry, South India are analysed. Frequency distributions of microfilarial counts divided by age are successfully described by zero-truncated negative binomial distributions, fitted by maximum likelihood. Parameter estimates from the fits indicate a significant trend of decreasing overdispersion with age in the distributions above age 10; this pattern provides indirect evidence for the operation of density-dependent constraints on microfilarial intensity. The analysis also provides estimates of the proportion of mf-positive individuals who are identified as negative due to sampling errors (around 5% of the total negatives). This allows the construction of corrected mf age–prevalence curves, which indicate that the observed prevalence may underestimate the true figures by between 25% and 100%. The age distribution of mf-negative individuals in the population is discussed in terms of current hypotheses about the interaction between disease and infection.
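The estimation step can be sketched as follows, assuming a plain zero-truncated negative binomial fitted to positive counts by maximum likelihood, with the implied untruncated zero probability read off as the sampling false-negative rate. The counts and the log/logit reparameterization are illustrative, not the Pondicherry data or the authors' code.

```python
import numpy as np
from scipy.stats import nbinom
from scipy.optimize import minimize

def ztnb_neg_loglik(theta, counts):
    """Negative log-likelihood of a zero-truncated NB(r, p) for positive counts."""
    r, p = np.exp(theta[0]), 1.0 / (1.0 + np.exp(-theta[1]))  # keep r > 0, 0 < p < 1
    ll = nbinom.logpmf(counts, r, p) - np.log(1.0 - nbinom.pmf(0, r, p))
    return -np.sum(ll)

# Illustrative positive microfilarial counts (invented, not the survey data).
counts = np.array([1, 1, 2, 4, 1, 8, 3, 1, 2, 15, 6, 1, 2, 3])

fit = minimize(ztnb_neg_loglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
r_hat, p_hat = np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1]))
# The fitted untruncated P(X = 0) estimates the chance that a truly mf-positive
# person yields a zero count purely through blood-sampling error.
print("estimated sampling false-negative rate:", round(nbinom.pmf(0, r_hat, p_hat), 3))
```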
APA, Harvard, Vancouver, ISO, and other styles
17

Srividya, A., K. Krishnamoorthy, S. Sabesan, K. N. Panicker, B. T. Grenfell, and D. A. P. Bundy. "Frequency distribution of Brugia malayi microfilariae in human populations." Parasitology 102, no. 2 (1991): 207–12. http://dx.doi.org/10.1017/s0031182000062508.

Full text
Abstract:
This study examines the effects of host age and sex on the frequency distribution of Brugia malayi infections in the human host. Microfilarial (mf) counts for a large data base on the epidemiology of brugian filariasis in Shertallai, Kerala, South India are analysed. Frequency distributions of microfilarial counts partitioned by age are successfully described by zero-truncated negative binomial distributions, fitted by maximum likelihood. This analysis provides estimates of the proportion of mf-positive individuals who are identified as negative due to sampling errors, allowing the construction of corrected mf age–prevalence curves, which indicate that the observed prevalence may under-estimate the true figures by between 18 and 47%. There is no evidence from these results for a decrease in the degree of over-dispersion of parasite frequency distributions with host age, such as might be produced by the acquired immunity to infection. This departure from the pattern in bancroftian filariasis (where there is evidence of such decreases in over-dispersion; Das et al. 1990) is discussed in terms of the long history of filariasis control (and consequently low infection prevalence) in Shertallai.
APA, Harvard, Vancouver, ISO, and other styles
18

Grenfell, B. T., P. K. Das, P. K. Rajagopalan, and D. A. P. Bundy. "Frequency distribution of lymphatic filariasis microfilariae in human populations: population processes and statistical estimation." Parasitology 101, no. 3 (1990): 417–27. http://dx.doi.org/10.1017/s0031182000060613.

Full text
Abstract:
This paper uses simple mathematical models and statistical estimation techniques to analyse the frequency distribution of microfilariae (mf) in blood samples from human populations which are endemic for lymphatic filariasis. The theoretical analysis examines the relationship between microfilarial burdens and the prevalence of adult (macrofilarial) worms in the human host population. The main finding is that a large proportion of observed mf-negatives may be ‘true’ zeros, arising from the absence of macrofilarial infections or unmated adult worms, rather than being attributable to the blood sampling process. The corresponding mf distribution should then follow a Poisson mixture, arising from the sampling of mf positives, with an additional proportion of ‘true’ mf-zeros. This hypothesis is supported by analysis of observed Wuchereria bancrofti mf distributions from Southern India, Japan and Fiji, in which zero-truncated Poisson mixtures fit mf-positive counts more effectively than distributions including the observed zeros. The fits of two Poisson mixtures, the negative binomial and the Sichel distribution, are compared. The Sichel provides a slightly better empirical description of the mf density distribution; reasons for this improvement, and a discussion of the relative merits of the two distributions, are presented. The impact on observed mf distributions of increasing blood sampling volume and extraction efficiency are illustrated via a simple model, and directions for future work are identified.
APA, Harvard, Vancouver, ISO, and other styles
19

MacNeil, M. Aaron, John K. Carlson, and Lawrence R. Beerkircher. "Shark depredation rates in pelagic longline fisheries: a case study from the Northwest Atlantic." ICES Journal of Marine Science 66, no. 4 (2009): 708–19. http://dx.doi.org/10.1093/icesjms/fsp022.

Full text
Abstract:
A suite of modelling approaches was employed to analyse shark depredation rates from the US Atlantic pelagic longline fishery. As depredation events are relatively rare, there are a large number of zeroes in pelagic longline data and conventional generalized linear models (GLMs) may be ineffective as tools for statistical inference. GLMs (Poisson and negative binomial), two-part (delta-lognormal and truncated negative binomial, T-NB), and mixture models (zero-inflated Poisson, ZIP, and zero-inflated negative binomial, ZINB) were used to understand the factors that contributed most to the occurrence of depredation events that included a small proportion of whale damage. Of the six distribution forms used, only the ZIP and T-NB models performed adequately in describing depredation data, and the T-NB and ZINB models outperformed the ZIP models in bootstrap cross-validation estimates of prediction error. Candidate T-NB and ZINB model results showed that encounter probabilities were more strongly related to large-scale covariates (space, season) and that depredation counts were correlated with small-scale characteristics of the fishery (temperature, catch composition). Moreover, there was little evidence of historical trends in depredation rates. The results show that the factors contributing to most depredation events are those already controlled by ships’ captains and, beyond novel technologies to repel sharks, there may be little more to do to reduce depredation loss in the fishery within current economic and operational constraints.
APA, Harvard, Vancouver, ISO, and other styles
20

Jiang, Lin, Jingjing Zheng, Johnny S. H. Kwan, et al. "WITER: a powerful method for estimation of cancer-driver genes using a weighted iterative regression modelling background mutation counts." Nucleic Acids Research 47, no. 16 (2019): e96-e96. http://dx.doi.org/10.1093/nar/gkz566.

Full text
Abstract:
Genomic identification of driver mutations and genes in cancer cells is critical for precision medicine. Due to difficulty in modelling the distribution of background mutation counts, existing statistical methods are often underpowered to discriminate cancer-driver genes from passenger genes. Here we propose a novel statistical approach, weighted iterative zero-truncated negative-binomial regression (WITER, http://grass.cgs.hku.hk/limx/witer or KGGSeq, http://grass.cgs.hku.hk/limx/kggseq/), to detect cancer-driver genes showing an excess of somatic mutations. By fitting the distribution of background mutation counts properly, this approach works well even in small or moderate samples. Compared to alternative methods, it detected more significant and cancer-consensus genes in most tested cancers. Applying this approach, we estimated 229 driver genes in 26 different types of cancers. In silico validation confirmed 78% of predicted genes as likely known drivers and many other genes as very likely new drivers for corresponding cancers. The technical advances of WITER enable the detection of driver genes in TCGA datasets as small as 30 subjects and rescue of more genes missed by alternative tools in moderate or small samples.
APA, Harvard, Vancouver, ISO, and other styles
21

Holovach, Phillip G., Wei-Wen Hsu, and Alan B. Fleischer. "Number Bias in Clinicians’ Documentation of Actinic Keratosis Removal." Journal of Clinical Medicine 13, no. 1 (2023): 202. http://dx.doi.org/10.3390/jcm13010202.

Full text
Abstract:
Background: Actinic keratosis (AK) is a pre-cancerous skin condition caused by sun exposure. Number bias, a phenomenon that occurs when meaning other than numerical value is associated with numbers, may influence the reporting of AK removal. The present study aims to determine if number bias is affecting healthcare providers’ documentation of patient-provider encounters. Methods: A single-center retrospective chart review of 1415 patients’ charts was conducted at the University of Cincinnati Medical Center. To determine if there was a significant difference between even and odd-numbered AK removals reported, an exact binomial test was used. The frequency of removals per encounter was fitted to a zero-truncated negative binomial distribution to predict the number of removals expected. All data were analyzed with RStudio. Results: There were 741 odd and 549 even encounters. Odd removals were reported at a significantly greater frequency than even removals (p < 0.001). Age may be contributing to the observed number bias (p < 0.001). Removals of one, two, and eight lesions were reported more frequently than expected, while removals of nine, 13, and 14 were reported less frequently than expected. Conclusion: Number bias may be affecting clinicians’ documentation of AK removal and should be investigated in other clinical settings.
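Using the odd/even encounter counts quoted above, the exact binomial test can be reproduced in a few lines with SciPy, assuming (as the abstract does not state explicitly) a two-sided test against a null proportion of 0.5.

```python
from scipy.stats import binomtest

# Counts reported in the abstract: 741 odd-numbered and 549 even-numbered encounters.
odd, even = 741, 549
result = binomtest(odd, n=odd + even, p=0.5, alternative="two-sided")
print("two-sided exact binomial p-value:", result.pvalue)
```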
APA, Harvard, Vancouver, ISO, and other styles
22

Bień-Barkowska, Katarzyna. "Looking at Extremes without Going to Extremes: A New Self-Exciting Probability Model for Extreme Losses in Financial Markets." Entropy 22, no. 7 (2020): 789. http://dx.doi.org/10.3390/e22070789.

Full text
Abstract:
Forecasting market risk lies at the core of modern empirical finance. We propose a new self-exciting probability peaks-over-threshold (SEP-POT) model for forecasting the extreme loss probability and the value at risk. The model draws from the point-process approach to the POT methodology but is built under a discrete-time framework. Thus, time is treated as an integer value and the days of extreme loss could occur upon a sequence of indivisible time units. The SEP-POT model can capture the self-exciting nature of extreme event arrival, and hence, the strong clustering of large drops in financial prices. The triggering effect of recent events on the probability of extreme losses is specified using a discrete weighting function based on the at-zero-truncated Negative Binomial (NegBin) distribution. The serial correlation in the magnitudes of extreme losses is also taken into consideration using the generalized Pareto distribution enriched with the time-varying scale parameter. In this way, recent events affect the size of extreme losses more than distant events. The accuracy of SEP-POT value at risk (VaR) forecasts is backtested on seven stock indexes and three currency pairs and is compared with existing well-recognized methods. The results remain in favor of our model, showing that it constitutes a real alternative for forecasting extreme quantiles of financial returns.
APA, Harvard, Vancouver, ISO, and other styles
23

DeCarli, Kathryn, Joshua Ray Tanzer, Amelia Tajik, Camille Higel-Mcgovern, Christine Mary Duffy, and Mary Lorraine Lopresti. "An assessment of fertility preservation practices among oncology providers." Journal of Clinical Oncology 39, no. 28_suppl (2021): 267. http://dx.doi.org/10.1200/jco.2020.39.28_suppl.267.

Full text
Abstract:
Background: Chemotherapy accelerates the natural decline of ovarian reserve. Women with a new cancer diagnosis commonly experience psychosocial distress around anticipated fertility loss. Fertility preservation via oocyte cryopreservation or temporary ovarian suppression with GnRH agonists may address this concern. ASCO guidelines recommend early discussion of fertility, preservation methods, psychosocial distress counseling, and referral to a fertility specialist. Disparities have been shown in fertility counseling rates based on patient age, race and cancer type. We sought to identify patterns in fertility preservation practices at Lifespan Cancer Institute. Methods: We retrospectively reviewed the medical record of female patients aged 18-45 years at time of solid tumor or lymphoma diagnosis in the years 2014-2019 who received chemotherapy. We compared documented fertility discussions and referrals across patient demographics and provider characteristics. Generalized mixed effects modeling was used with a logit link or a log link (negative binomial or zero-inflated truncated Poisson distribution). Results: Among 181 patients who met eligibility criteria, the median age was 38 years with 140 (77.3%) White and 23 (12.7%) Hispanic. Only 112 patients (61.9%) had a conversation about fertility documented by a medical oncologist. Overall, 42 (23.2%) were referred to a fertility specialist and 28 (15.5%) received fertility preservation. Older patients and patients with higher parity were less likely to have a conversation about fertility with their oncologist (parity: OR = 0.33, p = 0.0020; age: OR = 0.64, p = 0.0439) or to be referred to a fertility specialist (parity: OR = 0.87, p < 0.0001; age: OR = 0.97, p < 0.0001). Male providers were less likely to refer patients to a specialist (OR = 0.85, p = 0.0155) or discuss fertility (OR = 0.02, p = 0.0164). On average, male providers had much shorter conversations about fertility (Cohen’s d = 1.01, p = 0.0007). Male providers were slightly more likely to refer patients of color to a fertility specialist than White patients (OR = 1.26, p = 0.0684). Patients with breast cancer were more likely to have discussions about fertility than patients with other cancers (p < 0.0001). Conclusions: We found disparities among patient age, parity, cancer type and provider sex in fertility preservation practices at our institution. Though not statistically significant, we also found disparities among patient race. Nearly all breast cancer providers at our institution are female and use a note template that includes fertility preservation. Providers in other cancer subtypes may be less accustomed to addressing fertility based on their patient populations. A major limitation is that we were only able to capture explicitly documented conversations. This needs assessment supports implementation of a systematic approach to promote fertility preservation as a quality measure across all cancer types.
APA, Harvard, Vancouver, ISO, and other styles
24

Ghosh, Indranil, and Tamara D. H. Cooper. "On Surprise Indices Related to Univariate Discrete and Continuous Distributions: A Survey." Mathematics 11, no. 14 (2023): 3234. http://dx.doi.org/10.3390/math11143234.

Full text
Abstract:
The notion that the occurrence of an event is surprising has been discussed in the literature without adequate details. By definition, a surprise index is an index by which how surprising an event is may be determined. Since its inception, this index has been evaluated for univariate discrete probability models, such as the binomial, negative binomial, and Poisson probability distributions. In this article, we derive and discuss using numerical studies, in addition to the above-mentioned probability models, surprise indices for several other univariate discrete probability models, such as the zero-truncated Poisson, geometric, Hermite, and Skellam distributions, by adopting an established strategy and using the Mathematica, version 12 software. In addition, we provide symbolic expressions for the surprise index for several univariate continuous probability models, which has not been previously discussed. For illustrative purposes, we present some possible real-life applications of this index and potential challenges to extending the notion of the surprise index to bivariate and higher dimensions, which might involve ubiquitous normalizing constants.
APA, Harvard, Vancouver, ISO, and other styles
25

Neelon, Brian, Howard H. Chang, Qiang Ling, and Nicole S. Hastings. "Spatiotemporal hurdle models for zero-inflated count data: Exploring trends in emergency department visits." Statistical Methods in Medical Research 25, no. 6 (2016): 2558–76. http://dx.doi.org/10.1177/0962280214527079.

Full text
Abstract:
Motivated by a study exploring spatiotemporal trends in emergency department use, we develop a class of two-part hurdle models for the analysis of zero-inflated areal count data. The models consist of two components—one for the probability of any emergency department use and one for the number of emergency department visits given use. Through a hierarchical structure, the models incorporate both patient- and region-level predictors, as well as spatially and temporally correlated random effects for each model component. The random effects are assigned multivariate conditionally autoregressive priors, which induce dependence between the components and provide spatial and temporal smoothing across adjacent spatial units and time periods, resulting in improved inferences. To accommodate potential overdispersion, we consider a range of parametric specifications for the positive counts, including truncated negative binomial and generalized Poisson distributions. We adopt a Bayesian inferential approach, and posterior computation is handled conveniently within standard Bayesian software. Our results indicate that the negative binomial and generalized Poisson hurdle models vastly outperform the Poisson hurdle model, demonstrating that overdispersed hurdle models provide a useful approach to analyzing zero-inflated spatiotemporal data.
APA, Harvard, Vancouver, ISO, and other styles
26

Gao, Dechen, and Kristina P. Sendova. "Applications of the classical compound Poisson model with claim sizes following a compound distribution." Probability in the Engineering and Informational Sciences, July 14, 2022, 1–30. http://dx.doi.org/10.1017/s0269964822000195.

Full text
Abstract:
In this paper, we discuss a generalization of the classical compound Poisson model with claim sizes following a compound distribution. As applications, we consider models involving zero-truncated geometric, zero-truncated negative-binomial and zero-truncated binomial batch-claim arrivals. We also provide some ruin-related quantities under the resulting risk models. Finally, through numerical examples, we visualize the behavior of these quantities.
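As a loose illustration of batch claim arrivals of the kind considered here, and not of the ruin-related quantities derived in the paper, the sketch below simulates the total claim count when batches arrive as a Poisson process and each batch size is zero-truncated geometric; all parameter values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_aggregate_claims(lam, t, batch_p, n_paths=10_000):
    """Total number of claims up to time t when batches arrive as a
    Poisson(lam * t) process and each batch size is zero-truncated geometric(batch_p)."""
    totals = np.empty(n_paths)
    for i in range(n_paths):
        n_batches = rng.poisson(lam * t)
        # numpy's geometric is supported on {1, 2, ...}, i.e. already zero-truncated.
        totals[i] = rng.geometric(batch_p, size=n_batches).sum()
    return totals

claims = simulate_aggregate_claims(lam=2.0, t=1.0, batch_p=0.4)
print("mean claims:", claims.mean(), "theory:", 2.0 * 1.0 / 0.4)
```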
APA, Harvard, Vancouver, ISO, and other styles
27

Zapata, Zakry, Stephen A. Sedory, and Sarjinder Singh. "Zero-truncated Binomial Distribution as a Randomization Device." Sociological Methods & Research, December 12, 2019, 004912411988246. http://dx.doi.org/10.1177/0049124119882469.

Full text
Abstract:
In this article, we consider the use of the zero-truncated binomial distribution as a randomization device while estimating the population proportion of a sensitive characteristic. The resultant new estimator based on the zero-truncated binomial distribution is then compared to its competitors from both the efficiency and the protection points of view. The situations where the proposed new estimator performs better are reported. R codes are also included.
APA, Harvard, Vancouver, ISO, and other styles
28

Kiani, Tassaddaq Hussain. "A zero truncated discrete distribution: Theory and applications to count data." Pakistan Journal of Statistics and Operation Research, March 6, 2020, 167–90. http://dx.doi.org/10.18187/pjsor.v16i1.2133.

Full text
Abstract:
The analysis and modeling of zero-truncated count data is of primary interest in many fields such as engineering, public health, sociology, psychology, and epidemiology. Therefore, in this article we have proposed a new and simple structure model, named the zero-truncated discrete Lindley distribution. The distribution contains some submodels and represents a two-component mixture of a zero-truncated geometric distribution and a zero-truncated negative binomial distribution with certain parameters. Several properties of the distribution are obtained, such as the mean residual life function, probability generating function, factorial moments, negative moments, moments of the residual life function, Bonferroni and Lorenz curves, estimation of parameters, Shannon and Renyi entropies, order statistics with the asymptotic distribution of their extremes and range, a characterization, stochastic ordering and the stress-strength parameter. Moreover, the collective risk model is discussed by considering the proposed distribution as the primary distribution and the exponential and Erlang distributions as secondary ones. Test and evaluation statistics as well as three real data applications are considered to assess the performance of the distribution among the most frequently used zero-truncated discrete probability models.
APA, Harvard, Vancouver, ISO, and other styles
29

Khurshid, Anwer, and Ashit B. Chakraborty. "On Shewhart Control Charts for Zero-Truncated Negative Binomial Distributions." Pakistan Journal of Engineering, Technology & Science 4, no. 1 (2016). http://dx.doi.org/10.22555/pjets.v4i1.521.

Full text
Abstract:
The negative binomial distribution (NBD) is extensively used for the description of data too heterogeneous to be fitted by the Poisson distribution. Observed samples, however, may be truncated, in the sense that the number of individuals falling into the zero class cannot be determined, or the observational apparatus becomes active when at least one event occurs. Chakraborty and Kakoty (1987) and Chakraborty and Singh (1990) have constructed CUSUM and Shewhart charts for the zero-truncated Poisson distribution, respectively. Recently, Chakraborty and Khurshid (2011 a, b) have constructed CUSUM charts for the zero-truncated binomial distribution and the doubly truncated binomial distribution, respectively. Apparently, very little work has specifically addressed control charts for the NBD (see, for example, Kaminsky et al., 1992; Ma and Zhang, 1995; Hoffman, 2003; Schwertman, 2005). The purpose of this paper is to construct Shewhart control charts for the zero-truncated negative binomial distribution (ZTNBD). Formulae for the average run length (ARL) of the charts are derived and studied for different values of the parameters of the distribution. OC curves are also drawn.
APA, Harvard, Vancouver, ISO, and other styles
30

PRATHEEBA, Dr S. "Designing Of Double Sampling Plan Indexed Through Six Sigma Quality Level – 1 Using Truncated Binomial Distribution." Educational Administration: Theory and Practice, 2024. http://dx.doi.org/10.53555/kuey.v30i5.2658.

Full text
Abstract:
Among the various probability distributions utilized to characterize situations where an observational apparatus becomes active only upon the occurrence of at least one event, the Zero Truncated Poisson Distribution (ZTPD) stands out. Shanmugam (1985) demonstrated the applicability of the ZTPD in modeling scenarios such as second quality lots, where there exists the possibility of at least one defective item in the sample. In this paper, a systematic procedure for constructing Double Sampling Plans, indexed through Six Sigma Quality Level – 1 (SSQL - 1), is presented. The methodology employs the Truncated Binomial Distribution (TBD) as the baseline distribution. Furthermore, to facilitate the selection of these plans, tables generated using an Excel package are provided, ensuring ease of use and accessibility.
APA, Harvard, Vancouver, ISO, and other styles
31

Zhao, Shi, Mingwang Shen, Salihu S. Musa, et al. "Inferencing superspreading potential using zero-truncated negative binomial model: exemplification with COVID-19." BMC Medical Research Methodology 21, no. 1 (2021). http://dx.doi.org/10.1186/s12874-021-01225-w.

Full text
Abstract:
Background In infectious disease transmission dynamics, the high heterogeneity in individual infectiousness indicates that few index cases generate large numbers of secondary cases, which is commonly known as superspreading events. The heterogeneity in transmission can be measured by describing the distribution of the number of secondary cases as a negative binomial (NB) distribution with dispersion parameter, k. However, such an inference framework usually neglects the under-ascertainment of sporadic cases, which are those without a known epidemiological link and are considered as independent clusters of size one, and this may potentially bias the estimates. Methods In this study, we adopt a zero-truncated likelihood-based framework to estimate k. We evaluate the estimation performance by using stochastic simulations, and compare it with the baseline non-truncated version. We exemplify the analytical framework with three contact tracing datasets of COVID-19. Results We demonstrate that the estimation bias exists when the under-ascertainment of index cases with 0 secondary cases occurs, and the zero-truncated inference overcomes this problem and yields a less biased estimator of k. We find that the k of COVID-19 is inferred at 0.32 (95%CI: 0.15, 0.64), which appears slightly smaller than many previous estimates. We provide the simulation codes applying the inference framework in this study. Conclusions The zero-truncated framework is recommended for less biased transmission heterogeneity estimates. These findings highlight the importance of individual-specific case management strategies to mitigate the COVID-19 pandemic by lowering the transmission risks of potential super-spreaders with priority.
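A minimal sketch of the zero-truncated likelihood idea for the dispersion parameter k follows, written with SciPy's negative binomial parameterized by mean R and dispersion k. The secondary-case counts are invented, and this is not the simulation code released with the paper.

```python
import numpy as np
from scipy.stats import nbinom
from scipy.optimize import minimize

def zt_nb_neg_loglik(theta, counts):
    """Negative log-likelihood of secondary-case counts, conditioned on
    counts >= 1, under an NB with mean R and dispersion k."""
    R, k = np.exp(theta)                     # keep both positive
    p = k / (k + R)                          # scipy's (n, p) parameterization
    ll = nbinom.logpmf(counts, k, p) - np.log(1.0 - nbinom.pmf(0, k, p))
    return -np.sum(ll)

# Illustrative numbers of secondary cases for index cases with at least one.
counts = np.array([1, 1, 2, 1, 5, 1, 1, 3, 12, 1, 2, 1, 1, 7])

fit = minimize(zt_nb_neg_loglik, x0=np.log([1.0, 0.5]), args=(counts,),
               method="Nelder-Mead")
R_hat, k_hat = np.exp(fit.x)
print("R =", round(R_hat, 2), " k =", round(k_hat, 2))
```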
APA, Harvard, Vancouver, ISO, and other styles
32

Mir, Khurshid. "On Size-Biased Negative Binomial Distribution and its Use in Zero-Truncated Cases." Measurement Science Review 9, no. 2 (2009). http://dx.doi.org/10.2478/v10048-009-0005-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Chakraborty, Ashit B., and Anwer Khurshid. "One-Sided Cumulative Sum (CUSUM) Control Charts for the Zero-Truncated Binomial Distribution." Economic Quality Control 26, no. 1 (2011). http://dx.doi.org/10.1515/eqc.2011.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Chakraborty, Ashit B., and Anwer Khurshid. "ON POWER OF THE CONTROL CHART FOR ZERO TRUNCATED BINOMIAL DISTRIBUTION UNDER MISCLASSIFICATION ERROR." Investigación Operacional 40, no. 3 (2019). https://doi.org/10.5281/zenodo.3721687.

Full text
Abstract:
In this paper, a mathematical investigation is carried out of the effect of misclassification due to measurement error on the control chart for the zero-truncated binomial distribution (ZTBD). Analytical formulas are obtained for calculating the probabilities of misclassification due to measurement error. The connection between the apparent fraction defective (AFD) and the true fraction defective (TFD) has been used to study the power of the control chart. Expressions for the average run length (ARL) and the OC curve are also obtained.
APA, Harvard, Vancouver, ISO, and other styles
35

Lerdsuwansri, R., P. Sangnawakij, D. Böhning, et al. "Sensitivity of contact-tracing for COVID-19 in Thailand: a capture-recapture application." BMC Infectious Diseases 22, no. 1 (2022). http://dx.doi.org/10.1186/s12879-022-07046-6.

Full text
Abstract:
Background We investigate the completeness of contact tracing for COVID-19 during the first wave of the COVID-19 pandemic in Thailand, from early January 2020 to 30 June 2020. Methods Uni-list capture-recapture models were applied to the frequency distributions of index cases to inform two questions: (1) the unobserved number of index cases with contacts, and (2) the unobserved number of index cases with secondary cases among their contacts. Results Generalized linear models (using Poisson and logistic families) did not return any significant predictor (age, sex, nationality, number of contacts per case) on the risk of transmission and hence capture-recapture models did not adjust for observed heterogeneity. Best fitting models, a zero-truncated negative binomial for question 1 and a zero-truncated Poisson for question 2, returned sensitivity estimates for contact tracing performance of 77.6% (95% CI = 73.75–81.54%) and 67.6% (95% CI = 53.84–81.38%), respectively. A zero-inflated negative binomial model on the distribution of index cases with secondary cases allowed the estimation of the effective reproduction number at 0.14 (95% CI = 0.09–0.22), and the overdispersion parameter at 0.1. Conclusion Completeness of COVID-19 contact tracing in Thailand during the first wave appeared moderate, with around 67% of infectious transmission chains detected. Overdispersion was present, suggesting that most of the index cases did not result in infectious transmission chains and the majority of transmission events stemmed from a small proportion of index cases.
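For the simpler of the two fitted models, the zero-truncated Poisson used for question 2, a uni-list capture-recapture sensitivity estimate can be sketched as follows: fit the truncated model, recover the implied zero probability, and inflate the observed count. The counts and the Horvitz-Thompson-style correction are illustrative assumptions, not the Thai contact-tracing records.

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize_scalar

def zt_poisson_neg_loglik(lam, counts):
    """Negative log-likelihood of a zero-truncated Poisson for counts >= 1."""
    ll = poisson.logpmf(counts, lam) - np.log(1.0 - poisson.pmf(0, lam))
    return -np.sum(ll)

# Illustrative per-index-case counts among observed (i.e. >= 1) index cases.
counts = np.array([1, 2, 1, 1, 3, 1, 2, 1, 1, 1, 4, 2, 1, 1])

fit = minimize_scalar(zt_poisson_neg_loglik, bounds=(1e-6, 50),
                      args=(counts,), method="bounded")
lam_hat = fit.x
p0 = poisson.pmf(0, lam_hat)
n_obs = len(counts)
n_total = n_obs / (1.0 - p0)          # Horvitz-Thompson-type correction
print("estimated total incl. unobserved index cases:", round(n_total, 1),
      "| sensitivity:", round(n_obs / n_total, 2))
```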
APA, Harvard, Vancouver, ISO, and other styles
36

Andika, Ayu, Sarini Abdullah, and Siti Nurrohmah. "Hurdle Negative Binomial Regression Model." ICSA - International Conference on Statistics and Analytics 2019, February 26, 2021, 57–68. http://dx.doi.org/10.29244/icsa.2019.pp57-68.

Full text
Abstract:
Poisson regression is a common regression model used for count data with equidispersion. However, in real data applications, overdispersion is often encountered, suggesting the search for an alternative to the Poisson regression model. For data that are overdispersed due to excess zeros and additional overdispersion in the positive values, one alternative that can be used is the hurdle negative binomial model. The hurdle negative binomial model is a two-part model consisting of a binary model and a zero-truncated negative binomial model. In this study, we discuss the hurdle negative binomial model and a Bayesian approach for the model’s parameter estimation, then apply the method to modelling the frequency of motoric complications in people with early Parkinson’s disease. Markov Chain Monte Carlo with Gibbs Sampling (MCMC-GS) was implemented to sample the regression parameters from their posterior distribution. The results showed that the hurdle negative binomial model fits the data satisfactorily, as implied by the convergence and unimodality of the posterior densities of the parameters of interest. We also identified risk factors for motoric complications.
APA, Harvard, Vancouver, ISO, and other styles
37

Zulki Alwani, Zetty Izzati, Adriana Irawati Nur Ibrahim, Rossita Mohamad Yunus, and Fadhilah Yusof. "Comparison of Count Data Generalised Linear Models: Application to Air-Pollution Related Disease in Johor Bahru, Malaysia." Pertanika Journal of Science and Technology 31, no. 4 (2023). http://dx.doi.org/10.47836/pjst.31.4.16.

Full text
Abstract:
Poisson regression is a common approach for modelling discrete data. However, due to the characteristics of the Poisson distribution, Poisson regression might not be suitable, since most data are over-dispersed or under-dispersed. This study compared four generalised linear models (GLMs): negative binomial, generalised Poisson, zero-truncated Poisson and zero-truncated negative binomial. An air-pollution-related disease, upper respiratory tract infection (URTI), and its relationship with various air pollution and climate factors were investigated. The data were obtained from Johor Bahru, Malaysia, from January 1, 2012, to December 31, 2013. Multicollinearity between the covariates and the independent variables was examined, and model selection was performed to find the significant variables for each model. This study showed that the negative binomial is the best model to determine the association between the number of URTI cases and air pollution and climate factors. Particulate Matter (PM10), Sulphur Dioxide (SO2) and Ground Level Ozone (GLO) are the air pollution factors that affect this disease significantly. However, climate factors do not significantly influence the number of URTI cases. The model constructed in this study can be utilised as an early warning system to prevent and mitigate URTI cases. The involved parties, such as the local authorities and hospitals, can also employ the model when facing the risk of URTI cases that may occur due to air pollution factors.
APA, Harvard, Vancouver, ISO, and other styles
38

Chin, Su Na, and Dankmar Böhning. "Choice of sampling effort in a Schnabel census for accurate population size estimates." Environmental and Ecological Statistics, May 12, 2025. https://doi.org/10.1007/s10651-025-00660-y.

Full text
Abstract:
Population size estimation has long been a key area of interest across various fields. The Schnabel census, a widely applied capture–recapture method, is commonly used for population estimation. However, the topic of sampling effort in Schnabel census studies remains insufficiently explored. This study aims to determine the required sampling effort in Schnabel census studies, considering different levels of capture success rates and population heterogeneity. To address this, the number of capture occasions, T, is adjusted to achieve different probabilities of missing observation, p0, with the goal of maintaining an appropriate width of the confidence interval. Specifically, maintaining p0 < 0.5 could limit uncertainty to within 20% of the true population size for N ≥ 100. A zero-truncated counting distribution was applied by fitting three models: binomial, beta-binomial, and binomial mixture. The findings reveal an exponential relationship between the desired capture success rate and the required number of capture occasions. Additionally, lower detectability requires more capture occasions to achieve the same level of capture success rate compared to higher detectability. This methodological approach provides robust and efficient estimation strategies, ensuring the sustainability and feasibility of population monitoring programs.
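In the homogeneous binomial special case, the exponential relationship between capture success and the number of occasions reduces to p0 = (1 - p)^T, which can be inverted for T. The sketch below is only this textbook calculation, not the beta-binomial or mixture fits reported in the paper.

```python
import math

def occasions_needed(p_capture, p0_target):
    """Smallest number of Schnabel-census occasions T, with per-occasion capture
    probability p_capture, such that P(never captured) = (1 - p_capture)^T <= p0_target."""
    return math.ceil(math.log(p0_target) / math.log(1.0 - p_capture))

for p in (0.05, 0.10, 0.20):
    print(f"capture prob {p:.2f}: need T = {occasions_needed(p, 0.5)} occasions for p0 <= 0.5")
```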
APA, Harvard, Vancouver, ISO, and other styles
39

Monisha, Mohanan, Radhakumari Maya, Muhammed Rasheed Irshad, Christophe Chesneau, and Damodaran Santhamani Shibu. "A new generalization of the zero-truncated negative binomial distribution by a Lagrange expansion with associated regression model and applications." International Journal of Data Science and Analytics, September 16, 2023. http://dx.doi.org/10.1007/s41060-023-00449-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Pawan Teh, Mohd Azri, Nazrina Aziz, and Zakiyah Zain. "A new method in designing group chain acceptance sampling plans (GChSP) for generalized exponential distribution." International Journal of Quality & Reliability Management ahead-of-print, ahead-of-print (2020). http://dx.doi.org/10.1108/ijqrm-12-2018-0345.

Full text
Abstract:
Purpose: This paper introduces group chain acceptance sampling plans (GChSP) for a truncated life test at a preassumed time by using the minimum angle method. The proposed method is an approach where both risks associated with acceptance sampling, namely the consumer's and producer's risks, are considered. Currently, the GChSP only considers the consumer's risk (CR), which means the current plan only protects the consumer, not the producer, since it does not take into account the producer's risk (PR) at all. Design/methodology/approach: There are six phases involved when designing the GChSP, which are (1) identifying the design parameters, (2) implementing the operating procedures, (3) deriving the probability of lot acceptance, (4) deriving the probability of zero or one defective, (5) deriving the proportion defective and (6) measuring the performance. Findings: The findings show that the optimal number of groups obtained satisfies both parties, i.e. consumer and producer, compared to the established GChSP, where the number of groups calculated only satisfies the consumer, not the producer. Research limitations/implications: There are three limitations identified for this paper. The first limitation is the distribution, in which this paper only proposes the GChSP for the generalized exponential distribution. It can be extended to different distributions available in the literature. The second limitation is that the paper uses the binomial distribution when deriving the probability of lot acceptance. Also, it can be derived by using different distributions such as the weighted binomial distribution, Poisson distribution and weighted Poisson distribution. The final limitation is that the paper adopts the mean as a quality parameter. For the quality parameter, researchers have other options such as the median and the percentile. Practical implications: The proposed GChSP should provide an alternative for industrial practitioners and for the inspection activity, as they have more options of sampling plans before they finally decide to select one. Originality/value: This is the first paper to propose the minimum angle method for the GChSP, where both risks, CR and PR, are considered. The GChSP has been developed since 2015, but all the researchers only considered the CR in their papers.
APA, Harvard, Vancouver, ISO, and other styles
41

Sangnawakij, Patarawan, and Dankmar Böhning. "On repeated diagnostic testing in screening for a medical condition: How often should the diagnostic test be repeated?" Biometrical Journal 66, no. 3 (2024). http://dx.doi.org/10.1002/bimj.202300175.

Full text
Abstract:
In screening large populations a diagnostic test is frequently used repeatedly. An example is screening for bowel cancer using the fecal occult blood test (FOBT) on several occasions, such as at 3 or 6 days. The question that is addressed here is how often we should repeat a diagnostic test when screening for a specific medical condition. Sensitivity is often used as a performance measure of a diagnostic test and is considered here for the individual application of the diagnostic test as well as for the overall screening procedure. The latter can involve an increasingly large number of repeated applications, but how many are sufficient? We demonstrate the issues involved in answering this question using real data on bowel cancer at St Vincents Hospital in Sydney. As data are only available for those testing positive at least once, an appropriate modeling technique is developed on the basis of the zero-truncated binomial distribution which allows for population heterogeneity. The latter is modeled using discrete nonparametric maximum likelihood. If we wish to achieve an overall sensitivity of 90%, the FOBT should be repeated for 2 weeks instead of the 1 week that was used at the time of the survey. A simulation study also shows consistency in the sense that bias and standard deviation for the estimated sensitivity decrease with an increasing number of repeated occasions as well as with increasing sample size.
APA, Harvard, Vancouver, ISO, and other styles
42

Saengthong, Pornpop. "Zero Inflated and Zero Truncated Negative Binomial Weighted Garima Distributions for Modeling Count Data." Journal of Applied Science 21, no. 1 (2022). http://dx.doi.org/10.14416/j.appsci.2022.01.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Zhang, Wei, and Huiying Sun. "How to analyze work productivity loss due to health problems in randomized controlled trials? A simulation study." BMC Medical Research Methodology 21, no. 1 (2021). http://dx.doi.org/10.1186/s12874-021-01330-w.

Full text
Abstract:
Background An increasing number of randomized controlled trials (RCTs) have measured the impact of interventions on work productivity loss. The productivity loss outcome is inflated at the zero and maximum loss values. Our study compared the performance of five commonly used methods for analysing productivity loss outcomes in RCTs. Methods We conducted a simulation study to compare Ordinary Least Squares (OLS), Negative Binomial (NB), two-part models (the non-zero part following a truncated NB distribution or a gamma distribution) and a three-part model (the middle part between the zero and max values following a Beta distribution). The main numbers of observations per arm, Nobs, that we considered were 50, 100 and 200. Baseline productivity loss was included as a covariate. Results All models performed similarly well when baseline productivity loss was set at the mean value. When baseline productivity loss was set at other values and Nobs = 50 with ≤5 subjects having max loss, two-part models performed best if the proportion of zero loss was > 50% in at least one arm; otherwise, OLS performed best. When Nobs = 100 or 200, the three-part model performed best if the two arms had equal scale parameters for their productivity loss outcome distributions between the zero and max values. Conclusions Our findings suggest that when the treatment effect at any given value of one single covariate is of interest, the model selection depends on the sample size, the proportions of zero loss and max loss, and the scale parameter for the productivity loss outcome distribution between zero and max loss in each arm of RCTs.
APA, Harvard, Vancouver, ISO, and other styles
