Academic literature on the topic 'Profile likelihood confidence regions'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Profile likelihood confidence regions.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Profile likelihood confidence regions"

1. Pek, Jolynn, and Hao Wu. "Profile Likelihood-Based Confidence Intervals and Regions for Structural Equation Models." Psychometrika 80, no. 4 (April 30, 2015): 1123–45. http://dx.doi.org/10.1007/s11336-015-9461-1.

2. Montoya, José A., Gudelia Figueroa-Preciado, and Mayra Rosalia Tocto-Erazo. "Flat Likelihoods: SIR-Poisson Model Case." Revista de la Facultad de Ciencias 11, no. 2 (July 1, 2022): 74–99. http://dx.doi.org/10.15446/rev.fac.cienc.v11n2.100986.

Abstract:
Systems of differential equations are used as the basis for defining mathematical structures for the moments, such as the mean and variance, of the probability distributions of random variables. Nevertheless, integrating a deterministic model with a probabilistic one, with the aim of describing a random phenomenon and of exploiting the observed data to make inferences about certain population dynamic characteristics, can lead to parameter identifiability problems. Furthermore, the approaches used to deal with those problems are often inappropriate. In this paper, the shape of the likelihood function of a SIR-Poisson model is used to describe the relationship between flat likelihoods and the parameter identifiability problem. In particular, we show how a flattened shape for the profile likelihood of the basic reproductive number R0 arises as the observed sample (over time) becomes smaller, causing ambiguity regarding the shape of the average model behavior. We conducted simulation studies to analyze the severity of the flatness of the R0 likelihood and the coverage frequency of the likelihood-confidence regions for the model parameters. Finally, we describe some approaches for dealing with the practical identifiability problem, showing the impact they can have on inferences. We believe this work can help raise awareness of how statistical inferences can be affected by a priori parameter assumptions and the underlying relationships between them, as well as by model reparameterizations and incorrect model assumptions.
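
The flattening this abstract describes is easy to reproduce numerically. Below is a minimal, hypothetical Python sketch (our illustration, not code from the paper; population size, rates, and observation window are all assumed): the basic reproductive number R0 = beta/gamma is fixed on a grid, the Poisson log-likelihood is maximized over the remaining rate parameter, and the points lying within 1.92 log-likelihood units of the maximum form an approximate 95% likelihood-confidence interval. Shortening the observation window makes the resulting profile visibly flatter and the interval wider.

import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

def sir(y, t, beta, gamma):
    s, i = y
    return [-beta * s * i, beta * s * i - gamma * i]

t = np.arange(0, 15)                 # short observation window (days)
n = 1000.0                           # assumed population size
y0 = [1 - 10 / n, 10 / n]            # 10 initially infected
rng = np.random.default_rng(1)
mean_i = odeint(sir, y0, t, args=(0.6, 0.3))[:, 1] * n   # "true" beta, gamma
counts = rng.poisson(mean_i)         # Poisson-observed prevalence

def negloglik(gamma, r0):
    # With R0 fixed, beta = r0 * gamma is the only free rate left.
    mu = odeint(sir, y0, t, args=(r0 * gamma, gamma))[:, 1] * n
    return -poisson.logpmf(counts, np.maximum(mu, 1e-9)).sum()

r0_grid = np.linspace(1.2, 3.5, 60)
profile = np.array([-minimize_scalar(negloglik, bounds=(0.05, 2.0),
                                     method="bounded", args=(r0,)).fun
                    for r0 in r0_grid])
inside = r0_grid[profile > profile.max() - 1.92]   # chi-square(1) 95% cutoff / 2
print(f"approximate 95% likelihood interval for R0: "
      f"[{inside.min():.2f}, {inside.max():.2f}]")
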
3. Zou, Yuye, and Chengxin Wu. "Statistical Inference for the Heteroscedastic Partially Linear Varying-Coefficient Errors-in-Variables Model with Missing Censoring Indicators." Discrete Dynamics in Nature and Society 2021 (June 7, 2021): 1–26. http://dx.doi.org/10.1155/2021/1141022.

Abstract:
In this paper, we focus on heteroscedastic partially linear varying-coefficient errors-in-variables models under right-censored data with censoring indicators missing at random. Based on regression calibration, imputation, and inverse probability weighted methods, we define a class of modified profile least square estimators of the parameter and local linear estimators of the coefficient function, which are applied to constructing estimators of the error variance function. In order to improve the estimation accuracy and take into account the heteroscedastic error, reweighted estimators of the parameter and coefficient function are developed. At the same time, we apply the empirical likelihood method to construct confidence regions and maximum empirical likelihood estimators of the parameter. Under appropriate assumptions, the asymptotic normality of the proposed estimators is studied. The strong uniform convergence rate for the estimators of the error variance function is considered. Also, the asymptotic chi-squared distribution of the empirical log-likelihood ratio statistics is proved. A simulation study is conducted to evaluate the finite sample performance of the proposed estimators. Meanwhile, one real data example is provided to illustrate our methods.

4. Hong, Shaoxin, Jiancheng Jiang, Xuejun Jiang, and Zhijie Xiao. "Unifying inference for semiparametric regression." Econometrics Journal 24, no. 3 (March 11, 2021): 482–501. http://dx.doi.org/10.1093/ectj/utab005.

Abstract:
In the literature, a discrepancy in the limiting distributions of least squares estimators between the stationary and nonstationary cases exists in various regression models with regressors of different persistence levels. This hinders further statistical inference, since one has to decide which distribution should be used next. In this paper, we develop a semiparametric partially linear regression model with stationary and nonstationary regressors to attenuate this difficulty, and propose a unifying inference procedure for the coefficients. Specifically, we propose a profile weighted estimation equation method that facilitates the unifying inference. The proposed method is applied to the predictive regressions of stock returns, and an empirical likelihood procedure is developed to test predictability. It is shown that the Wilks theorem holds for the empirical likelihood ratio regardless of whether predictors are stationary or not, which provides a unifying method for constructing confidence regions for the coefficients of state variables. Simulations show that the proposed method works well and has favourable finite sample performance relative to some existing approaches. An empirical application examining the predictability of equity returns highlights the value of our methodology.

5. Callingham, Thomas M., Marius Cautun, Alis J. Deason, Carlos S. Frenk, Wenting Wang, Facundo A. Gómez, Robert J. J. Grand, Federico Marinacci, and Ruediger Pakmor. "The mass of the Milky Way from satellite dynamics." Monthly Notices of the Royal Astronomical Society 484, no. 4 (February 5, 2019): 5453–67. http://dx.doi.org/10.1093/mnras/stz365.

Abstract:
We present and apply a method to infer the mass of the Milky Way (MW) by comparing the dynamics of MW satellites to those of model satellites in the EAGLE cosmological hydrodynamics simulations. A distribution function (DF) for galactic satellites is constructed from EAGLE using specific angular momentum and specific energy, which are scaled so as to be independent of host halo mass. In this two-dimensional space, the orbital properties of satellite galaxies vary according to the host halo mass. The halo mass can be inferred by calculating the likelihood that the observed satellite population is drawn from this DF. Our method is robustly calibrated on mock EAGLE systems. We validate it by applying it to the completely independent suite of 30 Auriga high-resolution simulations of MW-like galaxies: the method accurately recovers their true mass and associated uncertainties. We then apply it to 10 classical satellites of the MW with six-dimensional phase-space measurements, including updated proper motions from the Gaia satellite. The mass of the MW is estimated to be $M_{200}^{\rm MW}=1.17_{-0.15}^{+0.21}\times 10^{12}\,\mathrm{M}_{\odot}$ (68 per cent confidence limits). We combine our total mass estimate with recent mass estimates in the inner regions of the Galaxy to infer an inner dark matter (DM) mass fraction $M^{\rm DM}(<20\,\mathrm{kpc})/M^{\rm DM}_{200}=0.12$, which is typical of ${\sim}10^{12}\,\mathrm{M}_{\odot}$ lambda cold dark matter haloes in hydrodynamical galaxy formation simulations. Assuming a Navarro, Frenk and White (NFW) profile, this is equivalent to a halo concentration of $c_{200}^{\rm MW}=10.9^{+2.6}_{-2.0}$.

6. Bell, Michael J., Wayne Strong, Denis Elliott, and Charlie Walker. "Soil nitrogen—crop response calibration relationships and criteria for winter cereal crops grown in Australia." Crop and Pasture Science 64, no. 5 (2013): 442. http://dx.doi.org/10.1071/cp12431.

Abstract:
More than 1200 wheat and 120 barley experiments conducted in Australia to examine yield responses to applied nitrogen (N) fertiliser are contained in a national database of field crops nutrient research (BFDC National Database). The yield responses are accompanied by various pre-plant soil test data to quantify plant-available N and other indicators of soil fertility status or mineralisable N. A web application (BFDC Interrogator), developed to access the database, enables construction of calibrations between relative crop yield ((Y0/Ymax) × 100) and N soil test value. In this paper we report the critical soil test values for 90% RY (CV90) and the associated critical ranges (CR90, defined as the 70% confidence interval around that CV90) derived from analysis of various subsets of these winter cereal experiments. Experimental programs were conducted throughout Australia’s main grain-production regions in different eras, starting from the 1960s in Queensland through to Victoria during 2000s. Improved management practices adopted during the period were reflected in increasing potential yields with research era, increasing from an average Ymax of 2.2 t/ha in Queensland in the 1960s and 1970s, to 3.4 t/ha in South Australia (SA) in the 1980s, to 4.3 t/ha in New South Wales (NSW) in the 1990s, and 4.2 t/ha in Victoria in the 2000s. Various sampling depths (0.1–1.2 m) and methods of quantifying available N (nitrate-N or mineral-N) from pre-planting soil samples were used and provided useful guides to the need for supplementary N. The most regionally consistent relationships were established using nitrate-N (kg/ha) in the top 0.6 m of the soil profile, with regional and seasonal variation in CV90 largely accounted for through impacts on experimental Ymax. The CV90 for nitrate-N within the top 0.6 m of the soil profile for wheat crops increased from 36 to 110 kg nitrate-N/ha as Ymax increased over the range 1 to >5 t/ha. Apparent variation in CV90 with seasonal moisture availability was entirely consistent with impacts on experimental Ymax. Further analyses of wheat trials with available grain protein (~45% of all experiments) established that grain yield and not grain N content was the major driver of crop N demand and CV90. Subsets of data explored the impact of crop management practices such as crop rotation or fallow length on both pre-planting profile mineral-N and CV90. Analyses showed that while management practices influenced profile mineral-N at planting and the likelihood and size of yield response to applied N fertiliser, they had no significant impact on CV90. A level of risk is involved with the use of pre-plant testing to determine the need for supplementary N application in all Australian dryland systems. In southern and western regions, where crop performance is based almost entirely on in-crop rainfall, this risk is offset by the management opportunity to split N applications during crop growth in response to changing crop yield potential. In northern cropping systems, where stored soil moisture at sowing is indicative of minimum yield potential, erratic winter rainfall increases uncertainty about actual yield potential as well as reducing the opportunity for effective in-season applications.

7. Owen, Art. "Empirical Likelihood Ratio Confidence Regions." Annals of Statistics 18, no. 1 (March 1990): 90–120. http://dx.doi.org/10.1214/aos/1176347494.

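Owen's construction inverts a nonparametric likelihood ratio test: observation weights are maximized subject to an estimating-equation constraint, and -2 log R is calibrated against a chi-squared distribution. The following minimal Python sketch (our own hedged illustration with simulated data, not from the paper) computes the empirical likelihood interval for a scalar mean via the standard dual problem:

import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_elr(x, mu):
    """-2 log empirical likelihood ratio for the mean, via the dual problem."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:        # mu outside the convex hull
        return np.inf
    # Solve sum z_i / (1 + lam * z_i) = 0 on the interval where all
    # implied weights stay positive: lam in (-1/z.max(), -1/z.min()).
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(0)
x = rng.exponential(size=40)
cut = chi2.ppf(0.95, df=1)                  # Wilks calibration, about 3.84
grid = np.linspace(x.min() + 1e-3, x.max() - 1e-3, 400)
region = grid[[neg2_log_elr(x, m) <= cut for m in grid]]
print(f"95% EL interval for the mean: [{region.min():.3f}, {region.max():.3f}]")
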
8. Royston, Patrick. "Profile Likelihood for Estimation and Confidence Intervals." Stata Journal 7, no. 3 (September 2007): 376–87. http://dx.doi.org/10.1177/1536867x0700700305.

Abstract:
Normal-based confidence intervals for a parameter of interest are inaccurate when the sampling distribution of the estimate is nonnormal. The technique known as profile likelihood can produce confidence intervals with better coverage. It may be used whether the model includes only the variable of interest or also several other variables. Profile-likelihood confidence intervals are particularly useful in nonlinear models. The command pllf computes and plots the maximum likelihood estimate and the profile likelihood-based confidence interval for one parameter in a wide variety of regression models.
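
The idea implemented by pllf can be sketched in a few lines of Python (a hypothetical illustration; the logistic model, simulated data, and search bounds are all our assumptions, and this is not the Stata command's code). The slope is held fixed while the intercept is maximized out, and the 95% limits are where the profile log-likelihood drops 1.92 units, half the 95% chi-squared(1) quantile, below its maximum:

import numpy as np
from scipy.optimize import brentq, minimize_scalar

rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = rng.binomial(1, 1 / (1 + np.exp(0.5 - 1.0 * x)))  # true intercept -0.5, slope 1

def loglik(a, b):
    eta = a + b * x
    return np.sum(y * eta - np.log1p(np.exp(eta)))     # Bernoulli log-likelihood

def profile(b):
    """Maximize over the nuisance intercept with the slope b held fixed."""
    return -minimize_scalar(lambda a: -loglik(a, b),
                            bounds=(-10, 10), method="bounded").fun

bhat = minimize_scalar(lambda b: -profile(b), bounds=(-5, 5),
                       method="bounded").x
drop = profile(bhat) - 1.92                 # chi-square(1) 95% cutoff / 2
lo = brentq(lambda b: profile(b) - drop, bhat - 5, bhat)
hi = brentq(lambda b: profile(b) - drop, bhat, bhat + 5)
print(f"MLE {bhat:.3f}, 95% profile CI ({lo:.3f}, {hi:.3f})")
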
9. Ionides, E. L., C. Breto, J. Park, R. A. Smith, and A. A. King. "Monte Carlo profile confidence intervals for dynamic systems." Journal of The Royal Society Interface 14, no. 132 (July 2017): 20170126. http://dx.doi.org/10.1098/rsif.2017.0126.

Abstract:
Monte Carlo methods to evaluate and maximize the likelihood function enable the construction of confidence intervals and hypothesis tests, facilitating scientific investigation using models for which the likelihood function is intractable. When Monte Carlo error can be made small, by sufficiently exhaustive computation, then the standard theory and practice of likelihood-based inference applies. As datasets become larger, and models more complex, situations arise where no reasonable amount of computation can render Monte Carlo error negligible. We develop profile likelihood methodology to provide frequentist inferences that take into account Monte Carlo uncertainty. We investigate the role of this methodology in facilitating inference for computationally challenging dynamic latent variable models. We present examples arising in the study of infectious disease transmission, demonstrating our methodology for inference on nonlinear dynamic models using genetic sequence data and panel time-series data. We also discuss applicability to nonlinear time-series and spatio-temporal data.
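
One ingredient of this methodology can be shown with a toy sketch: when each profile point is computed with Monte Carlo error, the noisy evaluations are smoothed before the likelihood-ratio cutoff is applied. The Python fragment below is a deliberately simplified, assumed setup (the paper uses local regression and additionally inflates the cutoff to reflect the smoother's own uncertainty; a plain quadratic fit and the unadjusted 1.92 cutoff are used here only for illustration):

import numpy as np

rng = np.random.default_rng(7)
theta = np.linspace(-2, 2, 41)
exact = -5.0 * theta**2                      # exact profile, maximum at 0
noisy = exact + rng.normal(scale=0.8, size=theta.size)  # Monte Carlo error

c2, c1, c0 = np.polyfit(theta, noisy, 2)     # quadratic smoothing of the profile
that = -c1 / (2 * c2)                        # smoothed maximizer
half_width = np.sqrt(1.92 / -c2)             # solve c2*(t - that)^2 = -1.92
print(f"95% interval: ({that - half_width:.3f}, {that + half_width:.3f})")
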
10. Monti, A. "Empirical likelihood confidence regions in time series models." Biometrika 84, no. 2 (June 1, 1997): 395–405. http://dx.doi.org/10.1093/biomet/84.2.395.


Dissertations / Theses on the topic "Profile likelihood confidence regions"

1. Riggs, Kent Edward. "Maximum-likelihood-based confidence regions and hypothesis tests for selected statistical models." Waco, Tex.: Baylor University, 2006. http://hdl.handle.net/2104/4879.

2. Liu, Huayu. "Modified Profile Likelihood Approach for Certain Intraclass Correlation Coefficient." Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/math_theses/96.

Abstract:
In this paper we consider the problem of constructing confidence intervals and lower bounds for the intraclass correlation coefficient in an interrater reliability study where the raters are randomly selected from a population of raters. The likelihood function of the interrater reliability is derived and simplified, and the profile-likelihood-based approach is readily available for computing confidence intervals for the interrater reliability. Unfortunately, the confidence intervals computed using the profile likelihood function are in general too narrow to have the desired coverage probabilities. From a practical point of view, a conservative approach that is at least as precise as any existing method is preferred, since it gives correct results with a probability higher than claimed. Under this rationale, we propose the so-called modified profile likelihood approach in this paper. A simulation study shows that the proposed method in general performs better than currently used methods.

3. Dai, Chenglu. "The Profile Likelihood Method in Finding Confidence Intervals and its Comparison with the Bootstrap Percentile Method." Fogler Library, University of Maine, 2008. http://www.library.umaine.edu/theses/pdf/DaiC2008.pdf.

4. Valeinis, Janis. "Confidence bands for structural relationship models." Doctoral thesis, [S.l.]: [s.n.], 2007. http://webdoc.sub.gwdg.de/diss/2007/valeinis.

5. Salasar, Luis Ernesto Bueno. "Eliminação de parâmetros perturbadores em um modelo de captura-recaptura." Universidade Federal de São Carlos, 2011. https://repositorio.ufscar.br/handle/ufscar/4485.

Abstract:
The capture-recapture process, largely used in the estimation of the number of elements of an animal population, is also applied to other branches of knowledge such as epidemiology, linguistics, software reliability, and ecology, among others. One of the first applications of this method was made by Laplace in 1783, with the aim of estimating the number of inhabitants of France. Later, Carl G. J. Petersen in 1889 and Lincoln in 1930 applied the same estimator in the context of animal populations. This estimator became known in the literature as the Lincoln-Petersen estimator. In the mid-twentieth century, several researchers dedicated themselves to the formulation of statistical models appropriate for the estimation of population size, which caused a substantial increase in the amount of theoretical and applied work on the subject. Capture-recapture models are constructed under certain assumptions relating to the population, the sampling procedure, and the experimental conditions. The main assumption that distinguishes models concerns the change in the number of individuals in the population during the period of the experiment. Models that allow for births, deaths, or migration are called open-population models, while models in which these events are not allowed are called closed-population models. In this work, the goal is to characterize likelihood functions obtained by applying methods for the elimination of nuisance parameters in the case of closed-population models. Based on these likelihood functions, we discuss methods for point and interval estimation of the population size. The estimation methods are illustrated on a real data set, and their frequentist properties are analysed via Monte Carlo simulation.

6. Critchfield, Brian L. "Statistical Methods for Kinetic Modeling of Fischer Tropsch Synthesis on a Supported Iron Catalyst." Diss., Brigham Young University, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1670.pdf.

7. Sztendur, Ewa M. "Precision of the path of steepest ascent in response surface methodology." Thesis, Victoria University, 2005. https://vuir.vu.edu.au/15792/.

Abstract:
This thesis provides some extensions to the existing method of determining the precision of the path of steepest ascent in response surface methodology, to cover situations with correlated and heteroscedastic responses, including the important class of generalised linear models. It is shown how the eigenvalues of a certain matrix can be used to express the proportion of included directions in the confidence cone for the path of steepest ascent as an integral, which can then be computed using numerical integration. In addition, some tight inequalities for the proportion of included directions are derived for the two-, three-, and four-dimensional cases. For generalised linear models, methods are developed using the Wald approach and the profile likelihood confidence region approach, and bootstrap methods are used to improve the accuracy of the calculations.

8. Valdovinos Alvarez, Jose Manuel. "Empirical Likelihood Confidence Intervals for the Population Mean Based on Incomplete Data." Thesis, Georgia State University, 2015. http://scholarworks.gsu.edu/math_theses/145.

Abstract:
The use of doubly robust estimators is key to estimating the population mean response in the presence of incomplete data. Cao et al. (2009) proposed an alternative doubly robust estimator which exhibits strong performance compared to existing estimation methods. In this thesis, we apply the jackknife empirical likelihood, the jackknife empirical likelihood with nuisance parameters, the profile empirical likelihood, and an empirical likelihood method based on the influence function to make inference on the population mean. We use these methods to construct confidence intervals for the population mean, and compare the coverage probabilities and interval lengths using both the "usual" doubly robust estimator and the alternative estimator proposed by Cao et al. (2009). An extensive simulation study is carried out to compare the different methods. Finally, the proposed methods are applied to two real data sets.

Books on the topic "Profile likelihood confidence regions"

1. DiCiccio, Thomas J. On the Implementation of Profile Likelihood Methods. Toronto, Ont.: University of Toronto, Dept. of Statistics, 1991.

2. Moek, G. Approximate Confidence Regions for the Parameters of Three Software Reliability Models. Amsterdam: National Aerospace Laboratory, 1987.

3. Cheng, Russell. Embedded Distributions: Two Numerical Examples. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198505044.003.0007.

Abstract:
This chapter illustrates the use of (i) the score statistic and (ii) a goodness-of-fit statistic to test whether an embedded model provides an adequate fit, in the latter case with critical values calculated by bootstrapping. Also illustrated are (iii) the calculation of parameter confidence intervals and CDF confidence bands using both asymptotic theory and bootstrapping, and (iv) the use of profile log-likelihood plots to display the form of the maximized log-likelihood, with scatterplots for checking convergence to normality of estimated parameter distributions. Two different data sets are analysed. In the first, the generalized extreme value (GEVMin) distribution and its embedded model, the simple extreme value (EVMin), are fitted to Kevlar-fibre breaking strength data. In the second, the four-parameter Burr XII distribution, its three-parameter embedded models (the GEVMin, Type II generalized logistic, and Pareto), and its two-parameter embedded models (the EVMin and shifted exponential) are fitted to carbon-fibre strength data and compared.

4. Cheng, Russell. Bootstrap Analysis. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198505044.003.0004.

Abstract:
Parametric bootstrapping (BS) provides an attractive alternative, both theoretically and numerically, to asymptotic theory for estimating sampling distributions. This chapter summarizes its use not only for calculating confidence intervals for estimated parameters and functions of parameters, but also for obtaining log-likelihood-based confidence regions from which confidence bands for cumulative distribution and regression functions can be obtained. All such BS calculations are very easy to implement. Details are also given for calculating critical values of EDF statistics used in goodness-of-fit (GoF) tests, such as the Anderson-Darling A² statistic, whose null distribution is otherwise difficult to obtain, as it varies with different null hypotheses. A simple proof is given showing that the parametric BS is probabilistically exact for location-scale models. A formal regression lack-of-fit test employing parametric BS is given that can be used even when the regression data have no replications. Two real data examples are given.
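
As a concrete illustration of the chapter's theme, here is a minimal parametric bootstrap in Python (our sketch under an assumed normal model; names and sizes are invented): simulate repeatedly from the fitted model, re-estimate the quantity of interest, and take percentiles of the re-estimates as the confidence interval.

import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(loc=5.0, scale=2.0, size=50)
muhat, sighat = data.mean(), data.std()     # MLEs under the normal model

boot = np.empty(2000)
for b in range(boot.size):
    sim = rng.normal(muhat, sighat, size=data.size)   # simulate from the fit
    boot[b] = sim.std()                               # re-estimate sigma
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% parametric bootstrap CI for sigma: ({lo:.2f}, {hi:.2f})")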

Book chapters on the topic "Profile likelihood confidence regions"

1. Boos, Dennis D., and L. A. Stefanski. "Likelihood-Based Tests and Confidence Regions." In Springer Texts in Statistics, 125–61. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-4818-1_3.

2. "Hypothesis Tests and Confidence Intervals or Regions." In Maximum Likelihood Estimation and Inference, 37–63. Chichester, UK: John Wiley & Sons, Ltd, 2011. http://dx.doi.org/10.1002/9780470094846.ch3.

3. Merzougui, Mouna. "The Periodic Restricted EXPAR(1) Model." In Confidence Regions - Applications, Tools and Challenges of Estimation [Working Title]. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.94078.

Abstract:
In this chapter, we discuss the nonlinear periodic restricted EXPAR(1) model. The parameters are estimated by the quasi maximum likelihood (QML) method, and we give their asymptotic properties, which lead to the construction of confidence intervals for the parameters. We then consider the problem of testing the nullity of coefficients using the standard likelihood ratio (LR) test; simulation studies are given to assess the performance of the QML estimator and the LR test.

4. Jordaan, Adele, Mariette Swanepoel, Yvonne Paul, and Terry Jeremy Ellapen. "The Interprofessional Clinical and Therapeutic Team Strategy to Manage Spinal Cord Injuries." In Therapy Approaches in Neurological Disorders. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.94850.

Abstract:
A common comorbidity of spinal cord injury is physical deconditioning, which frequently predisposes the person to increased risk of secondary non-communicable diseases, such as non-insulin-dependent diabetes mellitus, cardiovascular disease, respiratory disease, cardiorespiratory disease, obesity, osteoporosis, arthritis, and osteoarthritis. Clinical literature has shown that spinal cord injured individuals have a poor cardiometabolic risk profile that amplifies the likelihood of secondary non-communicable diseases. Components of physical deconditioning include muscle atrophy, decreased aerobic capacity, inflexibility, and diminished muscular strength and endurance. Another problem associated with spinal cord injuries is reliance or dependence on others. The combination of poor physical conditioning and dependence on others often adversely impacts the individual's quality of life, limiting their social interaction with others. Adherence to habitual physical activity and exercise has been shown to improve conditioning status, health and wellbeing, independence, confidence and self-image, and successful re-integration into the community. Therefore, it is of paramount importance to increase awareness of the benefits of habitual physical activity and exercise among spinal cord injured patients, medical and clinical practitioners, family, and friends. This chapter highlights the health benefits of habitual physical activity in relation to selected secondary non-communicable diseases, and the importance of an interprofessional clinical and therapeutic team strategy to improve the quality of life of spinal cord injured individuals.

Conference papers on the topic "Profile likelihood confidence regions"

1. Barbe, Kurt, and Johan Schoukens. "Robust small sample properties for likelihood based confidence regions." In 2010 IEEE Instrumentation & Measurement Technology Conference Proceedings. IEEE, 2010. http://dx.doi.org/10.1109/imtc.2010.5488192.

2. Lei, Lei, Christos Alexopoulos, Yijie Peng, and James R. Wilson. "Confidence Intervals and Regions for Quantiles using Conditional Monte Carlo and Generalized Likelihood Ratios." In 2020 Winter Simulation Conference (WSC). IEEE, 2020. http://dx.doi.org/10.1109/wsc48552.2020.9383910.

3. Spall, James C. "Reliability estimation and confidence regions from subsystem and full system tests via maximum likelihood." In the 8th Workshop. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1774674.1774677.

4. Brooker, Daniel C., and Geoffrey K. Cole. "Accurate Calculation of Confidence Intervals on Predicted Extreme Met-Ocean Conditions When Using Small Datasets." In ASME 2004 23rd International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2004. http://dx.doi.org/10.1115/omae2004-51160.

Abstract:
Estimation of high return period values of wind and wave conditions is usually done using a limited sample of data from measurement or hindcast studies. Because a finite sample size is used, the reliability of estimates is usually evaluated by constructing confidence intervals around design values such as the 100 year return value. In this paper, a numerical simulation study is used to compare the accuracy of calculated confidence intervals using several different calculation methods: the asymptotic normal, parametric bootstrap and profile likelihood methods. The accuracy of each method for different sample sizes is assessed for the truncated Weibull distribution. Based on these results, a profile likelihood method for estimation of confidence intervals is suggested for use when dealing with small datasets.

5. Knupp, Diego C., João Vítor M. Canato, Antônio J. Silva Neto, and Francisco J. C. P. Soeiro. "Radiative Properties Estimation and Construction of Confidence Regions with a Combination of the Differential Evolution Algorithm and the Likelihood Method." In CNMAC 2016 - XXXVI Congresso Nacional de Matemática Aplicada e Computacional. SBMAC, 2017. http://dx.doi.org/10.5540/03.2017.005.01.0486.

6. Helbig, Klaus, Dennis Jarmowski, Felix Koelzow, and Christian Kontermann. "Probabilistic Lifetime Assessment Approach of 2%-Cr Steel Considering Material and Loading Profile Scatter." In ASME Turbo Expo 2019: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/gt2019-91297.

Abstract:
In regions with a high share of intermittent renewable energy, thermal plants are forced to operate with greater flexibility, beyond their original design intent. Decreasing energy prices and capacity factors will further force these plants into more transient operation with steeper load gradients. Older steam turbine (ST) protection systems on site are often not designed for such flexible operation and do not properly supervise the resulting impact on lifetime consumption. Therefore, precise lifetime management concepts are required to increase plant reliability and flexibility, and to mitigate risks for newly implemented operation modes. Several lifetime assessment methods have been developed to quantify the damage evolution and the residual lifetime of ST components. Usually, these methods require input on representative or operated loading profiles as well as characteristic material curves. These characteristic curves are determined by a number of standardized material tests. Due to material scatter and other sources of uncertainty, each test is a realization of a stochastic process. Hence, the corresponding characteristic material curves inherit these uncertainties and do not represent an absolute limit. Analyses of different loading profiles, even for the same plant and the same start-up class, reveal that considering statistically evaluated plant-specific start-up distributions and further transient events is of major importance. Probabilistic methods are able to quantify all of these uncertainties and compute the probability of failure for a given lifetime, or vice versa. In this paper, an extensive and systematic operational profile analysis is first carried out and discussed, which acts as input for a probabilistic lifetime assessment approach. A probabilistic workflow is then presented to quantify the uncertainties and to predict lifetime using the generalized damage accumulation rule, with a focus on creep-fatigue loading. To quantify the characteristic material curves, existing experimental data for a 2%-Cr forged steel (23CrMoNiWV8-8) are used. A probabilistic representation of the Wilshire-Scharning equation characterizes the creep rupture behavior. The maximum-likelihood method is used for parameter estimation and to take still-running long-term creep experiments into account. The end of life in low-cycle fatigue experiments is characterized by macroscopic crack initiation, and a temperature-modified version of the Manson-Coffin-Basquin equation is used to represent the experimental data. Parameter estimation is done using linear regression analysis, followed by comprehensive regression diagnostics. Taking both the material and the loading scatter into account, a reliability analysis is carried out to compute the probability of crack initiation. Finally, different load cases are considered and evaluated against each other.

7. Duchnowski, Edward M., Seokbin Seo, and Nicholas R. Brown. "Uncertainty Quantification for the Development of Mechanistic Hydride Behavior Model for Spent Fuel Cladding Storage and Transportation." In 2020 International Conference on Nuclear Engineering collocated with the ASME 2020 Power Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/icone2020-16812.

Abstract:
During operation of light water reactors, hydrogen from the primary coolant is absorbed by the zirconium cladding and is able to migrate and redistribute within the cladding. The hydrogen in solid solution can precipitate, forming zirconium hydrides, which results in a decrease in the ductility of the cladding and ultimately an increased likelihood of cladding failure, especially under conditions such as transport or storage of nuclear fuel rods. In collaboration with other universities, industry, and national laboratories, the overarching goal of this project is to advance the modeling of hydride behavior for implementation in the BISON fuel performance tool. The University of Tennessee-Knoxville (UTK) has been tasked with quantifying the uncertainty in the models developed within this project, as well as quantifying the sensitivity to the most significant parameters of uncertainty. The BISON fuel performance code has been shown to overpredict the total concentration of hydrogen at cold regions of a temperature profile; thus, a sensitivity study was performed to quantify the impact that key diffusion parameters have on the local concentration of hydrogen at the cold end of a 1-D model subjected to an asymmetric temperature profile. It is shown within this document that the diffusion activation energy and the pre-exponential factor, values within the diffusion coefficient of hydrogen, have a large impact on the local concentration of hydrogen, and that their importance increases with increasing annealing time.

8. Kayode, B., F. Al-Tarrah, and G. Hursan. "Methodology for Static and Dynamic Modeling of Hydrocarbon Systems Having Sharp Viscosity Gradient." In International Petroleum Technology Conference. IPTC, 2021. http://dx.doi.org/10.2523/iptc-21184-ms.

Abstract:
This paper describes a methodology for delineating a tar surface, incorporating it into a geological model, and numerically modeling the variation of oil viscosity with depth above the tar surface. The methodology integrates well log data and compositional fluid analysis to develop a mathematical model that mimics the oil's property variation with depth. While a good number of reservoirs globally fit this description, there is a knowledge gap in the literature regarding best practices for dealing with their peculiar challenges. These challenges include: (i) how to delineate the top-of-tar across the field, (ii) modeling of the Saturation Height Function (SHF) in a system where density and wettability change with depth, and (iii) the methodology for representing the depth-dependent oil properties (especially viscosity) in reservoir simulation. Nuclear magnetic resonance (NMR) logs were used to predict fluid viscosity using a technique discussed by Hursan et al. (2016). Viscosity regions are identified at every well that has an NMR log, and these regions are mapped from well to well across the reservoir. Within each viscosity region, the analysis results of fluid samples collected from wells are used to develop mathematical models of fluid composition variation with depth. A reliable SHF model was achieved by incorporating depth-varying oil density and depth-varying wettability into the calculation of the J-function. A compositional reservoir simulation was set up using the viscosity regions and the mathematical models describing composition variation with depth for the respective regions. Using information obtained from the literature as a starting point, residual oil saturation was modeled as a function of oil viscosity. The original reservoir understanding placed the top of non-movable oil (tar) at a constant fieldwide subsurface depth, corresponding to the shallowest historical no-flow drillstem test (DST) depth. Mapping of the NMR viscosity regions across the field resulted in a sloping tar-oil contact (TOC), which increased the movable hydrocarbon pore volume. The viscosity-versus-depth profile from the simulation model matched the observed data, allowing the simulation model to better predict well performance. The simulation model results also matched the depth variation of the observed formation volume factor (FVF) and reservoir fluid density. Some wells that have measured viscosity data but no NMR logs were used as blind-test wells; the simulation model results also matched the measured viscosity at those wells. These good matches of the oil property variation with depth gave confidence that the simulation model could be used as an efficient planning tool for ensuring that injectors are placed just above the tar mat. The use of the simulation model for well planning could reduce the need for geosteering while drilling flank wells, leading to financial savings. This paper contains a generalized approach that can be used in static and dynamic modeling of reservoirs where oil changes from light to medium to heavy, underlain by tar. It contains recommendations and guidelines for constructing a reliable simulation model of such systems.

9. Huspeni, Paul J., Ly Le, Wei Liu, Elli Saravanos, and Shannon Adkins. "Weibull Analysis Method Applied to Optical Fiber Breaking Stress Data." In ASME 2009 InterPACK Conference collocated with the ASME 2009 Summer Heat Transfer Conference and the ASME 2009 3rd International Conference on Energy Sustainability. ASMEDC, 2009. http://dx.doi.org/10.1115/interpack2009-89217.

Abstract:
This paper details the application of a 2-parameter Weibull maximum likelihood estimation (MLE) method to optical fiber breaking stress data. Optical fiber is used in a broad range of telecommunications applications, and its performance in fabricated component assemblies is of critical importance to the proper functioning of telecom networks. Fiber optic components incorporating stripped optical fiber include optical couplers, optical splitters, WDM devices, connectors, and mechanical and fusion splices. The strength of optical fiber in component assemblies depends on numerous aspects of the fabrication process. These include the fiber coating stripping process, fiber cleaning after stripping, any manual handling of the fiber after stripping, and the time period after fiber stripping until the fiber is inserted and potted into a component or subjected to mechanical or fusion splice processes. The ambient moisture content of the air is a critical parameter affecting the resultant strength of stripped fiber. The fabrication and assembly processes used to produce component assemblies can damage the fiber and create macro defects. From a reliability and analysis perspective, being able to separate the effects of fabrication-induced macro defects from those due to intrinsic micro flaws present in the unstripped optical fiber is critical. The Weibull statistical analysis method permits identification of distinct fiber breaking stress regions associated with intrinsic fiber strength versus macro defects created by stripping and fabrication processes. The Weibull analysis is most effectively applied to fiber breaking stress data in the lower quartile, since this stress region is of most interest from a reliability perspective. Another significant benefit of the Weibull analysis is the ability to predict cumulative failure rates for any selected values of fiber breaking stress. The predictive capability of the Weibull model provides more useful information regarding the lower breaking stress region than conventional statistical analysis methods such as analysis of variance (ANOVA). The Weibull MLE data analysis method provides more conservative and robust results than other Weibull analysis methods such as median rank regression (MRR) [1]. The Weibull MLE analysis method was applied to individual and cumulative fiber breaking stress data sets generated from 10 new fiber strippers of a selected type. The objective of these analyses was to provide trustworthy predictions of Weibull unreliability for the selected stripper type in the lower modal breaking stress region of interest for component end-use performance. This paper details the following:
• methodology for producing the initial Weibull data plots,
• identification of modal regions associated with fiber macro and intrinsic defects,
• selection of a representative breaking stress region for application of the Weibull analysis,
• generation and interpretation of the two Weibull parameters (β and η), and
• prediction of cumulative failure rate or unreliability confidence intervals (C.I.s) for selected fiber breaking stress values.
This study showed that combining the data from all 10 fiber strippers into a cumulative data set and then performing the Weibull MLE analysis is the preferred approach. This method provides the best and most comprehensive definition of the lower fiber breaking stress modal region and more trustworthy predictions of Weibull unreliability at critical selected independent variable values of 50 and 100 kpsi.
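
A 2-parameter Weibull MLE fit of the kind described can be sketched as follows (Python with synthetic data; scipy's weibull_min parameterization is assumed, with the location fixed at zero to obtain the 2-parameter model, and the 50 and 100 kpsi evaluation points echo the paper's):

from scipy.stats import weibull_min

# Synthetic "breaking stress" sample in kpsi (invented shape and scale).
stress = weibull_min.rvs(c=4.0, scale=400.0, size=100, random_state=11)

# floc=0 fixes the location parameter, giving the 2-parameter Weibull fit.
beta, _, eta = weibull_min.fit(stress, floc=0)
print(f"shape beta = {beta:.2f}, scale eta = {eta:.1f} kpsi")
for s in (50.0, 100.0):
    # Cumulative failure probability ("unreliability") at stress s.
    print(f"F({s:.0f} kpsi) = {weibull_min.cdf(s, beta, 0, eta):.2e}")
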
10. Craig, Ian G. R., Coleman D. Hoff, Paul J. Kristo, and Mark L. Kimber. "A Validation Experiment for Turbulent Mixing in a Collated Jet." In ASME-JSME-KSME 2019 8th Joint Fluids Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/ajkfluids2019-4981.

Abstract:
Numerical modeling of turbulent mixing is complicated not only by the natural complexity of the flow physics, but also by the model's sensitivity to the user-specified conditions. When aiming to quantify the level of trust in a given model, a validation experiment is often performed to provide data against which confidence comparisons can be made, ultimately evaluating the adequacy of the model assumptions. In the nuclear community, a number of promising Generation IV reactor concepts have been proposed, each requiring high-fidelity modeling capabilities to accurately assess safety-related issues. The experiment described in this paper is intended to provide a much-needed validation data set for assessing the use of computational fluid dynamics (CFD) in a complex turbulent mixing scenario relevant to the prismatic very high temperature reactor (VHTR) concept. Over the course of the VHTR's operation, the reactor's graphite-moderated hexagonal fuel blocks shrink from neutron damage, forming interstitial gaps between adjacent blocks. A significant percentage of the coolant can flow through these gaps, having a substantial impact on the thermal-hydraulic conditions in the core. An experimental facility is presented that uses air as a simulant fluid and includes a unit-cell representation of the hexagonal blocks, which accounts for both the intended circular channels and the secondary rectangular slot features induced by the bypass gaps. The outlet of the unit cell consists of a collated jet: a central round jet surrounded by three slot jets at relative 120° angles to one another, issuing into a stagnant domain. A preliminary test case is proposed in which the collated jet is set to an isothermal and iso-velocity condition. Constant temperature anemometry (CTA) and constant current anemometry (CCA) measurements serve to capture the velocity and temperature inlet quantities (IQs). Particle image velocimetry (PIV) measurements provide the appropriate system response quantities (SRQs), yielding insight into the mean and fluctuating components of the 2-D velocity field. Results are presented in the range of 0–8 diameters downstream of the inlet to the test section. The collated jet inlet region yields velocity profiles that are heavily influenced by opposing pressure gradients between the neighboring round and slot regions. As a result, the velocity peaks found in this area are neither on the centerline of the round jet nor on that of the slot, but towards the outer edge of each. With increasing downstream distance, the collated jet exhibits a more classical round jet profile. The inlet region of the collated jet is thus of particular interest to future modeling efforts to more accurately depict the lower plenum behavior and the transition to a self-similar profile downstream. Proper uncertainty quantification is also presented and aids in assessing the integrity of the experimental results for future CFD validation.

Reports on the topic "Profile likelihood confidence regions"

1. McDonagh, Marian S., Jesse Wagner, Azrah Y. Ahmed, Rongwei Fu, Benjamin Morasco, Devan Kansagara, and Roger Chou. Living Systematic Review on Cannabis and Other Plant-Based Treatments for Chronic Pain. Agency for Healthcare Research and Quality (AHRQ), October 2021. http://dx.doi.org/10.23970/ahrqepccer250.

Abstract:
Objectives. To evaluate the evidence on benefits and harms of cannabinoids and similar plant-based compounds to treat chronic pain. Data sources. Ovid® MEDLINE®, PsycINFO®, Embase®, the Cochrane Library, and SCOPUS® databases, reference lists of included studies, and submissions received after a Federal Register request were searched to July 2021. Review methods. Using dual review, we screened search results for randomized controlled trials (RCTs) and observational studies of patients with chronic pain evaluating cannabis, kratom, and similar compounds with any comparison group and at least 1 month of treatment or followup. Dual review was used to abstract study data, assess study-level risk of bias, and rate the strength of evidence. Prioritized outcomes included pain, overall function, and adverse events. We grouped studies that assessed tetrahydrocannabinol (THC) and/or cannabidiol (CBD) based on their THC to CBD ratio and categorized them as high-THC to CBD ratio, comparable THC to CBD ratio, and low-THC to CBD ratio. We also grouped studies by whether the product was a whole-plant product (cannabis), cannabinoids extracted or purified from a whole plant, or synthetic. We conducted meta-analyses using the profile likelihood random effects model and assessed between-study heterogeneity using Cochran's Q chi-square statistic and the I² statistic for inconsistency. Magnitude of benefit was categorized as no effect or small, moderate, and large effects. Results. From 2,850 abstracts, 20 RCTs (N=1,776) and 7 observational studies (N=13,095) assessing different cannabinoids were included; none evaluated kratom. Studies were primarily short term, and 75 percent enrolled patients with a variety of neuropathic pain. Comparators were primarily placebo or usual care. The strength of evidence (SOE) was low, unless otherwise noted. Compared with placebo, comparable THC to CBD ratio oral spray was associated with a small benefit in change in pain severity (7 RCTs, N=632, 0 to 10 scale, mean difference [MD] −0.54, 95% confidence interval [CI] −0.95 to −0.19, I²=28%; SOE: moderate) and overall function (6 RCTs, N=616, 0 to 10 scale, MD −0.42, 95% CI −0.73 to −0.16, I²=24%). There was no effect on study withdrawals due to adverse events. There was a large increased risk of dizziness and sedation and a moderate increased risk of nausea (dizziness: 6 RCTs, N=866, 30% vs. 8%, relative risk [RR] 3.57, 95% CI 2.42 to 5.60, I²=0%; sedation: 6 RCTs, N=866, 22% vs. 16%, RR 5.04, 95% CI 2.10 to 11.89, I²=0%; and nausea: 6 RCTs, N=866, 13% vs. 7.5%, RR 1.79, 95% CI 1.20 to 2.78, I²=0%). Synthetic products with high-THC to CBD ratios were associated with a moderate improvement in pain severity, a moderate increase in sedation, and a large increase in nausea (pain: 6 RCTs, N=390, 0 to 10 scale, MD −1.15, 95% CI −1.99 to −0.54, I²=39%; sedation: 3 RCTs, N=335, 19% vs. 10%, RR 1.73, 95% CI 1.03 to 4.63, I²=0%; nausea: 2 RCTs, N=302, 12% vs. 6%, RR 2.19, 95% CI 0.77 to 5.39; I²=0%). We found moderate SOE for a large increased risk of dizziness (2 RCTs, 32% vs. 11%, RR 2.74, 95% CI 1.47 to 6.86, I²=0%). Extracted whole-plant products with high-THC to CBD ratios (oral) were associated with a large increased risk of study withdrawal due to adverse events (1 RCT, 13.9% vs. 5.7%, RR 3.12, 95% CI 1.54 to 6.33) and dizziness (1 RCT, 62.2% vs. 7.5%, RR 8.34, 95% CI 4.53 to 15.34). We observed a moderate improvement in pain severity when combining all studies of high-THC to CBD ratio (8 RCTs, N=684, MD −1.25, 95% CI −2.09 to −0.71, I²=50%; SOE: moderate). Evidence on whole-plant cannabis, topical CBD, low-THC to CBD ratio products, other cannabinoids, comparisons with active products, and impact on use of opioids was insufficient to draw conclusions. Other important harms (psychosis, cannabis use disorder, and cognitive effects) were not reported. Conclusions. Low to moderate strength evidence suggests small to moderate improvements in pain (mostly neuropathic), and moderate to large increases in common adverse events (dizziness, sedation, nausea) and study withdrawal due to adverse events with high- and comparable THC to CBD ratio extracted cannabinoids and synthetic products in short-term treatment (1 to 6 months). Evidence for whole-plant cannabis and other comparisons, outcomes, and plant-based compounds was unavailable or insufficient to draw conclusions. Small sample sizes, lack of evidence for moderate- and long-term use and other key outcomes, such as other adverse events and impact on use of opioids during treatment, indicate that more research is needed.
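
The profile likelihood random-effects model referred to here can be sketched compactly. In the hypothetical Python example below (effect sizes and variances are invented; this shows the general Hardy-Thompson-type construction, not the review's actual code), each study contributes a normal likelihood with variance v_i + tau^2, tau^2 is maximized out for each candidate pooled effect, and the 95% interval collects the points within 1.92 units of the profile maximum:

import numpy as np
from scipy.optimize import minimize_scalar

y = np.array([-0.54, -0.30, -0.80, -0.42, -0.65, -0.21, -0.50])  # study effects
v = np.array([0.04, 0.06, 0.09, 0.05, 0.08, 0.07, 0.05])         # their variances

def loglik(mu, tau2):
    w = v + tau2                   # total variance under the random-effects model
    return -0.5 * np.sum(np.log(w) + (y - mu) ** 2 / w)

def profile(mu):
    """Maximize over the heterogeneity variance tau^2 with mu held fixed."""
    return -minimize_scalar(lambda t2: -loglik(mu, t2),
                            bounds=(0.0, 5.0), method="bounded").fun

grid = np.linspace(-1.2, 0.2, 281)
pl = np.array([profile(m) for m in grid])
ci = grid[pl >= pl.max() - 1.92]   # invert the likelihood ratio test
print(f"pooled MD {grid[pl.argmax()]:.3f}, 95% CI ({ci.min():.3f}, {ci.max():.3f})")
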
2. Chou, Roger, Jesse Wagner, Azrah Y. Ahmed, Benjamin J. Morasco, Devan Kansagara, Shelley Selph, Rebecca Holmes, and Rongwei Fu. Living Systematic Review on Cannabis and Other Plant-Based Treatments for Chronic Pain: 2022 Update. Agency for Healthcare Research and Quality (AHRQ), September 2022. http://dx.doi.org/10.23970/ahrqepccer250update2022.

Abstract:
Objectives. To update the evidence on benefits and harms of cannabinoids and similar plant-based compounds to treat chronic pain using a living systematic review approach. Data sources. Ovid® MEDLINE®, PsycINFO®, Embase®, the Cochrane Library, and SCOPUS® databases, reference lists of included studies, and submissions received after a Federal Register request were searched to April 4, 2022. Review methods. Using dual review, we screened search results for randomized controlled trials (RCTs) and observational studies of patients with chronic pain evaluating cannabis, kratom, and similar compounds with any comparison group and at least 1 month of treatment or followup. Dual review was used to abstract study data, assess study-level risk of bias, and rate the strength of evidence (SOE). Prioritized outcomes included pain, overall function, and adverse events. We grouped studies that assessed tetrahydrocannabinol (THC) and/or cannabidiol (CBD) based on their THC to CBD ratio and categorized them as comparable THC to CBD ratio, high-THC to CBD ratio, and low-THC to CBD ratio. We also grouped studies by whether the product was a whole-plant product (cannabis), cannabinoids extracted or purified from a whole plant, or a synthetic product. We conducted meta-analyses using the profile likelihood random effects model and assessed between-study heterogeneity using Cochran's Q chi-square test and the I² statistic. Magnitude of benefit was categorized as no effect or small, moderate, and large effects. Results. From 3,283 abstracts, 21 RCTs (N=1,905) and 8 observational studies (N=13,769) assessing different cannabinoids were included; none evaluated kratom. Studies were primarily short term, and 59 percent enrolled patients with neuropathic pain. Comparators were primarily placebo or usual care. The SOE was low unless otherwise noted. Compared with placebo, comparable THC to CBD ratio oral spray was associated with a small benefit in change in pain severity (7 RCTs, N=632, 0 to 10 scale, mean difference [MD] −0.54, 95% confidence interval [CI] −0.95 to −0.19, I²=39%; SOE: moderate) and overall function (6 RCTs, N=616, 0 to 10 scale, MD −0.42, 95% CI −0.73 to −0.16, I²=32%). There was no effect on study withdrawals due to adverse events. There was a large increased risk of dizziness and sedation, and a moderate increased risk of nausea (dizziness: 6 RCTs, N=866, 31.0% vs. 8.0%, relative risk [RR] 3.57, 95% CI 2.42 to 5.60, I²=0%; sedation: 6 RCTs, N=866, 8.0% vs. 1.2%, RR 5.04, 95% CI 2.10 to 11.89, I²=0%; and nausea: 6 RCTs, N=866, 13% vs. 7.5%, RR 1.79, 95% CI 1.19 to 2.77, I²=0%). Synthetic products with high-THC to CBD ratios were associated with a moderate improvement in pain severity, a moderate increase in sedation, and a large increase in nausea (pain: 6 RCTs, N=390, 0 to 10 scale, MD −1.15, 95% CI −1.99 to −0.54, I²=48%; sedation: 3 RCTs, N=335, 19% vs. 10%, RR 1.73, 95% CI 1.03 to 4.63, I²=28%; nausea: 2 RCTs, N=302, 12.3% vs. 6.1%, RR 2.19, 95% CI 0.77 to 5.39; I²=0%). We also found moderate SOE for a large increased risk of dizziness (2 RCTs, 32% vs. 11%, RR 2.74, 95% CI 1.47 to 6.86, I²=40%). Extracted whole-plant products with high-THC to CBD ratios (oral) were associated with a large increased risk of study withdrawal due to adverse events (1 RCT, 13.9% vs. 5.7%, RR 3.12, 95% CI 1.54 to 6.33) and dizziness (1 RCT, 62.2% vs. 7.5%, RR 8.34, 95% CI 4.53 to 15.34); outcomes assessing benefit were not reported or were insufficient. We observed a moderate improvement in pain severity when combining all studies of high-THC to CBD ratio (8 RCTs, N=684, MD −1.25, 95% CI −2.09 to −0.71, I²=58%; SOE: moderate). Evidence (including observational studies) on whole-plant cannabis, topical or oral CBD, low-THC to CBD ratio products, other cannabinoids, comparisons with active products or between cannabis-related products, and impact on use of opioids was insufficient to draw conclusions. Other important harms (psychosis, cannabis use disorder, and cognitive effects) were not reported. Conclusions. Low to moderate strength evidence suggests small to moderate improvements in pain (mostly neuropathic), and moderate to large increases in common adverse events (dizziness, sedation, nausea) with high- and comparable THC to CBD ratio extracted cannabinoids and synthetic products during short-term treatment (1 to 6 months); high-THC to CBD ratio products were also associated with increased risk of withdrawal due to adverse events. Evidence for whole-plant cannabis and other comparisons, outcomes, and plant-based compounds was unavailable or insufficient to draw conclusions. Small sample sizes, lack of evidence for moderate- and long-term use and other key outcomes, such as other adverse events and impact on use of opioids during treatment, indicate that more research is needed.