Journal articles on the topic 'External validation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'External validation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Choi, Woo Jin, Richard Walker, Luckshi Rajendran, Owen Jones, Annie Gravely, Marina Englesakis, Steven Gallinger, Gideon Hirschfield, Bettina Hansen, and Gonzalo Sapisochin. "Call to Improve the Quality of Prediction Tools for Intrahepatic Cholangiocarcinoma Resection: A Critical Appraisal, Systematic Review, and External Validation Study." Annals of Surgery Open 4, no. 3 (September 2023): e328. http://dx.doi.org/10.1097/as9.0000000000000328.

Abstract:
Objective: To conduct a systematic review, critical appraisal, and external validation of survival prediction tools for patients undergoing intrahepatic cholangiocarcinoma (iCCA) resection. Summary background data: Despite the development of several survival prediction tools in recent years for patients undergoing iCCA resections, there is a lack of critical appraisal and external validation of these models. Methods: We conducted a systematic review and critical appraisal of survival and recurrence prediction models for patients undergoing curative-intent iCCA resections. Studies were evaluated based on their model design, risk of bias, reporting, performance, and validation results. We identified the best model and externally validated it using our institution’s data. Results: This review included a total of 31 studies, consisting of 26 studies with original prediction tools and 5 studies that only conducted external validations. Among the 26, 54% of the studies conducted internal validations, 46% conducted external validations, and only 1 study scored a low risk of bias. Harrell’s C-statistics ranged from 0.67 to 0.76 for internal validation and from 0.64 to 0.75 for external validation. Only 81% of the studies reported model calibration. Our external validation of the best model (Intrahepatic Cholangiocarcinoma [ICC]-Metroticket) estimated Harrell’s and Uno’s C-statistics of 0.67 (95% CI: 0.56–0.77) and Uno’s time-dependent area under the receiver operating characteristic curve (AUC) of 0.71 (95% CI: 0.53–0.88), with a Brier score of 0.20 (95% CI: 0.15–0.26) and good calibration plots. Conclusions: Many prediction models have been published in recent years, but their quality remains poor, and minimal methodological quality improvement has been observed. The ICC-Metroticket was selected as the best model (Uno’s time-dependent AUC of 0.71) for 5-year overall survival prediction in patients undergoing curative-intent iCCA resection.
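For readers who want to recompute the kind of discrimination metric reported here, the following is a minimal sketch of Harrell's C-statistic using the lifelines package; the survival times, event indicators, and risk scores are randomly generated placeholders, not the study's cohort.

```python
# Minimal sketch: Harrell's C-statistic on a hypothetical external cohort.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
follow_up_years = rng.uniform(0.5, 8.0, 200)  # observed follow-up per patient
event_observed = rng.integers(0, 2, 200)      # 1 = death observed, 0 = censored
risk_score = rng.normal(size=200)             # model output: higher = higher risk

# lifelines expects higher scores to mean longer survival, hence the minus sign.
c_index = concordance_index(follow_up_years, -risk_score, event_observed)
print(f"Harrell's C: {c_index:.2f}")
```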
2

Aminian, Ali, Stacy A. Brethauer, Sangeeta R. Kashyap, John P. Kirwan, and Philip R. Schauer. "DiaRem score: external validation." Lancet Diabetes & Endocrinology 2, no. 1 (January 2014): 12–13. http://dx.doi.org/10.1016/s2213-8587(13)70202-x.

3

Halfon, Philippe, Guillaume Penaranda, Christophe Renou, and Marc Bourliere. "External validation of FibroIndex." Hepatology 46, no. 1 (2007): 280–81. http://dx.doi.org/10.1002/hep.21717.

4

Biancari, Fausto, Jari Laurikka, Jan-Ola Wistbacka, Juha Nissinen, and Matti Tarkka. "External Validation of Modified EuroSCORE." World Journal of Surgery 34, no. 12 (September 1, 2010): 2979–84. http://dx.doi.org/10.1007/s00268-010-0775-y.

5

Kornblith, Aaron E., Chandan Singh, Gabriel Devlin, Newton Addo, Christian J. Streck, James F. Holmes, Nathan Kuppermann, et al. "Predictability and stability testing to assess clinical decision instrument performance for children after blunt torso trauma." PLOS Digital Health 1, no. 8 (August 8, 2022): e0000076. http://dx.doi.org/10.1371/journal.pdig.0000076.

Abstract:
Objective The Pediatric Emergency Care Applied Research Network (PECARN) has developed a clinical-decision instrument (CDI) to identify children at very low risk of intra-abdominal injury. However, the CDI has not been externally validated. We sought to vet the PECARN CDI with the Predictability Computability Stability (PCS) data science framework, potentially increasing its chance of a successful external validation. Materials & methods We performed a secondary analysis of two prospectively collected datasets: PECARN (12,044 children from 20 emergency departments) and an independent external validation dataset from the Pediatric Surgical Research Collaborative (PedSRC; 2,188 children from 14 emergency departments). We used PCS to reanalyze the original PECARN CDI along with new interpretable PCS CDIs developed using the PECARN dataset. External validation was then measured on the PedSRC dataset. Results Three predictor variables (abdominal wall trauma, Glasgow Coma Scale Score <14, and abdominal tenderness) were found to be stable. A PCS CDI using only these three variables achieved lower sensitivity than the original seven-variable PECARN CDI on internal PECARN validation but performed the same on external PedSRC validation (sensitivity 96.8% and specificity 44%). Conclusion The PCS data science framework vetted the PECARN CDI and its constituent predictor variables prior to external validation. We found that the three stable predictor variables represented all of the PECARN CDI's predictive performance on independent external validation. The PCS framework offers a less resource-intensive method than prospective validation to vet CDIs before external validation. We also found that the PECARN CDI should generalize well to new populations and should be prospectively externally validated, and the PCS framework offers a potential strategy to increase the chance of that (costly) validation succeeding.
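As an illustration of how a rule-based CDI such as the three-variable instrument above is scored, this sketch computes sensitivity and specificity on a toy cohort; the values are invented, not drawn from PECARN or PedSRC.

```python
# Toy scoring of a "flag if any predictor is positive" clinical decision rule.
import numpy as np

abdominal_wall_trauma = np.array([1, 0, 0, 1, 0, 0])
gcs_below_14          = np.array([0, 0, 1, 0, 0, 0])
abdominal_tenderness  = np.array([0, 1, 0, 0, 0, 0])
has_injury            = np.array([1, 1, 1, 0, 0, 0])  # ground-truth IAI

flagged = (abdominal_wall_trauma | gcs_below_14 | abdominal_tenderness) == 1

sensitivity = (flagged & (has_injury == 1)).sum() / (has_injury == 1).sum()
specificity = (~flagged & (has_injury == 0)).sum() / (has_injury == 0).sum()
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```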
6

Lewis, R., and P. Postle. "CFD Validation for External Aerodynamics Part 1: Validating Component Analysis." NAFEMS International Journal of CFD Case Studies 4 (January 2004): 27–37. http://dx.doi.org/10.59972/8gzlg6cv.

Abstract:
Validating external aerodynamics is often a complex process with more to consider than just obtaining the correct forces. This paper examines some of the issues related to the CFD modelling of car components such as wings, wheel assemblies and radiators. The types of mesh, choice of turbulence model and wall resolution (measured in terms of y+) are also discussed.
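For context on the wall-resolution measure mentioned above, here is a small sketch of the standard y+ calculation; the flow quantities are assumed values chosen only for illustration.

```python
# y+ of the first cell centre in viscous wall units (illustrative values).
import math

rho = 1.2                  # air density, kg/m^3
nu = 1.5e-5                # kinematic viscosity, m^2/s
tau_wall = 1.8             # wall shear stress, Pa (assumed)
first_cell_height = 2e-5   # wall distance of first cell centre, m

u_tau = math.sqrt(tau_wall / rho)        # friction velocity
y_plus = first_cell_height * u_tau / nu  # dimensionless wall distance
print(f"y+ = {y_plus:.1f}")              # y+ ~ 1 resolves the viscous sublayer
```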
7

Ban, Jong-Wook, Lucy Abel, Richard Stevens, and Rafael Perera. "Research inefficiencies in external validation studies of the Framingham Wilson coronary heart disease risk rule: A systematic review." PLOS ONE 19, no. 9 (September 13, 2024): e0310321. http://dx.doi.org/10.1371/journal.pone.0310321.

Abstract:
Background External validation studies create evidence about a clinical prediction rule’s (CPR’s) generalizability by evaluating and updating the CPR in populations different from those used in the derivation, and also by contributing to estimating its overall performance when meta-analysed in a systematic review. While most cardiovascular CPRs do not have any external validation, some CPRs have been externally validated repeatedly. Hence, we examined whether external validation studies of the Framingham Wilson coronary heart disease (CHD) risk rule contributed to generating evidence to their full potential. Methods A forward citation search of the Framingham Wilson CHD risk rule’s derivation study was conducted to identify studies that evaluated the Framingham Wilson CHD risk rule in different populations. For external validation studies of the Framingham Wilson CHD risk rule, we examined whether authors updated the Framingham Wilson CHD risk rule when it performed poorly. We also assessed the contribution of external validation studies to understanding the Predicted/Observed (P/O) event ratio and c statistic of the Framingham Wilson CHD risk rule. Results We identified 98 studies that evaluated the Framingham Wilson CHD risk rule; 40 of which were external validation studies. Of these 40 studies, 27 (67.5%) concluded the Framingham Wilson CHD risk rule performed poorly but did not update it. Of 23 external validation studies conducted with data that could be included in meta-analyses, 13 (56.5%) could not fully contribute to the meta-analyses of P/O ratio and/or c statistic because these performance measures were neither reported nor could be calculated from provided data. Discussion Most external validation studies failed to generate evidence about the Framingham Wilson CHD risk rule’s generalizability to their full potential. Researchers might increase the value of external validation studies by presenting all relevant performance measures and by updating the CPR when it performs poorly.
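The two performance measures pooled in this review, the predicted/observed (P/O) event ratio and the c statistic, can be computed as in the short sketch below; the risks and outcomes are fabricated for illustration.

```python
# P/O event ratio (calibration-in-the-large) and c statistic (discrimination).
import numpy as np
from sklearn.metrics import roc_auc_score

predicted_10yr_risk = np.array([0.02, 0.10, 0.25, 0.07, 0.40])
observed_chd_event  = np.array([0,    0,    1,    0,    1])

p_o_ratio = predicted_10yr_risk.sum() / observed_chd_event.sum()
c_statistic = roc_auc_score(observed_chd_event, predicted_10yr_risk)
print(f"P/O ratio: {p_o_ratio:.2f}, c statistic: {c_statistic:.2f}")
```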
8

Steyerberg, Ewout W., and Frank E. Harrell. "Prediction models need appropriate internal, internal–external, and external validation." Journal of Clinical Epidemiology 69 (January 2016): 245–47. http://dx.doi.org/10.1016/j.jclinepi.2015.04.005.

9

Hippisley-Cox, Julia, and Carol Coupland. "Independent external validation of QCancer (Ovarian)." European Journal of Cancer Care 22, no. 4 (June 18, 2013): 559–60. http://dx.doi.org/10.1111/ecc.12071.

10

Posthumus, R., T. P. Traas, W. J. G. M. Peijnenburg, and E. M. Hulzebos. "External validation of EPIWIN biodegradation models." SAR and QSAR in Environmental Research 16, no. 1-2 (February 2005): 135–48. http://dx.doi.org/10.1080/10629360412331319899.

11

Villalobos, Nicholas, Ross Davidson, Uzair K. Ghori, Yara Abdou, Jawad Abukhalaf, and Rodrigo Vazquez Guillamet. "External Validation of the COmorbidity Test." COPD: Journal of Chronic Obstructive Pulmonary Disease 14, no. 5 (September 3, 2017): 513–17. http://dx.doi.org/10.1080/15412555.2017.1354981.

12

Clarke, Jennifer L., S. Claiborne Johnston, Mary Farrant, Richard Bernstein, David Tong, and J. Claude Hemphill. "External Validation of the ICH Score." Neurocritical Care 1, no. 1 (2004): 53–60. http://dx.doi.org/10.1385/ncc:1:1:53.

13

Still, Christopher D., G. Craig Wood, and George Argyropoulos. "DiaRem score: external validation – Authors' reply." Lancet Diabetes & Endocrinology 2, no. 1 (January 2014): 13. http://dx.doi.org/10.1016/s2213-8587(13)70201-8.

14

Moore, Christopher L., Brock Daniels, Cary P. Gross, Seth Luty, and Dinesh Singh. "External Validation of the STONE Score." Annals of Emergency Medicine 67, no. 2 (February 2016): 301–2. http://dx.doi.org/10.1016/j.annemergmed.2015.11.011.

15

Lewis, R., and P. Postle. "CFD Validation for External Aerodynamics Part 2: Full Car Validation." NAFEMS International Journal of CFD Case Studies 4 (January 2004): 39–50. http://dx.doi.org/10.59972/8c5efufc.

Abstract:
This paper examines issues associated with the CFD modelling of a complete racing car in comparison with physical testing. In particular, the issue of how a CFD analysis may be invalid, simply because it does not replicate the set-up of the physical test sufficiently, is examined. Convergence options are also discussed.
16

Pinkhasov, Rubin, Alexandr M. Pinkhasov, Kristopher Attwood, Michall Abaev, and Willie Underwood. "External validation of prostatectomy incontinence nomogram (PIN)." Journal of Clinical Oncology 37, no. 7_suppl (March 1, 2019): 106. http://dx.doi.org/10.1200/jco.2019.37.7_suppl.106.

Abstract:
Background: Using the previously developed prostatectomy incontinence nomogram (PIN), we sought to externally validate the nomogram, which predicts the probability of incontinence at 6, 12, and 24 months after robot-assisted radical prostatectomy (RARP). Methods: Prospective data from 663 men with prostate cancer who underwent RARP from 2010 to 2014 at two comprehensive cancer centers and three large group practices were queried. The performance of the previously developed model was evaluated using calibration plots (predicted continence rates versus observed continence rates, with 95% CIs obtained using Jeffrey's prior method) and receiver operating characteristic (ROC) curves. Using the Expanded Prostate Cancer Index Composite (EPIC-50) Urinary Function questionnaire, perfect continence was defined as 0 pads, social continence as 1 or 2 pads, and incontinence as ≥ 3 pads used after RARP. Results: The 6-, 12-, and 24-month social continence rates were 77%, 88%, and 93%, respectively. As in the 6- and 12-month model development cohort, the external validation cohort showed modest predictability, with 6- and 12-month AUCs of 0.61 and 0.62, respectively. The 24-month AUC of 0.62 in the external validation cohort was worse than that reported in the development cohort (AUC 0.80). Conclusions: The externally validated prostatectomy incontinence nomogram is generalizable but has modest 6-, 12-, and 24-month predictability for risk of incontinence after RARP.
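A hedged sketch of the validation workflow described here, an ROC AUC for discrimination plus a predicted-versus-observed calibration comparison, using scikit-learn; the probabilities below are synthetic placeholders rather than study data.

```python
# Discrimination (AUC) and binned calibration for a probabilistic model.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(1)
pred_prob_continent = rng.uniform(0.4, 0.99, 300)          # model predictions
observed_continent = rng.binomial(1, pred_prob_continent)  # simulated outcomes

auc = roc_auc_score(observed_continent, pred_prob_continent)
obs_rate, pred_rate = calibration_curve(observed_continent,
                                        pred_prob_continent, n_bins=5)
print(f"AUC={auc:.2f}")
print("per-bin (predicted, observed):", list(zip(pred_rate.round(2),
                                                 obs_rate.round(2))))
```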
17

Lee, Sang-Wook, Jaewon Jang, Woo-Young Seo, Donghee Lee, and Sung-Hoon Kim. "Internal and External Validation of Machine Learning Models for Predicting Acute Kidney Injury Following Non-Cardiac Surgery Using Open Datasets." Journal of Personalized Medicine 14, no. 6 (May 30, 2024): 587. http://dx.doi.org/10.3390/jpm14060587.

Abstract:
This study developed and validated a machine learning model to accurately predict acute kidney injury (AKI) after non-cardiac surgery, aiming to improve patient outcomes by assessing its clinical feasibility and generalizability. We conducted a retrospective cohort study using data from 76,032 adults who underwent non-cardiac surgery at a single tertiary medical center between March 2019 and February 2021, and used data from 5512 patients from the VitalDB open dataset for external model validation. The predictive variables for model training consisted of demographic, preoperative laboratory, and intraoperative data, including calculated statistical values such as the minimum, maximum, and mean intraoperative blood pressure. When predicting postoperative AKI, our gradient boosting machine model incorporating all the variables achieved the best results, with AUROC values of 0.868 and 0.757 for the internal and external validations using the VitalDB dataset, respectively. The model using intraoperative data performed best in internal validation, while the model with preoperative data excelled in external validation. In this study, we developed a predictive model for postoperative AKI in adult patients undergoing non-cardiac surgery using preoperative and intraoperative data, and external validation demonstrated the efficacy of open datasets for generalization in medical artificial intelligence modeling research.
18

De Hundt, Marcella, Floortje Vlemmix, Marjolein Kok, Jan Van Der Steeg, Joke Bais, Ben Mol, and Joris Van Der Post. "External Validation of a Prediction Model for Successful External Cephalic Version." American Journal of Perinatology 29, no. 03 (August 1, 2011): 231–36. http://dx.doi.org/10.1055/s-0031-1285098.

19

Reps, Jenna M., Patrick Ryan, and P. R. Rijnbeek. "Investigating the impact of development and internal validation design when training prognostic models using a retrospective cohort in big US observational healthcare data." BMJ Open 11, no. 12 (December 2021): e050146. http://dx.doi.org/10.1136/bmjopen-2021-050146.

Abstract:
Objective: The internal validation of prediction models aims to quantify the generalisability of a model. We aim to determine the impact, if any, that the choice of development and internal validation design has on the internal performance bias and model generalisability in big data (n~500 000). Design: Retrospective cohort. Setting: Primary and secondary care; three US claims databases. Participants: 1 200 769 patients pharmaceutically treated for their first occurrence of depression. Methods: We investigated the impact of the development/validation design across 21 real-world prediction questions. Model discrimination and calibration were assessed. We trained LASSO logistic regression models using US claims data and internally validated the models using eight different designs: 'no test/validation set', 'test/validation set' and cross validation with 3-fold, 5-fold or 10-fold with and without a test set. We then externally validated each model in two new US claims databases. We estimated the internal validation bias per design by empirically comparing the differences between the estimated internal performance and external performance. Results: The differences between the models' internal estimated performances and external performances were largest for the 'no test/validation set' design. This indicates even with large data the 'no test/validation set' design causes models to overfit. The seven alternative designs included some validation process to select the hyperparameters and a fair testing process to estimate internal performance. These designs had similar internal performance estimates and performed similarly when externally validated in the two external databases. Conclusions: Even with big data, it is important to use some validation process to select the optimal hyperparameters and fairly assess internal validation using a test set or cross-validation.
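The contrast at the heart of this study, apparent performance with no test set versus a fairer cross-validated estimate, can be demonstrated in a few lines. The sketch below uses synthetic data and an L1-penalised logistic regression as a stand-in for the paper's LASSO models.

```python
# 'No test/validation set' (apparent) performance vs 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=50, random_state=0)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)

model.fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])  # optimistic
cv_auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"apparent AUC={apparent_auc:.3f}, 5-fold CV AUC={cv_auc:.3f}")
```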
20

Sutarto, Sutarto, Husaini Usman, and Amat Jaedun. "Impact-Based Training Evaluation Model (IBTEM) For School Supervisors in Indonesia." International Education Studies 9, no. 2 (January 25, 2016): 149. http://dx.doi.org/10.5539/ies.v9n2p149.

Abstract:
<p class="apa">This article represents a study aiming at developing: (1) an IBTEM which is capable to promote partnership between training providers and their client institutions, easy to understand, effective, efficient; and (2) an IBTEM implementation guide which is comprehensive, coherent, easy to understand, effective, and efficient. The method used in the study was Research and Development (R &amp; D) suggested by D. Gall, P. Gall, and Borg (2007) that was simplified into three phases: pre-development, development, and validation phases. The pre-development phase was used to review related theories, interview resource persons, and collect relevant documents. The result of this phase was an IBTEM draft. The development phase was used to organize a focus group discussion (FGD) attended by participants from the Yogyakarta Education Quality Assurance Institute, Regional Offices of Education, school supervisors, principals, and teachers. The result of this phase was a revised IBTEM draft. The validation phase consisted of internal and external validations. The internal validation was done through a Delphi technique. The team in the Delphi involved experts in educational evaluation, educational management, and vocational education. The result of the internal validation phase was a theoretical IBTEM that was then validated externally. The external validation was conducted by carrying out limited field testing of the theoretical IBTEM involving three institutions: the Center for Development and Empowerment for Arts Teachers, Center for Development and Empowerment for Mathematics Teachers, and the Yogyakarta Education Quality Assurance Institute. The external validation phase resulted in a final IBTEM. The study results confirm that: (1) the developed IBTEM is capable to promote partnership between training providers and their client institutions, easy to understand, effective, and efficient; and (2) the developed IBTEM implementation guide which is comprehensive, coherent, easy to understand, effective, and efficient.</p>
21

Wildi, Karin, Jasper Boeddinghaus, Thomas Nestelberger, Philip Haaf, Luca Koechlin, Pedro Ayala Lopez, Joan Walter, et al. "External validation of the clinical chemistry score." Clinical Biochemistry 91 (May 2021): 16–25. http://dx.doi.org/10.1016/j.clinbiochem.2021.02.006.

22

Martin-Neira, M., J. Kainulainen, and M. Suess. "SMOS In-Orbit External Calibration and Validation." PIERS Online 1, no. 5 (2005): 529–32. http://dx.doi.org/10.2529/piers050125093433.

23

Schaeffner, Elke S., Peter Martus, and Natalie Ebert. "External Validation of the Berlin Initiative Equations." American Journal of Kidney Diseases 64, no. 4 (October 2014): 658–59. http://dx.doi.org/10.1053/j.ajkd.2014.04.037.

24

Kotz, D., P. Nelemans, C. P. van Schayck, and G. J. Wesseling. "External validation of a COPD diagnostic questionnaire." European Respiratory Journal 31, no. 2 (February 1, 2008): 298–303. http://dx.doi.org/10.1183/09031936.00074307.

25

Zsólyom, A., G. Merész, L. Nagyjánosi, S. Nagyistók, B. Nagy, Z. Kaló, and Z. Vokó. "External Validation of the Syreon Diabetes Model." Value in Health 16, no. 7 (November 2013): A590. http://dx.doi.org/10.1016/j.jval.2013.08.1640.

26

Wełna, Marek, Barbara Adamik, Waldemar Goździk, and Andrzej Kübler. "External validation of the sepsis severity score." International Journal of Immunopathology and Pharmacology 34 (January 2020): 205873842093638. http://dx.doi.org/10.1177/2058738420936386.

Abstract:
Introduction: Sepsis is defined as a life-threatening organ dysfunction caused by a dysregulated host response to infection. Mortality rates are high, exceeding 50% in patients with septic shock. The sepsis severity score (SSS) was developed to determine the severity of sepsis and to serve as a prognostic model. The aim of this study was to externally validate the SSS model. Methods: Calibration and discrimination of the SSS were retrospectively evaluated using data from a single-center sepsis registry. Results: Data from 156 septic patients were recorded; 56% of them had septic shock, and 94% of patients required mechanical ventilation. The observed hospital mortality was 60.3%. The mean SSS value was 94.4 (95% CI 90.5–98.3). The SSS presented excellent discrimination, with an area under the receiver operating characteristic curve (AUC) of 0.806 (95% CI 0.734–0.866). The pairwise comparison of APACHE II (AUC = 0.789; 95% CI 0.715–0.851) with the SSS and of 1st-day SOFA (AUC = 0.75; 95% CI 0.673–0.817) with the SSS revealed no significant differences in discrimination between the models. The calibration of the SSS was good by the Hosmer-Lemeshow goodness-of-fit test (H = 9.59, P > 0.05). Analysis of the calibration curve, however, showed an absence of accurate predictions in the lower-risk deciles (2nd and 4th). Conclusion: The SSS demonstrated excellent discrimination. The calibration evaluation gave conflicting results: the H-L test indicated good calibration, while visual analysis of the calibration curve suggested the opposite. The SSS requires further evaluation before it can be safely recommended as an outcome prediction model.
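The Hosmer-Lemeshow test used above is not built into scipy or scikit-learn, so here is a compact implementation of the usual decile-based version; the simulated risks and outcomes are placeholders.

```python
# Decile-based Hosmer-Lemeshow goodness-of-fit test (minimal sketch).
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, n_groups=10):
    order = np.argsort(y_prob)                 # sort patients by predicted risk
    h_stat = 0.0
    for g in np.array_split(order, n_groups):  # roughly equal-sized deciles
        observed = y_true[g].sum()
        expected = y_prob[g].sum()
        mean_p = y_prob[g].mean()
        h_stat += (observed - expected) ** 2 / (len(g) * mean_p * (1 - mean_p))
    return h_stat, chi2.sf(h_stat, df=n_groups - 2)  # H statistic, p-value

rng = np.random.default_rng(2)
risk = rng.uniform(0.05, 0.95, 500)
outcome = rng.binomial(1, risk)  # well-calibrated by construction
h, p = hosmer_lemeshow(outcome, risk)
print(f"H = {h:.2f}, P = {p:.2f}")  # P > 0.05 suggests no calibration problem
```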
27

Bleeker, S. E., H. A. Moll, E. W. Steyerberg, A. R. T. Donders, G. Derksen-Lubsen, D. E. Grobbee, and K. G. M. Moons. "External validation is necessary in prediction research: A clinical example." Journal of Clinical Epidemiology 56, no. 9 (September 2003): 826–32. http://dx.doi.org/10.1016/s0895-4356(03)00207-5.

28

Collins, Gary S., Emmanuel O. Ogundimu, and Yannick Le Manach. "Assessing calibration in an external validation study." Spine Journal 15, no. 11 (November 2015): 2446–47. http://dx.doi.org/10.1016/j.spinee.2015.06.043.

29

Kanis, J. A., A. Oden, H. Johansson, and E. McCloskey. "Pitfalls in the external validation of FRAX." Osteoporosis International 23, no. 2 (November 26, 2011): 423–31. http://dx.doi.org/10.1007/s00198-011-1846-0.

30

Khadhouri, S., L. Orecchia, R. Banthia, P. Piazza, D. Mak, N. Pyrgidis, P. Narayan, et al. "External validation of the IDENTIFY risk calculator." European Urology 83 (February 2023): S866–S867. http://dx.doi.org/10.1016/s0302-2838(23)00654-1.

31

Wang, Wenjuan, Josline A. Otieno, Marie Eriksson, Charles D. Wolfe, Vasa Curcin, and Benjamin D. Bray. "Developing and externally validating a machine learning risk prediction model for 30-day mortality after stroke using national stroke registers in the UK and Sweden." BMJ Open 13, no. 11 (November 2023): e069811. http://dx.doi.org/10.1136/bmjopen-2022-069811.

Abstract:
Objectives: We aimed to develop and externally validate a generalisable risk prediction model for 30-day stroke mortality suitable for supporting quality improvement analytics in stroke care using large nationwide stroke registers in the UK and Sweden. Design: Registry-based cohort study. Setting: Stroke registries including the Sentinel Stroke National Audit Programme (SSNAP) in England, Wales and Northern Ireland (2013–2019) and the national Swedish stroke register (Riksstroke 2015–2020). Participants and methods: Data from SSNAP were used for developing and temporally validating the model, and data from Riksstroke were used for external validation. Models were developed with the variables available in both registries using logistic regression (LR), LR with elastic net and interaction terms and eXtreme Gradient Boosting (XGBoost). Performances were evaluated with discrimination, calibration and decision curves. Outcome measures: The primary outcome was all-cause 30-day in-hospital mortality after stroke. Results: In total, 488 497 patients who had a stroke with 12.4% 30-day in-hospital mortality were used for developing and temporally validating the model in the UK. A total of 128 360 patients who had a stroke with 10.8% 30-day in-hospital mortality and 13.1% all mortality were used for external validation in Sweden. In the SSNAP temporal validation set, the final XGBoost model achieved the highest area under the receiver operating characteristic curve (AUC) (0.852 (95% CI 0.848 to 0.855)) and was well calibrated. The performances on the external validation in Riksstroke were as good, with an AUC of 0.861 (95% CI 0.858 to 0.865) for in-hospital mortality. For Riksstroke, the models slightly overestimated the risk for in-hospital mortality, while they were better calibrated at the risk for all mortality. Conclusion: The risk prediction model was accurate and externally validated using high quality registry data. This is potentially suitable to be deployed as part of quality improvement analytics in stroke care to enable the fair comparison of stroke mortality outcomes across hospitals and health systems across countries.
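Of the three evaluation tools named in the methods, decision curves are the least commonly available in standard libraries; below is a minimal net-benefit sketch using the usual threshold-probability formula, with fabricated predictions.

```python
# Net benefit of acting on model predictions at a threshold probability pt.
import numpy as np

def net_benefit(y, p, pt):
    treat = p >= pt
    tp = np.sum(treat & (y == 1))  # true positives among treated
    fp = np.sum(treat & (y == 0))  # false positives among treated
    n = len(y)
    return tp / n - fp / n * pt / (1 - pt)

rng = np.random.default_rng(7)
pred = rng.uniform(0, 1, 1000)
outcome = rng.binomial(1, pred)
for pt in (0.05, 0.10, 0.20):
    print(f"pt={pt:.2f}: net benefit={net_benefit(outcome, pred, pt):.3f}")
```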
32

Meid, Andreas Daniel, Ana Isabel Gonzalez-Gonzalez, Truc Sophia Dinh, Jeanet Blom, Marjan van den Akker, Petra Elders, Ulrich Thiem, et al. "Predicting hospital admissions from individual patient data (IPD): an applied example to explore key elements driving external validity." BMJ Open 11, no. 8 (August 2021): e045572. http://dx.doi.org/10.1136/bmjopen-2020-045572.

Abstract:
Objective: To explore factors that potentially impact external validation performance while developing and validating a prognostic model for hospital admissions (HAs) in complex older general practice patients. Study design and setting: Using individual participant data from four cluster-randomised trials conducted in the Netherlands and Germany, we used logistic regression to develop a prognostic model to predict all-cause HAs within a 6-month follow-up period. A stratified intercept was used to account for heterogeneity in baseline risk between the studies. The model was validated both internally and by using internal-external cross-validation (IECV). Results: Prior HAs, physical components of the health-related quality of life comorbidity index, and medication-related variables were used in the final model. While achieving moderate discriminatory performance, internal bootstrap validation revealed a pronounced risk of overfitting. The results of the IECV, in which calibration was highly variable even after accounting for between-study heterogeneity, agreed with this finding. Heterogeneity was equally reflected in differing baseline risk, predictor effects and absolute risk predictions. Conclusions: Predictor effect heterogeneity and differing baseline risk can explain the limited external performance of HA prediction models. With such drivers known, model adjustments in external validation settings (eg, intercept recalibration, complete updating) can be applied more purposefully. Trial registration number: PROSPERO id CRD42018088129.
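Internal-external cross-validation (IECV), the design used in this study, amounts to holding out each contributing study in turn and refitting on the rest; a small sketch with synthetic study labels follows (the features and trial structure are invented).

```python
# Leave-one-study-out internal-external cross-validation (IECV) sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 1200
study = rng.integers(0, 4, n)  # four source trials (synthetic labels)
X = rng.normal(size=(n, 6))
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))

for held_out in np.unique(study):
    train, test = study != held_out, study == held_out
    model = LogisticRegression().fit(X[train], y[train])
    auc = roc_auc_score(y[test], model.predict_proba(X[test])[:, 1])
    print(f"study {held_out} held out: external AUC = {auc:.2f}")
```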
33

Wang, Shuanhu, Yakui Liu, Yi Shi, Jiajia Guan, Mulin Liu, and Wenbin Wang. "Development and external validation of a nomogram predicting overall survival after curative resection of colon cancer." Journal of International Medical Research 49, no. 5 (May 2021): 030006052110150. http://dx.doi.org/10.1177/03000605211015023.

Abstract:
Objective To develop and externally validate a prognostic nomogram to predict overall survival (OS) in patients with resectable colon cancer. Methods Data for 50,996 patients diagnosed with non-metastatic colon cancer were retrieved from the Surveillance, Epidemiology, and End Results (SEER) database. Patients were assigned randomly to the training set (n = 34,168) or validation set (n = 16,828). Independent prognostic factors were identified by multivariate Cox proportional hazards regression analysis and used to construct the nomogram. Harrell’s C-index and calibration plots were calculated using the SEER validation set. Additional external validation was performed using a Chinese dataset (n = 342). Results Harrell’s C-index of the nomogram for OS in the SEER validation set was 0.71, which was superior to that using the 7th edition of the American Joint Committee on Cancer TNM staging (0.59). Calibration plots showed consistency between actual observations and predicted 1-, 3-, and 5-year survival. Harrell’s C-index (0.72) and calibration plot showed excellent predictive accuracy in the external validation set. Conclusions We developed a nomogram to predict OS after curative resection for colon cancer. Validation using the SEER and external datasets revealed good discrimination and calibration. This nomogram may help predict individual survival in patients with colon cancer.
34

Zanier, Olivier, Matteo Zoli, Victor E. Staartjes, Federica Guaraldi, Sofia Asioli, Arianna Rustici, Valentino Marino Picciola, et al. "Machine learning-based clinical outcome prediction in surgery for acromegaly." Endocrine 75, no. 2 (October 12, 2021): 508–15. http://dx.doi.org/10.1007/s12020-021-02890-z.

Abstract:
Purpose: Biochemical remission (BR), gross total resection (GTR), and intraoperative cerebrospinal fluid (CSF) leaks are important metrics in transsphenoidal surgery for acromegaly, and prediction of their likelihood using machine learning would be clinically advantageous. We aim to develop and externally validate clinical prediction models for outcomes after transsphenoidal surgery for acromegaly. Methods: Using data from two registries, we develop and externally validate machine learning models for GTR, BR, and CSF leaks after endoscopic transsphenoidal surgery in acromegalic patients. For the model development a registry from Bologna, Italy was used. External validation was then performed using data from Zurich, Switzerland. Gender, age, prior surgery, as well as Hardy and Knosp classification were used as input features. Discrimination and calibration metrics were assessed. Results: The derivation cohort consisted of 307 patients (43.3% male; mean [SD] age, 47.2 [12.7] years). GTR was achieved in 226 (73.6%) and BR in 245 (79.8%) patients. In the external validation cohort with 46 patients, 31 (75.6%) achieved GTR and 31 (77.5%) achieved BR. Area under the curve (AUC) at external validation was 0.75 (95% confidence interval: 0.59–0.88) for GTR, 0.63 (0.40–0.82) for BR, as well as 0.77 (0.62–0.91) for intraoperative CSF leaks. While prior surgery was the most important variable for prediction of GTR, age and Hardy grading contributed most to the predictions of BR and CSF leaks, respectively. Conclusions: Gross total resection, biochemical remission, and CSF leaks remain hard to predict, but machine learning offers potential in helping to tailor surgical therapy. We demonstrate the feasibility of developing and externally validating clinical prediction models for these outcomes after surgery for acromegaly and lay the groundwork for development of a multicenter model with more robust generalization.
35

Feng, Qi, Margaret T. May, Suzanne Ingle, Ming Lu, Zuyao Yang, and Jinling Tang. "Prognostic Models for Predicting Overall Survival in Patients with Primary Gastric Cancer: A Systematic Review." BioMed Research International 2019 (September 18, 2019): 1–10. http://dx.doi.org/10.1155/2019/5634598.

Abstract:
Background. This study was designed to review the methodology and reporting of gastric cancer prognostic models and identify potential problems in model development. Methods. This systematic review was conducted following the CHARMS checklist. MEDLINE and EMBASE were searched. Information on patient characteristics, methodological details, and models’ performance was extracted. Descriptive statistics was used to summarize the methodological and reporting quality. Results. In total, 101 model developments and 32 external validations were included. The median (range) of training sample size, number of death, and number of final predictors were 360 (29 to 15320), 193 (14 to 9560), and 5 (2 to 53), respectively. Ninety-one models were developed from routine clinical data. Statistical assumptions were reported to be checked in only nine models. Most model developments (94/101) used complete-case analysis. Discrimination and calibration were not reported in 33 and 55 models, respectively. The majority of models (81/101) have never been externally validated. None of the models have been evaluated regarding clinical impact. Conclusions. Many prognostic models have been developed, but their usefulness in clinical practice remains uncertain due to methodological shortcomings, insufficient reporting, and lack of external validation and impact studies. Impact. Future research should improve methodological and reporting quality and emphasize more on external validation and impact assessment.
36

Harish, Keerthi, Ben Zhang, Peter Stella, Kevin Hauck, Marwa M. Moussa, Nicole M. Adler, Leora I. Horwitz, and Yindalon Aphinyanaphongs. "Validation of parsimonious prognostic models for patients infected with COVID-19." BMJ Health & Care Informatics 28, no. 1 (September 2021): e100267. http://dx.doi.org/10.1136/bmjhci-2020-100267.

Abstract:
Objectives: Predictive studies play important roles in the development of models informing care for patients with COVID-19. Our concern is that studies producing ill-performing models may lead to inappropriate clinical decision-making. Thus, our objective is to summarise and characterise performance of prognostic models for COVID-19 on external data. Methods: We performed a validation of parsimonious prognostic models for patients with COVID-19 from a literature search for published and preprint articles. Ten models meeting inclusion criteria were either (a) externally validated with our data against the model variables and weights or (b) rebuilt using original features if no weights were provided. Nine studies had internally or externally validated models on cohorts of between 18 and 320 inpatients with COVID-19. One model used cross-validation. Our external validation cohort consisted of 4444 patients with COVID-19 hospitalised between 1 March and 27 May 2020. Results: Most models failed validation when applied to our institution's data. Included studies reported an average validation area under the receiver-operator curve (AUROC) of 0.828. Models applied with reported features averaged an AUROC of 0.66 when validated on our data. Models rebuilt with the same features averaged an AUROC of 0.755 when validated on our data. In both cases, models did not validate against their studies' reported AUROC values. Discussion: Published and preprint prognostic models for patients infected with COVID-19 performed substantially worse when applied to external data. Further inquiry is required to elucidate mechanisms underlying performance deviations. Conclusions: Clinicians should employ caution when applying models for clinical prediction without careful validation on local data.
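Option (a) in the methods, validating a published model by applying its reported variables and weights to local data, looks roughly like the sketch below; the intercept, coefficients, and patient features are hypothetical and not taken from any of the ten models.

```python
# Applying a published logistic model's reported weights to local data.
import numpy as np
from sklearn.metrics import roc_auc_score

published_intercept = -4.2                     # hypothetical reported values
published_coefs = np.array([0.05, 0.8, 0.02])  # age, oxygen support, CRP

rng = np.random.default_rng(4)
X_local = np.column_stack([
    rng.normal(65, 12, 400),   # age, years
    rng.integers(0, 2, 400),   # oxygen support at admission (0/1)
    rng.normal(60, 40, 400),   # C reactive protein, mg/L
])
pred_risk = 1 / (1 + np.exp(-(published_intercept + X_local @ published_coefs)))
y_local = rng.binomial(1, np.clip(pred_risk, 0.01, 0.99))  # simulated outcomes

print(f"external AUROC: {roc_auc_score(y_local, pred_risk):.2f}")
```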
37

Tong, Jian-Bo, Jia Chang, Shu-Ling Liu, and Min Bai. "A quantitative structure–activity relationship (QSAR) study of peptide drugs based on a new descriptor of amino acids." Journal of the Serbian Chemical Society 80, no. 3 (2015): 343–53. http://dx.doi.org/10.2298/jsc140604069t.

Abstract:
The quantitative structure-activity relationship (QSAR) approach is used for finding the relationship between molecular structures and the activity of peptide drugs. In this work, stepwise multiple regression was employed to select an optimal subset of descriptors that have a significant contribution to the drug activity of 21 oxytocin analogues, 48 bitter-tasting thresholds, and 58 angiotensin-converting enzyme inhibitors. A new set of descriptors, SVWGM, was used for the prediction of the drug activity of peptide drugs and then to build the model by the partial least squares method; its estimation stability and generalization ability were strictly analyzed by both internal and external validation, using the cross-validation correlation coefficient, the correlation coefficient, and the correlation coefficient of external validation.
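A minimal sketch of the modelling pipeline the abstract describes (descriptor matrix in, partial least squares model out, judged by a cross-validated correlation measure) follows; the random descriptors merely stand in for SVWGM, which is not reproduced here.

```python
# Partial least squares QSAR with a cross-validated Q^2 (minimal sketch).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
X = rng.normal(size=(48, 15))                      # 48 peptides x 15 descriptors
y = X[:, :3].sum(axis=1) + rng.normal(0, 0.3, 48)  # synthetic activity

pls = PLSRegression(n_components=3)
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()  # out-of-fold predictions
q2 = 1 - ((y - y_cv) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"Q^2 = {q2:.2f}")  # cross-validated correlation measure
```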
38

Yarbrough, Chester K., Jacob K. Greenberg, Matthew D. Smyth, Jeffrey R. Leonard, Tae Sung Park, and David D. Limbrick. "External validation of the Chicago Chiari Outcome Scale." Journal of Neurosurgery: Pediatrics 13, no. 6 (June 2014): 679–84. http://dx.doi.org/10.3171/2014.3.peds13503.

Abstract:
Object Historically, assessment of clinical outcomes following surgical management of Chiari malformation Type I (CM-I) has been challenging due to the lack of a validated instrument for widespread use. The Chicago Chiari Outcome Scale (CCOS) is a novel system intended to provide a less subjective evaluation of outcomes for patients with CM-I. The goal of this study was to externally validate the performance of the CCOS. Methods Patients undergoing surgery for CM-I between 2001 and 2012 were reviewed (n = 292). Inclusion criteria for this study were as follows: 1) patients receiving primary posterior fossa decompression; 2) at least 5.5 months of postoperative clinical follow-up; and 3) patients ≤ 18 years of age at the time of surgery. Outcomes were evaluated using the CCOS, along with a “gestalt” impression of whether patients experienced significant improvement after surgery. A subgroup of 118 consecutive patients undergoing operations between 2008 and 2010 was selected for analysis of interrater reliability (n = 73 meeting inclusion/exclusion criteria). In this subgroup, gestalt and CCOS scores were independently determined by 2 reviewers, and interrater reliability was assessed using the intraclass correlation coefficient (ICC) and kappa (κ) statistic. Results The median CCOS score was 14, and 67% of patients had improved gestalt scores after surgery. Overall, the CCOS was effective at identifying patients with improved outcome after surgery (area under curve = 0.951). The interrater reliability of the CCOS (ICC = 0.71) was high, although the reliability of the component scores ranged from poor to good (ICC 0.23–0.89). The functionality subscore demonstrated a low ICC and did not add to the predictive ability of the logistic regression model (likelihood ratio = 1.8, p = 0.18). When analyzing gestalt outcome, there was moderate agreement between raters (κ = 0.56). Conclusions In this external validation study, the CCOS was effective at identifying patients with improved outcomes and proved more reliable than the authors' gestalt impression of outcome. However, certain component subscores (functionality and nonpain symptoms) were found to be less reliable, and may benefit from further definition in score assignment. In particular, the functionality subscore does not add to the predictive ability of the CCOS, and may be unnecessary. Overall, the authors found the CCOS to be an improvement over the previously used assessment of outcome at their institution.
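For the interrater-reliability side of the study, Cohen's kappa for two raters' binary gestalt judgments can be computed directly with scikit-learn; the ratings below are fabricated examples, not the authors' data.

```python
# Cohen's kappa: chance-corrected agreement between two raters.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]  # improved after surgery? (reviewer A)
rater_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]  # improved after surgery? (reviewer B)
print(f"kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")
```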
39

Hara, Hironori, Hiroki Shiomi, David van Klaveren, David M. Kent, Ewout W. Steyerberg, Scot Garg, Yoshinobu Onuma, Takeshi Kimura, and Patrick W. Serruys. "External Validation of the SYNTAX Score II 2020." Journal of the American College of Cardiology 78, no. 12 (September 2021): 1227–38. http://dx.doi.org/10.1016/j.jacc.2021.07.027.

40

Wu, Yilin, Huei-Chung Huang, and Li-Xuan Qin. "Making External Validation Valid for Molecular Classifier Development." JCO Precision Oncology, no. 5 (August 2021): 1250–58. http://dx.doi.org/10.1200/po.21.00103.

Abstract:
PURPOSE Accurate assessment of a molecular classifier that guides patient care is of paramount importance in precision oncology. Recent years have seen an increasing use of external validation for such assessment. However, little is known about how it is affected by ubiquitous unwanted variations in test data because of disparate experimental handling and by the use of data normalization for alleviating such variations. METHODS In this paper, we studied these issues using two microarray data sets for the same set of tumor samples and additional data simulated by resampling under various levels of signal-to-noise ratio and different designs for array-to-sample allocation. RESULTS We showed that (1) unwanted variations can lead to biased classifier assessment and (2) data normalization mitigates the bias to varying extents depending on the specific method used. In particular, frozen normalization methods for test data outperform their conventional forms in terms of both reducing the bias in accuracy estimation and increasing robustness to handling effects. We make available our benchmarking tool as an R package on GitHub for performing such evaluation on additional methods for normalization and classification. CONCLUSION Our findings thus highlight the importance of proper test-data normalization for valid assessment by external validation and call for caution on the choice of normalization method for molecular classifier development.
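The 'frozen' normalization idea discussed here can be illustrated loosely with a z-score analogue: parameters are estimated on the training arrays and reused unchanged on the test arrays, so test-batch handling effects cannot redefine the reference. This is a simplified stand-in with simulated expression values, not one of the methods benchmarked in the paper.

```python
# Frozen z-score normalization: training parameters reused on test data.
import numpy as np

rng = np.random.default_rng(6)
train_expr = rng.normal(8.0, 2.0, size=(100, 500))  # samples x genes
test_expr = rng.normal(8.6, 2.3, size=(40, 500))    # shifted by handling effects

mu, sd = train_expr.mean(axis=0), train_expr.std(axis=0)  # frozen on training
train_norm = (train_expr - mu) / sd
test_norm = (test_expr - mu) / sd  # no re-estimation on the test batch
print(np.round(test_norm.mean(), 2))  # residual shift a classifier must absorb
```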
41

Johnson, Alistair, Leo Anthony Celi, Jesse Raffa, Tom Pollard, and David Stone. "18: EXTERNAL VALIDATION OF THE SEPSIS-3 GUIDELINES." Critical Care Medicine 44, no. 12 (December 2016): 91. http://dx.doi.org/10.1097/01.ccm.0000508736.93826.b5.

42

Biancari, Fausto, Tuomas Tauriainen, Eeva-Maija Kinnunen, and Juhani Airaksinen. "External Validation of the WILL-BLEED Risk Score." Thrombosis and Haemostasis 118, no. 02 (2018): 433–34. http://dx.doi.org/10.1160/th17-10-0702.

43

Fischer, C. M. "External Validation of the San Francisco Syncope Rule." Academic Emergency Medicine 12, Supplement 1 (May 1, 2005): 127. http://dx.doi.org/10.1197/j.aem.2005.03.357.

44

D’Arena, Giovanni, Candida Vitale, Marta Coscia, Fiorella D’Auria, Silvia Bellesi, Giuseppe Topini, Valentina Panichi, et al. "External validation of the accuracy of ‘CLLflow score’." Journal of Investigative Medicine 66, no. 7 (September 14, 2018): e6-e6. http://dx.doi.org/10.1136/jim-2018-000832.

45

Awan, Shaheen N., Nancy Pearl Solomon, Leah B. Helou, and Alexander Stojadinovic. "Spectral-Cepstral Estimation of Dysphonia Severity: External Validation." Annals of Otology, Rhinology & Laryngology 122, no. 1 (January 2013): 40–48. http://dx.doi.org/10.1177/000348941312200108.

46

Cosgriff, T., A. M. Kelly, and D. Kerr. "External Validation of the San Francisco Syncope Rule." Academic Emergency Medicine 14, no. 5 Supplement 1 (May 1, 2007): S35. http://dx.doi.org/10.1197/j.aem.2007.03.786.

47

Bodda, Saran Srikanth, Abhinav Gupta, and Nam Dinh. "Risk informed validation framework for external flooding scenario." Nuclear Engineering and Design 356 (January 2020): 110377. http://dx.doi.org/10.1016/j.nucengdes.2019.110377.

48

Cherla, Deepa V., Maya L. Moses, Krislynn M. Mueck, Craig Hannon, Tien C. Ko, Lillian S. Kao, and Mike K. Liang. "External Validation of the HERNIAscore: An Observational Study." Journal of the American College of Surgeons 225, no. 3 (September 2017): 428–34. http://dx.doi.org/10.1016/j.jamcollsurg.2017.05.010.

49

Staartjes, Victor E., Carlo Serra, Matteo Zoli, Diego Mazzatenta, Fabio Pozzi, Davide Locatelli, Elena D’Avella, Domenico Solari, Luigi Maria Cavallo, and Luca Regli. "Multicenter external validation of the Zurich Pituitary Score." Acta Neurochirurgica 162, no. 6 (March 14, 2020): 1287–95. http://dx.doi.org/10.1007/s00701-020-04286-w.

50

Noureldin, Yasser, Mohamed Elkoushy, and Sero Andonian. "External validation of the S.T.O.N.E. nephrolithometry scoring system." Canadian Urological Association Journal 9, no. 5-6 (June 15, 2015): 190. http://dx.doi.org/10.5489/cuaj.2652.
