
Journal articles on the topic 'Model generalizability'



Consult the top 50 journal articles for your research on the topic 'Model generalizability.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Pitt, Mark A., Woojae Kim, and In Jae Myung. "Flexibility versus generalizability in model selection." Psychonomic Bulletin & Review 10, no. 1 (2003): 29–44. http://dx.doi.org/10.3758/bf03196467.

2

Liu, Charles C., and Murray Aitkin. "Bayes factors: Prior sensitivity and model generalizability." Journal of Mathematical Psychology 52, no. 6 (2008): 362–75. http://dx.doi.org/10.1016/j.jmp.2008.03.002.

3

Sen, Tarun K., Parviz Ghandforoush, and Charles T. Stivason. "Improving prediction of neural networks: a study of two financial prediction tasks." Journal of Applied Mathematics and Decision Sciences 8, no. 4 (2004): 219–33. http://dx.doi.org/10.1155/s1173912604000148.

Abstract:
Neural networks are excellent mapping tools for complex financial data. Their mapping capabilities, however, do not always result in good generalizability for financial prediction models. Increasing the number of nodes and hidden layers in a neural network model produces better mapping of the data since the number of parameters available to the model increases. This is detrimental to the generalizability of the model since the model memorizes idiosyncratic patterns in the data. A neural network model can be expected to be more generalizable if the model architecture is made less complex by using fewer input nodes. In this study we simplify the neural network by eliminating input nodes that have the least contribution to the prediction of a desired outcome. We also provide a theoretical relationship of the sensitivity of output variables to the input variables under certain conditions. This research initiates an effort in identifying methods that would improve the generalizability of neural networks in financial prediction tasks by using mergers and bankruptcy models. The result indicates that incorporating more variables that appear relevant in a model does not necessarily improve prediction performance.
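As a loose illustration of the pruning idea in the abstract above — ranking candidate inputs by their contribution and dropping the weakest before training — here is a minimal sketch. It is not the authors' procedure; the ranking criterion (plain correlation with the target) is an assumption chosen for simplicity.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def screen_inputs(columns, target, keep=2):
    """Rank candidate input columns by |correlation| with the target and
    keep only the top `keep` — fewer input nodes, a simpler network."""
    ranked = sorted(columns,
                    key=lambda name: abs(pearson(columns[name], target)),
                    reverse=True)
    return ranked[:keep]
```

A network trained only on the surviving columns has fewer parameters, which is the lever on generalizability the abstract describes.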
4

Song, Q. Chelsea, Chen Tang, and Serena Wee. "Making Sense of Model Generalizability: A Tutorial on Cross-Validation in R and Shiny." Advances in Methods and Practices in Psychological Science 4, no. 1 (2021): 251524592094706. http://dx.doi.org/10.1177/2515245920947067.

Abstract:
Model generalizability describes how well the findings from a sample are applicable to other samples in the population. In this Tutorial, we explain model generalizability through the statistical concept of model overfitting and its outcome (i.e., validity shrinkage in new samples), and we use a Shiny app to simulate and visualize how model generalizability is influenced by three factors: model complexity, sample size, and effect size. We then discuss cross-validation as an approach for evaluating model generalizability and provide guidelines for implementing this approach. To help researchers understand how to apply cross-validation to their own research, we walk through an example, accompanied by step-by-step illustrations in R. This Tutorial is expected to help readers develop the basic knowledge and skills to use cross-validation to evaluate model generalizability in their research and practice.
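The tutorial itself works in R and Shiny; as a language-neutral sketch of the same idea (hypothetical data and a one-predictor least-squares model, not the authors' example), k-fold cross-validation compares in-sample fit with held-out fit to expose validity shrinkage:

```python
import random
import statistics

def fit_ols(xs, ys):
    """Fit y = a + b*x by ordinary least squares."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def r_squared(a, b, xs, ys):
    """Proportion of variance in ys explained by the line a + b*x."""
    my = statistics.mean(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

def k_fold_cv(xs, ys, k=5, seed=0):
    """Average held-out R^2 across k folds: each fold is scored by a model
    that never saw it, so the result estimates out-of-sample performance."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        a, b = fit_ols([xs[i] for i in train], [ys[i] for i in train])
        scores.append(r_squared(a, b, [xs[i] for i in fold],
                                [ys[i] for i in fold]))
    return statistics.mean(scores)
```

If the model overfits, the cross-validated R² falls visibly below the in-sample R²; with a well-specified model and enough data the two stay close.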
5

Jarjoura, David, Larry Early, and Voula Androulakakis. "A Multivariate Generalizability Model for Clinical Skills Assessments." Educational and Psychological Measurement 64, no. 1 (2004): 22–39. http://dx.doi.org/10.1177/0013164403258466.

6

Zhang, Jianrong, Louis Fok, Yueming Zhao, and Zhiheng Xu. "Generalizability of COVID-19 Mortality Risk Score Model." American Journal of Preventive Medicine 59, no. 6 (2020): e249–e250. http://dx.doi.org/10.1016/j.amepre.2020.07.021.

7

Arntz, Arnoud, and Marcel A. van den Hout. "Generalizability of the match/mismatch model of fear." Behaviour Research and Therapy 26, no. 3 (1988): 207–23. http://dx.doi.org/10.1016/0005-7967(88)90002-2.

8

Stewart, Kent, Christopher G. Pretty, Felicity Thomas, et al. "Generalizability of a Nonlinear Model-based Glycemic Controller." IFAC-PapersOnLine 49, no. 5 (2016): 212–17. http://dx.doi.org/10.1016/j.ifacol.2016.07.115.

9

Forster, Malcolm R. "Key Concepts in Model Selection: Performance and Generalizability." Journal of Mathematical Psychology 44, no. 1 (2000): 205–31. http://dx.doi.org/10.1006/jmps.1999.1284.

10

Chekroud, Adam M., Matt Hawrilenko, Hieronimus Loho, et al. "Illusory generalizability of clinical prediction models." Science 383, no. 6679 (2024): 164–67. http://dx.doi.org/10.1126/science.adg8538.

Abstract:
It is widely hoped that statistical models can improve decision-making related to medical treatments. Because of the cost and scarcity of medical outcomes data, this hope is typically based on investigators observing a model’s success in one or two datasets or clinical contexts. We scrutinized this optimism by examining how well a machine learning model performed across several independent clinical trials of antipsychotic medication for schizophrenia. Models predicted patient outcomes with high accuracy within the trial in which the model was developed but performed no better than chance when applied out-of-sample. Pooling data across trials to predict outcomes in the trial left out did not improve predictions. These results suggest that models predicting treatment outcomes in schizophrenia are highly context-dependent and may have limited generalizability.
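The paper's evaluation protocol — fit within one trial, then test on trials the model has never seen, including a pooled leave-one-trial-out variant — can be sketched generically. Here `fit` and `score` are hypothetical callables standing in for whatever model and metric are used, not the paper's actual machinery:

```python
def leave_one_trial_out(datasets, fit, score):
    """For each trial, fit a model on the pooled remaining trials and
    score it on the held-out trial — an out-of-sample test."""
    results = {}
    for name in datasets:
        pooled = [row for other, rows in datasets.items()
                  if other != name for row in rows]
        model = fit(pooled)
        results[name] = score(model, datasets[name])
    return results
```

A model whose held-out scores collapse to chance while its within-trial scores stay high exhibits exactly the context dependence the abstract reports.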
11

Moon, Jessica B., Theodore H. Dewitt, Melissa N. Errend, et al. "Model application niche analysis: assessing the transferability and generalizability of ecological models." Ecosphere 8, no. 10 (2017): e01974. http://dx.doi.org/10.1002/ecs2.1974.

12

Riedy, Samantha M., Desta Fekedulegn, Michael Andrew, Bryan Vila, Drew Dawson, and John Violanti. "Generalizability of a biomathematical model of fatigue’s sleep predictions." Chronobiology International 37, no. 4 (2020): 564–72. http://dx.doi.org/10.1080/07420528.2020.1746798.

13

Grice, John Stephen, and Robert W. Ingram. "Tests of the generalizability of Altman's bankruptcy prediction model." Journal of Business Research 54, no. 1 (2001): 53–61. http://dx.doi.org/10.1016/s0148-2963(00)00126-0.

14

Vispoel, Walter P., Hyeryung Lee, Tingting Chen, and Hyeri Hong. "Extending Applications of Generalizability Theory-Based Bifactor Model Designs." Psych 5, no. 2 (2023): 545–75. http://dx.doi.org/10.3390/psych5020036.

Abstract:
In recent years, researchers have described how to analyze generalizability theory (GT) based univariate, multivariate, and bifactor designs using structural equation models. However, within GT studies of bifactor models, variance components have been limited to those reflecting relative differences in scores for norm-referencing purposes, with only limited guidance provided for estimating key indices when making changes to measurement procedures. In this article, we demonstrate how to derive variance components for multi-facet GT-based bifactor model designs that represent both relative and absolute differences in scores for norm- or criterion-referencing purposes using scores from selected scales within the recently expanded form of the Big Five Inventory (BFI-2). We further develop and apply prophecy formulas for determining how changes in numbers of items, numbers of occasions, and universes of generalization affect a wide variety of indices instrumental in determining the best ways to change measurement procedures for specific purposes. These indices include coefficients representing score generalizability and dependability; scale viability and added value; and proportions of observed score variance attributable to general factor effects, group factor effects, and individual sources of measurement error. To enable readers to apply these techniques, we provide detailed formulas, code in R, and sample data for conducting all demonstrated analyses within this article.
15

Jiang, Zhehan, Kevin Walker, Dexin Shi, and Jian Cao. "Improving generalizability coefficient estimate accuracy: A way to incorporate auxiliary information." Methodological Innovations 11, no. 2 (2018): 205979911879139. http://dx.doi.org/10.1177/2059799118791397.

Abstract:
Initially proposed by Marcoulides and further expanded by Raykov and Marcoulides, a structural equation modeling approach can be used in generalizability theory estimation. This article examines the utility of incorporating auxiliary variables into the structural equation modeling approach when missing data is present. In particular, the authors assert that by adapting a saturated correlates model strategy to structural equation modeling generalizability theory models, one can reduce any biased effects caused by missingness. Traditional approaches such as an analysis of variance do not possess such a feature. This article provides detailed instructions for adding auxiliary variables into a structural equation modeling generalizability theory model, demonstrates the corresponding benefits of bias reduction in generalizability coefficient estimate via simulations, and discusses issues relevant to the proposed approach.
16

Chakrabarti, Bhismadev. "Eyes, amygdala, and other models of face processing: Questions for the SIMS model." Behavioral and Brain Sciences 33, no. 6 (2010): 440–41. http://dx.doi.org/10.1017/s0140525x10001482.

Abstract:
This commentary raises general questions about the parsimony and generalizability of the SIMS model, before interrogating the specific roles that the amygdala and eye contact play in it. Additionally, it situates the SIMS model alongside another model of facial expression processing, with a view to incorporating individual differences in emotion perception.
17

Liu, Zoey, and Emily Prud’hommeaux. "Data-driven Model Generalizability in Crosslinguistic Low-resource Morphological Segmentation." Transactions of the Association for Computational Linguistics 10 (2022): 393–413. http://dx.doi.org/10.1162/tacl_a_00467.

Abstract:
Common designs of model evaluation typically focus on monolingual settings, where different models are compared according to their performance on a single data set that is assumed to be representative of all possible data for the task at hand. While this may be reasonable for a large data set, this assumption is difficult to maintain in low-resource scenarios, where artifacts of the data collection can yield data sets that are outliers, potentially making conclusions about model performance coincidental. To address these concerns, we investigate model generalizability in crosslinguistic low-resource scenarios. Using morphological segmentation as the test case, we compare three broad classes of models with different parameterizations, taking data from 11 languages across 6 language families. In each experimental setting, we evaluate all models on a first data set, then examine their performance consistency when introducing new randomly sampled data sets with the same size and when applying the trained models to unseen test sets of varying sizes. The results demonstrate that the extent of model generalization depends on the characteristics of the data set, and does not necessarily rely heavily on the data set size. Among the characteristics that we studied, the ratio of morpheme overlap and that of the average number of morphemes per word between the training and test sets are the two most prominent factors. Our findings suggest that future work should adopt random sampling to construct data sets with different sizes in order to make more responsible claims about model evaluation.
18

Piedmont, Ralph L., and Joon-Ho Chae. "Cross-Cultural Generalizability of the Five-Factor Model of Personality." Journal of Cross-Cultural Psychology 28, no. 2 (1997): 131–55. http://dx.doi.org/10.1177/0022022197282001.

19

Hokororo, Silver J., Ernest Kitindi, and Francis Michael. "ACADEMIC STAFF JOB EMBEDDEDNESS: MODEL DIMENSIONALITY AND VALIDATION IN TANZANIA'S UNIVERSITIES." International Journal of Research - Granthaalayah 6, no. 9 (2018): 278–90. https://doi.org/10.5281/zenodo.1443443.

Abstract:
Universities in Tanzania, as many others in Africa and across the globe, are faced with the challenge of retaining their academic staff. This study examined the dimensionality and generalizability of Job Embeddedness Theory, a promising perspective for understanding employee retention, in the context of academic staff in Tanzania’s universities. A survey of 314 members of academic staff from 2 public universities and 3 private universities was conducted, and Exploratory Factor Analysis (EFA) and split-sample cross-validation were used to determine, respectively, the appropriate dimensionality and the generalizability of the Job Embeddedness Model in the context of the study. Results indicated that job embeddedness in the context of academic staff in Tanzania’s universities is a seven-factor model. The results also indicate that seven variables out of 30 in the model were not stable, hence compromising the generalizability of the model in the context of the study. It was recommended that, since Job Embeddedness Theory is a developing perspective, the volatile variables should be considered for revision or deletion in future studies, before a seven-factor Job Embeddedness model is accepted as generalizable to the larger population of academic staff in Tanzania’s universities.
20

Qiu, Xiangyun. "Sequence similarity governs generalizability of de novo deep learning models for RNA secondary structure prediction." PLOS Computational Biology 19, no. 4 (2023): e1011047. http://dx.doi.org/10.1371/journal.pcbi.1011047.

Abstract:
Making no use of physical laws or co-evolutionary information, de novo deep learning (DL) models for RNA secondary structure prediction have achieved far superior performances than traditional algorithms. However, their statistical underpinning raises the crucial question of generalizability. We present a quantitative study of the performance and generalizability of a series of de novo DL models, with a minimal two-module architecture and no post-processing, under varied similarities between seen and unseen sequences. Our models demonstrate excellent expressive capacities and outperform existing methods on common benchmark datasets. However, model generalizability, i.e., the performance gap between the seen and unseen sets, degrades rapidly as the sequence similarity decreases. The same trends are observed from several recent DL and machine learning models. And an inverse correlation between performance and generalizability is revealed collectively across all learning-based models with wide-ranging architectures and sizes. We further quantitate how generalizability depends on sequence and structure identity scores via pairwise alignment, providing unique quantitative insights into the limitations of statistical learning. Generalizability thus poses a major hurdle for deploying de novo DL models in practice and various pathways for future advances are discussed.
21

Hokororo, Silver J., Ernest Kitindi, and Francis Michael. "ACADEMIC STAFF JOB EMBEDDEDNESS: MODEL DIMENSIONALITY AND VALIDATION IN TANZANIA’S UNIVERSITIES." International Journal of Research -GRANTHAALAYAH 6, no. 9 (2018): 278–90. http://dx.doi.org/10.29121/granthaalayah.v6.i9.2018.1232.

Abstract:
Universities in Tanzania, as many others in Africa and across the globe, are faced with the challenge of retaining their academic staff. This study examined the dimensionality and generalizability of Job Embeddedness Theory, a promising perspective for understanding employee retention, in the context of academic staff in Tanzania’s universities. A survey of 314 members of academic staff from 2 public universities and 3 private universities was conducted, and Exploratory Factor Analysis (EFA) and split-sample cross-validation were used to determine, respectively, the appropriate dimensionality and the generalizability of the Job Embeddedness Model in the context of the study. Results indicated that job embeddedness in the context of academic staff in Tanzania’s universities is a seven-factor model. The results also indicate that seven variables out of 30 in the model were not stable, hence compromising the generalizability of the model in the context of the study. It was recommended that, since Job Embeddedness Theory is a developing perspective, the volatile variables should be considered for revision or deletion in future studies, before a seven-factor Job Embeddedness model is accepted as generalizable to the larger population of academic staff in Tanzania’s universities.
22

Liu, Zhexiong, Licheng Liu, Yiqun Xie, Zhenong Jin, and Xiaowei Jia. "Task-Adaptive Meta-Learning Framework for Advancing Spatial Generalizability." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 14365–73. http://dx.doi.org/10.1609/aaai.v37i12.26680.

Abstract:
Spatio-temporal machine learning is critically needed for a variety of societal applications, such as agricultural monitoring, hydrological forecast, and traffic management. These applications greatly rely on regional features that characterize spatial and temporal differences. However, spatio-temporal data often exhibit complex patterns and significant data variability across different locations. The labels in many real-world applications can also be limited, which makes it difficult to separately train independent models for different locations. Although meta learning has shown promise in model adaptation with small samples, existing meta learning methods remain limited in handling a large number of heterogeneous tasks, e.g., a large number of locations with varying data patterns. To bridge the gap, we propose task-adaptive formulations and a model-agnostic meta-learning framework that transforms regionally heterogeneous data into location-sensitive meta tasks. We conduct task adaptation following an easy-to-hard task hierarchy in which different meta models are adapted to tasks of different difficulty levels. One major advantage of our proposed method is that it improves the model adaptation to a large number of heterogeneous tasks. It also enhances the model generalization by automatically adapting the meta model of the corresponding difficulty level to any new tasks. We demonstrate the superiority of our proposed framework over a diverse set of baselines and state-of-the-art meta-learning frameworks. Our extensive experiments on real crop yield data show the effectiveness of the proposed method in handling spatial-related heterogeneous tasks in real societal applications.
23

MINOWA, Yasushi, Norifumi SUZUKI, and Kazuhiro TANAKA. "Generalizability and Accuracy of Site Index Estimation Model with Ensemble Learning." Japanese Journal of Forest Planning 42, no. 1 (2009): 53–67. http://dx.doi.org/10.20659/jjfp.42.1_53.

24

Yu, Caizheng, Wei Liu, Wengang Li, et al. "Author Response to “Generalizability of COVID-19 Mortality Risk Score Model”." American Journal of Preventive Medicine 59, no. 6 (2020): e251. http://dx.doi.org/10.1016/j.amepre.2020.07.017.

25

장재학. "The Plausibility and Generalizability of Larsen-Freeman’s Model of L2 Knowledge." English Teaching 62, no. 4 (2007): 31–46. http://dx.doi.org/10.15858/engtea.62.4.200712.31.

26

Mlaver, Eli, Grant C. Lynde, John F. Sweeney, and Jyotirmay Sharma. "Generalizability of COBRA: A Parsimonious Perioperative Venous Thromboembolism Risk Assessment Model." Journal of Surgical Research 293 (January 2024): 8–13. http://dx.doi.org/10.1016/j.jss.2023.08.008.

27

Li, Ruihang, Tao Li, Shanding Ye, et al. "Enhancing Generalizability via Utilization of Unlabeled Data for Occupancy Perception." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 5 (2025): 4896–904. https://doi.org/10.1609/aaai.v39i5.32518.

Abstract:
3D occupancy perception accurately estimates the volumetric status and semantic labels of a scene, attracting significant attention in the field of autonomous driving. However, enhancing the model's ability to generalize across different driving scenarios or sensing systems often requires redesigning the model or additional expensive annotations. To this end, following a comprehensive analysis of the occupancy model architecture, we propose the UGOCC method, which uses domain adaptation to efficiently harness unlabeled autonomous driving data, thereby enhancing the model's generalizability. Specifically, we design the depth fusion module by employing self-supervised depth estimation, and propose a strategy based on semantic attention and domain adversarial learning to improve the generalizability of the learnable fusion module. Additionally, we propose an OCC-specific pseudo-label selection tailored for semi-supervised learning, which optimizes the overall network's generalizability. Our experimental results on two challenging datasets, nuScenes and Waymo, demonstrate that our method not only achieves state-of-the-art generalizability but also enhances the model's perceptual capabilities within the source domain by utilizing unlabeled data.
28

O'Neill, Lotte D., Lars Korsholm, Birgitta Wallstedt, Berit Eika, and Jan Hartvigsen. "Generalizability of a Composite Student Selection Procedure at a University-Based Chiropractic Program." Journal of Chiropractic Education 23, no. 1 (2009): 8–16. http://dx.doi.org/10.7899/1042-5055-23.1.8.

Abstract:
Purpose: Non-cognitive admission criteria are typically used in chiropractic student selection to supplement grades. The reliability of non-cognitive student admission criteria in chiropractic education has not previously been examined. In addition, very few studies have examined the overall test generalizability of composites of non-cognitive admission variables in admission to health science programs. The aim of this study was to estimate the generalizability of a composite selection to a chiropractic program, consisting of: application form information, a written motivational essay, a common knowledge test, and an admission interview. Methods: Data from 105 Chiropractic applicants from the 2007 admission at the University of Southern Denmark were available for analysis. Each admission parameter was double scored using two random, blinded, and independent raters. Variance components for applicant, rater and residual effects were estimated for a mixed model with the restricted maximum likelihood method. The reliability of obtained applicant ranks (generalizability coefficients) was calculated for the individual admission criteria and for the composite admission procedure. Results: Very good generalizability was found for the common knowledge test (G = 1.00) and the admission interview (G = 0.88). Good generalizability was found for application form information (G = 0.75) and moderate generalizability (G = 0.50) for the written motivation essay. The generalizability of the final composite admission procedure, which was a weighted composite of all 4 admission variables was good (Gc = 0.80). Conclusion: Good generalizability for a composite admission to a chiropractic program was found. Optimal weighting and adequate sampling are important for obtaining optimal generalizability. Limitations and suggestions for future research are discussed.
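For a simple single-facet person × rater design (a deliberate simplification of the mixed multivariate model the study estimates), the generalizability coefficient reported above comes straight from the variance components:

```python
def g_coefficient(var_person, var_residual, n_raters):
    """G = var(person) / (var(person) + var(residual) / n_raters):
    universe-score variance over itself plus relative error variance,
    where averaging over n_raters shrinks the error term."""
    return var_person / (var_person + var_residual / n_raters)
```

Averaging over more raters reduces the relative error variance, which is why adequate sampling matters for the composite generalizability the authors discuss.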
29

An, Feng-Ping, and Jun-e Liu. "Medical Image Segmentation Algorithm Based on Optimized Convolutional Neural Network-Adaptive Dropout Depth Calculation." Complexity 2020 (May 15, 2020): 1–13. http://dx.doi.org/10.1155/2020/1645479.

Abstract:
Medical image segmentation is a key technology for image guidance. Therefore, the advantages and disadvantages of image segmentation play an important role in image-guided surgery. Traditional machine learning methods have achieved certain beneficial effects in medical image segmentation, but they have problems such as low classification accuracy and poor robustness. Deep learning theory has good generalizability and feature extraction ability, which provides a new idea for solving medical image segmentation problems. However, deep learning has problems in terms of its application to medical image segmentation: one is that the deep learning network structure cannot be constructed according to medical image characteristics; the other is that the generalizability of the deep learning model is weak. To address these issues, this paper first adapts a neural network to medical image features by adding cross-layer connections to a traditional convolutional neural network. In addition, an optimized convolutional neural network model is established. The optimized convolutional neural network model can segment medical images using the features of two scales simultaneously. At the same time, to solve the generalizability problem of the deep learning model, an adaptive distribution function is designed according to the position of the hidden layer, and then the activation probability of each layer of neurons is set. This enhances the generalizability of the dropout model, and an adaptive dropout model is proposed. This model better addresses the problem of the weak generalizability of deep learning models. Based on the above ideas, this paper proposes a medical image segmentation algorithm based on an optimized convolutional neural network with adaptive dropout depth calculation. An ultrasonic tomographic image and a lumbar CT medical image were separately segmented by the method of this paper.
The experimental results show that not only are the segmentation effects of the proposed method improved compared with those of the traditional machine learning and other deep learning methods but also the method has a high adaptive segmentation ability for various medical images. The research work in this paper provides a new perspective for research on medical image segmentation.
30

Scanlan, Tara K., David G. Russell, T. Michelle Magyar, and Larry A. Scanlan. "Project on Elite Athlete Commitment (PEAK): III. An Examination of the External Validity across Gender, and the Expansion and Clarification of the Sport Commitment Model." Journal of Sport and Exercise Psychology 31, no. 6 (2009): 685–705. http://dx.doi.org/10.1123/jsep.31.6.685.

Abstract:
The Sport Commitment Model was further tested using the Scanlan Collaborative Interview Method to examine its generalizability to New Zealand’s elite female amateur netball team, the Silver Ferns. Results supported or clarified Sport Commitment Model predictions, revealed avenues for model expansion, and elucidated the functions of perceived competence and enjoyment in the commitment process. A comparison and contrast of the in-depth interview data from the Silver Ferns with previous interview data from a comparable elite team of amateur male athletes allowed assessment of model external validity, tested the generalizability of the underlying mechanisms, and separated gender differences from discrepancies that simply reflected team or idiosyncratic differences.
31

Lu, Bingqian, Yanni Li, and Ciaran Evans. "Assessing generalizability of a dengue classifier across multiple datasets." PLOS ONE 20, no. 6 (2025): e0323886. https://doi.org/10.1371/journal.pone.0323886.

Abstract:
Early diagnosis of dengue fever is important for individual treatment and monitoring disease prevalence in the population. To assist diagnosis, previous studies have proposed classification models to detect dengue from symptoms and clinical measurements. However, there has been little exploration of whether existing models can be used to make predictions for new populations. In this study, we assess the generalizability of dengue classification models to new datasets. We trained logistic regression models on five publicly available dengue datasets from previous studies, using three explanatory variables identified as important in prior work: age, white blood cell count, and platelet count. These five datasets were collected at different times in different locations, with a variety of disease rates and patient ages. A model was trained on each dataset, and predictive performance and model calibration was evaluated on both the original (training) dataset, and the other (test) datasets from different studies. By comparing the model’s performance when applied to data from a new location, we are able to assess the model’s generalizability to new populations. We further compared performance with larger models and other classification methods. In-sample area under the receiver operating characteristic curve (AUC) values for the logistic regression models ranged from 0.74 to 0.89, while out-of-sample AUCs ranged from 0.55 to 0.89. Matching age ranges in training/test datasets increased AUC values and balanced the sensitivity and specificity. Adjusting the predicted probabilities to account for differences in dengue prevalence improved calibration in 20/28 training-test pairs. Results were similar when other explanatory variables were included and when other classification methods (decision trees and support vector machines) were used. 
The in-sample performance of the logistic regression model was consistent with previous dengue classifiers, suggesting the chosen model is a good choice in a variety of settings and has decent overall performance. However, adjustments are required to make predictions on new datasets. Practitioners can use existing dengue classifiers in new settings but should be careful with different patient ages and disease rates.
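The AUC values compared above have a simple rank interpretation; here is a minimal stand-alone version of the metric (a sketch, independent of the study's actual tooling):

```python
def auc(labels, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case scores higher than a randomly chosen negative one
    (ties count one half). 0.5 is chance; 1.0 is perfect ranking."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Computing this once on the training dataset and again on each new dataset yields the in-sample versus out-of-sample gap the study uses to measure generalizability.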
32

Nishi, Yasunari, Andreas Krumbein, Tobias Knopp, Axel Probst, and Cornelia Grabe. "On the Generalization Capability of a Data-Driven Turbulence Model by Field Inversion and Machine Learning." Aerospace 11, no. 7 (2024): 592. http://dx.doi.org/10.3390/aerospace11070592.

Full text
Abstract:
This paper discusses the generalizability of a data-augmented turbulence model, with a focus on the field inversion and machine learning approach. It is highlighted that the augmented model, based on two-dimensional (2D) separated airfoil flows, gives poorer predictive capability than the baseline model for a different class of separated flows (the NASA wall-mounted hump) due to extrapolation. We demonstrate a sensor-based approach that localizes the data-driven model correction to tackle this generalizability issue. Furthermore, the applicability of the augmented model to a more complex aeronautical three-dimensional case, the NASA Common Research Model configuration, is studied. Observations on the pressure coefficient predictions and the model correction field suggest that the present 2D-based augmentation is to some extent applicable to a three-dimensional aircraft flow.
APA, Harvard, Vancouver, ISO, and other styles
33

Rüter, Joachim, Umut Durak, and Johann C. Dauer. "Investigating the Sim-to-Real Generalizability of Deep Learning Object Detection Models." Journal of Imaging 10, no. 10 (2024): 259. http://dx.doi.org/10.3390/jimaging10100259.

Full text
Abstract:
State-of-the-art object detection models need large and diverse datasets for training. As these are hard to acquire for many practical applications, training images from simulation environments are gaining more and more attention. A problem arises because deep learning models trained on simulation images usually struggle to generalize to real-world images, as shown by a sharp performance drop. The definitive causes of and influences on this performance drop have not yet been identified. While previous work mostly investigated the influence of the data as well as the use of domain adaptation, this work provides a novel perspective by investigating the influence of the object detection model itself. Against this background, first, a corresponding measure called sim-to-real generalizability is defined, comprising the capability of an object detection model to generalize from simulation training images to real-world evaluation images. Second, 12 different deep learning-based object detection models are trained and their sim-to-real generalizability is evaluated. The models are trained with a variation of hyperparameters, resulting in a total of 144 trained and evaluated versions. The results show a clear influence of the feature extractor and offer further insights and correlations. They open up future research on investigating influences on the sim-to-real generalizability of deep learning-based object detection models, as well as on developing feature extractors with better sim-to-real generalizability.
APA, Harvard, Vancouver, ISO, and other styles
34

Luo, Gang. "A Roadmap for Boosting Model Generalizability for Predicting Hospital Encounters for Asthma." JMIR Medical Informatics 10, no. 3 (2022): e33044. http://dx.doi.org/10.2196/33044.

Full text
Abstract:
In the United States, ~9% of people have asthma. Each year, asthma incurs high health care costs and many hospital encounters, including 1.8 million emergency room visits and 439,000 hospitalizations. A small percentage of patients with asthma use most health care resources. To improve outcomes and cut resource use, many health care systems use predictive models to prospectively find high-risk patients and enroll them in care management for preventive care. For maximal benefit from costly care management with limited service capacity, only patients at the highest risk should be enrolled. However, prior models built by others miss >50% of true highest-risk patients and mislabel many low-risk patients as high risk, leading to suboptimal care and wasted resources. To address this issue, 3 site-specific models were recently built to predict hospital encounters for asthma, achieving up to >11% better performance. However, these models do not generalize well across sites and patient subgroups, creating 2 gaps that must be closed before translating these models into clinical use. This paper points out these 2 gaps and outlines 2 corresponding solutions: (1) a new machine learning technique to create cross-site generalizable predictive models that accurately find high-risk patients and (2) a new machine learning technique to automatically raise model performance for poorly performing subgroups while maintaining model performance on other subgroups. This provides a roadmap for future research.
APA, Harvard, Vancouver, ISO, and other styles
35

Ispas, Dan, Dragos Iliescu, Alexandra Ilie, and Russell E. Johnson. "Exploring the Cross-Cultural Generalizability of the Five-Factor Model of Personality." Journal of Cross-Cultural Psychology 45, no. 7 (2014): 1074–88. http://dx.doi.org/10.1177/0022022114534769.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Arce, Alvaro J., and Ze Wang. "Applying Rasch Model and Generalizability Theory to Study Modified-Angoff Cut Scores." International Journal of Testing 12, no. 1 (2012): 44–60. http://dx.doi.org/10.1080/15305058.2011.614366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Wasti, S. Arzu, Mindy E. Bergman, Theresa M. Glomb, and Fritz Drasgow. "Test of the cross-cultural generalizability of a model of sexual harassment." Journal of Applied Psychology 85, no. 5 (2000): 766–78. http://dx.doi.org/10.1037/0021-9010.85.5.766.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Marcoulides, George A., Terry J. Larsen, and Ronald H. Heck. "Examining the generalizability of a leadership model: issues for assessing administrator performance." International Journal of Educational Management 9, no. 6 (1995): 4–9. http://dx.doi.org/10.1108/09513549510098362.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Rossier, Jérôme, Anton Aluja, Luis F. García, et al. "The Cross-Cultural Generalizability of Zuckerman's Alternative Five-Factor Model of Personality." Journal of Personality Assessment 89, no. 2 (2007): 188–96. http://dx.doi.org/10.1080/00223890701468618.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Miller, A., and G. G. Dess. "ASSESSING PORTER'S (1980) MODEL IN TERMS OF ITS GENERALIZABILITY, ACCURACY AND SIMPLICITY." Journal of Management Studies 30, no. 4 (1993): 553–85. http://dx.doi.org/10.1111/j.1467-6486.1993.tb00316.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Song, Juyeon, Hanna Gaspard, Benjamin Nagengast, and Ulrich Trautwein. "The Conscientiousness × Interest Compensation (CONIC) model: Generalizability across domains, outcomes, and predictors." Journal of Educational Psychology 112, no. 2 (2020): 271–87. http://dx.doi.org/10.1037/edu0000379.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Ho, Sung Yang, Kimberly Phua, Limsoon Wong, and Wilson Wen Bin Goh. "Extensions of the External Validation for Checking Learned Model Interpretability and Generalizability." Patterns 1, no. 8 (2020): 100129. http://dx.doi.org/10.1016/j.patter.2020.100129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Green, Michael W., Peter J. Rogers, and Nicola A. Elliman. "Dietary restraint and addictive behaviors: The generalizability of Tiffany's Cue Reactivity Model." International Journal of Eating Disorders 27, no. 4 (2000): 419–27. http://dx.doi.org/10.1002/(sici)1098-108x(200005)27:4<419::aid-eat6>3.0.co;2-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Dee, William, Rana Alaaeldin Ibrahim, and Eirini Marouli. "Histopathological domain adaptation with generative adversarial networks: Bridging the domain gap between thyroid cancer histopathology datasets." PLOS ONE 19, no. 12 (2024): e0310417. https://doi.org/10.1371/journal.pone.0310417.

Full text
Abstract:
Deep learning techniques are increasingly being used to classify medical imaging data with high accuracy. Despite this, due to often limited training data, these models can lack sufficient generalizability to predict unseen test data, produced in different domains, with comparable performance. This study focuses on thyroid histopathology image classification and investigates whether a Generative Adversarial Network (GAN), trained with just 156 patient samples, can produce high-quality synthetic images to sufficiently augment training data and improve overall model generalizability. Utilizing a StyleGAN2 approach, the generative network produced images with a Fréchet Inception Distance (FID) score of 5.05, matching state-of-the-art GAN results in non-medical domains with comparable dataset sizes. Augmenting the training data with these GAN-generated images increased model generalizability when tested on external data sourced from three separate domains, improving overall precision and AUC by 7.45% and 7.20% respectively compared with a baseline model. Most importantly, this performance improvement was observed on minority-class images: tumour subtypes which are known to suffer from high levels of inter-observer variability when classified by trained pathologists.
APA, Harvard, Vancouver, ISO, and other styles
45

Banerjee, Chayan, Kien Nguyen, Clinton Fookes, Gregory Hancock, and Thomas Coulthard. "Introducing Iterative Model Calibration (IMC) v1.0: a generalizable framework for numerical model calibration with a CAESAR-Lisflood case study." Geoscientific Model Development 18, no. 3 (2025): 803–18. https://doi.org/10.5194/gmd-18-803-2025.

Full text
Abstract:
In geosciences, including hydrology and geomorphology, the reliance on numerical models necessitates the precise calibration of their parameters to effectively translate information from observed to unobserved settings. Traditional calibration techniques, however, are marked by poor generalizability, demanding significant manual labor for data preparation and the calibration process itself. Moreover, the utility of machine-learning-based and data-driven approaches is curtailed by the requirement for the numerical model to be differentiable for optimization purposes, which challenges their generalizability across different models. Furthermore, the potential of freely available geomorphological data remains underexploited in existing methodologies. In response to these challenges, we introduce a generalizable framework for calibrating numerical models, with a particular focus on geomorphological models, named Iterative Model Calibration (IMC). This approach efficiently identifies the optimal set of parameters for a given numerical model through a strategy based on a Gaussian neighborhood algorithm. Through experiments, we demonstrate the efficacy of IMC in calibrating the widely used landscape evolution model CAESAR-Lisflood (CL). The IMC process substantially improves the agreement between CL predictions and observed data (in the context of gully catchment landscape evolution), surpassing both uncalibrated and manual approaches.
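The Gaussian neighborhood strategy described in this abstract can be sketched as a simple sample-evaluate-shrink loop around the current best parameter set. The shrink factor, error metric, and sampling scheme below are illustrative assumptions; the paper's actual IMC algorithm is considerably more elaborate.

```python
import numpy as np

def iterative_calibration(model, observed, init, sigma,
                          n_iter=50, n_samples=20, rng=None):
    """Gaussian-neighborhood parameter search: at each iteration, sample
    candidate parameter sets around the current best, keep the candidate
    whose model output best matches the observations, and shrink the
    search radius so the neighborhood gradually tightens."""
    rng = np.random.default_rng(rng)
    best = np.asarray(init, dtype=float)
    best_err = np.mean((model(best) - observed) ** 2)
    for _ in range(n_iter):
        # Draw candidates from a Gaussian centered on the current best.
        candidates = rng.normal(best, sigma, size=(n_samples, best.size))
        for c in candidates:
            err = np.mean((model(c) - observed) ** 2)
            if err < best_err:
                best, best_err = c, err
        sigma = sigma * 0.95  # gradually focus the neighborhood
    return best, best_err
```

With a cheap surrogate in place of a landscape evolution model, the loop recovers the parameters of a known linear relationship in a few dozen iterations.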
APA, Harvard, Vancouver, ISO, and other styles
46

Xu, Xuhai, Xin Liu, Han Zhang, et al. "GLOBEM." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, no. 4 (2022): 1–34. http://dx.doi.org/10.1145/3569485.

Full text
Abstract:
There is a growing body of research revealing that longitudinal passive sensing data from smartphones and wearable devices can capture daily behavior signals for human behavior modeling, such as depression detection. Most prior studies build and evaluate machine learning models using data collected from a single population. However, to ensure that a behavior model can work for a larger group of users, its generalizability needs to be verified on multiple datasets from different populations. We present the first work evaluating cross-dataset generalizability of longitudinal behavior models, using depression detection as an application. We collect multiple longitudinal passive mobile sensing datasets with over 500 users from two institutes over a two-year span, leading to four institute-year datasets. Using the datasets, we closely re-implemented and evaluated nine prior depression detection algorithms. Our experiment reveals the lack of model generalizability of these methods. We also implement eight recently popular domain generalization algorithms from the machine learning community. Our results indicate that these methods also do not generalize well on our datasets, with barely any advantage over the naive baseline of guessing the majority class. We then present two new algorithms with better generalizability. Our new algorithm, Reorder, significantly and consistently outperforms existing methods on most cross-dataset generalization setups. However, the overall advantage is incremental, and there is still great room for improvement. Our analysis reveals that individual differences (both within and between populations) may play the most important role in the cross-dataset generalization challenge. Finally, we provide an open-source benchmark platform, GLOBEM (short for Generalization of Longitudinal BEhavior Modeling), to consolidate all 19 algorithms. GLOBEM can support researchers in using, developing, and evaluating different longitudinal behavior modeling methods.
We call for researchers' attention to model generalizability evaluation for future longitudinal human behavior modeling studies.
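The cross-dataset evaluation protocol this abstract describes, training on one institute-year dataset, testing on each of the others, and comparing against a majority-class baseline, can be sketched as a leave-one-dataset-out loop. The `fit`/`predict` interface below is an assumed abstraction for illustration, not the GLOBEM API.

```python
import numpy as np

def cross_dataset_eval(datasets, fit, predict):
    """Train on each dataset and test on every *other* dataset.
    Returns, per (train, test) pair, the model's accuracy alongside the
    naive majority-class baseline computed from the training labels."""
    results = {}
    for train_name, (Xtr, ytr) in datasets.items():
        model = fit(Xtr, ytr)
        for test_name, (Xte, yte) in datasets.items():
            if test_name == train_name:
                continue  # only cross-dataset pairs measure generalizability
            acc = np.mean(predict(model, Xte) == yte)
            # Baseline: always predict the training set's majority class.
            majority = np.mean(yte == np.bincount(ytr).argmax())
            results[(train_name, test_name)] = (acc, majority)
    return results
```

A model whose cross-dataset accuracy hovers near the majority column is not generalizing, which is exactly the failure mode the paper reports for most prior methods.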
APA, Harvard, Vancouver, ISO, and other styles
47

Hung, Chuan-Sheng, Chun-Hung Richard Lin, Jain-Shing Liu, Shi-Huang Chen, Tsung-Chi Hung, and Chih-Min Tsai. "Enhancing generalization in a Kawasaki Disease prediction model using data augmentation: Cross-validation of patients from two major hospitals in Taiwan." PLOS ONE 19, no. 12 (2024): e0314995. https://doi.org/10.1371/journal.pone.0314995.

Full text
Abstract:
Kawasaki Disease (KD) is a rare febrile illness affecting infants and young children, potentially leading to coronary artery complications and, in severe cases, mortality if untreated. However, KD is frequently misdiagnosed as a common fever in clinical settings, and the inherent data imbalance further complicates accurate prediction when using traditional machine learning and statistical methods. This paper introduces two advanced approaches to address these challenges, enhancing prediction accuracy and generalizability. The first approach proposes a stacking model termed the Disease Classifier (DC), specifically designed to recognize minority class samples within imbalanced datasets, thereby mitigating the bias commonly observed in traditional models toward the majority class. Secondly, we introduce a combined model, the Disease Classifier with CTGAN (CTGAN-DC), which integrates DC with Conditional Tabular Generative Adversarial Network (CTGAN) technology to improve data balance and predictive performance further. Utilizing CTGAN-based oversampling techniques, this model retains the original data characteristics of KD while expanding data diversity. This effectively balances positive and negative KD samples, significantly reducing model bias toward the majority class and enhancing both predictive accuracy and generalizability. Experimental evaluations indicate substantial performance gains, with the DC and CTGAN-DC models achieving notably higher predictive accuracy than individual machine learning models. Specifically, the DC model achieves sensitivity and specificity rates of 95%, while the CTGAN-DC model achieves 95% sensitivity and 97% specificity, demonstrating superior recognition capability. 
Furthermore, both models exhibit strong generalizability across diverse KD datasets, particularly the CTGAN-DC model, which surpasses the JAMA model with a 3% increase in sensitivity and a 95% improvement in generalization sensitivity and specificity, effectively resolving the model collapse issue observed in the JAMA model. In sum, the proposed DC and CTGAN-DC architectures demonstrate robust generalizability across multiple KD datasets from various healthcare institutions and significantly outperform other models, including XGBoost. These findings lay a solid foundation for advancing disease prediction in the context of imbalanced medical data.
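The CTGAN-based oversampling idea above can be sketched generically: generate enough synthetic minority-class rows to equalize the class counts before training. The `generate` callback stands in for a fitted generative model's sampler (e.g. a CTGAN trained on minority-class records); the actual CTGAN-DC pipeline is more involved.

```python
import numpy as np

def balance_with_synthetic(X, y, generate):
    """Balance a binary dataset by appending synthetic minority-class
    rows produced by `generate(n)`, a stand-in for a trained generator's
    sampling function."""
    X, y = np.asarray(X), np.asarray(y)
    counts = np.bincount(y)
    minority = counts.argmin()
    deficit = counts.max() - counts.min()
    if deficit == 0:
        return X, y  # already balanced
    X_new = np.vstack([X, generate(deficit)])
    y_new = np.concatenate([y, np.full(deficit, minority)])
    return X_new, y_new
```

Balancing this way only helps if the generator preserves the minority class's real characteristics, which is why the paper evaluates the resulting models across hospitals.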
APA, Harvard, Vancouver, ISO, and other styles
48

Wilson, Christopher J., Stephen C. Bowden, Linda K. Byrne, Louis-Charles Vannier, Ana Hernandez, and Lawrence G. Weiss. "Cross-National Generalizability of WISC-V and CHC Broad Ability Constructs across France, Spain, and the US." Journal of Intelligence 11, no. 8 (2023): 159. http://dx.doi.org/10.3390/jintelligence11080159.

Full text
Abstract:
The Cattell–Horn–Carroll (CHC) model is based on psychometric cognitive ability research and is the most empirically supported model of cognitive ability constructs. This study is one in a series of cross-national comparisons investigating the equivalence and generalizability of psychological constructs which align with the CHC model. Previous research exploring the cross-cultural generalizability of cognitive ability measures concluded that the factor analytic models of cognitive abilities generalize across cultures and are compatible with well-established CHC constructs. The equivalence of the psychological constructs, as measured by the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V), has been established across English-speaking samples. However, few studies have explored the equivalence of psychological constructs across non-English speaking, nationally representative samples. This study explored the equivalence of the WISC-V five-factor model across standardization samples from France, Spain, and the US. The five-factor scoring model demonstrated excellent fit across the three samples independently. Factorial invariance was investigated and the results demonstrated strict factorial invariance across France, Spain, and the US. The results provide further support for the generalizability of CHC constructs across Western cultural populations that speak different languages and support the continued use and development of the CHC model as a common nomenclature and blueprint for cognitive ability researchers and test developers. Suggestions for future research on the CHC model of intelligence are discussed.
APA, Harvard, Vancouver, ISO, and other styles
49

Yang, Hao, Qianyu Zhou, Haijia Sun, et al. "PointDGMamba: Domain Generalization of Point Cloud Classification via Generalized State Space Model." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 9 (2025): 9193–201. https://doi.org/10.1609/aaai.v39i9.32995.

Full text
Abstract:
Domain Generalization (DG) has been recently explored to improve the generalizability of point cloud classification (PCC) models toward unseen domains. However, these models often suffer from limited receptive fields or quadratic complexity due to the use of convolutional neural networks or vision Transformers. In this paper, we present the first work that studies the generalizability of state space models (SSMs) in DG PCC and find that directly applying SSMs to DG PCC encounters several challenges: the inherent topology of the point cloud tends to be disrupted during the serialization stage, leading to noise accumulation. In addition, the lack of domain-agnostic designs for feature learning and data scanning introduces unanticipated domain-specific information into the 3D sequence data. To this end, we propose a novel framework, PointDGMamba, that achieves strong generalizability toward unseen domains and has the advantages of global receptive fields and efficient linear complexity. PointDGMamba consists of three innovative components: Masked Sequence Denoising (MSD), Sequence-wise Cross-domain Feature Aggregation (SCFA), and Dual-level Domain Scanning (DDS). In particular, MSD selectively masks out the noised point tokens of the point cloud sequences, and SCFA introduces cross-domain but same-class point cloud features to encourage the model to learn how to extract more generalized features. DDS includes intra-domain scanning and cross-domain scanning to facilitate information exchange between features. In addition, we propose a new and more challenging benchmark, PointDG-3to1, for multi-domain generalization. Extensive experiments demonstrate the effectiveness and state-of-the-art performance of PointDGMamba.
APA, Harvard, Vancouver, ISO, and other styles
50

Fuller, Barbara F., and Madalynn Neu. "Generalizability and Clinical Utility of a Practice-Based Infant Pain Assessment Instrument." Clinical Nursing Research 10, no. 2 (2001): 122–39. http://dx.doi.org/10.1177/c10n2r4.

Full text
Abstract:
The purpose of this study was to determine the clinical usefulness and generalizability of an infant pain assessment instrument. Earlier work showed that this instrument, an algorithm derived from a model of infant pain assessment, possessed excellent content validity, criterion-like validity, and 3-month stability (test-retest reliability). In this study, generalizability was determined in two ways: by comparing the percentage agreement between inexperienced pediatric nurses and one author, both using the tool to assess the pain of infants in various clinical settings, and by comparing the percentage agreement between one author, who used the tool to assess pain, and the infant's pediatric nurse caretaker, who used his or her clinical expertise rather than the tool, across various clinical settings. The results show excellent generalizability.
APA, Harvard, Vancouver, ISO, and other styles