To see the other types of publications on this topic, follow the link: Generalizability theory.

Journal articles on the topic 'Generalizability theory'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Generalizability theory.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Shavelson, Richard J., Noreen M. Webb, and Glenn L. Rowley. "Generalizability theory." American Psychologist 44, no. 6 (1989): 922–32. http://dx.doi.org/10.1037/0003-066x.44.6.922.

2

Brennan, Robert L. "Generalizability Theory." Journal of Educational Measurement 40, no. 1 (March 2003): 105–7. http://dx.doi.org/10.1111/j.1745-3984.2003.tb01098.x.

3

Brennan, Robert L. "Generalizability Theory." Educational Measurement: Issues and Practice 11, no. 4 (1992): 27–34. http://dx.doi.org/10.1111/j.1745-3992.1992.tb00260.x.

4

Kane, Michael. "Generalizability Theory." International Journal of Testing 3, no. 1 (March 2003): 95–100. http://dx.doi.org/10.1207/s15327574ijt0301_6.

5

Wardrop, James L. "Generalizability theory." Psychometrika 71, no. 3 (September 2006): 601. http://dx.doi.org/10.1007/s11336-005-1366-y.

6

O'Brian, Nigel, Sue O'Brian, Ann Packman, and Mark Onslow. "Generalizability Theory I." Journal of Speech, Language, and Hearing Research 46, no. 3 (June 2003): 711–17. http://dx.doi.org/10.1044/1092-4388(2003/056).

Abstract:
Perceptual rating scales can be valid, reliable, and convenient tools for evaluating speech outcomes in research and clinical practice. However, they depend on the perceptions of observers. Too few raters may compromise accuracy, whereas too many would be inefficient. There is therefore a need to determine the minimum number of raters required for a reliable result. In this context, the ideas of Generalizability Theory have become increasingly popular in the behavioral sciences; suggestions have been made for their application to the assessment of speech-language disorders. Here we review the concepts involved, which are applied in a companion article dealing with speech naturalness data obtained from clients who recently completed treatment for their stuttering. We pay particular attention to the statistical requirements of the theory, including some cautions about possible inappropriate use of these techniques. We also offer a new interpretation of the results of the analysis that aims to be more meaningful to most speech-language pathologists.
7

O'Brian, Sue, Ann Packman, Mark Onslow, and Nigel O'Brian. "Generalizability Theory II." Journal of Speech, Language, and Hearing Research 46, no. 3 (June 2003): 718–23. http://dx.doi.org/10.1044/1092-4388(2003/057).

Abstract:
Generalizability theory has been recommended as the most comprehensive method for assessing the reliability of observational data. It provides a framework for calculating the various sources of measurement error and allows further design of measurements for a particular purpose. This paper gives a practical illustration of how this method may be used in the analysis of observational data. We use the ratings of 15 unsophisticated raters using the 9-point speech naturalness scale of R. R. Martin, S. K. Haroldson, and K. A. Triden (1984) to evaluate the speech of adults before and after treatment for stuttering. We calculate various sources of measurement error and use these to estimate the minimum number of raters and ratings per rater for a reliable result. For posttreatment data, the average of three independent raters, and for pretreatment data, the average of five independent raters should give a result within one scale point of the hypothetical true score for the speaker in at least 80% of samples. The example illustrates the advantages of using this method of analysis.
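As a minimal sketch of the decision-study logic described above (assuming, for illustration, a fully crossed persons x raters design; the published analysis may use a different design), averaging over n_r independent raters reduces the absolute error variance and the corresponding standard error of measurement roughly as

\[
\sigma^{2}_{\Delta}(n_r) = \frac{\sigma^{2}_{r} + \sigma^{2}_{pr,e}}{n_r},
\qquad
\mathrm{SEM}(n_r) = \sqrt{\sigma^{2}_{\Delta}(n_r)} .
\]

Picking the smallest n_r whose SEM places an averaged rating within one scale point of the true score with the desired probability (about 80% in the abstract) yields the kind of minimum-rater recommendation reported here.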
8

Sundre, Donna L. "Generalizability theory: A primer." Evaluation Practice 14, no. 2 (June 1993): 207–9. http://dx.doi.org/10.1016/0886-1633(93)90019-l.

9

Brennan, Robert L. "Generalizability Theory and Classical Test Theory." Applied Measurement in Education 24, no. 1 (December 30, 2010): 1–21. http://dx.doi.org/10.1080/08957347.2011.532417.

10

Atilgan, Hakan. "Reliability of Essay Ratings: A Study on Generalizability Theory." Eurasian Journal of Educational Research 19, no. 80 (April 3, 2019): 1–18. http://dx.doi.org/10.14689/ejer.2019.80.7.

11

Okamoto, Yasuharu. "Bayesian Analysis in Generalizability Theory." Proceedings of the Annual Convention of the Japanese Psychological Association 78 (September 10, 2014): 1PM-2-004. http://dx.doi.org/10.4992/pacjpa.78.0_1pm-2-004.

12

Brennan, Robert L. "(Mis)Conceptions About Generalizability Theory." Educational Measurement: Issues and Practice 19, no. 1 (2000): 5–10. http://dx.doi.org/10.1111/j.1745-3992.2000.tb00017.x.

13

Bell, John F. "Generalizability Theory: The Software Problem." Journal of Educational Statistics 10, no. 1 (March 1985): 19–29. http://dx.doi.org/10.3102/10769986010001019.

Abstract:
This paper outlines the problems associated with the estimation of variance components in generalizability analyses using analysis of variance software, and discusses the most useful software currently available for this specialist application: the MIVQUE method of the SAS procedure VARCOMP.
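As a rough, hypothetical illustration of the computation that such software automates, the following Python sketch estimates variance components for a fully crossed persons x raters design from the ANOVA expected mean squares. The data and function name are invented, and this is the simple ANOVA estimator rather than the MIVQUE method the abstract recommends.

```python
import numpy as np

def g_study_pxr(scores: np.ndarray) -> dict:
    """ANOVA variance-component estimates for a crossed p x r design.

    scores: (n_persons, n_raters) matrix of ratings (toy data below).
    """
    n_p, n_r = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    # Sums of squares for persons, raters, and the residual (pr,e)
    ss_p = n_r * np.sum((person_means - grand) ** 2)
    ss_r = n_p * np.sum((rater_means - grand) ** 2)
    ss_pr = np.sum((scores - grand) ** 2) - ss_p - ss_r

    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

    # Solve the expected-mean-square equations of the random p x r model;
    # negative estimates are truncated at zero, as is common practice.
    return {
        "person": max((ms_p - ms_pr) / n_r, 0.0),
        "rater": max((ms_r - ms_pr) / n_p, 0.0),
        "residual (pr,e)": ms_pr,
    }

# Toy example: 5 persons rated by 3 raters (values are made up)
ratings = np.array([[6, 7, 6],
                    [4, 5, 4],
                    [8, 8, 7],
                    [5, 6, 6],
                    [3, 4, 3]], dtype=float)
print(g_study_pxr(ratings))
```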
14

Bell, John F. "Generalizability Theory: The Software Problem." Journal of Educational Statistics 10, no. 1 (1985): 19. http://dx.doi.org/10.2307/1164927.

15

Ward, David G. "Factor Indeterminacy in Generalizability Theory." Applied Psychological Measurement 10, no. 2 (June 1986): 159–65. http://dx.doi.org/10.1177/014662168601000206.

16

Cetin, Bayram, Nese Guler, and Rabia Sarica. "Using Generalizability Theory to Examine Different Concept Map Scoring Methods." Eurasian Journal of Educational Research 16, no. 66 (December 19, 2016): 1–30. http://dx.doi.org/10.14689/ejer.2016.66.12.

17

Li, Mao-Neng Fred, and Gary Lautenschlager. "Generalizability Theory Applied to Categorical Data." Educational and Psychological Measurement 57, no. 5 (October 1997): 813–22. http://dx.doi.org/10.1177/0013164497057005007.

18

Orlitzky, Marc. "Corporate Social Performance and Generalizability Theory." Proceedings of the International Association for Business and Society 12 (2001): 463–70. http://dx.doi.org/10.5840/iabsproc20011245.

19

LoPilato, Alexander C., Nathan T. Carter, and Mo Wang. "Updating Generalizability Theory in Management Research." Journal of Management 41, no. 2 (October 8, 2014): 692–717. http://dx.doi.org/10.1177/0149206314554215.

20

Bryman, Alan. "The Generalizability of Implicit Leadership Theory." Journal of Social Psychology 127, no. 2 (April 1987): 129–41. http://dx.doi.org/10.1080/00224545.1987.9713672.

21

Hovell, Melbourne F., and Ding Ding. "The Generalizability and Specificity of Theory." Journal of Adolescent Health 46, no. 3 (March 2010): 207–8. http://dx.doi.org/10.1016/j.jadohealth.2009.12.018.

22

Bimpeh, Yaw, William Pointer, Ben Alexander Smith, and Liz Harrison. "Evaluating Human Scoring Using Generalizability Theory." Applied Measurement in Education 33, no. 3 (July 2, 2020): 198–209. http://dx.doi.org/10.1080/08957347.2020.1750403.

23

Highhouse, Scott, Alison Broadfoot, Jennifer E. Yugo, and Shelba A. Devendorf. "Examining corporate reputation judgments with generalizability theory." Journal of Applied Psychology 94, no. 3 (May 2009): 782–89. http://dx.doi.org/10.1037/a0013934.

24

Webb, Noreen M., Glenn L. Rowley, and Richard J. Shavelson. "Using Generalizability Theory in Counseling and Development." Measurement and Evaluation in Counseling and Development 21, no. 2 (July 1988): 81–90. http://dx.doi.org/10.1080/07481756.1988.12022886.

25

van Weeren, J., and T. J. J. M. Theunissen. "Testing Pronunciation: An Application of Generalizability Theory." Language Learning 37, no. 1 (March 1987): 109–22. http://dx.doi.org/10.1111/j.1467-1770.1968.tb01314.x.

26

Boodoo, Gwyneth M., and Patricia S. O'Sullivan. "Assessing Pediatric Clerkship Evaluations Using Generalizability Theory." Evaluation & the Health Professions 9, no. 4 (December 1986): 467–86. http://dx.doi.org/10.1177/016327878600900406.

27

Zhang, Jinming, and Chih-Kai (Cary) Lin. "Generalizability Theory With One-Facet Nonadditive Models." Applied Psychological Measurement 40, no. 6 (July 28, 2016): 367–86. http://dx.doi.org/10.1177/0146621616651603.

28

Talsma, Paul. "Assessing sensory panel performance using generalizability theory." Food Quality and Preference 47 (January 2016): 3–9. http://dx.doi.org/10.1016/j.foodqual.2015.02.019.

29

Soysal, Sümeyra. "Examining Cross-Cultural Applicability via Generalizability Theory." Participatory Educational Research 10, no. 1 (January 30, 2023): 178–89. http://dx.doi.org/10.17275/per.23.10.10.1.

Abstract:
Applying a measurement instrument developed in one country to other countries raises a critical question, especially in cross-cultural studies. Confirmatory factor analysis (CFA) is the most commonly used method for examining the cross-cultural applicability of measurement tools. Although CFA is a sophisticated technique for investigating various types of equivalence (structural, metric, scalar, and the like), it has some limitations. Within classical test theory, when a measurement tool is not invariant across countries, it remains unclear which factors contribute to the error variance. CFA also reveals little about how the dimensionality of the measurement tool affects measurement invariance. Hence, the central aim of this study is to examine measurement comparability, or cross-cultural applicability, across countries on an international assessment using generalizability theory (G-theory). A multi-faceted design is also used to examine the contribution of dimensionality to error variance. For illustration, eight scales related to attitudes towards mathematics from the PISA 2012 student questionnaire dataset are used. The study is based on data from Türkiye, Finland, and the USA. The unbalanced multi-faceted designs are analyzed using G String IV. In conclusion, almost all results supported the research expectations: the G-theory estimates indicate that the attitudes-towards-mathematics scales are applicable cross-nationally.
30

Hart, Peter D., and Patrick Jensen. "Reliability of Body Composition Assessment Using Generalizability Theory (G-Theory)." Medicine & Science in Sports & Exercise 48 (May 2016): 992. http://dx.doi.org/10.1249/01.mss.0000487982.91421.f8.

31

McDaniel, Anna M. "Using Generalizability Theory for the Estimation of Reliability of a Patient Classification System." Journal of Nursing Measurement 2, no. 1 (January 1994): 49–62. http://dx.doi.org/10.1891/1061-3749.2.1.49.

Abstract:
This article discusses the measurement issues associated with estimating the reliability of patient classification systems (PCSs). Generalizability theory is proposed as an approach to overcome the limitations of traditional methods of estimating reliability of PCSs. The results of a demonstration study in which generalizability theory is used to support the reliability of a PCS are reported. A coefficient of generalizability, analogous to the reliability coefficient, was computed based on the variance components estimated. The generalizability coefficient for the total PCS score was .034, which increased to .650 when one item was deleted. The generalizability coefficient for individual items ranged from .053 to .961. Suggestions for further instrument development are offered.
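For readers unfamiliar with the "coefficient of generalizability" mentioned above, it is conventionally defined as the ratio of universe-score variance to itself plus relative error variance,

\[
E\rho^{2} = \frac{\sigma^{2}_{p}}{\sigma^{2}_{p} + \sigma^{2}_{\delta}} ,
\]

where σ²_p is the variance attributable to the objects of measurement (here, patients) and σ²_δ collects the variance components of effects involving both the objects of measurement and the facets, each divided by the number of conditions sampled for those facets. The exact composition of σ²_δ depends on the study design, which the abstract does not specify.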
32

Scarsellone, Jana M. "Analysis of Observational Data in Speech and Language Research Using Generalizability Theory." Journal of Speech, Language, and Hearing Research 41, no. 6 (December 1998): 1341–47. http://dx.doi.org/10.1044/jslhr.4106.1341.

Abstract:
Most research in speech-language pathology relies on observational data collected by human observers or judges. The reliability and generalizability of such measurements are always important considerations. This article reviews classical methods of estimating reliability and proposes that a more powerful approach capable of estimating the dependability of behavioral measurements is available. This approach, based on generalizability theory, provides a practical framework for estimating multiple sources of measurement error in the collection of observational data. Concepts central to generalizability theory are discussed, and a hypothetical data set illustrates the usefulness of generalizability measurements in speech and language research.
33

Mushquash, Christopher, and Brian P. O’Connor. "SPSS and SAS programs for generalizability theory analyses." Behavior Research Methods 38, no. 3 (August 2006): 542–47. http://dx.doi.org/10.3758/bf03192810.

34

Preuss, Richard A. "Using Generalizability Theory to Develop Clinical Assessment Protocols." Physical Therapy 93, no. 4 (April 1, 2013): 562–69. http://dx.doi.org/10.2522/ptj.20120368.

Abstract:
Clinical assessment protocols must produce data that are reliable, with a clinically attainable minimal detectable change (MDC). In a reliability study, generalizability theory has 2 advantages over classical test theory. These advantages provide information that allows assessment protocols to be adjusted to match individual patient profiles. First, generalizability theory allows the user to simultaneously consider multiple sources of measurement error variance (facets). Second, it allows the user to generalize the findings of the main study across the different study facets and to recalculate the reliability and MDC based on different combinations of facet conditions. In doing so, clinical assessment protocols can be chosen based on minimizing the number of measures that must be taken to achieve a realistic MDC, using repeated measures to minimize the MDC, or simply based on the combination that best allows the clinician to monitor an individual patient's progress over a specified period of time.
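One commonly used link between the generalizability results and the MDC referred to here (the article may use a different confidence level or formulation) converts the absolute error variance from the D-study into a standard error of measurement and then into a 95% MDC:

\[
\mathrm{SEM} = \sqrt{\sigma^{2}_{\Delta}} ,
\qquad
\mathrm{MDC}_{95} = 1.96 \times \sqrt{2} \times \mathrm{SEM} ,
\]

so adding conditions to an error-heavy facet lowers σ²_Δ and, in turn, the MDC.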
35

Ragan, Brian G., Minsoo Kang, Tanya Marquez, Gerald W. Bell, and Weimo Zhu. "Graphic Pain Rating Scale Reliability using Generalizability Theory." Medicine & Science in Sports & Exercise 36, Supplement (May 2004): S295. http://dx.doi.org/10.1249/00005768-200405001-01413.

36

Turner, A. Allan, Marcel Bouffard, and Henry C. Lukaski. "Examination of bioelectrical impedance errors using generalizability theory." Sports Medicine, Training and Rehabilitation 7, no. 2 (November 1996): 87–103. http://dx.doi.org/10.1080/15438629609512074.

37

Vispoel, Walter P., Carrie A. Morris, and Murat Kilinc. "Using generalizability theory with continuous latent response variables." Psychological Methods 24, no. 2 (April 2019): 153–78. http://dx.doi.org/10.1037/met0000177.

38

Tong, Ye, and Robert L. Brennan. "Bootstrap Estimates of Standard Errors in Generalizability Theory." Educational and Psychological Measurement 67, no. 5 (June 6, 2007): 804–17. http://dx.doi.org/10.1177/0013164407301533.

39

Ragan, Brian G., Minsoo Kang, Tanya Marquez, Gerald W. Bell, and Weimo Zhu. "Graphic Pain Rating Scale Reliability using Generalizability Theory." Medicine & Science in Sports & Exercise 36, Supplement (May 2004): S295. http://dx.doi.org/10.1097/00005768-200405001-01413.

40

Suen, Hoi K., Chin-Hsieh Lu, John T. Neisworth, and Stephen J. Bagnato. "Measurement of Team Decision-Making Through Generalizability Theory." Journal of Psychoeducational Assessment 11, no. 2 (June 1993): 120–32. http://dx.doi.org/10.1177/073428299301100202.

41

Goodwin, Laura D., and William L. Goodwin. "Using Generalizability Theory in Early Childhood Special Education." Journal of Early Intervention 15, no. 2 (April 1991): 193–204. http://dx.doi.org/10.1177/105381519101500208.

42

Hall, Charles B. "Comment: Generalizability theory and assessment in medical training." Neurology 85, no. 18 (October 2, 2015): 1628. http://dx.doi.org/10.1212/wnl.0000000000002057.

43

Johnson, Sandra, and John F. Bell. "Evaluating and Predicting Survey Efficiency Using Generalizability Theory." Journal of Educational Measurement 22, no. 2 (June 1985): 107–19. http://dx.doi.org/10.1111/j.1745-3984.1985.tb01051.x.

44

Sanders, Piet F. "Alternative solutions for optimization problems in generalizability theory." Psychometrika 57, no. 3 (September 1992): 351–56. http://dx.doi.org/10.1007/bf02295423.

45

Wang, Luming, and Adam Finn. "Measuring CBBE across brand portfolios: Generalizability theory perspective." Journal of Targeting, Measurement and Analysis for Marketing 20, no. 2 (June 2012): 109–16. http://dx.doi.org/10.1057/jt.2012.9.

46

Brennan, Robert L. "Performance Assessments from the Perspective of Generalizability Theory." Applied Psychological Measurement 24, no. 4 (December 2000): 339–53. http://dx.doi.org/10.1177/01466210022031796.

47

Wu, Yi-Fang, and Hueying Tzou. "A Multivariate Generalizability Theory Approach to Standard Setting." Applied Psychological Measurement 39, no. 7 (April 8, 2015): 507–24. http://dx.doi.org/10.1177/0146621615577972.

48

Wickel, Eric E., and Gregory J. Welk. "Applying Generalizability Theory to Estimate Habitual Activity Levels." Medicine & Science in Sports & Exercise 42, no. 8 (August 2010): 1528–34. http://dx.doi.org/10.1249/mss.0b013e3181d107c4.

49

Burns, Karyl J. "Classical reliability: Using generalizability theory to assess dependability." Research in Nursing & Health 21, no. 1 (February 1998): 83–90. http://dx.doi.org/10.1002/(sici)1098-240x(199802)21:1<83::aid-nur9>3.0.co;2-p.

50

Marcoulides, George A. "Maximizing Power in Generalizability Studies Under Budget Constraints." Journal of Educational Statistics 18, no. 2 (June 1993): 197–206. http://dx.doi.org/10.3102/10769986018002197.

Abstract:
Generalizability theory provides a framework for examining the dependability of behavioral measurements. When designing generalizability studies, two important statistical issues are generally considered: power and measurement error. Control over power and error of measurement can be obtained by manipulation of sample size and/or test reliability. In generalizability theory, the mean error variance is an estimate that takes into account both these statistical issues. When limited resources are available, determining an optimal measurement design is not a simple task. This article presents a methodology for minimizing mean error variance in generalizability studies when resource constraints are imposed.
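As a hypothetical sketch of the optimization problem this abstract describes, the following Python snippet enumerates facet sample sizes for a crossed persons x raters x occasions design and keeps the combination with the smallest relative error variance that stays within a fixed budget. The variance components, costs, and design are invented for the example; the article develops an analytical methodology rather than an enumeration.

```python
from itertools import product

var = {"pr": 0.30, "po": 0.20, "pro,e": 0.50}   # assumed variance components
cost = {"rater": 40.0, "occasion": 25.0}         # assumed cost per condition
budget = 300.0                                   # assumed total budget

def rel_error_variance(n_r: int, n_o: int) -> float:
    # Relative error variance of the mean over n_r raters and n_o occasions
    return var["pr"] / n_r + var["po"] / n_o + var["pro,e"] / (n_r * n_o)

best = None
for n_r, n_o in product(range(1, 11), repeat=2):
    if n_r * cost["rater"] + n_o * cost["occasion"] <= budget:
        ev = rel_error_variance(n_r, n_o)
        if best is None or ev < best[0]:
            best = (ev, n_r, n_o)

ev, n_r, n_o = best
print(f"raters={n_r}, occasions={n_o}, relative error variance={ev:.3f}")
```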
