Dissertations / Theses on the topic 'Educational tests and measurements – Validity – Belize'
Consult the top 50 dissertations / theses for your research on the topic 'Educational tests and measurements – Validity – Belize.'
Hinerman, Krystal M. "Construct Validation of the Social-Emotional Character Development Scale in Belize: Measurement Invariance Through Exploratory Structural Equation Modeling." Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc699875/.
Gao, Rui. "Construct validity of College Basic Academic Subject examination /." free to MU campus, to others for purchase, 2003. http://wwwlib.umi.com/cr/mo/fullcit?p3091926.
Ketterlin-Geller, Leanne Robyn. "Establishing a validity argument for universally designed assessments /." view abstract or download file of text, 2003. http://wwwlib.umi.com/cr/uoregon/fullcit?p3113012.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 121-126). Also available for download via the World Wide Web; free to University of Oregon users.
Ip, Tsang Chui-hing Betty. "The construct validity of the aptitude test for prevocational schools." Click to view the E-thesis via HKUTO, 1986. http://sunzi.lib.hku.hk/HKUTO/record/B3862770X.
Kaye, Gail Leslie. "Construct validity study of the Myers-Briggs type indicator." The Ohio State University, 1989. http://rave.ohiolink.edu/etdc/view?acc_num=osu1399891250.
Clay-Adkins, Sandra L. Thompson James Richard. "Reliability and validity of the Supports Intensity Scale." Normal, Ill. : Illinois State University, 2004. http://wwwlib.umi.com/cr/ilstu/fullcit?p3128272.
Title from title page screen, viewed Jan. 11, 2005. Dissertation Committee: James R. Thompson (chair), Barbara M. Fulk, Jeffrey H. Kahn, Debra L. Shelden, W. Paul Vogt. Includes bibliographical references (leaves 135-145) and abstract. Also available in print.
Thurber, Robin Schul. "Construct validity of curriculum-based mathematics measures /." view abstract or download file of text, 1999. http://wwwlib.umi.com/cr/uoregon/fullcit?p9957576.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 78-83). Also available for download via the World Wide Web; free to University of Oregon users. Address: http://wwwlib.umi.com/cr/uoregon/fullcit?p9957576.
Chamoy, Waritsa. "Evaluation of the Psychometric Quality and Validity of a Student Survey of Instruction in Bangkok University, Thailand." Thesis, University of Pittsburgh, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13819746.
The main purpose of this study was to conduct a validation analysis of student surveys of teaching effectiveness implemented at Bangkok University, Thailand. This study included three phases: survey development, a pilot study, and a full implementation study. Four sources of validity evidence were collected to support intended interpretations and uses of survey scores. To this end, this study evaluated the extent to which the content evidence supported the construct definition of the survey (RQ1), the relationships among survey items and survey components corresponded to the construct dimension (RQ2), the survey exhibited gender differential item functioning (RQ3), and student ratings were related to a similar measure of teaching quality and to student achievement (RQ4).
Overall, the student survey demonstrated good psychometric quality, and the intended purposes and uses of the survey were supported. Based on expert reviews, the dimensions and survey items were perceived as adequate in covering teaching quality, the survey items were perceived to properly assess the associated dimensions, and the response scales were perceived as suitable for what the survey was intended to measure. Exploratory factor analysis suggested that the construct of teaching effectiveness as defined in this survey may be unidimensional. Although the results did not support multidimensionality, the dimensions can still be used by individual instructors to evaluate their own teaching. Cronbach’s α coefficients were high and supported the internal consistency of the survey. There was no occurrence of gender DIF in this student survey. Therefore, the validity evidence of survey score interpretations was supported, since the meaning of survey categories/scales was shared across male and female students. Finally, the results based on relations to other variables showed a strong positive relationship between the student survey and another survey at Bangkok University that had been used to evaluate teaching effectiveness for a decade. This could indicate that the student survey was measuring a similar construct of teaching effectiveness.
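The internal-consistency evidence in this abstract rests on Cronbach's α. As a minimal sketch of how the coefficient is computed from an item-response matrix (the data below are simulated, not the study's):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]                          # number of survey items
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 200 students x 10 items.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 10)).astype(float)
print(f"alpha = {cronbach_alpha(responses):.3f}")
```

Coefficients around .70–.80 or above are conventionally read as supporting internal consistency, which is the standard the abstract's "high" α values invoke.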
Wong, Luke L. S. "Validity and Reliability Study of the Bridges 7-Stage Spiritual Growth Questionnaire (BSG-Q)." Thesis, Nyack College, Alliance Theological Seminary, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13425929.
This doctoral project was developed to evaluate the validity and reliability of a spiritual growth assessment tool that the author created for his ministry in Southeast Asia called the Bridge or BRIDGES. This tool, called the BRIDGES Spiritual Growth Questionnaire (BSG-Q), is helpful for church leaders who intend to implement The Bridge’s 7-Stage Discipleship Strategy in determining the spiritual stage of their church members. Fifty volunteers at the Bridge were recruited to complete the BSG-Q. To study the validity of the BSG-Q, the three basic and traditional components of validity (criterion-related validity, content validity, and construct validity) were applied. Nine small group leaders at the Bridge were recruited to help assess the criterion-related validity by completing a criterion assessment form. Five experts concerning the Bridge’s 7-Stage strategy were recruited to help assess the content validity by completing a content assessment form. Construct validity was assessed by referencing published authors. To study the reliability of the BSG-Q, the test-retest method and the split-halves method were applied. The accumulated data from all the questionnaires and tests and the analysis of the data confirmed the hypothesis of this project: “The BSG-Q is a valid and reliable tool in determining a person’s level or stage of spiritual growth within the 7-Stage strategy.” This project also enabled the author to make some critical discoveries in how to interpret the scores of BSG-Q participants, resulting in important recommendations for church leaders who intend to use this tool.
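Both reliability methods named here reduce to correlations between paired score sets. A sketch on simulated data (the item count and score model are assumptions, not the BSG-Q's), including the Spearman-Brown step that corrects a split-half correlation up to full test length:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
true_score = rng.normal(0, 1, 50)                          # 50 volunteers
items = true_score[:, None] + rng.normal(0, 1, (50, 20))   # 20 hypothetical items

# Split-halves: correlate odd-item and even-item totals, then step up
# with the Spearman-Brown prophecy formula for the full-length test.
odd, even = items[:, 0::2].sum(axis=1), items[:, 1::2].sum(axis=1)
r_half, _ = pearsonr(odd, even)
r_full = 2 * r_half / (1 + r_half)
print(f"split-half r = {r_half:.2f}, Spearman-Brown corrected = {r_full:.2f}")

# Test-retest: correlate total scores from two administrations.
retest = items.sum(axis=1) + rng.normal(0, 2, 50)
r_tt, _ = pearsonr(items.sum(axis=1), retest)
print(f"test-retest r = {r_tt:.2f}")
```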
Coleman, Susan Lee. "Estimating the reliability and validity of concept mapping as a tool to assess prior knowledge." Diss., This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-06062008-164956/.
Schmid, Dale Walter. "A validity study of the National Dance Education Organization's Dance Entry Level Teachers' Assessment (DELTA)." Thesis, University of Pennsylvania, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3721067.
Dance education is the only arts discipline without a national entry-level teacher readiness examination, which serves as a proxy for subject matter competency demanded by the Highly Qualified Teacher (HQT) requirement of the No Child Left Behind Act. Consequently, the absence of a qualifying examination has been a barrier to K-12 dance licensure in several states. Additionally, the lack of commonly held expectations for what entry-level dance teachers should know and be able to do has led to great disparity in teacher preparation programs nationwide. In response, the National Dance Education Organization engaged dance education experts from thirteen states to create the Dance Entry Level Teachers Examination (DELTA) as an indicator of Pedagogic Content Knowledge (PCK) deemed crucial for K-12 entry-level public school dance teachers by an expert group.
This dissertation chronicles the development of DELTA and focuses on the psychometric analysis of field-test results of two draft forms of DELTA, administered to approximately half of the nation’s graduates hailing from 19 of the 58 Colleges and Universities that conferred dance education degrees in School Year 2013-14. The objectives of this study are to ascertain how well the test items discriminated among examinees; to assure the items are free from inherent bias and sensitivity issues; and discern the psychometric validity of DELTA as a measure of teacher readiness in dance. The quantitative analysis of DELTA field tested items relies heavily on the tools of Item Response Theory, and more specifically on a subclass of the logistic model, the one-parameter logistic (Rasch) model and other related models from Classical Test Theory to measure PCK as a result of exposure to dance pedagogy in a codified teacher education program. Additionally, survey instruments were employed to gauge the level of consensus among university pre-service dance education program coordinators regarding the importance of and relative degree of current alignment to ten PCK Skills Clusters embedded within three Domains of Knowledge comprising the DELTA Conceptual Framework. Given the lack of cohesion among pre-service dance education programs, DELTA represents a first step toward reaching national consensus on crucial baseline PCK and skills for beginning dance teachers.
Curabay, Muhammet. "Meta-analysis of the predictive validity of Scholastic Aptitude Test (SAT) and American College Testing (ACT) scores for college GPA." Thesis, University of Denver, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10242126.
The college admission systems of the United States require the Scholastic Aptitude Test (SAT) and American College Testing (ACT) examinations. Although some resources suggest that SAT and ACT scores give some meaningful information about academic success, others disagree. The objective of this study was to determine whether there is significant predictive validity of SAT and ACT exams for college success. This study examined the effectiveness of SAT and ACT scores for predicting college students’ first-year GPA scores with a meta-analytic approach. Most of the studies were retrieved from Academic Search Complete and ERIC databases, published between 1990 and 2016. In total, 60 effect sizes were obtained from 48 studies. The average correlation between test score and college GPA was 0.36 (95% confidence interval: .32, .39) using a random effects model. There was a significant positive relationship between exam score and college success. Moderators examined were publication status and exam type, with no effect found for publication status. A significant effect of exam type was found, with a slightly higher average correlation for SAT compared to ACT score and college GPA. No publication bias was found in the study.
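The pooling described here (60 correlations combined under a random-effects model, yielding a mean r of .36 with a 95% CI of .32 to .39) is conventionally carried out on Fisher-z transformed correlations. A minimal sketch with invented study inputs, assuming a DerSimonian-Laird estimate of between-study variance (the abstract does not name the estimator):

```python
import numpy as np

def random_effects_mean_r(rs, ns):
    """DerSimonian-Laird random-effects pooling of correlations via Fisher z."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)              # Fisher z transform of each correlation
    v = 1.0 / (ns - 3)              # within-study variance of z
    w = 1.0 / v
    z_fixed = (w * z).sum() / w.sum()
    q = (w * (z - z_fixed) ** 2).sum()            # heterogeneity statistic Q
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)      # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    z_re = (w_re * z).sum() / w_re.sum()
    se = np.sqrt(1.0 / w_re.sum())
    return np.tanh(z_re), (np.tanh(z_re - 1.96 * se), np.tanh(z_re + 1.96 * se))

# Hypothetical study-level correlations and sample sizes.
r_bar, ci = random_effects_mean_r([0.30, 0.42, 0.35, 0.28], [250, 400, 120, 600])
print(f"pooled r = {r_bar:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```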
Blanchard, Janey. "The Predictive Validity of Norm-Referenced Assessments to the Minnesota Comprehensive Assessment on Native American Reservations." Thesis, Saint Mary's University of Minnesota, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=3745625.
This research study compared three commonly used norm-referenced assessments (Northwest Evaluation Assessment, STAR Enterprise, and AIMSweb) to the Minnesota Comprehensive Assessment. The basic question was which of the three assessments provided the best predictive validity for the Minnesota Comprehensive Assessment. Scores from three years were gathered to evaluate which of the three assessments had the strongest correlation with the MCA. The study was confined to 4th-grade scores from three different schools located on a Native American reservation. Each school used one of the three norm-referenced assessments, and each school administered the MCA in the spring; winter scores were used to evaluate whether a student was on track to reach proficiency on the MCA. Findings showed that two of the three assessments, NWEA-MAP and STAR Enterprise, had strong correlations. Of these, STAR Enterprise had the strongest correlation, with the caveat that it is a new assessment and needs more research. Findings from this study allow schools to use these two assessments with confidence that they provide quality scores.
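The comparison described reduces to correlating each screener's winter scores with spring MCA scores and ranking the coefficients. A sketch with simulated data; the assessment names come from the abstract, but every number below is invented (in the study each screener was used at a different school, so each correlation would be computed within its own school):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(8)
mca = rng.normal(350, 25, 60)                     # spring MCA scale scores
winter = {                                        # hypothetical winter screener scores
    "NWEA-MAP": mca * 0.8 + rng.normal(0, 12, 60),
    "STAR":     mca * 0.9 + rng.normal(0, 10, 60),
    "AIMSweb":  mca * 0.5 + rng.normal(0, 20, 60),
}
for name, scores in winter.items():
    r, p = pearsonr(scores, mca)
    print(f"{name:8s} r = {r:.2f} (p = {p:.3f})")
```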
Smith, Jean Marie. "Construct and criterion-related validity of the Draw a Person: a quantitative scoring system for normal, reading disabled, and developmentally handicapped children." The Ohio State University, 1987. http://rave.ohiolink.edu/etdc/view?acc_num=osu1392913586.
Gifford, Tierney A. "Predictive Validity of Curriculum-Based Reading Measures for High-Stakes Outcome Assessments with Secondary Students Identified as Struggling Readers." Thesis, State University of New York at Albany, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10241844.
Curriculum-based measurement (CBM) tools are used widely to assess students’ progress within different stages of the Response to Intervention (RTI) process. Despite this widespread use, little research has examined the efficacy of reading CBMs in predicting secondary student outcomes on high-stakes assessments. High-stakes assessments are being used to determine outcomes not just for students, but for teachers, administrators, and districts. More research is needed to determine whether reading CBMs are useful tools for populations of struggling secondary readers. The current study was a secondary analysis of existing data that examined the predictive validity of CBMs and high-stakes pre-assessments on end-of-year outcomes. The population included struggling seventh-grade readers who had not demonstrated proficiency on previous state tests and who attended urban schools representing low socio-economic status and high ethnic diversity. Results identified previous-year state tests and norm-referenced tests as significant predictors of end-of-year outcomes, both individually and in combination. Though the reading fluency CBMs accounted for some variance in the regression equation, the amount was negligible. Student ethnicity and group status (i.e., whether they received intervention) were not significant predictors of end-of-year outcomes. These results indicate that CBMs may not provide additional valuable information in the prediction of student outcomes for struggling secondary readers. This finding is important for educators to weigh against other concerns, such as ease of use and time constraints, as existing pre-assessments (i.e., state tests, norm-referenced screening tools) may provide enough information without the additional use of CBMs.
Petetit, Lynn Marie. "Construct validity of curriculum-based reading measures for intermediate-grade students /." view abstract or download file of text, 2000. http://wwwlib.umi.com/cr/uoregon/fullcit?p9963452.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 125-134). Also available for download via the World Wide Web; free to University of Oregon users. Address: http://wwwlib.umi.com/cr/uoregon/fullcit?p9963452.
Goins, David Matthew. "Population Cross-Validity Estimation and Adjustment for Direct Range Restriction: A Monte Carlo Investigation of Procedural Sequences to Achieve Optimal Cross-Validity." TopSCHOLAR®, 2010. http://digitalcommons.wku.edu/theses/165.
Moody, Ian Robin. "The validity and reliability of value-added and target-setting procedures with special reference to Key Stage 3." Thesis, n.p., 2003. http://ethos.bl.uk/.
Morgan, M. Sue. "Criterion validity of the Indiana Basic Competency Skills Test for third graders." Virtual Press, 1988. http://liblink.bsu.edu/uhtbin/catkey/546153.
Department of Educational Psychology
Swanson, Chad C. "Phonics curriculum-based measurement: An initial study of reliability and validity." Thesis, Alfred University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3619869.
Early literacy and reading skills are both important predictors of an individual's future success in school and employment settings (Moats, 1999). Moreover, poor reading performance in elementary school has been associated with future conduct problems and juvenile delinquency by age fifteen (Williams, 1994). Research supports the notion that scientifically based instruction provides all students with the best opportunity to prevent the future academic, behavioral, and vocational problems associated with poor reading skill acquisition. The current study investigated the reliability and validity of a curriculum-based measure developed by the author, named Phonics Curriculum-Based Measurement (P-CBM). Two hundred twenty-five first-grade students (117 males, 103 females) from two partnering school districts in rural western New York State were included in the study. The results indicated strong alternate-forms reliability, inter-rater reliability, and concurrent validity. Upon further validation, P-CBM could be helpful in making screening, progress monitoring, or instructional planning decisions, as well as providing pre-referral data to school psychologists who are conducting special education eligibility evaluations for a specific learning disability in reading.
Anderson, Craig Donavin. "Video portfolios : do they have validity as an assessment tool?" Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82679.
Espinosa, Juan Emilio. "Assessing the Factorial Validity, Measurement Invariance, and Latent Mean Differences of a Second-Order, Multidimensional Model of Academic and Social College Course Engagement: A Comparison Across Course Format, Ethnic Groups, and Economic Status." Thesis, University of California, Santa Barbara, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10248471.
The current study seeks to validate a second-order, multifaceted model of engagement that contains a behavioral, an emotional, and a cognitive subtype, as proposed by Fredericks, Blumenfeld, and Paris (2004), while also incorporating literature on student interactions. The second-order, 12-factor model proposed and tested for its validity partitioned engagement into the second-order constructs of academic and social engagement and examined each of the three engagement subtypes in relation to the interactions that students experience with their course material, with their classmates, and with their instructors/teaching assistants. Since the proposed model did not meet accepted standards of fit, the dataset was randomly split into two approximately equal halves and a follow-up exploratory factor analysis (EFA) was conducted on the first half of the dataset, which yielded a second-order, five-factor solution. The second-order academic engagement constructs that emerged from the EFA consisted of students’ behavioral, emotional, and cognitive engagement with their course material. In addition, two first-order factors emerged from the EFA, consisting of students’ emotional and cognitive engagement with their fellow students or classmates.
These constructs and relationships were consistent with the theory that drove the original proposed model, but differed slightly in their composition and relationship with one another. After establishing this empirical model through EFA procedures, the model was cross-validated on the second half of the randomly split dataset and examined for invariance across students enrolled in online courses and students enrolled in traditional, in-person college courses, as well as students from ethnically and economically diverse backgrounds. Latent mean comparisons revealed differences in levels of academic and social engagement between these three groups of students, suggesting that students enrolled in online courses and students from African-American and Latino/a ethnicities were slightly more academically engaged than their counterparts. However, students enrolled in online courses scored much lower than students enrolled in face-to-face courses on the social engagement measures, while students from African-American and Latino/a ethnic groups scored higher on the social engagement measures than did students from Asian and Caucasian ethnicities. Interestingly, no differences emerged between groups of students from lower and higher economic backgrounds.
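The split-sample EFA with cross-validation described here can be approximated in outline with scikit-learn's FactorAnalysis (the study itself used exploratory structural equation modeling software; this unrotated sketch on simulated data is only a simplified stand-in, and the item and factor counts are assumptions):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(800, 24))          # 24 hypothetical engagement items

# Randomly split the sample roughly in half, as the study describes.
X_efa, X_holdout = train_test_split(X, test_size=0.5, random_state=0)

fa = FactorAnalysis(n_components=5)     # five-factor solution, as in the EFA
fa.fit(X_efa)
loadings = fa.components_.T             # item-by-factor loading matrix
print(loadings.shape)                   # (24, 5)

# Cross-validate by scoring average log-likelihood on the held-out half.
print(f"held-out score = {fa.score(X_holdout):.3f}")
```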
Doran, Harold Cass. "Evaluating the consequential aspect of validity on the Arizona Instrument to Measure Standards." Diss., The University of Arizona, 2001. http://hdl.handle.net/10150/279924.
Ip, Tsang Chui-hing Betty, and 葉鈤翠卿. "The construct validity of the aptitude test for prevocational schools." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1986. http://hub.hku.hk/bib/B3862770X.
Staub, Michael J. "A study of the content validity of the Stanford Achievement Test in relation to the Christian school curriculum." Theological Research Exchange Network (TREN), 1988. http://www.tren.com.
Full textWatson, Jennifer Marie. "Examining the reliability and validity of the Inicadores Dinámicos del Éxito en la Lectura (IDEL) : a research study /." view abstract or download file of text, 2004. http://wwwlib.umi.com/cr/uoregon/fullcit?p3153799.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 148-155). Also available for download via the World Wide Web; free to University of Oregon users.
Moahi, Serara. "The validity of the Botswana Junior Certificate Mathematics Examination over time." Diss., The University of Arizona, 2004. http://hdl.handle.net/10150/280629.
Reed, Sandra J. "A Study of the Validity of a Modified Ordinal Scale of HIV Transmission Risk Among Seropositive Men who Have Sex with Men." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1338391437.
Kruse, Lance M. "Item-Reduction Methodologies for Complex Educational Assessments: A Comparative Methodological Exploration." University of Toledo / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1576175496892792.
Chelimo, Sheila. "Structural Validity of Competency Based Assessments: An Approach to Curriculum Evaluation." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1529504437498332.
Haigh, Charles Frederick. "Gender differences in SAT scores : analysis by race and socioeconomic level." Virtual Press, 1995. http://liblink.bsu.edu/uhtbin/catkey/941574.
Department of Educational Leadership
McGill, Ryan J. "Beyond g: Assessing the Incremental Validity of the Cattell-Horn-Carroll (CHC) Broad Ability Factors on the Woodcock-Johnson III Tests of Cognitive Abilities." Thesis, Chapman University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3621595.
Despite their widespread use, controversy remains about how best to interpret norm-referenced tests of cognitive ability. Because contemporary cognitive measures appraise performance at multiple levels (e.g., subtest, factor, full-scale), a multitude of inferences about individual functioning are possible. Because school psychologists primarily utilize intelligence tests for predicting achievement outcomes, the cognitive variables that provide the most optimal weighting for prediction are of greatest importance. This study examined the predictive validity of the Cattell-Horn-Carroll (CHC) factor structure from the Woodcock-Johnson III Tests of Cognitive Abilities (WJ-COG; Woodcock, McGrew, & Mather, 2001c). Specifically, the incremental achievement variance accounted for by the CHC broad factors, after controlling for the effects of the General Intellectual Ability (GIA) composite, was assessed across reading, mathematics, writing, and oral language variables from the Woodcock-Johnson III Tests of Achievement (WJ-ACH; Woodcock, McGrew, & Mather, 2001b). Hierarchical regression was used to assess predictive relationships between the cognitive-achievement variables on the Woodcock-Johnson III assessment battery (WJ-III; Woodcock, McGrew, & Mather, 2001a). This study utilized archived standard score data from individuals (N = 4,722) who participated in the original WJ-III standardization project. Results showed that the GIA accounted for the largest portion of achievement variance in all but one of the regression models that were assessed. Across the models, the GIA variance coefficients represented moderate to large effects, whereas the CHC factors accounted for non-significant incremental effects in most of the models. Nevertheless, the WJ-COG factor scores did account for meaningful portions of achievement variance in several situations: (a) in predicting oral expression scores; (b) in the presence of significant inter-factor variability; and (c) when the effects of Spearman's law of diminishing returns (SLODR) were accounted for in the reading, mathematics, and written language regression models. Additionally, the chi-square goodness-of-fit test was utilized to assess model invariance across several moderating variables. Results suggest that incremental validity is not a unitary construct and is not invariant across samples on the WJ-COG. Additionally, simultaneous interpretation of both the GIA and CHC factor scores on the WJ-COG may be useful within specific clinical contexts.
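The incremental-validity question in this abstract reduces to an R²-change test between nested regression models: achievement regressed on the GIA alone, then on the GIA plus the CHC broad factors. A sketch with simulated scores, not WJ-III data; the coefficients and factor count below are invented:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
gia = rng.normal(100, 15, n)                       # general ability composite
chc = rng.normal(0, 1, (n, 3))                     # three hypothetical broad factors
reading = 0.6 * gia + 2.0 * chc[:, 0] + rng.normal(0, 10, n)

# Step 1: GIA only. Step 2: GIA plus the CHC broad factors.
step1 = sm.OLS(reading, sm.add_constant(gia)).fit()
step2 = sm.OLS(reading, sm.add_constant(np.column_stack([gia, chc]))).fit()

delta_r2 = step2.rsquared - step1.rsquared         # incremental variance explained
f_val, p_val, df_diff = step2.compare_f_test(step1)  # F test of the R2 change
print(f"R2 step1 = {step1.rsquared:.3f}, delta R2 = {delta_r2:.3f}")
print(f"F = {f_val:.2f}, p = {p_val:.4f}")
```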
Bloomfield, Alison Elizabeth. "An Investigation of the Content and Concurrent Validity of the School-wide Evaluation Tool." Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/310450.
Ph.D.
The School-wide Evaluation Tool (SET) is a commonly used measure of the implementation fidelity of school-wide positive behavior interventions and supports (SWPBIS) programs. The current study examines the content and concurrent validity of the SET to establish whether an alternative approach to weighting and scoring the SET might provide a more accurate assessment of SWPBIS implementation fidelity. Twenty published experts in the field of SWPBIS completed online surveys to obtain ratings of the relative importance of each item on the SET to sustainable SWPBIS implementation. Using the experts' mean ratings, four novel SET scoring approaches were developed: unweighted, reweighted using mean ratings, unweighted dropping lowest quartile items, and reweighted dropping lowest quartile items. SET 2.1 data from 1,018 schools were used to compare the four novel and two established SET scoring methods and examine their concurrent validity with the Team Implementation Checklist 3.1 (TIC; across a subsample of 492 schools). Correlational data indicated that the two novel SET scoring methods with dropped items were both significantly stronger predictors of TIC scores than the established SET scoring methods. Continuous SET scoring methods have greater concurrent validity with the TIC overall score and greater sensitivity than the dichotomous SET 80/80 Criterion. Based on the equivalent concurrent validity of the unweighted SET with dropped items and the reweighted SET with dropped items compared to the TIC, this study recommends that the unweighted SET with dropped items be used by schools and researchers to obtain a more cohesive and prioritized set of SWPBIS elements than the existing or other SET scoring methods developed in this study.
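The scoring comparison described amounts to building unweighted, expert-reweighted, and dropped-item composites and correlating each with the criterion measure. A sketch with invented item scores and expert ratings (the sample and item counts echo the abstract; everything else is an assumption):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n_schools, n_items = 1018, 28
set_items = rng.uniform(0, 2, (n_schools, n_items))   # SET items scored 0-2
expert_wts = rng.uniform(1, 5, n_items)               # mean expert importance ratings
tic = set_items.mean(axis=1) + rng.normal(0, 0.2, n_schools)  # criterion (TIC-like)

unweighted = set_items.mean(axis=1)
reweighted = set_items @ expert_wts / expert_wts.sum()

# Drop the lowest-rated quartile of items, as in the two "dropped" methods.
keep = expert_wts > np.quantile(expert_wts, 0.25)
dropped = set_items[:, keep].mean(axis=1)

for name, score in [("unweighted", unweighted), ("reweighted", reweighted),
                    ("dropped", dropped)]:
    r, _ = pearsonr(score, tic)
    print(f"{name:10s} r with TIC = {r:.3f}")
```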
Temple University--Theses
McGraw, Kelly A. "Identifying valid measures of reading comprehension : comparing the validity of oral reading fluency, retell fluency, and maze procedures /." view abstract or download file of text, 2006. http://proquest.umi.com/pqdweb?did=1196411101&sid=1&Fmt=2&clientId=11238&RQT=309&VName=PQD.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 103-108). Also available for download via the World Wide Web; free to University of Oregon users.
Jacobsen, S. Suzanne. "Identifying children at risk : the predictive validity of kindergarten screening measures." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/31104.
Education, Faculty of
Educational and Counselling Psychology, and Special Education (ECPS), Department of
Graduate
Elkins, Sharon Patricia. "Continuing professional nursing education and the relationship of learner motivation, the nature of the change, the social system of the organizational climate, and the educational offering : a reliability study." Virtual Press, 1998. http://liblink.bsu.edu/uhtbin/catkey/1115730.
School of Nursing
Floyd, Nancy D. "Validity Evidence for the Use of Holland's Vocational Personality Types in College Student Populations." Thesis, University of South Carolina, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3597481.
Higher education in the United States is replete with inventories and instruments designed to help administrators identify students who are more likely to succeed in college and to tailor the higher education experience to foster this success. One area of research involves the Holland vocational personality type inventory (Holland, 1973, 1985, 1997), used to classify people into three-level personality types according to their work interests, behaviors, habits, and preferences. This inventory has received a great deal of attention as a potential tool for steering college students into their optimal majors and thereby streamlining their college careers. Smart, Feldman, and Ethington (2000) examined the Holland types as assessed through items present on the Cooperative Institutional Research Program (CIRP) Freshman Survey. Using both student and faculty data from a national sample, they argued that the Holland type can be generalized to students pursuing higher education through the academic department; departments are where students "work." This Holland/CIRP Freshman Survey inventory and the "factor structure" developed by Smart and associates were presented in the original work (2000) and a subsequent work sponsored by the National Symposium for Postsecondary Student Success (2006), but the evidence of the validity of their factors and analysis was never complete; no psychometric evaluation was done, and their argument rests weakly on others' assessment of the constructs (Pike, 1996).
This study sought to provide validity evidence for the Smart, Feldman, and Ethington (2000) estimation of the Holland vocational personality type provided to colleges and universities through the CIRP Freshman Survey. First, the model proposed by Smart and associates (2000) was examined through exploratory factor analysis to determine if the proposed factor structure could be reproduced with an independent single-institution sample of the same size used in the original research. Results showed that the factors identified by Smart et al. (2000) could not be replicated, with the possible exception of the dimension of Artistic orientation. Items on the CIRP Freshman Survey were then used in an attempt to build an independent alternative factor structure. Using a randomly split development sample, a factor structure was developed and validated with the remainder of the sample. Factor scores from the final structure were then used to classify students using cluster analysis, and the clusters were compared to students' academic majors in an attempt to provide an alternative Holland model. The clusters did not capture trends in choosing either a freshman or a graduating major, and so do not provide an alternative means of estimating the Holland vocational personality type.
Multiple arguments against the validity of the original Smart, Feldman and Ethington (2000) estimation of the Holland vocational personality type via the CIRP Freshman Survey with the exception of the Artistic orientation dimension are presented. More troubling are the questions raised by the lack of validity evidence, given that the authors suggest that these subscales can be used to optimize fit between students and academic departments—and that the information is used nationally at "face value." The information calls into question the use of such scales, even those which are nationally published and widely used, if validity evidence is not present. Discussion focuses on the institution's responsibility in establishing the usability of such forms to make advisement or other intervention decisions for individual students.
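The classification step described above (factor scores fed into a cluster analysis, then compared against academic majors) can be sketched as follows; the factor count, cluster count, and major labels are all hypothetical:

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
scores = rng.normal(size=(1200, 6))        # hypothetical factor scores per student
majors = rng.choice(["Arts", "Business", "Science"], size=1200)

# Cluster students on their factor-score profiles, then cross-tabulate
# cluster membership against academic major to look for alignment.
clusters = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)
print(pd.crosstab(clusters, majors))
```

A roughly uniform cross-tabulation, as with this independent simulated data, is the "clusters did not capture trends" outcome the study reports.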
Burns, Stephanie Tursic. "The Predictive Validity of Person Matching Methods in Interest Measurement." Kent State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=kent1327781557.
Stroupe, Heather. "An Evaluation of the Convergent Validity of Multi-Source Feedback with Situational Assessment of Leadership - Student Assessment (SALSA©)." TopSCHOLAR®, 2010. http://digitalcommons.wku.edu/theses/162.
Lundeen, Rebecca J. "Validity testing of instruments to measure variables affecting behavior change following continuing professional education in nursing." Virtual Press, 1997. http://liblink.bsu.edu/uhtbin/catkey/1048395.
School of Nursing
McGuffey, Amy R. "Validity and Utility of the Comprehensive Assessment of School Environment (CASE) Survey." University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1417510261.
Paris, Joseph. "Predicting Success: An Examination of the Predictive Validity of a Measure of Motivational-Developmental Dimensions in College Admissions." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/494981.
Ed.D.
Although many colleges and universities use a wide range of criteria to evaluate and select admissions applicants, much of the variance in college student success remains unexplained. Thus, success in college, as defined by academic performance and student retention, may be related to other variables or combinations of variables beyond those traditionally used in college admissions (high school grade point average and standardized test scores). The current study investigated the predictive validity of a measure of motivational-developmental dimensions as a predictor of the academic achievement and persistence of college students as measured by cumulative undergraduate grade point average and retention. These dimensions are based on social-cognitive (self-concept, self-set goals, causal attributions, and coping strategies) and developmental-constructivist (self-awareness and self-authorship) perspectives. Motivational-developmental constructs are under-explored in terms of the predictive potential derived from their use in evaluating admission applicants’ ability to succeed and persevere despite the academic and social challenges presented by postsecondary participation. Therefore, the current study aimed to generate new understandings to benefit the participating institution and other institutions of higher education that seek new methodologies for evaluating and selecting college admission applicants. This dissertation describes two studies conducted at a large, urban public university located in the Northeastern United States. Participants included 10,149 undergraduate students who enrolled as first-time freshmen for the Fall 2015 (Study 1) and Fall 2016 (Study 2) semesters. Prior to matriculation, participants applied for admission using one of two methods: standard admissions or test-optional admissions. Standard admission applicants submitted standardized test scores (e.g., SAT) whereas test-optional applicants responded to four short-answer essay questions, each of which measured a subset of the motivational-developmental dimensions examined in the current study. Trained readers evaluated the essays to produce a “test-optional essay rating score,” which served as the primary predictor variable in the current study. Quantitative analyses were conducted to investigate the predictive validity of the “test-optional essay rating score” and its relationship to cumulative undergraduate grade point average and retention, which served as the outcome variables in the current study. The results revealed statistically significant group differences between test-optional applicants and standard applicants. Test-optional admission applicants are more likely to be female, of lower socioeconomic status, and ethnic minorities as compared to standard admission applicants. Given these group differences, Pearson product-moment correlation coefficients were computed to determine whether the test-optional essay rating score differentially predicted success across racial and gender subgroups. There was inconclusive evidence regarding whether the test-optional essay rating score differentially predicts cumulative undergraduate grade point average and retention across student subgroups. 
The results revealed a weak correlation between the test-optional essay rating score and cumulative undergraduate grade point average (Study 1: r = .11, p < .01; Study 2: r = .07, p < .05) and retention (Study 1: r = .08, p < .05; Study 2: r = .10, p < .01), particularly in comparison to the relationship between these outcome variables and the criteria most commonly considered in college admissions (high school grade point average, SAT Verbal, SAT Quantitative, and SAT Writing). Despite these findings, the test-optional essay rating score contributed nominal value (R2 = .07) in predicting academic achievement and persistence beyond the explanation provided by traditional admissions criteria. Additionally, a ROC analysis determined that the test-optional essay rating score does not predict student retention in a way that is meaningfully different than chance and therefore is not an accurate binary classifier of retention. Further research should investigate the validity of other motivational-developmental dimensions and the fidelity of other methods for measuring them in an attempt to account for a greater proportion of variance in college student success.
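The ROC analysis mentioned above treats the essay rating as a score for classifying retained versus non-retained students; an AUC near .5 is the "no better than chance" result the study reports. A sketch with simulated ratings that are, by construction, independent of retention:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(5)
retained = rng.integers(0, 2, 1000)          # 1 = student retained next fall
essay_rating = rng.normal(3.0, 0.8, 1000)    # hypothetical rubric scores

auc = roc_auc_score(retained, essay_rating)
fpr, tpr, thresholds = roc_curve(retained, essay_rating)
print(f"AUC = {auc:.3f}")  # near 0.5 here: no better than chance
```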
Temple University--Theses
Slack, Patricia. "A Situational Assessment of Student Leadership: An Evaluation of Alternate Forms Reliability and Convergent Validity." TopSCHOLAR®, 2010. http://digitalcommons.wku.edu/theses/172.
Wallace-Pascoe, Dawn Marie. "Assessing the Validity of a Measure of the Culture of Evidence at Two-Year Colleges." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1373305560.
Full textWong, Kwong-keung. "The validity and reliability of Hong Kong Certificate of Education technical subjects examination with special reference to the project method of assessment." Hong Kong : University of Hong Kong, 1986. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18035152.
Soto Ramirez, Pamela. "Validity Evidence of Internal Structure and Subscores Use of the Portfolio in the Chilean Teachers’ Evaluation System." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu159316412299089.
Bruckner, Terri Ann. "Using an Argument-based Approach to Validity for Selected Tests of Spatial Ability in Allied Medical Professions Students." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1371562493.
Tucker, Justin. "An Evaluation of the Convergent Validity of Situational Assessment of Leadership-Student Assessment (SALSA©) with Multi-Source Feedback in MBA and Ed.D. in Educational Leadership Students." TopSCHOLAR®, 2011. http://digitalcommons.wku.edu/theses/1058.
Zhao, Jing. "Contextual Differential Item Functioning: Examining the Validity of Teaching Self-Efficacy Instruments Using Hierarchical Generalized Linear Modeling." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339551861.
Chow, Chi Ping. "An investigation of the validity of the computer assisted child interview (CACI) as a self-report measure of children's academic performance and school experience /." view abstract or download file of text, 2007. http://proquest.umi.com/pqdweb?did=1404336121&sid=3&Fmt=2&clientId=11238&RQT=309&VName=PQD.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 103-108). Also available for download via the World Wide Web; free to University of Oregon users.