
Dissertations / Theses on the topic 'Educational tests and measurements – Validity – Belize'



Consult the top 50 dissertations / theses for your research on the topic 'Educational tests and measurements – Validity – Belize.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Hinerman, Krystal M. "Construct Validation of the Social-Emotional Character Development Scale in Belize: Measurement Invariance Through Exploratory Structural Equation Modeling." Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc699875/.

Full text
Abstract:
Measures assessing social-emotional learning (SEL) and character development across a broad array of constructs have been developed but lack construct validity evidence. Determining the efficacy of educational interventions requires structurally valid measures which are generalizable across settings, gender, and time. Utilizing recent factor analytic methods, the present study extends the validity literature for SEL measures by investigating the structural validity and generalizability of the Social-Emotional and Character Development Scale (SECDS) with a large sample of children from schools in Belize (n = 1877, ages 8 to 13). The SECDS exhibited structural and generalizability evidence of construct validity when examined under exploratory structural equation modeling (ESEM). While a higher-order confirmatory factor structure with six secondary factors provided acceptable fit, the ESEM six-factor structure provided both substantive and methodological advantages. The ESEM structural model situates the SECDS within the larger body of SEL literature while also exhibiting generalizability evidence over both gender and time.
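As a hedged, illustrative aside (not drawn from the thesis itself): the exploratory factor step underlying ESEM can be approximated with an oblique-rotation EFA. The sketch below assumes the third-party factor_analyzer package and an invented item matrix; full ESEM with invariance tests over gender and time would normally be run in dedicated SEM software.

```python
# Minimal EFA sketch approximating the exploratory step of ESEM (illustrative only).
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # assumed third-party dependency

rng = np.random.default_rng(0)
# Hypothetical stand-in for SECDS item responses (1877 pupils x 28 items, ratings 1-4).
items = pd.DataFrame(rng.integers(1, 5, size=(1877, 28)),
                     columns=[f"item{i+1}" for i in range(28)])

fa = FactorAnalyzer(n_factors=6, rotation="oblimin")  # six factors, oblique rotation
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))          # pattern matrix: cross-loadings are retained, as in ESEM
print(fa.get_factor_variance())   # variance explained per factor
```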
APA, Harvard, Vancouver, ISO, and other styles
2

Gao, Rui. "Construct validity of College Basic Academic Subject examination /." free to MU campus, to others for purchase, 2003. http://wwwlib.umi.com/cr/mo/fullcit?p3091926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ketterlin-Geller, Leanne Robyn. "Establishing a validity argument for universally designed assessments /." view abstract or download file of text, 2003. http://wwwlib.umi.com/cr/uoregon/fullcit?p3113012.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2003.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 121-126). Also available for download via the World Wide Web; free to University of Oregon users.
APA, Harvard, Vancouver, ISO, and other styles
4

Ip, Tsang Chui-hing Betty. "The construct validity of the aptitude test for prevocational schools." E-thesis, The University of Hong Kong, via HKUTO, 1986. http://sunzi.lib.hku.hk/HKUTO/record/B3862770X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kaye, Gail Leslie. "Construct validity study of the Myers-Briggs type indicator." The Ohio State University, 1989. http://rave.ohiolink.edu/etdc/view?acc_num=osu1399891250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Clay-Adkins, Sandra L. Thompson James Richard. "Reliability and validity of the Supports Intensity Scale." Normal, Ill. : Illinois State University, 2004. http://wwwlib.umi.com/cr/ilstu/fullcit?p3128272.

Full text
Abstract:
Thesis (Ed. D.)--Illinois State University, 2004.
Title from title page screen, viewed Jan. 11, 2005. Dissertation Committee: James R. Thompson (chair), Barbara M. Fulk, Jeffrey H. Kahn, Debra L. Shelden, W. Paul Vogt. Includes bibliographical references (leaves 135-145) and abstract. Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
7

Thurber, Robin Schul. "Construct validity of curriculum-based mathematics measures /." view abstract or download file of text, 1999. http://wwwlib.umi.com/cr/uoregon/fullcit?p9957576.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 1999.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 78-83). Also available for download via the World Wide Web; free to University of Oregon users. Address: http://wwwlib.umi.com/cr/uoregon/fullcit?p9957576.
APA, Harvard, Vancouver, ISO, and other styles
8

Chamoy, Waritsa. "Evaluation of the Psychometric Quality and Validity of a Student Survey of Instruction in Bangkok University, Thailand." Thesis, University of Pittsburgh, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13819746.

Full text
Abstract:

The main purpose of this study was to conduct a validation analysis of student surveys of teaching effectiveness implemented at Bangkok University, Thailand. This study included three phases: survey development, a pilot study, and a full implementation study. Four sources of validity evidence were collected to support intended interpretations and uses of survey scores. To this end, this study evaluated the extent to which the content evidence supported the construct definition of the survey (RQ1), the relationships among survey items and survey components corresponded to the construct dimensions (RQ2), the survey exhibited gender differential item functioning (RQ3), and student ratings were related to a similar measure of teaching quality and to student achievement (RQ4).

Overall, the student survey demonstrated good psychometric quality and the intended purposes and uses of the survey were supported. Based on expert reviews, the dimensions and survey items were perceived as adequate in covering teaching quality, the survey items were perceived to properly assess the associated dimensions, and the response scales were perceived as suitable for what the survey was intended to measure. Exploratory factor analysis suggested that the construct of teaching effectiveness as defined in this survey may be unidimensional. Although the results did not support multidimensionality, the dimensions can still be used by individual instructors to evaluate their own teaching. Cronbach’s α coefficients were high and supported the internal consistency of the survey. There was no occurrence of gender DIF in this student survey. Therefore, the validity evidence of survey score interpretations was supported, since the meaning of survey categories/scales was shared across male and female students. Finally, the results based on relations to other variables showed a strong positive relationship between the student survey and another survey currently used at Bangkok University, which had been used to evaluate teaching effectiveness for a decade. This could indicate that the student survey was measuring a similar construct of teaching effectiveness.
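As a hedged illustration (hypothetical data, not the Bangkok University survey), Cronbach's α for a set of survey items can be computed directly from its definition:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 200 students x 10 survey items on a 1-5 scale
rng = np.random.default_rng(1)
signal = rng.normal(size=(200, 1))                        # shared "teaching quality" signal
ratings = np.clip(np.round(3 + signal + rng.normal(scale=0.7, size=(200, 10))), 1, 5)
print(round(cronbach_alpha(ratings), 3))
```

High values (conventionally above .80) are read as internal-consistency evidence of the kind reported above.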

APA, Harvard, Vancouver, ISO, and other styles
9

Wong, Luke L. S. "Validity and Reliability Study of the Bridges 7-Stage Spiritual Growth Questionnaire (BSG-Q)." Thesis, Nyack College, Alliance Theological Seminary, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13425929.

Full text
Abstract:

This doctoral project was developed to evaluate the validity and reliability of a spiritual growth assessment tool that the author created for his ministry in Southeast Asia, called the Bridge or BRIDGES. This tool, called the BRIDGES Spiritual Growth Questionnaire (BSG-Q), is helpful for church leaders who intend to implement The Bridge’s 7-Stage Discipleship Strategy in determining the spiritual stage of their church members. Fifty volunteers at the Bridge were recruited to complete the BSG-Q. To study the validity of the BSG-Q, the three basic and traditional components of validity (criterion-related validity, content validity, and construct validity) were applied. Nine small group leaders at the Bridge were recruited to help assess the criterion-related validity by completing a criterion assessment form. Five experts on the Bridge’s 7-Stage strategy were recruited to help assess the content validity by completing a content assessment form. Construct validity was assessed by referencing published authors. To study the reliability of the BSG-Q, the test-retest method and the split-halves method were applied. The accumulated data from all the questionnaires and tests, and the analysis of those data, confirmed the hypothesis of this project: “The BSG-Q is a valid and reliable tool in determining a person’s level or stage of spiritual growth within the 7-Stage strategy.” This project also enabled the author to make some critical discoveries in how to interpret the scores of BSG-Q participants, resulting in important recommendations for church leaders who intend to use this tool.

APA, Harvard, Vancouver, ISO, and other styles
10

Coleman, Susan Lee. "Estimating the reliability and validity of concept mapping as a tool to assess prior knowledge." Diss., This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-06062008-164956/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Schmid, Dale Walter. "A validity study of the National Dance Education Organization's Dance Entry Level Teachers' Assessment (DELTA)." Thesis, University of Pennsylvania, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3721067.

Full text
Abstract:

Dance education is the only arts discipline without a national entry-level teacher readiness examination, which serves as a proxy for subject matter competency demanded by the Highly Qualified Teacher (HQT) requirement of the No Child Left Behind Act. Consequently, the absence of a qualifying examination has been a barrier to K-12 dance licensure in several states. Additionally, the lack of commonly held expectations for what entry-level dance teachers should know and be able to do has led to great disparity in teacher preparation programs nationwide. In response, the National Dance Education Organization engaged dance education experts from thirteen states to create the Dance Entry Level Teachers Examination (DELTA) as an indicator of Pedagogic Content Knowledge (PCK) deemed crucial for K-12 entry-level public school dance teachers by an expert group.

This dissertation chronicles the development of DELTA and focuses on the psychometric analysis of field-test results of two draft forms of DELTA, administered to approximately half of the nation’s graduates hailing from 19 of the 58 Colleges and Universities that conferred dance education degrees in School Year 2013-14. The objectives of this study are to ascertain how well the test items discriminated among examinees; to assure the items are free from inherent bias and sensitivity issues; and to discern the psychometric validity of DELTA as a measure of teacher readiness in dance. The quantitative analysis of DELTA field-tested items relies heavily on the tools of Item Response Theory, and more specifically on a subclass of the logistic model, the one-parameter logistic (Rasch) model, and other related models from Classical Test Theory to measure PCK as a result of exposure to dance pedagogy in a codified teacher education program. Additionally, survey instruments were employed to gauge the level of consensus among university pre-service dance education program coordinators regarding the importance of and relative degree of current alignment to ten PCK Skills Clusters embedded within three Domains of Knowledge comprising the DELTA Conceptual Framework. Given the lack of cohesion among pre-service dance education programs, DELTA represents a first step toward reaching national consensus on crucial baseline PCK and skills for beginning dance teachers.
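For readers unfamiliar with the one-parameter logistic (Rasch) model named above, its core is a single equation for the probability of a correct response given examinee ability θ and item difficulty b. The sketch below is illustrative only; the item difficulties are hypothetical, not DELTA estimates.

```python
# Rasch (1PL) model: P(correct | theta, b) = exp(theta - b) / (1 + exp(theta - b))
import numpy as np

def rasch_prob(theta: float, b: np.ndarray) -> np.ndarray:
    """Probability of a correct response for an examinee of ability theta
    on items with difficulties b (both on the same logit scale)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

item_difficulties = np.array([-1.0, 0.0, 0.5, 1.5])   # hypothetical item difficulties
for theta in (-1.0, 0.0, 1.0):
    print(theta, rasch_prob(theta, item_difficulties).round(2))
```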

APA, Harvard, Vancouver, ISO, and other styles
12

Curabay, Muhammet. "Meta-analysis of the predictive validity of Scholastic Aptitude Test (SAT) and American College Testing (ACT) scores for college GPA." Thesis, University of Denver, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10242126.

Full text
Abstract:

The college admission systems of the United States require the Scholastic Aptitude Test (SAT) and American College Testing (ACT) examinations. Although some resources suggest that SAT and ACT scores give some meaningful information about academic success, others disagree. The objective of this study was to determine whether there is significant predictive validity of SAT and ACT exams for college success. This study examined the effectiveness of SAT and ACT scores for predicting college students’ first-year GPA scores with a meta-analytic approach. Most of the studies were retrieved from the Academic Search Complete and ERIC databases, published between 1990 and 2016. In total, 60 effect sizes were obtained from 48 studies. The average correlation between test score and college GPA was 0.36 (95% confidence interval: .32, .39) using a random effects model. There was a significant positive relationship between exam score and college success. Moderators examined were publication status and exam type, with no effect found for publication status. A significant effect of exam type was found, with a slightly higher average correlation for SAT compared to ACT score and college GPA. No publication bias was found in the study.
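As an illustrative sketch of the random-effects approach described above (hypothetical correlations and sample sizes, not the thesis data): study correlations are typically converted to Fisher z, pooled with DerSimonian-Laird weights, and back-transformed.

```python
# Random-effects meta-analysis of correlations (Fisher z + DerSimonian-Laird tau^2).
import numpy as np

r = np.array([0.30, 0.42, 0.35, 0.28, 0.40])   # hypothetical study correlations
n = np.array([500, 1200, 350, 800, 650])        # hypothetical sample sizes

z = np.arctanh(r)            # Fisher z transform
v = 1.0 / (n - 3)            # within-study sampling variance of z
w = 1.0 / v

z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(r) - 1)) / C)         # between-study variance estimate

w_star = 1.0 / (v + tau2)                       # random-effects weights
z_re = np.sum(w_star * z) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))

pooled_r = np.tanh(z_re)
ci = np.tanh([z_re - 1.96 * se, z_re + 1.96 * se])
print(round(pooled_r, 3), ci.round(3))          # pooled correlation and 95% CI
```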

APA, Harvard, Vancouver, ISO, and other styles
13

Blanchard, Janey. "The Predictive Validity of Norm-Referenced Assessments to the Minnesota Comprehensive Assessment on Native American Reservations." Thesis, Saint Mary's University of Minnesota, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=3745625.

Full text
Abstract:

This research study compared the three commonly used norm-referenced assessments (Northwest Evaluation Assessment, STAR Enterprise, and AIMSweb) to the Minnesota Comprehensive Assessment. The basic question was which one of the three assessments provided the best predictive validity scores for the Minnesota Comprehensive Assessment. Yearly scores from three years were gathered to evaluate which one of the three assessments had the strongest correlation with the MCA. The study was confined to using 4th grade scores from three different schools located on a Native American reservation. Each school used one of the three common standardized reference assessments, and each school administered the MCA in the spring; winter scores were used to evaluate whether a student was on track to reach proficiency on the MCA. Findings showed that two of the three assessments had strong correlation scores. NWEA-MAP and STAR Enterprise had the strongest correlations. Further findings showed that STAR Enterprise had the strongest correlation score, with a caveat that this is a new assessment and needs more research. Findings from this study allow schools to use two of the assessments with confidence that they are giving them quality scores.

APA, Harvard, Vancouver, ISO, and other styles
14

Smith, Jean Marie. "Construct and criterion-related validity of the Draw a Person: a quantitative scoring system for normal, reading disabled, and developmentally handicapped children." The Ohio State University, 1987. http://rave.ohiolink.edu/etdc/view?acc_num=osu1392913586.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Gifford, Tierney A. "Predictive Validity of Curriculum-Based Reading Measures for High-Stakes Outcome Assessments with Secondary Students Identified as Struggling Readers." Thesis, State University of New York at Albany, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10241844.

Full text
Abstract:

Curriculum-based measurement (CBM) tools are used widely to assess students’ progress within different stages of the Response to Intervention (RTI) process. Despite the widespread use, little research has identified the efficacy of reading CBMs in predicting secondary student outcomes on high-stakes assessments. High-stakes assessments are being utilized to determine outcomes not just for students, but for teachers, administrators, and districts. More research is needed to determine if reading CBMs are useful tools for populations of struggling secondary readers. The current study was a secondary analysis of existing data, which attempted to gain an understanding of this through examining the predictive validity of CBMs and high-stakes pre-assessments on end-of-year outcomes. The population included struggling, seventh grade readers who had not demonstrated proficiency on previous state tests and who attended urban schools representing low socio-economic status and high ethnic diversity. Results identified previous year state tests and norm-referenced tests as significant predictors of end-of-year outcomes, both individually and in combination. Though the reading fluency CBMs accounted for some variance in the regression equation, the amount was negligible. Student ethnicity and group status (i.e., whether they received intervention) were not significant predictors of end-of-year outcomes. These results indicate that CBMs may not provide additional valuable information in the prediction of student outcomes for secondary struggling readers. This finding is important for educators to weigh with other concerns, such as ease of use and time constraints, as existing pre-assessments (i.e., state tests, norm-referenced screening tools) may provide enough information without the additional use of CBMs.

APA, Harvard, Vancouver, ISO, and other styles
16

Petetit, Lynn Marie. "Construct validity of curriculum-based reading measures for intermediate-grade students /." view abstract or download file of text, 2000. http://wwwlib.umi.com/cr/uoregon/fullcit?p9963452.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2000.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 125-134). Also available for download via the World Wide Web; free to University of Oregon users. Address: http://wwwlib.umi.com/cr/uoregon/fullcit?p9963452.
APA, Harvard, Vancouver, ISO, and other styles
17

Goins, David Matthew. "Population Cross-Validity Estimation and Adjustment for Direct Range Restriction: A Monte Carlo Investigation of Procedural Sequences to Achieve Optimal Cross-Validity." TopSCHOLAR®, 2010. http://digitalcommons.wku.edu/theses/165.

Full text
Abstract:
The current study employs Monte Carlo analyses to evaluate the effectiveness of various statistical procedures for determining specific values of interest within a population of 1,000,000 cases. Specifically, the proper procedures for addressing the opposing effects of direct range restriction and validity overestimation were assessed through a comparison of multiple correlation coefficients derived using various sequences of procedures in randomly drawn samples. A comparison of the average bias associated with these methods indicated that correction for range restriction prior to the application of a validity overestimation adjustment formula yielded the best estimate of population parameters over a number of conditions. Additionally, similar methods were employed to assess the effectiveness of the standard ΔR² F-test for determining, based on characteristics of the derivation sample, the comparative superiority of either optimally or unit weighted composites in future samples; this procedure was largely ineffective under the conditions employed in the current study.
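To make the two opposing adjustments concrete, the sketch below applies (a) Thorndike's Case II correction for direct range restriction and (b) a simple Wherry-type shrinkage adjustment, in both orders. These are standard textbook stand-ins under hypothetical values; the thesis's exact adjustment formulas and Monte Carlo design are not reproduced here.

```python
# Ordering of range-restriction correction vs. shrinkage adjustment (illustrative only).
import math

def correct_range_restriction(r: float, u: float) -> float:
    """Thorndike Case II: r observed in a directly restricted sample,
    u = SD(unrestricted) / SD(restricted) on the selection variable."""
    return (r * u) / math.sqrt(1 + r * r * (u * u - 1))

def wherry_shrinkage(R2: float, n: int, k: int) -> float:
    """Simple Wherry-type adjustment of R^2 for capitalization on chance."""
    return 1 - (1 - R2) * (n - 1) / (n - k - 1)

r_obs, u, n, k = 0.30, 1.5, 120, 4               # hypothetical values

# Sequence 1: correct for range restriction first, then adjust for overestimation
r1 = correct_range_restriction(r_obs, u)
est1 = wherry_shrinkage(r1 ** 2, n, k)

# Sequence 2: adjust for overestimation first, then correct for range restriction
r2 = math.sqrt(max(0.0, wherry_shrinkage(r_obs ** 2, n, k)))
est2 = correct_range_restriction(r2, u) ** 2

print(round(est1, 3), round(est2, 3))            # the two sequences generally differ
```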
APA, Harvard, Vancouver, ISO, and other styles
18

Moody, Ian Robin. "The validity and reliability of value-added and target-setting procedures with special reference to Key Stage 3." Thesis, n.p, 2003. http://ethos.bl.uk/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Morgan, M. Sue. "Criterion validity of the Indiana Basic Competency Skills Test for third graders." Virtual Press, 1988. http://liblink.bsu.edu/uhtbin/catkey/546153.

Full text
Abstract:
The purpose of this study was to assess the criterion validity of the Indiana Basic Competency Skills Test (IBCST) by exploring the relationships between scores obtained on the IBCST and (a) student gender, (b) teacher-assigned letter grades, (c) scores obtained on the Otis-Lennon School Ability Test (OLSAT), and (d) scores obtained on the Stanford Achievement Test (SAT). The subjects were 300 third grade students enrolled in a small mid-Indiana school system. Data collected included gender, age, IBCST scores, OLSAT scores, SAT scores, and teacher-assigned letter grades in reading and mathematics. An alpha level of .01 was used in each statistical analysis. Gender differences were investigated by comparisons of the relative IBCST pass/fail (p/f) frequencies of males and females and boys' and girls' correct answers on the IBCST Reading and Math tests. Neither the chi square analysis of p/f frequencies nor the multivariate analysis of variance of the IBCST scores disclosed significant gender differences. Therefore, subsequent correlational analyses were done with pooled data. The relationship of teacher-assigned letter grades to IBCST p/f levels was studied with nonparametric and parametric statistical techniques. The 2x3 chi squares computed between IBCST performance and letter grades in reading and math were significant. The analyses of variance of the data yielded similar results. Teacher grades were related to IBCST performance. Multiple regression analyses were used to study the relationships between the IBCST and OLSAT performances. Significant multiple R-squares of approximately .30 were obtained in each analysis. Scholastic aptitude was related to IBCST performance. Canonical correlation analyses were used to explore the relationships between the reading and mathematics sections of the IBCST and SAT. Both analyses yielded a single significant, meaningful canonical correlation coefficient. The canonical variable loadings suggested that the IBCST Reading and Math composites, as well as the SAT composites, were expressions of general achievement. Thus, levels of achievement on the criterion-referenced IBCST and the norm-referenced SAT were related. The results of the study support the criterion validity of the IBCST with traditional methods of assessment as criteria.
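As a hypothetical illustration of the gender-by-pass/fail analysis described above, a chi-square test on a 2x2 contingency table can be run with SciPy; the counts below are invented, not the study's data.

```python
# Chi-square test of IBCST-style pass/fail frequencies by gender (hypothetical counts).
import numpy as np
from scipy.stats import chi2_contingency

#                 pass  fail
table = np.array([[130,  20],    # girls
                  [125,  25]])   # boys

chi2, p, dof, expected = chi2_contingency(table)
print(round(chi2, 2), round(p, 3), dof)   # a non-significant result at alpha = .01 supports pooling genders
```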
Department of Educational Psychology
APA, Harvard, Vancouver, ISO, and other styles
20

Swanson, Chad C. "Phonics curriculum-based measurement| An initial study of reliability and validity." Thesis, Alfred University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3619869.

Full text
Abstract:

Early literacy and reading skills are both important predictors of an individual's future success in school and employment settings (Moats, 1999). Moreover, poor reading performance in elementary school has been associated with future conduct problems and juvenile delinquency by age fifteen (Williams, 1994). Research supports the notion that scientifically-based instruction provides all students with the best opportunity to prevent future academic, behavioral, and vocational problems associated with poor reading skill acquisition. The current study investigated the reliability and validity of a curriculum-based measure developed by the current author, named Phonics Curriculum-Based Measurement (P-CBM). Two hundred twenty-five first-grade students (117 males, 103 females) from two partnering school districts in rural western New York State were included in the study. The results indicated strong alternate-forms reliability, inter-rater reliability, and concurrent validity. Upon further validation, P-CBM could be helpful in making screening, progress monitoring, or instructional planning decisions as well as providing pre-referral data to school psychologists who are conducting special education eligibility evaluations for a specific learning disability in reading.

APA, Harvard, Vancouver, ISO, and other styles
21

Anderson, Craig Donavin. "Video portfolios : do they have validity as an assessment tool?" Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82679.

Full text
Abstract:
This thesis presents a study of the validity of video portfolios as an assessment tool. For this study, first and second grade students were videotaped doing exercises four times in reading and four times in math over the course of a school year. After portfolios were collected, each set of four videos (either math or reading) was shown to teachers in random order. The teachers were asked to put the clips into the correct chronological and, therefore, developmental order. Interviews after the task investigated the criteria teachers used to order the clips, and found that they used task complexity, task performance, and demeanor of students as the primary factors. The teachers were able to correctly order the video clips to a high level of significance. This finding supports the hypothesis that video portfolios have validity as an assessment of progress in student achievement. Interview data also yielded relevant findings for the future use and implementation of video portfolios. Further studies should investigate the generalizability of these results, more closely examine the criteria teachers use to evaluate portfolios, and determine the validity of portfolios as an evaluation for other aspects of student learning.
APA, Harvard, Vancouver, ISO, and other styles
22

Espinosa, Juan Emilio. "Assessing the Factorial Validity, Measurement Invariance, and Latent Mean Differences of a Second-Order, Multidimensional Model of Academic and Social College Course Engagement| A Comparison Across Course Format, Ethnic Groups, and Economic Status." Thesis, University of California, Santa Barbara, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10248471.

Full text
Abstract:

The current study seeks to validate a second-order, multifaceted model of engagement that contains a behavioral, an emotional, and a cognitive subtype, as proposed by Fredericks, Blumenfeld, and Paris (2004), while also incorporating literature on student interactions. The second-order, 12-factor model proposed and tested for its validity partitioned engagement into the second-order constructs of academic and social engagement and examined each of the three engagement subtypes in relation to the interactions that students experience with their course material, with their classmates, and with their instructors/teaching assistants. Since the proposed model did not meet accepted standards of fit, the dataset was randomly split into two approximately equal halves and a follow-up exploratory factor analysis (EFA) was conducted on the first half of the dataset, which yielded a second-order, five-factor solution. The second-order academic engagement constructs that emerged from the EFA consisted of students’ behavioral, emotional, and cognitive engagement with their course material. In addition, two first-order factors emerged from the EFA, consisting of students’ emotional and cognitive engagement with their fellow students or classmates.

These constructs and relationships were consistent with the theory that drove the original proposed model, but differed slightly in their composition and relationship with one another. After establishing this empirical model through EFA procedures, the model was cross-validated on the second half of the randomly split dataset and examined for invariance across students enrolled in online courses and students enrolled in traditional, in-person college courses, as well as students from ethnically and economically diverse backgrounds. Latent mean comparisons revealed differences in levels of academic and social engagement between these three groups of students, suggesting that students enrolled in online courses and students from African-American and Latino/a ethnicities were slightly more academically engaged than their counterparts. However, students enrolled in online courses scored much lower than students enrolled in face-to-face courses on the social engagement measures, while students from African-American and Latino/a ethnic groups scored higher on the social engagement measures than did students from Asian and Caucasian ethnicities. Interestingly, no differences emerged between groups of students from lower and higher economic backgrounds.

APA, Harvard, Vancouver, ISO, and other styles
23

Doran, Harold Cass. "Evaluating the consequential aspect of validity on the Arizona Instrument to Measure Standards." Diss., The University of Arizona, 2001. http://hdl.handle.net/10150/279924.

Full text
Abstract:
High stakes tests have become a prominent tool in the systemic reform movement documenting the need for change and serving as the instrument of educational change. The primary purpose of this investigation was to determine whether the positive consequences associated with high stakes test use and interpretation in Arizona were shared among all grade levels, not just the tested grades. Additionally, a curriculum alignment variable was examined to observe its association with curricular and instructional change. The AIMS Questionnaire was developed using principal components with varimax rotation and the Multitrait-Multimethod Matrix (Campbell & Fiske, 1959). The questionnaire was administered to elementary teachers using a Posttest-Only with Nonequivalent Groups quasi-experimental research design (Cook & Campbell, 1979) where teachers in the nontested grades (1, 2, and 4) served as the comparison group. A two-factor analysis of variance was performed to examine the primary hypothesis, and the Pearson Product Moment correlation was computed to observe the strength of the relationship between the curriculum alignment variable and the curricular/instructional change variable. Results of the analysis suggested that positive consequences were not equally shared among all grade levels in the elementary school. Additionally, the curriculum alignment variable accounted for less than 2% of the variance in the change variable. It is recommended that policymakers use a randomized testing model and select a new grade level and a new form of the test each year. Further, educational leaders should use curriculum alignment strategies with caution as they may be viewed as top-down change strategies that constrain a teacher's creativity. Future researchers should consider the use of predicted pattern testing (Levin & Neumann, 1999) to statistically examine the system-wide effects of a high-stakes assessment designed to impact student learning.
APA, Harvard, Vancouver, ISO, and other styles
24

Ip, Tsang Chui-hing Betty, and 葉鈤翠卿. "The construct validity of the aptitude test for prevocational schools." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1986. http://hub.hku.hk/bib/B3862770X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Staub, Michael J. "A study of the content validity of the Stanford Achievement Test in relation to the Christian school curriculum." Theological Research Exchange Network (TREN), 1988. http://www.tren.com.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Watson, Jennifer Marie. "Examining the reliability and validity of the Indicadores Dinámicos del Éxito en la Lectura (IDEL) : a research study /." view abstract or download file of text, 2004. http://wwwlib.umi.com/cr/uoregon/fullcit?p3153799.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2004.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 148-155). Also available for download via the World Wide Web; free to University of Oregon users.
APA, Harvard, Vancouver, ISO, and other styles
27

Moahi, Serara. "The validity of the Botswana Junior Certificate Mathematics Examination over time." Diss., The University of Arizona, 2004. http://hdl.handle.net/10150/280629.

Full text
Abstract:
The conceptualization of validity has evolved over time, from the reign of criterion validity as a prominent type of validity through the phase of the traditional validity trinity concept that considered construct, content, and criterion validity as different kinds or types of validity. The current view among the measurement community is that there are no distinct forms of validity; instead validity is the extent to which the appropriateness of proposed uses and interpretations can be supported by various kinds of validity evidence. National examinations such as the Junior Certificate Examination in Botswana typically assess content and skills defined by national curricula. The extent to which items in examination papers are relevant to important content and cognitive skills espoused by national curricula is critical to the accuracy, appropriateness, and fairness of examinations results. This study investigated content, substantive, reliability, and internal structure validity evidence of the Junior Certificate Mathematics Examination over a period of three years, 2000, 2001, and 2002. Three alignment models were used to investigate content and cognitive skill validity evidence. A correlational analysis and exploratory factor analysis were used to detect the internal structure of the 2000, 2001, and 2002 Junior Certificate Mathematics examination papers and reliability of the objective tests was assessed through Coefficient alpha. The results showed that the sampling of mathematics content fluctuates from year to year, and does not always reflect content emphases in the Mathematics syllabus. Content of items in all three years' examination papers was judged as sufficiently aligned to content expressed in syllabus objectives the items were intended to measure using a liberal alignment criterion. The results of the study also indicated that the 2000, 2001, and 2002 Paper 1 component of the Mathematics examinations were sufficiently reliable albeit minimally so. Results of the exploratory factor analysis indicated that the Paper 1 component of the Mathematics examination assesses a possibly multidimensional construct. The findings of this study highlight the need for more comprehensive and systemic validity studies that would continue to generate information concerning the validity of examinations in Botswana.
APA, Harvard, Vancouver, ISO, and other styles
28

Reed, Sandra J. "A Study of the Validity of a Modified Ordinal Scale of HIV Transmission Risk Among Seropositive Men who Have Sex with Men." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1338391437.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Kruse, Lance M. "Item-Reduction Methodologies for Complex Educational Assessments: A Comparative Methodological Exploration." University of Toledo / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1576175496892792.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Chelimo, Sheila. "Structural Validity of Competency Based Assessments: An Approach to Curriculum Evaluation." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1529504437498332.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Haigh, Charles Frederick. "Gender differences in SAT scores : analysis by race and socioeconomic level." Virtual Press, 1995. http://liblink.bsu.edu/uhtbin/catkey/941574.

Full text
Abstract:
Gender differences on Scholastic Aptitude Test (SAT) scores were analyzed by racial and socioeconomic groupings. Differences in SAT-Math scores, in SAT-Verbal scores, and in the difference between SAT-Math and SAT-Verbal scores were studied using four racial groupings (African American, Asian American, Caucasian American, and Hispanic American) and two socioeconomic groupings (average-to-high income and average-to-low income) of students. All differences were tested at the .05 level. Socioeconomic status was determined by using federal guidelines for free and reduced school lunches. The population of the study consisted of 7625 students (3962 females and 3663 males) from two school districts. School District A provided the SAT-M and SAT-V scores of 767 African American, 111 Asian American, 5202 Caucasian American, and 101 Hispanic American students. School District B provided the SAT-M and SAT-V scores of 139 African American, 179 Asian American, and 1126 Caucasian American students. Males, as a group, were found to be significantly higher than females in SAT-M scores and in the difference between SAT-M and SAT-V scores. Asian Americans and Caucasian Americans were found to score significantly higher than both African Americans and Hispanic Americans in SAT-M and SAT-V scores. Asian Americans were found to score significantly higher than all other racial groups in the difference between SAT-M and SAT-V scores. Hispanic Americans were found to score significantly lower than Asian Americans and Caucasian Americans and significantly higher than African Americans in SAT-M and SAT-V scores. African Americans were found to score significantly lower than all other racial groups in SAT-M and SAT-V scores. A significant two-way interaction was found for gender and race in SAT-M scores, in SAT-V scores, and in the difference between SAT-M and SAT-V scores. Gender differences in SAT scores varied significantly between each racial grouping. Average-to-high socioeconomic groups were found to have significantly higher scores than average-to-low socioeconomic groups in both SAT-M and SAT-V scores. These differences occurred regardless of gender and race. Significant linear differences were also found to occur in the difference between SAT-M and SAT-V scores over a seven-year period.
Department of Educational Leadership
APA, Harvard, Vancouver, ISO, and other styles
32

McGill, Ryan J. "Beyond g| Assessing the Incremental Validity of the Cattell-Horn-Carroll (CHC) Broad Ability Factors on the Woodcock-Johnson III Tests of Cognitive Abilities." Thesis, Chapman University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3621595.

Full text
Abstract:

Despite their widespread use, controversy remains about how to best interpret norm-referenced tests of cognitive ability. Due to the fact that contemporary cognitive measures appraise performance at multiple levels (e.g., subtest, factor, full-scale), a multitude of inferences about individual functioning are possible. Because school psychologists primarily utilize intelligence tests for predicting achievement outcomes, the cognitive variables that provide the most optimal weighting for prediction are of greatest importance. This study examined the predictive validity of the Cattell-Horn-Carroll (CHC) factor structure from the Woodcock-Johnson III Tests of Cognitive Abilities (WJ-COG; Woodcock, McGrew, & Mather, 2001c). Specifically, the incremental achievement variance accounted for by the CHC broad factors, after controlling for the effects of the General Intellectual Ability (GIA) composite, was assessed across reading, mathematics, writing, and oral language variables from the Woodcock-Johnson III Tests of Achievement (WJ-ACH; Woodcock, McGrew, & Mather, 2001b). Hierarchical regression was used to assess predictive relationships between the cognitive-achievement variables on the Woodcock-Johnson III assessment battery (WJ-III; Woodcock, McGrew, & Mather, 2001a). This study utilized archived standard score data from individuals (N = 4,722) who participated in the original WJ-III standardization project. Results showed that the GIA accounted for the largest portions of achievement variance for all but one of the regression models that were assessed. Across the models, the GIA variance coefficients represented moderate to large effects whereas the CHC factors accounted for non-significant incremental effects in most of the models. Nevertheless, the WJ-COG factor scores did account for meaningful portions of achievement variance in several situations: (a) in predicting oral expression scores; (b) in the presence of significant inter-factor variability; and (c) when the effects of Spearman's law of diminishing returns (SLODR) were accounted for in reading, mathematics, and written language regression models. Additionally, the chi-square goodness of fit test was utilized to assess model invariance across several moderating variables. Results suggest that incremental validity is not a unitary construct and is not invariant across samples on the WJ-COG. Additionally, simultaneous interpretation of both the GIA and CHC factor scores on the WJ-COG may be useful within specific clinical contexts.
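As a hedged sketch (simulated data, not the WJ-III standardization sample), the incremental validity of broad factor scores over a general score can be expressed as the change in R² between nested OLS models.

```python
# Incremental validity: Delta R^2 of CHC-like factor scores over a general (GIA-like) score.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
gia = rng.normal(size=n)                                     # hypothetical general ability score
factors = 0.6 * gia[:, None] + rng.normal(size=(n, 3))       # hypothetical broad factor scores
reading = 0.5 * gia + 0.15 * factors[:, 0] + rng.normal(size=n)  # hypothetical achievement outcome

X_base = sm.add_constant(gia)                                # Step 1: general score only
X_full = sm.add_constant(np.column_stack([gia, factors]))    # Step 2: general score plus factors

r2_base = sm.OLS(reading, X_base).fit().rsquared
r2_full = sm.OLS(reading, X_full).fit().rsquared
print(round(r2_base, 3), round(r2_full, 3), round(r2_full - r2_base, 3))  # Delta R^2
```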

APA, Harvard, Vancouver, ISO, and other styles
33

Bloomfield, Alison Elizabeth. "An Investigation of the Content and Concurrent Validity of the School-wide Evaluation Tool." Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/310450.

Full text
Abstract:
School Psychology
Ph.D.
The School-wide Evaluation Tool (SET) is a commonly used measure of the implementation fidelity of school-wide positive behavior interventions and supports (SWPBIS) programs. The current study examines the content and concurrent validity of the SET to establish whether an alternative approach to weighting and scoring the SET might provide a more accurate assessment of SWPBIS implementation fidelity. Twenty published experts in the field of SWPBIS completed online surveys to obtain ratings of the relative importance of each item on the SET to sustainable SWPBIS implementation. Using the experts' mean ratings, four novel SET scoring approaches were developed: unweighted, reweighted using mean ratings, unweighted dropping lowest quartile items, and reweighted dropping lowest quartile items. SET 2.1 data from 1,018 schools were used to compare the four novel and two established SET scoring methods and examine their concurrent validity with the Team Implementation Checklist 3.1 (TIC; across a subsample of 492 schools). Correlational data indicated that the two novel SET scoring methods with dropped items were both significantly stronger predictors of TIC scores than the established SET scoring methods. Continuous SET scoring methods have greater concurrent validity with the TIC overall score and greater sensitivity than the dichotomous SET 80/80 Criterion. Based on the equivalent concurrent validity of the unweighted SET with dropped items and the reweighted SET with dropped items compared to the TIC, this study recommends that the unweighted SET with dropped items be used by schools and researchers to obtain a more cohesive and prioritized set of SWPBIS elements than the existing or other SET scoring methods developed in this study.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
34

McGraw, Kelly A. "Identifying valid measures of reading comprehension : comparing the validity of oral reading fluency, retell fluency, and maze procedures /." view abstract or download file of text, 2006. http://proquest.umi.com/pqdweb?did=1196411101&sid=1&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2006.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 103-108). Also available for download via the World Wide Web; free to University of Oregon users.
APA, Harvard, Vancouver, ISO, and other styles
35

Jacobsen, S. Suzanne. "Identifying children at risk : the predictive validity of kindergarten screening measures." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/31104.

Full text
Abstract:
The early identification of children who are "at risk" of experiencing learning problems is of interest to educators and policymakers. Conflicting evidence exists regarding the efficacy of screening measures for identifying children "at risk". The rationale for screening programs is that early identification of problems allows for treatment which may eliminate more severe problems from developing. If a student is identified as "at risk", school personnel may intervene with remedial programs. Subsequently, if the student succeeds, the earlier prediction is no longer valid. The identification of "at risk" would appear inaccurate because the intervention was successful in improving skills. Researchers often measure the prediction of "at risk" with a correlation coefficient. To the extent that the intervention is successful, the correlation of the identification of "at risk" with later measures of achievement is lowered. One of the problems with research on early prediction has been failure to control for the effects of the interventions which were implemented as a consequence of screening. An evaluation of "at risk" prediction is important because results of screening procedures are used to make decisions about retentions and the allocation of special services. The purpose of this study is to investigate the relationship between kindergarten screening measures and grade three achievement for two entire cohorts enrolled in 30 schools in one school district. The analysis employs a two-level hierarchical linear regression model to estimate the average within-school relationship between kindergarten screening measures and grade three achievement in basic skills, and determine whether this relationship varies significantly across schools. The model allows for the estimation of the relationship with control for individual pupil characteristics such as age, gender and physical problems. The study examines the extent to which the relationship between kindergarten screening and grade three achievement is mediated by children receiving learning assistance or attending extended (4-year) primary schooling. The study also examines differences among schools in the kindergarten screen/achievement relationships and the achievement of "at risk" pupils by including school characteristics in the analysis. The results of this study indicate positive relationships between kindergarten screening measures and achievement outcomes, even after controlling for age, gender and physical conditions. The kindergarten screen/achievement relationship did not vary among schools. The study failed to demonstrate that controlling for interventions would improve the kindergarten screen/achievement relationship; in fact the effects were in the opposite direction. Levels of adjusted achievement of pupils who obtained scores at the cut-off point for risk status varied significantly among schools. The "at risk" pupils performed better on all four achievement measures in schools with high school mean-ability than similar pupils in schools with low school mean-ability. These results show that progress in the study of the predictive validity of screening measures can be made through the use of hierarchical regression techniques. Researchers need to give consideration to the effects of educational interventions and the contextual effects of schools.
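A minimal sketch of a two-level model of this kind (pupils nested in schools) with random intercepts can be fit with statsmodels; the data frame, variable names, and effect sizes below are hypothetical, not the district data analyzed in the thesis.

```python
# Two-level (pupils-within-schools) random-intercept regression, illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_schools, n_per = 30, 50
school = np.repeat(np.arange(n_schools), n_per)
school_effect = rng.normal(scale=0.4, size=n_schools)[school]   # between-school variation
screen = rng.normal(size=n_schools * n_per)                      # kindergarten screening score
achieve = 0.5 * screen + school_effect + rng.normal(size=n_schools * n_per)

df = pd.DataFrame({"achieve": achieve, "screen": screen, "school": school})
model = smf.mixedlm("achieve ~ screen", df, groups=df["school"])  # random intercept per school
result = model.fit()
print(result.summary())   # fixed effect of the screen plus school-level variance component
```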
Education, Faculty of
Educational and Counselling Psychology, and Special Education (ECPS), Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
36

Elkins, Sharon Patricia. "Continuing professional nursing education and the relationship of learner motivation, the nature of the change, the social system of the organizational climate, and the educational offering : a reliability study." Virtual Press, 1998. http://liblink.bsu.edu/uhtbin/catkey/1115730.

Full text
Abstract:
Dr. Ronald Cervero (1985) identified learner motivation, the nature of the change, the social system of the organizational climate, and the educational program as factors affecting the application of learning to professional practice. A repeated measures research design was used to measure stability over time of instruments developed to measure variables in Cervero's model. Participants (N=27), graduate students, completed the instruments "New Ideas and You," which measures the learner's motivation to change, "The Nature of Change," which measures the learner's perception of the proposed change, and "Organizational Climate of the Social System," which measures the learner's perception of the social system's effect on the implementation of change. Staff nurses (N=27) completed the instrument "Continuing Education Offering Evaluation," which measures the learner's perception of the educational offering. Participants then completed the instruments again in three weeks. Procedures for the protection of human subjects were followed. The test-retest reliability coefficients were: "New Ideas and You," r=.72, p=.01; "The Nature of Change," r=.84, p=.01; "Organizational Climate of the Social System," r=.83, p=.01; "Continuing Education Offering Evaluation," r=.91, p=.01. The significance of this study was the initial establishment of stability over time of instruments developed to measure specific factors that affect the application of newly gained knowledge to nursing practice. Establishing reliability coefficients of instruments to measure the variables in Cervero's model is a step forward in the investigation of the larger question, "Does continuing education change practice?"
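Test-retest stability coefficients like those reported above are Pearson correlations between two administrations of the same instrument; a minimal sketch with invented scores:

```python
# Test-retest reliability as the Pearson correlation between two administrations.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
time1 = rng.normal(50, 10, size=27)                    # hypothetical first administration
time2 = 0.8 * time1 + rng.normal(10, 5, size=27)       # hypothetical retest three weeks later

r, p = pearsonr(time1, time2)
print(round(r, 2), round(p, 3))
```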
School of Nursing
APA, Harvard, Vancouver, ISO, and other styles
37

Floyd, Nancy D. "Validity Evidence for the Use of Holland's Vocational Personality Types in College Student Populations." Thesis, University of South Carolina, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3597481.

Full text
Abstract:

Higher education in the United States is replete with inventories and instruments designed to help administrators to identify students who are more likely to succeed in college and to tailor the higher education experience to foster this success. One area of research involves the Holland vocational personality type (Holland, 1973, 1985, 1997) inventory, used to classify people into three-level personality types according to their work interests, behaviors, habits and preferences. This inventory has received a great deal of attention as a potential tool for steering college students into their optimal majors and thereby streamlining their college careers. Smart, Feldman and Ethington (2000) examined the Holland types as assessed through items present on the Cooperative Institutional Research Program (CIRP) Freshman Survey. Using both student and faculty data from a national sample, they argued that the Holland type can be generalized to students pursuing higher education through the academic department; departments are where students "work." This Holland/CIRP Freshman Survey inventory and the "factor structure" developed by Smart and associates were presented in the original work (2000) and a subsequent work sponsored by the National Symposium for Postsecondary Student Success (2006), but the evidence of the validity of their factors and analysis was never complete; no psychometric evaluation was done and their argument rests weakly on others' assessment of the constructs (Pike, 1996).

This study sought to provide validity evidence of the Smart, Feldman and Ethington (2000) estimation of the Holland vocational personality type provided to colleges and universities through the CIRP Freshman Survey. First, the model proposed by Smart and associates (2000) was examined through exploratory factor analysis to determine if the proposed factor structure could be reproduced with an independent single-institution sample of the same size used in the original research. Results showed that the factors identified by Smart et al. (2000) could not be replicated, with the possible exception of the dimension of Artistic orientation. Next, items on the CIRP Freshman Survey were used to attempt to make an independent alternative factor structure. Using a randomly split development sample, a factor structure was developed and validated with the remainder of the sample. Factor scores from the final structure were then used to classify students using cluster analysis, and the clusters were compared to their academic majors in an attempt to provide an alternative Holland model. The clusters did not capture trends in choosing either a freshman or a graduating major, and so do not provide a means of alternative estimation for the Holland vocational personality type.

Multiple arguments against the validity of the original Smart, Feldman and Ethington (2000) estimation of the Holland vocational personality type via the CIRP Freshman Survey with the exception of the Artistic orientation dimension are presented. More troubling are the questions raised by the lack of validity evidence, given that the authors suggest that these subscales can be used to optimize fit between students and academic departments—and that the information is used nationally at "face value." The information calls into question the use of such scales, even those which are nationally published and widely used, if validity evidence is not present. Discussion focuses on the institution's responsibility in establishing the usability of such forms to make advisement or other intervention decisions for individual students.
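The clustering step described above can be sketched as k-means on factor scores cross-tabulated against declared major. Everything below (the package choice, toy data, and six clusters) is a hypothetical illustration rather than the author's procedure.

```python
# K-means on factor scores, compared against declared major (illustrative only).
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
factor_scores = rng.normal(size=(600, 5))              # hypothetical survey factor scores
majors = rng.choice(["Arts", "Business", "Science"], size=600)

clusters = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(factor_scores)
print(pd.crosstab(clusters, majors))                   # do cluster memberships track choice of major?
```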

APA, Harvard, Vancouver, ISO, and other styles
38

Burns, Stephanie Tursic. "The Predictive Validity of Person Matching Methods in Interest Measurement." Kent State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=kent1327781557.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Stroupe, Heather. "An Evaluation of the Convergent Validity of Multi-Source Feedback with Situational Assessment of Leadership - Student Assessment (SALSA©)." TopSCHOLAR®, 2010. http://digitalcommons.wku.edu/theses/162.

Full text
Abstract:
The current study assessed the convergent validity of the Situational Assessment of Leadership – Student Assessment (SALSA©), a situational judgment test (SJT), with multi-source ratings. The SALSA© was administered to ROTC cadets via Blackboard; multi-source ratings, which paralleled the leadership dimensions of the SALSA©, were administered via paper. Each cadet completed the SALSA© and was rated by 10 peers, his/herself, and at least one cadre (superior). SALSA© scores were not correlated with any of the corresponding dimensions on multi-source ratings, with one exception. Cadre ratings of Consideration/Team Skills were positively correlated with SALSA© scores on the same dimension. This finding suggests that the multi-source ratings and the SALSA© are not measuring the same leadership construct. Self-ratings were significantly higher than peer or cadre ratings. Senior ROTC cadets scored significantly higher on SALSA© than did Junior ROTC cadets. Future research should focus on differences between autocratic styles of leadership and democratic styles of leadership and whether different SJTs are needed to measure each style.
APA, Harvard, Vancouver, ISO, and other styles
40

Lundeen, Rebecca J. "Validity testing of instruments to measure variables affecting behavior change following continuing professional education in nursing." Virtual Press, 1997. http://liblink.bsu.edu/uhtbin/catkey/1048395.

Full text
Abstract:
Nurse educators are faced with the issues of cost containment and documenting the results of continuing professional education (CPE). The results of successful CPE are behavior changes observed in the nursing staff upon returning to the work environment. Continuing professional education requires valid evaluation instruments to determine its effectiveness, quality, and documentation of behavior changes. The purpose of this study was to establish the validity of four instruments measuring variables of behavior change in nurses after attendance at a CPE program. Cervero's (1985) evaluation model applied to CPE and behavior change was used to guide the study. Data was collected from three different convenience samples and merged for a total of 114 subjects. The four instruments that participants were asked to complete at the CPE programs were: (a) "New Ideas and You" (Brigham et al., 1995); (b) "Social System of the Organization" (Ryan et al., 1995); (c) "CPE Program and Change" (Ryan et al., 1995); and (d) "The Continuing Professional Education Offering" (Elkins et al., 1995). Findings in this study were revealed through factor analysis. "New Ideas and You" (Brigham et al., 1995) revealed two factors. "Social System of the Organization" (Ryan et al., 1995) resulted in a three-factor solution. "CPE Program and Change" (Ryan et al., 1995) resulted in a three-factor solution, and "Continuing Professional Education Offering" (Elkins et al., 1995) resulted in a three-factor solution. Conclusions from this study were that the four instruments have some degree of validity and reliability. The highest obtained factor scores confirmed the concepts identified as subscales in the four instruments. Nurse educators need a valid and reliable method of evaluating CPE to assess the effectiveness and extent of behavior changes in nurses after attendance at workshops, seminars, and other CPE programs. These behavior changes are a result of an increased knowledge base, with an ultimate outcome to improve the quality of patient care. This end result has a positive impact on the nursing profession, nursing education, and health care.
School of Nursing
APA, Harvard, Vancouver, ISO, and other styles
41

McGuffey, Amy R. "Validity and Utility of the Comprehensive Assessment of School Environment (CASE) Survey." University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1417510261.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Paris, Joseph. "Predicting Success: An Examination of the Predictive Validity of a Measure of Motivational-Developmental Dimensions in College Admissions." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/494981.

Full text
Abstract:
Educational Leadership
Ed.D.
Although many colleges and universities use a wide range of criteria to evaluate and select admissions applicants, much of the variance in college student success remains unexplained. Thus, success in college, as defined by academic performance and student retention, may be related to other variables or combinations of variables beyond those traditionally used in college admissions (high school grade point average and standardized test scores). The current study investigated the predictive validity of a measure of motivational-developmental dimensions as a predictor of the academic achievement and persistence of college students as measured by cumulative undergraduate grade point average and retention. These dimensions are based on social-cognitive (self-concept, self-set goals, causal attributions, and coping strategies) and developmental-constructivist (self-awareness and self-authorship) perspectives. Motivational-developmental constructs are under-explored in terms of the predictive potential derived from their use in evaluating admission applicants’ ability to succeed and persevere despite the academic and social challenges presented by postsecondary participation. Therefore, the current study aimed to generate new understandings to benefit the participating institution and other institutions of higher education that seek new methodologies for evaluating and selecting college admission applicants. This dissertation describes two studies conducted at a large, urban public university located in the Northeastern United States. Participants included 10,149 undergraduate students who enrolled as first-time freshmen for the Fall 2015 (Study 1) and Fall 2016 (Study 2) semesters. Prior to matriculation, participants applied for admission using one of two methods: standard admissions or test-optional admissions. Standard admission applicants submitted standardized test scores (e.g., SAT) whereas test-optional applicants responded to four short-answer essay questions, each of which measured a subset of the motivational-developmental dimensions examined in the current study. Trained readers evaluated the essays to produce a “test-optional essay rating score,” which served as the primary predictor variable in the current study. Quantitative analyses were conducted to investigate the predictive validity of the “test-optional essay rating score” and its relationship to cumulative undergraduate grade point average and retention, which served as the outcome variables in the current study. The results revealed statistically significant group differences between test-optional applicants and standard applicants. Test-optional admission applicants are more likely to be female, of lower socioeconomic status, and ethnic minorities as compared to standard admission applicants. Given these group differences, Pearson product-moment correlation coefficients were computed to determine whether the test-optional essay rating score differentially predicted success across racial and gender subgroups. There was inconclusive evidence regarding whether the test-optional essay rating score differentially predicts cumulative undergraduate grade point average and retention across student subgroups. 
The results revealed a weak correlation between the test-optional essay rating score and cumulative undergraduate grade point average (Study 1: r = .11, p < .01; Study 2: r = .07, p < .05) and retention (Study 1: r = .08, p < .05; Study 2: r = .10, p < .01), particularly in comparison to the relationship between these outcome variables and the criteria most commonly considered in college admissions (high school grade point average, SAT Verbal, SAT Quantitative, and SAT Writing). Despite these findings, the test-optional essay rating score contributed nominal value (R2 = .07) in predicting academic achievement and persistence beyond the explanation provided by traditional admissions criteria. Additionally, a ROC analysis determined that the test-optional essay rating score does not predict student retention in a way that is meaningfully different than chance and therefore is not an accurate binary classifier of retention. Further research should investigate the validity of other motivational-developmental dimensions and the fidelity of other methods for measuring them in an attempt to account for a greater proportion of variance in college student success.
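The two statistics at the heart of this abstract, a Pearson correlation with grade point average and a ROC analysis of retention, can be illustrated with a minimal sketch; the variable names and simulated data below are assumptions for illustration, not the study's data.

# Sketch of a predictive-validity check: correlate a predictor with GPA and
# compute the ROC AUC treating it as a binary classifier of retention.
# All values are simulated.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
essay_score = rng.normal(3.0, 0.5, size=500)                   # hypothetical rating score
gpa = 2.8 + 0.10 * essay_score + rng.normal(0, 0.4, size=500)  # weakly related outcome
retained = rng.binomial(1, 0.85, size=500)                     # 1 = retained to the next year

r, p = pearsonr(essay_score, gpa)
auc = roc_auc_score(retained, essay_score)  # about .50 means no better than chance
print(f"r = {r:.2f}, p = {p:.3f}, AUC = {auc:.2f}")

An AUC near .50, as in the simulated data, corresponds to the abstract's conclusion that the score does not classify retention meaningfully better than chance.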
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
43

Slack, Patricia. "A Situational Assessment of Student Leadership: An Evaluation of Alternate Forms Reliability and Convergent Validity." TopSCHOLAR®, 2010. http://digitalcommons.wku.edu/theses/172.

Full text
Abstract:
The Situational Assessment of Leadership: Student Assessment (SALSA©) was developed in the spring of 2009 to be used as a measure of student leadership. Study 1 assessed alternate-forms reliability of the SALSA using scores from 178 students. The overall scores on SALSA Form A and SALSA Form B showed a significant correlation (rAB = .906, p < .01). Correlations between dimension scores on the two forms ranged from rAB = .475 to rAB = .804. Study 2 evaluated the convergent validity between the SALSA and the Western Kentucky University Center for Leadership Excellence assessment center. SALSA scores as well as assessment scores from 53 students were analyzed. The overall scores on the SALSA and CLE assessment center had a significant yet moderate correlation (r = .513). Dimension correlations were significant but low, ranging from r = .310 to r = .392. The strong correlations in Study 1 indicate the two forms of the SALSA may be used as alternate measures, such as in a pre- and post-test of leadership. The convergent validities in Study 2 demonstrate that both the SALSA and the assessment center may be used to assess leadership. However, the low convergent validities across dimensions indicate that overall scores likely should be used rather than dimension scores.
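The alternate-forms analysis in Study 1 amounts to correlating total scores on the two forms from the same examinees; the sketch below uses simulated scores and a hypothetical true-ability model, not the study's data.

# Sketch of an alternate-forms reliability check: correlate Form A and Form B
# total scores from the same examinees (simulated data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
true_ability = rng.normal(50, 10, size=178)          # 178 mirrors the Study 1 sample size
form_a = true_ability + rng.normal(0, 4, size=178)   # Form A total scores with measurement error
form_b = true_ability + rng.normal(0, 4, size=178)   # Form B total scores with measurement error

r_ab, p = pearsonr(form_a, form_b)
print(f"Alternate-forms reliability r_AB = {r_ab:.3f} (p = {p:.3g})")

Computing the same correlation on each dimension's subscores, rather than on totals, would mirror the dimension-level estimates reported above.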
APA, Harvard, Vancouver, ISO, and other styles
44

Wallace-Pascoe, Dawn Marie. "Assessing the Validity of a Measure of the Culture of Evidence at Two-Year Colleges." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1373305560.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Wong, Kwong-keung. "The validity and reliability of Hong Kong Certificate of Education technical subjects examination with special reference to the project method of assessment." Hong Kong : University of Hong Kong, 1986. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18035152.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Soto, Ramirez Pamela. "Validity Evidence of Internal Structure and Subscores Use of the Portfolio in the Chilean Teachers’ Evaluation System." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu159316412299089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Bruckner, Terri Ann. "Using an Argument-based Approach to Validity for Selected Tests of Spatial Ability in Allied Medical Professions Students." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1371562493.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Tucker, Justin. "An Evaluation of the Convergent Validity of Situational Assessment of Leadership-Student Assessment (SALSA© ) with Multi-Source Feedback in MBA and Ed.D. in Educational Leadership Students." TopSCHOLAR®, 2011. http://digitalcommons.wku.edu/theses/1058.

Full text
Abstract:
The current study assessed the convergent validity of the Situational Assessment of Leadership – Student Assessment (SALSA©), a situational judgment test (SJT), with multi-source ratings. The SALSA© was administered to MBA and Ed.D. in Educational Leadership students via Blackboard; multi-source ratings, which paralleled the leadership dimensions of the SALSA©, were administered online. Each student completed the SALSA© and was rated by his or her supervisor, 3-5 peers, 1-5 subordinates, and by himself or herself. SALSA© scores were not correlated with any of the corresponding dimensions on multi-source ratings. This finding may suggest that the multi-source ratings and the SALSA© are not measuring the same leadership construct, or the results may be due to low variance in both the SALSA© scores and the ratings. Self-ratings were not significantly higher than other ratings, with three exceptions. Also, no difference was found between SALSA© scores for MBA and Ed.D. students. This study was limited by the small sample size.
APA, Harvard, Vancouver, ISO, and other styles
49

Zhao, Jing. "Contextual Differential Item Functioning: Examining the Validity of Teaching Self-Efficacy Instruments Using Hierarchical Generalized Linear Modeling." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339551861.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Chow, Chi Ping. "An investigation of the validity of the computer assisted child interview (CACI) as a self-report measure of children's academic performance and school experience /." view abstract or download file of text, 2007. http://proquest.umi.com/pqdweb?did=1404336121&sid=3&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2007.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 103-108). Also available for download via the World Wide Web; free to University of Oregon users.
APA, Harvard, Vancouver, ISO, and other styles