Dissertations / Theses on the topic 'Assessment validity'

Consult the top 50 dissertations / theses for your research on the topic 'Assessment validity.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

French, Elizabeth. "The Validity of the CampusReady Survey." Thesis, University of Oregon, 2014. http://hdl.handle.net/1794/18369.

Full text
Abstract:
The purpose of this study is to examine the evidence underlying the claim that scores from CampusReady, a diagnostic measure of student college and career readiness, are valid indicators of student college and career readiness. Participants included 4,649 ninth through twelfth grade students from 19 schools who completed CampusReady in the 2012-13 school year. The first research question tested my hypothesis that grade level would have an effect on CampusReady scores. There were statistically significant effects of grade level on scores in two subscales, and I controlled for grade level in subsequent analyses on those subscales. The second, third, and fourth research questions examined the differences in scores for subgroups of students to explore the evidence supporting the assumption that scores are free of sources of systematic error that would bias interpretation of student scores as indicators of college and career readiness. My hypothesis that students' background characteristics would have little to no effect on scores was confirmed for race/ethnicity and first language but not for mothers' education, which had medium effects on scores. The fifth and sixth research questions explored the assumption that students with higher CampusReady scores are more prepared for college and careers. My hypothesis that there would be small to moderate effects of students' aspirations for after high school on CampusReady scores was confirmed, with higher scores for students who aspired to attend college than for students with other plans. My hypothesis that there would be small to moderate relationships between CampusReady scores and grade point average was also confirmed. I conclude with a discussion of the implications and limitations of these results for the argument supporting the validity of CampusReady score interpretation as well as the implications of these results for future CampusReady validation research. This study concludes with the suggestion that measures of metacognitive learning skills, such as the CampusReady survey, show promise for measuring student preparation for college and careers when triangulated with other measures of college and career preparation.
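As a purely illustrative aside (not taken from the thesis), the grade-level and subgroup analyses described above reduce to familiar computations: a one-way ANOVA for grade-level effects and a standardized mean difference for subgroup contrasts. A minimal Python sketch with simulated scores:

```python
# Illustration only: simulated subscale scores, not CampusReady data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical subscale scores for grades 9-12 (200 students per grade).
scores_by_grade = {g: rng.normal(loc=3.5 + 0.05 * i, scale=0.6, size=200)
                   for i, g in enumerate([9, 10, 11, 12])}

# Effect of grade level on scores: one-way ANOVA across the four grades.
f_stat, p_value = stats.f_oneway(*scores_by_grade.values())
print(f"Grade-level effect: F = {f_stat:.2f}, p = {p_value:.3f}")

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

# Subgroup contrast, e.g. college-aspiring students versus students with other plans.
college_bound = rng.normal(3.7, 0.6, 300)
other_plans = rng.normal(3.5, 0.6, 150)
print(f"Aspiration contrast: d = {cohens_d(college_bound, other_plans):.2f}")
```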
2

Chinedozi, Ifeanyichukwu, and L. Lee Glenn. "Criterion Validity Measurements in Automated ECG Assessment." Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etsu-works/7484.

Full text
3

Clounch, Kristopher L. "Sex offender assessment clinical utility and predictive validity /." Diss., St. Louis, Mo. : University of Missouri--St. Louis, 2008. http://etd.umsl.edu/r3221.

Full text
4

Wessels, Gunter Frederik. "Salespeople's Selling Orientation: Reconceptualization, Measurement and Validity Assessment." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/202997.

Full text
Abstract:
A study of Elite Salespeople (ES), those salespeople who maintain and sustain consistent high performance in the sales task, was completed to discover and understand elite salesperson behavior. Analysis of participants' responses to structured depth interview questions led to the emergence of a construct called a Selling Orientation (SO). SO is made up of behaviors that guide salespeople to build, maintain, and monitor their personal credibility both with customers and industry members, as well as within the company. A number of field pre-tests were performed to derive a measurement scale for SO. This process was followed by a field survey that measured SO in a sales force. Confirmatory factor analysis was performed to assess the validity of the measurement scale, and results support the internal consistency and construct validity of a short 9-item scale for SO. This study advances the understanding of sales performance related theory by illuminating attributes of ESs. Additionally, this study introduces the concept of a Selling Orientation that is associated with high sales performance and key account management. Finally, the study introduces a measurement scale useful in the study of salespeople's selling orientation.
5

Van Leeuwen, Sarah. "Validity of the Devereux Early Childhood Assessment instrument." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/31396.

Full text
Abstract:
Parent ratings of social-emotional development on standardized assessment instruments for a sample of 69 kindergarten children in a mid-size Canadian city are utilized to examine the validity of the Devereux Early Childhood Assessment (DECA; LeBuffe & Naglieri, 1999a). Results provide support for the DECA's reliability and internal validity when used with a sample different from the standardization sample. In general, results illustrate an expected pattern of convergence and divergence between the DECA scales and scales from two comparison instruments, the Behavior Assessment System for Children, Second Edition (Reynolds & Kamphaus, 2004) and the Preschool and Kindergarten Behavior Scales, Second Edition (Merrell, 2002). The DECA's protective factor scales relate positively to other measures of social skills/adaptive behaviours, and negatively to other measures of problematic/clinical behaviours; these correlations were strongest for the DECA's Self-Control scale, and weakest for the DECA's Attachment scale. The DECA’s Behavioral Concerns screener scale related negatively to other measures of social skills/adaptive behaviours, and positively to other measures of problematic/clinical behaviours, particularly those reflecting externalizing behaviour problems. The DECA is a psychometrically sound instrument that makes an important and unique contribution to the field of social-emotional assessment of young children.
Faculty of Arts, Department of Psychology (Graduate)
6

Grimard, Donna Christine. "An assessment of the validity of the ministry Risk/Needs Assessment Form." Dissertation (Psychology), Carleton University, Ottawa, 1995.

Find full text
7

Love, Ross. "A Construct Validity Analysis of a Leadership Assessment Center." TopSCHOLAR®, 2007. http://digitalcommons.wku.edu/theses/404.

Full text
Abstract:
This study was designed to assess the construct validity of a leadership assessment center. Participants were evaluated in a leadership assessment center and completed a 360 degree feedback tool designed to measure leadership. Convergent and discriminant validity coefficients were calculated between assessment center ratings and the 360 degree feedback ratings of four different leadership competencies. Results showed little support for the construct validity of the assessment center. Additionally, results replicated prior research regarding the construct validity of assessment centers, with high correlations among different competencies within exercises and low correlations between competencies measured via different methods (assessment center-360 degree feedback tool correlations and assessment center correlations across different exercises).
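The convergent and discriminant coefficients described above are ordinary correlations arranged in a multitrait-multimethod pattern: same competency across methods versus different competencies across methods. A minimal sketch with simulated ratings (competency names are invented, not the study's):

```python
# Illustrative only: convergent vs. discriminant correlations between simulated
# assessment center (AC) ratings and 360-degree feedback ratings.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 120
competencies = ["communication", "planning", "influence", "judgment"]  # invented labels

latent = {c: rng.normal(size=n) for c in competencies}
data = {}
for c in competencies:
    data[f"AC_{c}"] = latent[c] + rng.normal(scale=1.0, size=n)     # AC rating
    data[f"F360_{c}"] = latent[c] + rng.normal(scale=1.0, size=n)   # 360 rating
corr = pd.DataFrame(data).corr()

# Convergent validity: same competency, different methods (monotrait-heteromethod).
convergent = [corr.loc[f"AC_{c}", f"F360_{c}"] for c in competencies]
# Discriminant check: different competencies, different methods.
discriminant = [corr.loc[f"AC_{a}", f"F360_{b}"]
                for a in competencies for b in competencies if a != b]

print("mean convergent r:   ", round(float(np.mean(convergent)), 2))
print("mean discriminant r: ", round(float(np.mean(discriminant)), 2))
```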
8

Mauk, Jacqueline Kern. "Reliability and Validity Assessment of the Exercise Suitability Scale." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/188035.

Full text
Abstract:
This study examined the reliability and the validity of the Exercise Suitability Scale (ESS). The ESS was a psychometric instrument developed to measure the suitability of four different forms of exercise (aerobics, bicycling, jogging, and swimming) for different individuals. Aspects of Exercise Suitability included in the ESS were ease, satisfaction, enjoyableness, fatigue, interest, convenience, comfort, safety, affordability, and time-involvement. Background information relating to the development of the ESS as well as methods and results of testing the instrument for reliability and validity were included in this study. Data from a student population were used for estimating the reliability and validity of the ESS. Reliability testing included computing inter-item and item-to-total correlation coefficients, Cronbach's alpha, and internal consistency coefficients (theta and omega) derived from factor analytic techniques. Several types of validity were assessed: content validity, criterion-related validity, and construct validity. Criterion-related validity was estimated by comparing scores on the ESS with information about participation in exercise. Multiple regression was also used to assess criterion-related validity. Principal components analysis was used to examine the construct and content validity of the ESS. Construct validity was also estimated by correlating ESS scale scores with a parallel instrumentation approach, a Q-Sort. Satisfactory reliability indices were obtained for all four ESS exercise scales. Criterion-related validity indices were also adequate. Factor analysis provided some evidence of content validity of the ESS, but provided little support for the construct validity of the ESS. Construct validity was supported, however, by the convergence approach.
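For readers unfamiliar with the reliability indices named above, here is a minimal sketch of Cronbach's alpha and corrected item-total correlations computed on simulated Likert-type responses (not ESS data):

```python
# Minimal illustration of the reliability indices mentioned above, on made-up data.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def corrected_item_total(items):
    """Correlation of each item with the sum of the remaining items."""
    items = np.asarray(items, dtype=float)
    out = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)
        out.append(np.corrcoef(items[:, j], rest)[0, 1])
    return out

rng = np.random.default_rng(2)
true_score = rng.normal(size=(300, 1))
# Simulated 1-5 responses to a 10-item scale driven by a single underlying trait.
responses = np.clip(np.round(3 + true_score + rng.normal(scale=1.0, size=(300, 10))), 1, 5)

print("Cronbach's alpha:", round(cronbach_alpha(responses), 2))
print("corrected item-total r:", [round(r, 2) for r in corrected_item_total(responses)])
```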
9

Brits, Nadia M. "Investigating the construct validity of a developmental assessment centre." Thesis, Stellenbosch : Stellenbosch University, 2011. http://hdl.handle.net/10019.1/18071.

Full text
Abstract:
Thesis (MComm)--University of Stellenbosch, 2011.
ENGLISH ABSTRACT: Organisations exist by transforming scarce factors of production into goods and services. Since organisations are run and managed by people, these institutions are largely dependent on their human production factor to achieve their main goal of maximising profits. Organisations strive to appoint suitable employees who will meet, even exceed, the requirements of a particular job position. In a constantly evolving world of work, advancing technology and inherent features of the modern working environment necessitate ongoing development of these individuals in order to keep up with the changes. Personnel selection and development are therefore crucial activities of the Industrial Psychologist and Human Resource Practitioner. The Assessment Centre (AC) is a popular measuring instrument that is often used for either selection or development purposes. This popular method of assessment has received a great degree of praise for its ability to predict future job performance. ACs have also shown incremental validity over and above both personality and cognitive ability measuring instruments when used for selection purposes. Nevertheless, despite the frequent use of ACs both internationally and locally in South Africa, ACs have been widely criticised on the basis of whether they actually measure the dimensions that they intend to measure. The question has often been asked whether ACs are construct valid, since low discriminant- and convergent validity, as well as persistent exercise effects, seem to dominate research findings. This question serves as the driving force of the present study. The aim of this study is to examine the construct validity of a development assessment centre (DAC). A convenience sample was used to pursue the research objective. The data was received from a private consultant company in the form of the AC ratings of 202 individuals who were assessed in a one-day DAC. The DAC was developed for a South African banking institution and had three main purposes, namely to identify candidates who fit the role of a new job position, to reposition employees into more appropriate roles, and to provide future development opportunities to all participants. Twelve competencies were assessed by four different exercises. Several limitations were imposed by the nature of the convenience sample since the researcher did not have an influence on the design of the AC. The initial twelve competencies were not represented by a sufficient number of indicators and could consequently not be statistically analysed on an individual level. These dimensions therefore had to be used as sub-dimensions to be combined within their respective global (second-order) factors. This resulted in four single trait (ST) measurement models that had to be investigated first to provide face value of construct validity before adding exercises into the existing models. The four separate exercises were integrated into one global exercise effect. The insufficient number of indicators within the data set brought about only two of the four ST models to be examined for any existing exercise effects. The result was two single trait, single exercise (STSE) measurement models. Inter-item correlations were calculated in SPSS, followed by confirmatory factor analysis on each respective measurement model in EQS used to study the internal structure of the dimensions. With one dimension as the exception, the results of the CFA imply that the DAC's indicators (i.e. 
behavioural ratings) in each second-order factor, fail to reflect the underlying dimension, as it was intended to do. When adding the conglomerated exercise effect, only one of the two dimensions had plausible results with good model fit and parameter estimates that leaned towards dimension and not exercise effects. Based on these findings, serious doubt is placed on the validity of the developmental feedback provided to each participant after the completion of the DAC. With one dimension as the exception, the present study's results corroborate previous research findings on the construct validity of ACs.
10

Morris, William Alan. "A Rhetorical Approach to Examining Writing Assessment Validity Claims." Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1619704495223314.

Full text
11

Melin, Nicole Lynn. "Construct Validity of the Preschool Visual Motor Integration Assessment." The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1396359214.

Full text
12

Stephan, Sarah Allison. "The Predictive Validity of Stimulus Preference Assessments." Miami University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=miami1216247455.

Full text
13

Pierce, Laura E. "Convergent Validity of the Functional Assessment Informant Record for Teachers (FAIR-T)." UKnowledge, 2013. http://uknowledge.uky.edu/edp_etds/9.

Full text
Abstract:
This study assessed the convergent validity of the Functional Assessment Informant Record for Teachers (FAIR-T; Edwards, 2002) with analog functional analyses (FAs). Participants were five teachers and students located at a specialized school serving individuals with disabilities. Teachers had worked with the student for a minimum of 1 month, and students displayed a variety of behavioral topographies. The FAIR-T was conducted by the researcher using telephone or video conferencing technology, and analog functional analyses were conducted in a clinic setting by trained therapists within the course of the student’s typical treatment plan. Results of the FAIR-T were coded according to function, and the results of the analog FAs were graphed and analyzed visually. Results of the FAIR-T and FAs indicated limited convergence between the two assessment methods, though results were somewhat inconclusive. Results are discussed in relation to the utility of the FAIR-T, particularly in the school setting. Directions for future research are discussed in light of the need to delineate efficient means with which to conduct functional behavior assessments within the schools.
14

Arua, Ceaser. "Assessing the validity of microcredit impact studies in Uganda." Thesis, Linnéuniversitetet, Institutionen för samhällsstudier (SS), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-36218.

Full text
Abstract:
A number of developing countries, including Uganda, have recently experienced tremendous growth of the microfinance industry in financial and credit service provision. Microfinance development in developing countries and its impact on the livelihoods of the poor have been a central point of focus for the academic community and development stakeholders. A number of actors, such as donors and government agencies, have credited microcredit as a program that helps the poor improve their living conditions, fight extreme poverty, and reduce the number of people living in conditions of absolute deprivation. The growth of microcredit schemes in Uganda has prompted donors, government agencies, microfinance institutions, individuals, and academia to measure the achievements of the program against its different objectives. Despite the growing efforts and attention devoted to measuring microcredit impacts on livelihood transformation, less focus has been given to the scientific process of measuring program impacts. Ensuring credibility and validity is an important aspect that guarantees realistic representation and quality in scientific research when researchers seek to understand what has been achieved. Against this background, this study set out to understand and explore how different scientific research processes of impact evaluation relate to the quality of the impact reports or outcomes measured. The study examines the main debate about microcredit impacts, with the aim of providing the information required (an epistemological benefit) to understand microcredit impacts within different perspectives of development. The backgrounds of the different researchers, more specifically their academic qualifications, expertise, gender, institutional affiliations, and the roles they played during the impact studies, are assessed. The study also looks at the methods of data collection and analysis employed by the different microcredit impact studies and how these affected the studies being assessed. The study uses a textual and systematic method of data and information analysis; articles retrieved from the Linnaeus University library website and organizational reports obtained from different organizations' databases form the set of data used in this study. A total of sixteen impact studies done in Uganda have been systematically reviewed. A conceptual framework in which validity is used as the main tool in the analytical discussion of the study has been employed.
15

Parker, Kimberly. "Utility of the General Validity Scale Model: Development of Validity Scales for the Co-parenting Behavior Questionnaire." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/2301.

Full text
Abstract:
Validity scales for child-report measures are necessary tools in clinical and forensic settings in which major decisions affecting the child and family are in question. Currently there is no standard model for the development and testing of such validity scales. The present study focused on 1) creating the General Validity Scale (GVS) Model to serve as a guide in validity scale development and 2) applying this model in the development of validity scales for the Co-parenting Behavior Questionnaire (CBQ), a child-report measure of parenting and co-parenting behaviors for children whose parents are divorced. Study 1 used the newly developed GVS Model to identify threats to CBQ validity and to develop procedures for detecting such threats. Four different validity scales were created to detect inaccurate responding due to 1) presenting mothering, fathering, and/or co-parenting in an overly negative light, 2) rating mothering and fathering in a highly discrepant manner, 3) inconsistent item responses, and 4) low reading level. Study 2 followed the GVS Model to test the newly developed scales by comparing CBQ responses produced under a standard instruction set to responses from contrived or randomly generated data. Support for the ability of each validity scale to accurately detect threats to validity was found.
16

Kavanaugh, Maureen. "Examining the Impact of Accommodations and Universal Design on Test Accessibility and Validity." Thesis, Boston College, 2017. http://hdl.handle.net/2345/bc-ir:107317.

Full text
Abstract:
Thesis advisor: Michael Russell
Large-scale assessments are often used for statewide accountability and for instructional and institutional planning. It is essential that the instruments used are valid and reliable for all test takers included in the testing population. However, these tests have often fallen short in the area of accessibility, which can impact validity for students with special needs. This dissertation examines two strategies for addressing accessibility: the use of technology to implement principles of universal design in assessment and the provision of accommodations. This study analyzed test data for students attending high schools in New Hampshire, Vermont and Rhode Island who participated in the 2009 11th grade New England Common Assessment Program (NECAP) science assessment. Three test conditions were of interest: (1) no accommodations with a paper-based form, (2) accommodated test administration with a paper-based form, and (3) accommodated test administration using a universally designed computer-based test delivery system with embedded accommodations and accessibility features. Results from two analyses are presented: differential item functioning (DIF) and confirmatory factor analysis (CFA). DIF was used to explore item functioning, comparing item difficulty and discrimination under accommodated and non-accommodated conditions. Similarly, CFA was used to examine the consistency of the underlying factor structure as evidence that the constructs measured were stable across test conditions. Results from this study offered evidence that overall item functioning and underlying factor structure were consistent across accommodated and unaccommodated conditions, regardless of whether accommodations were provided with a paper form or a universally designed computer-based test delivery system. These results support the viability of using technology-based assessments as a valid means of assessing students and offering embedded, standardized supports to address access needs.
17

Thurber, Robin Schul. "Construct validity of curriculum-based mathematics measures /." view abstract or download file of text, 1999. http://wwwlib.umi.com/cr/uoregon/fullcit?p9957576.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 1999.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 78-83). Also available for download via the World Wide Web; free to University of Oregon users. Address: http://wwwlib.umi.com/cr/uoregon/fullcit?p9957576.
18

Clesham, Rose. "Changing assessment practices resulting from the shift towards on-screen assessment in schools." Thesis, University of Hertfordshire, 2010. http://hdl.handle.net/2299/5014.

Full text
Abstract:
This dissertation reports a study into the appropriateness of on-screen assessment materials compared to paper-based versions, and how any potential change in assessment modes might affect assessment practices in schools. The research was centred around a controlled comparative trial of paper and on-screen assessments with 1000 school students. The appropriateness of the assessments was conceptualised in terms of exploring the comparative reliability, validity and scoring equivalence of these assessments in paper and on-screen modes. Reliability was considered using quantitative analysis: calculating the performance and internal reliability of the assessments using classical test theory, Cronbach's alpha and Rasch latent trait modelling. Equivalence was also addressed empirically. Marking reliability was not quantified; however, it is discussed. Validity was considered through qualitative analysis, using questionnaire and interview data obtained from the students and teachers participating in the trial, with the focus on the comparative authenticity and fitness for purpose of assessments in different modes. The outcomes of the research can be summarised as follows: the assessment tests in both modes scored highly in terms of internal reliability; however, they were not necessarily measuring the same constructs. The scores from different modes were not equivalent, with students performing better on paper. The on-screen versions were considered to have greater validity by students and teachers. All items in the assessments that resulted in significant differences in performance were analysed and categorised in terms of item types. Consideration is then given to whether differences in performance are the result of construct-irrelevant or construct-relevant factors. The recommendations from this research focus on three main areas: in order for on-screen assessments to be used in schools and to utilise their considerable potential, the equivalence issue needs to be removed, the construct-irrelevant factors need to be clearly identified and minimised, and the construct-relevant factors need to be enhanced. Finally, a model of comparative modal dependability is offered, which can be used to contrast and compare the potential benefits and issues when changes in assessment modes or item types are considered.
19

Anderson, Craig Donavin. "Video portfolios : do they have validity as an assessment tool?" Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82679.

Full text
Abstract:
This thesis presents a study of the validity of video portfolios as an assessment tool. For this study, first and second grade students were videotaped doing exercises four times in reading and four times in math over the course of a school year. After portfolios were collected, each set of four videos (either math or reading) was shown to teachers in random order. The teachers were asked to put the clips into the correct chronological and, therefore, developmental order. Interviews after the task investigated the criteria teachers used to order the clips, and found that they used task complexity, task performance, and demeanor of students as the primary factors. The teachers were able to correctly order the video clips to a high level of significance. This finding supports the hypothesis that video portfolios have validity as an assessment of progress in student achievement. Interview data also yielded relevant findings for the future use and implementation of video portfolios. Further studies should investigate the generalizability of these results, more closely examine the criteria teachers use to evaluate portfolios, and determine the validity of portfolios as an evaluation for other aspects of student learning.
20

Pollock, Nancy. "The reliability and validity of the Erhardt Developmental Prehension Assessment /." Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61242.

Full text
Abstract:
The Erhardt Developmental Prehension Assessment (EDPA) was designed as a measure of hand function for use with developmentally and physically disabled children. In this study the inter-observer reliability of the EDPA, and the concurrent validity of the EDPA with the fine motor scale of the Peabody Developmental Motor Scales (PDMS) were evaluated. The EDPA was initially revised by standardizing the procedures for administering the test and developing an objective scoring system. Thirty developmentally disabled children ranging in age from 3 to 18 months were tested in this study.
The results indicate that the EDPA has high levels of inter-observer reliability, and that it has concurrent validity with the PDMS in this population. Further test revisions are necessary, however, to improve the EDPA's discriminative power. Normative data needs to be gathered on a large, cross-sectional sample of children so that future measures of impaired hand function will be based on a good understanding of the sequence of normal development.
21

Francis, Charmine 1978. "The discriminative validity of the McGill Ingestive Skills Assessment (MISA) /." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=111570.

Full text
Abstract:
Introduction: Stroke is associated with a high prevalence of dysphagia in the elderly population. Hence, dysphagia evaluation and management are key issues in stroke rehabilitation. The McGill Ingestive Skills Assessment (MISA) is a recently developed mealtime observational tool aimed at evaluating the functional aspects of the oral phase of ingestion. Objective: To determine the discriminative validity of the MISA by assessing known/extreme groups of elderly individuals presenting with stroke, who have been admitted to an acute-care-hospital or a rehabilitation center. Participants were allocated to one of two groups: 1) individuals with stroke and no dysphagia, who are on a regular diet and 2) individuals with stroke and dysphagia, who are permitted only purees. Methods: Participants were evaluated with the MISA and a comprehensive chart review was conducted. Analysis: Groups were compared on socio-demographic and clinical characteristics. Univariate tests were performed to test the significance of between-group differences. Conclusion and significance: The results of the study are satisfactory, and enhance the clinical usefulness of the tool for dysphagia management. These results also support future studies addressing the responsiveness of the MISA.
22

Ahmed, Sara 1974. "The Stroke Rehabilitation Assessment of Movement (STREAM) : validity and responsiveness." Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20946.

Full text
Abstract:
The main objectives of this prospective cohort study were to examine the construct and predictive validity of the STREAM and to estimate its responsiveness. Sixty-three acute stroke patients were evaluated on the STREAM and other measures of impairment and disability during the first week post-stroke, four weeks later, and three months post-stroke. The results of the study showed that STREAM scores were associated with measures of impairment and disability, and could discriminate subjects based on Balance Scale and Barthel Index scores. Moreover, the STREAM during the first week post-stroke was found to be an independent predictor of discharge destination after the acute care hospital, and of gait speed and the Barthel Index at three months post-stroke. In addition, the total and subscale STREAM scores were able to mirror changes in motor performance between each evaluation. The utility and measurement properties of the STREAM warrant its use in clinical practice and research.
23

Jeffs, Lianne Patricia. "Content validity assessment of the University Student Health Survey Instrument." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0026/MQ34071.pdf.

Full text
24

Meloff, Liann Rachel. "Assessment of disordered eating in young children, a validity study." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ38600.pdf.

Full text
25

Ahmed, Sara. "The STroke REhabilitation Assessment of Movement, STREAM, validity and responsiveness." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0034/MQ50705.pdf.

Full text
26

Veerman, Jacob Lennert. "Quantitative health impact assessment: an exploration of methods and validity." [S.l.] : Rotterdam : [The Author] ; Erasmus University [Host], 2007. http://hdl.handle.net/1765/10490.

Full text
27

Watts, Tracy N. "Everyday Speech Production Assessment Measure (E-SPAM): Reliability and Validity." UKnowledge, 2011. http://uknowledge.uky.edu/gradschool_theses/98.

Full text
Abstract:
Purpose: The Everyday Speech Production Assessment Measure (E-SPAM) is a novel test for assessing changes in clients’ speech production skills after intervention. This study provides information on reliability and validity for the test and overviews its clinical application. Method & Procedures: E-SPAM, oral reading, and sequential motion rate tasks were administered to 15 participants with motor speech disorders (MSDs). E-SPAM responses were scored using a 5-point system by four graduate students to assess inter-scorer and temporal reliability and to determine validity for E-SPAM. Results: Findings of this study indicate that the E-SPAM can be scored with sufficient reliability for clinical use, yields stable scores on repeat administrations, and that its results correlate highly with other accepted measures of speech production ability, specifically sentence intelligibility and severity. Conclusions: While the results of this study must be considered preliminary because of the small sample size, it does appear that the E-SPAM can provide information about aspects of speech production, such as intelligibility, efficiency, and speech naturalness, that are important when treatment focuses on improving speech. The E-SPAM also appears to be a “clinician-friendly” test as it is quick to administer and score and can be administered to patients across the severity continuum.
28

McCarty, Joseph C. "The construct validity of the behavior assessment system for children." Virtual Press, 2001. http://liblink.bsu.edu/uhtbin/catkey/1213150.

Full text
Abstract:
The purpose of this study was to test the construct validity of the Behavior Assessment System for Children (BASC), Parent and Teacher Rating Scales (PRS and TRS). Six samples were considered, including the Normative General and Clinical Samples for each measure (Reynolds & Kamphaus, 1992). Another pair of samples were taken from a database of a Georgia hospital (PRS n = 130, TRS n = 108). The Normative Clinical Sample of TRS scores was multicollinear, and was not used. Five models were designed for each measure: a single factor solution, the theoretical model of the BASC, and three adaptations of the scoring system. Using AMOS, these models were fit to the samples. Only the theoretical model met minimum standards for adequate fit. Multi-sample analyses with different combinations of parameter restrictions were conducted to determine which aspects of the theoretical model's factor structure accounted for the most sample variance. When fit to both normative samples of PRS scores, all aspects of the factor solution were found to contribute. For all other runs, it was found that error, unique, and factor variances contributed the most to the factor solution. This suggests that the relationship of variables/scales to the factors/composites in this model could be improved. It is suggested that practitioners disregard composite scores, and that the authors/publishers of the BASC consider using regression weights to formulate composite scores in the scoring program.
Department of Educational Psychology
29

Horn, Michael T. "Investigating the construct validity of a life-skills assessment instrument /." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/8128.

Full text
30

Chaytor, Naomi S. "Improving the ecologicical [i.e. ecological] validity of executive functioning assessment." Online access for everyone, 2004. http://www.dissertations.wsu.edu/Dissertations/Summer2004/n%5Fchaytor%5F070604.pdf.

Full text
31

Baker, Thomas Grant. "Managerial Assessment Centers in the Hotel Industry: Concerns with Validity." Thesis, North Texas State University, 1988. https://digital.library.unt.edu/ark:/67531/metadc501177/.

Full text
Abstract:
A replication of an original study of managerial assessment centers performed by Sackett and Dreher (1982) is presented. Their major finding, indicating that assessment centers lack key tenets of internal construct validity, was corroborated in this study of a hotel managers' assessment center. This hotel managers' assessment center is also found to be externally valid using criterion-related validity. The argument is posed that assessment centers, as standardized tests of complex behavioral traits, appear to be operating outside the bounds of normal test construction principles. Five key explanations for this paradox are offered to guide much needed future research in this area. Additionally, a description of commonly utilized assessment center activities is offered the reader.
32

Koerner, Kelly. "The reliability and validity of a cognitive-behavioral case formulation method /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/9111.

Full text
33

Blanchard, Janey. "The Predictive Validity of Norm-Referenced Assessments to the Minnesota Comprehensive Assessment on Native American Reservations." Thesis, Saint Mary's University of Minnesota, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=3745625.

Full text
Abstract:

This research study compared the three commonly used norm-referenced assessments (Northwest Evaluation Assessment, STAR Enterprise, and AIMSweb) to the Minnesota Comprehensive Assessment. The basic question was which one of the three assessments provided the best predictive validity scores to the Minnesota Comprehensive Assessment. Yearly scores from three years were gathered to evaluate which one of the three assessments had a stronger correlation score to the MCA. The study was confined to using 4th grade scores from three different schools located on a Native American reservation. Each school used one of the three common standardized reference assessments, and each school administered the MCA in the spring using winter scores. These scores were used to evaluate whether a student is on track to reach proficiency on the MCA. Findings showed that two of the three assessments had strong correlation scores. NWEA-MAP and STAR Enterprise had the strongest correlation. Further findings showed that STAR Enterprise had the strongest correlation score with a caveat that this is a new assessment and needs more research. Findings from this study allow schools to use two of the assessments with confidence that it is giving them quality scores.

34

Ong, Yoke Mooi. "Understanding differential functioning by gender in mathematics assessment." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/understanding-differential-functioning-by-gender-in-mathematics-assessment(bbf798fb-eb7a-4e99-bf33-f0c4e8d87e42).html.

Full text
Abstract:
When examinees with the same ‘ability’ take a test, they should have an equal chance of responding correctly to an item irrespective of group membership. This logic in assessment is known as measurement invariance. The lack of invariance of the item-, bundle-, and test-difficulty across different subgroups indicates differential functioning (DF). The aim of this study is to advance our understanding of DF by detecting, predicting and explaining the sources of DF by gender in a mathematics test. The presence of DF means that the test scores of these examinees may fail to provide a valid measure of their performance. A framework for investigating DF was proposed, moving from the item-level to a more complex random-item level, which provides a theme of critiques of limitations in DF methods and explorations of some advances. A dataset of 11-year-olds from a high-stakes national mathematics examination in England was used in this study. The results are reported in three journal publication format papers. The first paper addressed the issue of understanding nonuniform differential item functioning (DIF) at the item level. Nonuniform DIF is investigated because it is a possible threat when common DIF statistics sensitive to uniform DIF may indicate no significant DIF. This study differentiates two types of nonuniform DIF, namely crossing and noncrossing DIF. Two commonly used DIF detection methods, namely the Logistic Regression (LR) procedure and the Rasch measurement model, were used to identify crossing and noncrossing DIF. This paper concludes that items with nonuniform DIF do exist in empirical data; hence there is a need to include statistics sensitive to crossing DIF in item analysis. The second paper investigated the sources of DF via differential bundle functioning (DBF), because this way we may get substantive explanations of DF, without which we do not know if DF is ‘valid’ or ‘biased’. Roussos and Stout's (1996a) multidimensionality-based DIF paradigm was used with an extension of the LR procedure to detect DBF. Three qualitatively different content areas were studied: test modality, curriculum domains and problem presentation. This paper concludes that DBF in curriculum domains may elicit construct-relevant variance, and so may indicate 'real' differences, whereas problem presentation and test modality arguably include construct-irrelevant variance and so may indicate gender bias. Finally, the third paper considered item-person responses as hierarchically nested within items. Hence a two-level logistic model was used to model the random item effects, because it is argued that otherwise DF might be exaggerated and may lead to invalid inferences. This paper aimed to explain DF via DBF, comparing single-level and two-level models. The DIF effects of the single-level model were found to be attenuated in the two-level model. A discussion of why the two different models produced different results was presented. Taken together, this thesis shows how validity arguments regarding bias should not be reduced to DF at the item level but can be analysed on three different levels.
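The logistic regression DIF procedure referred to above is typically run as a sequence of nested models: ability only, ability plus group (uniform DIF), and ability plus group plus their interaction (nonuniform DIF). A hedged sketch on simulated item responses, not the examination data analysed in the thesis:

```python
# Illustrative only: the logistic-regression DIF procedure on simulated responses.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
ability = rng.normal(size=n)              # matching variable, e.g. a rest score
group = rng.integers(0, 2, size=n)        # 0 = reference group, 1 = focal group

# Simulate one item carrying both uniform (group) and nonuniform (interaction) DIF.
logit_p = 0.8 * ability - 0.4 * group + 0.5 * ability * group
correct = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame({"correct": correct, "ability": ability, "group": group})

# Nested models: ability only; + group (uniform DIF); + interaction (nonuniform DIF).
m1 = smf.logit("correct ~ ability", df).fit(disp=False)
m2 = smf.logit("correct ~ ability + group", df).fit(disp=False)
m3 = smf.logit("correct ~ ability + group + ability:group", df).fit(disp=False)

print(f"uniform DIF LR chi-square:    {2 * (m2.llf - m1.llf):.1f}")
print(f"nonuniform DIF LR chi-square: {2 * (m3.llf - m2.llf):.1f}")
```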
35

Perkins, Anne Witt. "Learning and Study Strategies Inventory (LASSI): A validity study." W&M ScholarWorks, 1991. https://scholarworks.wm.edu/etd/1539618615.

Full text
Abstract:
The purpose of this study was to examine the construct and predictive validity of the Learning and Study Strategies Inventory (LASSI). The LASSI is an instrument designed to assess utilization of learning and study strategies and methods for the purpose of measuring strategy use, diagnosing deficiencies, and prescribing intervention. The literature suggests that valid instruments of this type are sadly lacking. The LASSI User's Manual, however, presents no statistical evidence of instrument validity. The need for this verification became crucial with The College of William and Mary's selection of the inventory for administration to the 1990 freshman class. Using data obtained from this administration and a subsequent retest, statistical analyses were conducted to confirm instrument reliability and examine construct and predictive validity. Results indicated that while reliable, the ten LASSI scales possessed no construct validity, as measured by factor analysis, and low predictive validity when first semester college grade point average was the performance criterion. Until the completion of further research, the validity of the LASSI is at best suspect, and use of the instrument is not recommended.
36

Yurkon, Andrew C. "An Examination of the Criterion-Related Validity of a Developmental Assessment Center." Thesis, University of North Texas, 1998. https://digital.library.unt.edu/ark:/67531/metadc278115/.

Full text
Abstract:
The purpose of this study was to investigate the criterion-related validity of an assessment center's competency dimension ratings, exercise ratings, and standardized test scores. Numerous studies have clearly demonstrated assessment centers display substantial evidence of content and criterion-related validity. However, the inability of assessment centers to display construct-related validity has caused a great deal of concern among researchers. The suggestions of these researchers are addressed through a more detailed examination of the criterion-related validity of an assessment center. Despite a number of methodological issues, two competency dimensions and two components stand out as viable predictors of the criteria used in this study. Examination of individual and incremental validity coefficients reveals the Strategic Focus and Attracting and Developing Talent competency dimensions, the In-Basket exercise, and the Watson-Glaser scaled score consistently predict the criteria used in this study. The implications of these results for future research are discussed.
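Incremental validity of the kind reported above is usually expressed as the gain in R-squared when an assessment center rating is added to a baseline predictor. A rough sketch with simulated data (the predictor names are borrowed from the abstract; all values are invented):

```python
# Illustrative only: incremental validity as the R^2 gain over a baseline predictor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 180
df = pd.DataFrame({
    "watson_glaser": rng.normal(size=n),      # simulated baseline test score
    "strategic_focus": rng.normal(size=n),    # simulated AC dimension rating
})
df["criterion"] = (0.3 * df["watson_glaser"] + 0.4 * df["strategic_focus"]
                   + rng.normal(scale=1.0, size=n))

base = smf.ols("criterion ~ watson_glaser", df).fit()
full = smf.ols("criterion ~ watson_glaser + strategic_focus", df).fit()

print(f"baseline R^2 = {base.rsquared:.2f}")
print(f"incremental R^2 from the dimension rating = {full.rsquared - base.rsquared:.2f}")
```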
37

Balthrop, Kullen Charles. "MMPI-2-RF Underreporting Validity Scales in Firefighter Applicants: A Cross-Validation Study." UKnowledge, 2018. https://uknowledge.uky.edu/psychology_etds/149.

Full text
Abstract:
The identification of potential underreporting in employment evaluations is important to consider when examining a measure’s validity. This importance increases in personnel selection involving high-virtue positions (e.g., police officers and firefighters). The current study aimed to utilize an archival firefighter applicant sample to examine the construct validity of the Minnesota Multiphasic Personality Inventory-2-Restructured Form’s (MMPI-2-RF) underreporting scales (L-r and K-r). Results were analyzed using a correlation matrix comprised of a modified version of the Multi-Trait Multi-Method Matrix (MTMM), as well as multiple regression and partial correlation. The present study provides additional support for the construct validity of the MMPI-2-RF’s underreporting validity scales. Further research using outcome measures and alternate assessment methods would be able to provide further information on the efficacy of these scales.
38

McGraw, Robert Charles. "Testing the validity of an assessment process for airway management skills." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0016/MQ54473.pdf.

Full text
39

Valutis, William Ernest. "Assessment of the construct validity of an organizational citizenship behavior scale." Virtual Press, 1991. http://liblink.bsu.edu/uhtbin/catkey/832989.

Full text
Abstract:
This paper concerns a construct labeled Organizational Citizenship Behavior (OCB). OCBs are unsolicited, cooperative gestures that employees choose to exhibit. While the OCB construct is professed as being quite promising for both research and practice, efforts to develop the construct have been lacking in consistency and reliability. This study addresses both conceptual and psychometric issues associated with OCB by investigating the most predominant measure of the construct. Also, several methodological practices in OCB research are challenged. To test several hypotheses, ratings of OCB were collected in field settings from supervisors, coworkers, and employees. Investigated were 1) the factor structure of the Smith, et al. (1983) measure of OCB, 2) different raters' perceptions of similar factors, 3) the psychometric effect of using different raters' perceptions, and 4) the congruency of OCB items to the conceptual criteria put forth by OCB theorists. Results did not strongly support the psychometric or conceptual stability of this OCB measure. While one stable and reliable factor was revealed (Altruism), discrepancy by raters in the hypothesized models caused concern. In addition, most participants did not perceive the items in this measure as representative of extra-role behaviors, and thus they cannot be conclusively labeled as citizenship behaviors. Implications from the results suggest that further development of the conceptual parameters of OCB be initiated prior to developing new measures. Also, concerns as to the practicality of the OCB construct are conveyed, and recommendations for future research and conceptual development are provided.
Department of Counseling Psychology and Guidance Services
40

Brooks, Donald Andrew John. "Training the military engineer : a study of assessment and its validity." Thesis, n.p, 2001. http://oro.open.ac.uk/18834.

Full text
41

Slade, Denim L. "An Assessment of the Concurrent Validity of the Family Profile II." DigitalCommons@USU, 1998. https://digitalcommons.usu.edu/etd/2544.

Full text
Abstract:
This study was designed to assess the concurrent validity of the Family Profile II (FPII). The FPII is an instrument designed to measure 13 areas of family functioning. Matches for 11 of the 13 subscales of the FPII were identified from the literature. These comparison subscales were used to confirm the concurrent validity of the FPII. The sample consisted of 229 undergraduate students enrolled in summer classes at Utah State University. The factor structure of the FPII was also assessed. Four of the 13 subscales factored exactly as previously reported. Five factored with only minimal differences. The remaining four subscales were substantially different. All of the correlations between the FPII subscales and the comparison subscales were statistically significant. Five of the pairs shared 42% or more of their variance. Results indicate that the FPII has promise as an easy-to-score-and-interpret measure of the 13 aspects of family functioning it assesses.
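The shared-variance figures quoted above are simply squared Pearson correlations between an FPII subscale and its comparison subscale. A tiny sketch of that arithmetic on simulated scores (not the study's data):

```python
# Illustrative only: concurrent validity as Pearson r, shared variance as r squared.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 229  # matches the sample size reported in the abstract; scores are simulated
common = rng.normal(size=n)

# Hypothetical scores on one FPII subscale and its comparison subscale.
fpii_subscale = common + rng.normal(scale=0.8, size=n)
comparison_subscale = common + rng.normal(scale=0.8, size=n)

r, p = stats.pearsonr(fpii_subscale, comparison_subscale)
print(f"r = {r:.2f}, p = {p:.4f}, shared variance = {r**2:.0%}")
```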
42

Esguerra, John Laurence. "Economics of Landfill Mining : Usefulness and Validity of Different Assessment Approaches." Licentiate thesis, Linköpings universitet, Industriell miljöteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165391.

Full text
Abstract:
Landfill mining (LFM) is an alternative strategy for managing landfills that integrates remediation with secondary resource recovery. At present, LFM remains an emerging concept with only a few pilot-scale project implementations, which presents challenges when assessing its economic performance. These challenges include large knowledge deficits about the individual processes along the LFM process chain, a lack of know-how in terms of project implementation and economic drivers, and limited applicability of results to specific case studies. Based on how these challenges were addressed, this thesis aims to analyze the usefulness and validity of different economic assessments of LFM, towards providing better support for decision-making and in-depth learning for the development of cost-efficient projects. Different studies were analyzed, including previous studies reviewed through a systematic literature review and the factor-based method developed in this thesis. Four categories of economic assessment approaches were derived: in terms of the study object, which concerns either an individual LFM project (case-study specific) or multiple LFM projects in a region (generic); and in terms of the extent of analysis, which concerns either the identification of the net economic potential (decision-oriented) or an extension towards in-depth learning about what builds up such a result (learning-oriented). Across the different approaches, most of the previous studies have questionable usefulness and validity. Unaddressed parametric uncertainties exclude the influence of using inherently uncertain input data due to large knowledge deficits, while the narrow treatment of scenario uncertainties overlooks the fact that LFM can be done in various ways and settings in terms of site selection, project set-up, and regulatory and market conditions. In essence, these uncertainties propagate from the case-study specific to the generic study object. From decision-oriented to learning-oriented studies, the identification of what builds up the result is determined unsystematically, which raises issues about the subsequent recommendations for improvement based on superficially derived economic drivers. The factor-based method, with exploratory scenario development and global sensitivity analysis, is presented as an approach to performing generic and learning-oriented studies. As general recommendations, applied research is needed to address the large knowledge deficits, methodological rigor is needed to account for uncertainties and systematically identify economic drivers, and learning-oriented assessment is needed to facilitate the future development of LFM. This thesis highlights the important role of economic assessments, which are not limited to assessing economic potential but also serve to guide learning and the development of emerging concepts such as LFM.
APA, Harvard, Vancouver, ISO, and other styles
43

Fuentes, Debra Smith. "A Validity Study of the Cognitively Guided Instruction Teacher Knowledge Assessment." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7773.

Full text
Abstract:
This study reports the development of an instrument intended to measure mathematics teachers' knowledge of Cognitively Guided Instruction (CGI). CGI is a mathematics professional development framework based on how students think about and solve problems and how that knowledge guides instruction for developing mathematical understanding. The purpose of this study was to (a) analyze and revise the original CGI Teacher Knowledge Assessment (CGI TKA), (b) administer the revised CGI TKA, and (c) analyze the results from the revised CGI TKA. As part of the revision of the original CGI TKA, distractor analysis identified distractors that could be improved. Experts in CGI content were interviewed to identify ways in which the content of the CGI TKA could be improved, and some new items were created based on their feedback. Formatting changes were also made to administer the assessment electronically. After the original CGI TKA was revised, the revised CGI TKA was administered to teachers who had been trained in CGI. Two hundred thirteen examinees completed the revised CGI TKA and the results were analyzed. Exploratory and confirmatory factor analyses showed 21 of the items loaded adequately onto one factor, considered to be overall knowledge of CGI. The Rasch model was used to estimate item difficulty and person abilities as well as to compare models using dichotomous and partial credit scoring. Advantages and disadvantages of using partial credit scoring as compared to dichotomous scoring are discussed. Except under special circumstances, the dichotomous scoring produced better-fitting models and more reliable scores than the partial credit scoring. The reliability of the scores was estimated using Raykov's rho coefficient. Overall, the revised CGI TKA appears to validly and reliably measure teachers' CGI knowledge.
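For readers unfamiliar with the Rasch model used above: it expresses the probability of a correct dichotomous response as a logistic function of the gap between person ability and item difficulty. A minimal sketch with hypothetical ability and difficulty values (not estimates from the CGI TKA):

```python
import numpy as np

def rasch_prob(theta: float, b: float) -> float:
    """Probability of a correct response under the Rasch (1PL) model,
    given person ability theta and item difficulty b (both in logits)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Hypothetical values: an average-ability examinee facing an easy,
# a moderate, and a hard item.
theta = 0.0
for b in (-1.0, 0.0, 1.5):
    print(f"difficulty {b:+.1f}: P(correct) = {rasch_prob(theta, b):.2f}")
```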
APA, Harvard, Vancouver, ISO, and other styles
44

Dula, Chris S. "Validity and Reliability Assessment of a Dangerous Driving Self-Report Measure." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/26606.

Full text
Abstract:
The Dula Dangerous Driving Index (DDDI) was created to measure drivers' self-reported propensity to drive dangerously (Dula & Ballard, in press). In the early stages of development, the DDDI and each of its subscales (Dangerous Driving Total, Aggressive Driving, Negative Emotional Driving, and Risky Driving) were found to have strong internal reliability (alphas from .83 to .92), and there was evidence of construct validity. In Study One, the alpha coefficient of .91 for the DDDI Total scale indicated excellent internal reliability for the measure, and good internal reliability was demonstrated for its subscales, with coefficient alphas of .81 for the DDDI Risky Driving subscale and .79 for the DDDI Negative Emotional subscale; good reliability was likewise shown for the DDDI Aggressive Driving subscale. Additionally, convergent and divergent validity were shown for the DDDI, although the evidence was weaker for the validity of the separate subscales. Factor analysis suggested that the DDDI measures a unitary construct. In Study Two, coefficients of stability were generated from a four-week test-retest procedure: .76 for the DDDI Risky Driving subscale, .68 for the DDDI Negative Emotional subscale, .55 for the DDDI Aggressive Driving subscale, and .73 for the DDDI Total. In Study Three, the percentage of variance accounted for in criterion variables by different models ranged from 13.6% to 47.7%, with the DDDI Negative Emotional and DDDI Total scales frequently accounting for large portions of variance. In Study Four, the percentage of variance accounted for in criterion variables by different models ranged from 22.0% to 65.6%, with some of the DDDI scales regularly accounting for significant variance. Thus, it was concluded that the DDDI is a measure with high internal reliability and reasonable stability across time, and that face, construct, and predictive validity were demonstrated. However, the evidence in support of the present division of subscales, though present, was weak. Therefore, should further data fail to produce more substantial evidence for the validity of the DDDI subscales, a single dangerous driving measure would be warranted, and the number of items should be reduced, guided by the results of factor analysis.
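Coefficient alpha, the internal-consistency statistic reported throughout the abstract above, can be computed directly from a respondents-by-items score matrix. A minimal sketch on made-up responses (not DDDI data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up responses (5 respondents x 4 items), purely for illustration.
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [3, 3, 3, 4],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
], dtype=float)

print(f"alpha = {cronbach_alpha(scores):.2f}")
```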
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
45

Hitchcock, Kathryn. "Validity of a Food Literacy Assessment Tool in Food Pantry Clients." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1535460317710244.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Nenonen, Mark O. "Socio-cultural differences and the predictive validity of risk assessment scales." Thesis, University of Ottawa (Canada), 1989. http://hdl.handle.net/10393/5939.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Lee, Huntar Alexis. "Assessing the Convergent Validity of the PEAK-E Long Assessment and the PEAK-E Short Assessment." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/theses/2426.

Full text
Abstract:
The current study evaluated the convergent validity of the PEAK-E short assessment against the PEAK-E long assessment to determine whether the short-form version would be as effective as the long-form version in identifying potential skill deficits. The study extends the current PEAK literature by reporting the validity relationship between the PEAK-E long assessment and the short assessment. Twenty-four participants were assessed using both the long assessment and the short assessment: the researchers administered both versions to each participant, and a Pearson's correlation was then computed to determine the convergent validity of the two measures. The results lend support to the validity of the PEAK-E long and short assessment tools, suggesting that the PEAK-E short assessment captures many of the same skills and abilities as the long assessment and that the two assessments produce similar results. The results show a strong positive correlation between the PEAK-E short assessment and the PEAK-E long assessment.
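The convergent-validity analysis described above reduces to correlating each participant's totals on the two forms. A minimal sketch using scipy with invented scores (not PEAK-E data):

```python
from scipy.stats import pearsonr

# Invented total scores for illustration only; one pair per participant.
long_form_scores = [34, 52, 41, 60, 28, 47, 55, 39]
short_form_scores = [12, 18, 14, 21, 10, 16, 19, 13]

r, p_value = pearsonr(long_form_scores, short_form_scores)
print(f"r = {r:.2f}, p = {p_value:.3f}")  # a strong positive r supports convergence
```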
APA, Harvard, Vancouver, ISO, and other styles
48

Burchett, Danielle L. "The Need for Validity Indices in Personality Assessment: A Demonstration Using the MMPI-2-RF." [Kent, Ohio] : Kent State University, 2009. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=kent1246838666.

Full text
Abstract:
Thesis (M.A.)--Kent State University, 2009-07-07.
Title from PDF t.p. (viewed Jan. 26, 2010). Advisor: Yossef Ben-Porath. Keywords: validity scales; validity indices; overreporting; feigning; invalid responding; scale score validity; protocol validity; MMPI-2-RF. Includes bibliographical references (p. 69-79)
APA, Harvard, Vancouver, ISO, and other styles
49

Bink, Martin L. "Motivational distortion in personality profiles of undergraduate distance education students." Virtual Press, 1995. http://liblink.bsu.edu/uhtbin/catkey/941727.

Full text
Abstract:
Motivational Distortion is a construct of replicable error characterized by a shift in one's responding on a personality measure from an anonymous role to a role motivated by the testing situation. The Sixteen Personality Factor Questionnaire (16PF) contains an embedded scale designed to measure this construct, and scores on this scale provide a basis for correcting scores on the primary factors. To date, individual studies on Motivational Distortion have not adequately addressed the construct validity of the scale. The present study utilized a sample of tele-education students in an attempt to determine whether varying levels of role aptitude and role-congruent settings impact Motivational Distortion. The results of regression and aptitude-by-treatment interaction (ATI) analyses have two implications for Motivational Distortion: the relation of Motivational Distortion to its components may be moderating rather than causal, and the relation of the construct to other personality factors is more limited.
Department of Psychological Science
APA, Harvard, Vancouver, ISO, and other styles
50

Jalbert, Nicole Marie. "The Search for Construct Validity of Assessment Centers: Does the Ease of Evaluation of Dimensions Matter?" Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/37790.

Full text
Abstract:
The purpose of the present study was to investigate the effect of ease of evaluation of dimensions on the construct validity of a selection assessment center conducted in 1993. High ease-of-evaluation dimensions, operationalized as those with the greatest proportion of highly diagnostic behaviors, were expected to demonstrate greater construct and criterion-related validity. Multitrait-multimethod analysis and confirmatory factor analysis results indicated that high ease-of-evaluation dimensions demonstrated greater convergent and discriminant validity than low ease-of-evaluation dimensions. Contrary to predictions, however, there was little difference in the criterion-related validity of the high versus low ease-of-evaluation dimensions. Moreover, the entire assessment center yielded extremely low predictive validity using both dimension and exercise scores as predictors. The implications of the findings from this study are discussed.
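In multitrait-multimethod terms, convergent validity is indicated by correlations between ratings of the same dimension across different exercises, and discriminant validity by comparatively lower correlations between different dimensions within the same exercise. A minimal sketch with simulated assessment-center ratings (dimension and exercise names, sample size, and noise levels are illustrative assumptions only):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200  # hypothetical number of assessees

# Simulate two independent latent dimensions, each rated in two exercises
# with rating noise, purely to illustrate the MTMM comparison.
leadership = rng.normal(size=n)
planning = rng.normal(size=n)
ratings = pd.DataFrame({
    "leadership_roleplay": leadership + rng.normal(scale=0.5, size=n),
    "leadership_inbasket": leadership + rng.normal(scale=0.5, size=n),
    "planning_roleplay": planning + rng.normal(scale=0.5, size=n),
    "planning_inbasket": planning + rng.normal(scale=0.5, size=n),
})
corr = ratings.corr()

# Convergent validity: same dimension rated in different exercises.
convergent = [corr.loc["leadership_roleplay", "leadership_inbasket"],
              corr.loc["planning_roleplay", "planning_inbasket"]]
# Discriminant evidence: different dimensions within the same exercise
# should correlate lower than the convergent values.
discriminant = [corr.loc["leadership_roleplay", "planning_roleplay"],
                corr.loc["leadership_inbasket", "planning_inbasket"]]

print(f"mean convergent r = {np.mean(convergent):.2f}")
print(f"mean discriminant r = {np.mean(discriminant):.2f}")
```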
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
