Academic literature on the topic 'Assessment validity'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Assessment validity.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Assessment validity"

1

Kirkwood, Michael W. "Pediatric validity assessment." NeuroRehabilitation 36, no. 4 (July 20, 2015): 439–50. http://dx.doi.org/10.3233/nre-151232.

2

Teglasi, Hedwig, Allison Joan Nebbergall, and Daniel Newman. "Construct validity and case validity in assessment." Psychological Assessment 24, no. 2 (June 2012): 464–75. http://dx.doi.org/10.1037/a0026012.

3

Chapelle, Carol A. "VALIDITY IN LANGUAGE ASSESSMENT." Annual Review of Applied Linguistics 19 (January 1999): 254–72. http://dx.doi.org/10.1017/s0267190599190135.

Abstract:
All previous papers on language assessment in the Annual Review of Applied Linguistics make explicit reference to validity. These reviews, like other work on language testing, use the term to refer to the quality or acceptability of a test. Beneath the apparent stability and clarity of the term, however, its meaning and scope have shifted over the past years. Given the significance of changes in the conception of validity, the time is ideal to probe its meaning for language assessment.
4

Watzl, Bernhard, and Gerhard Rechkemmer. "Validity of dietary assessment." American Journal of Clinical Nutrition 74, no. 2 (August 1, 2001): 273. http://dx.doi.org/10.1093/ajcn/74.2.273.

5

Koninckx, Philippe R., Jasper Verguts, and Dirk Timmerman. "Assessment of measurement validity." Fertility and Sterility 85, no. 1 (January 2006): 268. http://dx.doi.org/10.1016/j.fertnstert.2005.10.003.

6

Dozortseva, E. G., and A. G. Krasavina. "Assessment of juveniles testimonies’ validity." Современная зарубежная психология 4, no. 3 (2015): 47–56. http://dx.doi.org/10.17759/jmfp.2015040306.

Abstract:
The article reviews English-language publications on the history and current state of differential psychological assessment of the validity of testimony given by child and adolescent victims of crime. The problem is highly topical in Russia because Russian specialists tend to use methods and instruments developed abroad for forensic assessments of witness testimony veracity. The system of Statement Validity Analysis (SVA), comprising Criteria-Based Content Analysis (CBCA) and a Validity Checklist, is described. Results of laboratory and field studies of the validity of the CBCA criteria with child and adult witnesses are discussed. The data show that the method discriminates well but carries a high probability of error. Researchers recommend applying SVA in the criminal investigation process, but not in forensic assessment. Promising new developments in methods for distinguishing witness statements based on real experience from fictional ones are noted. The conclusion is drawn that empirical studies and dedicated work on the adaptation and development of new approaches should precede their implementation in Russian criminal investigation and forensic assessment practice.
7

Larrabee, Glenn J. "Performance Validity and Symptom Validity in Neuropsychological Assessment." Journal of the International Neuropsychological Society 18, no. 4 (May 8, 2012): 625–30. http://dx.doi.org/10.1017/s1355617712000240.

Abstract:
Failure to evaluate the validity of an examinee's neuropsychological test performance can alter prediction of external criteria in research investigations, and in the individual case, result in inaccurate conclusions about the degree of impairment resulting from neurological disease or injury. The terms performance validity, referring to validity of test performance (PVT), and symptom validity, referring to validity of symptom report (SVT), are suggested to replace less descriptive terms such as effort or response bias. Research is reviewed demonstrating strong diagnostic discrimination for PVTs and SVTs, with a particular emphasis on minimizing false positive errors, facilitated by identifying performance patterns or levels of performance that are atypical for bona fide neurologic disorder. It is further shown that false positive errors decrease, with a corresponding increase in the positive probability of malingering, when multiple independent indicators are required for diagnosis. The rigor of PVT and SVT research design is related to a high degree of reproducibility of results, and large effect sizes of d = 1.0 or greater, exceeding effect sizes reported for several psychological and medical diagnostic procedures. (JINS, 2012, 18, 1–7)
8

Mislevy, Robert J. "Validity by Design." Educational Researcher 36, no. 8 (November 2007): 463–69. http://dx.doi.org/10.3102/0013189x07311660.

Abstract:
Lissitz and Samuelsen (2007) argue that the unitary conception of validity for educational assessments is too broad to guide applied work. They call for attention to considerations and procedures that focus on “test development and analysis of the test itself” and propose that those activities be collectively termed content validity. The author of this article describes work that makes more explicit the underlying principles of assessment design, thereby providing conceptual foundations for familiar practices and supporting the development of new ones. By structuring design activities around assessment arguments, the test developer accrues evidence in passing for what Embretson (1983) calls “construct representation” argumentation for validity.
9

Varveri, Loredana, Gioacchino Lavanco, and Santo Di Nuovo. "Buying Addiction: Reliability and Construct Validity of an Assessment Questionnaire." Postmodern Openings 6, no. 1 (June 30, 2015): 149–60. http://dx.doi.org/10.18662/po/2015.0601.10.

10

Akbiyik, Melike, and Murat Senturk. "Assessment Scale of Academic Enablers: A Validity and Reliability Study." Eurasian Journal of Educational Research 19, no. 80 (April 3, 2019): 1–26. http://dx.doi.org/10.14689/ejer.2019.80.11.


Dissertations / Theses on the topic "Assessment validity"

1

French, Elizabeth. "The Validity of the CampusReady Survey." Thesis, University of Oregon, 2014. http://hdl.handle.net/1794/18369.

Abstract:
The purpose of this study is to examine the evidence underlying the claim that scores from CampusReady, a diagnostic measure of student college and career readiness, are valid indicators of student college and career readiness. Participants included 4,649 ninth through twelfth grade students from 19 schools who completed CampusReady in the 2012-13 school year. The first research question tested my hypothesis that grade level would have an effect on CampusReady scores. There were statistically significant effects of grade level on scores in two subscales, and I controlled for grade level in subsequent analyses on those subscales. The second, third and fourth research questions examined the differences in scores for subgroups of students to explore the evidence supporting the assumption that scores are free of sources of systematic error that would bias interpretation of student scores as indicators of college and career readiness. My hypothesis that students' background characteristics would have little to no effect on scores was confirmed for race/ethnicity and first language but not for mothers' education, which had medium effects on scores. The fifth and sixth research questions explored the assumption that students with higher CampusReady scores are more prepared for college and careers. My hypothesis that there would be small to moderate effects of students' aspirations for after high school on CampusReady scores was confirmed, with higher scores for students who aspired to attend college than for students with other plans. My hypothesis that there would be small to moderate relationships between CampusReady scores and grade point average was also confirmed. I conclude with a discussion of the implications and limitations of these results for the argument supporting the validity of CampusReady score interpretation as well as the implications of these results for future CampusReady validation research.
This study concludes with the suggestion that measures of metacognitive learning skills, such as the CampusReady survey, show promise for measuring student preparation for college and careers when triangulated with other measures of college and career preparation.
2

Chinedozi, Ifeanyichukwu, and L. Lee Glenn. "Criterion Validity Measurements in Automated ECG Assessment." Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etsu-works/7484.

3

Clounch, Kristopher L. "Sex offender assessment: clinical utility and predictive validity." Diss., St. Louis, Mo.: University of Missouri--St. Louis, 2008. http://etd.umsl.edu/r3221.

4

Wessels, Gunter Frederik. "Salespeople's Selling Orientation: Reconceptualization, Measurement and Validity Assessment." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/202997.

Abstract:
A study of Elite Salespeople (ES), those salespeople who maintain and sustain consistent high performance in the sales task was completed to discover and understand elite salesperson behavior. Analysis of participants' responses to structured depth interview questions led to the emergence of a construct called a Selling Orientation (SO). SO is made up of behaviors that guide salespeople to build, maintain, and monitor their personal credibility both with customers and industry members, as well as within the company. A number of field pre-tests were performed to derive a measurement scale for SO. This process was followed by a field survey that measured SO in a sales force. Confirmatory factor analysis was performed to assess the validity of the measurement scale and results support internal consistency and construct validity of a short 9 item scale for SO. This study advances the understanding of sales performance related theory by illuminating attributes of ES's. Additionally, this study introduces the concept of a Selling Orientation that is associated with high sales performance and key account management. Finally, the study introduces a measurement scale useful in the study of salespeople's selling orientation.
5

Van, Leeuwen Sarah. "Validity of the Devereux Early Childhood Assessment instrument." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/31396.

Abstract:
Parent ratings of social-emotional development on standardized assessment instruments for a sample of 69 kindergarten children in a mid-size Canadian city are utilized to examine the validity of the Devereux Early Childhood Assessment (DECA; LeBuffe & Naglieri, 1999a). Results provide support for the DECA's reliability and internal validity when used with a sample different from the standardization sample. In general, results illustrate an expected pattern of convergence and divergence between the DECA scales and scales from two comparison instruments, the Behavior Assessment System for Children, Second Edition (Reynolds & Kamphaus, 2004) and the Preschool and Kindergarten Behavior Scales, Second Edition (Merrell, 2002). The DECA's protective factor scales relate positively to other measures of social skills/adaptive behaviours, and negatively to other measures of problematic/clinical behaviours; these correlations were strongest for the DECA's Self-Control scale, and weakest for the DECA's Attachment scale. The DECA’s Behavioral Concerns screener scale related negatively to other measures of social skills/adaptive behaviours, and positively to other measures of problematic/clinical behaviours, particularly those reflecting externalizing behaviour problems. The DECA is a psychometrically sound instrument that makes an important and unique contribution to the field of social-emotional assessment of young children.
6

Grimard, Donna Christine. "An assessment of the validity of the ministry Risk/Needs Assessment Form." Dissertation in Psychology, Carleton University, Ottawa, 1995.

7

Love, Ross. "A Construct Validity Analysis of a Leadership Assessment Center." TopSCHOLAR®, 2007. http://digitalcommons.wku.edu/theses/404.

Abstract:
This study was designed to assess the construct validity of a leadership assessment center. Participants were evaluated in a leadership assessment center and completed a 360 degree feedback tool designed to measure leadership. Convergent and discriminant validity coefficients were calculated between assessment center ratings and the 360 degree feedback ratings of four different leadership competencies. Results showed little support for the construct validity of the assessment center. Additionally, results replicated prior research regarding the construct validity of assessment centers, with high correlations among different competencies within exercises and low correlations between competencies measured via different methods (assessment center-360 degree feedback tool correlations and assessment center correlations across different exercises).
8

Mauk, Jacqueline Kern. "Reliability and Validity Assessment of the Exercise Suitability Scale." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/188035.

Abstract:
This study examined the reliability and the validity of the Exercise Suitability Scale (ESS). The ESS was a psychometric instrument developed to measure the suitability of four different forms of exercise (aerobics, bicycling, jogging, and swimming) for different individuals. Aspects of Exercise Suitability included in the ESS were ease, satisfaction, enjoyableness, fatigue, interest, convenience, comfort, safety, affordability, and time-involvement. Background information relating to the development of the ESS as well as methods and results of testing the instrument for reliability and validity were included in this study. Data from a student population were used for estimating the reliability and validity of the ESS. Reliability testing included computing inter-item and item-to-total correlation coefficients, Cronbach's alpha, and internal consistency coefficients (theta and omega) derived from factor analytic techniques. Several types of validity were assessed: content validity, criterion-related validity, and construct validity. Criterion-related validity was estimated by comparing scores on the ESS with information about participation in exercise. Multiple regression was also used to assess criterion-related validity. Principal components analysis was used to examine the construct and content validity of the ESS. Construct validity was also estimated by correlating ESS scale scores with a parallel instrumentation approach, a Q-Sort. Satisfactory reliability indices were obtained for all four ESS exercise scales. Criterion-related validity indices were also adequate. Factor analysis provided some evidence of content validity of the ESS, but provided little support for the construct validity of the ESS. Construct validity was supported, however by the convergence approach.
9

Brits, Nadia M. "Investigating the construct validity of a developmental assessment centre." Thesis, Stellenbosch : Stellenbosch University, 2011. http://hdl.handle.net/10019.1/18071.

Abstract:
Thesis (MComm)--University of Stellenbosch, 2011.
Organisations exist by transforming scarce factors of production into goods and services. Since organisations are run and managed by people, these institutions are largely dependent on their human production factor to achieve their main goal of maximising profits. Organisations strive to appoint suitable employees who will meet, even exceed, the requirements of a particular job position. In a constantly evolving world of work, advancing technology and inherent features of the modern working environment necessitate ongoing development of these individuals in order to keep up with the changes. Personnel selection and development are therefore crucial activities of the Industrial Psychologist and Human Resource Practitioner. The Assessment Centre (AC) is a popular measuring instrument that is often used for either selection or development purposes. This popular method of assessment has received a great degree of praise for its ability to predict future job performance. ACs have also shown incremental validity over and above both personality and cognitive ability measuring instruments when used for selection purposes. Nevertheless, despite the frequent use of ACs both internationally and locally in South Africa, ACs have been widely criticised on the basis of whether they actually measure the dimensions that they intend to measure. The question has often been asked whether ACs are construct valid, since low discriminant- and convergent validity, as well as persistent exercise effects, seem to dominate research findings. This question serves as the driving force of the present study. The aim of this study is to examine the construct validity of a development assessment centre (DAC). A convenience sample was used to pursue the research objective. The data was received from a private consultant company in the form of the AC ratings of 202 individuals who were assessed in a one-day DAC.
The DAC was developed for a South African banking institution and had three main purposes, namely to identify candidates who fit the role of a new job position, to reposition employees into more appropriate roles, and to provide future development opportunities to all participants. Twelve competencies were assessed by four different exercises. Several limitations were imposed by the nature of the convenience sample since the researcher did not have an influence on the design of the AC. The initial twelve competencies were not represented by a sufficient number of indicators and could consequently not be statistically analysed on an individual level. These dimensions therefore had to be used as sub-dimensions to be combined within their respective global (second-order) factors. This resulted in four single trait (ST) measurement models that had to be investigated first to provide face value of construct validity before adding exercises into the existing models. The four separate exercises were integrated into one global exercise effect. The insufficient number of indicators within the data set brought about only two of the four ST models to be examined for any existing exercise effects. The result was two single trait, single exercise (STSE) measurement models. Inter-item correlations were calculated in SPSS, followed by confirmatory factor analysis on each respective measurement model in EQS used to study the internal structure of the dimensions. With one dimension as the exception, the results of the CFA imply that the DAC's indicators (i.e. behavioural ratings) in each second-order factor, fail to reflect the underlying dimension, as it was intended to do. When adding the conglomerated exercise effect, only one of the two dimensions had plausible results with good model fit and parameter estimates that leaned towards dimension and not exercise effects. 
Based on these findings, serious doubt is placed on the validity of the developmental feedback provided to each participant after the completion of the DAC. With one dimension as the exception, the present study's results corroborate previous research findings on the construct validity of ACs.
10

Morris, William Alan. "A Rhetorical Approach to Examining Writing Assessment Validity Claims." Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1619704495223314.


Books on the topic "Assessment validity"

1

Validity evaluation in language assessment. Frankfurt am Main: Peter Lang, 2008.

2

Robbins, Douglas E., and Robert F. Sawicki, eds. Reliability and validity in neuropsychological assessment. New York: Plenum Press, 1989.

3

Franzen, Michael D. Reliability and validity in neuropsychological assessment. 2nd ed. New York: Kluwer Academic/Plenum Publishers, 2000.

4

Franzen, Michael D. Reliability and Validity in Neuropsychological Assessment. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4757-3224-5.

5

Uebersax, John. Validity inferences from interobserver agreement. Santa Monica, CA: Rand, 1989.

6

Valiga, Michael J. The accuracy of self-reported high school course and grade information. Iowa City, Iowa: American College Testing Program, 1987.

7

Laing, Joan. Accuracy of self-reported activities and accomplishments of college-bound students. Iowa City, Iowa: American College Testing Program, 1988.

9

Spray, Judith A. Effects of item difficulty heterogeneity on the estimation of true-score and classification consistency. Iowa City, Iowa: American College Testing Program, 1988.

10

Noble, Julie. Predicting grades in specific college freshman courses from ACT test scores and self-reported high school grades. Iowa City, Iowa: American College Testing Program, 1988.


Book chapters on the topic "Assessment validity"

1

Iverson, Grant L. "Symptom Validity Assessment." In Encyclopedia of Clinical Neuropsychology, 3383–85. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-57111-9_213.

2

Iverson, Grant L. "Symptom Validity Assessment." In Encyclopedia of Clinical Neuropsychology, 2450–52. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-0-387-79948-3_213.

3

Iverson, Grant L. "Symptom Validity Assessment." In Encyclopedia of Clinical Neuropsychology, 1–3. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56782-2_213-2.

4

Luiselli, James K. "Social Validity Assessment." In Applied Behavior Analysis Treatment of Violence and Aggression in Persons with Neurodevelopmental Disabilities, 85–103. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68549-2_5.

5

Luiselli, James K. "Social Validity Assessment." In Organizational Behavior Management Approaches for Intellectual and Developmental Disabilities, 46–66. New York: Routledge, 2021. http://dx.doi.org/10.4324/9780429324840-6.

6

Bonner, Sarah M., and Peggy P. Chen. "Validity in Classroom Assessment." In Systematic Classroom Assessment, 112–30. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9781315123127-10.

7

Chapelle, Carol A. "Validity in Language Assessment." In The Routledge Handbook of Second Language Acquisition and Language Testing, 11–20. The Routledge Handbooks in Second Language Acquisition. New York: Routledge, 2020. http://dx.doi.org/10.4324/9781351034784-3.

8

Sireci, Stephen G., and Tia Sukin. "Test validity." In APA handbook of testing and assessment in psychology, Vol. 1: Test theory and testing and assessment in industrial and organizational psychology, 61–84. Washington: American Psychological Association, 2013. http://dx.doi.org/10.1037/14047-004.

9

Franzen, Michael D. "Benton’s Neuropsychological Assessment." In Reliability and Validity in Neuropsychological Assessment, 153–70. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4757-3224-5_10.

10

Franzen, Michael D. "Elemental Considerations in Validity." In Reliability and Validity in Neuropsychological Assessment, 27–32. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4757-3224-5_4.


Conference papers on the topic "Assessment validity"

1

Jamson, Hamish. "Image Characteristics and Their Effect on Driving Simulator Validity." In Driving Assessment Conference. Iowa City, Iowa: University of Iowa, 2001. http://dx.doi.org/10.17077/drivingassessment.1036.

2

Knipling, Ronald R. "Naturalistic Driving Events: No Harm, No Foul, No Validity." In Driving Assessment Conference. Iowa City, Iowa: University of Iowa, 2015. http://dx.doi.org/10.17077/drivingassessment.1571.

3

Simons-Morton, Bruce G., Kaigang Li, Ashley Brooks-Russell, Johnathon Ehsani, Anuj Pradhan, Marie Claude Ouimet, and Sheila Klauer. "Validity of the C-RDS Self-Reported Risky Driving Measure." In Driving Assessment Conference. Iowa City, Iowa: University of Iowa, 2013. http://dx.doi.org/10.17077/drivingassessment.1462.

4

Knipling, Ronald R. "Threats to Scientific Validity in Truck Driver Hours-of-Service Studies." In Driving Assessment Conference. Iowa City, Iowa: University of Iowa, 2017. http://dx.doi.org/10.17077/drivingassessment.1662.

5

Nilsson, Gunnar. "Validity of Comfort Assessment in RAMSIS." In Digital Human Modeling For Design And Engineering Conference And Exposition. Warrendale, PA: SAE International, 1999. http://dx.doi.org/10.4271/1999-01-1900.

6

Misut, Martin, and Maria Misutova. "VALIDITY OF DURING-TERM E-ASSESSMENT." In International Technology, Education and Development Conference. IATED, 2017. http://dx.doi.org/10.21125/inted.2017.0762.

7

Roelofs, Erik, Jan Vissers, Marieke van Onna, and Reinoud Nägele. "Validity of an On-Road Driver Performance Assessment Within an Initial Driver Training Context." In Driving Assessment Conference. Iowa City, Iowa: University of Iowa, 2009. http://dx.doi.org/10.17077/drivingassessment.1361.

8

Heimlich, Michael C., Venkata Gutta, Anthony Edward Parker, and Tony Fattorini. "Microwave device model validity assessment for statistical analysis." In 2009 Asia Pacific Microwave Conference - (APMC 2009). IEEE, 2009. http://dx.doi.org/10.1109/apmc.2009.5384438.

9

Olorisade, Babatunde Kazeem, Pearl Brereton, and Peter Andras. "Reporting Statistical Validity and Model Complexity in Machine Learning based Computational Studies." In EASE'17: Evaluation and Assessment in Software Engineering. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3084226.3084283.

10

Walsh, Cole, Katherine N. Quinn, and Natasha G. Holmes. "Assessment of critical thinking in physics labs: concurrent validity." In 2018 Physics Education Research Conference. American Association of Physics Teachers, 2019. http://dx.doi.org/10.1119/perc.2018.pr.walsh.


Reports on the topic "Assessment validity"

1

Buttrey, Samuel L., Paul O'Connor, Angela O'Dea, and Quinn Kennedy. An Evaluation of the Construct Validity of the Command Safety Assessment Survey. Fort Belvoir, VA: Defense Technical Information Center, December 2010. http://dx.doi.org/10.21236/ada533937.

2

Clemente, Filipe Manuel, Rui Silva, Zeki Akyildiz, José Pino-Ortega, and Markel Rico-González. Validity and reliability of the inertial measurement unit for assessment of barbell velocity: A systematic review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, December 2020. http://dx.doi.org/10.37766/inplasy2020.12.0135.

3

Kolman, D. G., Y. Park, M. Stan, R. J. Hanrahan Jr., and D. P. Butt. An assessment of the validity of cerium oxide as a surrogate for plutonium oxide gallium removal studies. Office of Scientific and Technical Information (OSTI), March 1999. http://dx.doi.org/10.2172/329498.

4

Maurer, Todd J., and Michael Lippstreu. Self-Initiated Development of Leadership Capabilities: Toward Establishing the Validity of Key Motivational Constructs and Assessment Tools. Fort Belvoir, VA: Defense Technical Information Center, November 2010. http://dx.doi.org/10.21236/ada532359.

5

Saifer, Steffen. Validity, Reliability, and Utility of the Oregon Assessment for 3-5 Year Olds in Developmentally Appropriate Classrooms. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.1265.

6

Shih, C. F., and X. H. Liu. Validity limits in J-resistance curve determination: An assessment of the J_M Parameter. Volume 1. Office of Scientific and Technical Information (OSTI), February 1995. http://dx.doi.org/10.2172/10123475.

7

Billman, L., and D. Keyser. Assessment of the Value, Impact, and Validity of the Jobs and Economic Development Impacts (JEDI) Suite of Models. Office of Scientific and Technical Information (OSTI), August 2013. http://dx.doi.org/10.2172/1090964.

8

Podva-Baskin, H. Review and Validity of 2010 Health Risk Assessment for Hazardous Waste Treatment and Storage Facilities LLNL, Livermore Site (September 2019). Office of Scientific and Technical Information (OSTI), October 2019. http://dx.doi.org/10.2172/1571730.

9

Clemente, Filipe Manuel, Ricardo Lima, Zeki Akyildiz, José Pino-Ortega, and Markel Rico-González. Validity and reliability of the mobile applications for human’s strength, power, velocity and change-of-direction assessment: A systematic review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, January 2021. http://dx.doi.org/10.37766/inplasy2021.1.0089.

10

McCrea, Michael. An Independent, Prospective, Head to Head Study of the Reliability and Validity of Neurocognitive Test Batteries for the Assessment of Mild Traumatic Brain Injury. Fort Belvoir, VA: Defense Technical Information Center, March 2013. http://dx.doi.org/10.21236/ada573016.

