Academic literature on the topic 'Assessment validity'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Assessment validity.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
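For illustration, here is a minimal sketch (in Python) of what such a citation generator does behind the scenes, using the Kirkwood (2015) entry from the journal-article list below as sample metadata. The field names and the simplified APA/MLA templates are assumptions made for this example; a real formatter also handles author-initial abbreviation, title casing, italics, and the other styles listed above. This is not the site's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class JournalArticle:
    """Minimal metadata a reference generator needs for a journal article."""
    authors: str   # already formatted, e.g. "Kirkwood, Michael W." (assumed input form)
    year: int
    title: str
    journal: str
    volume: int
    issue: int
    pages: str
    doi: str


def format_apa(a: JournalArticle) -> str:
    # Simplified APA pattern: Author (Year). Title. Journal, Volume(Issue), pages. DOI
    return (f"{a.authors} ({a.year}). {a.title}. "
            f"{a.journal}, {a.volume}({a.issue}), {a.pages}. {a.doi}")


def format_mla(a: JournalArticle) -> str:
    # Simplified MLA pattern: Author. "Title." Journal, vol. V, no. N, Year, pp. pages. DOI
    return (f'{a.authors} "{a.title}." {a.journal}, vol. {a.volume}, '
            f"no. {a.issue}, {a.year}, pp. {a.pages}. {a.doi}")


# Sample record: the Kirkwood (2015) entry from the journal-article list below.
kirkwood = JournalArticle(
    authors="Kirkwood, Michael W.",
    year=2015,
    title="Pediatric validity assessment",
    journal="NeuroRehabilitation",
    volume=36,
    issue=4,
    pages="439-50",
    doi="http://dx.doi.org/10.3233/nre-151232",
)

if __name__ == "__main__":
    print(format_apa(kirkwood))   # APA-style string
    print(format_mla(kirkwood))   # MLA-style string
```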
Journal articles on the topic "Assessment validity"
Kirkwood, Michael W. "Pediatric validity assessment." NeuroRehabilitation 36, no. 4 (July 20, 2015): 439–50. http://dx.doi.org/10.3233/nre-151232.
Teglasi, Hedwig, Allison Joan Nebbergall, and Daniel Newman. "Construct validity and case validity in assessment." Psychological Assessment 24, no. 2 (June 2012): 464–75. http://dx.doi.org/10.1037/a0026012.
Chapelle, Carol A. "VALIDITY IN LANGUAGE ASSESSMENT." Annual Review of Applied Linguistics 19 (January 1999): 254–72. http://dx.doi.org/10.1017/s0267190599190135.
Watzl, Bernhard, and Gerhard Rechkemmer. "Validity of dietary assessment." American Journal of Clinical Nutrition 74, no. 2 (August 1, 2001): 273. http://dx.doi.org/10.1093/ajcn/74.2.273.
Koninckx, Philippe R., Jasper Verguts, and Dirk Timmerman. "Assessment of measurement validity." Fertility and Sterility 85, no. 1 (January 2006): 268. http://dx.doi.org/10.1016/j.fertnstert.2005.10.003.
Dozortseva, E. G., and A. G. Krasavina. "Assessment of juveniles testimonies’ validity." Современная зарубежная психология 4, no. 3 (2015): 47–56. http://dx.doi.org/10.17759/jmfp.2015040306.
Larrabee, Glenn J. "Performance Validity and Symptom Validity in Neuropsychological Assessment." Journal of the International Neuropsychological Society 18, no. 4 (May 8, 2012): 625–30. http://dx.doi.org/10.1017/s1355617712000240.
Mislevy, Robert J. "Validity by Design." Educational Researcher 36, no. 8 (November 2007): 463–69. http://dx.doi.org/10.3102/0013189x07311660.
VARVERI, Loredana, Gioacchino LAVANCO, and Santo DI NUOVO. "Buying Addiction: Reliability and Construct Validity of an Assessment Questionnaire." Postmodern Openings 06, no. 01 (June 30, 2015): 149–60. http://dx.doi.org/10.18662/po/2015.0601.10.
AKBIYIK, Melike, and Murat SENTURK. "Assessment Scale of Academic Enablers: A Validity and Reliability Study." Eurasian Journal of Educational Research 19, no. 80 (April 3, 2019): 1–26. http://dx.doi.org/10.14689/ejer.2019.80.11.
Full textDissertations / Theses on the topic "Assessment validity"
French, Elizabeth. "The Validity of the CampusReady Survey." Thesis, University of Oregon, 2014. http://hdl.handle.net/1794/18369.
Chinedozi, Ifeanyichukwu, and L. Lee Glenn. "Criterion Validity Measurements in Automated ECG Assessment." Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etsu-works/7484.
Clounch, Kristopher L. "Sex offender assessment: clinical utility and predictive validity." Diss., St. Louis, Mo.: University of Missouri--St. Louis, 2008. http://etd.umsl.edu/r3221.
Wessels, Gunter Frederik. "Salespeople's Selling Orientation: Reconceptualization, Measurement and Validity Assessment." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/202997.
Van Leeuwen, Sarah. "Validity of the Devereux Early Childhood Assessment instrument." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/31396.
Full textArts, Faculty of
Psychology, Department of
Graduate
Grimard, Donna Christine. "An assessment of the validity of the Ministry Risk/Needs Assessment Form." Dissertation, Department of Psychology, Carleton University, Ottawa, 1995.
Love, Ross. "A Construct Validity Analysis of a Leadership Assessment Center." TopSCHOLAR®, 2007. http://digitalcommons.wku.edu/theses/404.
MAUK, JACQUELINE KERN. "RELIABILITY AND VALIDITY ASSESSMENT OF THE EXERCISE SUITABILITY SCALE." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/188035.
Brits, Nadia M. "Investigating the construct validity of a developmental assessment centre." Thesis, Stellenbosch: Stellenbosch University, 2011. http://hdl.handle.net/10019.1/18071.
Full textAFRIKAANSE OPSOMMING: Organisasies bestaan om skaars produksiefaktore te verander na bemarkbare goedere en dienste. Aangesien organisasies deur mense bedryf en bestuur word, is hierdie instellings grotendeels afhanklik van hul menslike produksiefaktor om hul hoofdoel te bereik, nl. om hul wins te vergroot. Organisasies poog om geskikte werknemers aan te stel wat sal voldoen aan die vereistes van 'n spesifieke pos of dit selfs sal oortref. In 'n werkswêreld wat konstant verander, vereis tegnologie en die kenmerke van die werkswêreld dat hierdie persone deurgaans ontwikkel word om by te bly met verandering. Personeelkeuring en –ontwikkeling is dus kritieke bedrywighede van die Bedryfsielkundige en Menslike Hulpbronpraktisyn. Die Takseersentrum is 'n gewilde meetinstrument wat dikwels gebruik word vir die doel van keuring of ontwikkeling. Hierdie gewilde assesseringsmetode word hoog aangeskryf vir sy vermoë om toekomstige werksprestasie te voorspel. Takseersentrums wat gebruik word vir keuring doeleindes, toon inkrementele geldigheid bo meetinstrumente van persoonlikheid sowel as kognitiewe vaardigheidstoetse. Al word takseersentrums internasionaal en hier in Suid-Afrika dikwels gebruik, word hulle ook dikwels gekritiseer op grond van die vraag of hulle werklik die dimensies meet wat hulle veronderstel is om te meet. Die konstrukgeldigheid van takseersentrums word dikwels bevraagteken aangesien lae diskriminante en konvergerende geldigheid, sowel as hardnekkige oefeningseffekte, navorsingsbevindinge oorheers. Hierdie vraag is die beweegrede vir die huidige studie. Die doel met hierdie studie is om die konstrukgeldigheid van 'n ontwikkelingstakseersentrum te ondersoek. 'n Geriefsteekproef is gebruik om die navorsing te doen. Die data is verskaf deur 'n private konsultasie maatskappy in die vorm van die takseersentrumtellings van 202 persone wat in 'n eendaagse sentrum geassesseer is. Die sentrum is ontwikkel vir 'n Suid-Afrikaanse bankinstelling en het drie hoofdoelwitte, nl. om kandidate te identifiseer vir die rol van 'n nuwe posbeskrywing, om werknemers na meer topaslike rolle te verskuif en om toekomstige ontwikkelingsgeleenthede vir alle deelnemers te verskaf. Twaalf vaardighede is deur vier verskillende oefeninge geëvalueer. Verskeie beperkinge is opgelê deur die aard van die geriefsteekproef deurdat die navorser geen invloed op die ontwerp van die takseersentrum gehad het nie. Die aanvanklike twaalf vaardighede kon nie afsonderlik ontleed word nie en moes gevolglik as subdimensies in hul onderskeie globale faktore gekombineer word. Dit het gelei tot vier enkeldimensie (ED) metingsmodelle wat eers ondersoek moes word om gesigswaarde van konstrukgeldigheid te bewys voordat oefeninge by die bestaande modelle gevoeg kon word. Die vier afsonderlike oefeninge is in een globale oefeningseffek saamgevoeg. As gevolg van die ontoereikende getal indikators in die datastel kon net twee van die vier ED-modelle oefeninge insluit en dit het gelei tot twee enkeldimensie-, enkeloefening-metingsmodelle (EDEO). Inter-itemkorrelatsies is in SPSS bereken, gevolg deur bevestigende faktorontleding van elke afsonderlike metingsmodel in EQS wat gebruik is om die interne struktuur van die dimensies te bestudeer. Met een dimensie as uitsondering, impliseer die uitslae van die CFA dat die indikators van die takseersentrum (d.w.s. gedragsbeoordelings) nie daarin slaag om die onderliggende dimensie te weerspieël soos dit veronderstel was om te doen nie. 
Nadat die saamgestelde oefeningseffek byvoeg is, het slegs een van die twee dimensies geloofwaardige uitslae met buitengewoon goeie modelpassing en parameterskattings wat dui op dimensie- eerder as oefeningseffekte. As gevolg van hierdie bevindings word die geldigheid van die ontwikkelingsterugvoer wat elke deelnemer na die evaluering ontvang het, ernstig in twyfel getrek. Met die uitsondering van een dimensie se resultate, bevestig die resultate van hierdie studie vorige navorsingsbevindinge.
ABSTRACT: Organisations exist by transforming scarce factors of production into goods and services. Since organisations are run and managed by people, these institutions are largely dependent on their human production factor to achieve their main goal of maximising profits. Organisations strive to appoint suitable employees who will meet, or even exceed, the requirements of a particular job position. In a constantly evolving world of work, advancing technology and the inherent features of the modern working environment necessitate ongoing development of these individuals in order to keep up with change. Personnel selection and development are therefore crucial activities of the Industrial Psychologist and Human Resource Practitioner. The Assessment Centre (AC) is a popular measuring instrument that is often used for either selection or development purposes. This method of assessment has received a great deal of praise for its ability to predict future job performance. ACs have also shown incremental validity over and above both personality and cognitive ability measuring instruments when used for selection purposes. Nevertheless, despite the frequent use of ACs both internationally and locally in South Africa, ACs have been widely criticised on the basis of whether they actually measure the dimensions that they intend to measure. The question has often been asked whether ACs are construct valid, since low discriminant and convergent validity, as well as persistent exercise effects, seem to dominate research findings. This question serves as the driving force of the present study. The aim of this study is to examine the construct validity of a development assessment centre (DAC). A convenience sample was used to pursue the research objective. The data were received from a private consulting company in the form of the AC ratings of 202 individuals who were assessed in a one-day DAC. The DAC was developed for a South African banking institution and had three main purposes, namely to identify candidates who fit the role of a new job position, to reposition employees into more appropriate roles, and to provide future development opportunities to all participants. Twelve competencies were assessed by four different exercises. Several limitations were imposed by the nature of the convenience sample, since the researcher did not have an influence on the design of the AC. The initial twelve competencies were not represented by a sufficient number of indicators and could consequently not be statistically analysed on an individual level. These dimensions therefore had to be used as sub-dimensions combined within their respective global (second-order) factors. This resulted in four single-trait (ST) measurement models that had to be investigated first to provide face-value evidence of construct validity before exercises were added into the existing models. The four separate exercises were integrated into one global exercise effect. The insufficient number of indicators within the data set meant that only two of the four ST models could be examined for exercise effects, resulting in two single-trait, single-exercise (STSE) measurement models. Inter-item correlations were calculated in SPSS, followed by confirmatory factor analysis of each respective measurement model in EQS, which was used to study the internal structure of the dimensions. With one dimension as the exception, the results of the CFA imply that the DAC's indicators (i.e. the behavioural ratings) in each second-order factor fail to reflect the underlying dimension as they were intended to. When the conglomerated exercise effect was added, only one of the two dimensions had plausible results, with good model fit and parameter estimates that leaned towards dimension rather than exercise effects. Based on these findings, serious doubt is placed on the validity of the developmental feedback provided to each participant after completion of the DAC. With one dimension as the exception, the present study's results corroborate previous research findings on the construct validity of ACs.
Morris, William Alan. "A Rhetorical Approach to Examining Writing Assessment Validity Claims." Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1619704495223314.
Books on the topic "Assessment validity"
Robbins, Douglas E., and Robert F. Sawicki, eds. Reliability and validity in neuropsychological assessment. New York: Plenum Press, 1989.
Franzen, Michael D. Reliability and validity in neuropsychological assessment. 2nd ed. New York: Kluwer Academic/Plenum Publishers, 2000.
Franzen, Michael D. Reliability and Validity in Neuropsychological Assessment. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4757-3224-5.
Uebersax, John. Validity inferences from interobserver agreement. Santa Monica, CA: Rand, 1989.
Valiga, Michael J. The accuracy of self-reported high school course and grade information. Iowa City, Iowa: American College Testing Program, 1987.
Laing, Joan. Accuracy of self-reported activities and accomplishments of college-bound students. Iowa City, Iowa: American College Testing Program, 1988.
Spray, Judith A. Effects of item difficulty heterogeneity on the estimation of true-score and classification consistency. Iowa City, Iowa: American College Testing Program, 1988.
Noble, Julie. Predicting grades in specific college freshman courses from ACT test scores and self-reported high school grades. Iowa City, Iowa: American College Testing Program, 1988.
Book chapters on the topic "Assessment validity"
Iverson, Grant L. "Symptom Validity Assessment." In Encyclopedia of Clinical Neuropsychology, 3383–85. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-57111-9_213.
Iverson, Grant L. "Symptom Validity Assessment." In Encyclopedia of Clinical Neuropsychology, 2450–52. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-0-387-79948-3_213.
Iverson, Grant L. "Symptom Validity Assessment." In Encyclopedia of Clinical Neuropsychology, 1–3. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56782-2_213-2.
Luiselli, James K. "Social Validity Assessment." In Applied Behavior Analysis Treatment of Violence and Aggression in Persons with Neurodevelopmental Disabilities, 85–103. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68549-2_5.
Luiselli, James K. "Social Validity Assessment." In Organizational Behavior Management Approaches for Intellectual and Developmental Disabilities, 46–66. New York: Routledge, 2021. http://dx.doi.org/10.4324/9780429324840-6.
Bonner, Sarah M., and Peggy P. Chen. "Validity in Classroom Assessment." In Systematic Classroom Assessment, 112–30. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9781315123127-10.
Chapelle, Carol A. "Validity in Language Assessment." In The Routledge Handbook of Second Language Acquisition and Language Testing, 11–20. New York: Routledge, 2020. http://dx.doi.org/10.4324/9781351034784-3.
Sireci, Stephen G., and Tia Sukin. "Test validity." In APA handbook of testing and assessment in psychology, Vol. 1: Test theory and testing and assessment in industrial and organizational psychology, 61–84. Washington: American Psychological Association, 2013. http://dx.doi.org/10.1037/14047-004.
Franzen, Michael D. "Benton’s Neuropsychological Assessment." In Reliability and Validity in Neuropsychological Assessment, 153–70. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4757-3224-5_10.
Franzen, Michael D. "Elemental Considerations in Validity." In Reliability and Validity in Neuropsychological Assessment, 27–32. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4757-3224-5_4.
Conference papers on the topic "Assessment validity"
Jamson, Hamish. "Image Characteristics and Their Effect on Driving Simulator Validity." In Driving Assessment Conference. Iowa City, Iowa: University of Iowa, 2001. http://dx.doi.org/10.17077/drivingassessment.1036.
Knipling, Ronald R. "Naturalistic Driving Events: No Harm, No Foul, No Validity." In Driving Assessment Conference. Iowa City, Iowa: University of Iowa, 2015. http://dx.doi.org/10.17077/drivingassessment.1571.
Simmons-Morton, Bruce G., Kaigang Li, Ashley Brooks-Russell, Johnathon Ehsani, Anuj Pradhan, Marie Claude Ouimet, and Sheila Klauer. "Validity of the C-RDS Self-Reported Risky Driving Measure." In Driving Assessment Conference. Iowa City, Iowa: University of Iowa, 2013. http://dx.doi.org/10.17077/drivingassessment.1462.
Knipling, Ronald R. "Threats to Scientific Validity in Truck Driver Hours-of-Service Studies." In Driving Assessment Conference. Iowa City, Iowa: University of Iowa, 2017. http://dx.doi.org/10.17077/drivingassessment.1662.
Nilsson, Gunnar. "Validity of Comfort Assessment in RAMSIS." In Digital Human Modeling for Design and Engineering Conference and Exposition. Warrendale, PA: SAE International, 1999. http://dx.doi.org/10.4271/1999-01-1900.
Misut, Martin, and Maria Misutova. "VALIDITY OF DURING-TERM E-ASSESSMENT." In International Technology, Education and Development Conference. IATED, 2017. http://dx.doi.org/10.21125/inted.2017.0762.
Roelofs, Erik, Jan Vissers, Marieke van Onna, and Reinoud Nägele. "Validity of an On-Road Driver Performance Assessment Within an Initial Driver Training Context." In Driving Assessment Conference. Iowa City, Iowa: University of Iowa, 2009. http://dx.doi.org/10.17077/drivingassessment.1361.
Heimlich, Michael C., Venkata Gutta, Anthony Edward Parker, and Tony Fattorini. "Microwave device model validity assessment for statistical analysis." In 2009 Asia Pacific Microwave Conference (APMC 2009). IEEE, 2009. http://dx.doi.org/10.1109/apmc.2009.5384438.
Olorisade, Babatunde Kazeem, Pearl Brereton, and Peter Andras. "Reporting Statistical Validity and Model Complexity in Machine Learning based Computational Studies." In EASE'17: Evaluation and Assessment in Software Engineering. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3084226.3084283.
Walsh, Cole, Katherine N. Quinn, and Natasha G. Holmes. "Assessment of critical thinking in physics labs: concurrent validity." In 2018 Physics Education Research Conference. American Association of Physics Teachers, 2019. http://dx.doi.org/10.1119/perc.2018.pr.walsh.
Reports on the topic "Assessment validity"
Buttrey, Samuel L., Paul O'Connor, Angela O'Dea, and Quinn Kennedy. An Evaluation of the Construct Validity of the Command Safety Assessment Survey. Fort Belvoir, VA: Defense Technical Information Center, December 2010. http://dx.doi.org/10.21236/ada533937.
Clemente, Filipe Manuel, Rui Silva, Zeki Akyildiz, José Pino-Ortega, and Markel Rico-González. Validity and reliability of the inertial measurement unit for assessment of barbell velocity: A systematic review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, December 2020. http://dx.doi.org/10.37766/inplasy2020.12.0135.
Kolman, D. G., Y. Park, M. Stan, R. J. Hanrahan Jr., and D. P. Butt. An assessment of the validity of cerium oxide as a surrogate for plutonium oxide gallium removal studies. Office of Scientific and Technical Information (OSTI), March 1999. http://dx.doi.org/10.2172/329498.
Maurer, Todd J., and Michael Lippstreu. Self-Initiated Development of Leadership Capabilities: Toward Establishing the Validity of Key Motivational Constructs and Assessment Tools. Fort Belvoir, VA: Defense Technical Information Center, November 2010. http://dx.doi.org/10.21236/ada532359.
Saifer, Steffen. Validity, Reliability, and Utility of the Oregon Assessment for 3-5 Year Olds in Developmentally Appropriate Classrooms. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.1265.
Shih, C. F., and X. H. Liu. Validity limits in J-resistance curve determination: An assessment of the J_M Parameter. Volume 1. Office of Scientific and Technical Information (OSTI), February 1995. http://dx.doi.org/10.2172/10123475.
Billman, L., and D. Keyser. Assessment of the Value, Impact, and Validity of the Jobs and Economic Development Impacts (JEDI) Suite of Models. Office of Scientific and Technical Information (OSTI), August 2013. http://dx.doi.org/10.2172/1090964.
Podva-Baskin, H. Review and Validity of 2010 Health Risk Assessment for Hazardous Waste Treatment and Storage Facilities LLNL, Livermore Site (September 2019). Office of Scientific and Technical Information (OSTI), October 2019. http://dx.doi.org/10.2172/1571730.
Clemente, Filipe Manuel, Ricardo Lima, Zeki Akyildiz, José Pino-Ortega, and Markel Rico-González. Validity and reliability of the mobile applications for human’s strength, power, velocity and change-of-direction assessment: A systematic review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, January 2021. http://dx.doi.org/10.37766/inplasy2021.1.0089.
McCrea, Michael. An Independent, Prospective, Head to Head Study of the Reliability and Validity of Neurocognitive Test Batteries for the Assessment of Mild Traumatic Brain Injury. Fort Belvoir, VA: Defense Technical Information Center, March 2013. http://dx.doi.org/10.21236/ada573016.