Academic literature on the topic 'Criterion-referenced assessment'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Criterion-referenced assessment.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Criterion-referenced assessment"

1

Hudson, Thom. "Trends in Assessment Scales and Criterion-Referenced Language Assessment." Annual Review of Applied Linguistics 25 (March 2005): 205–27. http://dx.doi.org/10.1017/s0267190505000115.

Abstract:
Two current developments reflecting a common concern in second/foreign language assessment are the development of: (1) scales for describing language proficiency/ability/performance; and (2) criterion-referenced performance assessments. Both developments are motivated by a perceived need to achieve communicatively transparent test results anchored in observable behaviors. Each of these developments in one way or another is an attempt to recognize the complexity of language in use, the complexity of assessing language ability, and the difficulty in interpreting potential interactions of scale, task, trait, text, and ability. They reflect a current appetite for language assessment anchored in the world of functions and events, but also must address how the worlds of functions and events contain non-skill-specific and discretely hierarchical variability. As examples of current tests that attempt to use performance criteria, the chapter reviews the Canadian Language Benchmark, the Common European Framework, and the Assessment of Language Performance projects.
2

Turnbull, Jeffrey M. "What Is… Normative versus Criterion-referenced Assessment." Medical Teacher 11, no. 2 (January 1989): 145–50. http://dx.doi.org/10.3109/01421598909146317.

3

Freeman, Liz, and Andy Miller. "Norm-referenced, Criterion-referenced, and Dynamic Assessment: What exactly is the point?" Educational Psychology in Practice 17, no. 1 (March 2001): 3–16. http://dx.doi.org/10.1080/02667360120039942.

4

Fabiano-Smith, Leah. "Standardized Tests and the Diagnosis of Speech Sound Disorders." Perspectives of the ASHA Special Interest Groups 4, no. 1 (February 26, 2019): 58–66. http://dx.doi.org/10.1044/2018_pers-sig1-2018-0018.

Abstract:
Purpose The purpose of this tutorial is to provide speech-language pathologists with the knowledge and tools to (a) evaluate standardized tests of articulation and phonology and (b) utilize criterion-referenced approaches to assessment in the absence of psychometrically strong standardized tests. Method Relevant literature on psychometrics of standardized tests used to diagnose speech sound disorders in children is discussed. Norm-referenced and criterion-referenced approaches to assessment are reviewed, and a step-by-step guide to a criterion-referenced assessment is provided. Published criterion references are provided as a quick and easy resource guide for professionals. Results Few psychometrically strong standardized tests exist for the evaluation of speech sound disorders for monolingual and bilingual populations. The use of criterion-referenced testing is encouraged to avoid diagnostic pitfalls. Discussion Speech-language pathologists who increase their use of criterion-referenced measures and decrease their use of standardized tests will arrive at more accurate diagnoses of speech sound disorders.
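The criterion-referenced decision described in this tutorial reduces to comparing an observed measure against a published criterion rather than a norm table. A minimal sketch using Percent Consonants Correct (PCC); the cutoff value and counts below are invented for illustration, not taken from the article's published criterion references:

```python
# Hedged sketch of a criterion-referenced scoring decision.
# PCC_CRITERION is a hypothetical cutoff, not a published value.

def percent_consonants_correct(correct, attempted):
    """PCC: percentage of consonants produced correctly in a speech sample."""
    return 100.0 * correct / attempted

PCC_CRITERION = 85.0  # invented criterion for a given age band

pcc = percent_consonants_correct(correct=68, attempted=85)
decision = "meets" if pcc >= PCC_CRITERION else "below"
print(f"PCC = {pcc:.1f}% -> {decision} criterion")
```

The point of the approach is that the decision depends only on the child's performance relative to the criterion, not on the child's standing in a norming sample.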
5

Masters, Geoffrey N., and John Evans. "A sense of direction in criterion-referenced assessment." Studies in Educational Evaluation 12, no. 3 (January 1986): 257–65. http://dx.doi.org/10.1016/0191-491x(86)90044-1.

6

McCauley, Rebecca J. "Familiar Strangers." Language, Speech, and Hearing Services in Schools 27, no. 2 (April 1996): 122–31. http://dx.doi.org/10.1044/0161-1461.2702.122.

Abstract:
Although frequently used in the assessment and treatment of communication disorders, criterion-referenced measures are often not well understood, making them both familiar and alien—thus, familiar strangers. This article is designed to better acquaint test users with the characteristics associated with the use and evaluation of criterion-referenced measures, particularly as they differ from norm-referenced measures. Guidelines are proposed for the evaluation and selection of standardized criterion-referenced measures as well as for the development and ongoing evaluation of informal criterion-referenced measures.
7

Brantmeier, Cindy, and Robert Vanderplank. "Descriptive and criterion-referenced self-assessment with L2 readers." System 36, no. 3 (September 2008): 456–77. http://dx.doi.org/10.1016/j.system.2008.03.001.

8

Simpson, Mary. "Why criterion‐referenced assessment is unlikely to improve learning." Curriculum Journal 1, no. 2 (September 1990): 171–83. http://dx.doi.org/10.1080/0958517900010205.

9

Tay, Kai Meng, and Chee Peng Lim. "A fuzzy inference system-based criterion-referenced assessment model." Expert Systems with Applications 38, no. 9 (September 2011): 11129–36. http://dx.doi.org/10.1016/j.eswa.2011.02.158.

10

Wilhelm, Kim Hughes. "Combined Assessment Model for EAP Writing Workshop: Portfolio Decision-Making, Criterion-Referenced Grading, and Contract Negotiation." TESL Canada Journal 14, no. 1 (October 26, 1996): 21. http://dx.doi.org/10.18806/tesl.v14i1.675.

Abstract:
An assessment model that combines portfolio decision-making with criterion-referenced grading is described as applied in an EAP (English for Academic Purposes) pre-university ESL writing program. In this model, portfolio decision-making is combined with criterion-referenced assessment. The portfolio concept is valuable in that learners are encouraged to "own" and to make decisions about their work. At the same time, criterion-referenced assessment allows teachers to set meaningful, consistent standards while encouraging learner self- and peer assessment. Learner involvement may be further encouraged through the use of contract grading and collaborative revision of grading criteria. For academically oriented adult ESL learners, in particular, this assessment scheme encourages learner control while keeping performance-based standards at desirable levels.

Dissertations / Theses on the topic "Criterion-referenced assessment"

1

MacIntyre, Christine Campbell. "Criterion-referenced assessment for modern dance education." Thesis, University of Stirling, 1985. http://hdl.handle.net/1893/2182.

Abstract:
This study monitored the conceptualisation, implementation and evaluation of criterion-referenced assessment for Modern Dance by two teachers specifically chosen because they represented the two most usual stances in current teaching, i.e. one valuing dance as part of a wider, more general education, the other as a performance art. The Review of Literature investigated the derivation of these differences and identified the kinds of assessment criteria which would be relevant in each context. It then questioned both the timing of the application of the criteria and the benefits and limitations inherent in using a pre-active or re-active model. Lastly it examined the philosophy of criterion-referenced assessment and thereafter formulated the main hypothesis, i.e. "That criterion-referenced assessment is an appropriate and realistic method for Modern Dance in schools". Both the main and sub-hypotheses were tested by the use of Case Study/Collaborative Action research. In this chosen method of investigation the teachers' actions were the primary focus of study while the researcher played a supportive but ancillary role. The study has three sections. The first describes the process experienced by the teachers as they identified their criteria for assessment and put their new strategy into action. It shows the problems which arose and the steps which were taken to resolve them. It gives exemplars of the assessment instruments which were designed and evaluates their use. It highlights the differences in the two approaches to dance and the different competencies required by the teachers if their criterion-referenced strategy was adequately and validly to reflect the important features of their course. In the second section the focus moves from the teachers to the pupils. Given that the pupils have participated in different programmes of dance, the study investigates what criteria the pupils spontaneously use and what criteria they can be taught to use.
It does this through the introduction of self-assessment in each course. In this way the pupils' observations and movement analyses were made explicit and through discussion, completing specially prepared leaflets and using video, they were recorded and compared. And finally, the research findings were circulated to a larger number of teachers to find to what extent their concerns and problems had been anticipated by the first two and to discover if they, without extensive support, could also mount a criterion-referenced assessment strategy with an acceptable amount of effort and within a realistic period of time. And given that they could, the final question concerned the evaluations of all those participants i.e. teachers, parents and pupils. Would this extended group similarly endorse the strategy and strengthen the claim that criterion-referenced assessment was a valid and beneficial way of assessing Modern Dance in Schools?
2

Reynolds, J. Karen. "Criterion-referenced assessment and evaluation as socially situated practice." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq25144.pdf.

3

Wikström, Christina. "Criterion-referenced measurement for educational evaluation and selection." Doctoral thesis, Umeå universitet, Beteendevetenskapliga mätningar, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-492.

Abstract:
In recent years, Sweden has adopted a criterion-referenced grading system, where the grade outcome is used for several purposes, but foremost for educational evaluation on student- and school levels as well as for selection to higher education. This thesis investigates the consequences of using criterion-referenced measurement for both educational evaluation and selection purposes. The thesis comprises an introduction and four papers that empirically investigate school grades and grading practices in Swedish upper secondary schools. The first paper investigates the effect of school competition on the school grades. The analysis focuses on how students in schools with and without competition are ranked, based on their grades and SweSAT scores. The results show that schools that are exposed to competition tend to grade their students higher than other schools. This effect is found to be related to the use of grades as quality indicators for the schools, which means that schools that compete for their students tend to be more lenient, hence inflating the grades. The second paper investigates grade averages over a six-year period, starting with the first cohort who graduated from upper secondary school with a GPA based on criterion-referenced grades. The results show that grades have increased every year since the new grading system was introduced, which cannot be explained by improved performances, selection effects or strategic course choices. The conclusion is that the increasing pressure for high grading has led to grade inflation over time. The third paper investigates if grading practices are related to school size. The study is based on a similar model as paper I, but with data from graduates over a six-year period, and with school size as the main focus. 
The results show small but significant size effects, suggesting that the smallest schools (<300 students) are higher grading than other schools, and that the largest schools (>1000 students) are lower grading than other schools. This is assumed to be an effect of varying assessment practices, in combination with external and internal pressure for high grading. The fourth and final paper investigates if grading practices differ among upper secondary programmes, and how the course compositions in the programmes affect how students are ranked in the process of selection to higher education. The results show that students in vocationally oriented programmes are higher graded than other students, and also favoured by their programmes’ course compositions, which have a positive effect on their competitive strength in the selection to higher education. In the introductory part of the thesis, these results are discussed from the perspective of a theoretical framework, with special attention to validity issues in a broad perspective. The conclusion is that the criterion-referenced grades, both in terms of being used for educational evaluation, and as an instrument for selection to higher education, are wanting both in reliability and in validity. This is related to the conflicting purposes of the instruments, in combination with few control mechanisms, which affects how grades are interpreted and used, hence leading to consequences for students, schools and society in general.
4

Dyson, Kaitlyn Nicole. "Predicting Performance on Criterion-Referenced Reading Tests with Benchmark Assessments." BYU ScholarsArchive, 2008. https://scholarsarchive.byu.edu/etd/1483.

Abstract:
The current research study investigates the predictive value of two frequently-used benchmark reading assessments: Developmental Reading Assessment (DRA) and the Dynamic Indicators of Basic Early Literacy Skills (DIBELS). With an increasing emphasis on high-stakes testing to measure reading proficiency, benchmark assessments may assist in predicting end-of-year performance on high-stakes testing. Utah's high-stakes measurement of end-of-year reading achievement is the English Language Arts Criterion-Referenced Test (ELA-CRT). A Utah urban school district provided data for students who completed the DRA, DIBELS, and ELA-CRT in the 2005-2006 school year. The primary purpose of the study was to determine the accuracy with which the Fall administrations of the DRA and the DIBELS predicted performance on the ELA-CRT. Supplementary analysis also included cross-sectional data for the DIBELS. Results indicated that both Fall administrations of the DRA and the DIBELS were statistically significant in predicting performance on the ELA-CRT. Students who were high risk on the benchmark assessments were less likely to score proficiently on the ELA-CRT. Also, demographic factors did not appear to affect individual performance on the ELA-CRT. Important implications include the utility of data collected from benchmark assessments to address immediate interventions for students at risk of failing end-of-year, high-stakes testing.
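The basic check the study describes, whether fall benchmark risk categories separate students who later score proficient on the end-of-year test from those who do not, can be illustrated with a small pure-Python sketch. The records below are invented, not the district's data:

```python
# Invented (fall_risk_category, proficient_on_ELA_CRT) records for a few
# hypothetical students; the study analyzed a full district cohort.
students = [
    ("low", True), ("low", True), ("some", True),
    ("some", False), ("high", False), ("high", False), ("high", True),
]

def proficiency_rate(records, risk):
    """Share of students in a risk category who scored proficient."""
    outcomes = [proficient for category, proficient in records if category == risk]
    return sum(outcomes) / len(outcomes)

for risk in ("low", "some", "high"):
    print(f"{risk}-risk students proficient: {proficiency_rate(students, risk):.0%}")
```

A real predictive-validity analysis would add significance tests and control for demographics, as the dissertation does; the sketch only shows the descriptive comparison at its core.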
5

Kerrison, Terence Michael. "A study of the effect of criterion-referencing on teaching, learning and assessment in secondary schools." Thesis, Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18047300.

6

Walker, Shunda F. "Comparing Fountas and Pinnell's Reading Levels to Reading Scores on the Criterion Referenced Competency Test." ScholarWorks, 2016. https://scholarworks.waldenu.edu/dissertations/1987.

Abstract:
Reading competency is related to individuals' success at school and in their careers. Students who experience significant problems with reading may be at risk of long-term academic and social problems. High-quality measures that determine student progress toward curricular goals are needed for early identification and interventions to improve reading abilities and ultimately prevent subsequent failure in reading. The purpose of this quantitative nonexperimental ex post facto research study was to determine whether a correlation existed between student achievement scores on the Fountas and Pinnell Reading Benchmark Assessment and reading comprehension scores on the Criterion Reference Competency Test (CRCT). Item response theory served as the conceptual framework for examining whether a relationship exists between Fountas and Pinnell Benchmark Instructional Reading Levels and the reading comprehension scores on the CRCT of students in Grades 3, 4, and 5 in the year 2013-2014. Archival data for 329 students in Grades 3-5 were collected and analyzed through Spearman's rank-order correlation. The results showed positive relationships between the scores. The findings promote positive social change by supporting the use of benchmark assessment data to identify at-risk reading students early.
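The Spearman's rank-order correlation used in that analysis needs no statistics library: rank both variables (averaging ranks over ties) and take the Pearson correlation of the ranks. A self-contained sketch with invented reading levels and scale scores (the study itself used archival records for 329 students):

```python
# Spearman rank-order correlation from scratch.

def average_ranks(values):
    """Assign 1-based ranks, averaging over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend the run of tied values
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented data: F&P letter levels coded as numbers, CRCT reading scale scores.
reading_levels = [18, 24, 28, 30, 34, 38]
crct_scores = [790, 810, 805, 830, 845, 860]
print(round(spearman(reading_levels, crct_scores), 3))
```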
7

Domaleski, Christopher Stephen. "Exploring the Efficacy of Pre-Equating a Large Scale Criterion-Referenced Assessment with Respect to Measurement Equivalence." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/eps_diss/3.

Abstract:
This investigation examined the practice of relying on field test item calibrations in advance of the operational administration of a large scale assessment for purposes of equating and scaling. Often termed “pre-equating,” the effectiveness of this method is explored for a statewide, high-stakes assessment in grades three, five, and seven for the content areas of language arts, mathematics, and social studies. Pre-equated scaling was based on item calibrations using the Rasch model from an off-grade field test event in which students tested were one grade higher than the target population. These calibrations were compared to those obtained from post-equating, which used the full statewide population of examinees. Item difficulty estimates and Test Characteristic Curves (TCC) were compared for each approach and found to be similar. The Root Mean Square Error (RMSE) of the theta estimates for each approach ranged from .02 to .12. Moreover, classification accuracy for the pre-equated approach was generally high compared to results from post-equating. Only 3 of the 9 tests examined showed differences in the percent of students classified as passing; errors ranged from 1.7 percent to 3 percent. Measurement equivalence between the field test and operational assessment was also explored using the Differential Functioning of Items and Tests (DFIT) framework. Overall, about 20 to 40 percent of the items on each assessment exhibited statistically significant Differential Item Functioning (DIF). Differential Test Functioning (DTF) was significant for fully 7 tests. There was a positive relationship between the magnitude of DTF and degree of incongruence between pre-equating and post-equating. Item calibrations, score consistency, and measurement equivalence were also explored for a test calibrated with the one, two, and three parameter logistic model, using the TCC equating method. 
Measurement equivalence and score table incongruence was found to be slightly more pronounced with this approach. It was hypothesized that differences between the field test and operational tests resulted from 1) recency of instruction 2) cognitive growth and 3) motivation factors. Additional research related to these factors is suggested.
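The pre-/post-equating comparison summarized above amounts to comparing two sets of Rasch item-difficulty calibrations. A minimal sketch of the RMSE computation, with invented difficulties in logits (the study compared field-test calibrations against full statewide operational calibrations):

```python
# Hypothetical Rasch item difficulties (logits) for the same items.
pre_b = [-1.20, -0.45, 0.10, 0.62, 1.35]   # field-test (pre-equated)
post_b = [-1.10, -0.50, 0.18, 0.55, 1.42]  # operational (post-equated)

def rmse(a, b):
    """Root mean square error between two calibration vectors."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

print(f"RMSE between calibrations: {rmse(pre_b, post_b):.3f} logits")
```

Small RMSE values, like the .02 to .12 range reported for theta estimates in the study, are what justify relying on the pre-equated scaling operationally.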
8

Moulding, Louise Richards. "An Evaluative Argument-Based Investigation of Validity Evidence for the Utah Pre-Algebra Criterion-Referenced Test." DigitalCommons@USU, 2001. https://digitalcommons.usu.edu/etd/6162.

Abstract:
This study collected evidence to address the assumptions underlying the use of the Utah Core Assessment to Pre-Algebra (UCAP) to (a) measure student achievement in pre-algebra, and (b) assist teachers in making adjustments to instruction. An evaluative argument was defined to guide the collection of evidence. Each of the assumptions in the evaluative argument was addressed using data from a suburban northern Utah school district. To collect the evidence, test content was examined including item match to course objectives, reliability, and subtest intercorrelations. Analyses of correlations of the UCAP with convergent and discriminant measures were completed using student test data (N = 1,461), including an examination of both the pattern of correlations and tests of statistical significance. Pre-algebra teachers (N = 12) were interviewed to ascertain the degree to which UCAP results were used to make necessary adjustments to instruction. It was found that the UCAP was technically sound, but measured only 65% of course objectives. Correlation coefficients were analyzed using pattern comparisons and tests of statistical significance. It was found that the pattern of correlation coefficients and the distinction of convergent and discriminant measures supported the UCAP as a measure of mathematics. Teacher interview data revealed that teachers did not make substantive adjustments to the instruction of pre-algebra based on test scores. Based on these results it was concluded that the underlying assumptions concerning the use of the UCAP were not fully supported. The lack of complete coverage of the pre-algebra course objectives calls into question the ability of the UCAP scores to be used as measures of student achievement, in spite of the technical quality of the test. There was support for the assumption that the UCAP measures mathematics. There was little evidence that teachers use the UCAP score reports to make meaningful and appropriate adjustments to instruction. 
More evidence is needed to understand the factors that may have led to this lack of use. The evaluative argument framework defined in this study provides guidance for future research to collect evidence of the validity of decisions based on UCAP scores.
9

McDaniels, Darl. "A predictive validation study of criterion-referenced tests for the certification of soldiers in specialist-level military training programs." W&M ScholarWorks, 1988. https://scholarworks.wm.edu/etd/1539618311.

Abstract:
Problem. This study assessed the predictive validity of criterion-referenced tests in a military setting with cutoff scores set by the Angoff and conventional score-setting methods. Procedure. Thirty-six instructors and thirty-six specialists assessed each test item for job relevance and the probability that a minimally competent person would answer each question correctly, resulting in a new test cutoff score. Intragroup variability and interrater reliability of judgments were calculated. Test predictive validity assessment compared classroom test scores, supervisory rating scores, and skill qualification test scores of 100 job performers based on the two score-setting methods. Sample sizes varied from 17 to 100. A behaviorally anchored rating scale was used to estimate soldier performance effectiveness. Hypotheses were tested using analysis of variance, a correlation procedure by Ebel, t-tests, and the Pearson product-moment correlation. The null was accepted or rejected at the .05 level of significance. Results. Findings follow: (1) intragroup variability and interrater reliability of judges' estimates were statistically significant; (2) strengths of correlation coefficients for classroom test scores (CTS) and supervisory rating scores (SRS) under the Angoff method exceeded r values for scores under the conventional method; (3) strength of the correlation coefficient for CTS and skill qualification test (SQT) scores under the conventional method exceeded the r value for scores under the Angoff method; (4) correlation coefficients for CTS and SRS were statistically significant for Angoff "accepts" but not for Angoff "rejects" in three of four job performance areas, and means of SRS of the two groups of job performers were significantly different; and (5) the correlation coefficient for CTS and SQT scores was statistically significant for Angoff "accepts" but not for Angoff "rejects", and means of SQT scores of the two groups of job performers were significantly different. Conclusions.
The Angoff cutoff score-setting method provides an effective means for setting criterion-referenced test cutoff scores. The Angoff and present score setting methods yield significantly different test standards. The score derived by the empirical method is a better measure of minimum job requirements of an entry-level performer, thereby enhancing the predictive validity of the classroom test. Recommendations for future research are included.
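The Angoff method referenced above has a simple computational core: each judge estimates, item by item, the probability that a minimally competent examinee answers correctly; a judge's expected score is the sum of those probabilities, and the cutoff is the mean of the judges' expected scores. A sketch with invented estimates (the study used 72 judges, not three):

```python
# One row per judge, one probability per test item (all values invented).
judge_estimates = [
    [0.80, 0.60, 0.70, 0.90, 0.50],
    [0.75, 0.65, 0.60, 0.85, 0.55],
    [0.85, 0.55, 0.75, 0.95, 0.45],
]

def angoff_cutoff(estimates):
    """Mean over judges of each judge's summed item probabilities."""
    judge_sums = [sum(row) for row in estimates]  # expected raw score per judge
    return sum(judge_sums) / len(judge_sums)

cutoff = angoff_cutoff(judge_estimates)
print(f"Angoff cutoff: {cutoff:.2f} of 5 items")
```

The interrater-reliability checks in the study ask whether those per-judge sums agree closely enough for the pooled cutoff to be defensible.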
10

Campbell, Chad. "Assessing Student Understanding of the "New Biology": Development and Evaluation of a Criterion-Referenced Genomics and Bioinformatics Assessment." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1374118655.


Books on the topic "Criterion-referenced assessment"

1

Murdoch, Elizabeth. Criterion-referenced assessment for physical education: Research report. Edinburgh: Dunfermline College of Physical Education, 1985.

2

Learning and Teaching Support Network, Generic Centre, ed. A briefing on key concepts: Formative and summative, criterion and norm-referenced assessment. York: Learning and Teaching Support Network, 2001.

3

Council for Exceptional Children and ERIC/OSEP Special Project, eds. Connecting performance assessment to instruction. Reston, Va: Council for Exceptional Children, 1994.

4

Johnson, Kristin. Megawords: Assessment of decoding and encoding skills : a criterion-referenced test : test manual. Cambridge: Educators Pub. Service, 2003.

5

Ratcliffe, Mary. A comparison of the Suffolk co-ordinated science criterion referenced assessment scheme as implemented in two schools. [Guildford]: [University of Surrey], 1990.

6

Rowe, Helga A. H. Work readiness profile: A criterion-referenced tool to assist in the initial assessment of individuals with disabilities : manual. Melbourne, Vic: Australian Council for Educational Research, 1995.

7

Assessment and evaluation of developmental learning: Qualitative individual assessment and evaluation models. Westport, Conn: Praeger, 1998.

8

Griffith, Trevor P. A study of the validity of using multiple-choice items as test instruments in criterion-referenced assessment with particular reference to partial knowledge. [S.l: The author], 1994.

9

Wilson, Kenneth M. Enhancing the interpretation of a norm-referenced second-language test through criterion-referencing: A research assessment of experience in the TOEIC testing context. Princeton, N.J.: Educational Testing Service, 1989.

10

Elliott, Stephen N. Creating meaningful performance assessments: Fundamental concepts. Reston, Va: Council for Exceptional Children, 1994.


Book chapters on the topic "Criterion-referenced assessment"

1

Wong, Cathy S. P., Carmela Briguglio, Sundrakanthi Singh, and Michael Collins. "Implementing Criterion-Referenced Assessment." In Enhancing Teaching and Learning through Assessment, 3–30. Dordrecht: Springer Netherlands, 2007. http://dx.doi.org/10.1007/978-1-4020-6226-1_1.

2

Hambleton, Ronald K. "Criterion-Referenced Assessment of Individual Differences." In Methodological and Statistical Advances in the Study of Individual Differences, 393–424. Boston, MA: Springer US, 1985. http://dx.doi.org/10.1007/978-1-4684-4940-2_10.

3

"Criterion-Referenced Assessment." In Encyclopedia of Child Behavior and Development, 433. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-79061-9_4669.

4

"Criterion-Referenced Assessment." In Encyclopedia of Pain, 810. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-28753-4_200490.

5

"Criterion-Referenced Assessment." In Encyclopedia of the Sciences of Learning, 846. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4419-1428-6_2103.

6

"Criterion-Referenced Assessment." In Encyclopedia of Autism Spectrum Disorders, 822. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-1698-3_100389.

7

"Criterion-referenced Assessment." In Developing Personal, Social and Moral Education through Physical Education, 71–74. Routledge, 2002. http://dx.doi.org/10.4324/9780203181850-4.

8

"Criterion-Referenced Assessment." In Beyond Testing, 87–105. Routledge, 2002. http://dx.doi.org/10.4324/9780203486009-11.

9

"Criterion-Referenced Assessment." In Encyclopedia of Autism Spectrum Disorders, 1239. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-319-91280-6_300441.

10

"Criterion Referenced Assessment." In Encyclopedia of Autism Spectrum Disorders, 1239. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-319-91280-6_300440.


Conference papers on the topic "Criterion-referenced assessment"

1

Tay, Kai Meng, Chee Peng Lim, and Tze Ling Jee. "Enhancing Fuzzy Inference System Based Criterion-Referenced Assessment With An Application." In 24th European Conference on Modelling and Simulation. ECMS, 2010. http://dx.doi.org/10.7148/2010-0213-0218.

2

Jewels, Tony, Marilyn Ford, and Wendy Jones. "What Exactly Do You Want Me To Do? Analysis of a Criterion Referenced Assessment Project." In InSITE 2007: Informing Science + IT Education Conference. Informing Science Institute, 2007. http://dx.doi.org/10.28945/3105.

Abstract:
In tertiary institutions in Australia, and no doubt elsewhere, there is increasing pressure for accountability. No longer are academics assumed a priori to be responsible and capable of self management in teaching and assessing the subjects they run. Procedures are being dictated more from the ‘top down’. Although academics may not always appreciate ‘top down’ policies on teaching and learning, they should at least be open to the possibility that the policies may indeed have merit. On the other hand, academics should never be expected to blindly accept policies dictated from elsewhere. Responsible academics generally also need to evaluate for themselves the validity and legitimacy of externally introduced new policies and procedures.
3

Van Der Vyver, Glen. "Assessing for Competence Need Not Devalue Grades." In InSITE 2007: Informing Science + IT Education Conference. Informing Science Institute, 2007. http://dx.doi.org/10.28945/3109.

Abstract:
Norm-based assessment is under fire from some quarters because it is often unfair and is out of touch with the demands of the job market. Criterion-referenced assessment is touted as the answer by others but problems remain, in particular with regards to the maintenance of standards. This study examines the use of competency-based assessment in an undergraduate database course. The findings suggest that it is possible to create an assessment instrument that is relevant to particular skills required in the job market but does not inflate grades across the board. A remarkable idiosyncrasy emerges in that the distribution of scores assumes a bi-polar shape with a significant number of high grades and a significant number of grades at the lowest passing level or failing grades.