Academic literature on the topic 'Psychometric scoring'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Psychometric scoring.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Psychometric scoring"

1

Grønnerød, Cato, and Ellen Hartmann. "Moving Rorschach Scoring Forward." Rorschachiana 31, no. 1 (2010): 22–42. http://dx.doi.org/10.1027/1192-5604/a000003.

Abstract:
A new scoring system called RN-Rorschach was developed in Norway to provide a simple system focusing on clinical usefulness, with acceptable psychometric properties and a high level of compatibility with the Comprehensive System (CS). The Rorschach is a demanding method, and the CS may be too complex for learning its basic aspects, especially for students in introductory courses. Experience from teaching in introductory courses indicates that the goal of a simple and useful system has been achieved. Data on psychometric properties indicate that interscorer reliability is generally high. Two overall iota (ι) estimates were found to be .85 and .93. Future developments of Rorschach scoring are discussed.
2

Ramineni, Chaitanya, and David M. Williamson. "Automated essay scoring: Psychometric guidelines and practices." Assessing Writing 18, no. 1 (2013): 25–39. http://dx.doi.org/10.1016/j.asw.2012.10.004.

3

Han, Chao, and Xiaolei Lu. "Interpreting quality assessment re-imagined: The synergy between human and machine scoring." Interpreting and Society 1, no. 1 (2021): 70–90. http://dx.doi.org/10.1177/27523810211033670.

Abstract:
Assessment of interpreting quality is a ubiquitous social practice in the interpreting industry and academia. In this article, we focus on both psychometric and social dimensions of assessment practice, and analyse two major assessment paradigms, namely, human rater scoring and automatic machine scoring. Regarding human scoring, we describe five specific methods, including atomistic scoring, questionnaire-based scoring, multi-methods scoring, rubric scoring, and ranking, and critically analyse their respective strengths and weaknesses. In terms of automatic scoring, we highlight four assessment approaches that have been researched and operationalised in cognate disciplines and interpreting studies, including automatic assessment based on temporal variables, linguistic/surface features, machine translation metrics, and quality estimation methodology. Finally, we problematise the socio-technological tension between these two paradigms and envisage human–machine collaboration to produce psychometrically sound and socially responsible assessment. We hope that this article sparks more scholarly discussion of rater-mediated and automatic assessment of interpreting quality from a psychometric-social perspective.
4

Wolfe, Edward W. "The relationship between essay reading style and scoring proficiency in a psychometric scoring system." Assessing Writing 4, no. 1 (1997): 83–106. http://dx.doi.org/10.1016/s1075-2935(97)80006-2.

5

Strümpfer, D. J. W. "Psychometric Properties of an Instrument to Measure Resilience in Adults." South African Journal of Psychology 31, no. 1 (2001): 36–44. http://dx.doi.org/10.1177/008124630103100107.

Abstract:
A rationale for using a projective approach, in addition to self-reports, is presented. A resilience exercise is described, consisting of 6 sentences describing adverse situations, in response to which participants write projective stories. A scoring scheme for such stories is introduced. 152 adults (Mage = 34.28, SD = 9.15; Meduc = 14.55, SD = 2.31) working in organizations completed the exercise and self-report scales. On the basis of initial scoring by two judges, the scoring scheme was revised to clarify some instructions. On a new sample of 20 protocols a 0.87 agreement between two judges was obtained. One judge then re-scored all protocols on the revised manual. A word count per protocol correlated 0.54 (p < 0.000) with the total score. Scores per story and scores per scoring category were corrected for word count, using a regression procedure. The 6 stories all loaded on a single resilience factor. Exploratory and confirmatory factor analyses showed a 2-factor model to fit the data best, producing factors which measured abstract and concrete aspects. The total resilience score correlated 0.26 (p < 0.001) with Antonovsky's Sense of Coherence scale (short form) and 0.21 (p < 0.01) with Diener's Satisfaction with Life scale.
6

Shorey, Ryan C., Hope Brasfield, Jeniimarie Febres, Tara L. Cornelius, and Gregory L. Stuart. "A Comparison of Three Different Scoring Methods for Self-Report Measures of Psychological Aggression in a Sample of College Females." Violence and Victims 27, no. 6 (2012): 973–90. http://dx.doi.org/10.1891/0886-6708.27.6.973.

Abstract:
Psychological aggression in females’ dating relationships has received increased empirical attention in recent years. However, researchers have used numerous measures of psychological aggression and various scoring methods with these measures, making it difficult to compare across studies on psychological aggression. In addition, research has yet to examine whether different scoring methods for psychological aggression measures may affect the psychometric properties of these instruments. This study examined three self-report measures of psychological aggression within a sample of female college students (N = 108), including their psychometric properties when scored using frequency, sum, and variety scores. Results showed that the Revised Conflict Tactics Scales (CTS2) had variable internal consistency depending on the scoring method used and good validity; the Multidimensional Measure of Emotional Abuse (MMEA) and the Follingstad Psychological Aggression Scale (FPAS) both had good internal consistency and validity across scoring methods. Implications of these findings for the assessment of psychological aggression and future research are discussed.
7

Gelfand, Stanley A., and Jessica T. Gelfand. "Psychometric Functions for Shortened Administrations of a Speech Recognition Approach Using Tri-Word Presentations and Phonemic Scoring." Journal of Speech, Language, and Hearing Research 55, no. 3 (2012): 879–91. http://dx.doi.org/10.1044/1092-4388(2011/11-0123).

Abstract:
Method: Complete psychometric functions for phoneme and word recognition scores at 8 signal-to-noise ratios from −15 dB to 20 dB were generated for the first 10, 20, and 25, as well as all 50, three-word presentations of the Tri-Word or Computer Assisted Speech Recognition Assessment (CASRA) Test (Gelfand, 1998), based on the results of 12 normal-hearing young adult participants from the original study. Results: The psychometric functions for both phoneme and word scores were very similar and essentially overlapping for all set sizes. Performance on the shortened tests accounted for 98.8% to 99.5% of the full (50-set) test variance with phoneme scoring, and 95.8% to 99.2% of the full test variance with word scoring. Shortening the tests accounted for little if any of the variance in the slopes of the functions. Conclusions: The psychometric functions for abbreviated versions of the Tri-Word speech recognition test using 10, 20, and 25 presentation sets were described and are comparable to those of the original 50-presentation approach for both phoneme and word scoring in healthy, normal-hearing, young adult participants.
8

Gorsuch, Richard L. "Psychometric evaluation of scales when their scoring keys are unavailable." Professional Psychology: Research and Practice 17, no. 5 (1986): 399–402. http://dx.doi.org/10.1037/0735-7028.17.5.399.

9

Hua, Cheng, and Stefanie A. Wind. "Exploring the psychometric properties of the mind-map scoring rubric." Behaviormetrika 46, no. 1 (2018): 73–99. http://dx.doi.org/10.1007/s41237-018-0062-z.

10

Prieto, Gerardo, and Ana R. Delgado. "The Effect of Instructions on Multiple-Choice Test Scores." European Journal of Psychological Assessment 15, no. 2 (1999): 143–50. http://dx.doi.org/10.1027//1015-5759.15.2.143.

Abstract:
Summary: Most standardized tests instruct subjects to guess under scoring procedures that do not correct for guessing or correct only for expected random guessing. Other scoring rules, such as offering a small reward for omissions or punishing errors by discounting more than expected from random guessing, have been proposed. This study was designed to test the effects of these four instruction/scoring conditions on performance indicators and on score reliability of multiple-choice tests. Some 240 participants were randomly assigned to four conditions differing in how much they discourage guessing. Subjects performed two psychometric computerized tests, which differed only in the instructions provided and the associated scoring procedure. For both tests, our hypotheses predicted (0) an increasing trend in omissions (showing that instructions were effective); (1) decreasing trends in wrong and right responses; (2) an increase in reliability estimates of both number right and scores. Predictions regarding performance indicators were mostly fulfilled, but expected differences in reliability failed to appear. The discussion of results takes into account not only psychometric issues related to guessing, but also the misleading educational implications of recommendations to guess in testing contexts.

Dissertations / Theses on the topic "Psychometric scoring"

1

Castle, Courtney. "Measuring Multidimensional Science Learning: Item Design, Scoring, and Psychometric Considerations." Thesis, Boston College, 2018. http://hdl.handle.net/2345/bc-ir:107904.

Abstract:
Thesis advisor: Henry Braun

The Next Generation Science Standards propose a multidimensional model of science learning, comprised of Core Disciplinary Ideas, Science and Engineering Practices, and Crosscutting Concepts (NGSS Lead States, 2013). Accordingly, there is a need for student assessment aligned with the new standards. Creating assessments that validly and reliably measure multidimensional science ability is a challenge for the measurement community (Pellegrino et al., 2014). Multidimensional assessment tasks may need to go beyond typical item designs of standalone multiple-choice and short-answer items. Furthermore, scoring and modeling of student performance should account for the multidimensionality of the construct. This research contributes to knowledge about best practices for multidimensional science assessment by exploring three areas of interest: 1) item design, 2) scoring rubrics, and 3) measurement models. This study investigated multidimensional scaffolding and response format by comparing alternative item designs on an elementary assessment of matter. Item variations had a different number of item prompts and/or response formats. Observations about student cognition and performance were collected during cognitive interviews and a pilot test. Items were scored using a holistic rubric and a multidimensional rubric, and interrater agreement was examined. Assessment data was scaled with multidimensional scores and holistic scores, using unidimensional and multidimensional Rasch models, and model-data fit was compared. Results showed that scaffolding is associated with more thorough responses, especially among low ability students. Students tended to utilize different cognitive processes to respond to selected-response items and constructed-response items, and were more likely to respond to selected-response arguments. Interrater agreement was highest when the structure of the item aligned with the structure of the scoring rubric. Holistic scores provided similar reliability and precision as multidimensional scores, but item and person fit was poorer. Multidimensional subscales had lower reliability and less precise student estimates than the unidimensional model, and interdimensional correlations were high. However, the multidimensional rubric and model provide nuanced information about student performance and better fit to the response data. Recommendations about optimal combinations of scaffolding, rubric, and measurement models are made for teachers, policymakers, and researchers.

Thesis (PhD), Boston College, 2018. Submitted to: Boston College, Lynch School of Education. Discipline: Educational Research, Measurement and Evaluation.
2

Kurle, Angela. "Developing Scoring Methods for a Non-Additive Psychometric Measure of Social Skills/Interpersonal Competence." Scholarship @ Claremont, 2001. https://scholarship.claremont.edu/hmc_theses/128.

Abstract:
For my senior thesis, I am planning to blend my mathematical studies with my second field of study, psychology. In particular, I plan to develop and test various scoring methods for a multidimensional, psychometric measure of social skills/competence. I would work with the Social Skills Inventory (see below) and an existing data set, using statistical modelling to design a more representative total score measure. The current total score measure does not appear to take into account balances and value weights of the six inventory items.
3

Burns, Stephanie Tursic. "The Predictive Validity of Person Matching Methods in Interest Measurement." Kent State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=kent1327781557.

4

Borschuk, Adrienne P. "Scoring and Validation of the Cystic Fibrosis Disclosure Questionnaire." VCU Scholars Compass, 2015. http://scholarscompass.vcu.edu/etd/4318.

Abstract:
As more patients with cystic fibrosis (CF) are living into adulthood, patients may need to disclose their CF status to others, such as in romantic or professional settings. Patients who choose not to disclose their CF status may be limited in their closeness with others, which may negatively affect their psychological functioning and health-related quality of life. Few studies, however, have examined disclosure in CF, and currently no validated measures of CF disclosure exist. The purpose of this study was to explore CF disclosure in adults and validate a new assessment of CF disclosure, the Cystic Fibrosis Disclosure Scale (CFDS). Results were consistent with prior research in disclosure in CF, with participants disclosing most often to close others and less often at school or in the workplace. Disclosure to close and casual friends was consistently associated with better psychosocial functioning. Factor analyses determined the CFDS was valid and that all questions should be retained. The Count Group subscale emerged as the “best” subscale grouping and coding method. This study contributed to the literature by serving as the first validation study of a questionnaire of disclosure in CF. Additionally, as disclosure in CF is a new emerging area, this study added information to the sparse literature on this issue. The CFDS as it exists now gathers important research and clinical information from adults with CF, and should be examined further with a larger sample size and more descriptive information.
5

Becker, R. Lance. "Latent trait, factor, and number endorsed scoring of polychotomous and dichotomous responses to the Common Metric Questionnaire." Diss., This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-07282008-135639/.

6

Buhr, Dianne C. "Variability in the estimation of item option characteristic curves for the multiple-category scoring model." Gainesville, FL, 1989. http://www.archive.org/details/variabilityinest00buhr.

7

Ron, Tom Haim. "Bringing Situational Judgement Tests to the 21st Century: Scoring of Situational Judgement Tests Using Item Response Theory." Bowling Green State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu157146239865009.

8

Jimenez, Laura. "Estimating the Reliability of Concept Map Ratings Using a Scoring Rubric Based on Three Attributes." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2284.

Abstract:
Concept maps provide a way to assess how well students have developed an organized understanding of how the concepts taught in a unit are interrelated and fit together. However, concept maps are challenging to score because of the idiosyncratic ways in which students organize their knowledge (McClure, Sonak, & Suen, 1999). The "construct a map" or "C-mapping" task has been shown to capture students' organized understanding. This "C-mapping" task involves giving students a list of concepts and asking them to produce a map showing how these concepts are interrelated. The purpose of this study was twofold: (a) to determine to what extent the use of the restricted C-mapping technique coupled with the threefold scoring rubric produced reliable ratings of students' conceptual understanding from two examinations, and (b) to project how the reliability of the mean ratings for individual students would likely vary as a function of the average number of raters and rating occasions from two examinations. Nearly three-fourths (73%) of the variability in the ratings for one exam and 43% of the variability for the other exam were due to dependable differences in the students' understanding detected by the raters. The rater inconsistencies were higher for one exam and somewhat lower for the other exam. The person-to-rater interaction was relatively small for one exam and somewhat higher for the other exam. The rater-by-occasion variance components were zero for both exams. The unexplained variance accounted for 19% on one exam and 14% on the other. The size of the reliability coefficient of student concept map scores varied across the two examinations. A reliability of .95 and .93 for relative and absolute decisions was obtained for one exam. A reliability of .88 and .78 for absolute and relative decisions was obtained for the other exam.
Increasing the number of raters from one to two on one rating occasion would yield a greater increase in the reliability of the ratings at a lower cost than increasing the number of rating occasions. The same pattern holds for both exams.
9

Li, Chih-Ying. "The Usefulness and Psychometric Study of the Kinetic-House-Tree-Person Computerized Scoring System for Persons with Mental Illness in Taiwan." 2008. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-3001200814484700.

10

Kalsi-Ryan, Sukhvinder. "The Graded Redefined Assessment of Strength, Sensibility and Prehension (GRASSP): Development of the Scoring Approach, Evaluation of Psychometric Properties and the Relationship of Upper Limb Impairment to Function." Thesis, 2011. http://hdl.handle.net/1807/29768.

Abstract:
Upper limb function is important for individuals with tetraplegia because it supports global function for these individuals. As a result, a great deal of time and effort has been devoted to the restoration of upper limb function. The lack of appropriate outcome measures that can characterize the neurological status of the upper limb has been one of the current barriers to substantiating the efficacy of interventions. Techniques and protocols to evaluate changes in upper limb neurological status have not been applied to the SCI population adequately. The objectives of this thesis were to develop a measure, the Graded Redefined Assessment of Strength, Sensibility and Prehension (GRASSP); to develop its scoring approach; to test its reliability and construct validity; and to determine impairment and function relationships specific to the upper limb. The GRASSP is a clinical measure of upper limb impairment which incorporates the construct of "sensorimotor upper limb function"; it is comprised of three domains which include five subtests. The GRASSP was designed to capture information on upper limb neurological impairment for individuals with tetraplegia. The GRASSP defines neurological status with numerical values that represent deficits in a predictive pattern, is reliable and valid as an assessment technique, and yields scores that can be used to determine relationships between impairment and functional capability of the upper limb. The GRASSP is recommended for use from the very early acute phase after injury to approximately one year post injury. Use of the GRASSP is recommended when a change in neurological status is being assessed.

Books on the topic "Psychometric scoring"

1

Reliability. Oxford University Press, 2010.

2

SpringerLink (Online service), ed. Statistical Models for Test Equating, Scaling, and Linking. Springer Science+Business Media, LLC, 2011.

3

Zabrocki, Emily Catherine. The Health Assessment of Older Women Interview Guide Health Scoring Component: A Psychometric Evaluation. 1989.

4

Whitfill, Travis, Heidi Rossetti, and Michael C. Gottlieb. Psychological Testing and Assessment. Edited by John Z. Sadler, K. W. M. Fulford, and Werdie (C. W.) van Staden. Oxford University Press, 2015. http://dx.doi.org/10.1093/oxfordhb/9780198732372.013.22.

Abstract:
Psychological evaluations are commonly conducted within psychiatric settings with the goal of informing treatment decisions that are intended to benefit the patient. In addition to a clinical interview and review of records, the evaluation includes the selection, administration, scoring, and interpretation of psychological testing. Guided by the question(s) from the referring party, this multifaceted process occurs within a setting of existing care that requires clarification of the roles/duties of the professionals and organizations relative to each other and the patient. Additionally, psychological testing involves unique ethical considerations (e.g., psychometrics) not typically encountered during psychotherapy or psychiatric care. A variety of standards, provided in the form of rules that require or prohibit specific behaviors, has been created by governing organizations in order to inform the ethical decision-making process while conducting psychological assessments. These standards are understood within the broader framework of aspirational principles of ethics (e.g., nonmaleficence) universal to biomedical practice.

Book chapters on the topic "Psychometric scoring"

1

Brown, Anna. "Item Response Theory Approaches to Test Scoring and Evaluating the Score Accuracy." In The Wiley Handbook of Psychometric Testing. John Wiley & Sons, Ltd, 2018. http://dx.doi.org/10.1002/9781118489772.ch20.

2

DiBello, Louis V., and William Stout. "Student Profile Scoring for Formative Assessment." In New Developments in Psychometrics. Springer Japan, 2003. http://dx.doi.org/10.1007/978-4-431-66996-8_7.

3

Shizuka, Tetsuhito. "The Effect of Clustered Objective Probability Scoring with Truncation (T-COPS) on Reliability." In New Developments in Psychometrics. Springer Japan, 2003. http://dx.doi.org/10.1007/978-4-431-66996-8_30.

4

Lorié, William. "Automated Scoring of Multicomponent Tasks." In Handbook of Research on Technology Tools for Real-World Skill Development. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9441-5.ch024.

Abstract:
Assessment of real-world skills increasingly requires efficient scoring of non-routine test items. This chapter addresses the scoring and psychometric treatment of a broad class of automatically-scorable complex assessment tasks allowing a definite set of responses orderable by quality. These multicomponent tasks are described and proposals are advanced on how to score them so that they support capturing gradations of performance quality. The resulting response evaluation functions are assessed empirically against alternatives using data from a pilot of technology-enhanced items (TEIs) administered to a sample of high school students in one U.S. state. Results support scoring frameworks leveraging the full potential of multicomponent tasks for providing evidence of partial knowledge, understanding, or skill.

Conference papers on the topic "Psychometric scoring"

1

Suyunu, Burak, Gonul Ayci, Mine Ogretir, et al. "Semi-Supervised Psychometric Scoring of Document Collections." In 2018 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2018. http://dx.doi.org/10.1109/icdmw.2018.00194.

2

Tabberer, M., R. von Maltzahn, E. Bacci, et al. "Evaluation of the Psychometric Properties, Scoring Algorithm, and Score Interpretation of the E-RS®: Asthma in Two Clinical Trials of Moderate to Severe Asthma." In American Thoracic Society 2020 International Conference, May 15-20, 2020 - Philadelphia, PA. American Thoracic Society, 2020. http://dx.doi.org/10.1164/ajrccm-conference.2020.201.1_meetingabstracts.a5624.


Reports on the topic "Psychometric scoring"

1

Schoen, Robert, Xiaotong Yang, and Gizem Solmaz. Psychometric Report for the 2019 Knowledge for Teaching Early Elementary Mathematics (K-TEEM) Test. Florida State University Libraries, 2021. http://dx.doi.org/10.33009/lsi.1620243057.

Abstract:
The 2019 Knowledge for Teaching Early Elementary Mathematics (2019 K-TEEM) test measures teachers’ mathematical knowledge for teaching early elementary mathematics. This report presents information about a large-scale field test of the 2019 K-TEEM test with 649 practicing educators. The report contains information about the development process used for the test; a description of the sample; descriptions of the procedures used for data entry, scoring of responses, and analysis of data; recommended scoring procedures; and findings regarding the distribution of test scores, standard error of measurement, and marginal reliability. The intended use of the data from the 2019 K-TEEM test is to serve as a measure of teacher knowledge that will be used in a randomized controlled trial to investigate the impact—and variation in impact—of a teacher professional-development program for early elementary teachers.