Academic literature on the topic 'Ability Examinations Sequential analysis. Item response theory'



Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Ability Examinations Sequential analysis. Item response theory.'



Journal articles on the topic "Ability Examinations Sequential analysis. Item response theory"

1

Awopeju, O. A., and E. R. I. Afolabi. "Comparative Analysis of Classical Test Theory and Item Response Theory Based Item Parameter Estimates of Senior School Certificate Mathematics Examination." European Scientific Journal, ESJ 12, no. 28 (October 31, 2016): 263. http://dx.doi.org/10.19044/esj.2016.v12n28p263.

Abstract:
The study compared Classical Test Theory (CTT) and Item Response Theory (IRT) estimates of item difficulty and item discrimination in relation to examinee ability in the Senior School Certificate Examination (SSCE) in Mathematics, with a view to providing an empirical basis for informed decisions on the appropriateness of statistical and psychometric tests. The study adopted an ex post facto design. A sample of 6,000 students was selected from the population of 35,262 students who sat for the NECO SSCE Mathematics Paper 1 in 2008 in Osun State, Nigeria. The instrument was the May/June 2008 NECO SSCE Mathematics Paper 1, consisting of 60 multiple-choice items. Three sampling plans (random, gender, and ability) were employed to study the behaviour of examinees' scores under the CTT and IRT measurement frameworks. BILOG-MG 3 was used to estimate the item parameters, and SPSS 20 was used to compare the CTT- and IRT-based estimates. The results showed that CTT-based item difficulty estimates and one-parameter IRT item difficulty estimates were comparable (correlations generally ranged from -0.702 to -0.988 in the large sample and from -0.622 to -0.989 in the small sample). Results also indicated that CTT-based and two-parameter IRT-based item discrimination estimates were comparable (correlations ranged from 0.430 to 0.880 in the large sample and from 0.531 to 0.950 in the small sample). The study concluded that CTT and IRT were comparable in estimating the item characteristics of statistical and psychometric tests and could thus be used as complementary procedures in the development of national examinations.
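As a side note to the abstract above, the sketch below illustrates the kind of CTT-versus-IRT comparison the study describes: computing CTT item difficulty (proportion correct) and discrimination (item-rest correlation) from a scored response matrix and correlating them with IRT item parameters. The response data and the IRT estimates here are synthetic placeholders, not the study's NECO data or BILOG-MG output.

```python
# Illustrative sketch only: synthetic data standing in for a scored test.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(500, 10))   # 500 examinees x 10 items, 0/1 scores

# CTT item difficulty: proportion of examinees answering each item correctly.
p_values = responses.mean(axis=0)

# CTT item discrimination: correlation of each item with the rest-of-test score.
rest_scores = responses.sum(axis=1, keepdims=True) - responses
discrimination = np.array([
    np.corrcoef(responses[:, j], rest_scores[:, j])[0, 1]
    for j in range(responses.shape[1])
])

# Hypothetical IRT item parameters (in practice these would come from software such as BILOG-MG).
irt_b = rng.normal(size=10)              # difficulty parameters
irt_a = rng.uniform(0.5, 2.0, size=10)   # discrimination parameters

# CTT difficulty (p-value) and IRT difficulty (b) are expected to correlate negatively:
# a higher proportion correct means an easier item, i.e., a lower b.
print("difficulty correlation:    ", np.corrcoef(p_values, irt_b)[0, 1])
print("discrimination correlation:", np.corrcoef(discrimination, irt_a)[0, 1])
```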
2

Seo, Dong Gi. "Overview and current management of computerized adaptive testing in licensing/certification examinations." Journal of Educational Evaluation for Health Professions 14 (July 26, 2017): 17. http://dx.doi.org/10.3352/jeehp.2017.14.17.

Abstract:
Computerized adaptive testing (CAT) has been implemented in high-stakes examinations such as the National Council Licensure Examination-Registered Nurses in the United States since 1994. Subsequently, the National Registry of Emergency Medical Technicians in the United States adopted CAT for certifying emergency medical technicians in 2007. This overview was written with the goal of introducing the implementation of CAT for medical and health licensing examinations. Most implementations of CAT are based on item response theory, which hypothesizes that both the examinee and the items have their own characteristics that do not change. There are 5 steps for implementing CAT: first, determining whether the CAT approach is feasible for a given testing program; second, establishing an item bank; third, pretesting, calibrating, and linking item parameters via statistical analysis; fourth, determining the specification for the final CAT related to the 5 components of the CAT algorithm; and finally, deploying the final CAT after specifying all the necessary components. The 5 components of the CAT algorithm are as follows: item bank, starting item, item selection rule, scoring procedure, and termination criterion. CAT management includes content balancing, item analysis, item scoring, standard setting, practice analysis, and item bank updates. Remaining issues include the cost of constructing CAT platforms and deploying the computer technology required to build an item bank. In conclusion, in order to ensure more accurate estimation of examinees' ability, CAT may be a good option for national licensing examinations. Measurement theory can support its implementation for high-stakes examinations.
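To make the five CAT components listed in the abstract (item bank, starting item, item selection rule, scoring procedure, termination criterion) concrete, here is a minimal, purely illustrative sketch of an adaptive testing loop under a 2PL model. The item bank, the simulated examinee, and the stopping thresholds are assumptions for demonstration, not parameters of any operational licensing examination.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, size=200)   # discrimination parameters of a synthetic item bank
b = rng.normal(0.0, 1.0, size=200)    # difficulty parameters
true_theta = 0.7                      # simulated examinee ability

def prob(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def info(theta, a, b):
    """Fisher information of a 2PL item at theta."""
    p = prob(theta, a, b)
    return a ** 2 * p * (1 - p)

theta = 0.0                # starting point of the ability estimate
administered, scores = [], []
for _ in range(50):        # hard cap on test length
    remaining = [j for j in range(len(a)) if j not in administered]
    # Item selection rule: pick the remaining item with maximum information at theta.
    j = max(remaining, key=lambda k: info(theta, a[k], b[k]))
    administered.append(j)
    # Simulated response from the hypothetical examinee.
    scores.append(int(rng.random() < prob(true_theta, a[j], b[j])))

    # Scoring procedure: a few Newton-Raphson steps toward the ML estimate of theta.
    idx = np.array(administered)
    x = np.array(scores, dtype=float)
    for _ in range(10):
        p = prob(theta, a[idx], b[idx])
        grad = np.sum(a[idx] * (x - p))
        hess = -np.sum(a[idx] ** 2 * p * (1 - p))
        theta = float(np.clip(theta - grad / hess, -4.0, 4.0))  # keep the estimate bounded

    # Termination criterion: stop once the standard error of theta is small enough.
    se = 1.0 / np.sqrt(np.sum(info(theta, a[idx], b[idx])))
    if len(administered) >= 10 and se < 0.3:
        break

print(f"items administered: {len(administered)}, theta estimate: {theta:.2f}, SE: {se:.2f}")
```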
3

Arlinwibowo, Janu, Heri Retnawati, and Badrun Kartowagiran. "Item Response Theory Utilization for Developing the Student Collaboration Ability Assessment Scale in STEM Classes." Ingénierie des systèmes d information 26, no. 4 (August 31, 2021): 409–15. http://dx.doi.org/10.18280/isi.260409.

Abstract:
Collaboration is an ability that develops in STEM learning and is very influential in 21st-century life. Students' collaboration abilities must therefore be assessed properly. This study aims to produce a high-quality, easy-to-use instrument for assessing student collaboration skills in STEM classes. The research is development research comprising three steps: preliminary research, prototyping, and product evaluation. Data were collected through focus group discussions (FGD) and questionnaires. The FGDs were carried out with experts to produce descriptive data and the assessment instrument, a questionnaire scored on a graded scale of 1 to 4. The study involved 187 junior high school students who took lessons in STEM classes. The instrument is a questionnaire with four graded answer choices. To ensure the quality of the instrument, the researchers conducted FGDs and expert validation and confirmed the construct with CFA. The instrument profile was examined using a unidimensional graded response model (GRM) analysis of the responses. The results showed that the final 17-item instrument was valid in terms of content and construct, as well as reliable. The item analysis shows that all items have properly ordered step parameters (b1 < b2 < b3), all items have a good discrimination index (0.995 ≤ ai ≤ 1.764), and the instrument measures students reliably across an ability range of -6.15 < θ < 4.05. Thus, the instrument can characterize students' abilities well over a wide range.
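As an illustration of the graded response model referred to in the abstract, the sketch below computes category probabilities for a single four-category item from a discrimination parameter and ordered step parameters (b1 < b2 < b3). The parameter values are made up for the example; they are not the study's estimates.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category probabilities for a four-point GRM item with discrimination a and ordered steps b."""
    b = np.asarray(b, dtype=float)
    cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(X >= 2), P(X >= 3), P(X >= 4)
    cum = np.concatenate(([1.0], cum, [0.0]))       # add P(X >= 1) = 1 and P(X >= 5) = 0
    return cum[:-1] - cum[1:]                       # P(X = k) for k = 1, ..., 4

a_i = 1.3                   # assumed discrimination, within the 0.995-1.764 range reported above
steps = [-1.5, 0.2, 1.8]    # assumed step parameters, ordered b1 < b2 < b3

for theta in (-2.0, 0.0, 2.0):
    probs = grm_category_probs(theta, a_i, steps)
    print(f"theta = {theta:+.1f}: {np.round(probs, 3)} (sum = {probs.sum():.3f})")
```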
4

Piumatti, Giovanni, Bernard Cerutti, and Noëlle Junod Perron. "Assessing communication skills during OSCE: need for integrated psychometric approaches." BMC Medical Education 21, no. 1 (February 16, 2021). http://dx.doi.org/10.1186/s12909-021-02552-8.

Abstract:
Background: Physicians' communication skills (CS) are known to significantly affect the quality of health care. Communication skills training programs are part of most undergraduate medical curricula and are usually assessed in Objective Structured Clinical Examinations (OSCE) throughout the curriculum. The adoption of reliable measurement instruments is thus essential to evaluate such skills. Methods: Using Exploratory Factor Analysis (EFA), Multi-Group Confirmatory Factor Analysis (MGCFA) and Item Response Theory (IRT) analysis, the current retrospective study tested the factorial validity and reliability of a four-item global rating scale developed by Hodges and McIlroy to measure CS among 296 third- and fourth-year medical students at the Faculty of Medicine in Geneva, Switzerland, during OSCEs. Results: EFA results at each station showed good reliability scores. However, measurement invariance assessments through MGCFA across different stations (i.e., same students undergoing six or three stations) and across different groups of stations (i.e., different students undergoing groups of six or three stations) were not satisfactory, failing to meet the minimum requirements for establishing measurement invariance and thus possibly compromising reliable comparisons of students' communication scores across stations. IRT revealed that the four communication items provided overlapping information concentrated especially at high levels of the communication spectrum. Conclusions: Using this four-item set in its current form, it may be difficult to adequately differentiate students who are poor in CS from those who perform better. Future directions in best practices for assessing CS among medical students in the context of OSCEs may thus focus on (1) training examiners so that they produce scores that are more coherent across stations, and (2) evaluating items in terms of their ability to cover a wider spectrum of medical students' CS. In this respect, IRT can prove very useful for the continuous evaluation of CS measurement instruments in performance-based assessments.
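The abstract's point about overlapping information at the high end of the communication spectrum can be made concrete with item information functions. The sketch below approximates Fisher information for a few hypothetical rating-scale items under a graded response model (chosen here only as an example polytomous model; the paper does not specify its model), using I(theta) = sum over categories of P_k'(theta)^2 / P_k(theta). Items whose step parameters sit above theta = 0 concentrate their information at high ability levels.

```python
import numpy as np

def grm_probs(theta, a, b):
    """Category probabilities of one graded-response item at ability theta."""
    cum = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b, dtype=float))))
    cum = np.concatenate(([1.0], cum, [0.0]))
    return cum[:-1] - cum[1:]

def item_information(theta, a, b, eps=1e-4):
    """Fisher information I(theta) = sum_k P_k'(theta)^2 / P_k(theta), via finite differences."""
    p = grm_probs(theta, a, b)
    dp = (grm_probs(theta + eps, a, b) - grm_probs(theta - eps, a, b)) / (2 * eps)
    return float(np.sum(dp ** 2 / p))

# Four hypothetical four-category items whose step parameters lie mostly above theta = 0,
# so their information is concentrated at the high end of the ability scale.
items = [(1.2, [0.5, 1.2, 2.0]), (1.0, [0.8, 1.5, 2.3]),
         (1.4, [0.4, 1.1, 1.9]), (1.1, [0.6, 1.4, 2.2])]

for theta in (-2.0, -1.0, 0.0, 1.0, 2.0):
    total = sum(item_information(theta, a, b) for a, b in items)
    print(f"theta = {theta:+.1f}: test information = {total:.2f}")
```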

Dissertations / Theses on the topic "Ability Examinations Sequential analysis. Item response theory"

1

Zhang, Yanwei. "Impacts of multidimensionality and content misclassification on ability estimation in computerized adaptive sequential testing (CAST)." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 156 p, 2006. http://proquest.umi.com/pqdweb?did=1179954311&sid=8&Fmt=2&clientId=8331&RQT=309&VName=PQD.


Books on the topic "Ability Examinations Sequential analysis. Item response theory"

1

The Performance of the Mantel-Haenszel and Logistic Regression DIF Identification Procedures with Real Data. 1994.

