
Journal articles on the topic 'Item analysis'


Consult the top 50 journal articles for your research on the topic 'Item analysis.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Farley, Joanne K. "ITEM ANALYSIS." Nurse Educator 15, no. 1 (January 1990): 8–9. http://dx.doi.org/10.1097/00006223-199001000-00002.

2

Couturier, Raphaël, and Rubén Pazmiño. "Use of Statistical Implicative Analysis in Complement of Item Analysis." International Journal of Information and Education Technology 6, no. 1 (2016): 39–43. http://dx.doi.org/10.7763/ijiet.2016.v6.655.

3

Harrell, Murphy, Melissa Myers, Nanako Hawley, Jasmin Pizer, and Benjamin Hill. "A-185 An Analysis of Rarely Missed Items on the TOMM." Archives of Clinical Neuropsychology 36, no. 6 (August 30, 2021): 1240. http://dx.doi.org/10.1093/arclin/acab062.203.

Abstract:
Objective: This study examined item performance on Trial 1 of the Test of Memory Malingering (TOMM) and identified the items most often missed by individuals giving genuine effort. Method: Participants were 106 adults seen for disability claims (87.7% male; 70.5% Caucasian, 26.7% Black; age range 22–84 years, mean age = 44.42 years, SD = 13.07; mean education = 13.58 years, SD = 2.05) who completed and passed the TOMM as part of a larger battery. The mean score on Trial 1 was 43.08 (SD = 5.49); the mean score on Trial 2 was 48.98 (SD = 1.54). Results: Frequency analysis indicated that >95% of the sample correctly identified six items on Trial 1: item 1, spinning wheel (97.2%); item 8, musical notes (99.1%); item 38, ice cream (98.1%); item 41, life preserver (95.3%); item 45, iron (95.3%); and item 47, dart (98.1%). Nine items were correctly identified on Trial 1 by <80% of the sample: item 2, tissue box (77.4%); item 6, suitcase (77.4%); item 20, motorcycle (77.4%); item 22, jack-in-the-box (71.7%); item 26, light bulb (75.5%); item 27, maple leaf (72.6%); item 32, racket (79.2%); item 36, birdhouse (79.2%); and item 44, pail & shovel (66.0%). Conclusions: These findings suggest that items on Trial 1 of the TOMM differ in difficulty in a disability claims sample performing genuinely on the TOMM. Items 1, 8, 38, 41, 45, and 47 are good candidates for a rarely missed index, where failure of these items would be probabilistically unlikely. Future research should evaluate whether these items are failed at higher rates in cases of borderline TOMM performance, to improve sensitivity to feigning.
4

Hashimoto, Takamitsu. "Item Relational Structure Analysis Using Item Response Theory." Japanese Journal of Applied Statistics 40, no. 3 (2011): 125–40. http://dx.doi.org/10.5023/jappstat.40.125.

5

Toksöz, Sibel, and Ayşe Ertunç. "Item Analysis of a Multiple-Choice Exam." Advances in Language and Literary Studies 8, no. 6 (December 25, 2017): 141. http://dx.doi.org/10.7575/aiac.alls.v.8n.6p.141.

Abstract:
Although foreign language testing has changed in line with shifting perspectives on learning and language teaching, multiple-choice items have remained considerably popular regardless of these perspectives and trends. Some studies have examined the efficiency of multiple-choice items in different contexts. In the Turkish context, multiple-choice items are commonly used in standardized high-stakes tests, both as an undergraduate entry requirement for departments such as English Language Teaching, Western Languages and Literatures, and Translation Studies, and for assessing students' academic progress within those departments. Multiple-choice items are also used extensively at all levels of language instruction. Nevertheless, there has not been enough item analysis of multiple-choice tests in terms of item discrimination, item facility, and distractor efficiency. The present study analyses multiple-choice items testing grammar, vocabulary, and reading comprehension that were administered to preparatory-class students at a state university. The responses of 453 preparatory students were analysed for item facility, item discrimination, and distractor efficiency using the frequency distribution of their responses. The results reveal that most items are at a moderate level of item facility, that 28% of the items have low item discrimination values, and that some distractors in the exam are clearly ineffective and should be revised.
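
A minimal sketch of the classical analysis this abstract describes, computing item facility and an upper-lower discrimination index from a scored 0/1 response matrix (the simulated data, the 27% grouping convention, and all variable names below are illustrative assumptions, not material from the study):

```python
import numpy as np

def item_facility(scores):
    """Proportion of examinees answering each item correctly (0/1 matrix)."""
    return scores.mean(axis=0)

def item_discrimination(scores, group_frac=0.27):
    """Upper-lower discrimination index: facility in the top-scoring group
    minus facility in the bottom group, groups formed from total scores."""
    totals = scores.sum(axis=1)
    order = np.argsort(totals)
    n_group = max(1, int(len(totals) * group_frac))
    lower = scores[order[:n_group]]
    upper = scores[order[-n_group:]]
    return upper.mean(axis=0) - lower.mean(axis=0)

# Toy data: 453 examinees x 40 items of simulated 0/1 responses.
rng = np.random.default_rng(0)
scores = (rng.random((453, 40)) < rng.uniform(0.3, 0.9, size=40)).astype(int)
print(item_facility(scores)[:5])
print(item_discrimination(scores)[:5])
```

Distractor efficiency would additionally require the raw option choices rather than 0/1 scores; a sketch of that analysis appears under entry 47 below.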
6

Danuwijaya, Ari Arifin. "ITEM ANALYSIS OF READING COMPREHENSION TEST FOR POST-GRADUATE STUDENTS." English Review: Journal of English Education 7, no. 1 (December 9, 2018): 29. http://dx.doi.org/10.25134/erjee.v7i1.1493.

Abstract:
Developing a test is a complex and iterative process, and items remain subject to revision even when they were written by skilful item writers. Many commercial test publishers therefore conduct test analysis rather than relying on the item writers' judgement alone: item quality has to be demonstrated statistically once a try-out has been performed. This study, part of a larger test development process, analyses reading comprehension test items. One hundred multiple-choice questions were pilot tested on 50 postgraduate students at one university, with the aim of identifying item quality so the items could be improved. The responses were analysed under Classical Test Theory using the psychometric software Lertap. The results showed that item difficulty was mostly average, while more than half of the items were categorized as marginal in discrimination and required further modification. The study offers recommendations for improving the quality of the developed items. Keywords: reading comprehension; item analysis; classical test theory; item difficulty; test development.
7

Bechtel, Gordon G., and Chezy Ofir. "Aggregate item response analysis." Psychometrika 53, no. 1 (March 1988): 93–107. http://dx.doi.org/10.1007/bf02294196.

8

Muntinga, Jaap H. J., and Henk A. Schuil. "Effects of automatic item eliminations based on item test analysis." Advances in Physiology Education 31, no. 3 (September 2007): 247–52. http://dx.doi.org/10.1152/advan.00019.2007.

Abstract:
Item test analysis is an aid for identifying items that need to be eliminated from an assessment. An automatic elimination procedure based on item statistics could therefore help to increase the quality of a test in an objective manner. This was investigated by studying the effect of a standardized elimination procedure on the test results of a second-year course over 6 successive years in 1,624 candidates. Cohort effects on item elimination were examined by determining the number of additional items that had to be eliminated from three different tests in 3 successive academic years in two cohorts. Items that were part of more than one test and had to be eliminated according to the procedure in at least one of the tests turned out, by the same procedure, to warrant retention in most of the other tests. The procedure harmed high-scoring students relatively more often than other students, and the number of eliminated items proved to be cohort dependent. As a consequence, automatic elimination procedures obscure the transparency of the grading process unacceptably and transform valid tests into inadequate samples of the course content.
9

Fitriati, Fitriati. "Differential Item Functioning: Item Level Analysis of TIMSS Mathematics Test Items Using Australian and Indonesian Database." Hubs-Asia 18, no. 2 (December 1, 2014): 127. http://dx.doi.org/10.7454/mssh.v18i2.170.

10

Fitriati, Fitriati. "Differential Item Functioning: Item Level Analysis of TIMSS Mathematics Test Items Using Australian and Indonesian Database." Makara Human Behavior Studies in Asia 18, no. 2 (December 1, 2014): 127. http://dx.doi.org/10.7454/mssh.v18i2.3467.

11

Fukuhara, Hirotaka, and Akihito Kamata. "A Bifactor Multidimensional Item Response Theory Model for Differential Item Functioning Analysis on Testlet-Based Items." Applied Psychological Measurement 35, no. 8 (November 2011): 604–22. http://dx.doi.org/10.1177/0146621611428447.

Abstract:
A differential item functioning (DIF) detection method for testlet-based data was proposed and evaluated in this study. The proposed DIF model is an extension of a bifactor multidimensional item response theory (MIRT) model for testlets. Unlike traditional item response theory (IRT) DIF models, the proposed model takes testlet effects into account, thus estimating DIF magnitude appropriately when a test is composed of testlets. A fully Bayesian estimation method was adopted for parameter estimation. The recovery of parameters was evaluated for the proposed DIF model. Simulation results revealed that the proposed bifactor MIRT DIF model produced better estimates of DIF magnitude and higher DIF detection rates than the traditional IRT DIF model for all simulation conditions. A real data analysis was also conducted by applying the proposed DIF model to a statewide reading assessment data set.
12

Wyse, Adam E., and Raymond Mapuranga. "Differential Item Functioning Analysis Using Rasch Item Information Functions." International Journal of Testing 9, no. 4 (November 10, 2009): 333–57. http://dx.doi.org/10.1080/15305050903352040.

13

Sheng, Xiaoming, Atanu Biswas, and K. C. Carrière. "Incorporating Inter-item Correlations in Item Response Data Analysis." Biometrical Journal 45, no. 7 (October 2003): 837–50. http://dx.doi.org/10.1002/bimj.200390053.

14

Haladyna, Thomas M., and Michael C. Rodriguez. "Using Full-information Item Analysis to Improve Item Quality." Educational Assessment 26, no. 3 (July 3, 2021): 198–211. http://dx.doi.org/10.1080/10627197.2021.1946390.

15

Ullstadius, Eva, Berit Carlstedt, and Jan-Eric Gustafsson. "Multidimensional item analysis of ability factors in spatial test items." Personality and Individual Differences 37, no. 5 (October 2004): 1003–12. http://dx.doi.org/10.1016/j.paid.2003.11.009.

16

Zickar, Michael J., and Chet Robie. "Modeling faking good on personality items: An item-level analysis." Journal of Applied Psychology 84, no. 4 (1999): 551–63. http://dx.doi.org/10.1037/0021-9010.84.4.551.

17

Maharani, Amalia Vidya, and Nur Hidayanto Pancoro Setyo Putro. "Item Analysis of English Final Semester Test." Indonesian Journal of EFL and Linguistics 5, no. 2 (December 1, 2020): 491. http://dx.doi.org/10.21462/ijefl.v5i2.302.

Abstract:
Numerous studies have examined item analysis of English tests. However, investigations of the characteristics of a good English final semester test are still rare in several districts of East Java. This research examined the quality of the English final semester test for the 2018/2019 academic year in Ponorogo. A total of 151 students' answers to the test were analysed for item difficulty, item discrimination, and distractor effectiveness using the Quest program. This descriptive quantitative study revealed that the test does not have a good proportion of easy, medium, and difficult items. For item discrimination, the test had 39 excellent items (97.5%), meaning that it could discriminate between high and low achievers. In addition, the distractors worked: 32 items (80%) had effective distractors. These findings underline that item analysis is an important step in constructing a test, since the quality of the test directly affects the accuracy of students' scores.
18

Burns, Daniel J., Nicholas J. Martens, Alicia A. Bertoni, Emily J. Sweeney, and Michelle D. Lividini. "An item gains and losses analysis of false memories suggests critical items receive more item-specific processing than list items." Journal of Experimental Psychology: Learning, Memory, and Cognition 32, no. 2 (2006): 277–89. http://dx.doi.org/10.1037/0278-7393.32.2.277.

19

Cuhadar, Ismail, Yanyun Yang, and Insu Paek. "Consequences of Ignoring Guessing Effects on Measurement Invariance Analysis." Applied Psychological Measurement 45, no. 4 (May 17, 2021): 283–96. http://dx.doi.org/10.1177/01466216211013915.

Abstract:
Pseudo-guessing parameters are present in item response theory applications for many educational assessments. When sample size is not sufficiently large, the guessing parameters may be ignored from the analysis. This study examines the impact of ignoring pseudo-guessing parameters on measurement invariance analysis, specifically, on item difficulty, item discrimination, and mean and variance of ability distribution. Results show that when non-zero guessing parameters are ignored from the measurement invariance analysis, item discrimination estimates tend to decrease particularly for more difficult items, and item difficulty estimates decrease unless the items are highly discriminating and difficult. As the guessing parameter increases, the size of the decrease in item discrimination and difficulty tends to increase, and the estimated mean and variance of ability distribution tend to be inaccurate. When two groups have heterogeneous ability distributions, ignoring the guessing parameter affects the reference group and the focal group differently. Implications of result findings are discussed.
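
For context, the pseudo-guessing parameter is the lower asymptote c of the three-parameter logistic (3PL) model. A small illustrative sketch (not the authors' code; parameter values are made up) shows how a non-zero c lifts success probability at low ability, which is what gets absorbed into distorted difficulty and discrimination estimates when c is forced to zero:

```python
import math

def p_correct_3pl(theta, a, b, c=0.0):
    """3PL item response function: c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# A low-ability examinee on a difficult item: fixing c = 0 understates the
# success probability that the 3PL attributes to guessing.
for c in (0.0, 0.2):
    print(c, round(p_correct_3pl(theta=-2.0, a=1.2, b=1.0, c=c), 3))
```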
20

Shamsuddin, Hasni, Nordin Abdul Razak, and Ahmad Zamri Khairani. "Calibrating Students’ Performance in Mathematics: A Rasch Model Analysis." International Journal of Engineering & Technology 7, no. 3.20 (September 1, 2018): 109. http://dx.doi.org/10.14419/ijet.v7i3.20.18991.

Abstract:
Rasch model analysis is an important tool for analysing students' performance at the item level. The purpose of this study was to calibrate 14-year-old students' performance on a mathematics test using the item difficulty parameter. A total of 307 Form 2 students provided responses, gauged with a 40-item multiple-choice test developed for the study. Results show that two items had to be dropped because they did not meet the Rasch model's expectations. Analysis of the remaining items showed that students were most competent on the item related to Directed Numbers (mean = -1.445 logits) and least competent on the topic of Circles (mean = 1.065 logits). We also provide a calibration of performance at the item level and discuss how the findings might help teachers address students' difficulties with these topics.
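
As a rough illustration of what calibrating item difficulty in logits means, the sketch below converts each item's observed proportion correct into a centred logit difficulty. This is only a first-pass approximation (in the spirit of the PROX procedure); the study's calibration would come from maximum likelihood Rasch software. All data and names here are assumed for illustration:

```python
import numpy as np

def rough_rasch_difficulties(scores):
    """Centred logit difficulties from a 0/1 response matrix:
    b_i = log((1 - p_i) / p_i), shifted to a mean of zero logits.
    Harder items get larger (more positive) logit values."""
    p = scores.mean(axis=0).clip(0.01, 0.99)  # avoid log(0) at the extremes
    b = np.log((1.0 - p) / p)
    return b - b.mean()

# Toy data: 307 examinees x 40 items.
rng = np.random.default_rng(1)
scores = (rng.random((307, 40)) < rng.uniform(0.2, 0.95, size=40)).astype(int)
b = rough_rasch_difficulties(scores)
print("easiest item:", b.argmin(), "hardest item:", b.argmax())
```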
21

Yang, Ji Seung, and Xiaying Zheng. "Item Response Data Analysis Using Stata Item Response Theory Package." Journal of Educational and Behavioral Statistics 43, no. 1 (December 20, 2017): 116–29. http://dx.doi.org/10.3102/1076998617749186.

Abstract:
The purpose of this article is to introduce and review the capability and performance of the Stata item response theory (irt) package, available since Stata version 14 (2015). Using a simulated data set and a publicly available item response data set extracted from the Programme for International Student Assessment, we review the irt package from the perspectives of applied and methodological researchers. After discussing the supported item response models and estimation methods implemented in the package, we demonstrate the accuracy of estimation compared with results from other typically used software packages. We also review features for differential item functioning analysis, scoring, and graphing.
22

Sinharay, Sandip. "BAYESIAN ITEM FIT ANALYSIS FOR DICHOTOMOUS ITEM RESPONSE THEORY MODELS." ETS Research Report Series 2003, no. 2 (December 2003): i–47. http://dx.doi.org/10.1002/j.2333-8504.2003.tb01926.x.

23

Toribio, S. G., and J. H. Albert. "Discrepancy measures for item fit analysis in item response theory." Journal of Statistical Computation and Simulation 81, no. 10 (October 2011): 1345–60. http://dx.doi.org/10.1080/00949655.2010.485131.

24

Sinharay, Sandip. "Bayesian item fit analysis for unidimensional item response theory models." British Journal of Mathematical and Statistical Psychology 59, no. 2 (November 2006): 429–49. http://dx.doi.org/10.1348/000711005x66888.

25

Gamarnik, David, and Petar Momčilović. "A Transposition Rule Analysis Based on a Particle Process." Journal of Applied Probability 42, no. 1 (March 2005): 235–46. http://dx.doi.org/10.1017/s0021900200000188.

Abstract:
A linear list is a collection of items that can be accessed sequentially. The cost of a request is the number of items that need to be examined before the desired item is located, i.e. the distance of the requested item from the beginning of the list. The transposition rule is one of the algorithms designed to reduce the search cost by organizing the list. In particular, upon a request for a given item, the item is transposed with the preceding one. We develop a new approach for analyzing the algorithm, based on a coupling to a certain constrained asymmetric exclusion process. This allows us to establish an asymptotic optimality of the rule for two families of request distributions.
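
The rule itself is a few lines of code. This toy simulation (the list size and the Zipf-like request distribution are arbitrary assumptions) applies the transposition rule and reports the average search cost, i.e. the mean 1-based position of requested items:

```python
import random

def transpose_rule_cost(n_items=20, n_requests=100_000, seed=0):
    """Simulate the transposition rule on a linear list and return the
    average search cost (1-based position of each requested item)."""
    rng = random.Random(seed)
    items = list(range(n_items))
    # A skewed (Zipf-like) request distribution over the items.
    weights = [1.0 / (k + 1) for k in range(n_items)]
    total_cost = 0
    for _ in range(n_requests):
        target = rng.choices(range(n_items), weights=weights)[0]
        pos = items.index(target)
        total_cost += pos + 1
        if pos > 0:  # transpose the requested item with its predecessor
            items[pos - 1], items[pos] = items[pos], items[pos - 1]
    return total_cost / n_requests

print(transpose_rule_cost())
```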
26

Gamarnik, David, and Petar Momčilović. "A Transposition Rule Analysis Based on a Particle Process." Journal of Applied Probability 42, no. 1 (March 2005): 235–46. http://dx.doi.org/10.1239/jap/1110381383.

Abstract:
A linear list is a collection of items that can be accessed sequentially. The cost of a request is the number of items that need to be examined before the desired item is located, i.e. the distance of the requested item from the beginning of the list. The transposition rule is one of the algorithms designed to reduce the search cost by organizing the list. In particular, upon a request for a given item, the item is transposed with the preceding one. We develop a new approach for analyzing the algorithm, based on a coupling to a certain constrained asymmetric exclusion process. This allows us to establish an asymptotic optimality of the rule for two families of request distributions.
27

Suruchi, Suruchi, and Surender Singh Rana. "Test Item Analysis and Relationship Between Difficulty Level and Discrimination Index of Test Items in an Achievement Test in Biology." Paripex - Indian Journal Of Research 3, no. 6 (January 15, 2012): 56–58. http://dx.doi.org/10.15373/22501991/june2014/18.

28

Lailiyah, Lailiyah, Yetti Supriyati, and Komarudin Komarudin. "ANALYSIS OF MEASURES ITEMS IN DEVELOPMENT OF INSTRUMENTS SELF-ASSESSMENT (RASCH MODELING APPLICATION)." JISAE: JOURNAL OF INDONESIAN STUDENT ASSESMENT AND EVALUATION 4, no. 1 (February 21, 2018): 1–9. http://dx.doi.org/10.21009/jisae.041.01.

Abstract:
This analysis aims to determine the quality of the instrument items developed in the first empirical test phase. Tests were carried out on 46 items with 219 respondents at SMA Ksatrya Jakarta. Item quality was judged by model fit and by item difficulty. Fit was assessed from the INFIT and OUTFIT statistics, both MNSQ and ZSTD, together with the point-measure correlation; item difficulty was read from the logit values, sorted from hardest to easiest. Based on the analysis with the Winsteps software, 39 statement items fit the model (with 194 respondents retained), meeting all three criteria (MNSQ, ZSTD, and point-measure correlation); these 39 items are therefore valid. The analysis also showed that the most difficult item was item 5, with a logit value of 63.32, while the easiest was item 44, with a logit value of 36.13. A fitting instrument must pass several stages of analysis: items that do not fit are removed, as are misfitting respondents, yielding a set of measuring instruments that is valid, fits the model, and can be used for assessment purposes. Keywords: self-assessment, infit, outfit, ZSTD, Rasch model.
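
For readers unfamiliar with the INFIT and OUTFIT mean-square (MNSQ) fit criteria used here, the sketch below computes both for the simpler dichotomous Rasch case. This is an illustrative simplification with simulated data, not the authors' Winsteps analysis of their rating-scale instrument:

```python
import numpy as np

def rasch_item_fit(x, p):
    """INFIT/OUTFIT MNSQ per item for dichotomous data.
    x: observed 0/1 responses (persons x items);
    p: model-expected success probabilities, same shape.
    OUTFIT is the unweighted mean of squared standardized residuals;
    INFIT weights by the response variance, so it is less outlier-driven."""
    resid_sq = (x - p) ** 2
    var = p * (1.0 - p)
    outfit = (resid_sq / var).mean(axis=0)
    infit = resid_sq.sum(axis=0) / var.sum(axis=0)
    return infit, outfit

# Toy example with probabilities from assumed person/item parameters.
rng = np.random.default_rng(2)
theta = rng.normal(size=(219, 1))          # person abilities
b = rng.normal(size=(1, 46))               # item difficulties
p = 1.0 / (1.0 + np.exp(-(theta - b)))     # Rasch success probabilities
x = (rng.random(p.shape) < p).astype(float)
infit, outfit = rasch_item_fit(x, p)
print(infit[:3], outfit[:3])                # values near 1.0 indicate fit
```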
29

Rosli, Roslinda, Mardina Abdullah, Nur Choiro Siregar, Nurul Shazana Abdul Hamid, Sabirin Abdullah, Gan Kok Beng, Lilia Halim, et al. "Student Awareness of Space Science: Rasch Model Analysis for Validity and Reliability." World Journal of Education 10, no. 3 (June 20, 2020): 170. http://dx.doi.org/10.5430/wje.v10n3p170.

Abstract:
Validity and reliability are crucial in research to ensure the truthfulness of an instrument. This study investigated the measurement functioning of an instrument on students' awareness of space science. The instrument was administered to 206 secondary school students involved in the Sudden Ionospheric Disturbance π outreach program. Two experts evaluated the content validity of the instrument. Data were analyzed with the Winsteps 3.71.0.1 software to obtain a Rasch model analysis (RMA) covering item reliability and person separation, item measures, item fit based on point-measure correlation (PTMEA CORR), item polarity, misfit items, unidimensionality, and a person-item map. The findings revealed that the items are valid, reliable, and appropriate for measuring awareness of space science.
30

Steenbergen, Marco R. "Item Similarity in Scale Analysis." Political Analysis 8, no. 3 (March 23, 2000): 261–83. http://dx.doi.org/10.1093/oxfordjournals.pan.a029816.

Abstract:
A statistic—the similarity coefficient—is developed for assessing the property that a set of scale items measures one and only one construct. This statistic is rooted in an explicit measurement model and is flexible enough to be used in exploratory scale analyses, even in small samples. Methods for analyzing similarity coefficients are described and illustrated in analyses of Stimson's (1991) policy mood data and Markus' (1990) popular individualism items. The Appendix discusses the statistical properties of similarity coefficients.
31

Wainer, Howard. "The Future of Item Analysis." Journal of Educational Measurement 26, no. 2 (June 1989): 191–208. http://dx.doi.org/10.1111/j.1745-3984.1989.tb00328.x.

32

Wainer, Howard. "THE FUTURE OF ITEM ANALYSIS." ETS Research Report Series 1988, no. 2 (December 1988): i–27. http://dx.doi.org/10.1002/j.2330-8516.1988.tb00306.x.

33

Tandon, Aseem, and Rajan Bhatnagar. "Item analysis: An innovative approach." Journal of the Anatomical Society of India 64 (September 2015): S32. http://dx.doi.org/10.1016/j.jasi.2015.07.349.

34

Jones, Andrew T. "Comparing Methods for Item Analysis." Applied Psychological Measurement 35, no. 7 (October 2011): 566–71. http://dx.doi.org/10.1177/0146621611414406.

35

Bock, R. Darrell, Robert Gibbons, and Eiji Muraki. "Full-Information Item Factor Analysis." Applied Psychological Measurement 12, no. 3 (September 1988): 261–80. http://dx.doi.org/10.1177/014662168801200305.

36

Maydeu-Olivares, Albert, and Donna L. Coffman. "Random intercept item factor analysis." Psychological Methods 11, no. 4 (2006): 344–62. http://dx.doi.org/10.1037/1082-989x.11.4.344.

37

Ünlü, Ali, and Martin Schrepp. "Generalized inductive item tree analysis." Journal of Mathematical Psychology 103 (August 2021): 102547. http://dx.doi.org/10.1016/j.jmp.2021.102547.

38

Rijkeboer, Marleen M., Huub van den Bergh, and Jan van den Bout. "Item Bias Analysis of the Young Schema-Questionnaire for Psychopathology, Gender, and Educational Level." European Journal of Psychological Assessment 27, no. 1 (January 2011): 65–70. http://dx.doi.org/10.1027/1015-5759/a000044.

Abstract:
This study examines the construct validity of the Young Schema-Questionnaire at the item level in a Dutch population. Possible bias of items in relation to the presence or absence of psychopathology, gender, and educational level was analyzed, using a cross-validation design. None of the items of the YSQ exhibited differential item functioning (DIF) for gender, and only one item showed DIF for educational level. Furthermore, item bias analysis did not identify DIF for the presence or absence of psychopathology in as much as 195 of the 205 items comprising the YSQ. Ten items, however, spread over the questionnaire, were found to yield relatively inconsistent response patterns for patients and nonclinical participants.
39

Lightfoot, Courtney J., Thomas J. Wilkinson, Katherine E. Memory, Jared Palmer, and Alice C. Smith. "Reliability and Validity of the Patient Activation Measure in Kidney Disease: Results of Rasch Analysis." Clinical Journal of the American Society of Nephrology 16, no. 6 (June 2021): 880–88. http://dx.doi.org/10.2215/cjn.19611220.

Abstract:
Background and objectives: Despite the increasing prioritization of the promotion of patient activation in nephrology, its applicability to people with CKD is not well established. Before the Patient Activation Measure is universally adopted for use in CKD, it is important to critically evaluate this measure. The aim of this study was to describe the psychometric properties of the Patient Activation Measure in CKD. Design, setting, participants, & measurements: A survey containing the 13-item Patient Activation Measure was completed by 942 patients with CKD, not treated with dialysis. Data quality was assessed by mean, item response, missing values, floor and ceiling effects, internal consistency (Cronbach's alpha and average interitem correlation), and item-rest correlations. Rasch modeling was used to assess item performance and scaling (item statistics, person and item reliability, rating scale diagnostics, factorial test of residuals, and differential item functioning). Results: The item response was high, with a small number of missing values (<1%). The floor effect was small (range 1%–5%), but the ceiling effect was above 15% for nine items (range 15%–38%). The Patient Activation Measure demonstrated good internal consistency overall (Cronbach α=0.925; average interitem correlation 0.502). The difficulty of the Patient Activation Measure items ranged from −0.90 to 0.86. Differential item functioning was found for disease type (item 3) and age (item 12). The person separation index was 9.48 and the item separation index was 3.21. Conclusions: The 13-item Patient Activation Measure appears to be a suitably reliable and valid instrument for assessing patient activation in CKD. In the absence of a kidney-specific instrument, our results support the 13-item Patient Activation Measure as a promising measure to assess activation in those with CKD, although consideration for several items is warranted. The high ceiling effect may be a problem when using the 13-item Patient Activation Measure to measure changes over time.
40

Laela, Madiana, Dewi Rochsantiningsih, and Martono Martono. "Item Analysis of Preparation Test for English National Examination." English Education 6, no. 1 (September 29, 2017): 36. http://dx.doi.org/10.20961/eed.v6i1.35897.

Abstract:
This research aims to reveal the quality of an English national examination preparation test in both qualitative and quantitative terms. The qualitative aspect covers content validity, technical item quality, and the cognitive domain of the learning outcomes, while the quantitative aspect covers reliability, difficulty level, item discrimination, and distractor effectiveness. The sample was taken from 3 of 10 schools in Pati district using simple random sampling. The research employs both qualitative and quantitative analysis: expert judgement is used to assess content validity and technical item quality, while ITEMAN is used for the quantitative analysis. The results showed that the test has good content validity (99.06% of items match the competence being measured) and good technical item quality, and that most items (81.13%) fall in cognitive learning outcome domain C2 (Understand). Moreover, the test has a high reliability index (> 0.8), fair difficulty, and good discrimination. However, 35.85% of the items have ineffective distractors.
41

Raubenheimer, Rita I., and D. J. Prinsloo. "Item analysis for improving multiple-choice test items in North Sotho." South African Journal of African Languages 9, no. 2 (January 1989): 70–73. http://dx.doi.org/10.1080/02572117.1989.10586781.

42

Eichenbaum, Alexander E., David K. Marcus, and Brian F. French. "Item Response Theory Analysis of the Psychopathic Personality Inventory–Revised." Assessment 26, no. 6 (June 22, 2017): 1046–58. http://dx.doi.org/10.1177/1073191117715729.

Abstract:
This study examined item and scale functioning in the Psychopathic Personality Inventory–Revised (PPI-R) using an item response theory analysis. PPI-R protocols from 1,052 college student participants (348 male, 704 female) were analyzed. Analyses were conducted on the 131 self-report items comprising the PPI-R’s eight content scales, using a graded response model. Scales collected a majority of their information about respondents possessing higher than average levels of the traits being measured. Each scale contained at least some items that evidenced limited ability to differentiate between respondents with differing levels of the trait being measured. Moreover, 80 items (61.1%) yielded significantly different responses between men and women presumably possessing similar levels of the trait being measured. Item performance was also influenced by the scoring format (directly scored vs. reverse-scored) of the items. Overall, the results suggest that the PPI-R, despite identifying psychopathic personality traits in individuals possessing high levels of those traits, may not identify these traits equally well for men and women, and scores are likely influenced by the scoring format of the individual item and scale.
43

Costa, Daniel S. J., Ali Asghari, and Michael K. Nicholas. "Item response theory analysis of the Pain Self-Efficacy Questionnaire." Scandinavian Journal of Pain 14, no. 1 (January 1, 2017): 113–17. http://dx.doi.org/10.1016/j.sjpain.2016.08.001.

Abstract:
Background and aims: The Pain Self-Efficacy Questionnaire (PSEQ) is a 10-item instrument designed to assess the extent to which a person in pain believes s/he is able to accomplish various activities despite their pain. There is strong evidence for the validity and reliability of both the full-length PSEQ and a 2-item version. The purpose of this study is to further examine the properties of the PSEQ using an item response theory (IRT) approach. Methods: We used the two-parameter graded response model to examine the category probability curves, and location and discrimination parameters of the 10 PSEQ items. In item response theory, responses to a set of items are assumed to be probabilistically determined by a latent (unobserved) variable. In the graded response model specifically, item response threshold (the value of the latent variable for which adjacent response categories are equally likely) and discrimination parameters are estimated for each item. Participants were 1511 mixed, chronic pain patients attending for initial assessment at a tertiary pain management centre. Results: All items except item 7 ('I can cope with my pain without medication') performed well in the IRT analysis, and the category probability curves suggested that participants used the 7-point response scale consistently. Items 6 ('I can still do many of the things I enjoy doing, such as hobbies or leisure activity, despite pain'), 8 ('I can still accomplish most of my goals in life, despite the pain') and 9 ('I can live a normal lifestyle, despite the pain') captured higher levels of the latent variable with greater precision. Conclusions: The results from this IRT analysis add to the body of evidence based on classical test theory illustrating the strong psychometric properties of the PSEQ. Despite the relatively poor performance of item 7, its clinical utility warrants its retention in the questionnaire. Implications: The strong psychometric properties of the PSEQ support its use as an effective tool for assessing self-efficacy in people with pain.
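
As a worked illustration of the two-parameter graded response model named in the methods, the sketch below computes category probabilities for a single polytomous item; the discrimination and threshold values are invented for illustration, not estimated from the PSEQ data:

```python
import numpy as np

def grm_category_probs(theta, a, thresholds):
    """Graded response model for one item.
    P*(k) = logistic(a * (theta - b_k)) is the probability of scoring in
    category k or above; category probabilities are successive differences."""
    theta = np.atleast_1d(theta)
    b = np.asarray(thresholds)               # ordered: b_1 < b_2 < ... < b_K
    p_star = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))
    ones = np.ones((theta.size, 1))
    zeros = np.zeros((theta.size, 1))
    cum = np.hstack([ones, p_star, zeros])   # P*(0) = 1, P*(K+1) = 0
    return cum[:, :-1] - cum[:, 1:]          # P(X = k), k = 0..K

# A hypothetical 7-category PSEQ-like item (a and thresholds are assumed).
probs = grm_category_probs(theta=[-1.0, 0.0, 1.0], a=1.5,
                           thresholds=[-2.0, -1.2, -0.4, 0.4, 1.2, 2.0])
print(probs.round(3), probs.sum(axis=1))     # each row sums to 1
```

The threshold b_k is exactly the latent-variable value at which scoring below and at-or-above category k are equally likely, which is the interpretation the abstract gives.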
44

Wilson, Damián Vergara. "Developing a Placement Exam for Spanish Heritage Language Learners: Item Analysis and Learner Characteristics." Heritage Language Journal 9, no. 1 (March 30, 2012): 27–50. http://dx.doi.org/10.46538/hlj.9.1.3.

Abstract:
This paper illustrates a method of item analysis used to identify discriminating multiple-choice items in placement data. The data come from two rounds of pilots given to both SHL students and Spanish as a Second Language (SSL) students. In the first round, 104 items were administered to 507 students. After discarding poor items, the second round presented 64 items to 330 students. Both graphical and statistical item analyses were employed. Graphical analysis involved an examination of trace-line graphs of each item. A fine-grained statistical analysis was conducted using point-biserial correlation coefficients. Both of these methods were useful and contributed to measure reliability. Different sets of items were selected for each learner group: 31 items for SHL participants and 21 for SSL participants. These items are currently being used in a preliminary online placement exam; after taking a biographical questionnaire, students are piped to either the SHL exam or the SSL exam. Finally, this paper examines characteristics of SHL students found in these data and finds that regional characteristics should be considered in item creation in terms of answer variability and possible distinction between SHL and SSL students.
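
The point-biserial coefficient used in the fine-grained analysis is the Pearson correlation between a dichotomous item score and a criterion score; a minimal corrected item-total version (with simulated data and assumed variable names) looks like this:

```python
import numpy as np

def corrected_point_biserial(scores):
    """Point-biserial correlation of each 0/1 item with the total score on
    the remaining items (the item is excluded to avoid self-inflation)."""
    totals = scores.sum(axis=1)
    r = np.empty(scores.shape[1])
    for j in range(scores.shape[1]):
        rest = totals - scores[:, j]
        r[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return r

# Toy data: 330 examinees x 64 items from a simple logistic response model.
rng = np.random.default_rng(3)
ability = rng.normal(size=(330, 1))
difficulty = rng.normal(size=(1, 64))
scores = (rng.random((330, 64)) < 1 / (1 + np.exp(difficulty - ability))).astype(int)
print(corrected_point_biserial(scores)[:5])
```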
45

Smith, Daniel R., Michael E. Hoffman, and James M. LeBreton. "Conditional Reasoning: An Integrated Approach to Item Analysis." Organizational Research Methods 23, no. 1 (October 20, 2019): 124–53. http://dx.doi.org/10.1177/1094428119879756.

Abstract:
This article provides a review of the approach that James used when conducting item analyses on his conditional reasoning test items. That approach was anchored in classical test theory. Our article extends this work in two important ways. First, we offer a set of test development protocols that are tailored to the unique nature of conditional reasoning tests. Second, we further extend James’s approach by integrating his early test validation protocols (based on classical test theory) with more recent protocols (based on item response theory). We then apply our integrated item analytic framework to data collected on James’s first test, the conditional reasoning test for relative motive strength. We illustrate how this integrated approach furnishes additional diagnostic information that may allow researchers to make more informed and targeted revisions to an initial set of items.
46

Kim, Kyungyeol Anthony, Senyung Lee, and Kevin K. Byon. "How useful is each item in the Sport Spectator Identification Scale?: an item response theory analysis." International Journal of Sports Marketing and Sponsorship 21, no. 4 (April 18, 2020): 651–67. http://dx.doi.org/10.1108/ijsms-01-2020-0001.

Abstract:
Purpose: The purpose of this study is to evaluate the psychometric properties of each item in the Sport Spectator Identification Scale (SSIS) (Wann and Branscombe, 1993) using item response theory (IRT) and to provide evidence for modifications in the scale. Design/methodology/approach: A total of 635 spectators of US professional sports responded to the seven-item SSIS on an eight-point semantic differential scale. The general partial credit model was fitted to the data. Findings: The results revealed that four items (items 1, 2, 3 and 5) provide a relatively high amount of information, whereas three items (items 4, 6 and 7) provide a low amount of information, indicating different levels of measurement precision among the items. Furthermore, the results showed that some low-level response options were rarely selected by participants, indicating that it may not be necessary to include as many as eight response options within each item. Originality/value: Unlike previous studies examining the psychometric properties of the SSIS as a whole, the present study provides information about the usefulness of each item of the SSIS in measuring individuals' team identification. Based on the findings, the authors identified issues with the three problematic items, including the wording of the items and the link between the question and the target construct. The authors make several suggestions for researchers and practitioners for improving individual item quality and making informed decisions when using the SSIS in the future.
47

Burud, Ismail, Kavitha Nagandla, and Puneet Agarwal. "Impact of distractors in item analysis of multiple choice questions." International Journal of Research in Medical Sciences 7, no. 4 (March 27, 2019): 1136. http://dx.doi.org/10.18203/2320-6012.ijrms20191313.

Abstract:
Background: Item analysis is a quality assurance process that examines the performance of individual test items and measures the validity and reliability of exams. This study evaluated the quality of test items with respect to their difficulty index (DFI), discrimination index (DI), and functional and non-functional distractors (FDs and NFDs). Methods: The study was performed on a summative examination undertaken by 113 students. The analysis covered 120 one-best-answer items (OBAs) and their 360 distractors. Results: Of the 360 distractors, 85 were chosen by fewer than 5% of examinees, giving a distractor efficiency of 23.6%. About 47 items (13%) had no NFDs, while 51 (14%), 30 (8.3%), and 4 (1.1%) items contained 1, 2, and 3 NFDs respectively. The majority of items showed an excellent difficulty index (50.4%, n=42) and fair discrimination (37%, n=33). Items with an excellent difficulty index and discrimination index showed a statistically significant association with 1 NFD and 2 NFDs (p=0.03). Conclusions: Post-examination evaluation of item performance is one of the quality assurance methods for identifying the best-performing items for a quality question bank. Distractor efficiency gives information on the overall quality of an item.
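
Using the 5% criterion stated in this abstract, a distractor is non-functional if fewer than 5% of examinees choose it. Below is a minimal sketch (simulated data; the function and variable names, and the per-item efficiency defined as the share of functional distractors, are this sketch's own assumptions rather than the paper's exact computation):

```python
import numpy as np

def distractor_analysis(choices, key, n_options=4, threshold=0.05):
    """choices: (examinees x items) matrix of selected options (0..n_options-1);
    key: correct option per item. A distractor is non-functional (NFD) if it
    is chosen by fewer than `threshold` of examinees."""
    n, n_items = choices.shape
    results = []
    for j in range(n_items):
        counts = np.bincount(choices[:, j], minlength=n_options) / n
        distractors = [k for k in range(n_options) if k != key[j]]
        nfd = [k for k in distractors if counts[k] < threshold]
        efficiency = 1.0 - len(nfd) / len(distractors)
        results.append((j, nfd, round(efficiency, 2)))
    return results

# Toy data: 113 examinees x 5 four-option items with an answer key.
rng = np.random.default_rng(4)
choices = rng.integers(0, 4, size=(113, 5))
key = rng.integers(0, 4, size=5)
for item, nfd, de in distractor_analysis(choices, key):
    print(f"item {item}: non-functional distractors {nfd}, efficiency {de}")
```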
48

Wolf, David B. "A Psychometric Analysis of the Three Gunas." Psychological Reports 84, no. 3_suppl (June 1999): 1379–90. http://dx.doi.org/10.2466/pr0.1999.84.3c.1379.

Abstract:
The Vedic Personality Inventory was devised to assess the validity of the Vedic concept of the three gunas, or modes of nature, as a psychological categorization system. The sample of 619 subjects included persons of varying ages and occupations from a middle-sized city in the southeastern United States, as well as subscribers to a magazine focusing on Eastern-style spirituality. The original 90-item inventory was shortened to 56 items on the basis of reliability and validity analyses. Cronbach alpha for the three subscales ranged from .93 to .94, and the corrected item-total correlation of every item score with its subscale score was greater than .50. Three measures of convergent validity and four measures of discriminant validity provide evidence for construct validity. The loading of every item on the scale is stronger for the intended subscale than for any other subscale. Although each subscale contains congeneric items, the factors are not independent; this nonorthogonality is consistent with Vedic theory. The inventory requires further psychometric development, cross-cultural testing, and experimental implementation in group research and individual assessment.
49

Peng, Jiaxi, Danmin Miao, Yebing Yang, Yuan Jiang, and Wei Xiao. "Item Analysis of Combined Raven's Test Based on Item Response Theory." International Journal on Advances in Information Sciences and Service Sciences 4, no. 18 (October 31, 2012): 357–488. http://dx.doi.org/10.4156/aiss.vol4.issue18.43.

50

Alpusari, Mahmud. "ANALISIS BUTIR SOAL KONSEP DASAR IPA 1 MELALUI PENGGUNAAN PROGRAM KOMPUTER ANATES VERSI 4.0 FOR WINDOWS." Primary: Jurnal Pendidikan Guru Sekolah Dasar 3, no. 2 (January 8, 2015): 106. http://dx.doi.org/10.33578/jpfkip.v3i2.2501.

Abstract:
This was a qualitative study using a descriptive method. The subjects were student teachers taking Fundamental Science 1. Based on the validity analysis of the items, 16 items were valid at the 1% significance level, 26 were valid at the 5% significance level, and 14 were invalid. In the analysis of item discrimination, item 20 was very poor, 15 items were poor, another 15 were fair, and the remaining items were good. In the analysis of difficulty level, 17 items were very easy, 9 were easy, 11 were moderate, 1 was difficult, and the rest were very difficult. Taking all analyses together, only 21 items were ready to be used, 5 needed revision, and the rest could not be used in a test. Keywords: Konsep Dasar IPA 1, validity, item discrimination, level of difficulty.