Academic literature on the topic 'Differential Item Functioning (DIF)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Differential Item Functioning (DIF).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Differential Item Functioning (DIF)"

1

Prieto-Marañón, Pedro, María Ester Aguerri, María Silvia Galibert, and Horacio Félix Attorresi. "Detection of Differential Item Functioning." Methodology 8, no. 2 (August 1, 2012): 63–70. http://dx.doi.org/10.1027/1614-2241/a000038.

Abstract:
This study analyzes Differential Item Functioning (DIF) with three combined decision rules and compares the results with the variation of the Mantel-Haenszel procedure (vaMH) proposed by Mazor, Clauser, and Hambleton (1994). One decision rule combines the Mantel-Haenszel procedure (MH) with the Breslow-Day test of trend in odds ratio heterogeneity (BDT), having performed the Bonferroni adjustment, as Randall Penfield proposed. The second uses both MH and BDT without the Bonferroni adjustment. The third combines MH with the Breslow-Day test for homogeneity of the odds ratio without the Bonferroni adjustment. The three decision rules yielded satisfactory results, showed similar power, and none of them detected DIF erroneously. The second rule proved to be the most powerful in the presence of nonuniform DIF. Only in the presence of uniform DIF with the smallest difference in difficulty parameters was there evidence of vaMH's superiority.
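The Mantel-Haenszel procedure that recurs throughout these entries compares, within each stratum of a matching score, the odds of a correct response for reference and focal examinees. The following is only a rough sketch of the MH common odds ratio at the heart of that procedure; the toy data set and function names are our own, not code from any cited paper:

```python
from collections import defaultdict

def mantel_haenszel_or(responses, groups, item):
    """Estimate the MH common odds ratio (alpha_MH) for one 0/1 item.

    responses: list of 0/1 response vectors, one per examinee
    groups:    list of "ref"/"focal" labels, one per examinee
    item:      index of the studied item
    """
    # Per matching stratum k (rest score), tally the 2x2 table:
    # A = ref correct, B = ref incorrect, C = focal correct, D = focal incorrect
    strata = defaultdict(lambda: [0, 0, 0, 0])
    for resp, g in zip(responses, groups):
        k = sum(resp) - resp[item]          # rest score as the matching variable
        correct = resp[item]
        idx = (0 if correct else 1) if g == "ref" else (2 if correct else 3)
        strata[k][idx] += 1

    num = den = 0.0
    for A, B, C, D in strata.values():
        n = A + B + C + D
        num += A * D / n                    # MH numerator term
        den += B * C / n                    # MH denominator term
    return num / den if den else float("inf")

# Tiny made-up data set: 3 items, item 0 is the studied item.
responses = [
    [1, 1, 0], [1, 0, 1], [0, 1, 0], [1, 1, 1],   # reference group
    [0, 1, 0], [1, 0, 1], [0, 0, 1], [0, 1, 1],   # focal group
]
groups = ["ref"] * 4 + ["focal"] * 4
alpha = mantel_haenszel_or(responses, groups, item=0)
# alpha_MH > 1 suggests the item favors the reference group at matched ability
```

A value of alpha_MH near 1 indicates no uniform DIF; the MH chi-square test used in the papers above adds a significance test on top of this estimate.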
2

Walker, Cindy M., Bo Zhang, Kathleen Banks, and Kevin Cappaert. "Establishing Effect Size Guidelines for Interpreting the Results of Differential Bundle Functioning Analyses Using SIBTEST." Educational and Psychological Measurement 72, no. 3 (October 11, 2011): 415–34. http://dx.doi.org/10.1177/0013164411422250.

Abstract:
The purpose of this simulation study was to establish general effect size guidelines for interpreting the results of differential bundle functioning (DBF) analyses using simultaneous item bias test (SIBTEST). Three factors were manipulated: number of items in a bundle, test length, and magnitude of uniform differential item functioning (DIF) against the focal group in each item in a bundle. A secondary purpose was to validate the current effect size guidelines for interpreting the results of single-item DIF analyses using SIBTEST. The results of this study clearly demonstrate that ability estimation bias can only be attributed to DIF or DBF when a large number of items in a bundle are functioning differentially against focal examinees in a small way or a small number of items are functioning differentially against focal examinees in a large way. In either of these situations, the presence of DIF or DBF should be a cause for concern because it would lead one to erroneously believe that distinct groups differ in ability when in fact they do not.
3

French, Brian F., and Thao T. Vo. "Differential Item Functioning of a Truancy Assessment." Journal of Psychoeducational Assessment 38, no. 5 (July 19, 2019): 642–48. http://dx.doi.org/10.1177/0734282919863215.

Abstract:
The Washington Assessment of Risk and Needs of Students (WARNS) is a brief self-report measure designed for schools, courts, and youth service providers to identify student behaviors and contexts related to school truancy. Empirical support for WARNS item invariance between ethnic groups is lacking. This study examined differential item functioning (DIF) to ensure that items on the WARNS function similarly across groups, especially for groups where truancy rates are highest. The item response theory graded response model was used to examine DIF between Caucasian, African American, and Latinx students. DIF was identified in six items across WARNS domains. The DIF amount and magnitude likely will not influence decisions based on total scores. Implications for practice and suggestions for an ecological framework to explain the DIF results are discussed.
4

Johanson, George A. "Differential Item Functioning in Attitude Assessment." Evaluation Practice 18, no. 2 (June 1997): 127–35. http://dx.doi.org/10.1177/109821409701800204.

Abstract:
Differential item functioning (DIF) is not often seen in the literature on attitude assessment. A brief discussion of DIF and methods of implementation is followed by an illustrative example from a program evaluation, using an attitude-towards-science scale with 1550 children in grades one through six. An item exhibiting substantial DIF with respect to gender was detected using the Mantel-Haenszel procedure. In a second example, data from workshop evaluations with 1682 adults were recoded to a binary format, and it was found that an item suspected of functioning differentially with respect to age groups was, in fact, not doing so. Implications for evaluation practice are discussed.
5

Alwi, Idrus. "Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed from Sample Size." Jurnal Evaluasi Pendidikan 2, no. 1 (May 9, 2017): 18. http://dx.doi.org/10.21009/jep.021.02.

Abstract:
The aim of this research was to compare the sensitivity of the Mantel-Haenszel method and the Rasch model in detecting differential item functioning (DIF), as viewed from sample size. The two DIF methods were compared using simulated binary item response data sets of varying sample size; 200 and 400 examinees were used in the analyses, with DIF defined by gender difference. Each test condition was replicated four times. For both DIF detection methods, a test length of 42 items was sufficient for satisfactory DIF detection, with the detection rate increasing as sample size increased. The findings revealed that the Rasch model is more sensitive in detecting DIF than the Mantel-Haenszel method. With reference to these findings, the use of the Rasch model is recommended in evaluation activities involving multiple-choice tests. For this purpose, it is necessary for every school to have some teachers who are skillful in analyzing test results using modern methods (item response theory).
6

Wetzel, Eunike, and Benedikt Hell. "Gender-Related Differential Item Functioning in Vocational Interest Measurement." Journal of Individual Differences 34, no. 3 (August 1, 2013): 170–83. http://dx.doi.org/10.1027/1614-0001/a000112.

Abstract:
Large mean differences are consistently found in the vocational interests of men and women. These differences may be attributable to real differences in the underlying traits. However, they may also depend on the properties of the instrument being used. It is conceivable that, in addition to the intended dimension, items assess a second dimension that differentially influences responses by men and women. This question is addressed in the present study by analyzing a widely used German interest inventory (Allgemeiner Interessen-Struktur-Test, AIST-R) regarding differential item functioning (DIF) using a DIF estimate in the framework of item response theory. Furthermore, the impact of DIF at the scale level is investigated using differential test functioning (DTF) analyses. Several items on the AIST-R’s scales showed significant DIF, especially on the Realistic, Social, and Enterprising scales. Removal of DIF items reduced gender differences on the Realistic scale, though gender differences on the Investigative, Artistic, and Social scales remained practically unchanged. Thus, responses to some AIST-R items appear to be influenced by a secondary dimension apart from the interest domain the items were intended to measure.
7

Jin, Kuan-Yu, Hui-Fang Chen, and Wen-Chung Wang. "Using Odds Ratios to Detect Differential Item Functioning." Applied Psychological Measurement 42, no. 8 (March 21, 2018): 613–29. http://dx.doi.org/10.1177/0146621618762738.

Abstract:
Differential item functioning (DIF) makes test scores incomparable and substantially threatens test validity. Although conventional approaches, such as the logistic regression (LR) and the Mantel–Haenszel (MH) methods, have worked well, they are vulnerable to high percentages of DIF items in a test and missing data. This study developed a simple but effective method to detect DIF using the odds ratio (OR) of two groups’ responses to a studied item. The OR method uses all available information from examinees’ responses, and it can eliminate the potential influence of bias in the total scores. Through a series of simulation studies in which the DIF pattern, impact, sample size (equal/unequal), purification procedure (with/without), percentages of DIF items, and proportions of missing data were manipulated, the performance of the OR method was evaluated and compared with the LR and MH methods. The results showed that the OR method without a purification procedure outperformed the LR and MH methods in controlling false positive rates and yielding high true positive rates when tests had a high percentage of DIF items favoring the same group. In addition, only the OR method was feasible when tests adopted the item matrix sampling design. The effectiveness of the OR method with an empirical example was illustrated.
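As this abstract describes, the odds ratio for a studied item can be formed directly from the two groups' response counts. The snippet below is only a bare illustration of that quantity, ignoring the purification and missing-data handling the paper evaluates; the data and function name are invented:

```python
def item_odds_ratio(ref_responses, focal_responses):
    """Odds ratio for one 0/1 item; values above 1 favor the reference group."""
    a = sum(ref_responses)                     # reference group: correct
    b = len(ref_responses) - a                 # reference group: incorrect
    c = sum(focal_responses)                   # focal group: correct
    d = len(focal_responses) - c               # focal group: incorrect
    return (a * d) / (b * c)

# e.g. 30 of 40 reference vs. 20 of 40 focal examinees answer correctly:
ratio = item_odds_ratio([1] * 30 + [0] * 10, [1] * 20 + [0] * 20)
```

Because the counts come straight from whatever responses are observed, a statistic like this remains computable under designs where each examinee sees only a subset of items, which is the matrix-sampling advantage the abstract notes.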
8

Rome, Logan, and Bo Zhang. "Investigating the Effects of Differential Item Functioning on Proficiency Classification." Applied Psychological Measurement 42, no. 4 (August 29, 2017): 259–74. http://dx.doi.org/10.1177/0146621617726789.

Abstract:
This study provides a comprehensive evaluation of the effects of differential item functioning (DIF) on proficiency classification. Using Monte Carlo simulation, item- and test-level DIF magnitudes were varied systematically to investigate their impact on proficiency classification at multiple decision points. Findings from this study clearly show that the presence of DIF affects proficiency classification not by lowering the overall correct classification rates but by affecting classification error rates differently for reference and focal group members. The study also reveals that multiple items with low levels of DIF can be particularly problematic. They can do similar damage to proficiency classification as high-level DIF items with the same cumulative magnitudes but are much harder to detect with current DIF and differential bundle functioning (DBF) techniques. Finally, how DIF affects proficiency classification errors at multiple cut scores is fully described and discussed.
9

Rudas, Tamás, and Rebecca Zwick. "Estimating the Importance of Differential Item Functioning." Journal of Educational and Behavioral Statistics 22, no. 1 (March 1997): 31–45. http://dx.doi.org/10.3102/10769986022001031.

Abstract:
Several methods have been proposed to detect differential item functioning (DIF), an item response pattern in which members of different demographic groups have different conditional probabilities of answering a test item correctly, given the same level of ability. In this article, the mixture index of fit, proposed by Rudas, Clogg, and Lindsay (1994), is used to estimate the fraction of the population for which DIF occurs, and this approach is compared to the Mantel-Haenszel (Mantel & Haenszel, 1959) test of DIF developed by Holland (1985; see Holland & Thayer, 1988). The proposed estimation procedure, which is noniterative, can provide information about which portions of the item response data appear to be contributing to DIF.
10

Sudaryono, Sudaryono. "Sensitivitas Metode Pendeteksian Differential Item Functioning (DIF)." Jurnal Evaluasi Pendidikan 3, no. 1 (May 9, 2017): 82. http://dx.doi.org/10.21009/jep.031.07.

Abstract:
The main aim of this study was to obtain empirical data on the comparative sensitivity of the Scheuneman Chi-square method, the Mantel-Haenszel method, and Rasch-model item response theory in detecting Differential Item Functioning (DIF). The study used an experimental method with a 1 x 3 design; the independent variables were the Scheuneman Chi-square method, the Mantel-Haenszel method, and Rasch-model item response theory. Specifically, the study aimed to reveal: (1) the characteristics of test items based on classical test theory, (2) the standard error of measurement based on classical test theory, and (3) the detection of items indicated to contain Differential Item Functioning (DIF) based on gender differences. The data analysis was based on the responses of examinees who took the National Mathematics Examination at senior high schools in Tangerang in the 2008/2009 academic year. The data source was the computer answer sheets of 5,000 students, consisting of 2,500 male and 2,500 female students drawn using simple random sampling. Descriptive analysis using classical test theory showed that 28 of the 40 mathematics test items were acceptable, with a reliability index of 0.827. The results showed that all of the methods used to detect DIF performed reasonably well, but Rasch-model item response theory was the most sensitive, compared with the Mantel-Haenszel and Scheuneman Chi-square methods.

Dissertations / Theses on the topic "Differential Item Functioning (DIF)"

1

Lee, Yoonsun. "The impact of a multidimensional item on differential item functioning (DIF)." Thesis, 2004. http://hdl.handle.net/1773/7920.

2

Yildirim, Huseyin Husnu. "The Differential Item Functioning (DIF) Analysis of Mathematics Items in the International Assessment Programs." PhD thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607135/index.pdf.

Abstract:
Cross-cultural studies, like TIMSS and PISA 2003, have been conducted since the 1960s with the idea that these assessments can provide a broad perspective for evaluating and improving education. In addition, countries can assess their relative positions in mathematics achievement among their competitors in the global world. However, because of the different cultural and language settings of different countries, these international tests may not be functioning as expected across all the countries. Thus, tests may not be equivalent, or fair, linguistically and culturally across the participating countries. In this context, the present study aimed at assessing the equivalence of mathematics items of TIMSS 1999 and PISA 2003 across cultures and languages, to find out if mathematics achievement possesses any culture-specific aspects. For this purpose, the present study assessed Turkish and English versions of TIMSS 1999 and PISA 2003 mathematics items with respect to (a) psychometric characteristics of items, and (b) possible sources of Differential Item Functioning (DIF) between these two versions. The study used Restricted Factor Analysis, Mantel-Haenszel Statistics, and Item Response Theory Likelihood Ratio methodologies to determine DIF items. The results revealed that there were adaptation problems in both the TIMSS and PISA studies. However, it was still possible to determine a subtest of items functioning fairly between cultures, to form a basis for a cross-cultural comparison. In PISA, there was a high rate of agreement among the DIF methodologies used. However, in TIMSS, the agreement rate decreased considerably, possibly because the rate of differentially functioning items within TIMSS was higher, and differential guessing and differential discrimination were also issues in the test. The study also revealed that items requiring competencies of reproduction of practiced knowledge, knowledge of facts, performance of routine procedures, and application of technical skills were less likely to be biased against Turkish students with respect to American students at the same ability level. On the other hand, items requiring students to communicate mathematically, items where various results must be compared, and items that had real-world context were less likely to be in favor of Turkish students.
3

Lees, Jared Andrew. "Differential Item Functioning Analysis of the Herrmann Brain Dominance Instrument." Diss., 2007. http://contentdm.lib.byu.edu/ETD/image/etd2103.pdf.

4

Duncan, Susan Cromwell. "Improving the prediction of differential item functioning: a comparison of the use of an effect size for logistic regression DIF and Mantel-Haenszel DIF methods." Diss., Texas A&M University, 2003. http://hdl.handle.net/1969.1/5876.

Abstract:
Psychometricians and test developers use DIF analysis to determine if there is possible bias in a given test item. This study examines the conditions under which two predominant methods for determining differential item functioning compare with each other in item bias detection, using an effect size statistic as the basis for comparison. The main focus of the present research was to test whether incorporating an effect size for LR DIF will more accurately detect DIF and to compare the utility of an effect size index across MH DIF and LR DIF methods. A simulation study was used to compare the accuracy of MH DIF and LR DIF methods using a p value alone or supplemented with an effect size. Effect sizes were found to increase the accuracy of DIF detection across varying ability distributions, population distributions, and sample size combinations. Varying ability distributions and sample size combinations affected the detection of DIF, while population distributions did not seem to affect the detection of DIF.
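The logistic regression (LR) DIF procedure compared in this study regresses the item response on a matching score and group membership; the full model adds a score-by-group interaction to capture nonuniform DIF. Below is a minimal sketch of the uniform-DIF model only, with an invented toy data set and a plain gradient-descent fit standing in for standard statistical software:

```python
import math

def fit_logistic(X, y, lr=0.1, steps=4000):
    """Plain batch gradient descent for logistic regression; returns weights."""
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))     # predicted P(correct)
            for j, xj in enumerate(xi):
                grad[j] += (p - yi) * xj
        w = [wj - lr * gj / n for wj, gj in zip(w, grad)]
    return w

# Toy data with uniform DIF: at every matching score, focal examinees (g = 1)
# need a higher score before they tend to answer the studied item correctly.
X, y = [], []
for score in range(1, 6):
    for g in (0, 1):
        correct = 1 if score >= (2 if g == 0 else 4) else 0
        for _ in range(10):                    # ten examinees per cell
            X.append([1.0, float(score), float(g)])
            y.append(correct)

b0, b_score, b_group = fit_logistic(X, y)
# b_score > 0: higher-scoring examinees succeed more often;
# b_group < 0: at a fixed score, focal examinees have lower odds -> uniform DIF
```

In the LR DIF literature, the significance of the group (and interaction) terms is what flags an item, and the effect-size supplement this dissertation studies addresses how large those terms must be to matter in practice.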
5

Clark, Patrick Carl Jr. "An Examination of Type I Errors and Power for Two Differential Item Functioning Indices." Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1284475420.

6

Swander, Carl Joseph. "Assessing the Differential Functioning of Items and Tests of a Polytomous Employee Attitude Survey." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/9863.

Abstract:
Dimensions of a polytomous employee attitude survey were examined for the presence of differential item functioning (DIF) and differential test functioning (DTF) utilizing Raju, van der Linden, and Fleer's (1995) differential functioning of items and tests (DFIT) framework. Comparisons were made between managers and non-managers on the 'Management' dimension and between medical staff and nurse staff employees on both the 'Management' and 'Quality of Care and Service' dimensions. Two out of 21 items from the manager/non-manager comparison were found to have significant DIF, supporting the generalizability of Lynch, Barnes-Farrell, and Kulikowich (1998). No items from the medical staff/nurse staff comparisons were found to have DIF. The DTF results indicated that in two out of the three comparisons, one item could be removed to create dimensions free from DTF. Based on the current findings, implications and future research are discussed.
7

Conoley, Colleen Adele. "Differential item functioning in the Peabody Picture Vocabulary Test - Third Edition: partial correlation versus expert judgment." Diss., Texas A&M University, 2003. http://hdl.handle.net/1969.1/151.

Abstract:
This study had three purposes: (1) to identify differential item functioning (DIF) on the PPVT-III (Forms A & B) using a partial correlation method, (2) to find a consistent pattern in items identified as underestimating ability in each ethnic minority group, and (3) to compare findings from an expert judgment method and a partial correlation method. Hispanic, African American, and white subjects for the study were provided by American Guidance Service (AGS) from the standardization sample of the PPVT-III; English language learners (ELL) of Mexican descent were recruited from school districts in Central and South Texas. Content raters were all self-selected volunteers, each had advanced degrees, a career in education, and no special expertise of ELL or ethnic minorities. Two groups of teachers participated as judges for this study. The "expert" group was selected because of their special knowledge of ELL students of Mexican descent. The control group was all regular education teachers with limited exposure to ELL. Using the partial correlation method, DIF was detected within each group comparison. In all cases except with the ELL on form A of the PPVT-III, there were no significant differences in numbers of items found to have significant positive correlations versus significant negative correlations. On form A, the ELL group comparison indicated more items with negative correlation than positive correlation [χ2 (1) = 5.538; p=.019]. Among the items flagged as underestimating ability of the ELL group, no consistent trend could be detected. Also, it was found that none of the expert judges could adequately predict those items that would underestimate ability for the ELL group, despite expertise. Discussion includes possible consequences of item placement and recommendations regarding further research and use of the PPVT-III.
8

Asil, Mustafa. "Differential item functioning (DIF) analysis of the verbal section of the 2003 student selection examination (SSE)." The Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=osu1399553097.

9

Kim, Jihye. "Controlling Type 1 Error Rate in Evaluating Differential Item Functioning for Four DIF Methods: Use of Three Procedures for Adjustment of Multiple Item Testing." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/eps_diss/67.

Abstract:
In DIF studies, a Type I error refers to the mistake of identifying non-DIF items as DIF items, and a Type I error rate refers to the proportion of Type I errors in a simulation study. The possibility of making a Type I error in DIF studies is always present and high possibility of making such an error can weaken the validity of the assessment. Therefore, the quality of a test assessment is related to a Type I error rate and to how to control such a rate. Current DIF studies regarding a Type I error rate have found that the latter rate can be affected by several factors, such as test length, sample size, test group size, group mean difference, group standard deviation difference, and an underlying model. This study focused on another undiscovered factor that may affect a Type I error rate; the effect of multiple testing. DIF analysis conducts multiple significance testing of items in a test, and such multiple testing may increase the possibility of making a Type I error at least once. The main goal of this dissertation was to investigate how to control a Type I error rate using adjustment procedures for multiple testing which have been widely used in applied statistics but rarely used in DIF studies. In the simulation study, four DIF methods were performed under a total of 36 testing conditions; the methods were the Mantel-Haenszel method, the logistic regression procedure, the Differential Functioning Item and Test framework, and the Lord’s chi-square test. Then the Bonferroni correction, the Holm’s procedure, and the BH method were applied as an adjustment of multiple significance testing. The results of this study showed the effectiveness of three adjustment procedures in controlling a Type I error rate.
10

Zhao, Jing. "Contextual Differential Item Functioning: Examining the Validity of Teaching Self-Efficacy Instruments Using Hierarchical Generalized Linear Modeling." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339551861.


Books on the topic "Differential Item Functioning (DIF)"

1

Spray, Judith A. Performance of three conditional DIF statistics in detecting differential item functioning on simulated tests. Iowa City, Iowa: American College Testing Program, 1989.

2

Everson, Howard T., and Steven J. Osterlind, eds. Differential item functioning. 2nd ed. Thousand Oaks: SAGE, 2009.

3

Osterlind, Steven, and Howard Everson. Differential Item Functioning. Thousand Oaks, CA: SAGE Publications, Inc., 2009. http://dx.doi.org/10.4135/9781412993913.

4

Hamilton, Laura S. Exploring differential item functioning on science achievement tests. Los Angeles, CA: Center for the Study of Evaluation, National Center for Research on Evaluation, Standards, and Student Testing, Graduate School of Education & Information Studies, University of California, Los Angeles, 1998.

5

Schnipke, Deborah L. A comparison of Mantel-Haenszel differential item functioning parameters. Newtown, PA: Law School Admission Council, 2000.

6

McLeod, Lori Davis. A polynomial logistic regression approach to graphical differential item functioning. Newtown, PA: Law School Admission Council, 2006.

7

Le, Vi-Nhuan. Identifying differential item functioning on the NELS:88 history achievement test. Los Angeles, CA: Center for the Study of Evaluation, National Center for Research on Evaluation, Standards, and Student Testing, Graduate School of Education & Information Studies, University of California, Los Angeles, 1999.

8

Roussos, Louis A. A generalized formula for the Mantel-Haenszel differential item functioning parameter. Newtown, PA: Law School Admission Council, 1998.

9

Roussos, Louis A. A formulation of the Mantel-Haenszel differential item functioning parameter with practical implications. Newtown, PA: Law School Admission Council, 2000.

10

Lawrence, Ida M. Differential item functioning for males and females on SAT-verbal reading subscore items. New York: College Entrance Examination Board, 1988.


Book chapters on the topic "Differential Item Functioning (DIF)"

1

Chen, Wen-Hung, and Dennis Revicki. "Differential Item Functioning (DIF)." In Encyclopedia of Quality of Life and Well-Being Research, 1611–14. Dordrecht: Springer Netherlands, 2014. http://dx.doi.org/10.1007/978-94-007-0753-5_728.

2

Ong, Mei Ling, Seock-Ho Kim, Allan Cohen, and Stephen Cramer. "A Comparison of Differential Item Functioning (DIF) Detection for Dichotomously Scored Items Using IRTPRO, BILOG-MG, and IRTLRDIF." In Quantitative Psychology Research, 121–32. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-19977-1_10.

3

Wetzel, Eunike, and Jan R. Böhnke. "Differential Item Functioning." In Encyclopedia of Personality and Individual Differences, 1121–26. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-319-24612-3_1297.

4

Wetzel, Eunike, and Jan R. Böhnke. "Differential Item Functioning." In Encyclopedia of Personality and Individual Differences, 1–5. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-28099-8_1297-1.

5

Boone, William J., John R. Staver, and Melissa S. Yale. "Differential Item Functioning." In Rasch Analysis in the Human Sciences, 273–97. Dordrecht: Springer Netherlands, 2013. http://dx.doi.org/10.1007/978-94-007-6857-4_13.

6

Thissen, David. "Similar DIFs: Differential Item Functioning and Factorial Invariance for Scales with Seven (“Plus or Minus Two”) Response Alternatives." In Springer Proceedings in Mathematics & Statistics, 81–91. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56294-0_8.

7

Kok, Frank. "9. Differential item functioning." In The Construct of Language Proficiency, 115. Amsterdam: John Benjamins Publishing Company, 1992. http://dx.doi.org/10.1075/z.62.13kok.

8

Drasgow, Fritz, Christopher D. Nye, Stephen Stark, and Oleksandr S. Chernyshenko. "Differential Item and Test Functioning." In The Wiley Handbook of Psychometric Testing, 885–99. Chichester, UK: John Wiley & Sons, Ltd, 2018. http://dx.doi.org/10.1002/9781118489772.ch27.

9

Desjardins, Christopher D., and Okan Bulut. "Measurement Invariance and Differential Item Functioning." In Handbook of Educational Measurement and Psychometrics Using R, 249–75. Boca Raton, FL: Chapman and Hall/CRC, 2018. http://dx.doi.org/10.1201/b20498-11.

10

Glas, Cees A. W. "Differential Item Functioning Depending on General Covariates." In Essays on Item Response Theory, 131–48. New York, NY: Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4613-0169-1_7.


Conference papers on the topic "Differential Item Functioning (DIF)"

1

Davidson, Matt J., Brett Wortzman, Amy J. Ko, and Min Li. "Investigating Item Bias in a CS1 Exam with Differential Item Functioning." In SIGCSE '21: The 52nd ACM Technical Symposium on Computer Science Education. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3408877.3432397.

2

Hadi, Samsul, Basukiyatno Basukiyatno, and Purwo Susongko. "Differential Item Functioning National Examination on Device Test Mathematics High School in Central Java." In Proceedings of the 1st International Conference on Social Science, Humanities, Education and Society Development, ICONS 2020, 30 November, Tegal, Indonesia. EAI, 2021. http://dx.doi.org/10.4108/eai.30-11-2020.2303726.

3

Sadak, Musa. "THE PERFORMANCE OF TURKISH AND US STUDENTS ON PISA 2012 MATHEMATICS ITEMS: A DIFFERENTIAL ITEM FUNCTIONING ANALYSIS." In International Technology, Education and Development Conference. IATED, 2016. http://dx.doi.org/10.21125/iceri.2016.1746.

4. Lan, Tian, Zhongxuan Lin, and Tour Liu. "Differential Item Functioning Analysis for Repeated Measures Design Social Survey Data: A Method for Detecting Social Demands Difference in Big-Data Era." In 2018 5th International Conference on Behavioral, Economic, and Socio-Cultural Computing (BESC). IEEE, 2018. http://dx.doi.org/10.1109/besc.2018.8697289.

5. Panthong, Aweeporn, and Pairatana Wongnam. "A Comparison of the Efficiency of Likelihood Ratio Test and Bayesian Procedures in the Detection of Differential Item Functioning for Polytomous Scored Items." In Annual International Conference on Cognitive and Behavioral Psychology. Global Science & Technology Forum (GSTF), 2015. http://dx.doi.org/10.5176/2251-1865_cbp15.58.

6. Altman, Brianna, Maha Mian, Luna Ueno, and Mitch Earleywine. "Cannabis’s Link to Schizotypy: Phenomenon, Measurement Bias, or Delusion?" In 2020 Virtual Scientific Meeting of the Research Society on Marijuana. Research Society on Marijuana, 2021. http://dx.doi.org/10.26828/cannabis.2021.01.000.5.

Abstract:
Links between cannabis use and psychosis continue to generate research and media attention. Cannabis users have outscored non-users on the Schizotypal Personality Questionnaire-Brief (SPQ-B) by a small amount in multiple studies, but previous work on biased items suggests that the groups do not differ if these items are removed. The present study examined links between schizotypal personality, as measured by the SPQ-B, and cannabis use in a large sample recruited from Amazon’s MTurk platform. Over 500 participants (72.5%) reported lifetime cannabis exposure; 259 participants (36.7%) reported current cannabis use and, on average, used cannabis 3.5 days per week. Users and non-users failed to differ significantly on total SPQ-B scores or any of the three established subscales, with effect sizes all below d = 0.20. The null results inspired a re-examination of the SPQ-B’s factor structure, which identified a novel 3-factor solution (difficulty opening up to others, hyperawareness, and odd or unusual behavior). Only the “odd or unusual behavior” factor showed cannabis-related differences (g = 0.234), but a differential item functioning test revealed that one item on that subscale showed potential bias against users. Removing this item dropped the group difference to a non-significant g = 0.149. These results suggest that links between schizotypy and cannabis require cautious interpretation with careful attention to potential measurement bias. In addition, the Schizotypal Personality Questionnaire-Brief might have an alternative factor structure that could help answer important questions in psychopathology.
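A "differential item functioning test" of the kind mentioned in the abstract above is, in many such studies, a Mantel-Haenszel analysis: responses to the studied item are cross-tabulated by group within strata of matched ability, and the pooled odds ratio is tested against 1. The following is a rough, hypothetical sketch on simulated data (not the study's actual analysis; the variable names and the 0.6-logit DIF effect are illustrative assumptions), using the real `StratifiedTable` class from statsmodels:

```python
# Minimal Mantel-Haenszel DIF check on simulated binary item responses.
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)            # 0 = reference, 1 = focal (hypothetical groups)
ability = rng.normal(0.0, 1.0, n)

# Studied item: inject uniform DIF, making it 0.6 logits harder for the focal group.
p_correct = 1.0 / (1.0 + np.exp(-(ability - 0.6 * group)))
item = (rng.random(n) < p_correct).astype(int)

# Matching variable: coarse ability strata (a proxy for the usual rest-score matching).
strata = np.digitize(ability, [-1.0, 0.0, 1.0])

# One 2x2 table (rows: reference/focal, columns: correct/incorrect) per stratum.
tables = []
for s in np.unique(strata):
    m = strata == s
    tables.append(np.array([
        [np.sum((group[m] == 0) & (item[m] == 1)), np.sum((group[m] == 0) & (item[m] == 0))],
        [np.sum((group[m] == 1) & (item[m] == 1)), np.sum((group[m] == 1) & (item[m] == 0))],
    ]))

st = StratifiedTable(tables)
result = st.test_null_odds(correction=True)   # Cochran-Mantel-Haenszel chi-square test
print(st.oddsratio_pooled, result.pvalue)
```

With the injected disadvantage for the focal group, the pooled odds ratio should come out well above 1 and the CMH test should flag the item; on DIF-free data the pooled odds ratio stays near 1.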

Reports on the topic "Differential Item Functioning (DIF)"

1. Truhon, Stephen A. Comparing Two Versions of the MEOCS Using Differential Item Functioning. Fort Belvoir, VA: Defense Technical Information Center, May 2003. http://dx.doi.org/10.21236/ada417193.

2. Welsh, John R., and Thomas R. Androlewicz Jr. Armed Services Vocational Aptitude Battery (ASVAB): Analyses of Differential Item Functioning on Forms 15, 16 and 17. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada227183.
