Academic literature on the topic 'Multiple choice questions (MCQs)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multiple choice questions (MCQs).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Multiple choice questions (MCQs)"

1

Iqbal, Muhammad Zafar, Shumaila Irum, and Muhammad Sohaib Yousaf. "MULTIPLE CHOICE QUESTIONS;." Professional Medical Journal 24, no. 09 (2017): 1409–14. http://dx.doi.org/10.29309/tpmj/2017.24.09.824.

Abstract:
Objectives: The main objective of this study was to judge the quality of MCQs in terms of their cognition level and item-writing flaws, as developed by the faculty of a public-sector medical college. Setting: This study was conducted in Sheikh Zayed Medical College, Rahim Yar Khan. Duration with Dates: Data were collected between June 2014 and March 2015, and the study was completed in July 2016. Sample Size: A sample of 500 MCQs collected from 25 faculty members was included in the study. Study Design: Quantitative method. Study Type: Cross-sectional descriptive analysis. Material and Methods: This quantitative study was conducted in Sheikh Zayed Medical College, Rahim Yar Khan, over a six-month period after approval of the study proposal. Every faculty member is expected to write 25 MCQs in order to become a supervisor. The author collected 500 multiple choice questions, ready for submission to CPSP, from 25 faculty members. The quality of all MCQs was checked for item-writing flaws and cognition level by a panel of experts. Results: Absolute terms were observed in 10 (2%), vague terms in 15 (3%), implausible distractors in 75 (15%), extra detail in the correct option in 15 (3%), an unfocused stem in 63 (12.6%), grammatical clues in 39 (7.8%), logical clues in 18 (3.6%), word repeats in 19 (3.8%), more than one correct answer in 21 (4.2%), unnecessary information in the stem in 37 (7.4%), lost sequence in data in 15 (3%), 'all of the above' in 16 (3.2%), 'none of the above' in 12 (2.4%), and a negative stem in 23 (4.6%). Cognition level I (recall) was observed in 363 (72.6%), level II (interpretation) in 115 (23%), and level III (problem solving) in 22 (4.4%) items. In total, 378 (75.6%) flaws were identified; the four commonest were implausible distractors 75 (15%), unfocused stem 63 (12.6%), grammatical clues 39 (7.8%), and unnecessary information in the stem 37 (7.4%). Conclusion: Assessment of medical students is demanding, and a well-constructed, peer-reviewed single-best-answer MCQ is well suited to the task because of its cost effectiveness, better reliability, and computerized marking. It is very important to start faculty development programs in order to decrease the number of item-writing flaws and to shift the cognition level towards problem solving and application of knowledge.
2

Jia, Bing, Dan He, and Zhemin Zhu. "QUALITY AND FEATURE OF MULTIPLE-CHOICE QUESTIONS IN EDUCATION." Problems of Education in the 21st Century 78, no. 4 (2020): 576–94. http://dx.doi.org/10.33225/pec/20.78.576.

Abstract:
The quality of multiple-choice questions (MCQs), as well as students' answering behavior on them, is an educational concern. MCQs cover wide educational content and can be immediately and accurately scored. However, many studies have found flawed items in this exam type, possibly resulting in misleading insights into students' performance and affecting important decisions. This research sought to determine the characteristics of MCQs and the factors that may affect their quality by using item response theory (IRT) to evaluate data. For this, four samples of different sizes from the US and China, spanning secondary and higher education, were chosen. Item difficulty and discrimination were determined using IRT statistical item analysis models. Results were as follows. First, MCQ exams involve only a little guessing behavior, because all data fit the two-parameter logistic model better than the three-parameter logistic model. Second, the quality of MCQs depended more on the degree of training of the examiners than on whether the level was middle or higher education. Lastly, MCQs must be evaluated to ensure that high-quality items can be used as bases of inference in middle and higher education.
Keywords: higher education, item evaluation, item response theory, multiple-choice test, secondary education
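As background for the 2PL/3PL comparison in this abstract, here is a minimal sketch of the three-parameter logistic item response function; setting the guessing parameter c to zero recovers the 2PL. The parameter values are illustrative, not taken from the study.

```python
import numpy as np

def irt_3pl(theta, a, b, c=0.0):
    """Probability of a correct response under the 3PL model;
    with c = 0 this reduces to the 2PL model compared in the study."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Illustrative hard item: moderate discrimination, with a guessing floor
for theta in (-1.0, 0.0, 1.0, 2.0):
    print(theta, round(irt_3pl(theta, a=1.2, b=0.8, c=0.2), 3))
```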
3

Stringer, J. K., Sally A. Santen, Eun Lee, et al. "Examining Bloom’s Taxonomy in Multiple Choice Questions: Students’ Approach to Questions." Medical Science Educator 31, no. 4 (2021): 1311–17. http://dx.doi.org/10.1007/s40670-021-01305-y.

Abstract:
Background: Analytic thinking skills are important to the development of physicians. Therefore, educators and licensing boards utilize multiple-choice questions (MCQs) to assess this knowledge and these skills. MCQs are written under two assumptions: that they can be written as higher or lower order according to Bloom's taxonomy, and that students will perceive questions to be at the same taxonomical level as intended. This study seeks to understand students' approach to questions by analyzing differences in students' perception of the Bloom's level of MCQs in relation to their knowledge and confidence.
Methods: A total of 137 students responded to practice endocrine MCQs. Participants indicated the answer to the question, their interpretation of it as higher or lower order, and the degree of confidence in their response.
Results: Although there was no significant association between students' average performance on the content and their question classification (higher or lower), individual students who were less confident in their answer were more than five times as likely (OR = 5.49) to identify a question as higher order than their more confident peers. Students who responded incorrectly to the MCQ were four times as likely to identify a question as higher order than their peers who responded correctly.
Conclusions: The results suggest that higher performing, more confident students rely on identifying patterns (even if the question was intended to be higher order). In contrast, less confident students engage in higher-order, analytic thinking even if the question is intended to be lower order. A better understanding of the processes through which students interpret MCQs will help us to better understand the development of clinical reasoning skills.
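For readers unfamiliar with the odds ratios quoted above (OR = 5.49, and about 4), a quick reminder of how the statistic is formed; the proportions below are illustrative only, not data from the study:

```latex
\mathrm{OR} = \frac{p_1/(1-p_1)}{p_2/(1-p_2)},
\qquad \text{e.g. } p_1 = 0.55,\; p_2 = 0.18
\;\Rightarrow\; \mathrm{OR} = \frac{0.55/0.45}{0.18/0.82} \approx 5.6,
```

where p_1 and p_2 are the proportions of questions classified as higher order by the two groups being compared.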
4

Salam, Abdus, Rabeya Yousuf, and Sheikh Muhammad Abu Bakar. "Multiple Choice Questions in Medical Education: How to Construct High Quality Questions." International Journal of Human and Health Sciences (IJHHS) 4, no. 2 (2020): 79. http://dx.doi.org/10.31344/ijhhs.v4i2.180.

Abstract:
Multiple choice questions (MCQs) are the most widely used objective test items. Students often learn what we assess, not what we teach, although teaching and assessment are two sides of the same coin. Assessment in medical education is therefore very important to ensure that qualified, competent doctors are produced. A good test assesses higher levels of thinking skills, yet many in-house MCQs are found to be faulty and assess only lower levels of thinking. The main problems in constructing good MCQs are that (i) very few faculty members have formal training in question construction, (ii) most questions are prepared at the last minute, leaving little time for vetting to review their quality, and (iii) there is a lack of agreed standards for the question format and an underestimation of the use of blueprints in medical schools. In constructing good MCQs, emphasis should be given to ensuring that the stem is meaningful, presents a definite problem, contains only relevant material, and avoids negative phrasing. All options should be plausible, clear and concise, mutually exclusive, logical in order, and free from clues, and 'all of the above' and 'none of the above' should be avoided. A well-constructed MCQ can test any higher level of the cognitive domain. Efforts must be made to prepare and use a test blueprint as a guide to constructing good MCQs. This paper offers medical teachers a window to a comprehensive understanding of the different types and aspects of MCQs and of how to construct a test blueprint and good MCQs that test higher-order thinking skills in future medical graduates, thereby ensuring that competent doctors are produced.
5

Walsh, Kieran. "Advice on writing multiple choice questions (MCQs)." BMJ 330, no. 7483 (2005): s25.2—s27. http://dx.doi.org/10.1136/bmj.330.7483.s25-a.

6

Ibrahim, Azza Fathi. "Development of multiple choice questions' instructional plan for the nursing educators." Journal of Nursing Education and Practice 9, no. 9 (2019): 12. http://dx.doi.org/10.5430/jnep.v9n9p12.

Abstract:
Multiple-choice questions (MCQs) have been a commonly employed test format in the healthcare sciences for many years. They are an efficient means of formative and summative evaluation of students in nursing education, and if MCQs are designed competently they provide a valid assessment of nursing students. The present study aimed to evaluate the construction quality of MCQs and their common item flaws in core nursing subjects at a Faculty of Nursing, and then to develop an instructional plan for MCQ construction (one-best-answer format) to guide nursing educators. The study used exploratory-descriptive and methodological research designs at the Faculty of Nursing, Alexandria University, Egypt. Two samples were selected: first, 253 MCQs drawn from the different final exams (2017-2018) of the twelve core nursing subjects; second, 21 academic nursing educators who evaluated the suggested instructional plan. Both samples were chosen by convenience sampling. The MCQs Assessment Form (MCQAF) was the first tool, used to assess the construction quality of the MCQs and their item flaws. The second tool was the MCQs' Instructional Plan Evaluation Sheet (MCQ IPES), which measured the content and face validity of the suggested instructional plan. The results revealed that 45.5% (115 MCQs) of the study sample contained the ten item flaws. As regards construction quality, the majority of nursing subject exams had mean scores around a satisfactory level. After the developed plan was submitted to the expert group, almost the entire group found it accurate, with sound information, and considered it a useful and valued educational resource with appropriate vocabulary, sentence structure, grammar, and concepts. Likewise, the experts reported that the developed plan is clear enough to be used by nursing educators and is an attractive and interesting self-reference tool. Conclusion and recommendation: less than one half of the study MCQs had the ten item flaws, and almost all obtained satisfactory mean scores for construction quality. The developed instructional plan is a beneficial first step for nursing educators and serves as an instructional means of guiding MCQ construction. Further studies applying the developed plan among nursing educators and investigating nursing educators' awareness of and compliance with MCQ construction rules are needed.
7

McKenna, Peter. "Multiple choice questions: answering correctly and knowing the answer." Interactive Technology and Smart Education 16, no. 1 (2019): 59–73. http://dx.doi.org/10.1108/itse-09-2018-0071.

Abstract:
Purpose: This paper aims to examine whether multiple choice questions (MCQs) can be answered correctly without knowing the answer and whether constructed response questions (CRQs) offer more reliable assessment.
Design/methodology/approach: The paper presents a critical review of existing research on MCQs, then reports on an experimental study where two objective tests (using MCQs and CRQs) were set for an introductory undergraduate course. To maximise completion, tests were kept short; consequently, differences between individuals' scores across both tests are examined rather than overall averages and pass rates.
Findings: Most students who excelled in the MCQ test did not do so in the CRQ test. Students could do well without necessarily understanding the principles being tested.
Research limitations/implications: Conclusions are limited by the small number of questions in each test and by delivery of the tests at different times. This meant that statistical average data would be too coarse to use, and that some students took one test but not the other. Conclusions concerning CRQs are limited to disciplines where numerical answers or short and constrained text answers are appropriate.
Practical implications: MCQs, while useful in formative assessment, are best avoided for summative assessments. Where appropriate, CRQs should be used instead.
Social implications: MCQs are commonplace as summative assessments in education and training. Increasing the use of CRQs in place of MCQs should increase the reliability of tests, including those administered in safety-critical areas.
Originality/value: While others have recommended that MCQs should not be used (Hinchliffe 2014; Srivastava et al., 2004) because they are vulnerable to guessing, this paper presents an experimental study designed to demonstrate whether this hypothesis is correct.
8

Tenzin, Karma, Thinley Dorji, and Tashi Tenzin. "Construction of Multiple Choice Questions Before and After An Educational Intervention." Journal of Nepal Medical Association 56, no. 205 (2017): 112–16. http://dx.doi.org/10.31729/jnma.2976.

Abstract:
Introduction: Khesar Gyalpo University of Medical Sciences of Bhutan, established in 2014, has ushered in a new era in medical education in Bhutan. Multiple choice questions are a common means of written assessment in medical education.
Methods: This was a quasi-experimental study conducted at the Faculty of Postgraduate Medicine, KGUMSB, Thimphu, in December 2016. A total of 8 MCQs were prepared by four teaching faculty members from different fields who had no prior training in the construction of MCQs. The set was delivered to a group of 16 randomly selected intern doctors. A two-hour workshop on the construction of MCQs was then conducted, after which the same MCQs were modified according to standard guidelines on developing MCQs and tested on the same group of intern doctors. Performance, difficulty factor, discrimination index and distractor effectiveness were analysed for the two sets of MCQs using Microsoft Excel and SPSS 20.0.
Results: For the pre- and post-workshop questions respectively, the pass percentage was 69.8% (11) and 81.3% (13), the difficulty factor was 0.51 and 0.53, the discrimination index was 0.59 and 0.47, and distractor effectiveness was 83.3% and 74.9%.
Conclusions: The workshop on MCQ development appeared highly valuable and effective in improving medical educators' development of MCQs.
Keywords: difficulty factor; discrimination index; faculty development; medical education.
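For context on the statistics reported above, a minimal sketch of the standard classical-test-theory computations (generic formulas; the study's exact grouping thresholds are not stated in the abstract):

```python
import numpy as np

def item_stats(item_scores, group_frac=0.27):
    """Classical item analysis for a single MCQ.

    item_scores: 0/1 scores on this item, one per examinee, sorted by
    total test score from highest to lowest.
    Returns (difficulty_factor, discrimination_index).
    """
    s = np.asarray(item_scores)
    k = max(1, int(round(group_frac * len(s))))
    difficulty = s.mean()                          # proportion answering correctly
    discrimination = s[:k].mean() - s[-k:].mean()  # top group minus bottom group
    return difficulty, discrimination

# 16 examinees, matching the study's intern cohort size (scores invented)
print(item_stats([1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0]))
```

Distractor effectiveness is then commonly reported as the percentage of distractors chosen by at least 5% of examinees, a conventional threshold for a 'functioning' distractor.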
9

Budiyono, Bartholomeus. "Five-Option vs Four-Option Multiple-Choice Questions." IJET (Indonesian Journal of English Teaching) 8, no. 2 (2019): 1–7. http://dx.doi.org/10.15642/ijet2.2019.8.2.1-7.

Abstract:
Multiple-choice questions (MCQs) may provide test takers with three, four, or five options and are appreciated for reliability and economical scoring. Five-option MCQs demand much more energy, experience, time, and expertise, and may be considered more difficult than four-option and three-option MCQs. Previous studies involved a great number of questions and participants. This study investigated the difference between five-option and four-option MCQs through deletion of non-functioning distractors (NFDs), in proportion to a classroom-based test, by administering 28 MCQs to two intact classes of 34 participants. The results show a significant difference in participants' scores (p = 0.030 < 0.05), a significant difference in the number of NFDs (p = 0.01 < 0.05), no significant difference in item facility (p = 0.485 > 0.05), and a significant difference in item discrimination (p = 0.01 < 0.05). Classroom teachers are free to choose either the five-option or four-option version, depending on the purpose of the test.
Keywords: five-option, four-option, non-functioning distractor
10

Amo-Salas, Mariano, María del Mar Arroyo-Jimenez, David Bustos-Escribano, Eva Fairén-Jiménez, and Jesús López-Fidalgo. "New Indices for Refining Multiple Choice Questions." Journal of Probability and Statistics 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/240263.

Abstract:
Multiple choice questions (MCQs) are one of the most popular tools to evaluate learning and knowledge in higher education. Nowadays, there are a few indices to measure reliability and validity of these questions, for instance, to check the difficulty of a particular question (item) or the ability to discriminate from less to more knowledge. In this work two new indices have been constructed: (i) the no answer index measures the relationship between the number of errors and the number of no answers; (ii) the homogeneity index measures homogeneity of the wrong responses (distractors). The indices are based on the lack-of-fit statistic, whose distribution is approximated by a chi-square distribution for a large number of errors. An algorithm combining several traditional and new indices has been developed to refine continuously a database of MCQs. The final objective of this work is the classification of MCQs from a large database of items in order to produce an automated-supervised system of generating tests with specific characteristics, such as more or less difficulty or capacity of discriminating knowledge of the topic.
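The homogeneity idea can be made concrete with a standard chi-square form over wrong-answer counts (a generic sketch; the paper's exact lack-of-fit statistic may differ in detail):

```latex
X^{2} = \sum_{j=1}^{k} \frac{(n_{j} - E)^{2}}{E},
\qquad E = \frac{1}{k} \sum_{j=1}^{k} n_{j},
```

where n_j is the number of examinees choosing distractor j among k distractors. If the distractors attract errors homogeneously, X^2 is approximately chi-square distributed with k - 1 degrees of freedom once the total number of errors is large, matching the asymptotic approximation described in the abstract.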

Dissertations / Theses on the topic "Multiple choice questions (MCQs)"

1

Luger, Sarah Kaitlin Kelly. "Algorithms for assessing the quality and difficulty of multiple choice exam questions." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20986.

Abstract:
Multiple Choice Questions (MCQs) have long been the backbone of standardized testing in academia and industry. Correspondingly, there is a constant need for the authors of MCQs to write and refine new questions for new versions of standardized tests, as well as to support measuring performance in the emerging massive open online courses (MOOCs). Research that explores what makes a question difficult, or which questions distinguish higher-performing students from lower-performing students, can aid in the creation of the next generation of teaching and evaluation tools. In the automated MCQ answering component of this thesis, algorithms query for definitions of scientific terms, process the returned web results, and compare the returned definitions to the original definition in the MCQ. This automated method for answering questions is then augmented with a model, based on human performance data from crowdsourced question sets, for analysis of question difficulty as well as the discrimination power of the non-answer alternatives. The crowdsourced question sets come from PeerWise, an open-source online college-level question authoring and answering environment. The goal of this research is to create an automated method to both answer and assess the difficulty of multiple choice inverse-definition questions in the domain of introductory biology. The results of this work suggest that human-authored question banks provide useful data for building gold-standard human performance models. The methodology for building these performance models has value in other domains that test the difficulty of questions and the quality of the exam takers.
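A toy sketch of the comparison step described above, using simple token overlap between the MCQ's definition and a definition retrieved for each alternative (the thesis's processing of web results is considerably more sophisticated; all names here are ours):

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two definition strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def answer_inverse_definition(stem_definition: str, retrieved: dict) -> str:
    """Pick the alternative whose retrieved definition best matches the stem.

    retrieved: maps each alternative term to a definition string, e.g.
    gathered from web search results (the retrieval step is omitted here).
    """
    return max(retrieved, key=lambda t: jaccard(stem_definition, retrieved[t]))
```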
2

Alsubait, Tahani. "Ontology-based multiple-choice question generation." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/ontologybased-multiplechoice-question-generation(07bf2890-6f41-4a11-8189-02d5bb08e686).html.

Abstract:
Assessment is a well-understood educational topic with a long history and a wealth of literature. Given this level of understanding, educational practitioners are able to differentiate, for example, between valid and invalid assessments. Despite the fact that we can test for the validity of an assessment, knowing how to systematically generate a valid assessment is still challenging and needs to be understood. In this thesis we introduce a similarity-based method to generate a specific type of question, namely multiple choice questions, and to control their difficulty. This form of question is widely used, especially in contexts where automatic grading is a necessity. The generation of MCQs is more challenging than generating open-ended questions because their construction includes the generation of a set of answers. These answers all need to be plausible, otherwise the validity of the question is questionable. Our proposed generation method is applicable to both manual and automatic generation. We show how to implement it by utilising ontologies, for which we also develop similarity measures. Those measures are simply functions which compute the similarity, i.e., degree of resemblance, between two concepts based on how they are described in a given ontology. We show that it is possible to control the difficulty of an MCQ by varying the degree of similarity between its answers. The thesis and its contributions can be summarised in a few points. Firstly, we provide literature reviews for the two main pillars of the thesis, namely question generation and similarity measures. Secondly, we propose a method to automatically generate MCQs from ontologies and control their difficulty. Thirdly, we introduce a new family of similarity measures. Fourthly, we provide a protocol to evaluate a set of automatically generated assessment questions; the evaluation takes into account experts' reviews and students' performance. Finally, we introduce an automatic approach which makes it possible to evaluate a large number of assessment questions by simulating a student trying to answer them.
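To make the thesis's central idea concrete, a toy sketch of similarity-controlled distractor selection; the similarity table is invented and stands in for the ontology-based measures the thesis develops:

```python
def pick_distractors(key, candidates, similarity, target, n=3):
    """Choose the n candidates whose similarity to the key answer is
    closest to a target level; higher targets should yield harder MCQs."""
    ranked = sorted(candidates, key=lambda c: abs(similarity(key, c) - target))
    return ranked[:n]

# Invented similarity scores standing in for an ontology-based measure
sim = {("lion", "tiger"): 0.9, ("lion", "wolf"): 0.6, ("lion", "trout"): 0.2}
similarity = lambda a, b: sim.get((a, b), sim.get((b, a), 0.0))

print(pick_distractors("lion", ["tiger", "wolf", "trout"], similarity, target=0.8, n=2))
```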
3

Thayn, Kim Scott. "An Evaluation of Multiple Choice Test Questions Deliberately Designed to Include Multiple Correct Answers." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2450.

Abstract:
The multiple-choice test question is a popular item format used for tests ranging from classroom assessments to professional licensure exams. The popularity of this format stems from its administration and scoring efficiencies. The most common multiple-choice format consists of a stem that presents a problem to be solved, accompanied by a single correct answer and two, three, or four incorrect answers. A well-constructed item using this format can result in a high-quality assessment of an examinee's knowledge, skills and abilities. However, for some complex, higher-order knowledge, skills and abilities, a single correct answer is often insufficient. Test developers tend to avoid using multiple correct answers out of a concern about the increased difficulty and lower discrimination of such items. However, by avoiding the use of multiple correct answers, test constructors may inadvertently create validity concerns resulting from incomplete content coverage and construct-irrelevant variance. This study explored an alternative way of implementing multiple-choice questions with two or more correct answers by specifying in each question the number of answers examinees should select, instead of using the traditional guideline to select all that apply. This study investigated the performance of three operational exams that use a standard multiple-choice format where the examinees are told how many answers they are to select. The collective statistical performance of multiple-choice items that include more than one answer keyed as correct was compared with the performance of traditional single-answer multiple-choice (SA) items within each exam. The results indicate that the multiple-answer multiple-choice (MA) items evaluated from these three exams performed at least as well as the single-answer questions within the same exams.
4

Brits, Gideon Petrus. "University student performance in multiple choice questions : an item analysis of Mathematics assessments." Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/65477.

Abstract:
The University of Pretoria has experienced a significant increase in student numbers in recent years. This increase has necessarily impacted on the Department of Mathematics and Applied Mathematics. The department is understaffed in terms of lecturing staff, which impacts negatively on postgraduate study and research outputs. The disproportion between teaching staff and the lecturing load and research demands has led to an excessive grading and administrative load on staff. The department decided to use multiple choice questions in assessments that could be graded by means of computer software. The responses of the multiple choice questions are captured on optical reader forms that are processed centrally. Multiple choice questions are combined with constructed response questions (written questions) in semester tests and end-of-term examinations. The quality of the multiple choice questions has never before been determined. This research project asks the research question: How do the multiple choice questions in mathematics, as posed to first-year engineering students at the University of Pretoria, comply with the principles of good assessment for determining quality? A quantitative secondary analysis is performed on data that was sourced from the first-year engineering calculus module WTW 158 for the years 2015, 2016 and 2017. The study shows that, in most cases, the questions are commendable with well-balanced indices of discrimination and difficulty including well-chosen functional distractors. The item analysis included determining the cognitive level of each multiple choice question. The problematic questions are highlighted and possible recommendations are made to improve or revise such questions for future usage.
5

King, Stephen. ""None of the Above" as an Answer Option in Observation Based Multiple-Choice Questions." TopSCHOLAR®, 2006. http://digitalcommons.wku.edu/theses/288.

Abstract:
This study examined the characteristics of items using none of the above (NOTA) as an answer option in observation-based multiple-choice questions. Previous research has examined the use of a NOTA option only in academic knowledge-based testing, not in visual recognition testing. Item difficulty and discrimination were examined for three different item formats: (a) items without a NOTA option, (b) items with NOTA as a distractor, and (c) items with NOTA as the correct answer. The questions were based on two photographs with similar content. A total of 98 participants from a large southeastern university completed a visual recognition test containing all three item types. Results revealed no difference in item discrimination between items without a NOTA option and items with a NOTA option, but did indicate that items with a NOTA option were more difficult. A discussion of the results, limitations, and suggestions for future research is provided.
6

Lee, Jia-Ying. "Second language reading topic familiarity and test score: test-taking strategies for multiple-choice comprehension questions." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/2737.

Abstract:
The main purpose of this study was to compare the strategies used by Chinese- speaking students when confronted with familiar versus unfamiliar topics in a multiple-choice format reading comprehension test. The focus was on describing what students do when they are taking reading comprehension tests by asking students to verbalize their thoughts. The strategies were further compared with participants' level of familiarity with different reading topics and their reading scores. Twenty Chinese-speaking participants at the University of Iowa performed three tasks: a topical knowledge vocabulary assessment that served as an indicator of each participant's topical knowledge about the four selected content areas in this study (law, business, language teaching, and engineering); two Test of English as a Foreign Language (TOEFL) internet-based test (iBT) practice reading comprehension passages, one with a familiar topic and the other with an unfamiliar topic, and both with retrospective think-aloud protocols; and an interview related to participants' test-taking strategies. Two stages of analysis, qualitative and quantitative, were undertaken in this study. For the qualitative analysis, all verbal reports provided by participants in the think-aloud protocols and the interviews were recorded and transcribed. Six categories of strategies emerged: general approaches to reading the passages, identification of important information by the discourse structure of the passages, vocabulary/sentence-in-context approaches, multiple-choice test-management strategies, test-wiseness, and background knowledge. For the quantitative analysis, an analysis of variance (ANOVA) with repeated measures was completed to determine if there were significant differences based on the frequency of strategy use and level of topic familiarity. The results showed that the types of test-taking strategies adopted by Chinese-speaking graduate students remained similar when they read passages with familiar versus unfamiliar topics. However, participants all reported feeling more relief and more confidence when reading passages related to their background knowledge. The second ANOVA employed a split-plot statistical design to examine whether there were significant differences based on participants' strategy use and their reading scores as measured by the iBT reading comprehension tests. High scorers employed strategies in categories one, two, three, and four significantly more frequently than low scorers. However, low scorers adopted significantly more strategies in category five than high scorers. In category six, high and low scorers seemed to use a similar number of strategies. Findings that emerged from the two perspectives are discussed; implications related to test-taking and reading pedagogy are provided in the conclusion.
7

Liao, Jui-Teng. "Multiple-choice and short-answer questions in language assessment: the interplay between item format and second language reading." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6178.

Abstract:
Multiple-choice (MCQs) and short-answer questions (SAQs) are the most common test formats for assessing English reading proficiency. While the former provides test-takers with prescribed options, the latter requires short written responses. Test developers favor MCQs over SAQs for the following reasons: less time required for rating, high rater agreement, and wide content coverage. This mixed methods dissertation investigated the impacts of test format on reading performance, metacognitive awareness, test-completion processes, and task perceptions. Participants were eighty English as a second language (ESL) learners from a Midwestern community college. They were first divided into two groups of approximately equivalent reading proficiencies and then completed MCQ and SAQ English reading tests in different orders. After completing each format, participants filled out a survey about demographic information, strategy use, and perceptions of test formats. They also completed a 5-point Likert-scale survey to assess their degree of metacognitive awareness. At the end, sixteen participants were randomly chosen to engage in retrospective interviews focusing on their strategy use and task perceptions. This study employed a mixed methods approach in which quantitative and qualitative strands converged to draw an overall meta-inference. For the quantitative strand, descriptive statistics, paired sample t-tests, item analyses, two-way ANOVAs, and correlation analyses were conducted to investigate 1) the differences between MCQ and SAQ test performance and 2) the relationship between test performance and metacognitive awareness. For the qualitative strand, test-takers’ MCQ and SAQ test completion processes and task perceptions were explored using coded interview and survey responses related to strategy use and perceptions of test formats. Results showed that participants performed differently on MCQ and SAQ reading tests, even though both tests were highly correlated. The paired sample t-tests revealed that participants’ English reading and writing proficiencies might account for the MCQ and SAQ performance disparity. Moreover, there was no positive relationship between reading test performance and the degree of metacognitive awareness generated by the frequency of strategy use. Correlation analyses suggested whether a higher or lower English reading proficiency of the participants was more important than strategy use. Although the frequency of strategy use did not benefit test performance, strategies implemented for MCQ and SAQ tests were found to generate interactive processes allowing participants to gain deeper understanding of the source texts. Furthermore, participants’ perceptions toward MCQs, SAQs, and a combination of both revealed positive and negative influences among test format, reading comprehension, and language learning. Therefore, participants’ preferences of test format should be considered when measuring their English reading proficiency. This study has pedagogical implications on the use of various test formats in L2 reading classrooms.
8

Oellermann, Susan Wilma, and Alexander Dawid van der Merwe. "Can Using Online Formative Assessment Boost the Academic Performance of Business Students? An Empirical Study." Kamla-Raj, 2015. http://hdl.handle.net/10321/1571.

Abstract:
The declining quality of first year student intake at the Durban University of Technology (DUT) prompted the addition of online learning to traditional instruction. The time spent by students in an online classroom and their scores in subsequent multiple-choice question (MCQ) tests were measured. Tests on standardised regression coefficients showed self-test time as a significant predictor of summative MCQ performance while controlling for ability. Exam MCQ performance was found to be associated, positively and significantly, with annual self-test time at the 5 percent level and a significant relationship was found between MCQ marks and year marks. It was concluded that students’ use of the self-test tool in formative assessments has a significant bearing on students’ year marks and final grades. The negative nature of the standardised beta coefficient for gender indicates that, when year marks and annual self-test time are considered, males appear to have performed slightly better than females.
9

Standifer, Scott. "The influence on learning of short-essay and multiple-choice adjunct questions in a World Wide Web environment /." free to MU campus, to others for purchase, 2000. http://wwwlib.umi.com/cr/mo/fullcit?p9974687.

10

Neupane, Ramesh. "A QUANTITATIVE STUDY EXAMINING THE RELATIONSHIP BETWEEN LEARNING PREFERENCES AND STANDARDIZED MULTIPLE CHOICE ACHIEVEMENT TEST PERFORMANCE OF NURSE AIDE STUDENTS." OpenSIUC, 2019. https://opensiuc.lib.siu.edu/dissertations/1663.

Abstract:
The research purpose was to investigate the differences between learning preferences (i.e., Active-Reflective, Sensing-Intuitive, Visual-Verbal, and Sequential-Global), determined by the Index of Learning Styles, and gender (i.e., male and female) with regard to standardized multiple-choice achievement test performance, determined by the Illinois Nurse Aide Competency Examination (INACE): overall INACE performance and INACE performance in six duty areas (i.e., communicating information, performing basic nursing skills, performing personal care, performing basic restorative skills, providing mental-health services, and providing for residents' rights) of nurse aide students. The study explored the relationship between variables using a non-experimental, comparative and descriptive approach. The participants were nurse aide students who had completed the Illinois-approved Basic Nurse Aide Training (BNAT) and the 21 mandated skills assessments and were ready to take the INACE in October 2018 and December 2018 at various community colleges across the state of Illinois. A sample of 800 nurse aide students was selected through stratified (north, central, and south) random sampling, of whom N = 472 participated in the study, representing the actual sample.

Books on the topic "Multiple choice questions (MCQs)"

1

Physics: Structured questions and multiple choice. Philip Allan Publishers Ltd, 1998.

2

Turner, A. J., and Edward J. Wood, eds. Multiple choice questions in biochemistry. Pitman, 1985.

3

Crozier, Ann. Multiple choice questions in radiodiagnosis. Churchill Livingstone, 1986.

4

Pegington, John. Multiple choice questions in anatomy. Edward Arnold, 1989.

5

McKeon, Patrick, and Kieran Power, eds. Multiple choice questions in psychiatry. Pitman, 1985.

6

Lipsedge, Maurice, ed. Multiple choice questions in psychiatry. Arnold, 1998.

7

Coleman, R. Multiple choice questions in histology. Churchill Livingstone, 1987.

8

Hassall, H. Multiple choice questions in biochemistry. Churchill Livingstone, 1987.

9

Oliver, Holmes, ed. Multiple choice questions in physiology. Churchill Livingstone, 1986.


Book chapters on the topic "Multiple choice questions (MCQs)"

1

Cottrell, Stella. "Multiple choice question exams (MCQs)." In The Exam Skills Handbook. Macmillan Education UK, 2012. http://dx.doi.org/10.1007/978-1-137-01356-9_13.

2

Gwinnett, Claire. "The Design and Implementation of Multiple Choice Questions (MCQs) in Forensic Science Assessment." In Forensic Science Education and Training. John Wiley & Sons, Ltd, 2017. http://dx.doi.org/10.1002/9781118689196.ch17.

3

Leclercq, D., E. Boxus, P. de Brogniez, H. Wuidar, and F. Lambert. "The TASTE Approach: General Implicit Solutions in Multiple Choice Questions (MCQs), Open Books Exams and Interactive Testing." In Item Banking: Interactive Testing and Self-Assessment. Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/978-3-642-58033-8_17.

4

Black, Dennis D., Eugene B. Chang, Po Sing Leung, and Michael D. Sitrin. "Multiple Choice Questions." In The Gastrointestinal System. Springer Netherlands, 2014. http://dx.doi.org/10.1007/978-94-017-8771-0_13.

5

Elsheikha, H. M., and X. Q. Zhu. "Multiple choice questions." In 555 Questions in veterinary and tropical parasitology. CABI, 2019. http://dx.doi.org/10.1079/9781789242348.0001.

6

Stevens, P. "Multiple Choice Questions." In Work Out Accounting GCSE. Macmillan Education UK, 1987. http://dx.doi.org/10.1007/978-1-349-09460-8_25.

7

Kulkarni, Lalita. "Multiple Choice Questions (MCQs)." In Anatomy Simplified. Jaypee Brothers Medical Publishers (P) Ltd., 2015. http://dx.doi.org/10.5005/jp/books/12400_7.

8

Barrett, Tristan, Nadeem Shaida, Ashley Shaw, and Adrian K. Dixon. "Multiple choice questions (MCQs)." In Radiology for Undergraduate Finals and Foundation Years. CRC Press, 2018. http://dx.doi.org/10.4324/9781315375854-5.

9

"Multiple Choice Questions." In MCQs & Short Answer Questions for MRCOG. CRC Press, 2004. http://dx.doi.org/10.1201/b13305-2.

10

"Multiple Choice Questions." In MCQs & Short Answer Questions for MRCOG. CRC Press, 2004. http://dx.doi.org/10.1201/b13305-5.


Conference papers on the topic "Multiple choice questions (MCQs)"

1

Hameed, Ibrahim A. "A Fuzzy System to Automatically Evaluate and Improve Fairness of Multiple-Choice Questions (MCQs) based Exams." In 8th International Conference on Computer Supported Education. SCITEPRESS - Science and Technology Publications, 2016. http://dx.doi.org/10.5220/0005897204760481.

2

Slattery, Robyn Maree. "Objective versus subjective methods to assess discipline-specific knowledge: a case for Extended Matching Questions (EMQs)." In Third International Conference on Higher Education Advances. Universitat Politècnica València, 2017. http://dx.doi.org/10.4995/head17.2017.5473.

Abstract:
Background: Extended matching questions (EMQs) were introduced as an objective assessment tool into third-year immunology undergraduate units at Monash University, Australia.
Aim: The performance of students examined objectively by multiple choice questions (MCQs) was compared to their performance assessed by EMQs; there was a high correlation coefficient between the two methods. EMQs were then introduced, and the correlation of student performance between related units was measured as a function of the percentage of objective assessment. The correlation of student performance between units increased proportionally with objective assessment. Student performance in tasks assessed objectively and subjectively was then compared. The findings indicate that marker bias contributes to the poor correlation between marks awarded objectively and subjectively.
Conclusion: EMQs are a valid method of objectively assessing students, and their increased inclusion in the assessment process increases the consistency of student marks. The subjective assessment of science communication skills introduces marker bias, indicating a need to identify, validate and implement more objective methods for their assessment.
Keywords: Extended matching question (EMQ); Objective assessment (OA); Short answer (SA); Marker bias; Discipline-specific assessment; Science communication assessment
3

Farthing, Dave W., Dave M. Jones, and Duncan McPhee. "Permutational multiple-choice questions." In the 6th annual conference on the teaching of computing and the 3rd annual conference. ACM Press, 1998. http://dx.doi.org/10.1145/282991.283036.

4

Petersen, Andrew, Michelle Craig, and Paul Denny. "Employing Multiple-Answer Multiple Choice Questions." In ITiCSE '16: Innovation and Technology in Computer Science Education Conference 2016. ACM, 2016. http://dx.doi.org/10.1145/2899415.2925503.

5

Hegde, Anusha, Nayanika Ghosh, and Viraj Kumar. "Multiple Choice Questions with Justifications." In 2014 IEEE Sixth International Conference on Technology for Education (T4E). IEEE, 2014. http://dx.doi.org/10.1109/t4e.2014.30.

6

Smetanová, Dana. "MULTIPLE-CHOICE QUESTIONS IN MATHEMATICS." In 10th International Conference on Education and New Learning Technologies. IATED, 2018. http://dx.doi.org/10.21125/edulearn.2018.0608.

7

Welbl, Johannes, Nelson F. Liu, and Matt Gardner. "Crowdsourcing Multiple Choice Science Questions." In Proceedings of the 3rd Workshop on Noisy User-generated Text. Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/w17-4413.

8

Lin, Shih-Yin, Chandralekha Singh, N. Sanjay Rebello, and Paula V. Engelhardt. "Can multiple-choice questions simulate free-response questions?" In 2011 Physics Education Research Conference. AIP, 2012. http://dx.doi.org/10.1063/1.3679990.

9

Kao, Yvonne S. "Alternatives to Simple Multiple-Choice Questions." In SIGCSE '18: The 49th ACM Technical Symposium on Computer Science Education. ACM, 2018. http://dx.doi.org/10.1145/3159450.3162301.

10

Lambert, Nicolas, and Yoav Shoham. "Eliciting truthful answers to multiple-choice questions." In the tenth ACM conference. ACM Press, 2009. http://dx.doi.org/10.1145/1566374.1566391.


Reports on the topic "Multiple choice questions (MCQs)"

1

Diehl, Grover E., and Robert Doucette. Why Have Four-Option Multiple Choice Questions? Defense Technical Information Center, 1998. http://dx.doi.org/10.21236/ada362211.

2

Hamill, Daniel D., Jeremy J. Giovando, Chandler S. Engel, Travis A. Dahl, and Michael D. Bartles. Application of a Radiation-Derived Temperature Index Model to the Willow Creek Watershed in Idaho, USA. U.S. Army Engineer Research and Development Center, 2021. http://dx.doi.org/10.21079/11681/41360.

Abstract:
The ability to simulate snow accumulation and melting processes is fundamental to developing real-time hydrological models in watersheds with a snowmelt-dominated flow regime. A primary source of uncertainty with this model development approach is the subjectivity related to which historical periods to use and how to combine parameters from multiple calibration events. The Hydrologic Engineering Center's Hydrologic Modeling System has recently implemented a hybrid temperature index (TI) snow module that has not been extensively tested. This study evaluates a radiation-derived temperature index (RTI) model's performance relative to the traditional air-temperature index (TI) model. The TI model for Willow Creek performed reasonably well in both the calibration and validation years. The results of the RTI calibration and validation simulations raised additional questions about how best to parameterize this snow model. An RTI parameter sensitivity analysis indicates that the choice of calibration years will have a substantial impact on the parameters and thus the streamflow results. Based on the analysis completed in this study, further refinement and verification of the RTI model calculations are required before an objective comparison with the TI model can be completed.
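For orientation, the temperature index family of snow models discussed here rests, in its simplest textbook form, on a degree-day melt equation (the generic formulation, not the exact HEC-HMS hybrid module):

```latex
M = C_m \,\max(T_a - T_b,\, 0),
```

where M is the melt rate (mm/day), T_a the air temperature, T_b a base temperature, and C_m a melt-rate coefficient (mm/°C/day); radiation-derived variants augment or replace C_m with a term driven by net radiation.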