Academic literature on the topic 'Multiple-choice question answering'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multiple-choice question answering.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Multiple-choice question answering"

1

AlMahmoud, Tahra, Dybesh Regmi, Margaret Elzubeir, Frank Christopher Howarth, and Sami Shaban. "Medical student question answering behaviour during high-stakes multiple choice examinations." International Journal of Technology Enhanced Learning 11, no. 2 (2019): 157. http://dx.doi.org/10.1504/ijtel.2019.098777.

2

Shaban, Sami, Frank Christopher Howarth, Tahra AlMahmoud, Dybesh Regmi, and Margaret Elzubeir. "Medical student question answering behaviour during high-stakes multiple choice examinations." International Journal of Technology Enhanced Learning 11, no. 2 (2019): 157. http://dx.doi.org/10.1504/ijtel.2019.10018872.

3

Khot, Tushar, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. "QASC: A Dataset for Question Answering via Sentence Composition." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 8082–90. http://dx.doi.org/10.1609/aaai.v34i05.6319.

Abstract:
Composing knowledge from multiple pieces of texts is a key challenge in multi-hop question answering. We present a multi-hop reasoning dataset, Question Answering via Sentence Composition (QASC), that requires retrieving facts from a large corpus and composing them to answer a multiple-choice question. QASC is the first dataset to offer two desirable properties: (a) the facts to be composed are annotated in a large corpus, and (b) the decomposition into these facts is not evident from the question itself. The latter makes retrieval challenging as the system must introduce new concepts or relations in order to discover potential decompositions. Further, the reasoning model must then learn to identify valid compositions of these retrieved facts using common-sense reasoning. To help address these challenges, we provide annotation for supporting facts as well as their composition. Guided by these annotations, we present a two-step approach to mitigate the retrieval challenges. We use other multiple-choice datasets as additional training data to strengthen the reasoning model. Our proposed approach improves over current state-of-the-art language models by 11% (absolute). The reasoning and retrieval problems, however, remain unsolved as this model still lags by 20% behind human performance.
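
To make the sentence-composition idea concrete, here is a deliberately simplified two-hop retrieval sketch. It is not the QASC authors' system: the mini corpus, the word-overlap scoring, and the bridge-word bonus are all invented for illustration; real systems retrieve from a large annotated corpus and use trained retrievers and rerankers.

```python
# Toy two-hop retrieval for a multiple-choice question: score each option by
# the best pair of corpus facts where the first fact connects to the question,
# the two facts share a "bridge" word the question never mentions, and the
# second fact connects to the option. All data and weights are invented.
import re
from itertools import permutations

CORPUS = [
    "differential heating of air produces wind",
    "wind is used to generate electricity",
    "burning coal releases carbon dioxide",
]

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def two_hop_score(question, option):
    q, o = tokens(question), tokens(option)
    best = 0
    for f1, f2 in permutations(CORPUS, 2):
        bridge = (tokens(f1) & tokens(f2)) - q   # new concept introduced by hop 1
        score = len(q & tokens(f1)) + 2 * len(bridge) + len(o & tokens(f2))
        best = max(best, score)
    return best

question = "Differential heating of air can be used for producing what?"
options = ["electricity", "carbon dioxide", "rainfall"]
print(max(options, key=lambda o: two_hop_score(question, o)))  # -> electricity
```
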
4

Kim, Hyeondey, and Pascale Fung. "Learning to Classify the Wrong Answers for Multiple Choice Question Answering (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (2020): 13843–44. http://dx.doi.org/10.1609/aaai.v34i10.7194.

Abstract:
Multiple-Choice Question Answering (MCQA) is the most challenging area of Machine Reading Comprehension (MRC) and Question Answering (QA), since it not only requires natural language understanding, but also problem-solving techniques. We propose a novel method, Wrong Answer Ensemble (WAE), which can be applied to various MCQA tasks easily. To improve performance of MCQA tasks, humans intuitively exclude unlikely options to solve the MCQA problem. Mimicking this strategy, we train our model with the wrong answer loss and correct answer loss to generalize the features of our model, and exclude likely but wrong options. An experiment on a dialogue-based examination dataset shows the effectiveness of our approach. Our method improves the results on a fine-tuned transformer by 2.7%.
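
The abstract does not give the exact form of the two losses, so the following PyTorch sketch should be read as one plausible instantiation rather than the authors' method: the usual multiple-choice cross-entropy on the correct option is combined with a per-option binary loss that explicitly penalizes high scores for wrong options. The linear scorer, random features, and the 0.5 mixing weight are placeholders.

```python
# Hypothetical combination of a correct-answer loss and a wrong-answer loss
# for one 4-option question; the scorer and the option features are stand-ins.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_options, feat_dim = 4, 16
scorer = torch.nn.Linear(feat_dim, 1)          # scores one option encoding at a time

features = torch.randn(num_options, feat_dim)  # placeholder option encodings
gold = torch.tensor([2])                       # index of the correct option
logits = scorer(features).squeeze(-1)          # shape: (num_options,)

# Correct-answer loss: standard multiple-choice cross-entropy over the options.
correct_loss = F.cross_entropy(logits.unsqueeze(0), gold)

# Wrong-answer loss: each option becomes a binary decision, independently
# pushing the scores of the wrong options down and the correct one up.
targets = torch.zeros(num_options)
targets[gold] = 1.0
wrong_loss = F.binary_cross_entropy_with_logits(logits, targets)

loss = correct_loss + 0.5 * wrong_loss         # 0.5 is an arbitrary mixing weight
loss.backward()
print(float(correct_loss), float(wrong_loss))
```
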
5

Suwito, Abi, Ipung Yuwono, I. Nengah Parta, Santi Irawati, and Ervin Oktavianingtyas. "Solving Geometric Problems by Using Algebraic Representation for Junior High School Level 3 in Van Hiele at Geometric Thinking Level." International Education Studies 9, no. 10 (2016): 27. http://dx.doi.org/10.5539/ies.v9n10p27.

Abstract:
<p class="apa">This study aims to determine the ability of algebra students who have 3 levels van Hiele levels. Follow its framework Dindyal framework (2007). Students are required to do 10 algebra shaped multiple choice, then students work 15 about the geometry of the van Hiele level in the form of multiple choice questions. The question has been tested levels of validity and reliability. After learning abilities and levels van Hiele algebra, students were asked to answer two questions descriptions to determine the ability of students in answering the question of algebraic geometry punctuated by interviews. From this study illustrated that students who have achieved level 3 van Hiele able to properly solve problems of algebraic geometry in the content by utilizing the deduction reasoning thinking skills to build the structure geometry in an axiomatic system in solving the problems faced. Teachers play an important role in pushing the speed students through a higher level of thinking through the right exercises. Suggestions for further research can develop on different topics but still within the context of algebraic geometry.</p>
6

Clark, Peter, Oren Etzioni, Tushar Khot, et al. "From ‘F’ to ‘A’ on the N.Y. Regents Science Exams: An Overview of the Aristo Project." AI Magazine 41, no. 4 (2020): 39–53. http://dx.doi.org/10.1609/aimag.v41i4.5304.

Abstract:
AI has achieved remarkable mastery over games such as Chess, Go, and Poker, and even Jeopardy!, but the rich variety of standardized exams has remained a landmark challenge. Even as recently as 2016, the best AI system could achieve merely 59.3 percent on an 8th grade science exam. This article reports success on the Grade 8 New York Regents Science Exam, where for the first time a system scores more than 90 percent on the exam’s nondiagram, multiple choice (NDMC) questions. In addition, our Aristo system, building upon the success of recent language models, exceeded 83 percent on the corresponding Grade 12 Science Exam NDMC questions. The results, on unseen test questions, are robust across different test years and different variations of this kind of test. They demonstrate that modern natural language processing methods can result in mastery on this task. While not a full solution to general question-answering (the questions are limited to 8th grade multiple-choice science) it represents a significant milestone for the field.
7

Jin, Di, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. "What Disease Does This Patient Have? A Large-Scale Open Domain Question Answering Dataset from Medical Exams." Applied Sciences 11, no. 14 (2021): 6421. http://dx.doi.org/10.3390/app11146421.

Abstract:
Open domain question answering (OpenQA) tasks have been recently attracting more and more attention from the natural language processing (NLP) community. In this work, we present the first free-form multiple-choice OpenQA dataset for solving medical problems, MedQA, collected from the professional medical board exams. It covers three languages: English, simplified Chinese, and traditional Chinese, and contains 12,723, 34,251, and 14,123 questions for the three languages, respectively. We implement both rule-based and popular neural methods by sequentially combining a document retriever and a machine comprehension model. Through experiments, we find that even the current best method can only achieve 36.7%, 42.0%, and 70.1% of test accuracy on the English, traditional Chinese, and simplified Chinese questions, respectively. We expect MedQA to present great challenges to existing OpenQA systems and hope that it can serve as a platform to promote much stronger OpenQA models from the NLP community in the future.
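
As a rough illustration of the retriever-plus-reader pipeline mentioned in the abstract (not the MedQA baselines themselves), the sketch below uses a TF-IDF retriever and an overlap-style "reader". The passages, question, and scoring are invented, and scikit-learn is assumed to be available.

```python
# Minimal retriever + reader pipeline sketch for a medical multiple-choice
# question. Step 1 retrieves the most similar passage; step 2 scores each
# option against the question plus the retrieved evidence.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "Penicillin is a first-line antibiotic for streptococcal pharyngitis.",
    "Metformin is commonly used as initial therapy for type 2 diabetes.",
    "Ibuprofen is a nonsteroidal anti-inflammatory drug used for pain.",
]
question = "Which drug is typically used as initial therapy for type 2 diabetes?"
options = ["Penicillin", "Metformin", "Ibuprofen"]

vectorizer = TfidfVectorizer().fit(passages + [question] + options)

# Step 1: retrieve the passage most similar to the question.
p_vecs = vectorizer.transform(passages)
q_vec = vectorizer.transform([question])
evidence = passages[cosine_similarity(q_vec, p_vecs).argmax()]

# Step 2: the "reader" scores each option against question + evidence.
context_vec = vectorizer.transform([question + " " + evidence])
o_vecs = vectorizer.transform(options)
scores = cosine_similarity(context_vec, o_vecs)[0]
print(options[scores.argmax()])  # -> Metformin
```
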
8

Jansen, Peter, Rebecca Sharp, Mihai Surdeanu, and Peter Clark. "Framing QA as Building and Ranking Intersentence Answer Justifications." Computational Linguistics 43, no. 2 (2017): 407–49. http://dx.doi.org/10.1162/coli_a_00287.

Abstract:
We propose a question answering (QA) approach for standardized science exams that both identifies correct answers and produces compelling human-readable justifications for why those answers are correct. Our method first identifies the actual information needed in a question using psycholinguistic concreteness norms, then uses this information need to construct answer justifications by aggregating multiple sentences from different knowledge bases using syntactic and lexical information. We then jointly rank answers and their justifications using a reranking perceptron that treats justification quality as a latent variable. We evaluate our method on 1,000 multiple-choice questions from elementary school science exams, and empirically demonstrate that it performs better than several strong baselines, including neural network approaches. Our best configuration answers 44% of the questions correctly, where the top justifications for 57% of these correct answers contain a compelling human-readable justification that explains the inference required to arrive at the correct answer. We include a detailed characterization of the justification quality for both our method and a strong baseline, and show that information aggregation is key to addressing the information need in complex questions.
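
The following toy sketch conveys the flavour of ranking an answer together with a multi-sentence justification; it is not the authors' latent-variable reranker. Each candidate answer is paired with the two knowledge-base sentences whose combined words best cover the question and the answer, and the answer with the best-covered justification wins. The sentences and the coverage score are invented for illustration.

```python
# Score each candidate answer jointly with its best two-sentence justification,
# measured here simply as coverage of question + answer words.
import re
from itertools import combinations

KB = [
    "a thermometer is an instrument that measures temperature",
    "temperature is a measure of how hot or cold something is",
    "a barometer measures air pressure",
]

def words(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def best_justification(question, answer, max_sentences=2):
    target = words(question) | words(answer)
    best_score, best_just = -1, None
    for just in combinations(KB, max_sentences):
        covered = set().union(*map(words, just)) & target
        if len(covered) > best_score:
            best_score, best_just = len(covered), just
    return best_score, best_just

question = "Which instrument is used to measure how hot something is?"
answers = ["thermometer", "barometer", "ruler"]
scored = {a: best_justification(question, a) for a in answers}
best_answer = max(scored, key=lambda a: scored[a][0])
print(best_answer, scored[best_answer][1])  # -> thermometer plus its two sentences
```
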
9

Ardiansah, M. Masykuri, and S. B. Rahardjo. "Student certainty answering misconception question: study of Three-Tier Multiple-Choice Diagnostic Test in Acid-Base and Solubility Equilibrium." Journal of Physics: Conference Series 1006 (April 2018): 012018. http://dx.doi.org/10.1088/1742-6596/1006/1/012018.

10

Preddie, Martha Ingrid. "Clinician-selected Electronic Information Resources do not Guarantee Accuracy in Answering Primary Care Physicians' Information Needs." Evidence Based Library and Information Practice 3, no. 1 (2008): 78. http://dx.doi.org/10.18438/b8n011.

Abstract:
A review of: 
 McKibbon, K. Ann, and Douglas B. Fridsma. “Effectiveness of Clinician-selected Electronic Information Resources for Answering Primary Care Physicians’ Information Needs.” Journal of the American Medical Informatics Association 13.6 (2006): 653-9.
 
 Objective – To determine if electronic information resources selected by primary care physicians improve their ability to answer simulated clinical questions. 
 
 Design – An observational study utilizing hour-long interviews and think-aloud protocols. 
 
 Setting – The offices and clinics of primary care physicians in Canada and the United States. 
 
 Subjects – 25 primary care physicians of whom 4 were women, 17 were from Canada, 22 were family physicians, and 24 were board certified. 
 
 Methods – Participants provided responses to 23 multiple-choice questions. Each physician then chose two questions and looked for the answers utilizing information resources of their own choice. The search processes, chosen resources and search times were noted. These were analyzed along with data on the accuracy of the answers and certainties related to the answer to each clinical question prior to the search. 
 
 Main results – Twenty-three physicians sought answers to 46 simulated clinical questions. Utilizing only electronic information resources, physicians spent a mean of 13.0 (SD 5.5) minutes searching for answers to the questions, an average of 7.3 (SD 4.0) minutes for the first question and 5.8 (SD 2.2) minutes to answer the second question. On average, 1.8 resources were utilized per question. Resources that summarized information, such as the Cochrane Database of Systematic Reviews, UpToDate and Clinical Evidence, were favored 39.2% of the time, MEDLINE (Ovid and PubMed) 35.7%, and Internet resources including Google 22.6%. Almost 50% of the search and retrieval strategies were keyword-based, while MeSH, subheadings and limiting were used less frequently. On average, before searching physicians answered 10 of 23 (43.5%) questions accurately. For questions that were searched using clinician-selected electronic resources, 18 (39.1%) of the 46 answers were accurate before searching, while 19 (42.1%) were accurate after searching. The difference of one correct answer was due to the answers from 5 (10.9%) questions changing from correct to incorrect, while the answers to 6 questions (13.0%) changed from incorrect to correct. The ability to provide correct answers differed among the various resources. Google and Cochrane provided the correct answers about 50% of the time while PubMed, Ovid MEDLINE, UpToDate, Ovid Evidence Based Medicine Reviews and InfoPOEMs were more likely to be associated with incorrect answers. Physicians also seemed unable to determine when they needed to search for information in order to make an accurate decision. 
 
 Conclusion – Clinician-selected electronic information resources did not guarantee accuracy in the answers provided to simulated clinical questions. At times the use of these resources caused physicians to change self-determined correct answers to incorrect ones. The authors state that this was possibly due to factors such as poor choice of resources, ineffective search strategies, time constraints and automation bias. Library and information practitioners have an important role to play in identifying and advocating for appropriate information resources to be integrated into the electronic medical record systems provided by health care institutions to ensure evidence based health care delivery.

Dissertations / Theses on the topic "Multiple-choice question answering"

1

Luger, Sarah Kaitlin Kelly. "Algorithms for assessing the quality and difficulty of multiple choice exam questions." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20986.

Abstract:
Multiple Choice Questions (MCQs) have long been the backbone of standardized testing in academia and industry. Correspondingly, there is a constant need for the authors of MCQs to write and refine new questions for new versions of standardized tests as well as to support measuring performance in the emerging massive open online courses, (MOOCs). Research that explores what makes a question difficult, or what questions distinguish higher-performing students from lower-performing students can aid in the creation of the next generation of teaching and evaluation tools. In the automated MCQ answering component of this thesis, algorithms query for definitions of scientific terms, process the returned web results, and compare the returned definitions to the original definition in the MCQ. This automated method for answering questions is then augmented with a model, based on human performance data from crowdsourced question sets, for analysis of question difficulty as well as the discrimination power of the non-answer alternatives. The crowdsourced question sets come from PeerWise, an open source online college-level question authoring and answering environment. The goal of this research is to create an automated method to both answer and assesses the difficulty of multiple choice inverse definition questions in the domain of introductory biology. The results of this work suggest that human-authored question banks provide useful data for building gold standard human performance models. The methodology for building these performance models has value in other domains that test the difficulty of questions and the quality of the exam takers.
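
For the human-performance side of the thesis, classical item analysis gives a feel for what "difficulty" and "discrimination" mean. The sketch below uses the standard textbook formulas (proportion correct, and top-third minus bottom-third correct-rate) on invented response data; it is not the thesis's own performance model.

```python
# Classical item analysis on toy data: responses[s][i] is 1 if student s
# answered item i correctly.
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]

num_items = len(responses[0])
ranked = sorted(range(len(responses)), key=lambda s: sum(responses[s]))
third = max(1, len(responses) // 3)
bottom, top = ranked[:third], ranked[-third:]

for i in range(num_items):
    difficulty = sum(r[i] for r in responses) / len(responses)
    discrimination = (sum(responses[s][i] for s in top) / len(top)
                      - sum(responses[s][i] for s in bottom) / len(bottom))
    print(f"item {i}: difficulty={difficulty:.2f} discrimination={discrimination:+.2f}")
```
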
2

Silveira, Igor Cataneo. "Solving University entrance assessment using information retrieval." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-04112018-225438/.

Abstract:
Answering questions posed in natural language is a key task in Artificial Intelligence. However, producing a successful Question Answering (QA) system is challenging, since it requires text understanding, information retrieval, information extraction and text production. This task is made even harder by the difficulties in collecting reliable datasets and in evaluating techniques, two pivotal points for machine learning approaches. This has led many researchers to focus on Multiple-Choice Question Answering (MCQA), a special case of QA where systems must select the correct answers from a small set of alternatives. One particularly interesting type of MCQA is solving Standardized Tests, such as Foreign Language Proficiency exams, Elementary School Science exams and University Entrance exams. These exams provide easy-to-evaluate challenging multiple-choice questions of varying difficulties about large, but limited, domains. The Exame Nacional do Ensino Médio (ENEM) is a High School level exam taken every year by students all over Brazil. It is widely used by Brazilian universities as an entrance exam and is the world\'s second biggest university entrance examination in number of registered candidates. This exam consists in writing an essay and solving a multiple-choice test comprising questions on four major topics: Humanities, Language, Science and Mathematics. Questions inside each major topic are not segmented by standard scholar disciplines (e.g. Geography, Biology, etc.) and often require interdisciplinary reasoning. Moreover, the previous editions of the exam and their solutions are freely available online, making it a suitable benchmark for MCQA. In this work we automate solving the ENEM focusing, for simplicity, on purely textual questions that do not require mathematical thinking. We formulate the problem of answering multiple-choice questions as finding the candidate-answer most similar to the statement. We investigate two approaches for measuring textual similarity of candidate-answer and statement. The first approach addresses this as a Text Information Retrieval (IR) problem, that is, as a problem of finding in a database the most relevant document to a query. Our queries are made of statement plus candidate-answer and we use three different corpora as database: the first comprises plain-text articles extracted from a dump of the Wikipedia in Portuguese language; the second contains only the text given in the question\'s header and the third is composed by pairs of question and correct answer extracted from ENEM assessments. The second approach is based on Word Embedding (WE), a method to learn vectorial representation of words in a way such that semantically similar words have close vectors. WE is used in two manners: to augment IR\'s queries by adding related words to those on the query according to the WE model, and to create vectorial representations for statement and candidate-answers. Using these vectorial representations we answer questions either directly, by selecting the candidate-answer that maximizes the cosine similarity to the statement, or indirectly, by extracting features from the representations and then feeding them into a classifier that decides which alternative is the answer. Along with the two mentioned approaches we investigate how to enhance them using WordNet, a structured lexical database where words are connected according to some relations like synonymy and hypernymy. 
Finally, we combine different configurations of the two approaches and their WordNet variations by creating an ensemble of algorithms found by a greedy search. This ensemble chooses an answer by the majority voting of its components. The first approach achieved an average of 24% accuracy using the headers, 25% using the pairs database and 26.9% using Wikipedia. The second approach achieved 26.6% using WE indirectly and 28% directly. The ensemble achieved 29.3% accuracy. These results, slightly above random guessing (20%), suggest that these techniques can capture some of the necessary skills to solve standardized tests. However, more sophisticated techniques that perform text understanding and common sense reasoning might be required to achieve human-level performance.
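
A minimal sketch of the "direct" word-embedding strategy described above: average the word vectors of the statement and of each alternative and pick the alternative with the highest cosine similarity. The tiny hand-made vectors are stand-ins for the pretrained embeddings a real system would load; nothing here reproduces the thesis's actual setup.

```python
# Choose the alternative whose averaged word embedding is closest (by cosine
# similarity) to the averaged embedding of the statement. Toy 3-d vectors only.
import numpy as np

EMB = {  # hand-made "embeddings"; a real system would load pretrained vectors
    "deforestation": np.array([0.9, 0.1, 0.0]),
    "erosion":       np.array([0.8, 0.2, 0.1]),
    "soil":          np.array([0.7, 0.3, 0.0]),
    "inflation":     np.array([0.0, 0.9, 0.2]),
    "poetry":        np.array([0.1, 0.0, 0.9]),
}

def embed(text):
    vecs = [EMB[w] for w in text.lower().split() if w in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

statement = "Deforestation exposes soil"
alternatives = ["erosion increases", "inflation rises", "poetry changes"]
best = max(alternatives, key=lambda alt: cosine(embed(statement), embed(alt)))
print(best)  # -> "erosion increases"
```
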
3

Chi, Sheng (季昇). "A Research on Helping Students Learn Better By Answering Multiple-Choice Questions in a More Complete Way." Thesis, Chung Yuan Christian University, 2019. http://ndltd.ncl.edu.tw/handle/ftcy2e.

Abstract:
Master's thesis, Chung Yuan Christian University, Department of Information Engineering, academic year 107 (2018/2019). Traditional multiple-choice questions have long been used for exercises and exams, and students have become accustomed to a quick read-and-answer mode. This mode encourages bad habits such as skimming for keywords, which leads to incomplete understanding of the question, misreading words, and confusing the current question with concepts encountered before; details ignored during practice are then forgotten or misremembered in the official exam. How to induce students to read practice questions completely, and to deepen their thinking and understanding so that they build a complete knowledge structure, is therefore a topic worth studying. This study developed a game-based learning system and divided students into three groups: a general group, a confidence group, and an induction group (the experimental group). The general group answered traditional four-option multiple-choice questions; the confidence group additionally reported a confidence index for each question; the induction group had to rate their degree of confidence in each of the four options. The study explores which answering strategy better supports learning. The results show that the game-based learning system helped all three groups, with no significant differences in the post-test; in a retention test five weeks later, however, the induction group was significantly better than the confidence and general groups, indicating that its answering strategy helps students retain the knowledge structure. The induction group was also significantly better than the other two groups in reading questions completely and in clarity of understanding. The study further found that inducing students to read completely makes them spend more time answering, but gives them a clearer cognitive understanding.

Books on the topic "Multiple-choice question answering"

1

Monaghan, Nicola. 17. Study skills. Oxford University Press, 2018. http://dx.doi.org/10.1093/he/9780198811824.003.0017.

Abstract:
Without assuming prior legal knowledge, books in the Directions series introduce and guide readers through key points of law and legal debate. Questions, diagrams, and exercises help readers to engage fully with each subject and check their understanding as they progress. This chapter offers some guidance as to how to study effectively and how to approach assessments and exams. It discusses how to manage your time effectively and how to get the most out of criminal law lectures and seminars/workshops. It covers how to deal with multiple choice questions and provides advice on writing assignments (answering problem questions and essay questions is considered separately). It also provides advice on how to prepare for and approach exams.

Book chapters on the topic "Multiple-choice question answering"

1

Awadallah, Rawia, and Andreas Rauber. "Web-Based Multiple Choice Question Answering for English and Arabic Questions." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11735106_54.

2

Nicula, Bogdan, Stefan Ruseti, and Traian Rebedea. "Improving Deep Learning for Multiple Choice Question Answering with Candidate Contexts." In Lecture Notes in Computer Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-76941-7_62.

3

Martinez-Gil, Jorge, Bernhard Freudenthaler, and A. Min Tjoa. "Multiple Choice Question Answering in the Legal Domain Using Reinforced Co-occurrence." In Lecture Notes in Computer Science. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27615-7_10.

4

Martinez-Gil, Jorge, Bernhard Freudenthaler, and A. Min Tjoa. "A General Framework for Multiple Choice Question Answering Based on Mutual Information and Reinforced Co-occurrence." In Transactions on Large-Scale Data- and Knowledge-Centered Systems XLII. Springer Berlin Heidelberg, 2019. http://dx.doi.org/10.1007/978-3-662-60531-8_4.

5

Ding, Jiwei, Yuan Wang, Wei Hu, Linfeng Shi, and Yuzhong Qu. "Answering Multiple-Choice Questions in Geographical Gaokao with a Concept Graph." In The Semantic Web. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-93417-4_11.

6

Galitsky, Boris. "Natural Language Front-End for a Database." In Encyclopedia of Database Technologies and Applications. IGI Global, 2005. http://dx.doi.org/10.4018/978-1-59140-560-3.ch068.

Abstract:
Whatever knowledge a database contains, one of the essential questions in its design and usability is how its users will interact with it. If these users are human agents, the most ordinary way to query a database would be in the natural language (Gazdar, 1999; Popescu, Etzioni, & Kautz, 2003; Sabourin, 1994). Natural language question answering (NL Q/A), wherein questions are posed in a plain language, may be considered the most universal but not always the best (i.e., fastest) way to provide the information access to a database. One should be aware that approaches to data access, such as visualization, menus and multiple choice, FAQ lists, and so forth, have been successfully employed long before the NL Q/A systems came into play. In the following, I discuss situations in which a particular information access approach is optimal.
7

"NOTES ON ANSWERING THE MULTIPLE-CHOICE QUESTIONS." In A Synopsis of Rheumatic Diseases. Elsevier, 1989. http://dx.doi.org/10.1016/b978-0-7236-0850-9.50045-0.

8

Monaghan, Nicola. "17. Study skills." In Criminal Law Directions. Oxford University Press, 2020. http://dx.doi.org/10.1093/he/9780198848783.003.0017.

Abstract:
Without assuming prior legal knowledge, books in the Directions series introduce and guide readers through key points of law and legal debate. Questions, diagrams, and exercises help readers to engage fully with each subject and check their understanding as they progress. This chapter offers some guidance as to how to study effectively and how to approach assessments and exams. It discusses how to manage your time effectively and how to get the most out of criminal law lectures and seminars/workshops. It covers how to deal with multiple choice questions and provides advice on writing assignments (answering problem questions and essay questions is considered separately). It also provides advice on how to prepare for and approach exams.
9

Sorger, Bettina, Brigitte Dahmen, Joel Reithler, et al. "Another kind of ‘BOLD Response’: answering multiple-choice questions via online decoded single-trial brain signals." In Progress in Brain Research. Elsevier, 2009. http://dx.doi.org/10.1016/s0079-6123(09)17719-1.


Conference papers on the topic "Multiple-choice question answering"

1

Chaturvedi, Akshay, Onkar Pandit, and Utpal Garain. "CNN for Text-Based Multiple Choice Question Answering." In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/p18-2044.

2

Dalal, Dhairya, Mihael Arcan, and Paul Buitelaar. "Enhancing Multiple-Choice Question Answering with Causal Knowledge." In Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures. Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.deelio-1.8.

3

Luo, Shang-Bao, Hung-Shin Lee, Kuan-Yu Chen, and Hsin-Min Wang. "Spoken Multiple-Choice Question Answering Using Multimodal Convolutional Neural Networks." In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2019. http://dx.doi.org/10.1109/asru46091.2019.9003966.

4

Yan, Ming, Hao Zhang, Di Jin, and Joey Tianyi Zhou. "Multi-source Meta Transfer for Low Resource Multiple-Choice Question Answering." In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.acl-main.654.

5

Kuo, Chia-Chih, Shang-Bao Luo, and Kuan-Yu Chen. "An Audio-Enriched BERT-Based Framework for Spoken Multiple-Choice Question Answering." In Interspeech 2020. ISCA, 2020. http://dx.doi.org/10.21437/interspeech.2020-1763.

6

Chitta, Radha, and Alexander K. Hudek. "A Reliable and Accurate Multiple Choice Question Answering System for Due Diligence." In ICAIL '19: Seventeenth International Conference on Artificial Intelligence and Law. ACM, 2019. http://dx.doi.org/10.1145/3322640.3326711.

7

Jiang, Zhengping, and Qi Sun. "CSReader at SemEval-2018 Task 11: Multiple Choice Question Answering as Textual Entailment." In Proceedings of The 12th International Workshop on Semantic Evaluation. Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/s18-1176.

8

Akef, Soroosh, and Mohammad Hadi Bokaei. "Answering Poetic Verses' Thematic Similarity Multiple-Choice Questions with BERT." In 2020 28th Iranian Conference on Electrical Engineering (ICEE). IEEE, 2020. http://dx.doi.org/10.1109/icee50131.2020.9260736.

9

Ghosh, Sayantani, Shraman Pramanick, Anurag Bagchi, and Amit Konar. "Automatic Detection of Confidence Level of Examinees in Answering Multiple Choice Questions using P1000 Brain Signal." In 2019 International Conference on Wireless Communications Signal Processing and Networking (WiSPNET). IEEE, 2019. http://dx.doi.org/10.1109/wispnet45539.2019.9032803.
