To see the other types of publications on this topic, follow the link: The question of the task.

Journal articles on the topic 'The question of the task'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'The question of the task.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever one is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Rus, Vasile, Brendan Wyse, Paul Piwek, Mihai Lintean, Svetlana Stoyanchev, and Cristian Moldovan. "A Detailed Account of The First Question Generation Shared Task Evaluation Challenge." Dialogue & Discourse 3, no. 2 (2012): 177–204. http://dx.doi.org/10.5087/dad.2012.208.

Abstract:
The paper provides a detailed account of the First Shared Task Evaluation Challenge on Question Generation that took place in 2010. The campaign included two tasks that take text as input and produce text, i.e. questions, as output: Task A, “Question Generation from Paragraphs,” and Task B, “Question Generation from Sentences.” Motivation, data sets, evaluation criteria, guidelines for judges, and results are presented for the two tasks. Lessons learned and advice for future Question Generation Shared Task Evaluation Challenges (QG-STEC) are also offered.
2

Lei, Chenyi, Lei Wu, Dong Liu, et al. "Multi-Question Learning for Visual Question Answering." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 11328–35. http://dx.doi.org/10.1609/aaai.v34i07.6794.

Abstract:
Visual Question Answering (VQA) raises a great challenge for the computer vision and natural language processing communities. Most existing approaches consider video-question pairs individually during training. However, we observe that there are usually multiple (sequentially generated or not) questions for the target video in a VQA task, and the questions themselves have abundant semantic relations. To explore these relations, we propose a new paradigm for VQA termed Multi-Question Learning (MQL). Inspired by multi-task learning, MQL learns from multiple questions jointly, together with their corresponding answers, for a target video sequence. The learned representations of video-question pairs then generalize better and can be transferred to new questions. We further propose an effective VQA framework and design a training procedure for MQL, where a specifically designed attention network models the relation between the input video and the corresponding questions, enabling multiple video-question pairs to be co-trained. Experimental results on public datasets show the favorable performance of the proposed MQL-VQA framework compared to state-of-the-art methods.
3

Voorhees, Ellen M. "The TREC Question Answering Track." Natural Language Engineering 7, no. 4 (2001): 361–78. http://dx.doi.org/10.1017/s1351324901002789.

Abstract:
The Text REtrieval Conference (TREC) question answering track is an effort to bring the benefits of large-scale evaluation to bear on a question answering (QA) task. The track has run twice so far, first in TREC-8 and again in TREC-9. In each case, the goal was to retrieve small snippets of text that contain the actual answer to a question rather than the document lists traditionally returned by text retrieval systems. The best performing systems were able to answer about 70% of the questions in TREC-8 and about 65% of the questions in TREC-9. While the 65% score is a slightly worse result than the TREC-8 scores in absolute terms, it represents a very significant improvement in question answering systems. The TREC-9 task was considerably harder than the TREC-8 task because TREC-9 used actual users’ questions while TREC-8 used questions constructed for the track. Future tracks will continue to challenge the QA community with more difficult, and more realistic, question answering tasks.
4

Emerson, John, and Yllias Chali. "Transformer-Based Multi-Hop Question Generation (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (2023): 16206–7. http://dx.doi.org/10.1609/aaai.v37i13.26963.

Abstract:
Question generation is the parallel task of question answering: given an input context and, optionally, an answer, the goal is to generate a relevant and fluent natural language question. Although recent work on question generation has had success with sequence-to-sequence models, question generation models need to handle increasingly complex input contexts to produce increasingly detailed questions. Multi-hop question generation is a more challenging task that aims to generate questions by connecting multiple facts from multiple input contexts. In this work, we apply a transformer model to the task of multi-hop question generation without utilizing any sentence-level supporting fact information. We utilize concepts that have proven effective in single-hop question generation, including a copy mechanism and placeholder tokens. We evaluate our model's performance on the HotpotQA dataset using automated evaluation metrics, including BLEU, ROUGE, and METEOR, and show an improvement over previous work.
5

Filipi, Anna. "Interaction or Interrogation? A Study of Talk Occurring in a Sample of the 1992 VCE Italian Oral Common Assessment Task (CAT 2)." Australian Review of Applied Linguistics 21, no. 2 (1998): 123–37. http://dx.doi.org/10.1075/aral.21.2.07fil.

Abstract:
The study reported in this paper examined turn-taking and sequence organisation in a sample of twenty-one interactions derived from the 1992 Victorian Certificate of Education Italian oral Common Assessment Task. The most common adjacency pair was found to be the question and answer, the assessor having the right to ask questions and the student to answer. Student-initiated questions occurred in five environments and only when conditions were created for them to do so. The assessor's role was to open and close sequences and sections and to initiate topics, principally through the question. Two types of sequences were identified: question/answer and expanded sequences. It was also found that there were two groups of assessors: those who predominantly set up question/answer sequences, and those who set up post-sequences.
6

Greenwood, Alex, Rose Mary Zbiek, and Amy Brass. "Task, Information, Rubric: A Mathematical Modeling Task." Mathematics Teacher: Learning and Teaching PK-12 117, no. 12 (2024): 907–16. https://doi.org/10.5951/mtlt.2023.0369.

7

Nguyen, Van-Tu, Anh-Cuong Le, and Ha-Nam Nguyen. "A Model of Convolutional Neural Network Combined with External Knowledge to Measure the Question Similarity for Community Question Answering Systems." International Journal of Machine Learning and Computing 11, no. 3 (2021): 194–201. http://dx.doi.org/10.18178/ijmlc.2021.11.3.1035.

Abstract:
Automatically determining similar questions and ranking the obtained questions according to their similarity to each input question is a very important task in any community Question Answering (cQA) system. Various methods have been applied to this task, including conventional machine learning methods with feature extraction and some recent studies using deep learning methods. This paper addresses the problem of how to combine the advantages of different methods into one unified model. Moreover, deep learning models are usually only effective on large data sets, while training data sets in cQA problems are often small, so the idea of integrating external knowledge into deep learning models for the cQA problem becomes more important. To this end, we propose a neural network-based model which combines a Convolutional Neural Network (CNN) with features from other methods, so that the deep learning model is enhanced with additional knowledge sources. In our proposed model, the CNN component learns the representations of two given questions, which are then combined with additional features through a Multilayer Perceptron (MLP) to measure the similarity between the two questions. We tested our proposed model on the SemEval 2016 Task 3 data set and obtained better results than previous studies on the same task.
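The combination step this abstract describes, fixed-size question representations (e.g., from a CNN encoder) concatenated with handcrafted features and scored by an MLP, can be sketched in a few lines of NumPy. This is an illustrative toy forward pass with random weights, not the authors' implementation; in the paper the weights are learned jointly.

```python
import numpy as np

def mlp_similarity(q1_vec, q2_vec, extra_feats, W1, b1, w2, b2):
    """Score the similarity of two question vectors plus external features.

    q1_vec, q2_vec: fixed-size question representations (e.g., CNN outputs).
    extra_feats: handcrafted features (e.g., word overlap scores).
    The weights here are illustrative; in practice they are learned jointly.
    """
    x = np.concatenate([q1_vec, q2_vec, extra_feats])
    h = np.maximum(0.0, W1 @ x + b1)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # sigmoid score in (0, 1)

rng = np.random.default_rng(0)
d, f, hidden = 4, 2, 8                 # toy dimensions
W1 = rng.normal(size=(hidden, 2 * d + f)); b1 = np.zeros(hidden)
w2 = rng.normal(size=hidden); b2 = 0.0
score = mlp_similarity(rng.normal(size=d), rng.normal(size=d),
                       np.array([0.5, 0.3]), W1, b1, w2, b2)
print(0.0 < score < 1.0)  # True
```

The sigmoid keeps the score in (0, 1), so candidate questions can be ranked directly by it.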
8

Hasbi Nasution, Nur'afifah. "The Analysis of Applying Question-Making Task to Comprehend TOEFL Subjects." Journal MELT (Medium for English Language Teaching) 2, no. 1 (2018): 36. http://dx.doi.org/10.22303/melt.2.1.2017.36-47.

Abstract:
TOEFL (Test of English as a Foreign Language) is widely used to measure one's ability in English. In practice, TOEFL is almost always learned by answering questions. Whatever the subject (listening, structure, error analysis, or reading), students are faced with a lot of questions. In many cases, students become tired and bored of answering questions, and they may not even comprehend the subjects. To solve that problem, applying a question-making task can be a solution. This study explains how the task improved students' comprehension of TOEFL subjects; furthermore, the task builds students' motivation to get involved in the teaching-learning process. This study also focused on exploring the result of applying a question-making task as a TOEFL teaching tool. The aim of this study is to introduce a TOEFL teaching technique that could enable students to comprehend TOEFL subjects. In addition, making their own questions would motivate them to be involved in the learning process.
9

Bach, Ngo Xuan, Phan Duc Thanh, and Tran Thi Oanh. "Question Analysis towards a Vietnamese Question Answering System in the Education Domain." Cybernetics and Information Technologies 20, no. 1 (2020): 112–28. http://dx.doi.org/10.2478/cait-2020-0008.

Abstract:
Building a computer system which can automatically answer questions in human language, speech or text, is a long-standing goal of the Artificial Intelligence (AI) field. Question analysis, the task of extracting important information from the input question, is the first and crucial step towards a question answering system. In this paper, we focus on the task of Vietnamese question analysis in the education domain. Our goal is to extract important information expressed by named entities in an input question, such as university names, campus names, major names, and teacher names. We present several extraction models that utilize the advantages of both traditional statistical methods with handcrafted features and more recent advanced deep neural networks with automatically learned features. Our best model achieves 88.11% in the F1 score on a corpus consisting of 3,600 Vietnamese questions collected from the fan page of the International School, Vietnam National University, Hanoi.
10

Kwiatkowski, Tom, Jennimaria Palomaki, Olivia Redfield, et al. "Natural Questions: A Benchmark for Question Answering Research." Transactions of the Association for Computational Linguistics 7 (November 2019): 453–66. http://dx.doi.org/10.1162/tacl_a_00276.

Abstract:
We present the Natural Questions corpus, a question answering data set. Questions consist of real anonymized, aggregated queries issued to the Google search engine. An annotator is presented with a question along with a Wikipedia page from the top 5 search results, and annotates a long answer (typically a paragraph) and a short answer (one or more entities) if present on the page, or marks null if no long/short answer is present. The public release consists of 307,373 training examples with single annotations; 7,830 examples with 5-way annotations for development data; and a further 7,842 examples with 5-way annotations sequestered as test data. We present experiments validating the quality of the data. We also describe an analysis of 25-way annotations on 302 examples, giving insights into human variability on the annotation task. We introduce robust metrics for the purposes of evaluating question answering systems; demonstrate high human upper bounds on these metrics; and establish baseline results using competitive methods drawn from related literature.
11

Zhou, Tom Chao, Xiance Si, Edward Y. Chang, Irwin King, and Michael R. Lyu. "A Data-Driven Approach to Question Subjectivity Identification in Community Question Answering." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2021): 164–70. http://dx.doi.org/10.1609/aaai.v26i1.8111.

Abstract:
Automatic Subjective Question Answering (ASQA), which aims at answering users' subjective questions using summaries of multiple opinions, is becoming increasingly important. One challenge of ASQA is that expected answers for subjective questions may not readily exist on the Web. The rise in popularity of Community Question Answering (CQA) sites, which provide platforms for people to post and answer questions, provides an alternative for ASQA. One important task of ASQA is question subjectivity identification, which identifies whether a user is asking a subjective question. Unfortunately, there has been little labeled training data available for this task. In this paper, we propose an approach to collect training data automatically by utilizing social signals in CQA sites without involving any manual labeling. Experimental results show that our data-driven approach achieves 9.37% relative improvement over the supervised approach using manually labeled data, and achieves 5.15% relative gain over a state-of-the-art semi-supervised approach. In addition, we propose several heuristic features for question subjectivity identification. By adding these features, we achieve 11.23% relative improvement over word n-gram features under the same experimental setting.
12

Minga, Jamila, Davida Fromm, ClarLynda Williams-DeVane, and Brian MacWhinney. "Question Use in Adults With Right-Hemisphere Brain Damage." Journal of Speech, Language, and Hearing Research 63, no. 3 (2020): 738–48. http://dx.doi.org/10.1044/2019_jslhr-19-00063.

Abstract:
Purpose. Right-hemisphere brain damage (RHD) can affect pragmatic aspects of communication that may contribute to an impaired ability to gather information. Questions are an explicit means of gathering information. Question types vary in terms of the demands they place on cognitive resources. The purpose of this exploratory descriptive study is to test the hypothesis that adults with RHD differ from neurologically healthy adults in the types of questions asked during a structured task. Method. Adults who sustained a single right-hemisphere stroke and neurologically healthy controls from the RHDBank Database completed the Unfamiliar Object Task of the RHDBank Discourse Protocol (Minga et al., 2016). Each task was video-recorded. Questions were transcribed using the Codes for the Human Analysis of Transcripts format. Coding and analysis of each response were conducted using Computerized Language Analysis (MacWhinney, 2000) programs. Results. The types of questions used differed significantly across groups, with the RHD group using significantly more content questions and significantly fewer polar questions than the neurologically healthy control group. In their content question use, adults with RHD used significantly more “what” questions than other question subtypes. Conclusion. Question-asking is an important aspect of pragmatic communication. Differences in the relative usage of question types, such as the reduced use of polar questions or increased use of content questions, may reflect cognitive limitations arising from RHD. Further investigations examining question use in this population are encouraged to replicate the current findings and to expand on the study tasks and measures. Supplemental Material https://doi.org/10.23641/asha.11936295
13

Mutabazi, Emmanuel, Jianjun Ni, Guangyi Tang, and Weidong Cao. "An Improved Model for Medical Forum Question Classification Based on CNN and BiLSTM." Applied Sciences 13, no. 15 (2023): 8623. http://dx.doi.org/10.3390/app13158623.

Abstract:
Question Classification (QC) is a fundamental task in implementing Question Answering Systems (QASs), as it identifies the question category and plays a big role in predicting the answer to a question while building a QAS. However, classifying medical questions is still a challenging task due to the complexity of medical terms. Many researchers have proposed different techniques to solve these problems, but some of them remain partially solved or unsolved. With the help of deep learning technology, various text-processing problems have become much easier to solve. In this paper, an improved deep learning-based model for Medical Forum Question Classification (MFQC) is proposed to classify medical questions. In the proposed model, feature representation is performed using Word2Vec, a word embedding model. The features are then extracted from the word embedding layer by Convolutional Neural Networks (CNNs). Finally, a Bidirectional Long Short-Term Memory (BiLSTM) network is used to classify the extracted features: the BiLSTM analyzes the target information of the representation and outputs the question category via a softmax layer. Our model achieves state-of-the-art performance by effectively capturing semantic and syntactic features from the input questions. We evaluate the proposed CNN-BiLSTM model on two benchmark datasets and compare its performance with existing methods, demonstrating its superiority in accurately categorizing medical forum questions.
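As a rough illustration of the CNN stage of such a pipeline, the NumPy sketch below slides trigram filters over a question's word-embedding matrix to produce per-position feature vectors, which a BiLSTM would then read. The dimensions and random weights are toy assumptions, not the paper's model.

```python
import numpy as np

def conv1d_features(embeddings, filters):
    """CNN feature extraction over a question's word-embedding matrix.

    embeddings: (seq_len, emb_dim) word vectors for one question.
    filters: (n_filters, window, emb_dim) convolution filters.
    Returns (seq_len - window + 1, n_filters) ReLU feature vectors,
    one per filter position, that a downstream BiLSTM could consume.
    """
    seq_len, _ = embeddings.shape
    n_filters, window, _ = filters.shape
    out = np.zeros((seq_len - window + 1, n_filters))
    for t in range(seq_len - window + 1):
        patch = embeddings[t:t + window]  # (window, emb_dim) slice
        out[t] = np.tensordot(filters, patch, axes=([1, 2], [0, 1]))
    return np.maximum(0.0, out)           # ReLU activation

rng = np.random.default_rng(1)
emb = rng.normal(size=(10, 16))      # toy 10-word question, 16-dim embeddings
filt = rng.normal(size=(32, 3, 16))  # 32 trigram filters
feats = conv1d_features(emb, filt)
print(feats.shape)  # (8, 32)
```

A classifier like the one described would feed `feats` through a BiLSTM and a softmax layer to obtain category probabilities.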
14

Hill, Thomas E. "MORAL CONSTRUCTION AS A TASK: SOURCES AND LIMITS." Social Philosophy and Policy 25, no. 1 (2007): 214–36. http://dx.doi.org/10.1017/s0265052508080084.

Abstract:
This essay first distinguishes different questions regarding moral objectivity and relativism and then sketches a broadly Kantian position on two of these questions. First, how, if at all, can we derive, justify, or support specific moral principles and judgments from more basic moral standards and values? Second, how, if at all, can the basic standards such as my broadly Kantian perspective, be defended? Regarding the first question, the broadly Kantian position is that from ideas in Kant's later formulations of the Categorical Imperative, especially human dignity and rational autonomous law-making, we can develop an appropriate moral perspective for identifying and supporting more specific principles. Both the deliberative perspective and the derivative principles can be viewed as “constructed,” but in different senses. In response to the second question, the essay examines two of Kant's strategies for defending his basic perspective and the important background of his arguments against previous moral theories.
15

Sonawale, Ruchita. "Visual Mind: Visual Question Answering (VQA) with CLIP Model." International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (2024): 3843–49. http://dx.doi.org/10.22214/ijraset.2024.60786.

Abstract:
This paper addresses the Visual Question Answering (VQA) problem using CLIP models. The proposed approach is an enhanced VQA-CLIP model with additional layers for better computational performance. VQA is an increasingly important task that aims to answer open-ended questions based on images, with numerous applications in fields such as medicine, education, and surveillance. The VizWiz dataset, specifically designed to assist visually impaired individuals, consists of image/question pairs along with 10 answers per question, recorded by blind participants in a natural setting. The task involves predicting answers to questions and determining when a question is unanswerable. In this study, we utilize the VizWiz dataset and employ the CLIP model, a multimodal, zero-shot model known for its efficiency in processing image and text data, with an additional linear layer. The model leverages the unique capabilities of CLIP and is benchmarked against state-of-the-art approaches. Results indicate competitive or better performance of the VQA model.
16

Kim, YouJin, Caroline Payant, and Pamela Pearson. "The Intersection of Task-Based Interaction, Task Complexity, and Working Memory." Studies in Second Language Acquisition 37, no. 3 (2015): 549–81. http://dx.doi.org/10.1017/s0272263114000618.

Abstract:
The extent to which individual differences in cognitive abilities affect the relationship among task complexity, attention to form, and second language development has been addressed only minimally in the cognition hypothesis literature. The present study explores how reasoning demands in tasks and working memory (WM) capacity predict learners’ ability to notice English question structures provided in the form of recasts and how this contributes to subsequent development of English question formation. Eighty-one nonnative speakers of English completed three interactive tasks with a native speaker interlocutor, one WM task, and three oral production tests. Prior to the first interactive task, participants were randomly assigned to a task group (simple or complex). During task performance, all learners were provided with recasts targeting errors in question formation. The results showed that learners’ cognitive processes during tasks were in line with the cognitive demands of the tasks, at two complexity levels. The findings suggest that WM was the only significant predictor of the amount of noticing of recasts as well as of learners’ question development. With regard to interaction effects between WM and task complexity, high WM learners who carried out a complex version of the tasks benefitted the most from task-based interaction.
17

Thai, Triet, and Son T. Luu. "Integrating Image Features with Convolutional Sequence-to-Sequence Network for Multilingual Visual Question Answering." Journal of Computer Science and Cybernetics 40, no. 2 (2024): 117–34. http://dx.doi.org/10.15625/1813-9663/18155.

Abstract:
Visual question answering is a task that requires computers to give correct answers to input questions based on images. This task can be solved by humans with ease, but it is a challenge for computers. The VLSP2022-EVJVQA shared task carries out the visual question answering task in the multilingual domain on a newly released dataset, UIT-EVJVQA, in which the questions and answers are written in three different languages: English, Vietnamese, and Japanese. We approached the challenge as a sequence-to-sequence learning task, in which we integrated hints from pre-trained state-of-the-art VQA models and image features with a convolutional sequence-to-sequence network to generate the desired answers. Our approach obtained an F1 score of up to 0.3442 on the public test set and 0.4210 on the private test set.
18

Li, Xiao, Yawei Sun, and Gong Cheng. "TSQA: Tabular Scenario Based Question Answering." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (2021): 13297–305. http://dx.doi.org/10.1609/aaai.v35i15.17570.

Abstract:
Scenario-based question answering (SQA) has attracted an increasing research interest. Compared with the well-studied machine reading comprehension (MRC), SQA is a more challenging task: a scenario may contain not only a textual passage to read but also structured data like tables, i.e., tabular scenario based question answering (TSQA). AI applications of TSQA such as answering multiple-choice questions in high-school exams require synthesizing data in multiple cells and combining tables with texts and domain knowledge to infer answers. To support the study of this task, we construct GeoTSQA. This dataset contains 1k real questions contextualized by tabular scenarios in the geography domain. To solve the task, we extend state-of-the-art MRC methods with TTGen, a novel table-to-text generator. It generates sentences from variously synthesized tabular data and feeds the downstream MRC method with the most useful sentences. Its sentence ranking model fuses the information in the scenario, question, and domain knowledge. Our approach outperforms a variety of strong baseline methods on GeoTSQA.
19

Máñez Sáez, Ignacio, Eduardo Vidal-Abarca, and Joseph Magliano. "Comprehension processes on question-answering activities: A think-aloud study." Electronic Journal of Research in Education Psychology 20, no. 56 (2022): 1–26. http://dx.doi.org/10.25115/ejrep.v20i56.3776.

Abstract:
Introduction. Students often answer questions from available expository texts for assessment and learning purposes. These activities require readers to activate not only meaning-making processes (e.g., paraphrases or elaborations), but also metacognitive operations (e.g., monitoring their own comprehension or self-regulating reading behaviors) in order to successfully use textual information to meet the task demands. The aim of the study was to explore the meaning-making processes readers activate while answering questions and using the textual information at hand, and how they monitor and control the processing of these tasks. Method. Forty eighth graders read two expository texts and answered ten multiple-choice comprehension questions per text. For each question, participants were required to select the relevant pieces of textual information to provide the correct answer. Further, participants thought aloud for one of the texts, whereas they performed the task in silence for the other. This reading scenario served to collect valuable data on the readers' question-answering process. The task was administered individually in a computer-based environment, which allowed recording the students' reading behavior online. Results. Results showed processing differences between the two recursive steps of question-answering. While reading the question, readers mainly restated information and focused on monitoring their comprehension of the questions and their correspondence with the alternatives. However, they tended to paraphrase and assess textual relevance when searching the text. The study also reveals that think-aloud methodology affected these processes and outcomes differently. Discussion and Conclusion. This study advances our knowledge of the cognitive and metacognitive processes involved in answering questions from an available text, a task extensively used to measure reading literacy skills as well as for learning purposes in academic settings. It shows how readers activate meaning-making processes while reading questions and searching the text for task-relevant information, and how they monitor the question-answering process. Apart from advancing our theoretical knowledge, our study also has important practical applications for the assessment of reading literacy skills.
20

Zitnick, C. Lawrence, Aishwarya Agrawal, Stanislaw Antol, Margaret Mitchell, Dhruv Batra, and Devi Parikh. "Measuring Machine Intelligence Through Visual Question Answering." AI Magazine 37, no. 1 (2016): 63–72. http://dx.doi.org/10.1609/aimag.v37i1.2647.

Abstract:
As machines have become more intelligent, there has been a renewed interest in methods for measuring their intelligence. A common approach is to propose tasks for which a human excels, but one which machines find difficult. However, an ideal task should also be easy to evaluate and not be easily gameable. We begin with a case study exploring the recently popular task of image captioning and its limitations as a task for measuring machine intelligence. An alternative and more promising task is Visual Question Answering that tests a machine’s ability to reason about language and vision. We describe a dataset unprecedented in size created for the task that contains over 760,000 human generated questions about images. Using around 10 million human generated answers, machines may be easily evaluated.
21

Arzu, Mehmet, and Murat Aydoğan. "Comparison of Transformer-Based Turkish Models for Question-Answering Task." Balkan Journal of Electrical and Computer Engineering 12, no. 4 (2025): 387–93. https://doi.org/10.17694/bajece.1576976.

Abstract:
Question-answering systems facilitate information access processes by providing fast and accurate answers to questions that users express in natural language. Today, advances in Natural Language Processing (NLP) techniques increase the effectiveness of such systems and improve the user experience. However, for these systems to work effectively, an accurate understanding of the structural properties of language is required. Traditional rule-based and knowledge retrieval-based systems are not able to analyze the contextual meaning of questions and texts deeply enough and therefore cannot produce satisfactory answers to complex questions. For this reason, Transformer-based models that can better capture the contextual and semantic integrity of the language have been developed. In this study, within the scope of the developed models, the performances of BERTurk, ELECTRA Turkish and DistilBERTurk models for Turkish question-answer tasks were compared by fine-tuning under the same hyperparameters and the results obtained were evaluated. According to the findings, it was observed that higher Exact Match (EM) and F1 scores were obtained in models with case sensitivity; the best performance was obtained with 63.99 EM and 80.84 F1 scores in the BERTurk (Cased, 128k) model.
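The Exact Match (EM) and F1 scores this abstract reports are standard extractive-QA metrics. Below is a minimal sketch of one common (SQuAD-style) definition, using whitespace tokenization and bag-of-tokens overlap; real evaluation scripts also typically strip punctuation and articles before comparing.

```python
import re

def exact_match(pred, gold):
    """1.0 if the normalized prediction equals the normalized gold answer."""
    norm = lambda s: re.sub(r"\s+", " ", s.lower().strip())
    return float(norm(pred) == norm(gold))

def token_f1(pred, gold):
    """Bag-of-tokens F1 between prediction and gold answer."""
    p, g = pred.lower().split(), gold.lower().split()
    g_counts = {}
    for t in g:
        g_counts[t] = g_counts.get(t, 0) + 1
    common = 0
    for t in p:                      # count overlapping tokens (with multiplicity)
        if g_counts.get(t, 0) > 0:
            g_counts[t] -= 1
            common += 1
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Ankara", "ankara"))            # 1.0
print(round(token_f1("in Ankara", "Ankara"), 2))  # 0.67
```

Corpus-level EM and F1, like the 63.99 EM and 80.84 F1 reported for BERTurk, are averages of these per-question scores (taking the maximum over multiple gold answers when several are available).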
22

Lee, Jina, and Eun Kyung Kim. "Analyzing Collaborative Talk in a Student Managed Task-based Fine Art Activity." Korean Association For Learner-Centered Curriculum And Instruction 22, no. 18 (2022): 767–82. http://dx.doi.org/10.22251/jlcci.2022.22.18.767.

Abstract:
Objectives This is a qualitative study of the fine art group activity in the eyes of sociolinguistics. Using conversation analysis, this study aims to scrutinize how the actual collaborative talk dynamics take place in a student managed task-based art activity.
 Methods In order to have a close look at the collaborative talk, we focused directly on face-to-face talk-in-interaction of four students’ during their two hour group activity in a college art class, ‘understanding of contemporary art’. The student managed task-based group activity was a part of liberal arts course especially in the 11th week of spring semester, 2022 at a university in Seoul, Korea. The data was collected via video recording of the two hour group discussion, and the recorded talk was all transcribed for conversation analysis which is the main tool of analyzing students’ talk.
 Results The students’ task-based activity proceeded in four steps: (1) sharing and agreeing on a task topic to work on, (2) cooperative opinion sharing about making the art work, such as materials and method, (3) actual making and completion of the art work, and (4) wrapping up and cleaning. Conversation analysis studies the turn-taking organization of talk-in-interaction to find how participants manage and display intersubjectivity in talk. In the students’ talk that occurred during the student-managed group task, we found several turn-taking types that resulted in reciprocal support for developing the task product: (1) topic change, (2) other-initiated second-turn repair, (3) reactive tokens, and (4) other question-answer sequences with mutually supporting verbal and nonverbal moves.
 Conclusions This study is valuable in that it addressed the naturally occurring process of turn-taking sequences by analyzing students’ talk-in-interaction in depth while the students were engaged in a collaborative task-based activity to complete an art work. As noted in the results, most of the various types of turn-taking led to mutual support and agreement with each other’s thoughts in order to produce collaborative task completion, rather than aggressive debate and disagreement.
APA, Harvard, Vancouver, ISO, and other styles
23

Li, Haonan, Ehsan Hamzei, Ivan Majic, et al. "Neural factoid geospatial question answering." Journal of Spatial Information Science, no. 23 (December 24, 2021): 65–90. http://dx.doi.org/10.5311/josis.2021.23.159.

Full text
Abstract:
Existing question answering systems struggle to answer factoid questions when geospatial information is involved. This is because most systems cannot accurately detect the geospatial semantic elements from the natural language questions, or capture the semantic relationships between those elements. In this paper, we propose a geospatial semantic encoding schema and a semantic graph representation which captures the semantic relations and dependencies in geospatial questions. We demonstrate that our proposed graph representation approach aids in the translation from natural language to a formal, executable expression in a query language. To decrease the need for people to provide explanatory information as part of their question and make the translation fully automatic, we treat the semantic encoding of the question as a sequential tagging task, and the graph generation of the query as a semantic dependency parsing task. We apply neural network approaches to automatically encode the geospatial questions into spatial semantic graph representations. Compared with current template-based approaches, our method generalises to a broader range of questions, including those with complex syntax and semantics. Our proposed approach achieves better results on GeoData201 than existing methods.
APA, Harvard, Vancouver, ISO, and other styles
24

Diaconu, Bogdan-Alexandru, and Beáta Lázár-Lőrincz. "Romanian Question Answering Using Transformer Based Neural Networks." Studia Universitatis Babeș-Bolyai Informatica 67, no. 1 (2022): 37–44. http://dx.doi.org/10.24193/subbi.2022.1.03.

Full text
Abstract:
"Question answering is the task of predicting answers for questions based on a context paragraph. It has become especially important, as the large amounts of textual data available online requires not only gathering information but also the task of findings specific answers to specific questions. In this work, we present experiments evaluated on the XQuAD-ro question answering dataset that has been recently published based on the translation of the SQuAD dataset into Romanian. Our bestperforming model, Romanian fine-tuned BERT, achieves an F1 score of 0.80 and an EM score of 0.73. We show that fine-tuning the model with the addition of the Romanian translation slightly increases the evaluation metrics. Keywords and phrases: question answering, deep learning, Transformer, Romanian. "
APA, Harvard, Vancouver, ISO, and other styles
25

Zhang, Pei Ying. "Sentence Similarity Metric and its Application in FAQ System." Advanced Materials Research 718-720 (July 2013): 2248–51. http://dx.doi.org/10.4028/www.scientific.net/amr.718-720.2248.

Full text
Abstract:
An FAQ system is a question answering system that finds a question sentence in a question-answer collection and then returns its corresponding answer to the user. The task of matching questions to corresponding question-answer pairs has become a major challenge in FAQ systems. This paper proposes a sentence similarity metric between questions based on semantic similarity as well as question length. Experiments show that this method can improve the accuracy and intelligence of the answering system and has practical value.
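The abstract combines semantic similarity with question length but does not spell out the formula; the sketch below is a hypothetical combination (Jaccard word overlap standing in for the semantic component, a length ratio for the length component, and an assumed weighting parameter alpha):

```python
def length_similarity(t1, t2):
    """Ratio of the shorter question length to the longer one."""
    a, b = len(t1), len(t2)
    return min(a, b) / max(a, b) if max(a, b) else 1.0

def semantic_similarity(t1, t2):
    """Jaccard word overlap, a simple stand-in for a semantic measure."""
    s1, s2 = set(t1), set(t2)
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 1.0

def question_similarity(q1, q2, alpha=0.8):
    """Weighted combination of semantic and length similarity in [0, 1]."""
    t1, t2 = q1.lower().split(), q2.lower().split()
    return alpha * semantic_similarity(t1, t2) + (1 - alpha) * length_similarity(t1, t2)
```

An FAQ matcher would score the user question against every stored question and return the answer of the highest-scoring pair.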
APA, Harvard, Vancouver, ISO, and other styles
26

NAKOV, PRESLAV, LLUÍS MÀRQUEZ, ALESSANDRO MOSCHITTI, and HAMDY MUBARAK. "Arabic community question answering." Natural Language Engineering 25, no. 1 (2018): 5–41. http://dx.doi.org/10.1017/s1351324918000426.

Full text
Abstract:
We analyze resources and models for Arabic community Question Answering (cQA). In particular, we focus on CQA-MD, our cQA corpus for Arabic in the domain of medical forums. We describe the corpus and the main challenges it poses due to its mix of informal and formal language, and of different Arabic dialects, as well as due to its medical nature. We further present a shared task on cQA at SemEval, the International Workshop on Semantic Evaluation, based on this corpus. We discuss the features and the machine learning approaches used by the teams who participated in the task, with focus on the models that exploit syntactic information using convolutional tree kernels and neural word embeddings. We further analyze and extend the outcome of the SemEval challenge by training a meta-classifier combining the output of several systems. This allows us to compare different features and different learning algorithms in an indirect way. Finally, we analyze the most frequent errors common to all approaches, categorizing them into prototypical cases, and zooming into the way syntactic information in tree kernel approaches can help solve some of the most difficult cases. We believe that our analysis and the lessons learned from the process of corpus creation as well as from the shared task analysis will be helpful for future research on Arabic cQA.
APA, Harvard, Vancouver, ISO, and other styles
27

Huang, Xiang, Sitao Cheng, Yiheng Shu, Yuheng Bao, and Yuzhong Qu. "Question Decomposition Tree for Answering Complex Questions over Knowledge Bases." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (2023): 12924–32. http://dx.doi.org/10.1609/aaai.v37i11.26519.

Full text
Abstract:
Knowledge base question answering (KBQA) has attracted a lot of interest in recent years, especially for complex questions which require multiple facts to answer. Question decomposition is a promising way to answer complex questions. Existing decomposition methods split the question into sub-questions according to a single compositionality type, which is not sufficient for questions involving multiple compositionality types. In this paper, we propose Question Decomposition Tree (QDT) to represent the structure of complex questions. Inspired by recent advances in natural language generation (NLG), we present a two-staged method called Clue-Decipher to generate QDT. It can leverage the strong ability of NLG model and simultaneously preserve the original questions. To verify that QDT can enhance KBQA task, we design a decomposition-based KBQA system called QDTQA. Extensive experiments show that QDTQA outperforms previous state-of-the-art methods on ComplexWebQuestions dataset. Besides, our decomposition method improves an existing KBQA system by 12% and sets a new state-of-the-art on LC-QuAD 1.0.
APA, Harvard, Vancouver, ISO, and other styles
28

Ye, Hongbin, Ningyu Zhang, Zhen Bi, et al. "Learning to Ask for Data-Efficient Event Argument Extraction (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 13099–100. http://dx.doi.org/10.1609/aaai.v36i11.21686.

Full text
Abstract:
Event argument extraction (EAE) is an important task for information extraction to discover specific argument roles. In this study, we cast EAE as a question-based cloze task and empirically analyze fixed discrete token template performance. As generating human-annotated question templates is often time-consuming and labor-intensive, we further propose a novel approach called “Learning to Ask,” which can learn optimized question templates for EAE without human annotations. Experiments using the ACE-2005 dataset demonstrate that our method based on optimized questions achieves state-of-the-art performance in both the few-shot and supervised settings.
APA, Harvard, Vancouver, ISO, and other styles
29

Wang, Chufei. "Answer Me! Factors Affecting Students Propensity to Ask Questions in Class and How Stimuli Polish Student Questioning." Lecture Notes in Education Psychology and Public Media 23, no. 1 (2023): 133–46. http://dx.doi.org/10.54254/2753-7048/23/20230416.

Full text
Abstract:
Student voluntary question-asking enhances student engagement and academic achievement. Nevertheless, there is little understanding of the mechanism of questioning. Against this background, the primary objective of the present study was a comprehensive evaluation encompassing three dimensions of students' questioning behaviour, namely the motivations underlying their question-asking, the types of questions, and the strategies employed to encourage more questions. The study found that classroom environment, students' self-efficacy and self-awareness, peer influence, and gender and cultural factors are the main factors influencing students' propensity to ask questions. Question types can be divided into procedural and content-related, from which students generate their questions. In addition to the unimportant off-task attention questioning, confirmation, on-task attention questioning, and clarification are low-order questioning, while evaluation, comparison, problem-solving, cause-and-effect/correlation, and extended questions are valued as high-order ones. Class material, class strategies, and teacher intervention should be utilized and upgraded to polish students' question-asking in both quantity and quality. Based on this comprehensive analysis, more research should be done into teachers' perceptions of receiving and responding to different kinds of questions and their proper intervention in student questioning.
APA, Harvard, Vancouver, ISO, and other styles
30

Zulqarnain, Muhammad, Ahmed Khalaf Zager Alsaedi, Rozaida Ghazali, Muhammad Ghulam Ghouse, Wareesa Sharif, and Noor Aida Husaini. "A comparative analysis on question classification task based on deep learning approaches." PeerJ Computer Science 7 (August 3, 2021): e570. http://dx.doi.org/10.7717/peerj-cs.570.

Full text
Abstract:
Question classification is one of the essential tasks for implementing automatic question answering in natural language processing (NLP). Recently, several text-mining problems such as text classification, document categorization, web mining, sentiment analysis, and spam filtering have been successfully addressed by deep learning approaches. In this study, we illustrate and investigate deep learning approaches for the question classification task in the highly inflected Turkish language, training and testing the architectures on a Turkish question dataset. We used three main deep learning approaches (Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), and Convolutional Neural Networks (CNN)) and also applied two combined architectures, CNN-GRU and CNN-LSTM. Furthermore, we applied the Word2vec technique with both skip-gram and CBOW methods for word embedding, with various vector sizes, on a large corpus of user questions. We evaluated the deep learning architectures by test accuracy and 10-fold cross-validation accuracy. The experimental results illustrate that the choice of Word2vec technique has a considerable impact on the accuracy of the different deep learning approaches. We attained an accuracy of 93.7% on the question dataset using these techniques.
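Before the CNN/LSTM/GRU classifiers can be trained, Word2vec's skip-gram variant extracts (center, context) training pairs from the question corpus; a minimal sketch of that pair extraction (the window size below is an illustrative choice, not the paper's setting):

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) pairs as used to train a skip-gram model.

    Each token is paired with every other token within `window`
    positions of it; CBOW would instead predict the center token
    from the whole context window at once.
    """
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs
```

A real embedding run would feed these pairs (over the full question corpus) to a Word2vec trainer such as gensim's, varying the vector size as the study describes.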
APA, Harvard, Vancouver, ISO, and other styles
31

Cho, Byeong-Young, Lindsay Woodward, Dan Li, and Wendy Barlow. "Examining Adolescents’ Strategic Processing During Online Reading With a Question-Generating Task." American Educational Research Journal 54, no. 4 (2017): 691–724. http://dx.doi.org/10.3102/0002831217701694.

Full text
Abstract:
Forty-three high school students participated in an online reading task to generate a critical question on a controversial topic. Participants’ concurrent verbal reports of strategy use (i.e., information location, meaning making, source evaluation, self-monitoring) and their reading outcome (i.e., the generated question) were evaluated with scoring rubrics. Path analysis indicated that strategic meaning making coordinated with self-monitoring and source evaluation positively influenced the quality of the generated questions, whereas information-locating strategies alone contributed little to the participants’ question generation. Further, source evaluation played a positive role when readers monitored and regulated their strategies for information location and meaning making. The findings on the interplay of metacognitive, critical, and intertextual strategies in online reading are discussed with regard to research and practice.
APA, Harvard, Vancouver, ISO, and other styles
32

An, Suyeong, Junghoon Kim, Minsam Kim, and Juneyoung Park. "No Task Left Behind: Multi-Task Learning of Knowledge Tracing and Option Tracing for Better Student Assessment." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (2022): 4424–31. http://dx.doi.org/10.1609/aaai.v36i4.20364.

Full text
Abstract:
Student assessment is one of the most fundamental tasks in the field of AI Education (AIEd). One of the most common approaches to student assessment is Knowledge Tracing (KT), which evaluates a student's knowledge state by predicting whether the student will answer a given question correctly or not. However, in the context of multiple choice (polytomous) questions, conventional KT approaches are limited in that they only consider the binary (dichotomous) correctness label (i.e., correct or incorrect), and disregard the specific option chosen by the student. Meanwhile, Option Tracing (OT) attempts to model a student by predicting which option they will choose for a given question, but overlooks the correctness information. In this paper, we propose Dichotomous-Polytomous Multi-Task Learning (DP-MTL), a multi-task learning framework that combines KT and OT for more precise student assessment. In particular, we show that the KT objective acts as a regularization term for OT in the DP-MTL framework, and propose an appropriate architecture for applying our method on top of existing deep learning-based KT models. We experimentally confirm that DP-MTL significantly improves both KT and OT performances, and also benefits downstream tasks such as Score Prediction (SP).
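The abstract describes the KT objective acting as a regularization term on the OT objective. One plausible way to combine the two losses over a single option distribution is sketched below (the weighting `lam` and the exact functional form are assumptions, not the paper's formulation):

```python
import math

def cross_entropy(probs, target):
    """Negative log-likelihood of the target class."""
    return -math.log(probs[target])

def dp_mtl_loss(option_probs, chosen_option, correct_option, lam=0.5):
    """Hypothetical DP-MTL-style loss: the OT term asks which option the
    student picks; the KT term, derived from the same distribution, asks
    whether the student is correct. `lam` weights the KT regularizer."""
    ot_loss = cross_entropy(option_probs, chosen_option)
    p_correct = option_probs[correct_option]
    got_it_right = 1.0 if chosen_option == correct_option else 0.0
    kt_loss = -(got_it_right * math.log(p_correct)
                + (1 - got_it_right) * math.log(1 - p_correct))
    return ot_loss + lam * kt_loss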
APA, Harvard, Vancouver, ISO, and other styles
33

Olney, Andrew M., Arthur C. Graesser, and Natalie K. Person. "Question Generation from Concept Maps." Dialogue & Discourse 3, no. 2 (2012): 75–99. http://dx.doi.org/10.5087/dad.2012.204.

Full text
Abstract:
In this paper we present a question generation approach suitable for tutorial dialogues. The approach is based on previous psychological theories that hypothesize questions are generated from a knowledge representation modeled as a concept map. Our model automatically extracts concept maps from a textbook and uses them to generate questions. The purpose of the study is to generate and evaluate pedagogically-appropriate questions at varying levels of specificity across one or more sentences. The evaluation metrics include scales from the Question Generation Shared Task and Evaluation Challenge and a new scale specific to the pedagogical nature of questions in tutoring.
APA, Harvard, Vancouver, ISO, and other styles
34

Zhang, Shuailiang, Hai Zhao, Yuwei Wu, Zhuosheng Zhang, Xi Zhou, and Xiang Zhou. "DCMN+: Dual Co-Matching Network for Multi-Choice Reading Comprehension." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 9563–70. http://dx.doi.org/10.1609/aaai.v34i05.6502.

Full text
Abstract:
Multi-choice reading comprehension is a challenging task of selecting an answer from a set of candidate options given a passage and a question. Previous approaches usually calculate only a question-aware passage representation and ignore the passage-aware question representation, and therefore cannot effectively capture the relationship between passage and question. In this work, we propose the dual co-matching network (DCMN), which models the relationship among passage, question, and answer options bidirectionally. Besides, inspired by how humans solve multi-choice questions, we integrate two reading strategies into our model: (i) passage sentence selection, which finds the most salient supporting sentences to answer the question, and (ii) answer option interaction, which encodes the comparison information between answer options. DCMN equipped with the two strategies (DCMN+) obtains state-of-the-art results on five multi-choice reading comprehension datasets from different domains: RACE, SemEval-2018 Task 11, ROCStories, COIN, and MCTest.
APA, Harvard, Vancouver, ISO, and other styles
35

Лупандин and Vitaliy Lupandin. "On the methodology of question-answer situations in the professional activity of civil servants." Journal of Public and Municipal Administration 4, no. 2 (2015): 135–40. http://dx.doi.org/10.12737/13184.

Full text
Abstract:
The article discusses the use of question-answer procedures in the activity of civil servants. The author examines the question-answer situations, depending on functions of a civil servant, emphasizing that the main task of clarifying questions is to complement the information already available.
APA, Harvard, Vancouver, ISO, and other styles
36

Ramadhan, M. Hasbi, Ratu Ilma Indra Putri, and Zulkardi Hasbi Zulkardi. "Designing Fractional Task using the PMRI Approach." Jurnal Pendidikan Matematika RAFA 8, no. 1 (2022): 1–8. http://dx.doi.org/10.19109/jpmrafa.v8i1.10714.

Full text
Abstract:
Fractions are important to teach as a basis for learning mathematics at the next stage. This research aimed to design fractions learning for fourth-grade students. The method used was design research of the validation-studies type, which consists of three stages: the preliminary design, the design experiment, and the retrospective analysis. The research was conducted at a Madrasah Ibtidaiyah (MI) in Jambi City, with eleven fourth-grade students as subjects. Based on the findings, it was concluded that students closely follow the wording of questions, so as teachers or question designers we must phrase questions clearly. In addition, students tended to operate on fractions starting with addition or subtraction, then multiplication, and lastly division.
APA, Harvard, Vancouver, ISO, and other styles
37

Nan, Linyong, Chiachun Hsieh, Ziming Mao, et al. "FeTaQA: Free-form Table Question Answering." Transactions of the Association for Computational Linguistics 10 (2022): 35–49. http://dx.doi.org/10.1162/tacl_a_00446.

Full text
Abstract:
Existing table question answering datasets contain abundant factual questions that primarily evaluate a QA system’s comprehension of query and tabular data. However, restricted by their short-form answers, these datasets fail to include question–answer interactions that represent more advanced and naturally occurring information needs: questions that ask for reasoning and integration of information pieces retrieved from a structured knowledge source. To complement the existing datasets and to reveal the challenging nature of the table-based question answering task, we introduce FeTaQA, a new dataset with 10K Wikipedia-based {table, question, free-form answer, supporting table cells} pairs. FeTaQA is collected from noteworthy descriptions of Wikipedia tables that contain information people tend to seek; generation of these descriptions requires advanced processing that humans perform on a daily basis: Understand the question and table, retrieve, integrate, infer, and conduct text planning and surface realization to generate an answer. We provide two benchmark methods for the proposed task: a pipeline method based on semantic parsing-based QA systems and an end-to-end method based on large pretrained text generation models, and show that FeTaQA poses a challenge for both methods.
APA, Harvard, Vancouver, ISO, and other styles
38

Ouyang, Jianquan, and Mengen Fu. "Improving Machine Reading Comprehension with Multi-Task Learning and Self-Training." Mathematics 10, no. 3 (2022): 310. http://dx.doi.org/10.3390/math10030310.

Full text
Abstract:
Machine Reading Comprehension (MRC) is an AI challenge that requires machines to determine the correct answer to a question based on a given passage. Extractive MRC requires extracting an answer span to a question from a given passage, as in the span extraction task; in contrast, non-extractive MRC infers answers from the content of reference passages, including Yes/No question answering and unanswerable questions. Due to the specificity of the two types of MRC tasks, researchers usually work on one type of task separately, but real-life applications often require models that can handle many different types of tasks in parallel. Therefore, to meet such comprehensive requirements, we construct a multi-task fusion training reading comprehension model based on the BERT pre-training model. The model uses BERT to obtain contextual representations, which are shared by three downstream sub-modules for span extraction, Yes/No question answering, and unanswerable questions. We then fuse the outputs of the three sub-modules into a new span extraction output and use the fused cross-entropy loss function for global training. Since our model requires a large amount of labeled training data, which is often expensive to obtain or unavailable for many tasks, we additionally use self-training to generate pseudo-labeled training data, improving accuracy and generalization performance. We evaluated on the SQuAD2.0 and CAIL2019 datasets. The experiments show that our model can efficiently handle different tasks, achieving 83.2 EM and 86.7 F1 on SQuAD2.0 and 73.0 EM and 85.3 F1 on CAIL2019.
APA, Harvard, Vancouver, ISO, and other styles
39

Ning, Yuting, Zhenya Huang, Xin Lin, et al. "Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (2023): 13409–18. http://dx.doi.org/10.1609/aaai.v37i11.26573.

Full text
Abstract:
Understanding mathematical questions effectively is a crucial task, which can benefit many applications, such as difficulty estimation. Researchers have drawn much attention to designing pre-training models for question representations due to the scarcity of human annotations (e.g., labeling difficulty). However, unlike general free-format texts (e.g., user comments), mathematical questions are generally designed with explicit purposes and mathematical logic, and usually consist of more complex content, such as formulas, and related mathematical knowledge (e.g., Function). Therefore, the problem of holistically representing mathematical questions remains underexplored. To this end, in this paper, we propose a novel contrastive pre-training approach for mathematical question representations, namely QuesCo, which attempts to bring questions with more similar purposes closer. Specifically, we first design two-level question augmentations, including content-level and structure-level, which generate literally diverse question pairs with similar purposes. Then, to fully exploit hierarchical information of knowledge concepts, we propose a knowledge hierarchy-aware rank strategy (KHAR), which ranks the similarities between questions in a fine-grained manner. Next, we adopt a ranking contrastive learning task to optimize our model based on the augmented and ranked questions. We conduct extensive experiments on two real-world mathematical datasets. The experimental results demonstrate the effectiveness of our model.
APA, Harvard, Vancouver, ISO, and other styles
40

Wu, Chenfei, Jinlai Liu, Xiaojie Wang, and Ruifan Li. "Differential Networks for Visual Question Answering." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8997–9004. http://dx.doi.org/10.1609/aaai.v33i01.33018997.

Full text
Abstract:
The task of Visual Question Answering (VQA) has emerged in recent years for its potential applications. To address the VQA task, a model should efficiently fuse feature elements from both images and questions. Existing models fuse an image feature element v_i and a question feature element q_i directly, for example as an element-wise product v_i * q_i. Such solutions largely ignore two key points: 1) whether v_i and q_i are in the same space, and 2) how to reduce the observation noise in v_i and q_i. We argue that differences between pairs of feature elements, like (v_i − v_j) and (q_i − q_j), are more likely to lie in the same space, and that the difference operation helps reduce observation noise. To achieve this, we first propose Differential Networks (DN), a novel plug-and-play module that computes differences between pair-wise feature elements. With the tool of DN, we then propose DN-based Fusion (DF), a novel model for the VQA task. We achieve state-of-the-art results on four publicly available datasets. Ablation studies also show the effectiveness of the difference operations in the DF model.
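The difference operation the authors describe can be illustrated with a toy sketch (a drastically simplified reading of DN/DF; the actual model computes these fusions with learned neural layers over high-dimensional features):

```python
def differential_features(feat):
    """All pairwise differences (f_i - f_j), i != j, of a feature vector."""
    return [feat[i] - feat[j]
            for i in range(len(feat)) for j in range(len(feat)) if i != j]

def df_fuse(img_feat, q_feat):
    """Fuse image and question features via element-wise products of
    their pairwise differences, instead of multiplying raw elements."""
    dv = differential_features(img_feat)
    dq = differential_features(q_feat)
    return [a * b for a, b in zip(dv, dq)]
```

Note that any constant offset in either feature vector cancels in the differences, which is the intuition behind the noise-reduction claim.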
APA, Harvard, Vancouver, ISO, and other styles
41

Sari, Debora Novita, Wennyta Wennyta, and Efa Silfia. "An Analysis of Writing Tasks in “Pathway to English” Textbook for Twelfth Grade of SMAN 5 Kota Jambi." JELT: Journal of English Language Teaching 5, no. 1 (2021): 71. http://dx.doi.org/10.33087/jelt.v5i1.76.

Full text
Abstract:
This research aimed to find out whether the writing tasks in the “Pathway to English” textbook fit the criteria for writing tasks suggested by Paul Nation. Four kinds of tasks are suggested, each with the following types: (1) Experienced Task (linked skills, draw and write, ten perfect sentences, partial writing, setting your own questions), (2) Shared Task (reproduction exercise/dicto-gloss, blackboard composition, group-class composition, writing with a secretary), (3) Guided Task (translation, picture composition, delayed copying, writing with grammar help, answer the questions, correction, complete the sentences, back writing, put the words in order, follow the model, what is it?, change the sentences, join the sentences, writing by steps, marking guided writing), (4) Independent Task. The method of this research was descriptive qualitative, and the data were analyzed with the qualitative content analysis method. The research found 54 instances of writing tasks: 2 Experienced tasks (1 partial writing, 1 setting your own question), 1 Shared task (5 group-class composition), 8 Guided task types (1 translation, 7 look and write, 1 picture composition, 7 writing with grammar help, 10 answer the question, 4 correction, 11 complete the sentences, 2 put the words in order), and 1 Independent task (6 independent). The tasks found match the criteria suggested by Paul Nation, which shows that this book is appropriate according to those criteria. Key Words: Writing Task, English Textbook, Task Analysis
APA, Harvard, Vancouver, ISO, and other styles
42

KATO, TSUNEAKI, JUN'ICHI FUKUMOTO, FUMITO MASUI, and NORIKO KANDO. "Evaluation Task of Question Answering for Information Access Dialogues." Journal of Natural Language Processing 15, no. 3 (2008): 53–75. http://dx.doi.org/10.5715/jnlp.15.3_53.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Nwosu, Amara, Stephen Mason, Anita Roberts, and Heino Hugel. "The evaluation of a peer-led question-writing task." Clinical Teacher 10, no. 3 (2013): 151–54. http://dx.doi.org/10.1111/j.1743-498x.2012.00632.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Davey, Beth. "Postpassage Questions: Task and Reader Effects on Comprehension and Metacomprehension Processes." Journal of Reading Behavior 19, no. 3 (1987): 261–83. http://dx.doi.org/10.1080/10862968709547604.

Full text
Abstract:
This investigation explored the effects of question task conditions on reading comprehension and metacomprehension for subjects differing in reading ability and English language proficiency. Proficient readers, disabled readers, and deaf readers read expository passages and completed selected-response and constructed-response question tasks under both lookback and no-lookback conditions. In addition, subjects rated their perceived comprehension adequacy both after reading each passage and after responding to the questions. Several significant interaction effects were found for both demonstrated and perceived comprehension performance, most notably with lookback tasks. However, overlaps between comprehension and metacomprehension processes were not comparable across reader groups. Implications are drawn for further research concerning interactions of individual differences with reading comprehension tasks.
APA, Harvard, Vancouver, ISO, and other styles
45

GROSSE, GERLIND, and MICHAEL TOMASELLO. "Two-year-old children differentiate test questions from genuine questions." Journal of Child Language 39, no. 1 (2011): 192–204. http://dx.doi.org/10.1017/s0305000910000760.

Full text
Abstract:
Children are frequently confronted with so-called ‘test questions’. While genuine questions are requests for missing information, test questions ask for information obviously already known to the questioner. In this study we explored whether two-year-old children respond differentially to one and the same question used as either a genuine question or as a test question based on the situation (playful game versus serious task) and attitude (playful ostensive cues versus not). Results indicated that children responded to questions differently on the basis of the situation but not the expressed attitude of the questioner. Two-year-old children thus understand something of the very special communicative intentions behind test questions.
APA, Harvard, Vancouver, ISO, and other styles
46

Stankovic, Sanda, and Dejan Lalovic. "Strategies identification in an experimental reading comprehension task." Zbornik Instituta za pedagoska istrazivanja 42, no. 2 (2010): 232–46. http://dx.doi.org/10.2298/zipi1002232s.

Full text
Abstract:
Standardized reading comprehension tests (RCTs) usually consist of a small number of texts, each accompanied by several multiple-choice questions, with texts and questions presented simultaneously. The common measure of reading comprehension ability in RCTs is the score. The literature suggests that the strategies subjects employ may influence their performance on an RCT; however, the score itself provides no information on the specific strategy employed. Knowledge of test-taking strategies could affect our understanding of the actual purpose and benefits of using RCTs in pedagogical and psychological practice. With the ultimate objective of constructing the first standard RCT in the Serbian language, as a preliminary step we conducted an experimental reading comprehension task (ERCT) consisting of 27 short texts displayed in succession, each followed by a single multiple-choice question. Using qualitative analysis of subjects' responses in a semi-structured post-experimental interview, we identified four overall strategies used on the ERCT. Our results show that groups of students who used specific strategies differed significantly from one another in text reading time, with no differences found in question reading and answering time. More importantly, no significant between-group differences were found in ERCT score. These findings suggest that the choice of strategy is a way to optimize the relation between one's own potential and the ERCT task requirements. An RCT based on ERCT principles would allow for a flexible choice of strategy that would not influence the final score.
APA, Harvard, Vancouver, ISO, and other styles
47

Bhatia, Rahul, Vishakha Gautam, Yash Kumar, and Ankush Garg. "Dynamic Question Answer Generator: An Enhanced Approach to Question Generation." International Journal of Trend in Scientific Research and Development 3, no. 4 (2019): 785–89. https://doi.org/10.31142/ijtsrd23730.

Full text
Abstract:
Teachers and educational institutions seek new questions with different difficulty levels for setting up tests for their students. Students also long for distinct and new questions to practice for their tests, as redundant questions are found everywhere. However, setting up new questions every time is a tedious task for teachers. To overcome this conundrum, we have concocted an artificially intelligent system which generates questions and answers for the mathematical topic of quadratic equations. The system uses (i) a randomization technique for generating unique questions each time and (ii) first-order logic and automated deduction to produce a solution for the generated question. The goal was achieved and the system works efficiently. It is robust, reliable and helpful for teachers, students and other organizations for retrieving quadratic-equation questions, hassle-free.
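The two-step approach the abstract describes — random generation of a quadratic, then automated deduction of its solution — can be sketched roughly as below. The coefficient ranges and function names are illustrative assumptions, not the authors' system:

```python
import math
import random

def generate_quadratic(max_coef=10):
    """Randomly generate coefficients of a quadratic ax^2 + bx + c = 0 (a != 0)."""
    a = random.choice([n for n in range(-max_coef, max_coef + 1) if n != 0])
    b = random.randint(-max_coef, max_coef)
    c = random.randint(-max_coef, max_coef)
    return a, b, c

def solve_quadratic(a, b, c):
    """Deduce the real roots (if any) via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real roots
    root = math.sqrt(disc)
    # A set deduplicates the double root when disc == 0.
    return sorted({(-b - root) / (2 * a), (-b + root) / (2 * a)})
```

In this sketch, pairing each randomly generated `(a, b, c)` with `solve_quadratic` yields a question together with its worked answer, which is the essence of the generator the abstract outlines.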
APA, Harvard, Vancouver, ISO, and other styles
48

Varnava, Marina, and Kleanthes K. Grohmann. "Developments in the acquisition of Wh-interrogatives in Cypriot Greek." Linguistic Variation 14, no. 1 (2014): 69–108. http://dx.doi.org/10.1075/lv.14.1.04var.

Full text
Abstract:
This cross-sectional study investigates the acquisition of the interpretation of syntactic and semantic aspects of wh-questions by Cypriot Greek-speaking children aged 4 to 9 years. Two experimental tools were employed, a question–picture-matching task examining the comprehension of D-linked and non-D-linked questions for subject and object, and a question-after-picture task examining the comprehension of the notion of exhaustivity in single and multiple wh-questions. The results from these experiments are interpreted in light of current theoretical advances and cross-linguistic comparisons. The apparent discrepancies found in the development of Greek Cypriot children’s comprehension of wh-questions and exhaustivity are put in perspective with their particular linguistic environment – diglossia, in which children grow up with two varieties, Cypriot Greek and Standard Modern Greek.
Keywords: bilectalism; D(iscourse)-linking; first language acquisition; multiple wh-questions; single wh-questions
APA, Harvard, Vancouver, ISO, and other styles
49

Zanibbi, Richard, Behrooz Mansouri, Anurag Agarwal, and Douglas W. Oard. "ARQMath." ACM SIGIR Forum 54, no. 2 (2020): 1–9. http://dx.doi.org/10.1145/3483382.3483388.

Full text
Abstract:
The Answer Retrieval for Questions on Math (ARQMath) evaluation was run for the first time at CLEF 2020. ARQMath is the first Community Question Answering (CQA) shared task for math, retrieving existing answers from Math Stack Exchange (MSE) that can help to answer previously unseen math questions. ARQMath also introduces a new protocol for math formula search, where formulas are evaluated in context using a query formula's associated question post, and posts associated with each retrieved formula. Over 70 topics were annotated for each task by eight undergraduate students supervised by a professor of mathematics. A formula index is provided in three formats: LaTeX, Presentation MathML, and Content MathML, avoiding the need for participants to extract these themselves. In addition to detailed relevance judgments, tools are provided to parse MSE data, generate question threads in HTML, and evaluate retrieval results. To make comparisons with participating systems fairer, nDCG' (i.e., nDCG for assessed hits only) is used to compare systems for each task. ARQMath will continue in CLEF 2021, with training data from 2020 and baseline systems for both tasks to reduce barriers to entry for this challenging problem domain.
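The nDCG' measure the abstract mentions — standard nDCG computed after discarding unassessed hits — can be sketched as follows. The function name, relevance encoding, and cutoff parameter are illustrative assumptions, not ARQMath's evaluation code:

```python
import math

def ndcg_prime(ranked_rels, k=None):
    """nDCG': nDCG computed over assessed hits only.

    ranked_rels: relevance grade per ranked document, in rank order,
    with None marking documents that were never assessed.
    """
    # Dropping unassessed hits first is what distinguishes nDCG' from nDCG.
    assessed = [r for r in ranked_rels if r is not None]
    if k is not None:
        assessed = assessed[:k]
    if not assessed:
        return 0.0
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(assessed))
    ideal = sum(rel / math.log2(i + 2)
                for i, rel in enumerate(sorted(assessed, reverse=True)))
    return dcg / ideal if ideal > 0 else 0.0
```

Because unassessed documents are removed rather than treated as non-relevant, a system is not penalized for retrieving documents the assessors simply never saw, which is what makes cross-system comparison fairer in a pooled evaluation.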
APA, Harvard, Vancouver, ISO, and other styles
50

Koshti, Dipali, Ashutosh Gupta, Mukesh Kalla, and Arvind Sharma. "TRANS-VQA: Fully Transformer-Based Image Question-Answering Model Using Question-guided Vision Attention." Inteligencia Artificial 27, no. 73 (2024): 111–28. http://dx.doi.org/10.4114/intartif.vol27iss73pp111-128.

Full text
Abstract:
Understanding multiple modalities and relating them is an easy task for humans, but for machines it is a challenging one. One such multi-modal reasoning task is Visual Question Answering (VQA), which demands that the machine produce an answer to a natural language query about a given image. Although plenty of work has been done in this field, there is still the challenge of improving the answer prediction ability of the model and breaching human accuracy. A novel transformer-based model for answering image-based questions has been proposed. The proposed model is a fully transformer-based architecture that utilizes the power of a transformer for extracting language features as well as for performing joint understanding of question and image features. The proposed VQA model utilizes F-RCNN for image feature extraction. The retrieved language features and object-level image features are fed to a decoder inspired by the Bidirectional Encoder Representations from Transformers (BERT) architecture that jointly learns the image characteristics guided by the question characteristics, yielding rich representations of the image features. Extensive experimentation has been carried out to observe the effect of various hyperparameters on the performance of the model. The experimental results demonstrate that the model's ability to predict the answer increases with the number of layers in the transformer's encoder and decoder. The proposed model improves upon previous models and is highly scalable due to the introduction of BERT. Our best model reports 72.31% accuracy on the test-standard split of the VQAv2 dataset.
APA, Harvard, Vancouver, ISO, and other styles