Academic literature on the topic 'Questions and answers'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Questions and answers.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Questions and answers"

1

Chua, Alton Y. K., and Snehasish Banerjee. "Measuring the effectiveness of answers in Yahoo! Answers." Online Information Review 39, no. 1 (2015): 104–18. http://dx.doi.org/10.1108/oir-10-2014-0232.

Abstract:
Purpose – The purpose of this paper is to investigate the ways in which the effectiveness of answers in Yahoo! Answers, one of the largest community question answering sites (CQAs), is related to question types and answerer reputation. Effective answers are defined as those that are detailed, readable, superior in quality and contributed promptly. The five question types studied were factoid, list, definition, complex interactive and opinion. Answerer reputation refers to the past track record of answerers in the community. Design/methodology/approach – The data set comprises 1,459 answers posted in Yahoo! Answers in response to 464 questions distributed across the five question types. The analysis was done using factorial analysis of variance. Findings – The results indicate that factoid, definition and opinion questions are comparable in attracting high-quality as well as readable answers. Although reputed answerers generally fared better in offering detailed and high-quality answers, novices were found to submit more readable responses. Moreover, novices were more prompt in answering factoid, list and definition questions. Originality/value – By analysing variations in answer effectiveness with a twin focus on question types and answerer reputation, this study explores a strand of CQA research that has hitherto received limited attention. The findings offer insights to users and designers of CQAs.
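As a rough illustration of the factorial analysis of variance mentioned in this abstract (not the authors' code), the sketch below assumes a hypothetical table with one row per answer and columns question_type, reputation_level, and answer_quality.

    # Minimal sketch of a two-way factorial ANOVA (question type x answerer reputation).
    # Data file and column names are assumptions, not from the paper.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    answers = pd.read_csv("answers.csv")  # hypothetical dataset of answers
    model = ols("answer_quality ~ C(question_type) * C(reputation_level)", data=answers).fit()
    print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction
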
2

Zhao, Yiming, Linrong Wu, Jin Zhang, and Taowen Le. "How Question Characteristics Impact Answer Outcomes on Social Question-and-Answer Websites." Journal of Global Information Management 29, no. 6 (2021): 1–21. http://dx.doi.org/10.4018/jgim.20211101.oa20.

Abstract:
Inducing more and higher-quality answers to questions is essential to the sustainable development of Social Question-and-Answer (SQA) websites. Previous research has studied factors affecting question success and user motivation in answering questions, but how a question’s own characteristics affect its answer outcome on SQA websites remains unknown. This study examines the impact of a question’s characteristics, namely readability, emotionality, additional descriptions, and question type, on its answer outcome as measured by the number of answers, average answer length, and number of “likes” received by answers to the question. Regression analyses reveal that readability, additional descriptions, and question type have a significant impact on multiple measurements of answer outcome, while emotionality only affects the average answer length. This study provides insights to SQA website builders as they instruct users on question construction. It also provides insights to SQA website users on how to induce more and higher-quality answers to their questions.
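As a hedged sketch of the kind of regression analysis this abstract describes (not the authors' implementation), the snippet below regresses one answer-outcome measure on hypothetical question characteristics; the study also models average answer length and number of likes.

    # Illustrative regression of answer count on question characteristics.
    # File and column names are assumptions.
    import pandas as pd
    import statsmodels.formula.api as smf

    questions = pd.read_csv("questions.csv")
    model = smf.ols(
        "num_answers ~ readability + emotionality + has_description + C(question_type)",
        data=questions,
    ).fit()
    print(model.summary())  # repeat with avg_answer_length and num_likes as outcomes
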
3

Yang, Lichun, Shenghua Bao, Qingliang Lin, et al. "Analyzing and Predicting Not-Answered Questions in Community-based Question Answering Services." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (2011): 1273–78. http://dx.doi.org/10.1609/aaai.v25i1.8082.

Abstract:
This paper focuses on analyzing and predicting not-answered questions in Community-based Question Answering (CQA) services, such as Yahoo! Answers. In CQA services, users express their information needs by submitting natural language questions and await answers from other human users. Compared to receiving results from web search engines using keyword queries, CQA users are likely to get more specific answers, because human answerers may catch the main point of the question. However, one of the key problems of this pattern is that sometimes no one helps to give answers, whereas web search engines rarely fail to respond. In this paper, we analyze the not-answered questions and make a first attempt at predicting whether questions will receive answers. More specifically, we first analyze the questions of Yahoo! Answers based on features selected from different perspectives. Then, we formalize the prediction problem as a supervised learning (binary classification) problem and leverage the proposed features to make predictions. Extensive experiments are conducted on 76,251 questions collected from Yahoo! Answers. We analyze the specific characteristics of not-answered questions and try to suggest possible reasons why a question is not likely to be answered. As for prediction, the experimental results show that classification based on the proposed features significantly outperforms the simple word-based approach.
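A simplified sketch of the prediction task described above, framed as supervised binary classification; the feature names and data file are assumptions, and the paper's own feature set is richer than this.

    # Predict whether a question will receive answers (toy illustration).
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    data = pd.read_csv("questions.csv")  # hypothetical file
    features = ["title_length", "num_terms", "posting_hour", "category_id"]  # illustrative only
    X_train, X_test, y_train, y_test = train_test_split(
        data[features], data["is_answered"], test_size=0.2, random_state=42
    )
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))
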
4

Moeschler, Jacques. "Answers to questions about questions and answers." Journal of Pragmatics 10, no. 2 (1986): 227–53. http://dx.doi.org/10.1016/0378-2166(86)90089-5.

5

Kabutoya, Yutaka, Tomoharu Iwata, Hisako Shiohara, and Ko Fujimura. "Effective Question Recommendation Based on Multiple Features for Question Answering Communities." Proceedings of the International AAAI Conference on Web and Social Media 4, no. 1 (2010): 259–62. http://dx.doi.org/10.1609/icwsm.v4i1.14042.

Abstract:
We propose a new method of recommending questions to answerers so as to suit the answerers’ knowledge and interests in User-Interactive Question Answering (QA) communities. A question recommender can help answerers select the questions that interest them. This increases the number of answers, which will invigorate QA communities. An effective question recommender should satisfy the following three requirements: First, its accuracy should be higher than the existing category-based approach; more than 50% of answerers select the questions to answer according to a fixed system of categories. Second, it should be able to recommend unanswered questions, because more than 2,000 questions are posted every day. Third, it should be able to support even those people who have never answered a question previously, because more than 50% of users in current QA communities have never given any answer. To achieve an effective question recommender, we use the question histories as well as the answer histories of each user by combining collaborative filtering schemes and content-based filtering schemes. Experiments on real log data sets of a famous Japanese QA community, Oshiete goo, show that our recommender satisfies the three requirements.
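The toy sketch below illustrates the hybrid idea described in this abstract, blending a content-based score over a user's history with a collaborative-filtering score; the sample questions, placeholder CF scores, and blending weight are all assumptions rather than the paper's model.

    # Rank unanswered questions for a user by a blended content/CF score.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    user_history = ["How do I tune hyperparameters?", "What is cross-validation?"]  # questions the user has answered
    candidates = ["How to split data for validation?", "Best sushi in Tokyo?"]      # unanswered questions

    vec = TfidfVectorizer().fit(user_history + candidates)
    profile = np.asarray(vec.transform(user_history).mean(axis=0))     # user's content profile
    content_score = cosine_similarity(profile, vec.transform(candidates))[0]
    cf_score = np.array([0.7, 0.1])  # stand-in for scores derived from similar users' answer histories

    alpha = 0.5  # blending weight (an assumption)
    hybrid = alpha * content_score + (1 - alpha) * cf_score
    print([candidates[i] for i in np.argsort(-hybrid)])  # best candidates first
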
6

Fulmal, Vaishali, et al. "The Implementation of Question Answer System Using Deep Learning." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 1S (2021): 176–82. http://dx.doi.org/10.17762/turcomat.v12i1s.1604.

Abstract:
Question-answer systems are advanced systems that provide answers to questions asked by the user. Automatic question answering is a typical problem in natural language processing: it aims at designing systems that can automatically answer a question in the same way a human can find answers to questions. Community question answering (CQA) services have become popular over the past few years. They allow the members of the community to post as well as answer questions, and they help users get information from a comprehensive set of questions that are well answered. In the proposed system, a deep learning-based model is used for automatically answering the user’s questions. First, the questions from the dataset are embedded. A deep neural network is trained to find the similarity between questions. The best answer for each question is found as the one with the highest similarity score. The purpose of the proposed system is to design a model that helps to get the answer to a question automatically. The proposed system uses a hierarchical clustering algorithm for clustering the questions.
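A minimal sketch of the retrieval step this abstract describes: embed the stored questions, score similarity against the incoming question, and return the answer attached to the most similar stored question. TF-IDF stands in for the paper's deep neural embeddings, and the tiny in-memory dataset is an assumption.

    # Answer a new question by nearest-neighbour lookup over stored questions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    qa_pairs = [
        ("How do I reset my password?", "Use the 'Forgot password' link on the login page."),
        ("How can I delete my account?", "Open settings and choose 'Delete account'."),
    ]
    stored_questions = [q for q, _ in qa_pairs]
    vec = TfidfVectorizer().fit(stored_questions)
    stored_vecs = vec.transform(stored_questions)

    def answer(user_question):
        sims = cosine_similarity(vec.transform([user_question]), stored_vecs)[0]
        return qa_pairs[sims.argmax()][1]  # answer of the most similar question

    print(answer("I forgot my password, what should I do?"))
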
7

Cahyo, Puji Winar, and Landung Sudarmana. "Klasterisasi Penjawab Berdasar Kualitas Jawaban pada Platform Brainly Menggunakan K-Means" [Clustering answerers by answer quality on the Brainly platform using K-means]. Jurnal Sisfokom (Sistem Informasi dan Komputer) 11, no. 2 (2022): 148–53. http://dx.doi.org/10.32736/sisfokom.v11i2.1314.

Abstract:
Brainly is a Community Question Answering (CQA) educational platform that makes it easy for users to find answers to questions posed by students. Questions from students are often answered quickly by many answerers interested in the field being asked about. From the available answers, students can choose which ones to accept and give a good rating to the answerer. Based on the number of good ratings, an answerer can be said to be an expert in certain subjects. Therefore, this research focuses on finding groups of expert answerers who give quality answers. K-means clustering makes it possible to group the answerer data into two different clusters. The first cluster consists of expert users with ten respondents, and the second is a non-expert cluster with 474 respondents. The expert cluster data is expected to help questioners ask their questions directly to the experts and obtain quality answers. The number of clusters was determined based on test results using the silhouette score, which reached a value of 0.971 for the optimal number of two clusters.
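The sketch below mirrors the clustering procedure described in this abstract, choosing the number of clusters by silhouette score; the data file and feature columns are assumptions standing in for the study's Brainly answerer statistics.

    # Cluster answerers with k-means and pick k by silhouette score.
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    answerers = pd.read_csv("answerers.csv")             # hypothetical file
    X = answerers[["num_answers", "num_good_ratings"]]   # illustrative features

    scores = {}
    for k in range(2, 6):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        scores[k] = silhouette_score(X, labels)
    best_k = max(scores, key=scores.get)                 # the study found k = 2 optimal
    answerers["cluster"] = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(X)
    print(best_k, scores[best_k])
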
8

Haworth Editorial Submission. "QUESTIONS/ANSWERS." Journal of Library Administration 6, no. 3 (1985): 45–50. http://dx.doi.org/10.1300/j111v06n03_08.

9

Tzamaloukas, A., and S. I. Vas. "Questions, Answers." Peritoneal Dialysis International: Journal of the International Society for Peritoneal Dialysis 5, no. 3 (1985): 202. http://dx.doi.org/10.1177/089686088500500315.

10

Wang, Bingning, Xiaochuan Wang, Ting Tao, Qi Zhang, and Jingfang Xu. "Neural Question Generation with Answer Pivot." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 9138–45. http://dx.doi.org/10.1609/aaai.v34i05.6449.

Abstract:
Neural question generation (NQG) is the task of generating questions from a given context with deep neural networks. Previous answer-aware NQG methods suffer from the problem that the generated answers focus on entities and most of the questions are trivial to answer. Answer-agnostic NQG methods reduce the bias towards named entities and increase the model's degrees of freedom, but they sometimes generate unanswerable questions which are not valuable for the subsequent machine reading comprehension system. In this paper, we treat the answers as the hidden pivot for question generation and combine the question generation and answer selection processes in a joint model. We achieve the state-of-the-art result on the SQuAD dataset according to automatic metrics and human evaluation.
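As a loose, heavily simplified sketch of the "answer as hidden pivot" idea (an assumption about the architecture, not the paper's model), the PyTorch skeleton below shares an encoder's states between an answer-selection head and a question decoder and trains both objectives jointly on toy tensors.

    # Toy joint model: soft answer selection feeds the question decoder's initial state.
    import torch
    import torch.nn as nn

    class AnswerPivotQG(nn.Module):
        def __init__(self, vocab_size=1000, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            self.encoder = nn.GRU(hidden, hidden, batch_first=True)
            self.answer_head = nn.Linear(hidden, 1)      # per-token answer score
            self.decoder = nn.GRU(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab_size)

        def forward(self, context_ids, question_ids):
            enc, _ = self.encoder(self.embed(context_ids))         # (B, Tc, H)
            answer_logits = self.answer_head(enc).squeeze(-1)      # (B, Tc)
            pivot = torch.softmax(answer_logits, dim=-1)           # soft answer positions
            init = (pivot.unsqueeze(-1) * enc).sum(dim=1)          # answer-weighted context
            dec, _ = self.decoder(self.embed(question_ids), init.unsqueeze(0).contiguous())
            return answer_logits, self.out(dec)

    model = AnswerPivotQG()
    context = torch.randint(0, 1000, (2, 20))     # toy context token ids
    question = torch.randint(0, 1000, (2, 8))     # toy question token ids
    answer_tags = torch.zeros(2, 20)
    answer_tags[:, 5] = 1.0                       # toy gold answer positions
    a_logits, q_logits = model(context, question[:, :-1])
    loss = (nn.BCEWithLogitsLoss()(a_logits, answer_tags)
            + nn.CrossEntropyLoss()(q_logits.reshape(-1, 1000), question[:, 1:].reshape(-1)))
    loss.backward()
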
