Journal articles on the topic 'Questions and answers'

Consult the top 50 journal articles for your research on the topic 'Questions and answers.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Chua, Alton Y. K., and Snehasish Banerjee. "Measuring the effectiveness of answers in Yahoo! Answers." Online Information Review 39, no. 1 (2015): 104–18. http://dx.doi.org/10.1108/oir-10-2014-0232.

Full text
Abstract:
Purpose – The purpose of this paper is to investigate the ways in which effectiveness of answers in Yahoo! Answers, one of the largest community question answering sites (CQAs), is related to question types and answerer reputation. Effective answers are defined as those that are detailed, readable, superior in quality and contributed promptly. Five question types that were studied include factoid, list, definition, complex interactive and opinion. Answerer reputation refers to the past track record of answerers in the community. Design/methodology/approach – The data set comprises 1,459 answers posted in Yahoo! Answers in response to 464 questions that were distributed across the five question types. The analysis was done using factorial analysis of variance. Findings – The results indicate that factoid, definition and opinion questions are comparable in attracting high quality as well as readable answers. Although reputed answerers generally fared better in offering detailed and high-quality answers, novices were found to submit more readable responses. Moreover, novices were more prompt in answering factoid, list and definition questions. Originality/value – By analysing variations in answer effectiveness with a twin focus on question types and answerer reputation, this study explores a strand of CQA research that has hitherto received limited attention. The findings offer insights to users and designers of CQAs.
APA, Harvard, Vancouver, ISO, and other styles
2

Zhao, Yiming, Linrong Wu, Jin Zhang, and Taowen Le. "How Question Characteristics Impact Answer Outcomes on Social Question-and-Answer Websites." Journal of Global Information Management 29, no. 6 (2021): 1–21. http://dx.doi.org/10.4018/jgim.20211101.oa20.

Abstract:
Inducing more and higher-quality answers to questions is essential to sustainable development of Social Question-and-Answer (SQA) websites. Previous research has studied factors affecting question success and user motivation in answering questions, but how a question’s own characteristics affect the question’s answer outcome on SQA websites remains unknown. This study examines the impact of the characteristics of a question, namely readability, emotionality, additional descriptions, and question type, on the question’s answer outcome as measured by number of answers, average answer length, and number of “likes” received by answers to the question. Regression analyses reveal that readability, additional descriptions, and question type have significant impact on multiple measurements of answer outcome, while emotionality only affects the average answer length. This study provides insights to SQA website builders as they instruct users on question construction. It also provides insights to SQA website users on how to induce more and higher-quality answers to their questions.
3

Yang, Lichun, Shenghua Bao, Qingliang Lin, et al. "Analyzing and Predicting Not-Answered Questions in Community-based Question Answering Services." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (2011): 1273–78. http://dx.doi.org/10.1609/aaai.v25i1.8082.

Abstract:
This paper focuses on analyzing and predicting not-answered questions in Community-based Question Answering (CQA) services, such as Yahoo! Answers. In CQA services, users express their information needs by submitting natural language questions and await answers from other human users. Compared with receiving results from web search engines using keyword queries, CQA users are likely to get more specific answers, because human answerers can catch the main point of a question. However, one of the key problems of this pattern is that sometimes no one gives an answer, whereas web search engines rarely fail to respond. In this paper, we analyze not-answered questions and make a first attempt at predicting whether questions will receive answers. More specifically, we first analyze the questions of Yahoo! Answers based on features selected from different perspectives. Then, we formalize the prediction problem as a supervised binary classification problem and leverage the proposed features to make predictions. Extensive experiments are conducted on 76,251 questions collected from Yahoo! Answers. We analyze the specific characteristics of not-answered questions and suggest possible reasons why a question is unlikely to be answered. As for prediction, the experimental results show that classification based on the proposed features significantly outperforms the simple word-based approach.
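The prediction task this abstract describes (deciding from a question's features whether it will receive any answer) is a standard supervised binary classification setup. As a rough illustration, not the paper's actual features or model, here is a tiny pure-Python logistic regression sketch; the three features and all the numbers are invented:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.3, epochs=1000):
    """Plain stochastic-gradient logistic regression; returns weights and bias."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical features per question: [title length in words / 10,
# has extra details (0/1), posted at night (0/1)];
# label 1 = question received at least one answer.
X = [[0.5, 1, 0], [0.8, 1, 0], [3.0, 0, 1], [2.5, 0, 1], [0.6, 1, 1], [2.8, 0, 0]]
y = [1, 1, 0, 0, 1, 0]

w, b = train_logistic(X, y)

def predict(x):
    return int(sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5)

print([predict(x) for x in X])  # recovers y on this separable toy set
```

The paper's actual feature set and classifier are richer; the point here is only the overall shape: numeric features in, answered/not-answered label out.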
4

Moeschler, Jacques. "Answers to questions about questions and answers." Journal of Pragmatics 10, no. 2 (1986): 227–53. http://dx.doi.org/10.1016/0378-2166(86)90089-5.

5

Kabutoya, Yutaka, Tomoharu Iwata, Hisako Shiohara, and Ko Fujimura. "Effective Question Recommendation Based on Multiple Features for Question Answering Communities." Proceedings of the International AAAI Conference on Web and Social Media 4, no. 1 (2010): 259–62. http://dx.doi.org/10.1609/icwsm.v4i1.14042.

Abstract:
We propose a new method of recommending questions to answerers so as to suit the answerers' knowledge and interests in User-Interactive Question Answering (QA) communities. A question recommender can help answerers select the questions that interest them. This increases the number of answers, which will activate QA communities. An effective question recommender should satisfy the following three requirements. First, its accuracy should be higher than the existing category-based approach; more than 50% of answerers select the questions to answer according to a fixed system of categories. Second, it should be able to recommend unanswered questions, because more than 2,000 questions are posted every day. Third, it should be able to support even those people who have never answered a question previously, because more than 50% of users in current QA communities have never given any answer. To achieve an effective question recommender, we use the question histories as well as the answer histories of each user by combining collaborative filtering schemes and content-based filtering schemes. Experiments on real log data sets of a famous Japanese QA community, Oshiete goo, show that our recommender satisfies the three requirements.
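The idea of scoring candidate questions against both a user's answering history and their asking history can be caricatured with a content-based score. This is an invented, drastically simplified stand-in for the paper's method (which combines collaborative and content-based filtering); all the strings and the alpha weighting are made up:

```python
from collections import Counter
import math

def cosine(c1, c2):
    """Cosine similarity between two word-count vectors."""
    dot = sum(c1[w] * c2[w] for w in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def recommend(answer_history, question_history, candidates, alpha=0.5):
    """Score unanswered candidate questions against a user profile built from
    both histories; alpha weighs answering vs. asking behaviour."""
    answered = Counter(w for q in answer_history for w in q.lower().split())
    asked = Counter(w for q in question_history for w in q.lower().split())
    def score(q):
        bag = Counter(q.lower().split())
        return alpha * cosine(bag, answered) + (1 - alpha) * cosine(bag, asked)
    return max(candidates, key=score)

best = recommend(
    answer_history=["how to tune guitar strings", "best acoustic guitar picks"],
    question_history=["guitar chord progressions for beginners"],
    candidates=["recommended guitar amp settings", "how to bake sourdough bread"],
)
print(best)
```

A real system along the paper's lines would also fold in other users' behaviour (the collaborative part) so that people with no answer history can still be served, which this word-overlap sketch cannot do.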
6

Fulmal, Vaishali, et al. "The Implementation of Question Answer System Using Deep Learning." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 1S (2021): 176–82. http://dx.doi.org/10.17762/turcomat.v12i1s.1604.

Abstract:
Question-answer systems are advanced systems that provide answers to questions asked by users. Automatic question answering is a typical problem in natural language processing: it aims at designing systems that can automatically answer a question, in the same way as a human finds answers to questions. Community question answering (CQA) services have become popular over the past few years. They allow members of a community to post as well as answer questions, and help users get information from a comprehensive set of well-answered questions. In the proposed system, a deep learning-based model is used to answer users' questions automatically. First, the questions from the dataset are embedded. A deep neural network is trained to find the similarity between questions, and the best answer for each question is taken as the one with the highest similarity score. The purpose of the proposed system is to design a model that helps to get the answer to a question automatically. The proposed system uses a hierarchical clustering algorithm for clustering the questions.
7

Cahyo, Puji Winar, and Landung Sudarmana. "Klasterisasi Penjawab Berdasar Kualitas Jawaban pada Platform Brainly Menggunakan K-Means" [Clustering Answerers by Answer Quality on the Brainly Platform Using K-Means]. Jurnal Sisfokom (Sistem Informasi dan Komputer) 11, no. 2 (2022): 148–53. http://dx.doi.org/10.32736/sisfokom.v11i2.1314.

Abstract:
Brainly is a Community Question Answering (CQA) educational platform that makes it easy for users to find answers to questions posed by students. Questions from students are often answered quickly by the many answerers interested in the field being asked about. From the available answers, students choose which to accept and give a good rating to the answerer. Based on the number of good ratings, an answerer can be considered an expert in certain subjects. Therefore, this research focuses on finding groups of expert answerers who provide quality answers. K-means clustering is used to group the answerer data into two different clusters: the first cluster consists of expert users, with ten respondents, and the second is a non-expert cluster with 474 respondents. The expert cluster data is expected to help questioners ask questions directly to the experts and obtain quality answers. The number of clusters was determined from test results using the silhouette score, which reached a value of 0.971, with the optimal number of clusters being two.
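The procedure this abstract describes (cluster answerers with k-means and pick the number of clusters by silhouette score) can be sketched as follows. This is a minimal NumPy illustration on invented answerer features, not the paper's data; a real analysis would more likely use scikit-learn's KMeans and silhouette_score:

```python
import numpy as np

def farthest_point_init(X, k):
    """Deterministic seeding: start from X[0], then repeatedly take the point
    farthest from the centroids chosen so far."""
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(X[:, None, :] - np.array(centroids)[None, :, :],
                                  axis=2), axis=1)
        centroids.append(X[int(d.argmax())])
    return np.array(centroids)

def kmeans(X, k, iters=100):
    """Minimal Lloyd's algorithm: returns one integer cluster label per row."""
    centroids = farthest_point_init(X, k)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels

def silhouette(X, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) averaged over points."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    scores = []
    for i in range(n):
        own = (labels == labels[i]) & (np.arange(n) != i)
        a = dist[i, own].mean()                      # mean intra-cluster distance
        b = min(dist[i, labels == c].mean()          # nearest other cluster
                for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Invented answerer features: [number of answers given, mean answer rating].
rng = np.random.default_rng(7)
experts = rng.normal([200.0, 4.5], [20.0, 0.3], (10, 2))
novices = rng.normal([10.0, 2.0], [5.0, 0.5], (60, 2))
X = np.vstack([experts, novices])

best_k = max([2, 3], key=lambda k: silhouette(X, kmeans(X, k)))
print(best_k)  # 2 for these two well-separated groups
```

With two well-separated groups the silhouette score for k = 2 is close to 1, mirroring the high value (0.971) the paper reports for its two-cluster solution.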
8

Haworth Editorial Submission. "QUESTIONS/ANSWERS." Journal of Library Administration 6, no. 3 (1985): 45–50. http://dx.doi.org/10.1300/j111v06n03_08.

9

Tzamaloukas, A., and S. I. Vas. "Questions, Answers." Peritoneal Dialysis International: Journal of the International Society for Peritoneal Dialysis 5, no. 3 (1985): 202. http://dx.doi.org/10.1177/089686088500500315.

10

Wang, Bingning, Xiaochuan Wang, Ting Tao, Qi Zhang, and Jingfang Xu. "Neural Question Generation with Answer Pivot." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 9138–45. http://dx.doi.org/10.1609/aaai.v34i05.6449.

Abstract:
Neural question generation (NQG) is the task of generating questions from a given context with deep neural networks. Previous answer-aware NQG methods suffer from the problem that the generated answers focus on entities and most of the questions are trivial to answer. Answer-agnostic NQG methods reduce the bias towards named entities and increase the model's degrees of freedom, but sometimes generate unanswerable questions, which are not valuable for the subsequent machine reading comprehension system. In this paper, we treat the answers as the hidden pivot for question generation and combine the question generation and answer selection processes in a joint model. We achieve the state-of-the-art result on the SQuAD dataset according to automatic metrics and human evaluation.
11

Li, Linfeng, Licheng Zhang, Chiwei Zhu, and Zhendong Mao. "QGAE: an End-to-end Answer-Agnostic Question Generation Model for Generating Question-Answer Pairs." JUSTC 53 (2023): 1. http://dx.doi.org/10.52396/justc-2023-0002.

Abstract:
Question generation aims to generate meaningful and fluent questions, which can address the lack of question-answer-type annotated corpora by augmenting the available data. Using unannotated text with optional answers as input, question generation can be divided into two types based on whether answers are provided: answer-aware and answer-agnostic. While generating questions when answers are provided is challenging, generating high-quality questions without provided answers is even more difficult, for both humans and machines. To address this issue, we propose a novel end-to-end model called QGAE, which is able to transform answer-agnostic question generation into answer-aware question generation by directly extracting candidate answers. This approach effectively utilizes unlabeled data to generate high-quality question-answer pairs, and its end-to-end design makes it more convenient than a multi-stage method that requires at least two pre-trained models. Moreover, our model achieves better average scores and greater diversity. Our experiments show that QGAE achieves significant improvements in generating question-answer pairs, making it a promising approach for question generation.
12

Brass, Tom. "Free Markets, Unfree Labour: Old Questions Answered, New Answers Questioned." Journal of Contemporary Asia 45, no. 3 (2015): 531–40. http://dx.doi.org/10.1080/00472336.2015.1007517.

13

Cahyo, Puji Winar, Kartikadyota Kusumaningtyas, and Ulfi Saidata Aesyi. "A User Recommendation Model for Answering Questions on Brainly Platform." JURNAL INFOTEL 13, no. 1 (2021): 7–12. http://dx.doi.org/10.20895/infotel.v13i1.548.

Abstract:
Brainly is a Community Question Answering (CQA) application that allows students or parents to ask questions related to their homework. The current mechanism is that users ask questions, and other users who share an interest in the subject can see and answer them. As a reward for answering questions, Brainly gives points; the number of points varies by question. Users with the greatest point totals are automatically displayed in the smartest-user leaderboard on the site's front page. However, some users are not very active in answering questions, so an urgent question may go unanswered. This study implements the Fuzzy C-Means clustering method to improve Brainly's features regarding the speed and accuracy of answers. The idea is to create student clusters by utilizing the smartest-students leaderboard, subject interests, and answering activity. The stages applied in this research were data extraction, preprocessing, clustering, and user recommendation. The optimal number of clusters for answerer recommendation on the Brainly platform is two: the fuzzy partition coefficient for two clusters reached 0.97 for Mathematics and 0.93 for Indonesian. The results of the recommendations were influenced by answer ratings; many answers are not given a rating, possibly because the answers are not appropriate or because users neglect to rate them.
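The fuzzy partition coefficient mentioned in this abstract (the validity index that reached 0.97 and 0.93) is simply the mean of the squared membership values. A minimal NumPy sketch of Fuzzy C-Means with that index, run on invented activity data rather than the Brainly data, could look like this:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, tol=1e-6, seed=0):
    """Minimal fuzzy c-means; returns (membership matrix u, centroids).
    u[i, j] is the degree to which point i belongs to cluster j."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)           # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centroids = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                   # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        new_u = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return u, centroids

def partition_coefficient(u):
    """Fuzzy partition coefficient: mean of squared memberships."""
    return float((u ** 2).sum() / len(u))

# Invented student activity features: [answers posted, mean answer rating].
rng = np.random.default_rng(3)
active = rng.normal([120.0, 4.0], [15.0, 0.3], (15, 2))
passive = rng.normal([5.0, 1.5], [2.0, 0.4], (50, 2))
X = np.vstack([active, passive])

u, centroids = fuzzy_cmeans(X, c=2)
print(round(partition_coefficient(u), 2))  # close to 1 for well-separated groups
```

The closer the coefficient is to 1, the crisper the partition; a value near 0.5 for two clusters would mean the memberships are essentially uniform and the clustering uninformative.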
14

Hong, Ziying, Zhaohua Deng, Richard Evans, and Haiyan Wu. "Patient Questions and Physician Responses in a Chinese Health Q&A Website: Content Analysis." Journal of Medical Internet Research 22, no. 4 (2020): e13071. http://dx.doi.org/10.2196/13071.

Abstract:
Background Since the turn of this century, the internet has become an invaluable resource for people seeking health information and answers to health-related queries. Health question and answer websites have grown in popularity in recent years as a means for patients to obtain health information from medical professionals. For patients suffering from chronic illnesses, it is vital that health care providers become better acquainted with patients’ information needs and learn how they express them in text format. Objective The aims of this study were to: (1) explore whether patients can accurately and adequately express their information needs on health question and answer websites, (2) identify what types of problems are of most concern to those suffering from chronic illnesses, and (3) determine the relationship between question characteristics and the number of answers received. Methods Questions were collected from a leading Chinese health question and answer website called “All questions will be answered” in January 2018. We focused on questions relating to diabetes and hepatitis, including those that were free and those that were financially rewarded. Content analysis was completed on a total of 7068 (diabetes) and 6685 (hepatitis) textual questions. Correlations between the characteristics of questions (number of words per question, value of reward) and the number of answers received were evaluated using linear regression analysis. Results The majority of patients are able to accurately express their problem in text format, while some patients may require minor social support. The questions posted were related to three main topics: (1) prevention and examination, (2) diagnosis, and (3) treatment. Patients with diabetes were most concerned with the treatment received, whereas patients with hepatitis focused on the diagnosis results. The number of words per question and the value of the reward were negatively correlated with the number of answers. 
Conclusions This study provides valuable insights into the ability of patients suffering from chronic illnesses to make an understandable request on health question and answer websites. Health topics relating to diabetes and hepatitis were classified to address the health information needs of chronically ill patients. Furthermore, identification of the factors affecting the number of answers received per question can help users of these websites to better frame their questions to obtain more valuable answers.
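The regression analysis behind the reported correlations can be illustrated with ordinary least squares. The data below are synthetic, constructed only to mimic the reported negative relationship between question length and answer count; they are not the study's data:

```python
import numpy as np

# Invented data: question length (words) vs. number of answers received,
# generated with the negative relationship the study reports, plus noise.
rng = np.random.default_rng(11)
words = rng.uniform(5, 120, 200)
answers = np.clip(10 - 0.06 * words + rng.normal(0, 1.0, 200), 0, None)

# Ordinary least squares fit of answers = a * words + b via lstsq.
A = np.vstack([words, np.ones_like(words)]).T
(a, b), *_ = np.linalg.lstsq(A, answers, rcond=None)
print(round(a, 3))  # negative slope: longer questions attract fewer answers here
```

In the study itself the regression also included the reward value as a predictor; the single-predictor fit above only shows the shape of the analysis.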
15

Melo, Dora, Irene Pimenta Rodrigues, and Vitor Beires Nogueira. "Work Out the Semantic Web Search: The Cooperative Way." Advances in Artificial Intelligence 2012 (August 2, 2012): 1–9. http://dx.doi.org/10.1155/2012/867831.

Abstract:
We propose a Cooperative Question Answering System that takes natural language queries as input and is able to return a cooperative answer based on semantic web resources, more specifically DBpedia, represented in OWL/RDF, as the knowledge base and WordNet to build similar questions. Our system resorts to ontologies not only for reasoning but also to find answers, and is independent of the user's prior knowledge of the semantic resources. The natural language question is translated into its semantic representation and then answered by consulting the semantic sources of information. The system is able to resolve problems of ambiguity and helps find the path to the correct answer. If there are multiple answers to the question posed (or to the similar questions for which DBpedia contains answers), they are grouped according to their semantic meaning, providing a more cooperative and clarified answer to the user.
16

Wang, Hei Chia, Yu Hung Chiang, and Si Ting Lin. "Spam detection and high-quality features to analyse question–answer pairs." Electronic Library 38, no. 5/6 (2020): 1013–33. http://dx.doi.org/10.1108/el-05-2020-0120.

Abstract:
Purpose In community question and answer (CQA) services, because of user subjectivity and the limits of knowledge, the distribution of answer quality can vary drastically – from highly related to irrelevant or even spam answers. Previous studies of CQA portals have faced two important issues: answer quality analysis and spam answer filtering. Therefore, the purposes of this study are to filter spam answers in advance using two-phase identification methods and then automatically classify the different types of question and answer (QA) pairs by deep learning. Finally, this study proposes a comprehensive study of answer quality prediction for different types of QA pairs. Design/methodology/approach This study proposes an integrated model with a two-phase identification method that filters spam answers in advance and uses a deep learning method [recurrent convolutional neural network (R-CNN)] to automatically classify various types of questions. Logistic regression (LR) is further applied to examine which answer quality features significantly indicate high-quality answers to different types of questions. Findings There are four prominent findings. (1) This study confirms that conducting spam filtering before an answer quality analysis can reduce the proportion of high-quality answers that are misjudged as spam answers. (2) The experimental results show that answer quality is better when question types are included. (3) The analysis results for different classifiers show that the R-CNN achieves the best macro-F1 scores (74.8%) in the question type classification module. (4) Finally, the experimental results by LR show that author ranking, answer length and common words could significantly impact answer quality for different types of questions. Originality/value The proposed system is simultaneously able to detect spam answers and provide users with quick and efficient retrieval mechanisms for high-quality answers to different types of questions in CQA. 
Moreover, this study further validates that crucial features exist among the different types of questions that can impact answer quality. Overall, an identification system that automatically summarises high-quality answers for each type of question from the pool of messy answers in CQA can be very useful in helping users make decisions.
17

Enfield, N. J., Tanya Stivers, Penelope Brown, et al. "Polar answers." Journal of Linguistics 55, no. 2 (2018): 277–304. http://dx.doi.org/10.1017/s0022226718000336.

Abstract:
How do people answer polar questions? In this fourteen-language study of answers to questions in conversation, we compare the two main strategies: first, interjection-type answers such as uh-huh (or equivalents yes, mm, head nods, etc.), and second, repetition-type answers that repeat some or all of the question. We find that all languages offer both options, but that there is a strong asymmetry in their frequency of use, with a global preference for interjection-type answers. We propose that this preference is motivated by the fact that the two options are not equivalent in meaning. We argue that interjection-type answers are intrinsically suited to be the pragmatically unmarked, and thus more frequent, strategy for confirming polar questions, regardless of the language spoken. Our analysis is based on the semantic-pragmatic profile of the interjection-type and repetition-type answer strategies, in the context of certain asymmetries inherent to the dialogic speech act structure of question–answer sequences, including sequential agency and thematic agency. This allows us to see possible explanations for the outlier distributions found in ǂĀkhoe Haiǁom and Tzeltal.
18

Preddie, Martha Ingrid. "Clinician-selected Electronic Information Resources do not Guarantee Accuracy in Answering Primary Care Physicians' Information Needs." Evidence Based Library and Information Practice 3, no. 1 (2008): 78. http://dx.doi.org/10.18438/b8n011.

Abstract:
A review of: 
 McKibbon, K. Ann, and Douglas B. Fridsma. “Effectiveness of Clinician-selected Electronic Information Resources for Answering Primary Care Physicians’ Information Needs.” Journal of the American Medical Informatics Association 13.6 (2006): 653-9.
 
 Objective – To determine if electronic information resources selected by primary care physicians improve their ability to answer simulated clinical questions. 
 
 Design – An observational study utilizing hour-long interviews and think-aloud protocols. 
 
 Setting – The offices and clinics of primary care physicians in Canada and the United States. 
 
 Subjects – 25 primary care physicians of whom 4 were women, 17 were from Canada, 22 were family physicians, and 24 were board certified. 
 
 Methods – Participants provided responses to 23 multiple-choice questions. Each physician then chose two questions and looked for the answers utilizing information resources of their own choice. The search processes, chosen resources and search times were noted. These were analyzed along with data on the accuracy of the answers and certainties related to the answer to each clinical question prior to the search. 
 
 Main results – Twenty-three physicians sought answers to 46 simulated clinical questions. Utilizing only electronic information resources, physicians spent a mean of 13.0 (SD 5.5) minutes searching for answers to the questions, an average of 7.3 (SD 4.0) minutes for the first question and 5.8 (SD 2.2) minutes to answer the second question. On average, 1.8 resources were utilized per question. Resources that summarized information, such as the Cochrane Database of Systematic Reviews, UpToDate and Clinical Evidence, were favored 39.2% of the time, MEDLINE (Ovid and PubMed) 35.7%, and Internet resources including Google 22.6%. Almost 50% of the search and retrieval strategies were keyword-based, while MeSH, subheadings and limiting were used less frequently. On average, before searching physicians answered 10 of 23 (43.5%) questions accurately. For questions that were searched using clinician-selected electronic resources, 18 (39.1%) of the 46 answers were accurate before searching, while 19 (42.1%) were accurate after searching. The difference of one correct answer was due to the answers from 5 (10.9%) questions changing from correct to incorrect, while the answers to 6 questions (13.0%) changed from incorrect to correct. The ability to provide correct answers differed among the various resources. Google and Cochrane provided the correct answers about 50% of the time while PubMed, Ovid MEDLINE, UpToDate, Ovid Evidence Based Medicine Reviews and InfoPOEMs were more likely to be associated with incorrect answers. Physicians also seemed unable to determine when they needed to search for information in order to make an accurate decision. 
 
 Conclusion – Clinician-selected electronic information resources did not guarantee accuracy in the answers provided to simulated clinical questions. At times the use of these resources caused physicians to change self-determined correct answers to incorrect ones. The authors state that this was possibly due to factors such as poor choice of resources, ineffective search strategies, time constraints and automation bias. Library and information practitioners have an important role to play in identifying and advocating for appropriate information resources to be integrated into the electronic medical record systems provided by health care institutions to ensure evidence based health care delivery.
19

Kumar, Krishnamoorthi Magesh, and P. Valarmathie. "Domain and Intelligence Based Multimedia Question Answering System." International Journal of Evaluation and Research in Education (IJERE) 5, no. 3 (2016): 227. http://dx.doi.org/10.11591/ijere.v5i3.4544.

Abstract:
Multimedia question answering systems have become very popular over the past few years. They allow users to share their thoughts by answering a given question or to obtain information from a set of answered questions. However, existing QA systems support only textual answers, which is not so instructive for many users. The user's discussion can be enhanced by adding suitable multimedia data: multimedia answers offer intuitive information with suitable images, voice and video. This system includes information gathering as well as classification of questions and answers, query generation, multimedia data selection and presentation. It takes all kinds of media, such as text, images, audio and video, which are combined with a textual answer, and it automatically collects information from the user to improve the answer. The method ranks candidate answers to select the best one. By processing a huge set of QA pairs and adding them to a database, the multimedia question answering approach finds multimedia answers by matching users' questions with those in the database. The effectiveness of the multimedia system is determined by the ranking of text, image, audio and video in users' answers. Each answer given by a user is processed by a semantic matching algorithm, and the best answers can be viewed via a naive Bayesian ranking system.
20

"Questions and Answers." Drugs 47, Supplement 6 (1994): 63–68. http://dx.doi.org/10.2165/00003495-199400476-00010.

21

"Questions and Answers." Drugs 50, Supplement 1 (1995): 57–60. http://dx.doi.org/10.2165/00003495-199500501-00009.

22

"Questions and Answers." Drugs 51, Supplement 1 (1996): 43–45. http://dx.doi.org/10.2165/00003495-199600511-00009.

23

"Questions and Answers." Drugs 52, Supplement 3 (1996): 59–62. http://dx.doi.org/10.2165/00003495-199600523-00009.

24

"Questions and Answers." Drugs 52, Supplement 6 (1996): 47–55. http://dx.doi.org/10.2165/00003495-199600526-00008.

25

"Questions and Answers." Drugs 53, Supplement 1 (1997): 42–44. http://dx.doi.org/10.2165/00003495-199700531-00007.

26

"Questions and Answers." Drugs 54, Supplement 5 (1997): 71–74. http://dx.doi.org/10.2165/00003495-199700545-00010.

27

Hacımustafaoğlu, Mustafa. "Questions and Answers." Çocuk Enfeksiyon Dergisi/Journal of Pediatric Infection 4, no. 2 (2010): 92. http://dx.doi.org/10.5152/ced.2010.11.

28

Hacımustafaoğlu, Mustafa. "Questions and Answers." Journal of Pediatric Infection 5, no. 4 (2011): 157–58. http://dx.doi.org/10.5152/ced.2011.52.

29

Hacımustafaoğlu, Mustafa. "Questions and Answers." Journal of Pediatric Infection 6, no. 4 (2012): 169. http://dx.doi.org/10.5152/ced.2012.48.

30

"Questions and Answers." Drugs 58, Supplement 4 (1999): 51–53. http://dx.doi.org/10.2165/00003495-199958004-00007.

31

"Questions and Answers." Drugs 59, Supplement 1 (2000): 43–45. http://dx.doi.org/10.2165/00003495-200059001-00007.

32

"Questions and Answers." Drugs 59, Supplement 2 (2000): 39–40. http://dx.doi.org/10.2165/00003495-200059002-00005.

33

&NA;. "Questions and Answers." Drugs 59, Supplement 3 (2000): 47–49. http://dx.doi.org/10.2165/00003495-200059003-00006.

34

&NA;. "Questions and Answers." Drugs 59, Supplement 4 (2000): 37–38. http://dx.doi.org/10.2165/00003495-200059004-00005.

35

&NA;. "Questions and Answers." Drugs 60, Supplement 1 (2000): 41–42. http://dx.doi.org/10.2165/00003495-200060001-00005.

36

Peniston-Bird, Fiona. "Questions and Answers." Nurse Prescribing 5, no. 11 (2007): 518. http://dx.doi.org/10.12968/npre.2007.5.11.518.

37

Ali, Summeih. "Questions and answers." Egyptian Journal of Internal Medicine 27, no. 2 (2015): 80. http://dx.doi.org/10.4103/1110-7782.159479.

38

&NA;. "Questions and Answers." CNS Drugs 5, Supplement 1 (1996): 36–37. http://dx.doi.org/10.2165/00023210-199600051-00006.

39

&NA;. "Questions and Answers." CNS Drugs 18, Supplement 1 (2004): 43–45. http://dx.doi.org/10.2165/00023210-200418001-00009.

40

Cohen, Phyllis, Adam L. Carley, Perry J. Radoff, and Thomas F. White. "Questions without Answers." Science News 128, no. 9 (1985): 142. http://dx.doi.org/10.2307/3969920.

41

Sutton, Julie. "Questions and Answers." British Journal of Music Therapy 19, no. 1 (2005): 2–4. http://dx.doi.org/10.1177/135945750501900101.

42

Caon, Martin, and Nils Schneider. "Questions and answers." Journal of Health Services Research & Policy 13, no. 1 (2008): 55. http://dx.doi.org/10.1258/jhsrp.2008.080001.

43

Black, Beverly L. "Questions and Answers." American Journal of Health-System Pharmacy 43, no. 10 (1986): 2374–76. http://dx.doi.org/10.1093/ajhp/43.10.2374.

44

Kleijnen, Jos, and Daniela Andrén. "Questions and Answers." Journal of Health Services Research & Policy 8, no. 1 (2003): 64. http://dx.doi.org/10.1177/135581960300800116.

45

Dingwall, Robert, Pauline Savy, and Evan Willis. "Questions and answers." Journal of Health Services Research & Policy 10, no. 1 (2005): 64. http://dx.doi.org/10.1177/135581960501000115.

46

Jones, Gareth. "Questions and Answers." Journal of Health Services Research & Policy 3, no. 3 (1998): 191–92. http://dx.doi.org/10.1177/135581969800300312.

47

"How Question Characteristics Impact Answer Outcomes on Social Question-and-Answer Websites." Journal of Global Information Management 29, no. 6 (2021): 0. http://dx.doi.org/10.4018/jgim.20211101oa31.

Abstract:
Inducing more and higher-quality answers to questions is essential to sustainable development of Social Question-and-Answer (SQA) websites. Previous research has studied factors affecting question success and user motivation in answering questions, but how a question’s own characteristics affect the question’s answer outcome on SQA websites remains unknown. This study examines the impact of the characteristics of a question, namely readability, emotionality, additional descriptions, and question type, on the question’s answer outcome as measured by number of answers, average answer length, and number of “likes” received by answers to the question. Regression analyses reveal that readability, additional descriptions, and question type have significant impact on multiple measurements of answer outcome, while emotionality only affects the average answer length. This study provides insights to SQA website builders as they instruct users on question construction. It also provides insights to SQA website users on how to induce more and higher-quality answers to their questions.
48

"How Question Characteristics Impact Answer Outcome on Social Question-and-Answer Websites." Journal of Global Information Management 29, no. 6 (2021): 0. http://dx.doi.org/10.4018/jgim.20211101oa21.

Abstract:
Inducing more and higher-quality answers to questions is essential to sustainable development of Social Question-and-Answer (SQA) websites. Previous research has studied factors affecting question success and user motivation in answering questions, but how a question’s own characteristics affect the question’s answer outcome on SQA websites remains unknown. This study examines the impact of the characteristics of a question, namely readability, emotionality, additional descriptions, and question type, on the question’s answer outcome as measured by number of answers, average answer length, and number of “likes” received by answers to the question. Regression analyses reveal that readability, additional descriptions, and question type have significant impact on multiple measurements of answer outcome, while emotionality only affects the average answer length. This study provides insights to SQA website builders as they instruct users on question construction. It also provides insights to SQA website users on how to induce more and higher-quality answers to their questions.
49

Pang, Liang, Yanyan Lan, Jiafeng Guo, Jun Xu, and Xueqi Cheng. "SPAN: Understanding a Question with Its Support Answers." Proceedings of the AAAI Conference on Artificial Intelligence 30, no. 1 (2016). http://dx.doi.org/10.1609/aaai.v30i1.9928.

Abstract:
Matching a question to its best answer is a common task in community question answering. In this paper, we focus on the non-factoid questions and aim to pick out the best answer from its candidate answers. Most of the existing deep models directly measure the similarity between question and answer by their individual sentence embeddings. In order to tackle the problem of the information lack in question's descriptions and the lexical gap between questions and answers, we propose a novel deep architecture namely SPAN in this paper. Specifically we introduce support answers to help understand the question, which are defined as the best answers of those similar questions to the original one. Then we can obtain two kinds of similarities, one is between question and the candidate answer, and the other one is between support answers and the candidate answer. The matching score is finally generated by combining them. Experiments on Yahoo! Answers demonstrate that SPAN can outperform the baseline models.
50

"Answers to short answer questions." African Journal of Emergency Medicine 2, no. 4 (2012): 173. http://dx.doi.org/10.1016/s2211-419x(12)00148-6.
