To see the other types of publications on this topic, follow the link: Chatbot.

Journal articles on the topic 'Chatbot'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Chatbot.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Musheyev, David, Alexander Pan, Preston Gross, Daniel Kamyab, Peter Kaplinsky, Mark Spivak, Marie A. Bragg, Stacy Loeb, and Abdo E. Kabarriti. "Readability and Information Quality in Cancer Information From a Free vs Paid Chatbot." JAMA Network Open 7, no. 7 (July 26, 2024): e2422275. http://dx.doi.org/10.1001/jamanetworkopen.2024.22275.

Abstract:
Importance: The mainstream use of chatbots requires a thorough investigation of their readability and quality of information. Objective: To identify readability and quality differences in information between a free and a paywalled chatbot's cancer-related responses, and to explore whether more precise prompting can mitigate any observed differences. Design, Setting, and Participants: This cross-sectional study compared the readability and information quality of a chatbot's free vs paywalled responses to Google Trends' top 5 search queries associated with breast, lung, prostate, colorectal, and skin cancers from January 1, 2021, to January 1, 2023. Data were extracted from the search tracker, and responses were produced by free and paywalled ChatGPT. Data were analyzed from December 20, 2023, to January 15, 2024. Exposures: Free vs paywalled chatbot outputs with and without the prompt: "Explain the following at a sixth grade reading level: [nonprompted input]." Main Outcomes and Measures: The primary outcome measured the readability of a chatbot's responses using Flesch Reading Ease scores (0 [graduate reading level] to 100 [easy fifth grade reading level]). Secondary outcomes included assessing consumer health information quality with the validated DISCERN instrument (overall score from 1 [low quality] to 5 [high quality]) for each response. Scores were compared between the 2 chatbot models with and without prompting. Results: This study evaluated 100 chatbot responses. Nonprompted free chatbot responses had lower readability (median [IQR] Flesch Reading Ease scores, 52.60 [44.54-61.46]) than nonprompted paywalled chatbot responses (62.48 [54.83-68.40]) (P < .05). However, prompting the free chatbot to reword responses at a sixth grade reading level was associated with higher reading ease scores than the paywalled chatbot's nonprompted responses (median [IQR], 71.55 [68.20-78.99]) (P < .001). Prompting was associated with increases in reading ease in both the free (median [IQR], 71.55 [68.20-78.99]; P < .001) and paywalled versions (median [IQR], 75.64 [70.53-81.12]; P < .001). There was no significant difference in overall DISCERN scores between the chatbot models, with and without prompting. Conclusions and Relevance: In this cross-sectional study, paying for the chatbot was found to provide easier-to-read responses, but prompting the free version of the chatbot was associated with increased response readability without changing information quality. Educating the public on how to prompt chatbots may help promote equitable access to health information.
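As background for the primary outcome above, the Flesch Reading Ease score is a fixed formula over sentence, word, and syllable counts. The following minimal Python sketch is an editorial illustration of that standard formula, not the study's scoring pipeline; the regex-based syllable count is a rough approximation.

```python
import re

def flesch_reading_ease(text: str) -> float:
    # Standard formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    # Higher scores indicate easier text; syllables are estimated from vowel groups.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

print(round(flesch_reading_ease("Prompting a chatbot to reword its answer can raise this score."), 1))
```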
2

Suh, Jeehae. "A Study on the Conformity of Chatbot Builder as a Korean Speech Practice Tool." Korean Society of Culture and Convergence 45, no. 1 (January 31, 2023): 61–70. http://dx.doi.org/10.33645/cnc.2023.01.45.01.61.

Abstract:
The purpose of this study is to verify whether chatbots made with chatbot builders are suitable as a Korean speaking practice tool. Chatbot builders, with which chatbots can be created easily without separate coding knowledge and conversations meaningful for learning can be designed, have recently been in the spotlight as a learning tool. In this study, chatbots were created using dialog flows, and the conversation patterns shared by study participants were analyzed. The analysis found that 35% of all conversations were not successfully completed. These conversation failures were attributed to inaccurate recognition of Korean learners' pronunciation, errors in handling learners' utterance intentions, and inaccuracies in handling learners' erroneous sentences. Accordingly, for chatbot builders to be used as a Korean language learning tool at present, learners' proficiency or academic achievement should be considered so that the chatbot can process learner utterances smoothly. This study is also meaningful in that it verified the suitability of chatbot builders as a learning tool, which previous studies had not covered.
3

Septiyanti, Nisa Dwi, Muhammad Irfan Luthfi, and Darmawansah Darmawansah. "Effect of Chatbot-Assisted Learning on Students’ Learning Motivation and Its Pedagogical Approaches." Khazanah Informatika : Jurnal Ilmu Komputer dan Informatika 10, no. 1 (April 30, 2024): 69–77. http://dx.doi.org/10.23917/khif.v10i1.4246.

Abstract:
The use of chatbots in the learning process has been increasingly investigated and applied. While many studies have discussed the chatbot's ability to motivate students' interest in learning, few have examined whether students' perception of learning affects the effectiveness of chatbots and the pedagogical approach taken by chatbots as conversational agents during the learning process. There is a need for new analysis to capture the effects of Chatbot-Assisted Learning (Chatbot-AL) and student-chatbot conversations. In an eight-week semester, 48 first-year undergraduate students participated in a chatbot-assisted learning environment integrated into an engineering course. Data were collected through questionnaires on students' learning motivation and discourse in chatbot conversations. Statistical non-parametric analysis and Epistemic Network Analysis (ENA) were used to explore the research questions. The results showed that students with high learning perception had better learning motivation using Chatbot-AL than students with low learning perception. Additionally, most of the questions asked by students were aimed at receiving emotional support through casual conversation with the chatbot. Finally, the implications, limitations, and conclusions were discussed.
4

Sabna, Eka. "CHATBOT SEBAGAI GURU VIRTUAL UNTUK MATA KULIAH DATA MINING." Jurnal Ilmu Komputer 11, no. 2 (November 4, 2022): 110–15. http://dx.doi.org/10.33060/jik/2022/vol11.iss2.276.

Abstract:
A chatbot is an application (service) that interacts with users through text conversations. Chatbots work to replace the role of humans in serving conversations through messaging applications. The chatbot built here serves as a virtual assistant that helps students learn at home. Chatbots can only answer questions based on patterns that have been stored in the chatbot's knowledge base. Chatbots are automated conversational agents that interact with users in natural human language and can help anytime and anywhere. This chatbot is applied as a Virtual Teacher that provides information and learning materials to students in the Data Mining course. Keywords: Chatbot, Data Mining, Virtual Teacher, Student, Learning
5

Juliansyah, Dias, Hannie Hannie, and Ade Andri Hendriadi. "Penerapan Uji-T Independen untuk Sistem Chatbot Gaotek." Jurnal Syntax Admiration 5, no. 6 (June 15, 2024): 2137–46. http://dx.doi.org/10.46799/jsa.v5i6.1224.

Abstract:
Artificial intelligence has become a primary focus in the development of computer systems that mimic human brain functions. One application is the development of Chatbots to assist in various fields. In this context, the research aims to evaluate the effectiveness of two types of Chatbots, namely Chatbot Gemini and ChatGPT, in providing customer service at GAOTek Inc. Through the application of Independent T-tests, the research findings indicate a significant difference in the performance of the two Chatbots. ChatGPT exhibits a lower average score compared to Chatbot Gemini, demonstrating significant efficiency superiority. The Independent T-test statistical results show the rejection of the null hypothesis, indicating a significant difference between the two groups. The alternative hypothesis is accepted, confirming a distinction in the performance of the two Chatbots. Thus, ChatGPT is declared as the most efficient Chatbot in the context of customer service at GAOTek Inc. These results are expected to provide valuable insights for the development and implementation of Chatbots in enhancing customer service effectiveness across various industries, particularly at GAOTek Inc.
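For readers unfamiliar with the test named in the title, the comparison above rests on an independent two-sample t-test. The sketch below is an editorial illustration using scipy.stats.ttest_ind on made-up scores, not the study's data.

```python
from scipy import stats

# Hypothetical evaluation scores for two chatbots (not the GAOTek study's data).
gemini_scores = [78, 82, 75, 90, 85, 88, 79, 84]
chatgpt_scores = [70, 74, 68, 81, 77, 79, 72, 75]

# Independent two-sample t-test; the null hypothesis is that both group means are equal.
t_stat, p_value = stats.ttest_ind(gemini_scores, chatgpt_scores, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the mean scores differ significantly.")
else:
    print("Fail to reject the null hypothesis.")
```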
6

Lapeña, José Florencio. "The Updated World Association of Medical Editors (WAME) Recommendations on Chatbots and Generative AI in Relation to Scholarly Publications and International Committee of Medical Journal Editors (ICMJE) Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals (May 2023)." Philippine Journal of Otolaryngology Head and Neck Surgery 38, no. 1 (June 4, 2023): 4. http://dx.doi.org/10.32412/pjohns.v38i1.2127.

Abstract:
On January 20, 2023, the World Association of Medical Editors published a policy statement on Chatbots, ChatGPT, and Scholarly Manuscripts: WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications.1 There were four recommendations, namely: 1. Chatbots cannot be authors; 2. Authors should be transparent when chatbots are used and provide information about how they were used; 3. Authors are responsible for the work performed by a chatbot in their paper (including the accuracy of what is presented, and the absence of plagiarism) and for appropriate attribution of all sources (including for material produced by the chatbot); and 4. Editors need appropriate tools to help them detect content generated or altered by AI and these tools must be available regardless of their ability to pay.1 This statement was spurred in part by some journals beginning to publish papers in which chatbots such as ChatGPT were listed as co-authors.2 First, only humans can be authors. Chatbots cannot be authors because they cannot meet authorship requirements “as they cannot understand the role of authors or take responsibility for the paper.”1 In particular, they cannot meet the third and fourth ICMJE criteria for authorship, namely “Final approval of the version to be published” and “Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.”1,3 Moreover, “a chatbot cannot understand a conflict of interest statement, or have the legal standing to sign (such a) statement,” nor can they “hold copyright.”1 Because authors submitting a manuscript must ensure that all those named as authors meet ICMJE authorship criteria, chatbots clearly should not be included as authors.1 Second, authors should acknowledge the sources of their materials. When chatbots are used, authors “should declare this fact and provide full technical specifications of the chatbot used (name, version, model, source) and method of application in the paper they are submitting (query structure, syntax),” “consistent with the ICMJE recommendation of acknowledging writing assistance.”1,4 Third, authors must take public responsibility for their work; “Human authors of articles written with the help of a chatbot are responsible for the contributions made by chatbots, including their accuracy,” and “must be able to assert that there is no plagiarism in their paper, including in text produced by the chatbot.”1 Consequently, authors must “ensure … appropriate attribution of all quoted material, including full citations,” “seek and cite the sources that support,” as well as oppose (since chatbots can be designed to omit counterviews), the chatbot’s statements.1 Fourth, to facilitate all this, medical journal editors (who “use manuscript evaluation approaches from the 20th century”) “need appropriate (digital) tools … that will help them evaluate … 21st century … content (generated or altered by AI) efficiently and accurately.”1
7

Abu-Haifa, Mohammad, Bara'a Etawi, Huthaifa Alkhatatbeh, and Ayman Ababneh. "Comparative Analysis of ChatGPT, GPT-4, and Microsoft Copilot Chatbots for GRE Test." International Journal of Learning, Teaching and Educational Research 23, no. 6 (June 30, 2024): 327–47. http://dx.doi.org/10.26803/ijlter.23.6.15.

Abstract:
This paper presents an analysis of how well three artificial intelligence chatbots: Copilot, ChatGPT, and GPT-4, perform when answering questions from standardized tests, mainly the Graduate Record Examination (GRE). A total of 137 questions with different forms of quantitative reasoning and 157 questions with verbal categories were used to assess the chatbot’s capabilities. This paper presents the performance of each chatbot across various skills and styles tested in the exam. The proficiency of the chatbots in addressing image-based questions is also explored, and the uncertainty level of each chatbot is illustrated. The results show varying degrees of success among the chatbots. ChatGPT primarily makes arithmetic errors, whereas the highest percentage of errors made by Copilot and GPT-4 are conceptual. However, GPT-4 exhibited the highest success rates, particularly in tasks involving complex language understanding and image-based questions. Results highlight the ability of these chatbots in helping examinees to pass the GRE with a high score, which encourages the use of them in test preparation. The results also show the importance of preventing access to similar chatbots when tests are conducted online, such as during the COVID-19 pandemic, to ensure a fair environment for all test takers competing for higher education opportunities.
8

Wang, Kaicheng. "From ELIZA to ChatGPT: A brief history of chatbots and their evolution." Applied and Computational Engineering 39, no. 1 (February 21, 2024): 57–62. http://dx.doi.org/10.54254/2755-2721/39/20230579.

Abstract:
Over the years, chatbots have grown to be used in a variety of industries. From their humble beginnings to their current prominence, chatbots have come a long way. From the earliest chatbot, ELIZA, in the 1960s to today's popular ChatGPT, chatbot language models, code, and databases have improved greatly with the advancement of artificial intelligence technology. This paper introduces the development of chatbots through literature review and theoretical analysis. It also analyzes and summarizes the advantages and challenges of chatbots according to the current status of chatbot applications and social needs. Personalized interaction will be an important development direction for chatbots, because providing personalized responses based on user data analysis can give users a tailored experience, thus increasing user engagement and satisfaction.
9

Srivastava, Praveen Ranjan, Harshit Kumar Singh, Surabhi Sakshi, Justin Zuopeng Zhang, and Qiuzheng Li. "Identifying Alternative Options for Chatbots With Multi-Criteria Decision-Making." Journal of Database Management 35, no. 1 (July 17, 2024): 1–25. http://dx.doi.org/10.4018/jdm.345917.

Abstract:
Artificial intelligence-powered chatbot usage continues to grow worldwide, and there is ongoing research to identify features that maximize the utility of chatbots. This study uses the multi-criteria decision-making (MCDM) method to find the best available alternative chatbot for task completion. We identify chatbot evaluation criteria from literature followed by inputs from experts using the Delphi method. We apply CRITIC to evaluate the relative importance of the specified criteria. Finally, we list popular alternatives of chatbots and features offered and apply WASPAS and EDAS techniques to rank the available alternatives. The alternatives explored in this study include YOU, ChatGPT, PerplexityAI, ChatSonic, and CharacterAI. Both methods yield identical results in ranking, with ChatGPT emerging as the most preferred alternative based on the criteria identified.
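To make the ranking step above concrete, the sketch below is an editorial illustration of the WASPAS aggregation (weighted-sum and weighted-product scores combined with lambda = 0.5) on a toy decision matrix. The alternative names come from the abstract, but the scores and criterion weights are hypothetical; the CRITIC weighting and EDAS steps from the paper are not reproduced.

```python
import numpy as np

# Toy decision matrix: rows = chatbot alternatives, columns = benefit criteria
# (hypothetical scores, not the paper's data).
alternatives = ["YOU", "ChatGPT", "PerplexityAI", "ChatSonic", "CharacterAI"]
X = np.array([
    [7, 6, 8, 5],
    [9, 8, 9, 7],
    [8, 7, 7, 6],
    [6, 7, 6, 6],
    [5, 8, 5, 8],
], dtype=float)
w = np.array([0.3, 0.3, 0.2, 0.2])  # hypothetical criterion weights (sum to 1)

R = X / X.max(axis=0)              # linear normalisation for benefit criteria
wsm = (R * w).sum(axis=1)          # weighted-sum model score
wpm = np.prod(R ** w, axis=1)      # weighted-product model score
q = 0.5 * wsm + 0.5 * wpm          # WASPAS joint score (lambda = 0.5)

for name, score in sorted(zip(alternatives, q), key=lambda t: -t[1]):
    print(f"{name:14s} {score:.3f}")
```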
10

Holland, Alexis M., William R. Lorenz, Jack C. Cavanagh, Neil J. Smart, Sullivan A. Ayuso, Gregory T. Scarola, Kent W. Kercher, et al. "Comparison of Medical Research Abstracts Written by Surgical Trainees and Senior Surgeons or Generated by Large Language Models." JAMA Network Open 7, no. 8 (August 2, 2024): e2425373. http://dx.doi.org/10.1001/jamanetworkopen.2024.25373.

Abstract:
Importance: Artificial intelligence (AI) has permeated academia, especially OpenAI Chat Generative Pretrained Transformer (ChatGPT), a large language model. However, little has been reported on its use in medical research. Objective: To assess a chatbot's capability to generate and grade medical research abstracts. Design, Setting, and Participants: In this cross-sectional study, ChatGPT versions 3.5 and 4.0 (referred to as chatbot 1 and chatbot 2) were coached to generate 10 abstracts by providing background literature, prompts, analyzed data for each topic, and 10 previously presented, unassociated abstracts to serve as models. The study was conducted between August 2023 and February 2024 (including data analysis). Exposure: Abstract versions utilizing the same topic and data were written by a surgical trainee or a senior physician or generated by chatbot 1 and chatbot 2 for comparison. The 10 training abstracts were written by 8 surgical residents or fellows, edited by the same senior surgeon, at a high-volume hospital in the Southeastern US with an emphasis on outcomes-based research. Abstract comparison was then based on 10 abstracts written by 5 surgical trainees within the first 6 months of their research year, edited by the same senior author. Main Outcomes and Measures: The primary outcome measurements were the abstract grades using 10- and 20-point scales and ranks (first to fourth). Abstract versions by chatbot 1, chatbot 2, junior residents, and the senior author were compared and judged by blinded surgeon-reviewers as well as both chatbot models. Five academic attending surgeons from Denmark, the UK, and the US, with extensive experience in surgical organizations, research, and abstract evaluation, served as reviewers. Results: Surgeon-reviewers were unable to differentiate between abstract versions. Each reviewer ranked an AI-generated version first at least once. Abstracts demonstrated no difference in their median (IQR) 10-point scores (resident, 7.0 [6.0-8.0]; senior author, 7.0 [6.0-8.0]; chatbot 1, 7.0 [6.0-8.0]; chatbot 2, 7.0 [6.0-8.0]; P = .61), 20-point scores (resident, 14.0 [12.0-17.0]; senior author, 15.0 [13.0-17.0]; chatbot 1, 14.0 [12.0-16.0]; chatbot 2, 14.0 [13.0-16.0]; P = .50), or rank (resident, 3.0 [1.0-4.0]; senior author, 2.0 [1.0-4.0]; chatbot 1, 3.0 [2.0-4.0]; chatbot 2, 2.0 [1.0-3.0]; P = .14). The abstract grades given by chatbot 1 were comparable to the surgeon-reviewers' grades. However, chatbot 2 graded more favorably than the surgeon-reviewers and chatbot 1. Median (IQR) chatbot 2-reviewer grades were higher than surgeon-reviewer grades of all 4 abstract versions (resident, 14.0 [12.0-17.0] vs 16.9 [16.0-17.5]; P = .02; senior author, 15.0 [13.0-17.0] vs 17.0 [16.5-18.0]; P = .03; chatbot 1, 14.0 [12.0-16.0] vs 17.8 [17.5-18.5]; P = .002; chatbot 2, 14.0 [13.0-16.0] vs 16.8 [14.5-18.0]; P = .04). When comparing the grades of the 2 chatbots, chatbot 2 gave higher median (IQR) grades for abstracts than chatbot 1 (resident, 14.0 [13.0-15.0] vs 16.9 [16.0-17.5]; P = .003; senior author, 13.5 [13.0-15.5] vs 17.0 [16.5-18.0]; P = .004; chatbot 1, 14.5 [13.0-15.0] vs 17.8 [17.5-18.5]; P = .003; chatbot 2, 14.0 [13.0-15.0] vs 16.8 [14.5-18.0]; P = .01). Conclusions and Relevance: In this cross-sectional study, trained chatbots generated convincing medical abstracts, undifferentiable from resident or senior author drafts. Chatbot 1 graded abstracts similarly to surgeon-reviewers, while chatbot 2 was less stringent. These findings may assist surgeon-scientists in successfully implementing AI in medical research.
11

Satiti, Laras Hayu, Endang Fauziati, and Endang Seytaningsih. "AI Chatbot as an Effective English Teaching Partner for University Students." International Journal of Educational Research & Social Sciences 5, no. 3 (June 29, 2024): 463–69. https://doi.org/10.51601/ijersc.v5i3.820.

Abstract:
This study investigated the effectiveness of an AI chatbot for improving English language skills among Indonesian university students. In order to address the research question, the researchers conducted an analysis of responses to a 20-item questionnaire distributed via Google Form among a sample of 95 university students. The survey was conducted to evaluate student perceptions of the chatbot's usability, language accuracy, feedback quality, and overall strengths. The findings revealed high student satisfaction with all aspects of the chatbot. Students agreed that the chatbot was user-friendly, understood their questions, provided helpful feedback, and offered valuable learning opportunities. These results align with previous research on the effectiveness of chatbots in enhancing English language proficiency. The study suggests that AI chatbots can be a beneficial tool for both independent language practice and supplementing classroom instruction. The chatbot's ability to personalise learning experiences and adapt to individual needs contributes to improved student engagement and language acquisition. This research adds to the growing body of evidence supporting the use of AI chatbots in computer-assisted language learning (CALL). Future research could investigate the effectiveness of the chatbot for learners with varying proficiency levels.
12

Lee, Ju Yoen. "Can an artificial intelligence chatbot be the author of a scholarly article?" Journal of Educational Evaluation for Health Professions 20 (February 27, 2023): 6. http://dx.doi.org/10.3352/jeehp.2022.20.6.

Abstract:
At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with amazing writing ability, caused a great sensation in academia. The chatbot turned out to be very capable, but also capable of deception, and the news broke that several researchers had listed the chatbot (including its earlier version) as co-authors of their academic papers. In response, Nature and Science expressed their position that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, in the current legal system, the text automatically generated by an AI chatbot cannot be a copyrighted work; thus, an AI chatbot cannot be an author of a copyrighted work. Current AI chatbots such as ChatGPT are much more advanced than search engines in that they produce original text, but they still remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.
13

Lee, Ju Yoen. "Can an artificial intelligence chatbot be the author of a scholarly article?" Journal of Educational Evaluation for Health Professions 20 (February 27, 2023): 6. http://dx.doi.org/10.3352/jeehp.2023.20.6.

Abstract:
At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with amazing writing ability, caused a great sensation in academia. The chatbot turned out to be very capable, but also capable of deception, and the news broke that several researchers had listed the chatbot (including its earlier version) as co-authors of their academic papers. In response, Nature and Science expressed their position that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, in the current legal system, the text automatically generated by an AI chatbot cannot be a copyrighted work; thus, an AI chatbot cannot be an author of a copyrighted work. Current AI chatbots such as ChatGPT are much more advanced than search engines in that they produce original text, but they still remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.
14

Lee, Ju Yoen. "Can an artificial intelligence chatbot be the author of a scholarly article?" Science Editing 10, no. 1 (February 16, 2023): 7–12. http://dx.doi.org/10.6087/kcse.292.

Abstract:
At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with amazing writing ability, caused a great sensation in academia. The chatbot turned out to be very capable, but also capable of deception, and the news broke that several researchers had listed the chatbot (including its earlier version) as co-authors of their academic papers. In response, Nature and Science expressed their position that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, in the current legal system, the text automatically generated by an AI chatbot cannot be a copyrighted work; thus, an AI chatbot cannot be an author of a copyrighted work. Current AI chatbots such as ChatGPT are much more advanced than search engines in that they produce original text, but they still remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.
15

Lin, Chien-Chang, Anna Y. Q. Huang, and Stephen J. H. Yang. "A Review of AI-Driven Conversational Chatbots Implementation Methodologies and Challenges (1999–2022)." Sustainability 15, no. 5 (February 22, 2023): 4012. http://dx.doi.org/10.3390/su15054012.

Abstract:
A conversational chatbot or dialogue system is a computer program designed to simulate conversation with human users, especially over the Internet. These chatbots can be integrated into messaging apps, mobile apps, or websites, and are designed to engage in natural language conversations with users. There are also many applications in which chatbots are used for educational support to improve students’ performance during the learning cycle. The recent success of ChatGPT also encourages researchers to explore more possibilities in the field of chatbot applications. One of the main benefits of conversational chatbots is their ability to provide an instant and automated response, which can be leveraged in many application areas. Chatbots can handle a wide range of inquiries and tasks, such as answering frequently asked questions, booking appointments, or making recommendations. Modern conversational chatbots use artificial intelligence (AI) techniques, such as natural language processing (NLP) and artificial neural networks, to understand and respond to users’ input. In this study, we will explore the objectives of why chatbot systems were built and what key methodologies and datasets were leveraged to build a chatbot. Finally, the achievement of the objectives will be discussed, as well as the associated challenges and future chatbot development trends.
16

Ardiansyah, Rizky, Dianthy Marya, and Atik Novianti. "Penggunaan metode string matching pada sistem informasi mahasiswa Polinema dengan chatbot." JURNAL ELTEK 21, no. 1 (April 30, 2023): 28–35. http://dx.doi.org/10.33795/eltek.v21i1.381.

Abstract:
Campus services are crucial for supporting the teaching and learning process in higher education institutions. One effective approach to enhance these services is through the implementation of chatbots. Chatbots provide prompt and efficient assistance to students, enabling them to easily access the necessary information. They offer an efficient and effective means of delivering services by providing accurate information and facilitating transactions. Furthermore, the utilization of chatbots alleviates the burden on administrative staff, allowing them to focus on more complex tasks. However, it is important to address and improve data security and privacy aspects. This research aims to evaluate an administrative service chatbot using the string matching method. The evaluation process encompasses assessing the chatbot's functionality, interaction capabilities, compatibility, security measures, and overall performance. The results demonstrate a 90% accuracy rate achieved through the implementation of the string matching method. The majority of students express satisfaction with the chatbot's promptness and accuracy in providing responses. Additionally, the application is highly regarded for its user-friendly interface, as reported by the students.
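As an editorial illustration of the string-matching approach evaluated above, the sketch below matches a user query against stored question patterns with difflib and returns the closest answer. The FAQ entries and similarity threshold are hypothetical, not the Polinema system's actual knowledge base.

```python
from difflib import SequenceMatcher

# Hypothetical FAQ knowledge base (pattern -> canned answer), not Polinema's actual data.
faq = {
    "how do i reprint my student id card": "Visit the academic office with your enrolment letter.",
    "when does course registration open": "Registration opens in the first week of each semester.",
    "how do i request an official transcript": "Submit the transcript request form online.",
}

def answer(query: str, threshold: float = 0.6) -> str:
    """Return the answer whose stored pattern is most similar to the query."""
    best_pattern, best_score = None, 0.0
    for pattern in faq:
        score = SequenceMatcher(None, query.lower(), pattern).ratio()
        if score > best_score:
            best_pattern, best_score = pattern, score
    if best_pattern is not None and best_score >= threshold:
        return faq[best_pattern]
    return "Sorry, I do not understand the question yet."

print(answer("When will course registration be open?"))
```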
17

Saransh, Saransh. "SOCIAL COMPANION CHATBOT FOR HUMAN COMMUNICATION USING ML AND NLP." International Journal of Engineering Applied Sciences and Technology 8, no. 1 (May 1, 2023): 321–24. http://dx.doi.org/10.33564/ijeast.2023.v08i01.048.

Abstract:
The advent of chatbot technology has led to a significant shift in the way humans communicate with machines. Chatbots powered by Machine Learning (ML) and Natural Language Processing (NLP) can interact with humans naturally and conversationally. This chatbot's primary objective is to provide companionship to individuals who may feel lonely or isolated. The chatbot prompts customers to express their feelings and provides a personalized response based on whether the customer's feelings are positive or negative. The chatbot's development involved designing a user-friendly interface and integrating natural language processing techniques to enable more human-like conversations. If the customer's response matches one of the feelings in the respective list, the chatbot responds with empathy and asks the customer to describe their feelings in more detail. Overall, the chatbot enhances customer support by providing personalized communication with customers.
18

Aksoyalp, Zinnet Şevval, and Betül Rabia Erdoğan. "COMPARATIVE EVALUATION OF ARTIFICIAL INTELLIGENCE AND DRUG INTERACTION TOOLS: A PERSPECTIVE WITH THE EXAMPLE OF CLOPIDOGREL." Ankara Universitesi Eczacilik Fakultesi Dergisi 48, no. 3 (August 5, 2024): 22. http://dx.doi.org/10.33483/jfpau.1460173.

Abstract:
Objective: The study aims to compare the ability of free artificial intelligence (AI) chatbots and freely available drug interaction tools to detect drug interactions, using clopidogrel as an example. Material and Method: The Lexicomp database was used as a reference to determine drug interactions with clopidogrel. ChatGPT-3.5 AI and Bing AI were selected as the free AI chatbots. Medscape Drug Interaction Checker, DrugBank Drug Interaction Checker and Epocrates Interaction Check were selected as free drug interaction tools. An accuracy score and a comprehensiveness score were calculated for each drug interaction tool and AI chatbot. The kappa coefficient was calculated to assess inter-source agreement on interaction severity. Result and Discussion: The results most similar to those of Lexicomp were obtained from DrugBank and the ChatGPT-3.5 AI chatbot. The ChatGPT-3.5 AI chatbot performed best, with 69 correct results and an accuracy score of 307. ChatGPT-3.5 AI has the highest overall score of 387 points for accuracy and comprehensiveness. In addition, the highest kappa coefficient with Lexicomp was found for the ChatGPT-3.5 AI chatbot (0.201, fair agreement). However, some of the results obtained by ChatGPT-3.5 AI need to be improved, as they are incorrect or inadequate. Therefore, information obtained using AI tools should not be used as a reference for clinical applications by healthcare professionals, and patients should not change their treatment without consulting a doctor.
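The inter-source agreement reported above is measured with the kappa coefficient. The sketch below is an editorial illustration using sklearn's cohen_kappa_score on hypothetical severity labels, not the study's ratings; a value around 0.2 falls in the conventional "fair agreement" band cited in the abstract.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical severity ratings for ten drug pairs (not the study's data):
# 0 = no interaction, 1 = moderate, 2 = major.
lexicomp = [2, 1, 0, 2, 1, 1, 0, 2, 1, 0]
chatbot  = [2, 1, 1, 2, 0, 1, 0, 2, 2, 0]

# Cohen's kappa corrects raw agreement for the agreement expected by chance;
# values of roughly 0.21-0.40 are conventionally read as "fair" agreement.
print(round(cohen_kappa_score(lexicomp, chatbot), 3))
```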
19

Kim, Trudy S., Catherine T. Yu, Chandler Hinson, Ethan Fung, Omar Allam, Rahim S. Nazerali, and Haripriya S. Ayyala. "ChatGPT Virtual Assistant for Breast Reconstruction: Assessing Preferences for a Traditional Chatbot versus a Human AI VideoBot." Plastic and Reconstructive Surgery - Global Open 12, no. 10 (October 2024): e6202. http://dx.doi.org/10.1097/gox.0000000000006202.

Abstract:
Background: Recent advancements in artificial intelligence (AI) have reshaped telehealth, with AI chatbots like Chat Generative Pretrained Transformer (ChatGPT) showing promise in various medical applications. ChatGPT is capable of offering basic patient education on procedures in plastic and reconstructive surgery (PRS), yet the preference between human AI VideoBots and traditional chatbots in plastic and reconstructive surgery remains unexplored. Methods: We developed a VideoBot by integrating ChatGPT with Synthesia, a human AI avatar video platform. The VideoBot was then integrated into Tolstoy to create an interactive experience that answered four of the most asked questions related to breast reconstruction. We used Zapier to develop a ChatGPT-integrated chatbot. A 16-item survey adapted from the 2005 validated measurement of online trust by Corritore et al was distributed online to female participants via Amazon Mechanical Turk. Results: A total of 396 responses were gathered. Participants were 18 to 64 years old. Perceptions of truthfulness, believability, content expertise, ease of use, and safety were similar between the VideoBot and chatbot. Most participants preferred the VideoBot compared with the traditional chatbot (63.5% versus 28.1%), as they found it more captivating than the text-based chatbot. Of the participants, 77% would have preferred to see someone who they identified with in terms of gender and race. Conclusions: Both the VideoBot and text-based chatbot show comparable effectiveness, usability, and trust. Nonetheless, the VideoBot’s human-like qualities enhance interactivity. Future research should explore the impact of race and gender concordance in telehealth to provide a more personalized experience for patients.
20

Cui, Yichao, Yu-Jen Lee, Jack Jamieson, Naomi Yamashita, and Yi-Chieh Lee. "Exploring Effects of Chatbot's Interpretation and Self-disclosure on Mental Illness Stigma." Proceedings of the ACM on Human-Computer Interaction 8, CSCW1 (April 17, 2024): 1–33. http://dx.doi.org/10.1145/3637329.

Abstract:
Chatbots are increasingly being used in mental healthcare - e.g., for assessing mental-health conditions and providing digital counseling - and have been found to have considerable potential for facilitating people's behavioral changes. Nevertheless, little research has examined how specific chatbot designs may help reduce public stigmatization of mental illness. To help fill that gap, this study explores how stigmatizing attitudes toward mental illness may be affected by conversations with chatbots that have 1) varying ways of expressing their interpretations of participants' statements and 2) different styles of self-disclosure. More specifically, we implemented and tested four chatbot designs that varied in terms of whether they interpreted participants' comments as stigmatizing or non-stigmatizing, and whether they provided stigmatizing, non-stigmatizing, or no self-disclosure of chatbot's own views. Over the two-week period of the experiment, all four chatbots' conversations with our participants centered on seven mental-illness vignettes, all featuring the same character. We found that the chatbot featuring non-stigmatizing interpretations and non-stigmatizing self-disclosure performed best at reducing the participants' stigmatizing attitudes, while the one that provided stigmatizing interpretations and stigmatizing self-disclosures had the least beneficial effect. We also discovered side effects of chatbot's self-disclosure: notably, that chatbots were perceived to have inflexible and strong opinions, which undermined their credibility. As such, this paper contributes to knowledge about how chatbot designs shape users' perceptions of the chatbots themselves, and how chatbots' interpretation and self-disclosure may be leveraged to help reduce mental-illness stigma.
21

Nguyen Xuan Bac, Luu Van Sang, Nguyen Duc Vuong, Luong Quoc Le, and Dang Duc Thinh. "Enhancing retrieval performance of embedding models via fine-tuning on synthetic data in RAG chatbot for Vietnamese military science domain." Journal of Military Science and Technology 99 (November 25, 2024): 109–18. http://dx.doi.org/10.54939/1859-1043.j.mst.99.2024.109-118.

Abstract:
Retrieval-Augmented Generation (RAG) is a technology that combines information retrieval with large language models, enabling chatbots to provide accurate answers by querying relevant documents from a data repository before generating responses. While RAG chatbot has demonstrated effectiveness in many applications, there are still limitations in specialized Vietnamese data domains, particularly in the military science field. To address this challenge, this paper proposes a framework for fine-tuning embedding models using synthetic datasets generated by ChatGPT to enhance retrieval performance in a Q&A application focused on the history of the Institute of Information Technology (IoIT). Our approach evaluates 11 popular embedding models and shows a significant average improvement of 18.15% in the MAP@K metric. The resulting IoIT history Q&A chatbot, developed with fine-tuned embedding models and the Vietnamese language model Vistral-7B, outperforms chatbots utilizing OpenAI's embedding models and ChatGPT. These findings highlight the potential of RAG chatbot technology for advancing information retrieval applications in specialized fields like military science.
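The retrieval gain above is reported as MAP@K. The sketch below shows one common way to compute mean average precision at K for ranked retrieval results; the queries and relevance judgments are toy values, not the IoIT dataset.

```python
def average_precision_at_k(retrieved, relevant, k):
    """AP@K for one query: sum of precision@i at each relevant hit, divided by min(#relevant, K)."""
    hits, score = 0, 0.0
    for i, doc_id in enumerate(retrieved[:k], start=1):
        if doc_id in relevant:
            hits += 1
            score += hits / i
    return score / min(len(relevant), k) if relevant else 0.0

def map_at_k(results, k=5):
    """MAP@K over a list of (retrieved_ids, relevant_ids) pairs."""
    return sum(average_precision_at_k(r, rel, k) for r, rel in results) / len(results)

# Toy example: two queries with their ranked retrievals and ground-truth relevant chunks.
queries = [
    (["d3", "d1", "d7", "d2", "d9"], {"d1", "d2"}),
    (["d4", "d4b", "d5", "d6", "d8"], {"d5"}),
]
print(round(map_at_k(queries, k=5), 3))  # prints 0.417 for this toy example
```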
22

Temsah, Mohamad-Hani, Fadi Aljamaan, Khalid H. Malki, Khalid Alhasan, Ibraheem Altamimi, Razan Aljarbou, Faisal Bazuhair, et al. "ChatGPT and the Future of Digital Health: A Study on Healthcare Workers’ Perceptions and Expectations." Healthcare 11, no. 13 (June 21, 2023): 1812. http://dx.doi.org/10.3390/healthcare11131812.

Abstract:
This study aimed to assess the knowledge, attitudes, and intended practices of healthcare workers (HCWs) in Saudi Arabia towards ChatGPT, an artificial intelligence (AI) Chatbot, within the first three months after its launch. We also aimed to identify potential barriers to AI Chatbot adoption among healthcare professionals. A cross-sectional survey was conducted among 1057 HCWs in Saudi Arabia, distributed electronically via social media channels from 21 February to 6 March 2023. The survey evaluated HCWs’ familiarity with ChatGPT-3.5, their satisfaction, intended future use, and perceived usefulness in healthcare practice. Of the respondents, 18.4% had used ChatGPT for healthcare purposes, while 84.1% of non-users expressed interest in utilizing AI Chatbots in the future. Most participants (75.1%) were comfortable with incorporating ChatGPT into their healthcare practice. HCWs perceived the Chatbot to be useful in various aspects of healthcare, such as medical decision-making (39.5%), patient and family support (44.7%), medical literature appraisal (48.5%), and medical research assistance (65.9%). A majority (76.7%) believed ChatGPT could positively impact the future of healthcare systems. Nevertheless, concerns about credibility and the source of information provided by AI Chatbots (46.9%) were identified as the main barriers. Although HCWs recognize ChatGPT as a valuable addition to digital health in the early stages of adoption, addressing concerns regarding accuracy, reliability, and medicolegal implications is crucial. Therefore, due to their unreliability, the current forms of ChatGPT and other Chatbots should not be used for diagnostic or treatment purposes without human expert oversight. Ensuring the trustworthiness and dependability of AI Chatbots is essential for successful implementation in healthcare settings. Future research should focus on evaluating the clinical outcomes of ChatGPT and benchmarking its performance against other AI Chatbots.
23

Chaiprasurt, Chantorn, Ratchadaporn Amornchewin, and Piyamart Kunpitak. "Using motivation to improve learning achievement with a chatbot in blended learning." World Journal on Educational Technology: Current Issues 14, no. 4 (July 29, 2022): 1133–51. http://dx.doi.org/10.18844/wjet.v14i4.6592.

Abstract:
Chatbots have the potential to be used as motivational learning tools, particularly for boosting academic performance. The purpose of this study is to construct a Facebook Messenger chatbot to promote accomplishment through the use of blended learning, guided by the ARCS (attention, relevance, confidence, satisfaction) motivation model that compares how engagement works, and explores the chatbot in terms of its usability. Integrated with Facebook Messenger, chatbot software was designed to answer inquiries based on the chatbot's communication framework. This included course alerts, a gradebook for each student, attendance statistics, and assignment feedback. Using a quasi-experimental research approach, the influence of the chatbot on student motivation and academic achievement was empirically investigated. The trial covered 18 weeks, and the sample comprised 48 students enrolled in a course on Information Technology for Learning. The results suggest that the chatbot increased the learning accomplishment of the students to a considerable extent, and that a motivational setting may lead to a better outcome than a blended learning environment. Overall, our approach produced reliable findings which validated the chatbot's capacity to communicate with students. The students agreed that the chatbot facilitated their learning, but that a few modifications were required in terms of ongoing development. Keywords: Motivation; Learning Achievement; Chatbot; Blended Learning
24

Sánchez-Vera, Fulgencio. "Subject-Specialized Chatbot in Higher Education as a Tutor for Autonomous Exam Preparation: Analysis of the Impact on Academic Performance and Students’ Perception of Its Usefulness." Education Sciences 15, no. 1 (December 30, 2024): 26. https://doi.org/10.3390/educsci15010026.

Abstract:
This study evaluates the impact of an AI chatbot as a support tool for second-year students in the Bachelor’s Degree in Early Childhood Education program during final exam preparation. Over one month, 42 students used the chatbot, generating 704 interactions across 186 conversations. The study aimed to assess the chatbot’s effectiveness in resolving specific questions, enhancing concept comprehension, and preparing for exams. Methods included surveys, in-depth interviews, and analysis of chatbot interactions. Results showed that the chatbot was highly effective in clarifying doubts (91.4%) and aiding concept understanding (95.7%), although its perceived usefulness was lower in content review (42.9%) and exam simulations (45.4%). Students with moderate chatbot use achieved better academic outcomes, while excessive use did not lead to further improvements. The study also identified challenges in students’ ability to formulate effective questions, limiting the chatbot’s potential in some areas. Overall, the chatbot was valued for fostering study autonomy, though improvements are needed in features supporting motivation and study organization. These findings highlight the potential of chatbots as complementary learning tools but underscore the need for better user training in “prompt engineering” to maximize their effectiveness.
25

Avisyah, Gisnaya Faridatul, Ivandi Julatha Putra, and Sidiq Syamsul Hidayat. "Open Artificial Intelligence Analysis using ChatGPT Integrated with Telegram Bot." Jurnal ELTIKOM 7, no. 1 (June 30, 2023): 60–66. http://dx.doi.org/10.31961/eltikom.v7i1.724.

Abstract:
Chatbot technology uses natural language processing with artificial intelligence to interact quickly, answering questions and producing relevant answers. ChatGPT is the latest chatbot platform developed by OpenAI, which allows users to interact with text-based engines. This platform uses the GPT-3 (Generative Pre-trained Transformer) algorithm to help understand the response humans want and generate relevant responses. Using the platform, users can find answers to their questions quickly and relevantly. This research on ChatGPT integrated with a Telegram chatbot follows the waterfall method and utilizes open API tokens from Telegram. In this research we develop an OpenAI application connected with a Telegram bot. This application can help provide a wide range of information, especially information related to the Semarang State Polytechnic. By using the Telegram chatbot in the program, users find it easy to ask questions because it is integrated with OpenAI through the API. The Telegram chatbot, which has a chat feature, allows easy communication between users and chatbots. Thus, it may reduce system errors on the bot.
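As an editorial illustration of the kind of integration described above, the sketch below relays Telegram messages to an OpenAI chat model and returns the reply. It assumes python-telegram-bot v20+ and the openai Python client v1+; the bot token, model name, and any Polytechnic-specific knowledge handling are placeholders rather than the authors' implementation.

```python
from openai import OpenAI
from telegram import Update
from telegram.ext import Application, ContextTypes, MessageHandler, filters

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

async def relay(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Forward the user's Telegram message to the chat model and send back the reply.
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": update.message.text}],
    )
    await update.message.reply_text(completion.choices[0].message.content)

app = Application.builder().token("TELEGRAM_BOT_TOKEN").build()  # placeholder token
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, relay))
app.run_polling()
```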
26

Subramaniam, Nantha Kumar. "Enabling Learning of Programming through Educational Chatbot." International Journal of Research and Innovation in Applied Science IX, no. XII (2025): 470–77. https://doi.org/10.51584/ijrias.2024.912042.

Abstract:
This research investigates the integration of AI-powered chatbots as transformative tools to enhance the teaching and learning of programming, specifically targeting Java programming in an open and distance learning (ODL) environment. The chatbot was conceptualized and developed using principles from constructivist, problem-based, and active learning theories to support a learner-centric approach. Designed to offer multimodal content, real-time feedback, and interactive problem-solving exercises, the chatbot aims to reduce cognitive load and facilitate a deeper understanding of complex programming concepts. The implementation involved deploying the chatbot in an online Java programming course over one semester, engaging 32 ODL learners in a blended learning mode. Learners utilized the chatbot for interactive, one-on-one tutoring sessions, benefiting from multimodal materials such as animations, visualizations, and interactive exercises. These resources supported learners in addressing challenging concepts and completing programming tasks, enhancing their engagement and comprehension. Evaluation was conducted through a structured survey using a 4-point Likert scale, capturing learner perceptions on the chatbot’s effectiveness. The findings revealed high levels of satisfaction, with learners highlighting the chatbot’s ability to provide learning support and its unique learning experience. The survey further underscored the chatbot’s role in alleviating cognitive load and bridging the gap between abstract programming theories and their practical applications. The study demonstrates the potential of AI-powered chatbots to transform programming education.
27

Biro, Joshua, Courtney Linder, and David Neyens. "The Effects of a Health Care Chatbot’s Complexity and Persona on User Trust, Perceived Usability, and Effectiveness: Mixed Methods Study." JMIR Human Factors 10 (February 1, 2023): e41017. http://dx.doi.org/10.2196/41017.

Abstract:
Background The rising adoption of telehealth provides new opportunities for more effective and equitable health care information mediums. The ability of chatbots to provide a conversational, personal, and comprehendible avenue for learning about health care information make them a promising tool for addressing health care inequity as health care trends continue toward web-based and remote processes. Although chatbots have been studied in the health care domain for their efficacy for smoking cessation, diet recommendation, and other assistive applications, few studies have examined how specific design characteristics influence the effectiveness of chatbots in providing health information. Objective Our objective was to investigate the influence of different design considerations on the effectiveness of an educational health care chatbot. Methods A 2×3 between-subjects study was performed with 2 independent variables: a chatbot’s complexity of responses (eg, technical or nontechnical language) and the presented qualifications of the chatbot’s persona (eg, doctor, nurse, or nursing student). Regression models were used to evaluate the impact of these variables on 3 outcome measures: effectiveness, usability, and trust. A qualitative transcript review was also done to review how participants engaged with the chatbot. Results Analysis of 71 participants found that participants who received technical language responses were significantly more likely to be in the high effectiveness group, which had higher improvements in test scores (odds ratio [OR] 2.73, 95% CI 1.05-7.41; P=.04). Participants with higher health literacy (OR 2.04, 95% CI 1.11-4.00, P=.03) were significantly more likely to trust the chatbot. The participants engaged with the chatbot in a variety of ways, with some taking a conversational approach and others treating the chatbot more like a search engine. Conclusions Given their increasing popularity, it is vital that we consider how chatbots are designed and implemented. This study showed that factors such as chatbots’ persona and language complexity are two design considerations that influence the ability of chatbots to successfully provide health care information.
28

Rangasamy, Sangeetha, Aishwarya Nagarathinam, Aarthy Chellasamy, and Elangovan N. "Health-Seeking Behaviour and the use of Artificial Intelligence-based Healthcare Chatbots among Indian Patients." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 10s (October 7, 2023): 440–50. http://dx.doi.org/10.17762/ijritcc.v11i10s.7652.

Abstract:
Artificial Intelligence (AI) based healthcare chatbots can scale up healthcare services in terms of diagnosis and treatment. However, the use of such chatbots may differ among the Indian population. This study investigates the influence of health-seeking behaviour and the availability of traditional, complementary and alternative medicine systems on healthcare chatbots. A quantitative study using a survey technique collected data from the Indian population. Items measuring awareness of the chatbot’s attributes and services, trust in chatbots, health-seeking behaviour, traditional, complementary and alternative medicine, and use of chatbots were adapted from previous scales. A convenience sample was used to collect the data from the urban population; 397 responses were collected, and statistical analysis was performed. Awareness of the chatbot’s attributes and services impacted trust in the chatbots. Health-seeking behaviour positively impacted the use of chatbots and enhanced the impact of trust in a chatbot on its use. Traditional, complementary and alternative medicine was not included in the chatbot, which negatively impacted the use of chatbots; at the same time, it dampened the impact of trust in chatbots on their use. The study was limited to the urban population and convenience sampling because of the need for Internet access and a smart device to use the chatbots. The results should therefore be interpreted cautiously, as evidence of whether the relationships exist rather than of the magnitude of their impact. The study’s outcome encourages the availability of chatbots given the health-seeking behaviour of the Indian urban population. The study also highlights the need to create intelligent agents with knowledge of traditional, complementary and alternative medicine. The study contributes to the knowledge of using chatbots in the Indian context. While earlier intention studies focus mainly on chatbot features or user characteristics, this study looks at the healthcare system and the services unique to India.
29

Joshi, Kalpesh. "AI Mental Health Therapist Chatbot." International Journal for Research in Applied Science and Engineering Technology 11, no. 11 (November 30, 2023): 308–11. http://dx.doi.org/10.22214/ijraset.2023.56393.

Abstract:
Chatbots have become very popular as the technology is growing at a very high rate. Due to advancements in the technology, chatbots have made our lives easier, as we can learn about many things at our fingertips. There are many chatbots available that handle particular tasks; examples include ChatGPT and Bard. AI chatbots provide a more human-like experience with the help of natural language processing and leverage semantics to understand the context of what a person says. With this in mind, we have created an AI Mental Health Therapist Chatbot that provides medical recommendations according to the problem the user may be facing. It can provide medical support at minimal cost and also recommend the required treatment to the user. This can be a type of advancement in the field of AI that can gain popularity among people. The best AI chatbots can unlock considerable efficiency, and the breadth of AI chatbots available today is remarkable.
APA, Harvard, Vancouver, ISO, and other styles
30

Anwar, Muchamad Taufiq, Azzahra Nurwanda, Fajar Rahmat, Muhammad Aufal, Hindriyanto Dwi Purnomo, and Aji Supriyanto. "Fast and Accurate Indonesian QnA Chatbot Using Bag-of-Words and Deep-Learning For Car Repair Shop Customer Service." Advance Sustainable Science Engineering and Technology 5, no. 2 (July 31, 2023): 0230201. http://dx.doi.org/10.26877/asset.v5i2.14891.

Full text
Abstract:
A chatbot is software that simulates human conversation through text chat. Building a chatbot is a complex task, and recent approaches to Indonesian chatbots have had low accuracy and slow responses because they require substantial computing resources. Chatbots are expected to be fast and accurate, especially in business settings, so that they can increase customer satisfaction; however, currently available approaches for Indonesian chatbots offer only low to medium accuracy with high response times. This research aims to build a fast and accurate chatbot using a Bag-of-Words and deep-learning approach applied to a car repair shop's customer service. Sixteen intents, each with a set of possible queries, were used as the training dataset. The chatbot treats intent recognition as a text classification task in which the intents are the target classes and the queries are the texts to classify; the chatbot's response is then based on the recognized intent. The deep-learning model for text classification was built with Keras, and the chatbot application was built with the Flask framework in Python. Results showed that the model achieved 100% accuracy in predicting users' intents, so the chatbot gave appropriate responses with a response time of nearly zero milliseconds. This implies that developers aiming to build fast and accurate chatbot software can use the combination of bag-of-words and deep-learning approaches. Several suggestions are presented to increase the probability of the chatbot's success when released to the general public.
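For readers who want to experiment with the approach described above, the following is a minimal Python sketch of a bag-of-words intent classifier with a small Keras network (not the authors' code): the intents, training phrases, and canned responses are invented placeholders, written in English here for readability, whereas the paper works with Indonesian queries.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from tensorflow import keras

# Illustrative intents and training phrases (placeholders, not the paper's data).
training = [
    ("how much is a brake service", "ask_price"),
    ("what are your opening hours", "ask_hours"),
    ("can I book a service for tomorrow", "book_service"),
]
texts, labels = zip(*training)
intents = sorted(set(labels))

vectorizer = CountVectorizer()                     # bag-of-words features
X = vectorizer.fit_transform(texts).toarray()
y = np.array([intents.index(label) for label in labels])

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(X.shape[1],)),
    keras.layers.Dense(len(intents), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=200, verbose=0)

responses = {"ask_price": "A brake service starts at $40.",
             "ask_hours": "We are open 8 am to 5 pm.",
             "book_service": "Your booking request has been noted."}

def reply(text):
    probs = model.predict(vectorizer.transform([text]).toarray(), verbose=0)[0]
    return responses[intents[int(np.argmax(probs))]]

print(reply("what time do you open"))

Wrapping reply() in a Flask route, as the paper reportedly does, would then be a matter of a single POST endpoint.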
APA, Harvard, Vancouver, ISO, and other styles
31

Dinh, Hoa, and Thien Khai Tran. "EduChat: An AI-Based Chatbot for University-Related Information Using a Hybrid Approach." Applied Sciences 13, no. 22 (November 17, 2023): 12446. http://dx.doi.org/10.3390/app132212446.

Full text
Abstract:
The digital transformation has created an environment that fosters the development of effective chatbots. Through the fusion of artificial intelligence and data, these chatbots have the capability to provide automated services, optimize customer experiences, and reduce workloads for employees. These chatbots can offer 24/7 support, answer questions, perform transactions, and provide rapid information, contributing significantly to the sustainable development processes of businesses and organizations. ChatGPT has already been applied in various fields. However, to ensure that there is a chatbot providing accurate and useful information in a narrow domain, it is necessary to build, train, and fine-tune the model based on specific data. In this paper, we introduce EduChat, a chatbot system for university-related questions. EduChat is an effective artificial intelligence application designed by combining rule-based methods, an innovative improved random forest machine learning approach, and ChatGPT to automatically answer common questions related to universities, academic programs, admission procedures, student life, and other related topics. This chatbot system helps provide quick and easy information to users, thereby reducing the time spent searching for information directly from source documents or contacting support staff. The experiments have yielded positive results.
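The exact EduChat architecture is not reproduced here; as a hedged illustration of the hybrid idea (rules first, then a random-forest intent classifier, then a generative fallback), the following Python sketch uses scikit-learn, with the rules, training data, confidence threshold, and LLM stub all invented for the example.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

# Layer 1: hand-written rules (illustrative only).
RULES = {"tuition": "Tuition fees are listed on the university fees page.",
         "deadline": "The admission deadline is published in the academic calendar."}

# Layer 2: a random-forest intent classifier over TF-IDF features (toy data).
train_texts = ["how do I apply", "what documents are needed", "where is the dorm"]
train_intents = ["admission", "admission", "campus_life"]
CANNED = {"admission": "See the admission procedure page.",
          "campus_life": "See the student-life handbook."}

vectorizer = TfidfVectorizer().fit(train_texts)
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(vectorizer.transform(train_texts), train_intents)

def llm_fallback(question):
    # Placeholder for a call to a generative model restricted to university topics.
    return "Forwarding your question to the generative model..."

def answer(question, threshold=0.6):
    for keyword, response in RULES.items():        # 1) rule-based layer
        if keyword in question.lower():
            return response
    probs = forest.predict_proba(vectorizer.transform([question]))[0]
    if probs.max() >= threshold:                   # 2) intent-classifier layer
        return CANNED[forest.classes_[int(np.argmax(probs))]]
    return llm_fallback(question)                  # 3) generative fallback

print(answer("When is the admission deadline?"))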
APA, Harvard, Vancouver, ISO, and other styles
32

Gevher, Merve, Sefa Emre Öncü, and Erdem Erdoğdu. "Exploring the potential use of generative ai for learner support in ODL at scale." Journal of Educational Technology and Online Learning 8, no. 1 (December 14, 2024): 78–97. https://doi.org/10.31681/jetol.1559442.

Full text
Abstract:
This study addresses the applicability of generative artificial intelligence (GenAI) within the administrative learner support services of the Anadolu University Open Education System, a giga university with more than one million learners in Türkiye. Using a qualitative case study approach, the study compares the performance of a rule-based chatbot with that of different GenAI-based chatbots. Learner inquiries that the rule-based chatbot could not answer, together with frequently asked questions (FAQs), were posed to ChatGPT and Bing Copilot in four different cases, and the 708 resulting answers were evaluated by three experts. GenAI matched learner questions more effectively than the rule-based chatbot, and Bing Copilot, which generated responses from the internet, was more successful than ChatGPT, which utilized the FAQ dataset. The study also shows that GenAI-based chatbots working together with rule-based chatbots are more successful in generating answers. The findings demonstrate the potential of GenAI to provide learners with continuous, more natural, and personalized support, and offer significant insights into how administrative support services in mass-scale educational institutions can be transformed.
APA, Harvard, Vancouver, ISO, and other styles
33

Ab Razak, Nur Izah, Muhammad Fawwaz Muhammad Yusoff, and Rahmita Wirza O.K. Rahmat. "ChatGPT Review: A Sophisticated Chatbot Models in Medical & Health-related Teaching and Learning." BMSC 19, s12 (November 20, 2023): 98–108. http://dx.doi.org/10.47836/mjmhs.19.s12.12.

Full text
Abstract:
Artificial intelligence (AI) has transformed our interactions with the world, spawning complex applications and devices known as intelligent agents. ChatGPT, a chatbot combining AI and human-computer interaction, converses with humans and has a wide range of possible uses. Chatbots have shown potential in medical education and the health sciences by aiding learning, offering feedback, and increasing metacognitive thinking among undergraduate and postgraduate students. OpenAI's ChatGPT, an advanced language model, has substantially enhanced chatbot capabilities. Chatbots are being used in medical-related fields for teaching and learning, mental-state categorisation, medication recommendation, and health education and awareness. While chatbots have been well accepted by users, further study is needed to fully grasp their use in medical and healthcare settings. This review examined 32 studies on ChatGPT and chatbots in medical-related fields and medical education, covering topics including medical education, anatomy, vaccines, internal medicine, psychiatry, dentistry, nursing, and psychology, with study designs ranging from pilot studies to controlled experimental trials. The findings show the exponential growth and potential of ChatGPT and chatbots in healthcare and medical education, as well as the need for further research and development in this sector.
APA, Harvard, Vancouver, ISO, and other styles
34

Younis, Muhanad T., Nadia Mahmood Hussien, Yasmin Makki Mohialden, Komeil Raisian, Prabhishek Singh, and Kapil Joshi. "Enhancement of ChatGPT using API Wrappers Techniques." Al-Mustansiriyah Journal of Science 34, no. 2 (June 30, 2023): 82–86. http://dx.doi.org/10.23851/mjs.v34i2.1350.

Full text
Abstract:
This study looks at how API (Application Programming Interface) wrapper technology can make complex functions easier to use by bundling many API calls behind a simpler interface; such wrapped packages often expose non-real-time interfaces that are otherwise hard to use. ChatGPT is a chatbot-oriented application of the GPT-3 language model: it lets developers create chatbots that respond intelligently to natural language user input, creating a more engaging user experience. This article shows that ChatGPT, Python, and API wrapper technology can be used to develop a smart chatbot. We show how to use the OpenAI API library to add ChatGPT to Python programs, which makes it easier for developers to build chatbots that sound and act more like real people in conversation. Our contribution is demonstrating the feasibility of building smart chatbots with ChatGPT and API wrapper technology, using a system that combines the OpenAI API with ChatGPT and Python and yields valuable insight into how such chatbots can be built. The system's efficiency has been tested repeatedly in different environments, and the results are satisfactory.
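The article's own wrapper code is not reproduced above; the sketch below shows one way such a wrapper might look with the current OpenAI Python client (the openai>=1.0 interface is assumed, and the model name, system prompt, and class name are placeholders).

from openai import OpenAI

class ChatWrapper:
    """Hides the raw API calls behind a single ask() method and keeps history."""

    def __init__(self, model="gpt-3.5-turbo"):
        self.client = OpenAI()          # reads OPENAI_API_KEY from the environment
        self.model = model
        self.history = [{"role": "system",
                         "content": "You are a helpful customer-support chatbot."}]

    def ask(self, user_message):
        self.history.append({"role": "user", "content": user_message})
        response = self.client.chat.completions.create(
            model=self.model, messages=self.history)
        answer = response.choices[0].message.content
        self.history.append({"role": "assistant", "content": answer})
        return answer

if __name__ == "__main__":
    bot = ChatWrapper()
    print(bot.ask("What are your opening hours?"))

The wrapper keeps the conversation state internally, which is the main convenience such packages add on top of the bare API.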
APA, Harvard, Vancouver, ISO, and other styles
35

Sarode, Prof Vaishali, Bhakti Joshi, Tejaswini Savakare, and Harshada Warule. "A Real Time Chatbot Using Python." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (May 31, 2023): 7385–89. http://dx.doi.org/10.22214/ijraset.2023.53453.

Full text
Abstract:
Real-time chatbots developed using Python have emerged as powerful tools for enhancing customer support, improving user experiences, and streamlining business processes. Leveraging Python's extensive libraries and frameworks, such as NLTK, Flask, and telebot, developers can build intelligent and scalable chatbot systems. This paper provides an overview of the key components and techniques involved in developing a real-time chatbot using Python. It explores the process of requirement gathering, use case definition, conversation flow design, and performance optimization. Integration with backend services, error handling, validation, and user experience (UX) design are also discussed. The utilization of natural language understanding (NLU) algorithms and techniques allows chatbots to interpret and comprehend user intents, providing accurate and context-aware responses. Integration with REST APIs, Flask, and telebot facilitates seamless communication and interaction between the chatbot and users. Furthermore, the paper highlights the importance of security, privacy, and ethical considerations in chatbot systems. It emphasizes the significance of continuous testing, feedback iteration, and user-centric design principles to refine the chatbot's performance and enhance the user experience. Looking ahead, future work in real-time chatbot development using Python includes advancements in natural language processing (NLP), personalized user experiences, multi-modal capabilities, and the integration of voice assistants. Ethical considerations and explainable AI techniques will also be critical for building trustworthy and responsible chatbot systems. In conclusion, real-time chatbots developed using Python offer immense potential for transforming customer support, automating processes, and delivering personalized and efficient services. With ongoing advancements in NLP, AI, and user interface design, the future of real-time chatbots holds exciting possibilities for enhanced user interactions and seamless automation.
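As a hedged illustration of the Flask side of such a system (the route name and the keyword matcher are invented placeholders, not the paper's implementation), a minimal real-time chat endpoint might look as follows.

from flask import Flask, jsonify, request

app = Flask(__name__)

# Toy stand-in for an NLU component; a real system would call NLTK or similar.
FAQ = {"hours": "We are open 9 am to 6 pm.",
       "refund": "Refunds are processed within 5 business days."}

def get_reply(message):
    text = message.lower()
    for keyword, reply in FAQ.items():
        if keyword in text:
            return reply
    return "Sorry, could you rephrase that?"

@app.route("/chat", methods=["POST"])
def chat():
    message = request.get_json(force=True).get("message", "")
    return jsonify({"reply": get_reply(message)})

if __name__ == "__main__":
    app.run(debug=True)   # e.g. POST {"message": "what are your hours"} to /chat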
APA, Harvard, Vancouver, ISO, and other styles
36

Chow, James C. L., Valerie Wong, Leslie Sanders, and Kay Li. "Developing an AI-Assisted Educational Chatbot for Radiotherapy Using the IBM Watson Assistant Platform." Healthcare 11, no. 17 (August 29, 2023): 2417. http://dx.doi.org/10.3390/healthcare11172417.

Full text
Abstract:
Objectives: This study aims to make radiotherapy knowledge regarding healthcare accessible to the general public by developing an AI-powered chatbot. The interactive nature of the chatbot is expected to facilitate better understanding of information on radiotherapy through communication with users. Methods: Using the IBM Watson Assistant platform on IBM Cloud, the chatbot was constructed following a pre-designed flowchart that outlines the conversation flow. This approach ensured the development of the chatbot with a clear mindset and allowed for effective tracking of the conversation. The chatbot is equipped to furnish users with information and quizzes on radiotherapy to assess their understanding of the subject. Results: By adopting a question-and-answer approach, the chatbot can engage in human-like communication with users seeking information about radiotherapy. As some users may feel anxious and struggle to articulate their queries, the chatbot is designed to be user-friendly and reassuring, providing a list of questions for the user to choose from. Feedback on the chatbot’s content was mostly positive, despite a few limitations. The chatbot performed well and successfully conveyed knowledge as intended. Conclusions: There is a need to enhance the chatbot’s conversation approach to improve user interaction. Including translation capabilities to cater to individuals with different first languages would also be advantageous. Lastly, the newly launched ChatGPT could potentially be developed into a medical chatbot to facilitate knowledge transfer.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhang, Jingwen, Yoo Jung Oh, Patrick Lange, Zhou Yu, and Yoshimi Fukuoka. "Artificial Intelligence Chatbot Behavior Change Model for Designing Artificial Intelligence Chatbots to Promote Physical Activity and a Healthy Diet: Viewpoint." Journal of Medical Internet Research 22, no. 9 (September 30, 2020): e22845. http://dx.doi.org/10.2196/22845.

Full text
Abstract:
Background Chatbots empowered by artificial intelligence (AI) can increasingly engage in natural conversations and build relationships with users. Applying AI chatbots to lifestyle modification programs is one of the promising areas to develop cost-effective and feasible behavior interventions to promote physical activity and a healthy diet. Objective The purposes of this perspective paper are to present a brief literature review of chatbot use in promoting physical activity and a healthy diet, describe the AI chatbot behavior change model our research team developed based on extensive interdisciplinary research, and discuss ethical principles and considerations. Methods We conducted a preliminary search of studies reporting chatbots for improving physical activity and/or diet in four databases in July 2020. We summarized the characteristics of the chatbot studies and reviewed recent developments in human-AI communication research and innovations in natural language processing. Based on the identified gaps and opportunities, as well as our own clinical and research experience and findings, we propose an AI chatbot behavior change model. Results Our review found a lack of understanding around theoretical guidance and practical recommendations on designing AI chatbots for lifestyle modification programs. The proposed AI chatbot behavior change model consists of the following four components to provide such guidance: (1) designing chatbot characteristics and understanding user background; (2) building relational capacity; (3) building persuasive conversational capacity; and (4) evaluating mechanisms and outcomes. The rationale and evidence supporting the design and evaluation choices for this model are presented in this paper. Conclusions As AI chatbots become increasingly integrated into various digital communications, our proposed theoretical framework is the first step to conceptualize the scope of utilization in health behavior change domains and to synthesize all possible dimensions of chatbot features to inform intervention design and evaluation. There is a need for more interdisciplinary work to continue developing AI techniques to improve a chatbot’s relational and persuasive capacities to change physical activity and diet behaviors with strong ethical principles.
APA, Harvard, Vancouver, ISO, and other styles
38

Christanti, Viny, Jesslyn Jesslyn, and Fundroo Orlando. "Implementasi Chatbot Pelajaran Sekolah Dasar Dengan Pandorabots." CICES 9, no. 2 (August 18, 2023): 203–13. http://dx.doi.org/10.33050/cices.v9i2.2703.

Full text
Abstract:
A chatbot is a virtual conversational agent that can receive input in the form of voice or writing, and it can be either generative or retrieval-based. Both designs let users type text freely, but such freedom can produce answers that are not to the user's liking, so there are several ways to direct users when writing text in the chat column. This study aims to develop a chatbot application that can be used as a structured question-and-answer medium on subject matter for elementary school students. The chatbot, developed with AIML (Artificial Intelligence Markup Language), uses Pandorabots as the platform to process text input from users, so that the chatbot can direct users to type according to the patterns and knowledge it contains. Two kinds of chatbots were created in this study: a text-based interaction chatbot and a button-based interaction chatbot. Testing used 20 data samples consisting of questions and answers from elementary school subject matter for grades 1 to 6. Chatbots using AIML can only answer questions that are relevant to and match the patterns in the AIML files. To optimize the performance of the text chatbot, we recommend using large amounts of data, whereas the chatbot with UI buttons should use less data to work optimally. Keywords— AIML, button-based, chatbot, Pandorabots, elementary school, text-based
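Pandorabots itself is a hosted platform, but the AIML pattern-matching idea can be illustrated locally with the python-aiml interpreter, as in the hedged sketch below; the single category, file name, and wording are invented examples, not the study's actual knowledge base.

import aiml   # pip install python-aiml

AIML_RULES = """<?xml version="1.0" encoding="UTF-8"?>
<aiml version="1.0.1">
  <category>
    <pattern>WHAT IS A VERB</pattern>
    <template>A verb is a word that describes an action.</template>
  </category>
</aiml>
"""

with open("lesson.aiml", "w", encoding="utf-8") as f:
    f.write(AIML_RULES)

kernel = aiml.Kernel()
kernel.learn("lesson.aiml")                  # load the category defined above
print(kernel.respond("what is a verb"))      # matches only the defined pattern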
APA, Harvard, Vancouver, ISO, and other styles
39

Mustafa, Sara Hassan, Elsir Abdelmutaal Mohammed, Ahmed Mustafa Salih, Kanagarajan Palani, Maha Mohamed Omer Albushra, Salma Taha Makkawi, and Amgad Hassan Mustafa. "A Qualitative Exploration of Acceptance of a Conversational Chatbot as a Tool for Mental Health Support among University Students in Sudan." International Journal of Medical Sciences and Nursing Research 4, no. 1 (March 31, 2024): 16–23. http://dx.doi.org/10.55349/ijmsnr.2024411623.

Full text
Abstract:
Background: Sudan’s political and economic challenges have increased mental health issues among university students, but access to mental healthcare is limited. Digital health interventions, such as chatbots, could provide a potential solution to inadequate care. This study aimed to evaluate the level of acceptance of a mental health chatbot prototype among university students in Khartoum, Sudan. Materials and Methods: This qualitative study investigated the perspectives of university students regarding a mental health chatbot prototype designed specifically for this research and deployed on Telegram. Twenty participants aged 18 or older, who owned smartphones and were not receiving mental health treatment, tested the prototype. Data were collected through individual, face-to-face, in-depth, semi-structured interviews and analysed using both deductive and inductive content analysis methods. Results: Most of the participants acknowledged the importance of mental health but felt that it was an overlooked issue in Sudan. Participants considered the chatbot to be a unique and innovative concept, offering valuable features. They viewed the chatbot as a user-friendly and accessible tool, with advantages such as convenience, anonymity, and accessibility, and potential cost and time savings. However, most participants agreed that the chatbot has many limitations and should not be seen as a substitute for seeing a doctor or therapist. Conclusion: The mental health chatbot was viewed positively by participants in the study. Chatbots can be promising tools for providing accessible and confidential mental health support for university students in countries like Sudan. Long-term studies are required to assess the chatbot’s mental health benefits and risks. Keywords: mental health, chatbots, university students, Sudan, young adults
APA, Harvard, Vancouver, ISO, and other styles
40

Kuhail, Mohammad Amin, Justin Thomas, Salwa Alramlawi, Syed Jawad Hussain Shah, and Erik Thornquist. "Interacting with a Chatbot-Based Advising System: Understanding the Effect of Chatbot Personality and User Gender on Behavior." Informatics 9, no. 4 (October 10, 2022): 81. http://dx.doi.org/10.3390/informatics9040081.

Full text
Abstract:
Chatbots with personality have been shown to affect engagement and user subjective satisfaction. Yet, the design of most chatbots focuses on functionality and accuracy rather than an interpersonal communication style. Existing studies on personality-imbued chatbots have mostly assessed the effect of chatbot personality on user preference and satisfaction. However, the influence of chatbot personality on behavioral qualities, such as users’ trust, engagement, and perceived authenticity of the chatbots, is largely unexplored. To bridge this gap, this study contributes: (1) A detailed design of a personality-imbued chatbot used in academic advising. (2) Empirical findings of an experiment with students who interacted with three different versions of the chatbot. Each version, vetted by psychology experts, represents one of the three dominant traits, agreeableness, conscientiousness, and extraversion. The experiment focused on the effect of chatbot personality on trust, authenticity, engagement, and intention to use the chatbot. Furthermore, we assessed whether gender plays a role in students’ perception of the personality-imbued chatbots. Our findings show a positive impact of chatbot personality on perceived chatbot authenticity and intended engagement, while student gender does not play a significant role in the students’ perception of chatbots.
APA, Harvard, Vancouver, ISO, and other styles
41

Journal, IJSREM. "CUSTOMER SUPPORT CHATBOT." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 01 (January 11, 2024): 1–10. http://dx.doi.org/10.55041/ijsrem27984.

Full text
Abstract:
The food business has seen a dramatic transition in recent years toward digital platforms, as more and more customers choose to order food through online channels. Businesses are using creative solutions, such as customer care chatbots, to handle the growing demand and expedite the purchase process. The main focus of this study is the creation and application of a customer support chatbot designed especially for food orders: integrating a sophisticated chatbot system into the meal ordering process to improve operational efficiency and the overall user experience. The chatbot's purpose is to converse with consumers in natural language, providing a smooth, customized experience that resembles a human conversation. By utilizing sophisticated natural language processing (NLP), the chatbot aims to handle orders, comprehend customer preferences, and deliver pertinent information. Keywords—Ease of user experience, Enhanced customer service, Voice recognition, user feedback, Time efficiency.
APA, Harvard, Vancouver, ISO, and other styles
42

Chinenye, Duru Juliet, Austine Ekekwe Duroha, and Nkwocha Mcdonald. "DEVELOPMENT OF THE NATURAL LANGUAGE PROCESSING-BASED CHATBOT FOR SHOPPRITE SHOPPING MALL." International Journal of Engineering Applied Sciences and Technology 7, no. 6 (October 1, 2022): 372–81. http://dx.doi.org/10.33564/ijeast.2022.v07i06.044.

Full text
Abstract:
Software-as-a-service (SaaS) solutions are frequently used to create chatbots, giving users the option to interact with them via desktop computers, mobile phones, and tablets. To increase customer accessibility, a chatbot is being developed for customers of the Shopprite Shopping Mall; the system offers text or audio support, so users can choose their preferred mode of interaction. The chatbot's goal is to converse with the client in an intelligent, accurate, and timely manner using natural language processing. Customers can use it to ask questions about specific items they want to buy and their prices before visiting the mall, and because the chatbot is accessible from portable mobile devices or laptops at any time, it can offer a round-the-clock online service. The results of this study will lessen the discomfort customers feel when they visit the Shopprite shopping mall to look for items only to discover that they are unavailable or out of stock. The following methodologies were used to carry out the work: React.js to create the chatbot's front-end and admin login page; Spacy and React.ai to train the chatbot's NLP section; e-commerce datasets for the chatbot; and MySQL to manage and create the data structure that houses the e-commerce datasets. It is recommended that new capabilities be added to the chatbot, such as delivery of purchased items to a customer's home, more training phrases to give the chatbot a better social outlook, automated item addition to the chatbot database, and a barcode reader option. Testing the chatbot with a bigger dataset would also be helpful.
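The paper's spaCy code is not shown above; as one hedged sketch of how a spaCy-based matcher could map a free-text customer query to a stored product question by vector similarity, the snippet below uses invented product data, an assumed model name, and an arbitrary threshold.

import spacy

# Requires the medium English model: python -m spacy download en_core_web_md
nlp = spacy.load("en_core_web_md")

PRODUCTS = {
    "How much is a bag of rice?": "A 5 kg bag of rice is in stock at the listed price.",
    "Do you sell washing machines?": "Yes, washing machines are available in store.",
}
product_docs = {question: nlp(question) for question in PRODUCTS}

def answer(query, threshold=0.75):
    query_doc = nlp(query)
    best_question, best_score = max(
        ((q, query_doc.similarity(doc)) for q, doc in product_docs.items()),
        key=lambda pair: pair[1])
    if best_score >= threshold:
        return PRODUCTS[best_question]
    return "Sorry, I could not find that item. Could you describe it differently?"

print(answer("what is the price of rice"))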
APA, Harvard, Vancouver, ISO, and other styles
43

Chang, Yu, Chu-Yun Su, and Yi-Chun Liu. "Assessing the Performance of Chatbots on the Taiwan Psychiatry Licensing Examination Using the Rasch Model." Healthcare 12, no. 22 (November 18, 2024): 2305. http://dx.doi.org/10.3390/healthcare12222305.

Full text
Abstract:
Background/Objectives: The potential and limitations of chatbots in medical education and clinical decision support, particularly in specialized fields like psychiatry, remain unknown. By using the Rasch model, our study aimed to evaluate the performance of various state-of-the-art chatbots on psychiatry licensing exam questions to explore their strengths and weaknesses. Methods: We assessed the performance of 22 leading chatbots, selected based on LMArena benchmark rankings, using 100 multiple-choice questions from the 2024 Taiwan psychiatry licensing examination, a nationally standardized test required for psychiatric licensure in Taiwan. Chatbot responses were scored for correctness, and we used the Rasch model to evaluate chatbot ability. Results: Chatbots released after February 2024 passed the exam, with ChatGPT-o1-preview achieving the highest score of 85. ChatGPT-o1-preview showed a statistically significant superiority in ability (p < 0.001), with a 1.92 logits improvement compared to the passing threshold. It demonstrated strengths in complex psychiatric problems and ethical understanding, yet it presented limitations in up-to-date legal updates and specialized psychiatry knowledge, such as recent amendments to the Mental Health Act, psychopharmacology, and advanced neuroimaging. Conclusions: Chatbot technology could be a valuable tool for medical education and clinical decision support in psychiatry, and as technology continues to advance, these models are likely to play an increasingly integral role in psychiatric practice.
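For readers unfamiliar with the measurement model, the dichotomous Rasch model (sketched here for reference, not quoted from the article) gives the probability that chatbot n answers item i correctly in terms of the chatbot's ability \theta_n and the item's difficulty b_i, both in logits; the 1.92-logit figure above is a difference on this \theta scale:

P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}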
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, David, Ryan S. Huang, Jane Jomy, Philip Wong, Michael Yan, Jennifer Croke, Daniel Tong, Andrew Hope, Lawson Eng, and Srinivas Raman. "Performance of Multimodal Artificial Intelligence Chatbots Evaluated on Clinical Oncology Cases." JAMA Network Open 7, no. 10 (October 23, 2024): e2437711. http://dx.doi.org/10.1001/jamanetworkopen.2024.37711.

Full text
Abstract:
Importance: Multimodal artificial intelligence (AI) chatbots can process complex medical image and text-based information that may improve their accuracy as a clinical diagnostic and management tool compared with unimodal, text-only AI chatbots. However, the difference in medical accuracy of multimodal and text-only chatbots in addressing questions about clinical oncology cases remains to be tested. Objective: To evaluate the utility of prompt engineering (zero-shot chain-of-thought) and compare the competency of multimodal and unimodal AI chatbots to generate medically accurate responses to questions about clinical oncology cases. Design, Setting, and Participants: This cross-sectional study benchmarked the medical accuracy of multiple-choice and free-text responses generated by AI chatbots in response to 79 questions about clinical oncology cases with images. Exposures: A unique set of 79 clinical oncology cases from JAMA Network Learning accessed on April 2, 2024, was posed to 10 AI chatbots. Main Outcomes and Measures: The primary outcome was medical accuracy evaluated by the number of correct responses by each AI chatbot. Multiple-choice responses were marked as correct based on the ground-truth, correct answer. Free-text responses were rated by a team of oncology specialists in duplicate and marked as correct based on consensus or resolved by a review of a third oncology specialist. Results: This study evaluated 10 chatbots, including 3 multimodal and 7 unimodal chatbots. On the multiple-choice evaluation, the top-performing chatbot was chatbot 10 (57 of 79 [72.15%]), followed by the multimodal chatbot 2 (56 of 79 [70.89%]) and chatbot 5 (54 of 79 [68.35%]). On the free-text evaluation, the top-performing chatbots were chatbot 5, chatbot 7, and the multimodal chatbot 2 (30 of 79 [37.97%]), followed by chatbot 10 (29 of 79 [36.71%]) and chatbot 8 and the multimodal chatbot 3 (25 of 79 [31.65%]). The accuracy of multimodal chatbots decreased when tested on cases with multiple images compared with questions with single images. Nine out of 10 chatbots, including all 3 multimodal chatbots, demonstrated decreased accuracy of their free-text responses compared with multiple-choice responses to questions about cancer cases. Conclusions and Relevance: In this cross-sectional study of chatbot accuracy tested on clinical oncology cases, multimodal chatbots were not consistently more accurate than unimodal chatbots. These results suggest that further research is required to optimize multimodal chatbots to make more use of information from images to improve oncology-specific medical accuracy and reliability.
APA, Harvard, Vancouver, ISO, and other styles
45

Cheng, Sheung-Tak, Peter Ng, and Tomoko Wakui. "ACCEPTABILITY OF A CHATBOT FOR INFORMATION AND ADVICE ON DEMENTIA CAREGIVING." Innovation in Aging 8, Supplement_1 (December 2024): 1285. https://doi.org/10.1093/geroni/igae098.4107.

Full text
Abstract:
Providing ongoing support to caregivers as their needs change in the long-term course of dementia is a serious challenge to any healthcare system. Conversational artificial intelligence (AI) operating 24/7 may help to tackle this problem. This presentation describes the development of a generative AI chatbot (called Chatbot-A) using the GPT-4o large language model, with a personality agent to constrain its behavior to providing advice on dementia caregiving based on a revised version of the Benefit-Finding Intervention manual (an intervention reported in JG:PS and other journals). The chatbot’s responses to 21 common questions were compared with those of another chatbot (called Chatbot-B) using a collection of authoritative sources (World Health Organization iSupport, By Us For Us Guides, and 185 webpages by Alzheimer’s Association, National Institute on Aging, and UK Alzheimer’s Society) as its knowledge base. Results showed highly context-sensitive and rather sophisticated answers by both chatbots, with Chatbot-A performing slightly better on mental health-related questions. Subsequently, 10 caregivers used Chatbot-A for two weeks and provided ratings and comments on its acceptability. They found Chatbot-A highly user-friendly, and its responses quite helpful and easy to understand. They were rather satisfied with it and would strongly recommend it to other caregivers. During the 2-week trial period, the majority used Chatbot-A more than once per day. Results supported conversational AI as a viable approach to improve support to caregivers. Due to space limitation, this poster will present the two chatbots’ answers to selected questions and the caregivers’ feedback on Chatbot-A in detail.
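The "personality agent" described above amounts to constraining a generative model with a standing instruction; the hedged Python sketch below shows the general pattern with the OpenAI client (openai>=1.0 interface assumed), where the system prompt is an invented illustration and not the Benefit-Finding Intervention manual's actual content.

from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a supportive dementia-caregiving advisor. Only answer questions about "
    "caring for a person with dementia and the caregiver's own wellbeing. If asked "
    "about anything else, gently redirect the conversation to caregiving topics."
)

def advise(question):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(advise("My mother keeps asking the same question. How should I respond?"))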
APA, Harvard, Vancouver, ISO, and other styles
46

Meng, Jingbo, and Yue (Nancy) Dai. "Emotional Support from AI Chatbots: Should a Supportive Partner Self-Disclose or Not?" Journal of Computer-Mediated Communication 26, no. 4 (May 19, 2021): 207–22. http://dx.doi.org/10.1093/jcmc/zmab005.

Full text
Abstract:
This study examined how and when a chatbot’s emotional support was effective in reducing people’s stress and worry. It compared emotional support from chatbot versus human partners in terms of its process and conditional effects on stress/worry reduction. In an online experiment, participants discussed a personal stressor with a chatbot or a human partner who provided none, or either one or both of emotional support and reciprocal self-disclosure. The results showed that emotional support from a conversational partner was mediated through perceived supportiveness of the partner to reduce stress and worry among participants, and the link from emotional support to perceived supportiveness was stronger for a human than for a chatbot. A conversational partner’s reciprocal self-disclosure enhanced the positive effect of emotional support on worry reduction. However, when emotional support was absent, a solely self-disclosing chatbot reduced even less stress than a chatbot not providing any response to participants’ stress. Lay Summary: In recent years, AI chatbots have increasingly been used to provide empathy and support to people who are experiencing stressful times. This study compared emotional support from a chatbot with that of a human who provided support. We were interested in examining which approach could most effectively reduce people’s worry and stress. When either a person or a chatbot was able to engage with a stressed individual and tell that individual about their own experiences, they were able to build rapport. We found that this type of reciprocal self-disclosure was effective in calming the worry of the individual. Interestingly, if a chatbot only reciprocally self-disclosed but offered no emotional support, the outcome was worse than if the chatbot did not respond to people at all. This work will help in the development of supportive chatbots by providing insights into when and what they should self-disclose.
APA, Harvard, Vancouver, ISO, and other styles
47

Fang, Jiyang. "Analysis on Chatbot Performance based on Attention Mechanism." Highlights in Science, Engineering and Technology 39 (April 1, 2023): 151–56. http://dx.doi.org/10.54097/hset.v39i.6517.

Full text
Abstract:
A chatbot imitates dialogue between people through natural language, enabling humans to communicate with machines more naturally. The chatbot is a prevalent natural language processing (NLP) application because it has broad prospects in real life, and it is a complex task that involves many NLP subtasks. A chatbot is an intelligent dialogue system that can simulate human dialogue to provide online guidance and support. The main work of this paper is to summarize the chatbot's academic background and research status, introduce the Cornell Movie-Dialogs Corpus dataset, and outline methods from artificial intelligence and natural language processing. Two attention mechanisms used to improve neural machine translation (NMT) are discussed. Finally, this paper tests the performance of chatbots under different values of N_ITERATION and data scale, summarizes the relevant optimization strategies, and offers an outlook on the future of chatbots. Experiments were run under different settings, including dialog templates, varying amounts of training data, and different numbers of iterations. The results show that the chatbot's vocabulary changes with N_ITERATION and that increasing the amount of data in the training dataset improves the chatbot's understanding.
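The abstract does not spell out the two attention mechanisms; for orientation, the scoring functions most commonly contrasted in seq2seq and NMT work are the multiplicative (Luong dot-product) and additive (Bahdanau-style concat) forms, written below in standard notation for decoder state h_t and encoder states \bar{h}_s; treating these as the mechanisms in question is an assumption, not a statement from the paper.

\alpha_{ts} = \frac{\exp\!\big(\operatorname{score}(h_t, \bar{h}_s)\big)}{\sum_{s'} \exp\!\big(\operatorname{score}(h_t, \bar{h}_{s'})\big)},
\qquad
c_t = \sum_{s} \alpha_{ts}\, \bar{h}_s

\operatorname{score}(h_t, \bar{h}_s) =
\begin{cases}
h_t^{\top} \bar{h}_s & \text{(multiplicative / dot)} \\
v_a^{\top} \tanh\!\big(W_a [h_t; \bar{h}_s]\big) & \text{(additive / concat)}
\end{cases}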
APA, Harvard, Vancouver, ISO, and other styles
48

Sholahuddin, Muhammad Rizqi, and Firas Atqiya. "Sistem Tanya Jawab Konsultasi Shalat Berbasis RASA Natural Language Understanding (NLU)." Jurnal Pendidikan Multimedia (Edsence) 3, no. 2 (December 27, 2021): 93–102. http://dx.doi.org/10.17509/edsence.v3i2.38732.

Full text
Abstract:
A chatbot is an intelligent system that lets users interact directly with machines through written media. This paper describes how a chatbot can be used to answer questions about prayer procedures. A Muslim sometimes has questions about how to pray when encountering differences between the procedures performed by other Muslims, and in such cases a chatbot can provide an explanation. The chatbot was developed using a deep learning model, specifically LSTM, integrated with the RASA framework; LSTM (Long Short-Term Memory) networks can efficiently retain the information that is needed while discarding what is unnecessary. The Telegram platform was chosen for the chatbot's implementation. The results showed that the Telegram prayer-consultation chatbot built with the DIET classifier and RASA was able to recognize questions and provide answers in the form of text and images, with 96 percent accuracy.
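Once a RASA model is trained and served with `rasa run`, the assistant can be queried over RASA's standard REST channel; the hedged sketch below assumes the default local URL and an enabled REST channel in credentials.yml, with the question text being only an example (the DIET pipeline itself is configured in the project's config.yml, not in Python).

import requests

RASA_REST_URL = "http://localhost:5005/webhooks/rest/webhook"   # Rasa default

def ask(message, sender="demo-user"):
    response = requests.post(RASA_REST_URL,
                             json={"sender": sender, "message": message},
                             timeout=10)
    response.raise_for_status()
    # Rasa returns a list of messages, each carrying "text" and/or "image" keys.
    return [m.get("text", m.get("image", "")) for m in response.json()]

for line in ask("How should I perform the prayer while travelling?"):
    print(line)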
APA, Harvard, Vancouver, ISO, and other styles
49

Long, Ju, Juntao Yuan, and Hsun-Ming Lee. "How to Program a Chatbot – An Introductory Project and Student Perceptions." Issues in Informing Science and Information Technology 16 (2019): 001–31. http://dx.doi.org/10.28945/4282.

Full text
Abstract:
Aim/Purpose: One of the most fascinating developments in computer user interfaces in recent years is the rise of “chatbots”. Yet the extant information systems (IS) curriculum lacks teaching resources on chatbot programming. Background: To better prepare students for this new technological development and to enhance the IS curriculum, we introduce a project that teaches students how to program simple chatbots, including a transactional chatbot and a conversational chatbot. Methodology: We demonstrated a project that teaches students how to program two types of simple chatbots: a transactional chatbot and a conversational chatbot. We also conducted a survey to examine students’ perceptions of their learning experience. Findings: Our survey on students’ perception of the project finds that learning chatbots is deemed very useful because chatbot programming projects have enabled the students to understand the subject better. We also found that social influence has positively motivated the students to learn chatbot programming. Though most of the students have no prior experience programming chatbots, their self-efficacy towards chatbot programming remained high after working through the programming project. Despite the difficult tasks, over 71% of respondents agree to various degrees that chatbot programming is fun. Though most students agree that chatbot programming is not easy to learn, more than 70% of respondents indicated that they will use or learn chatbots in the near future. The overwhelmingly positive responses are impressive given that this is the first time for the students to program and learn chatbots. Recommendations for Practitioners: In this article, we introduced a step-by-step project on teaching chatbot programming in an information systems class. Following the project instructions, students can get their first intelligent chatbots up and running in a few hours using Slack. This article describes the project in detail as well as students’ perceptions. Recommendations for Researchers: We used the UTAUT model to measure students’ perception of the projects. This study could be of value to researchers studying students’ technology learning and adoption behaviors. Impact on Society: To our best knowledge, pedagogical resources that teach IS students how to program chatbots, especially introductory-level materials, are limited. We hope this teaching case could be of value for IS educators when introducing IS students to the wonderful field of chatbot programming. Future Research: For future work, we plan to expand the teaching resources to cover more advanced chatbot programming projects, such as how to make chatbots more human-like.
APA, Harvard, Vancouver, ISO, and other styles
50

Sri Rahayu, Dwi, Rice Novita, Tengku Khairil Ahsyar, and Zarnelly. "Sentiment Analysis ChatGPT Using the Multinominal Naïve Bayes Classifier (NBC) Algorithm." Jurnal Sistem Cerdas 7, no. 1 (April 29, 2024): 66–74. http://dx.doi.org/10.37396/jsc.v7i1.388.

Full text
Abstract:
Chatbots have become one of the popular solutions for improving customer service. One well-known chatbot is ChatGPT, a language model developed by OpenAI. As ChatGPT use grows, sentiment analysis of users' opinions about the service is needed, so this research carries out sentiment analysis of ChatGPT-related posts on Twitter to find out how users respond to the chatbot service. The analysis found 57% positive, 29% negative, and 14% neutral sentiment. Topics were also extracted for each sentiment class, and predictions on the 40% of the data held out for testing were 96% positive, 3.5% negative, and 0.5% neutral, with a test accuracy of 63%.
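As a hedged sketch of the Multinomial Naive Bayes pipeline described (the example tweets and labels are invented placeholders, and the study's preprocessing and labelling steps are omitted), a 60/40 train-test split mirroring the abstract might look as follows.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

tweets = ["chatgpt helps me write code so fast",
          "chatgpt gave me a wrong answer again",
          "tried chatgpt today",
          "love how chatgpt explains concepts",
          "chatgpt is down once more",
          "chatgpt answered my question"]
labels = ["positive", "negative", "neutral", "positive", "negative", "neutral"]

X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.4, random_state=42)   # 60/40 split as in the abstract

vectorizer = TfidfVectorizer()
classifier = MultinomialNB()
classifier.fit(vectorizer.fit_transform(X_train), y_train)

predictions = classifier.predict(vectorizer.transform(X_test))
print("test accuracy:", accuracy_score(y_test, predictions))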
APA, Harvard, Vancouver, ISO, and other styles