
Journal articles on the topic 'AI-based feedback'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'AI-based feedback.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Garande, Anuja, Kushank Patil, Rasika Deshmukh, Siddhi Gurav, and Chaitanya Yadav. "AI Trainer: Video-Based Squat Analysis." International Journal of Scientific Research in Science, Engineering and Technology 11, no. 2 (2024): 172–79. http://dx.doi.org/10.32628/ijsrset2411221.

Full text
Abstract:
This research proposes a video-based system for analyzing human squats and providing real-time feedback to improve posture. The system leverages MediaPipe, an open-source pose estimation library, to identify key body joints during squats. By calculating crucial joint angles (knee flexion, hip flexion, ankle dorsiflexion), the system assesses squat form against established biomechanical principles. Deviations from these principles trigger real-time feedback messages or visual cues to guide users towards optimal squat posture. The paper details the system architecture, with a client-side application performing pose estimation and feedback generation. The methodology outlines data collection with various squat variations, system development integrating MediaPipe, and evaluation through user testing with comparison to expert evaluations. Key features include real-time feedback and customizable thresholds for user adaptation. Potential applications encompass fitness training, physical therapy, and sports training. Finally, the paper explores future work possibilities like mobile integration, advanced feedback mechanisms, and machine learning for automatic threshold adjustments. This research offers a valuable tool for squat analysis, empowering users to achieve their fitness goals with proper form and reduced injury risk.
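The core geometric check such a system performs can be sketched in a few lines. This is a minimal illustration only: the landmark coordinates and the 90° knee-flexion threshold are assumptions for the example, not values from the paper.

```python
import math

def joint_angle(a, b, c):
    """Angle ABC in degrees at vertex b, given three (x, y) landmarks."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    cos_t = max(-1.0, min(1.0, dot / norm))  # clamp against rounding error
    return math.degrees(math.acos(cos_t))

def squat_feedback(hip, knee, ankle, depth_threshold=90.0):
    """Hypothetical rule: knee flexion at or below the threshold counts as full depth."""
    if joint_angle(hip, knee, ankle) <= depth_threshold:
        return "Good depth"
    return "Go lower"
```

In a real pipeline the (x, y) points would come from MediaPipe pose landmarks for the hip, knee, and ankle, and the same angle routine would cover hip flexion and ankle dorsiflexion against their own thresholds.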
2

Choi, Jin-Young. "Generative AI-based Writing Feedback Model Development." Korean Association for Literacy 15, no. 5 (2024): 13–62. http://dx.doi.org/10.37736/kjlr.2024.10.15.5.01.

Full text
Abstract:
This study aimed to develop a systematic writing feedback model that reduces the burden on teachers in the writing process while allowing students to receive individualized feedback using generative AI. Thirteen experts were recruited and a Delphi survey was conducted. The results were as follows. First, we identified a writing process for utilizing generative AI and a feedback structure for that process. Second, we finalized the argumentative writing test and its evaluation factors and detailed scoring criteria to be applied to this model. Third, we added examples of generative AI question types and prompts to provide concrete examples of the model. The implications are as follows. First, generative AI has the potential to drive self-reflective feedback from students during the writing process. Second, generative AI can reduce teacher fatigue and provide more personalized feedback. Finally, we propose a concrete feedback model for generative AI in education. However, this study had a limitation in that it did not confirm the effect through the application of a practical model. Therefore, we will continue to conduct follow-up research.
3

Cheong, Yunam. "A Study of the Effectiveness of Student Perception-Based AI Feedback in College Writing Classes." Korean Association of General Education 18, no. 5 (2024): 159–73. http://dx.doi.org/10.46392/kjge.2024.18.5.159.

Full text
Abstract:
This study examined the educational effectiveness of AI-based automated evaluation feedback tools in college writing courses. Amid active discussions on the use of AI in liberal arts education, the study applied AI automated evaluation tools to actual writing classes and analyzed the results. The findings show that AI feedback positively influenced students' motivation for writing and achieved a high level of satisfaction in areas such as spelling, vocabulary, grammar, and expression correction accuracy. AI automated evaluation feedback helps reduce the feedback burden on instructors and assists them in providing meaningful feedback to students. It is expected that AI-based automated writing feedback tools will contribute to fostering AI literacy. This case study is significant in that it offers insights for effectively incorporating AI automated evaluation feedback into college writing courses.
4

Sailaja, Swetha. "AI-BASED BODY LANGUAGE ANALYSIS FOR INTERVIEW FEEDBACK." International Scientific Journal of Engineering and Management 04, no. 05 (2025): 1–7. https://doi.org/10.55041/isjem03424.

Full text
Abstract:
Body language is a vital communication component, particularly in job interviews, where non-verbal cues significantly influence a candidate’s perception and evaluation. This project proposes an AI-powered system that analyzes candidates’ body language during interviews to deliver structured feedback. Leveraging computer vision and machine learning, the system evaluates facial expressions, gestures, posture, and eye contact to assess confidence, engagement, and professionalism. By processing video inputs, it extracts behavioral patterns and generates personalized, data-driven insights to help candidates improve their non-verbal communication. The system benefits job seekers, HR professionals, and training institutions by offering unbiased, automated feedback to identify strengths and areas for improvement—promoting more effective interview preparation and decision-making.
Keywords: Body Language, Interview Feedback, Machine Learning, Computer Vision, Posture Analysis, Facial Expression Recognition.
5

Zhang, Yuze, Haojie Li, and Rui Huang. "The Effect of Tai Chi (Bafa Wubu) Training and Artificial Intelligence-Based Movement-Precision Feedback on the Mental and Physical Outcomes of Elderly." Sensors 24, no. 19 (2024): 6485. http://dx.doi.org/10.3390/s24196485.

Full text
Abstract:
(1) Background: This study aims to compare the effects of AI-based exercise feedback and standard training on the physical and mental health outcomes of older adults participating in an 8-week tai chi training program. (2) Methods: Participants were divided into three groups: an AI feedback group received real-time movement accuracy feedback based on AI and inertial measurement units (IMUs), a conventional feedback group received verbal feedback from supervisors, and a control group received no feedback. All groups trained three times per week for 8 weeks. Outcome measures, including movement accuracy, balance, grip strength, quality of life, and depression, were assessed before and after the training period. (3) Results: Compared to pre-training, all three groups showed significant improvements in movement accuracy, grip strength, quality of life, and depression. Only the AI feedback group showed significant improvements in balance. In terms of movement accuracy and balance, the AI feedback group showed significantly greater improvement compared to the conventional feedback group and the control group. (4) Conclusions: Providing real-time AI-based movement feedback during tai chi training offers greater health benefits for older adults compared to standard training without feedback.
6

Illakiya, G., G. V. Jeeshitha, R. Manjushree, and P. Harshini. "AI Based Hallucination Detector." International Research Journal on Advanced Science Hub 6, no. 12 (2024): 381–89. https://doi.org/10.47392/irjash.2024.050.

Full text
Abstract:
The main goal of this project is to create an advanced system that can accept questions submitted by users, produce AI-generated answers, and guarantee their accuracy by using an integrated validation method. This involves connecting to external web APIs that have access to trustworthy and authoritative sources, allowing the system to compare AI-generated responses with verified factual data in real time. If the AI-generated answer is accurate, the system will show a confirmation, providing users with assurance of its reliability. However, if the answer is incorrect, the system will flag the error and present the accurate response, addressing the issue of AI generating believable but factually inaccurate answers. In addition, the system records inaccurate responses to detect recurring error trends, which helps to enhance and refine the AI model over time. It also includes an interactive explanation tool that allows users to comprehend the validation process, promoting transparency in decision-making. To increase user involvement, the system can provide information about the origin of the correct answer and offer insights into differences between the AI-generated answers and the correct ones. Additionally, the system will have real-time alerts for critical errors, promptly notifying users when high-risk or sensitive topics are involved. The system's overall accuracy will be evaluated through a periodic review mechanism, which will offer feedback on performance enhancements. Additionally, user feedback will be incorporated into the system to continuously improve it and adapt to changing information sources. Moreover, the system will utilize AI-based learning algorithms to anticipate and prevent potential errors, thus enhancing response quality over time.
Ultimately, the project's goal is to establish a reliable and user-friendly AI environment that fosters trust through real-time verification, transparency, continuous enhancement, and minimized risks of incorrect AI outputs.
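The verify-and-log loop this abstract describes can be reduced to a toy sketch. The matching rule (normalized string equality) and the log structure here are invented for illustration; a real system would fetch the reference answer from trusted external APIs rather than take it as a parameter.

```python
def normalize(text):
    """Crude canonical form: lowercase, collapse whitespace, drop trailing periods."""
    return " ".join(text.lower().split()).strip(".")

error_log = []  # mismatch records, mined later for recurring error trends

def validate(question, ai_answer, reference_answer):
    """Compare an AI-generated answer against a trusted reference answer;
    confirm it if it matches, otherwise log the error and surface the correction."""
    if normalize(ai_answer) == normalize(reference_answer):
        return {"verified": True, "answer": ai_answer}
    error_log.append({"question": question,
                      "ai_answer": ai_answer,
                      "correct_answer": reference_answer})
    return {"verified": False, "answer": reference_answer}
```

Flagged entries accumulate in `error_log`, which stands in for the project's record of inaccurate responses used to refine the model over time.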
7

Campos, Miguel. "AI-assisted feedback in CLIL courses as a self-regulated language learning mechanism: students’ perceptions and experiences." European Public & Social Innovation Review 10 (January 31, 2025): 1–14. https://doi.org/10.31637/epsir-2025-1568.

Full text
Abstract:
Introduction: The integration of AI in educational settings offers significant potential for enhancing learning experiences, particularly in Content and Language Integrated Learning (CLIL) contexts. AI tools, such as ChatGPT, provide personalized feedback on writing, addressing issues like unclear content, grammatical errors, or poor vocabulary. This study examines students' perceptions of AI-assisted feedback in a business CLIL course and evaluates the actual improvements in their writing based on the feedback provided by AI. Methodology: University students (n=205) participated in a 15-week Data Description writing course, using ChatGPT to receive specific criteria-based feedback on weekly compositions. Students revised their drafts based on this feedback before their submission. A survey (n=192) assessed their experiences and the perceived impact on writing skills and task efficiency. Additionally, a sample (n=336) of the writing compositions was coded and analyzed to evaluate linguistic enhancement. Results: Results indicate that students found AI feedback beneficial for improving writing skills and appreciated its immediacy and specificity. However, concerns were noted about the complexity and relevance of the feedback. Discussions: Despite these issues, students responded positively, showing significant improvement in content accuracy and linguistic proficiency. Conclusions: The study highlights the potential of AI tools and the need for refining AI feedback mechanisms.
8

Lee, Juyeong, and Hunkoog Jho. "The Impact of an AI-based Feedback System on the Improvement of Elementary Students’ Statistical Inquiry Question Posing: The Moderating Effects of AI Perception and Feedback Self-efficacy." Brain, Digital, & Learning 14, no. 3 (2024): 459–73. http://dx.doi.org/10.31216/bdl.20240026.

Full text
Abstract:
With the advent of the digital age, the amount of data has exponentially increased, making statistical analysis essential for decision-making and problem-solving. This research examines the impact of an AI-based feedback system (FS) using the GPT-4 model on the ability of sixth-grade students to formulate statistical inquiry questions. Conducted at an elementary school in Gyeonggi-do, the research involved 95 students divided into experimental and control groups, who participated in an eight-session program. The experimental group received feedback from the AI-based FS, while the control group received teacher feedback. Pre- and post-tests measured the improvement in students’ statistical inquiry question levels. Additionally, the study analyzed how students’ self-efficacy regarding feedback and their perception of AI moderated the effectiveness of the FS. Results showed that the AI-based FS significantly improved the students’ ability to pose statistical inquiry questions compared to the control group. The study also found that a positive perception of AI enhanced the effectiveness of the FS, while self-efficacy regarding feedback did not show a significant impact. These findings suggest that AI-based FS can be an effective educational tool, particularly when students have a positive attitude toward AI. Future research should focus on developing fine-tuned AI-based FS capable of providing detailed feedback throughout all stages of statistical inquiry and investigate its effects on students with cognitive challenges in accepting feedback.
9

Naz, Irum, and Rodney Robertson. "Exploring the Feasibility and Efficacy of ChatGPT3 for Personalized Feedback in Teaching." Electronic Journal of e-Learning 22, no. 2 (2024): 98–111. http://dx.doi.org/10.34190/ejel.22.2.3345.

Full text
Abstract:
This study explores the feasibility of using AI technology, specifically ChatGPT-3, to provide reliable, meaningful, and personalized feedback. Specifically, the study explores the benefits and limitations of using AI-based feedback in language learning; the pedagogical frameworks that underpin the effective use of AI-based feedback; the reliability of ChatGPT-3’s feedback; and the potential implications of AI integration in language instruction. A review of existing literature identifies key themes and findings related to AI-based teaching practices. The study found that social cognitive theory (SCT) supports the potential use of AI chatbots in the learning process as AI can provide students with instant guidance and support that fosters personalized, independent learning experiences. Similarly, Krashen’s second language acquisition theory (SLA) was found to support the hypothesis that AI use can enhance student learning by creating meaningful interaction in the target language wherein learners engage in genuine communication rather than focusing solely on linguistic form. To determine the reliability of AI-generated feedback, an analysis was performed on student writing. First, two rubrics were created by ChatGPT-3; AI then graded the papers, and the results were compared with human-graded results using the same rubrics. The study concludes that AI-assisted e-Learning certainly has great potential; besides providing timely, personalized learning support, AI feedback can increase student motivation and foster learning independence. Not surprisingly, though, several caveats exist. It was found that ChatGPT-3 is prone to error and hallucination in providing student feedback, especially when presented with longer texts. To avoid this, rubrics must be carefully constructed, and teacher oversight is still very much required.
This study will help educators transition to the new era of AI-assisted e-Learning by helping them make informed decisions about how to provide useful AI feedback that is underpinned by sound pedagogical principles.
10

Alghannam, Manal Saleh M. "Artificial Intelligence as a Provider of Feedback on EFL Student Compositions." World Journal of English Language 15, no. 2 (2024): 161. https://doi.org/10.5430/wjel.v15n2p161.

Full text
Abstract:
In response to the arrival of advanced artificial intelligence (AI) in the form of ChatGPT, this study examines its potential for providing feedback to foreign language writers. This represents a more acceptable use of AI in the writing classroom, rather than students simply using AI to write their entire essay. The methodological procedure involved eliciting normal classroom writing-practice essays from 29 English major students at a Saudi university, with ChatGPT (2023) then given a simple prompt requesting feedback. Both the essays and the feedback were qualitatively analysed to respond to research questions concerning the feedback’s consistency and credibility, and the extent to which it represented the different potential feedback types, based on a review of the extensive literature on the subject. Although superficially impressive, close examination revealed certain weaknesses to the AI feedback. For example, there was inconsistency in how the feedback was handled across essays, and some statements were not fully accurate regarding the respective text. In focus, the feedback was primarily accuracy-oriented, while even-handed in attention to content, organisation, and lower-level language matters, providing both positive and negative comments. However, there was a paucity of message-oriented communicative and explicit affective feedback. Like many teachers, ChatGPT was selective in terms of the feedback provided, but the decisions of what to address did not seem altogether motivated by criteria that an expert human feedback provider would consider. The main conclusion is that while AI feedback on writing practice is useful, it does require human monitoring by a teacher.
11

Drewery, David, Jennifer Woodside, and Kristen Eppel. "Artificial Intelligence and Résumé Critique Experiences." Canadian Journal of Career Development 21, no. 2 (2022): 28–39. http://dx.doi.org/10.53379/cjcd.2022.338.

Full text
Abstract:
Where résumés are concerned, student supports tend to include tactical feedback that addresses issues in students’ writing and strategic feedback aimed at coaching critical self-reflection. However, there is not always time to cover all that could be offered by both kinds of feedback in a single résumé critique. Given demands on staff time, many career services administrators are considering opportunities to leverage artificial intelligence-based (AI) products that might offer tactical feedback and allow staff to focus on offering strategic feedback. In a field experiment, we explored how novice job seekers’ use of an AI-based résumé critique product influenced their subsequent face-to-face résumé critique experiences, especially the kinds of feedback offered and learning outcomes that resulted from this. As expected, the AI offered substantial tactical feedback and less strategic feedback. Students’ use of the AI did not result in greater opportunity for strategic feedback and associated learning outcomes. Rather, the AI rendered issues in students’ writing more salient. In turn, this invited more attention to tactical aspects and less attention to strategic aspects of students’ résumés.
12

Naseer, Fawad, and Sarwar Khawaja. "Mitigating Conceptual Learning Gaps in Mixed-Ability Classrooms: A Learning Analytics-Based Evaluation of AI-Driven Adaptive Feedback for Struggling Learners." Applied Sciences 15, no. 8 (2025): 4473. https://doi.org/10.3390/app15084473.

Full text
Abstract:
Adaptation through Artificial Intelligence (AI) creates individual-centered feedback strategies to reduce academic achievement disparities among students. The study evaluates the effectiveness of AI-driven adaptive feedback in mitigating these gaps by providing personalized learning support to struggling learners. A learning analytics-based evaluation was conducted on 700 undergraduate students enrolled in STEM-related courses across three different departments at Beaconhouse International College (BIC). The study employed a quasi-experimental design, where 350 students received AI-driven adaptive feedback while the control group followed traditional instructor-led feedback methods. Data were collected over 20 weeks, utilizing pre- and post-assessments, real-time engagement tracking, and survey responses. Results indicate that students receiving AI-driven adaptive feedback demonstrated a 28% improvement in conceptual mastery, compared to 14% in the control group. Additionally, student engagement increased by 35%, with a 22% reduction in cognitive overload. Analysis of interaction logs revealed that frequent engagement with AI-generated feedback led to a 40% increase in retention rates. Despite these benefits, variations in impact were observed based on prior knowledge levels and interaction consistency. The findings highlight the potential of AI-driven smart learning environments to enhance educational equity. Future research should explore long-term effects, scalability, and ethical considerations in adaptive AI-based learning systems.
13

Biswas, Biswajit, Manas Kumar Sanyal, and Tuhin Mukherjee. "AI-Based Sales Forecasting Model for Digital Marketing." International Journal of E-Business Research 19, no. 1 (2023): 1–14. http://dx.doi.org/10.4018/ijebr.317888.

Full text
Abstract:
Sales prediction with minute accuracy plays a crucial role for an organization to sustain amidst the global competitive business environment. The use of artificial intelligence (AI) on top of the existing information technology environment has become one of the most exciting and promising areas for any organization in the current era of digital marketing. E-marketing enables customers to share their views with other customers. In this paper, the authors propose a model to help digital marketers identify potential customers and extract value from customer feedback. The proposed model is based on an artificial neural network and makes it possible to identify customer demand from previous feedback and to predict the future sales volume of a product. The authors utilize AI, mainly neural networks (NNs), to construct an intelligent sales prediction model and apply ANNs to predict sales of a mobile phone (Redmi Note 6 Pro) one month ahead based on customer feedback on two e-commerce platforms, namely Amazon.in and Snapdeal.in.
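As a rough sketch of the kind of model described, here is a single forward pass of a tiny feedforward network mapping aggregated customer-feedback features to a sales estimate. The feature choice and every weight below are invented for the example; a trained network would learn them from historical feedback and sales data.

```python
def relu(x):
    return max(0.0, x)

def predict_sales(features, hidden_weights, hidden_bias, out_weights, out_bias):
    """One forward pass of a tiny feedforward net:
    aggregated feedback features -> ReLU hidden layer -> scalar sales estimate."""
    hidden = [relu(sum(f * w for f, w in zip(features, col)) + b)
              for col, b in zip(hidden_weights, hidden_bias)]
    return sum(h * w for h, w in zip(hidden, out_weights)) + out_bias

# Toy inputs: [positive reviews, negative reviews, average rating] -- invented numbers
features = [120.0, 30.0, 4.2]
hidden_weights = [[0.02, -0.03, 0.50],   # weights into hidden unit 1
                  [0.01, 0.00, 0.25]]    # weights into hidden unit 2
hidden_bias = [0.1, 0.0]
out_weights = [10.0, 5.0]
out_bias = 50.0
print(predict_sales(features, hidden_weights, hidden_bias, out_weights, out_bias))  # 98.25
```

Scaling this sketch up to the paper's setting would mean extracting such features from Amazon.in and Snapdeal.in reviews and fitting the weights by backpropagation against past monthly sales.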
14

Roodsari, Sam Toorchi, and Shahram Azizi Ghanbari. "Motivation- and Competency-Oriented AI-Based Adaptive Feedback Systems." Literacy Information and Computer Education Journal 15, no. 1 (2024): 3789–97. https://doi.org/10.20533/licej.2040.2589.2024.0498.

Full text
15

Bharath A, A. Manusha Reddy, Gowrishankar N, Hursh, and Bharat V P. "AI Based Fitness Game - Fittronix." International Journal of Latest Technology in Engineering Management & Applied Science 14, no. 5 (2025): 241–45. https://doi.org/10.51583/ijltemas.2025.140500030.

Full text
Abstract:
The integration of intelligent technology into fitness equipment is transforming how individuals train, track progress, and prevent injuries. This paper examines FitTronix, a smart weight training system that delivers real-time feedback, personalized tracking, and advanced biomechanical monitoring. The study evaluates its impact on performance and injury reduction using data from user surveys, expert interviews, and system logs. Results show notable improvements in form accuracy, training consistency, and muscle engagement, with a marked decrease in injuries, particularly among beginners. FitTronix’s AI-driven analytics also support personalized fitness programming. The paper concludes by emphasizing its potential to boost training safety and efficiency, calling for further long-term studies and broader adoption of smart gym technologies.
16

Marafie, Zahraa, Kwei-Jay Lin, Daben Wang, et al. "AutoCoach: An Intelligent Driver Behavior Feedback Agent with Personality-Based Driver Models." Electronics 10, no. 11 (2021): 1361. http://dx.doi.org/10.3390/electronics10111361.

Full text
Abstract:
Nowadays, AI has many applications in everyday human activities such as exercise, eating, sleeping, and automobile driving. Tech companies can apply AI to identify individual behaviors (e.g., walking, eating, driving), analyze them, and offer personalized feedback to help individuals make improvements accordingly. While offering personalized feedback is more beneficial for drivers, most smart driver systems in the current market do not use it. This paper presents AutoCoach, an intelligent AI agent that classifies drivers into different driving-personality groups to offer personalized feedback. We have built a cloud-based Android application to collect, analyze and learn from a driver’s past driving data to provide personalized, constructive feedback accordingly. Our GUI interface provides real-time user feedback for both warnings and rewards for the driver. We conducted an on-the-road pilot study where drivers were asked to use different agent versions to compare personality-based feedback versus non-personality-based feedback. The study result proves our design’s feasibility and effectiveness in improving the user experience when using a personality-based driving agent, with 61% overall acceptance that it is more accurate than a non-personality-based one.
17

Chimanna (Umrani), A. M. "EduConvo: AI-Driven Student Feedback System." Journal of Information Systems Engineering and Management 10, no. 53s (2025): 223–35. https://doi.org/10.52783/jisem.v10i53s.10861.

Full text
Abstract:
Student feedback is a crucial tool for educational institutions to assess teaching effectiveness and improve course delivery. However, traditional feedback collection methods, e.g., static surveys, suffer from low engagement, vague responses, and a lack of actionable insights. To address these limitations, this paper presents a Conversational AI-based Student Feedback System that uses a Large Language Model (LLM) to facilitate dynamic, interactive, and adaptive feedback collection. The system personalizes questions based on course content, allowing in-depth responses while maintaining anonymity. The system uses Next.js for the frontend, Flask for the backend, and a MongoDB database for data storage, integrating OpenAI’s GPT model for conversational interactions. A real-time analytics dashboard enables faculty to interpret feedback effectively. To evaluate the system, a comparative study was conducted against a survey-based feedback approach, measuring student engagement, response quality, and usability of the system. The results indicate a significant improvement in feedback depth, participation rates, and user satisfaction. This research highlights the role of AI-driven feedback systems in enhancing student engagement and providing richer insights for academic institutions.
18

Sabuncuoğlu İnanç, Ayda, Nesrin Akıncı Çötok, and Tufan Çötok. "Internal and External Factors Shaping Motivation in AI-Based Language Education." Revista Romaneasca pentru Educatie Multidimensionala 17, no. 2 (2025): 783–817. https://doi.org/10.18662/rrem/17.2/1005.

Full text
Abstract:
The rapid growth of AI has led to the integration of AI-powered educational tools, such as intelligent tutors and chatbots, enhancing student engagement through personalized learning experiences. These tools automate tasks like grading and feedback, adapt content to individual needs, and improve both motivation and academic performance. By addressing learning motivations, AI-based applications promote sustainable learning outcomes. Personal factors, such as arousal, beliefs, goals, and needs, significantly shape motivation, influencing both internal and external drivers. In foreign language learning, AI tools provide personalized feedback, interactive content, and real-time conversation opportunities to improve language acquisition, while AI-supported role-play exercises enhance fluency and pronunciation. Despite progress in AI-based education, a gap remains in understanding how various factors influence motivation within these tools. Existing literature lacks exploration of how factors like arousal, beliefs, goals, and needs shape motivation, particularly in AI-driven learning applications. This study aims to fill this gap by examining the internal and external factors that influence users' learning motivations in AI-based apps. Semi-structured interviews with 29 users were conducted, and data were analyzed descriptively, with key findings presented in tables and significant statements highlighted. The research results showed that both internal and external factors significantly influenced motivation. Participants were driven by the desire to improve their language skills, and the AI apps’ personalized feedback and features like role-playing supported their motivation. Clear goals, the freedom to progress at their own pace, and a sense of competence were key motivators, while autonomy, competence, and relatedness were reinforced by the apps’ positive feedback and tailored content.
19

Donmez, Mehmet. "AI-based feedback tools in education: a comprehensive bibliometric analysis study." International Journal of Assessment Tools in Education 11, no. 4 (2024): 622–46. http://dx.doi.org/10.21449/ijate.1467476.

Full text
Abstract:
This bibliometric analysis offers a comprehensive examination of AI-based feedback tools in education, utilizing data retrieved from the Web of Science (WoS) database. Encompassing a total of 239 articles from an expansive timeframe, spanning from inception to February 2024, this study provides a thorough overview of the evolution and current state of research in this domain. Through meticulous analysis, it tracks the growth trajectory of publications over time, revealing the increasing scholarly attention towards AI-driven feedback mechanisms in educational contexts. By describing critical thematic areas such as the role of feedback in enhancing learning outcomes, the integration of AI technologies into educational practices, and the efficacy of AI-based feedback tools in facilitating personalized learning experiences, the analysis offers valuable insights into the multifaceted nature of this field. By employing sophisticated bibliometric mapping techniques, including co-citation analysis and keyword co-occurrence analysis, the study uncovers the underlying intellectual structure of the research landscape, identifying prominent themes, influential articles, and emerging trends. Furthermore, it identifies productive authors, institutions, and countries contributing to the discourse, providing a detailed understanding of the collaborative networks and citation patterns within the community. This comprehensive synthesis of the literature serves as a valuable resource for researchers, practitioners, and policymakers alike, offering guidance on harnessing the potential of AI technologies to revolutionize teaching and learning practices in education.
APA, Harvard, Vancouver, ISO, and other styles
20

Hawanti, Santhy, and Khudoiberdieva Munisa Zubaydulloevna. "AI chatbot-based learning: alleviating students' anxiety in english writing classroom." Bulletin of Social Informatics Theory and Application 7, no. 2 (2023): 182–92. http://dx.doi.org/10.31763/businta.v7i2.659.

Full text
Abstract:
In the ever-evolving landscape of education, integrating innovative technologies can enhance the learning experience for students. ChatGPT, a cutting-edge language processing tool developed by OpenAI, offers exciting possibilities for teaching writing. This advanced AI model can be a powerful asset in the classroom, providing students with valuable resources and support as they develop their writing skills. Seventy-three college students participated in the quasi-experiment. The findings demonstrate that AI chatbot-based instruction reduces students' anxiety about learning English writing. AI chatbots offer instant feedback, allowing students to correct errors immediately. This quick feedback loop can prevent students from ruminating over their mistakes, thus reducing anxiety. With an AI chatbot, students can learn at their own pace. They can take time to understand concepts, practice writing, and receive feedback without feeling rushed. This flexibility can alleviate the pressure of strict deadlines in traditional classroom settings. The findings suggest that teachers should implement chatbot-based learning in the classroom.
APA, Harvard, Vancouver, ISO, and other styles
21

Sazandeh, Mozhdeh, and Maryam Beiki. "Comparing AI and Teacher Corrective Feedback on Iranian EFL Learners’ Essay Writing Skills." Asian Journal of Education and Social Studies 51, no. 5 (2025): 473–86. https://doi.org/10.9734/ajess/2025/v51i51934.

Full text
Abstract:
Aims: This study aimed to compare the effectiveness of teacher-generated versus AI-generated corrective feedback on the essay writing skills of Iranian intermediate EFL learners. Study Design: Quasi-experimental design. Place and Duration of Study: Department of TEFL, Islamic Azad University, North Tehran Branch, conducted over a 16-week period in 2025. Methodology: A total of 80 Iranian intermediate EFL learners were selected through convenience sampling and divided into two equal groups: Teacher-Generated Corrective Feedback Group (TGFG, n=40) and AI-Generated Corrective Feedback Group (AGFG, n=40). Both groups participated in 16 instructional sessions, receiving feedback on their essays either from a human instructor or an AI-based application (ChatGPT). Writing performance was assessed using IELTS-based pretests and posttests, evaluated based on content, organization, vocabulary, language use, and mechanics. The 6+1 Traits Writing Rubric was used for scoring, and inter-rater reliability was established. Results: While both groups showed improvement in essay writing performance, the AI-generated feedback group (M = 18.08, SD = 1.32) significantly outperformed the teacher-generated group (M = 16.93, SD = 1.29) on the posttest. An independent-samples t-test indicated a significant difference between the two groups (t(78) = 3.92, P < .001, Cohen’s d = .892), favoring the AI feedback group. Conclusion: The findings suggest that AI-generated corrective feedback is more effective than teacher-generated feedback in improving the essay writing performance of EFL learners. These findings have important implications for future teaching practices, particularly in enhancing learner autonomy and reducing teacher workload in large classrooms. AI tools can serve as a reliable and efficient alternative in writing instruction, particularly in contexts with limited instructional resources.
However, their effectiveness may vary depending on learners’ proficiency levels, task complexity, and their familiarity with digital tools.
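The reported effect size can be checked against the group statistics given in the abstract; a minimal sketch (using the published rounded means and SDs, so the result differs slightly from the reported d = .892):

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    # Cohen's d using the pooled standard deviation of two independent groups
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# AGFG: M = 18.08, SD = 1.32, n = 40; TGFG: M = 16.93, SD = 1.29, n = 40
d = cohens_d(18.08, 1.32, 40, 16.93, 1.29, 40)  # ~0.88, close to the reported .892
```

The small discrepancy from the published value is expected when recomputing from rounded summary statistics.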
APA, Harvard, Vancouver, ISO, and other styles
22

Researcher. "LEVERAGING GENERATIVE AI FOR EFFICIENT MOBILE APP FEEDBACK CLASSIFICATION AND QUALITY IMPROVEMENT." International Journal of Computer Engineering and Technology (IJCET) 15, no. 4 (2024): 444–51. https://doi.org/10.5281/zenodo.13285291.

Full text
Abstract:
This paper introduces an innovative approach to mobile app user feedback analysis by harnessing the power of Generative AI technologies. We present an integrated system architecture that seamlessly combines an API-based scheduler for comprehensive data collection, a cutting-edge Generative AI classifier for nuanced feedback categorization, robust database integration for efficient data management, and an interactive visualization module for actionable insights. Our system goes beyond traditional classification methods by not only categorizing feedback based on predefined themes but also dynamically generating new themes for previously unclassified feedback. This adaptive approach ensures comprehensive coverage of user concerns while maintaining a structured classification system. We evaluate our method using a diverse and extensive dataset of feedback items collected from multiple sources, including app store reviews, in-app feedback, customer care logs, and direct user emails. The experimental results demonstrate the significant advantages of our GenAI classifier over conventional approaches in terms of accuracy, theme coverage, and scalability. This paper makes a substantial contribution to the field of AI-driven feedback analysis, illustrating the transformative potential of Generative AI in enhancing mobile app quality and user experience through more efficient, accurate, and adaptable feedback processing methodologies.
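The classify-or-create control flow described in the abstract can be sketched schematically. In the paper the matching is done by a generative model; in the toy version below, the keyword lookup, theme names, and new-theme naming scheme are invented stand-ins:

```python
# Predefined themes mapped to hypothetical trigger keywords; a real system
# would ask a generative model whether feedback fits an existing theme.
themes = {"crash": ["crash", "freeze"], "billing": ["charge", "refund"]}

def classify(feedback):
    tokens = set(feedback.lower().split())
    for theme, keywords in themes.items():
        if tokens & set(keywords):
            return theme
    # No predefined theme matched: dynamically register a new theme,
    # mirroring the paper's "generate new themes" behavior.
    new_theme = "unclassified-" + sorted(tokens)[0]
    themes[new_theme] = sorted(tokens)
    return new_theme

labels = [classify(f) for f in
          ["App crash on startup", "Refund not processed", "Dark mode please"]]
```

The key design point is that the theme set grows over time instead of dumping unmatched feedback into a single catch-all bucket.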
APA, Harvard, Vancouver, ISO, and other styles
23

Bulut, Okan, and Tarid Wongvorachan. "Feedback Generation through Artificial Intelligence." Open/Technology in Education, Society, and Scholarship Association Conference 2, no. 1 (2022): 1–9. http://dx.doi.org/10.18357/otessac.2022.2.1.125.

Full text
Abstract:
Feedback is an essential part of the educational assessment that improves student learning. As education changes with the advancement of technology, educational assessment has also adapted to the advent of Artificial Intelligence (AI). Despite the increasing use of online assessments during the last decade, a limited number of studies have discussed the feedback generation process as implemented through AI. To address this gap, we propose a conceptual paper to organize and discuss the application of AI in the feedback generation and delivery processes. Among different branches of AI, Natural Language Processing (NLP), Educational Data Mining (EDM), and Learning Analytics (LA) play the most critical roles in the feedback generation process. The process begins with analyzing students’ data from educational assessments to build a predictive machine learning model with additional features such as students’ interaction with course material using EDM methods to predict students’ learning outcomes. Written feedback can be generated from a model with NLP-based algorithms before being delivered, along with non-verbal feedback via an LA dashboard or a digital score report. Also, ethical recommendations for using AI for feedback generation are discussed. This paper contributes to understanding the feedback generation process to serve as a venue for the future development of digital feedback.
APA, Harvard, Vancouver, ISO, and other styles
24

Wang, Wan. "Research on AI Application in English Writing Instruction." Lecture Notes in Education Psychology and Public Media 94, no. 1 (2025): 56–61. https://doi.org/10.54254/2753-7048/2025.cb23805.

Full text
Abstract:
With the enhancement of AI technology and the emergence of numerous AI tools that can assist in English writing, traditional English teaching methods have experienced both challenges and opportunities. This study analyzed the positive impacts and existing problems of AI-based English writing teaching. Through analysis of the findings of the empirical studies, conclusions are drawn on the following aspects. First, AI-based English writing teaching has a positive impact on personalized writing feedback, vocabulary and grammar enhancement, and improvement in writing capabilities. Nevertheless, problems such as over-reliance, feedback inaccuracy, misguidance, and cultural and linguistic limitations are seen in AI-based English writing teaching. Therefore, this study argues that AI tools can be integrated into student writing grading to reduce stress on teachers and, more importantly, provide in-depth and timely feedback that helps students expand their knowledge of the topic and write more effectively, correct their grammatical mistakes, and choose better vocabulary. Moreover, suggesting students use prompts can aid them in generating ideas and writing, which accelerates innovation and creativity. By contrast, addressing students' declining critical thinking and over-reliance requires a supervision mechanism to guide students in using AI tools correctly. Meanwhile, it is important for professionals to continue optimizing AI technology to improve the accuracy and reliability of feedback. Lastly, cultural and linguistic barriers need to be reduced through localization strategies and multilingual support.
APA, Harvard, Vancouver, ISO, and other styles
25

Ponnusamy, Pragaash, Alireza Roshan Ghias, Chenlei Guo, and Ruhi Sarikaya. "Feedback-Based Self-Learning in Large-Scale Conversational AI Agents." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 08 (2020): 13180–87. http://dx.doi.org/10.1609/aaai.v34i08.7022.

Full text
Abstract:
Today, most of the large-scale conversational AI agents such as Alexa, Siri, or Google Assistant are built using manually annotated data to train the different components of the system including Automatic Speech Recognition (ASR), Natural Language Understanding (NLU) and Entity Resolution (ER). Typically, the accuracy of the machine learning models in these components is improved by manually transcribing and annotating data. As the scope of these systems increases to cover more scenarios and domains, manual annotation to improve the accuracy of these components becomes prohibitively costly and time consuming. In this paper, we propose a system that leverages customer/system interaction feedback signals to automate learning without any manual annotation. Users of these systems tend to modify a previous query in hopes of fixing an error in the previous turn to get the right results. These reformulations are often preceded by defective experiences caused by errors in ASR, NLU, ER, or the application. In some cases, users may not properly formulate their requests (e.g. providing the partial title of a song), but gleaning across a wider pool of users and sessions reveals the underlying recurrent patterns. Our proposed self-learning system automatically detects the errors, generates reformulations, and deploys fixes to the runtime system to correct different types of errors occurring in different components of the system. In particular, we propose leveraging an absorbing Markov Chain model as a collaborative filtering mechanism in a novel attempt to mine these patterns. We show that our approach is highly scalable, and able to learn reformulations that reduce Alexa-user errors by pooling anonymized data across millions of customers. The proposed self-learning system achieves a win-loss ratio of 11.8 and effectively reduces the defect rate by more than 30% on utterance level reformulations in our production A/B tests.
To the best of our knowledge, this is the first self-learning large-scale conversational AI system in production.
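The absorbing Markov Chain mining can be illustrated on a toy reformulation graph; the utterances, transition probabilities, and two-by-two layout below are invented for illustration (the production model estimates these statistics from millions of anonymized sessions):

```python
import numpy as np

# Transient states: noisy utterance hypotheses users reformulate through.
# Absorbing states: terminal outcomes a session eventually lands on.
transient = ["play marun five", "play maroon five"]
absorbing = ["play maroon 5", "abandon"]

# Q: transitions among transient states; R: transient -> absorbing transitions.
Q = np.array([[0.0, 0.6],
              [0.0, 0.0]])
R = np.array([[0.1, 0.3],
              [0.8, 0.2]])

# Fundamental matrix N = (I - Q)^-1; B[i, j] is the probability that a session
# starting in transient state i is eventually absorbed in absorbing state j.
# The rewrite with the highest success-absorption probability becomes the fix.
N = np.linalg.inv(np.eye(len(transient)) - Q)
B = N @ R
best_rewrite = absorbing[int(np.argmax(B[0]))]
```

Here the defective utterance is most likely to end in the successful interpretation, so that interpretation would be deployed as the runtime rewrite.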
APA, Harvard, Vancouver, ISO, and other styles
26

Ponnusamy, Pragaash, Alireza Ghias, Yi Yi, Benjamin Yao, Chenlei Guo, and Ruhi Sarikaya. "Feedback-Based Self-Learning in Large-Scale Conversational AI Agents." AI Magazine 42, no. 4 (2022): 43–56. http://dx.doi.org/10.1609/aimag.v42i4.15102.

Full text
Abstract:
Today, most of the large-scale conversational AI agents such as Alexa, Siri, or Google Assistant are built using manually annotated data to train the different components of the system including automatic speech recognition (ASR), natural language understanding (NLU), and entity resolution (ER). Typically, the accuracy of the machine learning models in these components is improved by manually transcribing and annotating data. As the scope of these systems increases to cover more scenarios and domains, manual annotation to improve the accuracy of these components becomes prohibitively costly and time consuming. In this paper, we propose a system that leverages customer/system interaction feedback signals to automate learning without any manual annotation. Users of these systems tend to modify a previous query in hopes of fixing an error in the previous turn to get the right results. These reformulations are often preceded by defective experiences caused by errors in ASR, NLU, ER, or the application. In some cases, users may not properly formulate their requests (e.g., providing the partial title of a song), but gleaning across a wider pool of users and sessions reveals the underlying recurrent patterns. Our proposed self-learning system automatically detects the errors, generates reformulations, and deploys fixes to the runtime system to correct different types of errors occurring in different components of the system. In particular, we propose leveraging an absorbing Markov Chain model as a collaborative filtering mechanism in a novel attempt to mine these patterns, and coupling it with a guardrail rewrite selection mechanism that reactively evaluates these fixes using feedback friction data. We show that our approach is highly scalable, and able to learn reformulations that reduce Alexa-user errors by pooling anonymized data across millions of customers.
The proposed self-learning system achieves a win-loss ratio of 11.8 and effectively reduces the defect rate by more than 30 percent on utterance level reformulations in our production A/B tests. To the best of our knowledge, this is the first self-learning large-scale conversational AI system in production.
APA, Harvard, Vancouver, ISO, and other styles
27

Ponnusamy, Pragaash, Alireza Ghias, Yi Yi, Benjamin Yao, Chenlei Guo, and Ruhi Sarikaya. "Feedback-Based Self-Learning in Large-Scale Conversational AI Agents." AI Magazine 42, no. 4 (2022): 43–56. http://dx.doi.org/10.1609/aaai.12025.

Full text
Abstract:
Today, most of the large-scale conversational AI agents such as Alexa, Siri, or Google Assistant are built using manually annotated data to train the different components of the system including automatic speech recognition (ASR), natural language understanding (NLU), and entity resolution (ER). Typically, the accuracy of the machine learning models in these components is improved by manually transcribing and annotating data. As the scope of these systems increases to cover more scenarios and domains, manual annotation to improve the accuracy of these components becomes prohibitively costly and time consuming. In this paper, we propose a system that leverages customer/system interaction feedback signals to automate learning without any manual annotation. Users of these systems tend to modify a previous query in hopes of fixing an error in the previous turn to get the right results. These reformulations are often preceded by defective experiences caused by errors in ASR, NLU, ER, or the application. In some cases, users may not properly formulate their requests (e.g., providing the partial title of a song), but gleaning across a wider pool of users and sessions reveals the underlying recurrent patterns. Our proposed self-learning system automatically detects the errors, generates reformulations, and deploys fixes to the runtime system to correct different types of errors occurring in different components of the system. In particular, we propose leveraging an absorbing Markov Chain model as a collaborative filtering mechanism in a novel attempt to mine these patterns, and coupling it with a guardrail rewrite selection mechanism that reactively evaluates these fixes using feedback friction data. We show that our approach is highly scalable, and able to learn reformulations that reduce Alexa-user errors by pooling anonymized data across millions of customers.
The proposed self-learning system achieves a win-loss ratio of 11.8 and effectively reduces the defect rate by more than 30 percent on utterance level reformulations in our production A/B tests. To the best of our knowledge, this is the first self-learning large-scale conversational AI system in production.
APA, Harvard, Vancouver, ISO, and other styles
28

Lee, GaYoung, and SunYoung Huh. "Exploring the Effects of Generative AI-Based Personalized Feedback for Enhancing Pre-Service Teachers' Teaching Competencies." Korea Association of Yeolin Education 32, no. 2 (2024): 265–87. http://dx.doi.org/10.18230/tjye.2024.32.2.265.

Full text
Abstract:
This study aimed to develop personalized feedback using generative AI and apply it to pre-service teachers to examine their reactions and effects. For this purpose, the study utilized the generative AI ChatGPT to create three rounds of personalized feedback. The first feedback was tailored based on the diagnosis of teaching competencies, while the second and third feedback were customized for instructional plans written by pre-service teachers. These personalized feedback sessions were sequentially provided to a total of 40 pre-service teachers. The research results showed that learners generally perceived personalized feedback positively, indicating that the provided feedback was helpful for creating instructional plans or improving teaching competencies. However, some participants suggested that the feedback content was relatively general and expressed a desire for more specific examples tailored to their individual needs. Regarding the effectiveness of personalized feedback, the study confirmed improvement in teaching competencies related to instructional design and operational skills among pre-service teachers. This research is significant in providing specific applications of generative AI in the field of education at a time when its utility is increasing. However, the study has limitations as it only examined the results of teaching competency diagnoses in confirming the effectiveness of personalized feedback. Future research will aim to explore evaluation results for instructional plans and teaching demonstrations.
APA, Harvard, Vancouver, ISO, and other styles
29

Yang, Hongli, Yu Zhang, and Jixuan Guo. "Exploring the Effectiveness of Cooperative Pre-Service Teacher and Generative AI Writing Feedback on Chinese Writing." Behavioral Sciences 15, no. 4 (2025): 518. https://doi.org/10.3390/bs15040518.

Full text
Abstract:
Due to their efficiency, stability, and enhanced language comprehension and analysis capabilities, generative AIs have attracted increasing attention in the field of writing as higher-level automated writing evaluation (AWE) feedback tools. However, few studies have examined the impact of pre-service teachers using generative AI in combination with their own teaching experience to provide feedback on Chinese writing. To fill this gap, based on 1035 writing feedback texts, we examined the differences in writing feedback between 11 pre-service teachers and Ernie Bot (a generative AI) and interviewed the pre-service teachers about their willingness to cooperate with generative AI. The collaborative writing feedback generated by the pre-service teachers using AI was compared with the feedback generated by the pre-service teachers and generative AI separately. We identified that, although Ernie Bot provided significantly better feedback than the pre-service teachers in three specific areas (except for language expression), and both Ernie Bot and the pre-service teachers had respective advantages in terms of writing strategy, human–computer cooperative writing feedback was significantly better than the writing feedback provided by either Ernie Bot or the pre-service teachers alone. This was true across all aspects of the feedback in terms of focus and strategy. These findings can support the training of pre-service teachers and improve the writing quality of their students by implementing AI to provide more effective writing feedback.
APA, Harvard, Vancouver, ISO, and other styles
30

Sumedi, Siti Hanna. "AI-Powered Mediated Synchronous Corrective Feedback on Efl Senior High School Students’ Paragraph Writing Skill." EDULIA: English Education, Linguistic and Art Journal 5, no. 1 (2024): 12–22. https://doi.org/10.31539/edulia.v5i1.11113.

Full text
Abstract:
This study investigated the effect of AI-powered mediated Synchronous Corrective Feedback as an AI-based automated pedagogical aid on senior high school students’ English writing skill. Correspondingly, this study was conducted through quantitative methods, specifically a quasi-experimental research design. Thirty eleventh-grade students served as research subjects and were assigned to two different classes. Fifteen students were assigned to an experimental class that implemented AI-powered mediated Synchronous Corrective Feedback supported by the teacher’s corrective feedback in the form of track changes, recasts, and metalinguistic feedback. The remaining 15 students were assigned to a control class that applied AI-powered mediated Corrective Feedback on its own. This study revealed a significant effect of the AI-powered mediated Synchronous Corrective Feedback implemented in the experimental class on students’ paragraph writing scores. Students in the experimental class performed significantly better in writing English paragraphs and obtained higher scores in the English paragraph writing test than students in the control class. Therefore, this study concluded that AI-powered mediated Synchronous Corrective Feedback, combined with the support, presence, and assistance of teachers in providing immediate and synchronous feedback, was remarkably effective, as it successfully improved students’ writing skill and mastery in English Paragraph Writing. Keywords: Artificial Intelligence, Corrective Feedback, Writing Skill
APA, Harvard, Vancouver, ISO, and other styles
31

E. Duraimurugan, S. Kesava Sundara Nathan, and V. Vishnuvardhan. "Persona: Revolutionizing Conversations with AI Automation." International Research Journal on Advanced Engineering and Management (IRJAEM) 3, no. 03 (2025): 831–36. https://doi.org/10.47392/irjaem.2025.0134.

Full text
Abstract:
Implementing adaptive learning mechanisms improves the accuracy of the detection process by continuously refining its capabilities based on real-time user feedback. A website is designed to focus on interactive pathways, bot management, easy UI, and support. This approach ensures that the system evolves, learning from past interactions to deliver more accurate and reliable results. By efficiently managing bulk calls for regular feedback collection, an AI-powered call automation bot significantly reduces the burden on human operators. This automation minimizes manual intervention, ensuring consistency, reliability, and scalability in gathering valuable insights, allowing businesses to focus on more critical tasks. To enhance user interactions and provide a more personalized and engaging experience, the system utilizes advanced conversational AI algorithms. Robotic Process Automation (RPA) is seamlessly integrated into the feedback process, making it more efficient and responsive to user needs. Additionally, AI-driven feedback collection enables companies to manage large volumes of feedback effortlessly and provides real-time analysis for better decision-making. The system uses natural language processing and AI-driven analytics to understand and interpret feedback contextually. Ultimately, through continuous learning, maximizing engagement, and fostering operational excellence, AI and RPA are revolutionizing feedback automation.
APA, Harvard, Vancouver, ISO, and other styles
32

Saravanan, Kandaneri Ramamoorthy, Babu Praveenkumar, and Benazer S.Sakena. "Smart Feedback Generation: An AI-Based Approach for Real-Time Student Assistance." Journal of Advances in Computational Intelligence Theory 7, no. 1 (2024): 9–18. https://doi.org/10.5281/zenodo.13746895.

Full text
Abstract:
In recent years, the increasing demand for personalized and immediate feedback in educational settings has driven the development of advanced technologies. This paper presents a novel AI-based approach for real-time student assistance through smart feedback generation. The proposed system leverages machine learning algorithms and natural language processing techniques to analyze student performance, identify learning gaps, and provide tailored feedback in real time. The system is designed to support a wide range of educational activities, from formative assessments to continuous learning processes, enhancing student engagement and improving learning outcomes. By integrating adaptive learning paths and intelligent tutoring systems, the solution offers a scalable and efficient way to address the diverse needs of learners. Extensive experiments demonstrate the effectiveness of the proposed approach in various educational scenarios, highlighting its potential to revolutionize the traditional feedback mechanisms in education.
APA, Harvard, Vancouver, ISO, and other styles
33

Chaugule, Balaji. "HireIQ - AI-Based Mock Interview Platform with Behavioral Analysis." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem48228.

Full text
Abstract:
This initiative is concerned with the creation of an AI-based platform for conducting job interviews that combines technical assessment with behaviour analysis. The system enables candidates to respond to questions through video, where their responses and behaviours are analysed using advanced technology. Verbal responses are processed using speech recognition tools, and their content is assessed for technical accuracy and communication by natural language processing models. The platform uses computer vision to evaluate non-verbal cues such as facial expressions and posture, deriving insights about candidates' emotion, confidence, and engagement. By fusing verbal and non-verbal inputs, the system offers a more complete assessment of candidates, providing impartial, automated feedback. Through AI-based analysis, the hiring process is streamlined to be more effective and objective. Key Words: Helps Preparation, Improve Communication, Behaviour Analysis, Feedback
APA, Harvard, Vancouver, ISO, and other styles
34

Rainey, Clare, Raymond Bond, Jonathan McConnell, et al. "The impact of AI feedback on the accuracy of diagnosis, decision switching and trust in radiography." PLOS One 20, no. 5 (2025): e0322051. https://doi.org/10.1371/journal.pone.0322051.

Full text
Abstract:
Artificial intelligence decision support systems have been proposed to assist a struggling National Health Service (NHS) workforce in the United Kingdom. Its implementation in UK healthcare systems has been identified as a priority for deployment. Few studies have investigated the impact of the feedback from such systems on the end user. This study investigated the impact of two forms of AI feedback (saliency/heatmaps and AI diagnosis with percentage confidence) on student and qualified diagnostic radiographers’ accuracy when determining binary diagnosis on skeletal radiographs. The AI feedback proved beneficial to accuracy in all cases except when the AI was incorrect and for pathological cases in the student group. The self-reported trust of all participants decreased from the beginning to the end of the study. The findings of this study should guide developers in the provision of the most advantageous forms of AI feedback and direct educators in tailoring education to highlight weaknesses in human interaction with AI-based clinical decision support systems.
APA, Harvard, Vancouver, ISO, and other styles
35

Pookkuttath, Sathian, Raihan Enjikalayil Abdulkader, Mohan Rajesh Elara, and Prabakaran Veerajagadheswar. "AI-Enabled Vibrotactile Feedback-Based Condition Monitoring Framework for Outdoor Mobile Robots." Mathematics 11, no. 18 (2023): 3804. http://dx.doi.org/10.3390/math11183804.

Full text
Abstract:
An automated Condition Monitoring (CM) and real-time controlling framework is essential for outdoor mobile robots to ensure the robot’s health and operational safety. This work presents a novel Artificial Intelligence (AI)-enabled CM and vibrotactile haptic-feedback-based real-time control framework suitable for deploying mobile robots in dynamic outdoor environments. It encompasses two sections: developing a 1D Convolutional Neural Network (1D CNN) model for predicting system degradation and terrain flaws threshold classes and a vibrotactile haptic feedback system design enabling a remote operator to control the robot as per predicted class feedback in real-time. As vibration is an indicator of failure, we identified and separated system- and terrain-induced vibration threshold levels suitable for CM of outdoor robots into nine classes, namely Safe, moderately safe system-generated, and moderately safe terrain-induced affected by left, right, and both wheels, as well as severe classes such as unsafe system-generated and unsafe terrain-induced affected by left, right, and both wheels. The vibration-indicated data for each class are modelled based on data from two sensors: an Inertial Measurement Unit (IMU) sensor for the change in linear and angular motion and a current sensor for the change in current consumption at each wheel motor. A wearable novel vibrotactile haptic feedback device architecture is presented with left and right vibration modules configured with unique haptic feedback patterns corresponding to each abnormal vibration threshold class. The proposed haptic-feedback-based CM framework and real-time remote controlling are validated with three field case studies using an in-house-developed outdoor robot, resulting in a threshold class prediction accuracy of 91.1% and an effectiveness that, by minimising the traversal through undesired terrain features, is four times better than the usual practice.
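A 1D CNN of the kind described slides filters over windows of multi-channel sensor time series; a minimal sketch of one convolution-plus-ReLU layer follows (the channel layout, window length, and filter count are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def conv1d_valid(x, kernels, bias):
    # x: (channels, timesteps); kernels: (filters, channels, width); bias: (filters,)
    f, c, w = kernels.shape
    out_len = x.shape[1] - w + 1
    out = np.zeros((f, out_len))
    for i in range(out_len):
        window = x[:, i:i + w]  # (channels, width) slice of the signal
        # Contract each filter against the window, then add its bias term.
        out[:, i] = np.tensordot(kernels, window, axes=([1, 2], [0, 1])) + bias
    return np.maximum(out, 0.0)  # ReLU activation

# Assumed 7 input channels: 3-axis linear accel, 3-axis angular rate, wheel current
x = np.random.default_rng(0).standard_normal((7, 128))
kernels = np.random.default_rng(1).standard_normal((16, 7, 5)) * 0.1
features = conv1d_valid(x, kernels, bias=np.zeros(16))
```

A full classifier would stack several such layers, pool, and end in a nine-way softmax over the vibration threshold classes.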
APA, Harvard, Vancouver, ISO, and other styles
36

Dazzeo, Robin. "AI-Enhanced Writing Self- Assessment." Journal of Technology-Integrated Lessons and Teaching 3, no. 2 (2024): 80–85. https://doi.org/10.13001/jtilt.v3i2.9119.

Full text
Abstract:
This technology-rich, three-day lesson for 9th-12th grade English Language Arts students leverages artificial intelligence (AI)-enhanced rubrics and writing analysis tools (Ouyang & Jiao, 2021) to improve student writing and self-assessment skills. Students explore AI-enhanced digital rubrics, use AI tools to analyze their writing, and apply AI-generated feedback to revise their work. This approach enhances current writing assignments and develops critical thinking skills and digital literacy. Assessments include AI-generated feedback reports, peer evaluations, and final revised writing samples demonstrating improvement based on AI and peer input, equipping students with valuable skills for future writing tasks.
APA, Harvard, Vancouver, ISO, and other styles
37

Uparkar, Swati. "IntelliView: An AI Based Mock Interview Platform." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem31356.

Full text
Abstract:
The IntelliView initiative represents a pioneering endeavor designed to empower novice job seekers through the integration of state-of-the-art Artificial Intelligence (AI) and Natural Language Processing (NLP) technologies. This platform employs a sophisticated blend of HTML, CSS, JavaScript, and the Deep Face method, introducing a comprehensive framework to redefine interview assessment and augment job application preparation. The primary module facilitates text-based analysis, enabling users to engage with real-time interview questions by entering responses directly into the web interface. Subsequently, advanced algorithms compare users' responses to expected answers, yielding a percentage similarity score and presenting the correct answer. Beyond affording essential interview practice, this feature furnishes constructive feedback, enhancing users' responses and ultimately refining their interview performance. In the second module, IntelliView introduces a dynamic interview environment with video-based analysis. Users respond to inquiries utilizing webcams, enabling the system to meticulously record both verbal responses and facial expressions. Leveraging the Deep Face method, the platform conducts real-time emotion and sentiment analysis, offering users insights into their emotional states throughout interviews. This feedback facilitates the refinement of non-verbal communication skills, empowering candidates to recognize emotional tendencies and adapt interview strategies accordingly. The third module functions as a comprehensive resume builder, employing HTML, CSS, and JavaScript to provide diverse templates tailored to individual needs. In summation, IntelliView heralds a transformative paradigm in job application preparation, seamlessly amalgamating technological advancements and human interaction to equip first-time job seekers with the requisite tools for navigating the competitive job market successfully. 
Key Words: Interview Assessment, DeepFace, NLP, Text-based analysis, Video-based analysis, Resume Builder
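The percentage-similarity scoring described in the abstract above can be illustrated with a minimal sketch. The paper does not disclose which comparison algorithm IntelliView uses, so the bag-of-words cosine similarity below (the function names `tokenize` and `similarity_percent` are hypothetical) is only one plausible approach:

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words representation of an answer."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def similarity_percent(response: str, expected: str) -> float:
    """Cosine similarity between a user's response and the expected answer,
    reported as a 0-100 score. Illustrative stand-in only; the paper does
    not specify its comparison algorithm."""
    a, b = tokenize(response), tokenize(expected)
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return round(100.0 * dot / norm, 1) if norm else 0.0
```

In practice, an NLP pipeline of this kind would typically add stemming, stop-word removal, or embedding-based similarity so that paraphrased answers also score well.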
APA, Harvard, Vancouver, ISO, and other styles
38

Journal, IJSREM. "IntelliView: An AI Based Mock Interview Platform." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 03 (2024): 1–11. http://dx.doi.org/10.55041/ijsrem29201.

Full text
Abstract:
The IntelliView initiative represents a pioneering endeavor designed to empower novice job seekers through the integration of state-of-the-art Artificial Intelligence (AI) and Natural Language Processing (NLP) technologies. This platform employs a sophisticated blend of HTML, CSS, JavaScript, and the Deep Face method, introducing a comprehensive framework to redefine interview assessment and augment job application preparation. The primary module facilitates text-based analysis, enabling users to engage with real-time interview questions by entering responses directly into the web interface. Subsequently, advanced algorithms compare users' responses to expected answers, yielding a percentage similarity score and presenting the correct answer. Beyond affording essential interview practice, this feature furnishes constructive feedback, enhancing users' responses and ultimately refining their interview performance. In the second module, IntelliView introduces a dynamic interview environment with video-based analysis. Users respond to inquiries utilizing webcams, enabling the system to meticulously record both verbal responses and facial expressions. Leveraging the Deep Face method, the platform conducts real-time emotion and sentiment analysis, offering users insights into their emotional states throughout interviews. This feedback facilitates the refinement of non-verbal communication skills, empowering candidates to recognize emotional tendencies and adapt interview strategies accordingly. The third module functions as a comprehensive resume builder, employing HTML, CSS, and JavaScript to provide diverse templates tailored to individual needs. In summation, IntelliView heralds a transformative paradigm in job application preparation, seamlessly amalgamating technological advancements and human interaction to equip first-time job seekers with the requisite tools for navigating the competitive job market successfully. 
Key Words: Interview Assessment, DeepFace, NLP, Text-based analysis, Video-based analysis, Resume Builder
APA, Harvard, Vancouver, ISO, and other styles
39

Chan, Sumie Tsz Sum, Noble Po Kan Lo, and Alan Man Him Wong. "Enhancing university level English proficiency with generative AI: Empirical insights into automated feedback and learning outcomes." Contemporary Educational Technology 16, no. 4 (2024): ep541. http://dx.doi.org/10.30935/cedtech/15607.

Full text
Abstract:
This paper investigates the effects of large language model (LLM) based feedback on the essay writing proficiency of university students in Hong Kong. It focuses on exploring the potential improvements that generative artificial intelligence (AI) can bring to student essay revisions, its effect on student engagement with writing tasks, and the emotions students experience while undergoing the process of revising written work. Utilizing a randomized controlled trial, it draws comparisons between the experiences and performance of 918 language students at a Hong Kong university, some of whom received generated feedback (GPT-3.5-turbo LLM) and some of whom did not. The impact of AI-generated feedback is assessed not only through quantifiable metrics, entailing statistical analysis of the impact of AI feedback on essay grading, but also through subjective indices, student surveys that captured motivational levels and emotional states, as well as thematic analysis of interviews with participating students. The incorporation of AI-generated feedback into the revision process demonstrated significant improvements in the caliber of students’ essays. The quantitative data suggests notable effect sizes of statistical significance, while qualitative feedback from students highlights increases in engagement and motivation as well as a mixed emotional experience during revision among those who received AI feedback.
APA, Harvard, Vancouver, ISO, and other styles
40

Sysoyev, P. V., E. M. Filatov, M. N. Evstigneev, O. G. Polyakov, I. A. Evstigneeva, and D. O. Sorokin. "A matrix of artificial intelligence tools in pre-service foreign language teacher training." Tambov University Review. Series: Humanities 29, no. 3 (2024): 559–88. http://dx.doi.org/10.20310/1810-0201-2024-29-3-559-588.

Full text
Abstract:
Importance. The modern stage of the information and technological development of civilization is characterised by the dynamic emergence of artificial intelligence (AI) technologies and of tools based on them, which are increasingly being introduced into various spheres of life. The education system in general, and foreign language education in particular, is no exception. Currently, there are several dozen AI tools actively used by students and teachers in developing foreign language communicative skills and language skills. A sizeable body of academic research is devoted to the language teaching potential of modern AI tools. However, most of these studies are of a pilot nature, focusing on particular methods for developing students' communicative skills or certain language skills with individual AI tools. The systematic integration of AI technologies into the linguistic and teaching methods training of students majoring in foreign languages (future foreign language teachers) has not been the subject of a special study. The purpose of this study is to develop a matrix of AI tools used in the linguistic and teaching methods training of future foreign language teachers. Materials and Methods. The study is conducted on the basis of the expert assessment method, which makes it possible to: a) identify the language teaching potential, as well as the limitations, of the most common AI tools; b) summarise and classify the available knowledge in the form of a matrix of AI tools used in the linguistic and teaching methods training of future foreign language teachers. The materials of the study were research articles published in Russian and foreign academic journals indexed in Web of Science and Scopus. Results and Discussion.
A matrix of AI tools in the linguistic and teaching methods training of future foreign language teachers has been developed. The matrix is presented according to six types of feedback from generative AI used in foreign language teaching and teaching methods. The following are the main and most accessible AI tools for teachers and students providing feedback of each type: a) Replika, LingvoBot, Multitran_bot, Slavaribot, WorldContextBot, ChatGPT, Google Assistant, EGEEnglish.ru (educational and social feedback); b) ChatGPT, YandexGPT and GigaChat (information and reference feedback); c) ChatGPT 4.0, YandexGPT, GigaChat, Twee (methodological feedback); d) ChatGPT, YandexGPT, GigaChat, Turnitin, software “Antiplagiat” (analytical feedback); e) Grammarly, PaperRater, Pigai, ChatGPT 4.0, YandexGPT, GigaChat, Criterion (assessment and evaluative feedback); f) ChatGPT, YandexGPT, GigaChat, AI Poem Generator, Midjourney, Suno, Sora, Runway (conditionally creative feedback). Conclusion. The novelty of the research consists in the development of a matrix of AI tools in the linguistic and teaching methods training of future foreign language teachers. The prospects for further research lie in the development of teaching methods for aspects of language, types of speech activity, as well as specialised disciplines based on specific AI tools. In their entirety, these particular methods will enable the creation of an integrated system of linguistic and teaching methods training of future foreign language teachers based on AI tools.
APA, Harvard, Vancouver, ISO, and other styles
41

Prashant Gulave and Dr. Kavita Moholkar. "Trends in SDLC Document Review using Generative AI." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 2 (2025): 3009–11. https://doi.org/10.32628/cseit25112777.

Full text
Abstract:
This research paper explores the evolving role of Generative AI in Software Development Life Cycle (SDLC) document review. With AI-driven advancements in Natural Language Processing (NLP), models such as GPT, BERT, and domain-specific LLMs have been adapted to evaluate requirement specifications, test plans, and design documents. We present an analysis of how these models are being fine-tuned for document validation, compliance checking, and contextual feedback generation in the software industry. The paper also examines the integration of rule-based methods with AI, providing structured feedback for engineering domain documentation. Furthermore, we discuss emerging trends, challenges, and future research directions for enhancing AI-based document review in SDLC.
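The integration of rule-based methods with AI that this abstract mentions can be sketched on its rule-based side. The checks below are hypothetical (the paper does not enumerate its rules) and only illustrate how structured findings might be produced for one line of a requirements document:

```python
import re

# Illustrative rule set (invented for this sketch; not taken from the paper):
# requirement statements should carry an ID, use "shall", and avoid vague terms.
VAGUE_TERMS = {"as appropriate", "user-friendly", "fast", "etc."}

def review_requirement(line: str) -> list[str]:
    """Return rule-based findings for a single requirement line."""
    findings = []
    if not re.match(r"REQ-\d+:", line):
        findings.append("missing requirement ID (expected 'REQ-<n>:')")
    if "shall" not in line.lower():
        findings.append("no 'shall' statement")
    for term in sorted(VAGUE_TERMS):
        if term in line.lower():
            findings.append(f"vague term: '{term}'")
    return findings
```

In a hybrid pipeline of the kind the paper surveys, such deterministic findings would be merged with contextual feedback generated by an LLM fine-tuned on the document domain.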
APA, Harvard, Vancouver, ISO, and other styles
42

Narayan, Ram, Anita Gehlot, Rajesh Singh, Shaik Vaseem Akram, Neeraj Priyadarshi, and Bhekisipho Twala. "Hospitality Feedback System 4.0: Digitalization of Feedback System with Integration of Industry 4.0 Enabling Technologies." Sustainability 14, no. 19 (2022): 12158. http://dx.doi.org/10.3390/su141912158.

Full text
Abstract:
Digitalization enables resilient infrastructure in every application domain, supporting sustainability. In the hospitality business, digitally enabled resilient infrastructure is critical for obtaining the best customer feedback on service quality. Digital technology has already proved able to enhance hospitality services with intelligent decisions driven by real-time data. Numerous theoretical and empirical studies have examined the significance of digital technologies in the hotel sector, yet research discussing feedback systems in hospitality built on digital technologies remains scarce. Motivated by these gaps, this study presents the importance and application of the Internet of Things (IoT), artificial intelligence (AI), cloud computing, and big data for customer quality and satisfaction, and discusses each technology's significance and application for realizing digitally based customer quality and satisfaction. It was identified that AI-based systems collect input data from common websites and compare it across algorithms using a neural network. According to the findings of this study, AI and personnel quality of service have an impact on customer satisfaction and loyalty. The study concludes with recommendations such as the design and development of dedicated hardware to gather genuine customer feedback at scale and improve accuracy in the future.
APA, Harvard, Vancouver, ISO, and other styles
43

Hernández, Roque Jacinto. "Optimizing the Effectiveness of Chat GPT’s Feedback on ESL Student's Written Productions." YUYAY: Estrategias, Metodologías & Didácticas Educativas 3, no. 2 (2024): 50–61. http://dx.doi.org/10.59343/yuyay.v3i2.69.

Full text
Abstract:
This essay explores the integration of AI technologies, specifically ChatGPT, into ESL education to enhance the feedback process. It argues for a rubric-based framework to ensure the feedback aligns with pedagogical objectives and effectively meets student needs. The discussion includes various studies highlighting the importance of feedback in language learning and the potential of AI to offer timely, personalized feedback. By employing a systematic evaluation of ChatGPT’s responses through a well-defined rubric, educators can refine the feedback to be more supportive and effective. This approach not only optimizes AI's utility in ESL education but also promotes a deeper understanding of effective teaching and learning strategies. The essay underscores the transformative potential of AI in education, advocating for a balanced integration that enhances rather than replaces traditional educational methods.
APA, Harvard, Vancouver, ISO, and other styles
44

Bogolepova, S. V. "Potential of Artificial Intelligence Tools for Text Evaluation and Feedback Provision." Professional Discourse & Communication 7, no. 1 (2025): 70–88. https://doi.org/10.24833/2687-0126-2025-7-1-70-88.

Full text
Abstract:
The article aims to explore the potential of generative artificial intelligence (AI) for assessing written work and providing feedback on it. The goal of this research is to determine the possibilities and limitations of generative AI when used for evaluating students’ written production and providing feedback. To accomplish the aim, a systematic review of twenty-two original studies was conducted. The selected studies were carried out in both Russian and international contexts, with results published between 2022 and 2025. It was found that the criteria-based assessments made by generative models align with those of instructors, and that generative AI surpasses human evaluators in its ability to assess language and argumentation. However, the reliability of this evaluation is negatively affected by the instability of sequential assessments, the hallucinations of generative models, and their limited ability to account for contextual nuances. Despite the detail and constructive nature of generative AI feedback, it is often insufficiently specific and overly verbose, which can hinder student comprehension. Feedback from generative models primarily targets local deficiencies, while human evaluators pay attention to global issues, such as the incomplete alignment of content with the assigned topic. Unlike instructors, generative AI provides template-based feedback, avoiding the indirect phrasing and leading questions that contribute to the development of self-regulation skills. Nevertheless, these shortcomings can be addressed through subsequent queries to the generative model. It was also found that students are open to receiving feedback from generative AI; however, they prefer to receive it from instructors and peers. The results are discussed in the context of using generative models for evaluating written work and formulating feedback by foreign language instructors.
The conclusion emphasises the necessity of a critical approach to using generative models in the assessment of written work and the importance of training instructors for effective interaction with these technologies.
APA, Harvard, Vancouver, ISO, and other styles
45

Kadaruddin, Muh. Nurtanzis Sutoyo, Heri Alfian, et al. "IMPLEMENTATION OF A MOBILE APPLICATION BASED ON MULTIMEDIA AND AI FOR INTERACTIVE LEARNING IN HIGHER EDUCATION." MORFAI JOURNAL 5, no. 2 (2025): 648–60. https://doi.org/10.54443/morfai.v5i2.2725.

Full text
Abstract:
The advancement of technology in education has led to the development of interactive and personalized learning tools. This research focuses on creating a mobile learning application with multimedia and AI support to improve student engagement, motivation, and academic performance in higher education. This research aims to answer the question: How effective is an AI-based mobile learning app with multimedia support in enhancing student interactivity, motivation, and performance in higher education? The research follows the ADDIE model (Analysis, Design, Development, Implementation, and Evaluation). Data was collected from expert validators (media and material experts) and student feedback. A quantitative descriptive analysis was conducted using Likert-scale instruments to measure user experience, interactivity, motivation, and learning outcomes. The application was tested on 38 students, and their feedback was used to assess its effectiveness. The results show that the AI-based mobile learning app significantly improved student interactivity, motivation, and academic performance. The post-test results showed an average score of 81.82%, indicating an improvement in learning outcomes. Students expressed high satisfaction with the app’s interactivity, multimedia content, and personalized learning paths. The AI-driven real-time feedback contributed to a more engaging learning experience. In conclusion, the AI-based mobile learning app with multimedia support effectively enhances student engagement, motivation, and academic performance. The integration of personalized learning paths and real-time feedback makes learning more interactive and adaptive. These findings highlight the need to design future mobile learning apps that prioritize accessibility and adaptability.
APA, Harvard, Vancouver, ISO, and other styles
46

Hwang, Jihyun, Yechan Lee, Seung-Hyun Kim, Juyeon Lee, and Wan Hui Lee. "Effectiveness analysis of AI-based educational technology tools." Korean School Mathematics Society 28, no. 2 (2025): 143–62. https://doi.org/10.30807/ksms.2025.28.2.004.

Full text
Abstract:
This study examined the effectiveness of AI-based educational technology through a case study of JindanMath, an AI-driven tool designed to support personalized mathematics instruction. JindanMath incorporated features such as real-time feedback, personalized learning pathways, and error note recommendations, aimed at enhancing teaching practices and student engagement. The study employed the Fidelity of Implementation (FOI) framework and a logic model to assess how the tool’s design and implementation influenced classroom outcomes, as opposed to using a traditional group comparison design. Data collection included teacher interviews, student usage logs, and a Delphi survey to evaluate key aspects of pedagogical effectiveness. The findings revealed that JindanMath significantly improved teacher efficiency through automated task management, personalized feedback, and student performance monitoring. Teachers benefited from streamlined grading, which allowed more time for instructional planning and student engagement activities. The tool also enhanced student learning by fostering autonomy and personalized learning experiences. However, sustaining long-term student participation posed a challenge, as some students showed decreased engagement over time. This finding underscored the importance of ongoing motivation and adaptive instructional strategies tailored to diverse learner needs. The study concluded that AI-based tools like JindanMath could have a profound impact when aligned with clear instructional goals and continuously optimized based on classroom contexts. By offering empirical evidence, this research informed educators and developers of effective practices for evaluating the impact of AI-based tools in educational settings.
APA, Harvard, Vancouver, ISO, and other styles
47

Shekhawat, Devanshu, Anjali Mishra, Anjali Singh, and Aaryan Singh. "Gen-AI Based Chatbot on Top of Llama for SSR Documents." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 03 (2025): 1–9. https://doi.org/10.55041/ijsrem43414.

Full text
Abstract:
Institutions generate vast amounts of information, making efficient access challenging. The proposed institutional chatbot integrates Llama, a conversational AI, into the institution’s website to provide real-time access to Self-Study Reports (SSR) and other documents. Using natural language processing (NLP), the chatbot streamlines communication, minimizes manual searches, and enhances transparency and accessibility. By leveraging machine learning, it interprets queries, retrieves relevant data, and generates human-like responses. A feedback loop ensures continuous learning, while Llama’s scalability enables accurate responses to complex queries. This AI-driven system modernizes institutional knowledge management and improves efficiency. Keywords - Institutional chatbot, Llama, natural language processing (NLP), Self-Study Reports (SSR), machine learning, information retrieval, feedback loop, knowledge management.
APA, Harvard, Vancouver, ISO, and other styles
48

Gupta, Neha, Aashi Sharma, Suneet Shukla, and Praveen Kumar. "AI-based mock interview evaluator: An emotion and confidence classifier model." International Journal of Research in Engineering and Innovation 09, no. 03 (2025): 136–42. https://doi.org/10.36037/ijrei.2025.9308.

Full text
Abstract:
In today’s highly competitive job market, interview preparedness has become more critical than ever. Traditional mock interviews often lack scalability, objectivity, and actionable insights. To address these limitations, this paper proposes an AI-Based Mock Interview Evaluator that combines emotion recognition and confidence classification to assess candidate performance. Using facial expression analysis and speech feature extraction, the system offers real-time feedback on emotional state and confidence level during simulated interview sessions. The model leverages Convolutional Neural Networks (CNNs) for emotion detection and supervised machine learning techniques for vocal confidence estimation. Experimental results show that the system can classify emotional and confidence cues with high accuracy, providing an interactive and personalized experience for users. The proposed tool aims to democratize interview training by offering intelligent, automated feedback that enhances self-awareness and performance for job seekers. This paper presents an AI-driven mock interview evaluator designed to assess candidates' emotional states and confidence levels during simulated interviews. By integrating facial expression recognition, speech analysis, and machine learning classifiers, the system provides real-time feedback to enhance interview preparation. The model aims to bridge the gap between traditional interview coaching and modern AI capabilities, offering a scalable solution for personalized interview training.
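As a rough illustration of the vocal-confidence side of such a system, the sketch below classifies a response from three prosodic features. The features, weights, and thresholds are placeholders invented for illustration; the paper trains supervised models on extracted speech features rather than using fixed rules:

```python
def confidence_score(speech_rate_wpm: float, filler_ratio: float,
                     mean_volume_db: float) -> str:
    """Classify vocal confidence from three prosodic features.

    Hypothetical heuristic stand-in for the paper's supervised classifier:
    each feature votes +1 (confident cue) or -1 (hesitant cue).
    """
    score = 0.0
    score += 1.0 if 110 <= speech_rate_wpm <= 170 else -1.0   # steady pace
    score += 1.0 if filler_ratio < 0.05 else -1.0             # few "um"/"uh"
    score += 1.0 if mean_volume_db > -20.0 else -1.0          # audible voice
    return "confident" if score > 0 else "hesitant"
```

A trained model would learn such decision boundaries from labeled interview recordings instead of hard-coding them, and would combine the result with the CNN's facial-emotion output.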
APA, Harvard, Vancouver, ISO, and other styles
49

Vasuki, Vasuki. "AI DIET PLANNER BASED ON USER GOALS." International Scientific Journal of Engineering and Management 04, no. 06 (2025): 1–9. https://doi.org/10.55041/isjem04275.

Full text
Abstract:
This project proposes an AI diet planner that generates user-specific meal plans based on each user's goal, i.e., weight loss, muscle growth, or a healthy lifestyle. Machine learning algorithms and a nutrition database are used to collect and process user-specific information such as age, gender, weight, height, activity level, food preferences, and medical conditions. The AI algorithm dynamically adapts key nutrient and calorie targets and generates meal plans according to the user's goals and preferences. The planner includes adaptive real-time feedback that helps users monitor their progress over the long term and follow evolving nutrition recommendations. These adaptive, user-oriented, intelligent features support healthier dietary habits and long-term wellness interventions. Keywords: Machine Learning, K-means.
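One concrete piece of such a planner is the goal-adjusted daily calorie target. The paper does not state which energy formula it uses; the sketch below assumes the widely used Mifflin-St Jeor equation, with illustrative ±15% goal adjustments:

```python
def daily_calorie_target(sex: str, weight_kg: float, height_cm: float,
                         age: int, activity_factor: float, goal: str) -> int:
    """Goal-adjusted daily calorie target via the Mifflin-St Jeor equation.

    The activity factor (e.g. 1.2 sedentary .. 1.9 very active) scales basal
    metabolic rate to total daily energy expenditure; the +/-15% goal
    adjustments are illustrative assumptions, not values from the paper.
    """
    bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age + (5 if sex == "male" else -161)
    tdee = bmr * activity_factor  # total daily energy expenditure
    adjust = {"weight_loss": 0.85, "muscle_gain": 1.15, "maintenance": 1.0}[goal]
    return round(tdee * adjust)
```

A K-means step, as hinted by the paper's keyword, could then cluster foods or users around such targets to assemble candidate menus.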
APA, Harvard, Vancouver, ISO, and other styles
50

Kim, Dongsim, Dahyeon Ryo, Kudong Park, and Hyesin Kang. "Evaluating the effectiveness of AI-based synchronous online feedback education system." Journal of Korean Association of Computer Education 27, no. 7 (2024): 11–23. https://doi.org/10.32431/kace.2024.27.7.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles