
Journal articles on the topic 'Video annotation tool'


Consult the top 50 journal articles for your research on the topic 'Video annotation tool.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Benitez-Garcia, Gibran, Jesus Olivares-Mercado, Gabriel Sanchez-Perez, and Hiroki Takahashi. "IPN HandS: Efficient Annotation Tool and Dataset for Skeleton-Based Hand Gesture Recognition." Applied Sciences 15, no. 11 (2025): 6321. https://doi.org/10.3390/app15116321.

Abstract:
Hand gesture recognition (HGR) heavily relies on high-quality annotated datasets. However, annotating hand landmarks in video sequences is a time-intensive challenge. In this work, we introduce IPN HandS, an enhanced version of our IPN Hand dataset, which now includes approximately 700,000 hand skeleton annotations and corrected gesture boundaries. To generate these annotations efficiently, we propose a novel annotation tool that combines automatic detection, inter-frame interpolation, copy–paste capabilities, and manual refinement. This tool significantly reduces annotation time from 70 min to just 27 min per video, allowing for the scalable and precise annotation of large datasets. We validate the advantages of the IPN HandS dataset by training a lightweight LSTM-based model using these annotations and comparing its performance against models trained with annotations from the widely used MediaPipe hand pose estimators. Our model achieves an accuracy that is 12% higher than the MediaPipe Hands model and 8% higher than the MediaPipe Holistic model. These results underscore the importance of annotation quality in training generalization and overall recognition performance. Both the IPN HandS dataset and the annotation tool will be released to support reproducible research and future work in HGR and related fields.
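The inter-frame interpolation step mentioned in this abstract can be pictured with a small sketch: hand keypoints annotated on two keyframes are used to fill in the frames between them automatically. The snippet below is only an illustration of that idea under assumed array shapes and names; it is not the authors' released tool.

```python
# Minimal sketch of inter-frame keypoint interpolation between two annotated
# keyframes. Illustrative only; not the IPN HandS implementation.
import numpy as np

def interpolate_keypoints(kp_start, kp_end, n_between):
    """Linearly interpolate hand keypoints between two annotated frames.

    kp_start, kp_end: arrays of shape (21, 2) with (x, y) per hand landmark.
    n_between: number of unannotated frames between the two keyframes.
    Returns an array of shape (n_between, 21, 2).
    """
    kp_start = np.asarray(kp_start, dtype=float)
    kp_end = np.asarray(kp_end, dtype=float)
    # Fractional positions of the in-between frames along the segment.
    alphas = np.linspace(0.0, 1.0, n_between + 2)[1:-1]
    return np.stack([(1 - a) * kp_start + a * kp_end for a in alphas])

if __name__ == "__main__":
    start = np.zeros((21, 2))          # annotated keyframe t
    end = np.full((21, 2), 10.0)       # annotated keyframe t + 5
    frames = interpolate_keypoints(start, end, n_between=4)
    print(frames.shape)                # (4, 21, 2)
```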
2

Groh, Florian, Dominik Schörkhuber, and Margrit Gelautz. "A tool for semi-automatic ground truth annotation of traffic videos." Electronic Imaging 2020, no. 16 (2020): 200–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.16.avm-150.

Abstract:
We have developed a semi-automatic annotation tool – “CVL Annotator” – for bounding box ground truth generation in videos. Our research is particularly motivated by the need for reference annotations of challenging nighttime traffic scenes with highly dynamic lighting conditions due to reflections, headlights and halos from oncoming traffic. Our tool incorporates a suite of different state-of-the-art tracking algorithms in order to minimize the amount of human input necessary to generate high-quality ground truth data. We focus our user interface on the premise of minimizing user interaction and visualizing all information relevant to the user at a glance. We perform a preliminary user study to measure the amount of time and clicks necessary to produce ground truth annotations of video traffic scenes and evaluate the accuracy of the final annotation results.
3

Von Wachter, Jana-Kristin, and Doris Lewalter. "Video Annotation as a Supporting Tool for Video-based Learning in Teacher Training – A Systematic Literature Review." International Journal of Higher Education 12, no. 2 (2023): 1. http://dx.doi.org/10.5430/ijhe.v12n2p1.

Abstract:
Digital video annotation tools, which allow users to add synchronized comments to video content, have gained significant attention in teacher education in recent years. However, there is no overview of the research on the use of annotations, their implementation in teacher training and their effect on the development of professional competencies as a result of using video annotations as a supporting tool for video-based learning. In order to fill this gap, this paper reports on the results of a systematic literature review which was carried out to determine 1) how video annotations were implemented in studies in educational settings, 2) which professional competencies were investigated to be further developed with the aid of video annotations in these studies, and 3) which learning outcomes were reported in the selected studies. A total of 18 eligible studies, published between 2014 and 2022, were identified via database search and cross-referencing. A qualitative content analysis of these studies showed that video annotations were generally used to perform one or more of three functions, these being feedback, communication, and documentation, while they also enabled a deeper content knowledge of teaching, reflective skills, and professional vision, and facilitated social integration and recognition. The convincing evidence of the positive effect of using video annotation as a supporting tool in video-based teacher training proves it to be a powerful tool supporting the development of professional vision and other teaching skills. The use of video annotation tools in educational settings also points towards further research.
4

Yammahi, Amal Al. "Entrepreneurship Mentorship for HCT Education Alumni to Transition as Lead Businesswomen in the UAE Education Sector." International Journal of Higher Education 12, no. 2 (2023): 73. http://dx.doi.org/10.5430/ijhe.v12n2p73.

5

Barz, Michael, Omair Shahzad Bhatti, Hasan Md Tusfiqur Alam, et al. "eyeNotate: Interactive Annotation of Mobile Eye Tracking Data Based on Few-Shot Image Classification." Journal of Eye Movement Research 18, no. 4 (2025): 27. https://doi.org/10.3390/jemr18040027.

Abstract:
Mobile eye tracking is an important tool in psychology and human-centered interaction design for understanding how people process visual scenes and user interfaces. However, analyzing recordings from head-mounted eye trackers, which typically include an egocentric video of the scene and a gaze signal, is a time-consuming and largely manual process. To address this challenge, we develop eyeNotate, a web-based annotation tool that enables semi-automatic data annotation and learns to improve from corrective user feedback. Users can manually map fixation events to areas of interest (AOIs) in a video-editing-style interface (baseline version). Further, our tool can generate fixation-to-AOI mapping suggestions based on a few-shot image classification model (IML-support version). We conduct an expert study with trained annotators (n = 3) to compare the baseline and IML-support versions. We measure the perceived usability, annotations’ validity and reliability, and efficiency during a data annotation task. We asked our participants to re-annotate data from a single individual using an existing dataset (n = 48). Further, we conducted a semi-structured interview to understand how participants used the provided IML features and assessed our design decisions. In a post hoc experiment, we investigate the performance of three image classification models in annotating data of the remaining 47 individuals.
6

Ardley, Jillian, and Jacqueline Johnson. "Video Annotation Software in Teacher Education: Researching University Supervisor’s Perspective of a 21st-Century Technology." Journal of Educational Technology Systems 47, no. 4 (2018): 479–99. http://dx.doi.org/10.1177/0047239518812715.

Abstract:
Video recordings for student teaching field experiences have been utilized with student teachers (also known as teacher candidates) to (a) capture the demonstration of their lesson plans, (b) critique their abilities within the performance, and (c) share and rate experiences for internal and external evaluations by the state and other organizations. Many times, the recording, saving, grading, and sharing process was not efficient. Thus, the feedback cycle from the university supervisor to the teacher candidate was negatively impacted. However, one communication technology tool that has the potential to facilitate the feedback process is video annotation software. This communication technology uses the storage within a remote server, known also as a cloud, to store videos that include typed commentary that is in sync with the portion of the video recorded. A group of university supervisors piloted a video annotation tool during student teaching to rate its effectiveness. Through a survey, the participants addressed how they perceived the implementation of the video annotation tool within the student teaching experience. Results suggest a video annotated technology-based supervision method is feasible and effective if paired with effective training and technical support.
7

Gil de Gómez Pérez, David, and Roman Bednarik. "POnline: An Online Pupil Annotation Tool Employing Crowd-sourcing and Engagement Mechanisms." Human Computation 6 (December 10, 2019): 176–91. http://dx.doi.org/10.15346/hc.v6i1.99.

Abstract:
Pupil center and pupil contour are two of the most important features in the eye image used for video-based eye tracking. Well-annotated databases are needed in order to allow benchmarking of available and new pupil detection and gaze estimation algorithms. Unfortunately, creating such a data set is costly and requires a lot of effort, including manual work by the annotators. In addition, the reliability of manual annotations is hard to establish with a low number of annotators. In order to facilitate progress in gaze tracking algorithm research, we created an online pupil annotation tool that engages many users to interact through gamification and allows utilization of the crowd's power to create reliable annotations. We describe the tool and the mechanisms employed, and report results on the annotation of a publicly available data set. Finally, we demonstrate an example utilization of the new high-quality annotation on a comparison of two state-of-the-art pupil center algorithms.
8

Feng, Shuo, James Wainwright, Chong Wang, et al. "Video Segmentation of Wire + Arc Additive Manufacturing (WAAM) Using Visual Large Model." Sensors 25, no. 14 (2025): 4346. https://doi.org/10.3390/s25144346.

Abstract:
Process control and quality assurance of wire + arc additive manufacturing (WAAM) and automated welding rely heavily on in-process monitoring videos to quantify variables such as melt pool geometry, location and size of droplet transfer, arc characteristics, etc. To enable feedback control based upon this information, an automatic and robust segmentation method for monitoring of videos and images is required. However, video segmentation in WAAM and welding is challenging due to constantly fluctuating arc brightness, which varies with deposition and welding configurations. Additionally, conventional computer vision algorithms based on greyscale value and gradient lack flexibility and robustness in this scenario. Deep learning offers a promising approach to WAAM video segmentation; however, the prohibitive time and cost associated with creating a well-labelled, suitably sized dataset have hindered its widespread adoption. The emergence of large computer vision models, however, has provided new solutions. In this study a semi-automatic annotation tool for WAAM videos was developed based upon the computer vision foundation model SAM and the video object tracking model XMem. The tool can enable annotation of the video frames hundreds of times faster than traditional manual annotation methods, thus making it possible to achieve rapid quantitative analysis of WAAM and welding videos with minimal user intervention. To demonstrate the effectiveness of the tool, three cases are demonstrated: online wire position closed-loop control, droplet transfer behaviour analysis, and assembling a dataset for dedicated deep learning segmentation models. This work provides a broader perspective on how to exploit large models in WAAM and weld deposits.
9

Aasman, Susan, Liliana Melgar Estrada, Tom Slootweg, and Rob Wegter. "Tales of a Tool Encounter." Audiovisual Data in Digital Humanities 7, no. 14 (2018): 73. http://dx.doi.org/10.18146/2213-0969.2018.jethc154.

Abstract:
This article explores the affordances and functionalities of the Dutch CLARIAH research infrastructure – and the integrated video annotation tool – for doing media historical research with digitised audiovisual sources from television archives. The growing importance of digital research infrastructures, archives and tools has enticed media historians to rethink their research practices more and more in terms of methodological transparency, tool criticism and reflection. Moreover, questions related to the heuristics and hermeneutics of our scholarly work need to be reconsidered. The article hence sketches the role of digital research infrastructures for the humanities (in the Netherlands), and the use of video annotation in media studies and other research domains. In doing so, the authors reflect on their own specific engagements with the CLARIAH infrastructure and its tools, both as media historians and co-developers. This dual position greatly determines the possibilities and constraints for the various modes of digital scholarship relevant to media history. To exemplify this, two short case studies – based on a pilot project ‘Me and Myself. Tracing First Person in Documentary History in AV-Collections’ – show how the authors deployed video annotation to segment interpretative units of interest, rather than opting for units of analysis common in statistical analysis. The deliberate choice to abandon formal modes of moving image annotation and analysis ensued from a delicate interplay between the desired interpretative research goals and the integration of tool criticism and reflection in the research design. The authors found that, due to the formal and stylistic complexity of documentaries, alternative, hermeneutic research strategies also ought to be supported by digital infrastructures and their tools.
10

Ganesan, K., and N. S. Manikandan. "Energy-aware automatic video annotation tool for autonomous vehicle." International Journal of Computational Vision and Robotics 1, no. 1 (2022): 1. http://dx.doi.org/10.1504/ijcvr.2022.10048219.

11

Muñoz García, Adolfo, Héctor Julio Pérez López, and Nuria Lloret Romero. "Researching Video Annotation Tool for Music and Theatre Arts." International Journal of Technology, Knowledge, and Society 3, no. 2 (2007): 51–58. http://dx.doi.org/10.18848/1832-3669/cgp/v03i02/55726.

12

Salisbury, Elliot, Sebastian Stein, and Sarvapali Ramchurn. "CrowdAR: A Live Video Annotation Tool for Rapid Mapping." Procedia Engineering 159 (2016): 89–93. http://dx.doi.org/10.1016/j.proeng.2016.08.069.

13

Manikandan, N. S., and K. Ganesan. "Energy-aware automatic video annotation tool for autonomous vehicle." International Journal of Computational Vision and Robotics 13, no. 5 (2023): 510–32. http://dx.doi.org/10.1504/ijcvr.2023.133137.

14

Monedero-Moya, Juan José, Daniel Cebrián-Robles, and Philip Desenne. "Usability and Satisfaction in Multimedia Annotation Tools for MOOCs." Comunicar 22, no. 44 (2015): 55–62. http://dx.doi.org/10.3916/c44-2015-06.

Abstract:
The worldwide boom in digital video may be one of the reasons behind the exponential growth of MOOCs. The evaluation of a MOOC requires a great degree of multimedia and collaborative interaction. Given that videos are one of the main elements in these courses, it would be interesting to work on innovations that would allow users to interact with multimedia and collaborative activities within the videos. This paper is part of a collaboration project whose main objective is «to design and develop multimedia annotation tools to improve user interaction with contents». This paper will discuss the assessment of two tools: Collaborative Annotation Tool (CaTool) and Open Video Annotation (OVA). The latter was developed by the aforementioned project and integrated into the edX MOOC. The project spanned two academic years (2012-2014) and the assessment tools were tested on different groups in the Faculty of Education, with responses from a total of 180 students. Data obtained from both tools were compared by using average contrasts. Results showed significant differences in favour of the second tool (OVA). The project concludes with a useful video annotation tool, whose design was approved by users, and which is also a quick and user-friendly instrument to evaluate any software or MOOC. A comprehensive review of video annotation tools was also carried out at the end of the project.
15

Feyeux, M., A. Reignier, M. Mocaer, et al. "Development of automated annotation software for human embryo morphokinetics." Human Reproduction 35, no. 3 (2020): 557–64. http://dx.doi.org/10.1093/humrep/deaa001.

Abstract:
STUDY QUESTION: Is it possible to develop an automated annotation tool for human embryo development in time-lapse devices based on image analysis?
SUMMARY ANSWER: We developed and validated an automated software for the annotation of human embryo morphokinetic parameters, having a good concordance with expert manual annotation on 701 time-lapse videos.
WHAT IS KNOWN ALREADY: Morphokinetic parameters obtained with time-lapse devices are increasingly used for the assessment of human embryo quality. However, their annotation is time-consuming and can be slightly operator-dependent, highlighting the need to develop fully automated approaches.
STUDY DESIGN, SIZE, DURATION: This monocentric study was conducted on 701 videos originating from 584 couples undergoing IVF with embryo culture in a time-lapse device. The only selection criterion was that the duration of the video must be over 60 h.
PARTICIPANTS/MATERIALS, SETTING, METHODS: An automated morphokinetic annotation tool was developed based on gray level coefficient of variation and detection of the thickness of the zona pellucida. The detection of cellular events obtained with the automated tool was compared with those obtained manually by trained experts in clinical settings.
MAIN RESULTS AND THE ROLE OF CHANCE: Although some differences were found when embryos were considered individually, we found an overall concordance between automated and manual annotation of human embryo morphokinetics from fertilization to expanded blastocyst stage (r² = 0.92).
LIMITATIONS, REASONS FOR CAUTION: These results should undergo multicentric external evaluation in order to test the overall performance of the annotation tool. Getting access to the export of 3D videos would enhance the quality of the correlation with the same algorithm and its extension to the 3D regions of interest. A technical limitation of our work lies within the duration of the video. The more embryo stages the video contains, the more information the script has to identify them correctly.
WIDER IMPLICATIONS OF THE FINDINGS: Our system paves the way for high-throughput analysis of multicentric morphokinetic databases, providing new insights into the clinical value of morphokinetics as a predictor of embryo quality and implantation.
STUDY FUNDING/COMPETING INTEREST(S): This study was partly funded by Finox-Gedeon Richter Forward Grant 2016 and NeXT (ANR-16-IDEX-0007). We have no conflict of interests to declare.
TRIAL REGISTRATION NUMBER: N/A
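As a rough illustration of the grey-level coefficient of variation mentioned above, it can be computed per frame of a time-lapse video and changes in the resulting signal used as candidate event markers. This is only a hedged sketch of the general idea, not the published pipeline; it assumes the video is readable with OpenCV.

```python
# Minimal sketch of a per-frame grey-level coefficient of variation (CV)
# signal. Illustrative assumption, not the authors' annotation software.
import cv2
import numpy as np

def grey_level_cv(video_path):
    """Return a list with the grey-level coefficient of variation per frame."""
    cap = cv2.VideoCapture(video_path)
    values = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        mean = grey.mean()
        values.append(grey.std() / mean if mean > 0 else 0.0)
    cap.release()
    return values

# Usage: cv_signal = grey_level_cv("embryo_timelapse.avi")
```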
16

Chen, Yu-Hua, and Radovan Bruncak. "Transcribear – Introducing a secure online transcription and annotation tool." Digital Scholarship in the Humanities 35, no. 2 (2019): 265–75. http://dx.doi.org/10.1093/llc/fqz016.

Abstract:
Abstract Reliable high-quality transcription and/or annotation (a.k.a. ‘coding’) is essential for research in a variety of areas in Humanities and Social Sciences which make use of qualitative data such as interviews, focus groups, classroom observations, or any other audio/video recordings. A good tool can facilitate the work of transcription and annotation because the process is notoriously time-consuming and challenging. However, our survey indicates that few existing tools can accommodate the requirements for transcription and annotation (e.g. audio/video playback, spelling checks, keyboard shortcuts, adding tags of annotation) in one place so that a user does not need to constantly switch between multiple windows, for example, an audio player and a text editor. ‘Transcribear’ (https://transcribear.com) is therefore developed as an easy-to-use online tool which facilitates transcription and annotation on the same interface while this web tool operates offline so that a user’s recordings and transcripts can remain secure and confidential. To minimize human errors, the functionality of tag validation is also added. Originally designed for a multimodal corpus project UNNC CAWSE (https://www.nottingham.edu.cn/en/english/research/cawse/), this browser-based application can be customized for individual users’ needs in terms of the annotation scheme and corresponding shortcut keys. This article will explain how this new tool can make tedious and repetitive manual work faster and easier and at the same time improve the quality of outputs as the process of transcription and annotation tends to be prone to human errors. The limitations of Transcribear and future work will also be discussed.
17

Gutierrez Becker, B., E. Giuffrida, M. Mangia, et al. "P069 Artificial intelligence (AI)-filtered Videos for Accelerated Scoring of Colonoscopy Videos in Ulcerative Colitis Clinical Trials." Journal of Crohn's and Colitis 15, Supplement 1 (2021): S173–S174. http://dx.doi.org/10.1093/ecco-jcc/jjab076.198.

Abstract:
Background: Endoscopic assessment is a critical procedure to assess the improvement of mucosa and response to therapy, and therefore a pivotal component of clinical trial endpoints for IBD. Central scoring of endoscopic videos is challenging and time consuming. We evaluated the feasibility of using an Artificial Intelligence (AI) algorithm to automatically produce filtered videos where the non-readable portions of the video are removed, with the aim of accelerating the scoring of endoscopic videos.
Methods: The AI algorithm was based on a Convolutional Neural Network trained to perform a binary classification task. This task consisted of assigning the frames in a colonoscopy video to one of two classes: “readable” or “unreadable.” The algorithm was trained using annotations performed by two data scientists (BG, FA). The criteria to consider a frame “readable” were: i) the colon walls were within the field of view; ii) contrast and sharpness of the frame were sufficient to visually inspect the mucosa, and iii) no presence of artifacts completely obstructing the visibility of the mucosa. The frames were extracted randomly from 351 colonoscopy videos of the etrolizumab EUCALYPTUS (NCT01336465) Phase II ulcerative colitis clinical trial. Evaluation of the performance of the AI algorithm was performed on colonoscopy videos obtained as part of the etrolizumab HICKORY (NCT02100696) and LAUREL (NCT02165215) Phase III ulcerative colitis clinical trials. Each video was filtered using the AI algorithm, resulting in a shorter video where the sections considered unreadable by the AI algorithm were removed. Each of three annotators (EG, MM and MD) was randomly assigned an equal number of AI-filtered videos and raw videos. The gastroenterologist was tasked to score temporal segments of the video according to the Mayo Clinic Endoscopic Subscore (MCES). Annotations were performed by means of an online annotation platform (Virgo Surgical Video Solutions, Inc).
Results: We measured the time it took the annotators to score raw and AI-filtered videos. We observed a statistically significant reduction (Mann–Whitney U test p-value = 0.039) in the median time spent by the annotators scoring raw videos (10.59 ± 0.94 minutes) with respect to the time spent scoring AI-filtered videos (9.51 ± 0.92 minutes), with a substantial intra-rater agreement when evaluating highlight and raw videos (Cohen’s kappa 0.92 and 0.55 for experienced and junior gastroenterologists respectively).
Conclusion: Our analysis shows that AI can be used reliably as an assisting tool to automatically remove non-readable time segments from full colonoscopy videos. The use of our proposed algorithm can lead to reduced annotation times in the task of centrally reading colonoscopy videos.
18

Mahmood, M. H., J. Salvi, and X. Lladó. "Semi‐automatic tool for motion annotation on complex video sequences." Electronics Letters 52, no. 8 (2016): 602–4. http://dx.doi.org/10.1049/el.2015.4163.

19

Lösel, Gunter. "Tags and tracks and annotations: Research Video as a new form of publication of embodied knowledge." International Journal of Performance Arts and Digital Media 17, no. 1 (2021): 1–15. https://doi.org/10.5281/zenodo.4454683.

Abstract:
The last 20 years have seen numerous claims and suggestions to overcome purely text-based research. In this article I will describe the RESEARCH VIDEO project, that dedicated itself to the exploration of annotated videos as a new form of publication in artistic research. (1) Software development: One part of our team developed a software tool that was optimized for artistic research and allows for a publication as an annotated video. I will describe the features of the software and explain the design decisions that were made throughout the project. I will also point out future demands for this tool. (2) Research standards: Our team continually reflected on the questions of how to meet both academic and artistic needs, trying to shape the research process accordingly. We decided to minimize academic claims to two basic claims – "sharability" and "challengeability" and explored how the research process changes, when these claims are informing each step of the research process. Finally I will discuss suggestions to make a publication as a Research Video comparable to a research paper.
20

Bianco, Simone, Gianluigi Ciocca, Paolo Napoletano, and Raimondo Schettini. "An interactive tool for manual, semi-automatic and automatic video annotation." Computer Vision and Image Understanding 131 (February 2015): 88–99. http://dx.doi.org/10.1016/j.cviu.2014.06.015.

21

Cassano, Giacomo, Nicoletta Di Blas, and Alessia Mataresi. "Enhancing Learning Engagement in the Flipped Classroom using a Video-Annotation Tool." Electronic Journal of e-Learning 22, no. 9 (2024): 75–90. http://dx.doi.org/10.34190/ejel.22.9.3259.

Abstract:
In the realm of academia, there has been a burgeoning interest in examining student engagement within the context of the flipped classroom approach, particularly at the higher education level. Nevertheless, research pertaining to a pivotal facet of this pedagogical methodology—namely, student engagement during the individualized, at-home preparatory phase—remains limited. Within this void, the primary objective of this study is to delve into the prospective benefits offered by a video annotation tool in augmenting students’ level of engagement during this autonomous study phase. Throughout the course of this investigation, a bespoke questionnaire was crafted, grounded in a multifaceted framework for evaluating student engagement, with a specific emphasis on dissecting four distinct dimensions: behavioral, emotional, agentive, and cognitive. This questionnaire is geared towards gauging student engagement within the domain of the Flipped Classroom model that relies on video-based instructional content—a facet heretofore poorly explored within the scholarly literature. Both quantitative and qualitative data were collected by administering the aforementioned questionnaire to a cohort of 68 undergraduate engineering students at Politecnico di Milano (Italy), all within the authentic context of a case study. The outcomes stemming from this empirical inquiry showcase a noteworthy enhancement in student engagement, with particular prominence accorded to the realms of emotional and agentive engagement. Moreover, this study establishes that the interactivity and proactive involvement necessitated by the video annotation tool do not obstruct students’ behavioral and cognitive engagement levels. In summation, this research endeavors to illuminate the potentiality of bridging a pivotal juncture in the Flipped Classroom paradigm, specifically the phase characterized by independent at-home study. This bridge is facilitated by the utilization of a video annotation tool designed to heighten student engagement. This transformative approach effectively transmutes the traditionally passive at-home study phase of the Flipped Classroom into an active experience, thereby enhancing the overall efficacy of this pedagogical approach.
22

Staniszewski, Michał, Paweł Foszner, Karol Kostorz, et al. "Application of Crowd Simulations in the Evaluation of Tracking Algorithms." Sensors 20, no. 17 (2020): 4960. http://dx.doi.org/10.3390/s20174960.

Abstract:
Tracking and action-recognition algorithms are currently widely used in video surveillance, monitoring urban activities and in many other areas. Their development highly relies on benchmarking scenarios, which enable reliable evaluations/improvements of their efficiencies. Presently, benchmarking methods for tracking and action-recognition algorithms rely on manual annotation of video databases, prone to human errors, limited in size and time-consuming. Here, using gained experiences, an alternative benchmarking solution is presented, which employs methods and tools obtained from the computer-game domain to create simulated video data with automatic annotations. The presented approach highly outperforms existing solutions in the size of the data and the variety of annotations it is possible to create. With the proposed system, a potential user can generate a sequence of random images involving different times of day, weather conditions, and scenes for use in tracking evaluation. In the design of the proposed tool, the concept of crowd simulation is used and developed. The system is validated by comparisons to existing methods.
23

Oomori, Kotaro, Wataru Kawabe, Fabrice Matulic, Takeo Igarashi, and Keita Higuchi. "Interactive 3D Annotation of Objects in Moving Videos from Sparse Multi-view Frames." Proceedings of the ACM on Human-Computer Interaction 7, ISS (2023): 309–26. http://dx.doi.org/10.1145/3626476.

Abstract:
Segmenting and determining the 3D bounding boxes of objects of interest in RGB videos is an important task for a variety of applications such as augmented reality, navigation, and robotics. Supervised machine learning techniques are commonly used for this, but they need training datasets: sets of images with associated 3D bounding boxes manually defined by human annotators using a labelling tool. However, precisely placing 3D bounding boxes can be difficult using conventional 3D manipulation tools on a 2D interface. To alleviate that burden, we propose a novel technique with which 3D bounding boxes can be created by simply drawing 2D bounding rectangles on multiple frames of a video sequence showing the object from different angles. The method uses reconstructed dense 3D point clouds from the video and computes tightly fitting 3D bounding boxes of desired objects selected by back-projecting the 2D rectangles. We show concrete application scenarios of our interface, including training dataset creation and editing 3D spaces and videos. An evaluation comparing our technique with a conventional 3D annotation tool shows that our method results in higher accuracy. We also confirm that the bounding boxes created with our interface have a lower variance, likely yielding more consistent labels and datasets.
24

Caws, Catherine, and Stewart Arneil. "Modes of Annotation in the Video-Based Corpus FrancoToile: Developing a Design Method." KULA: Knowledge Creation, Dissemination, and Preservation Studies 1 (November 30, 2017): 1. http://dx.doi.org/10.5334/kula.3.

Abstract:
In corpus linguistics, texts are typically annotated in order to focus the attention on: (a) the form of the text or words, and (b) the structure of sentences (that is, morphological and syntactic tagging). Yet, when dealing with language learning and the development of skills other than just linguistic ones, other types of annotations are needed. Annotating with either a specific learner or pedagogy in mind often engages the researcher in more complex issues than the ones just related to corpus linguistics. In this article, we report on the methods used to create a digital library of videos and annotated transcripts called FrancoToile (http://francotoile.uvic.ca). As a needs-driven corpus, FrancoToile includes annotations within the video transcripts in order to help users develop their cultural and linguistic literacies in French. These annotations must relate directly to the purpose of the system (the development of cultural and linguistic literacies) and to the specific skill or competency that we hope language learners will gain. We analyze learning needs, modify the software, and observe and engage with users on an ongoing basis to create a language tool that will better address users’ needs. This approach of incorporating user feedback increases the usefulness of the annotated videos. We continue to seek means to encourage the involvement of users, both teachers and learners, in the process of corpora editing and content building.
25

Park, Jang-Sik, and Seung-Jai Yi. "Development of Video Data-base and a Video Annotation Tool for Evaluation of Smart CCTV System." Journal of the Korea institute of electronic communication sciences 9, no. 7 (2014): 739–45. http://dx.doi.org/10.13067/jkiecs.2014.9.7.739.

26

Catherine Chui Lam, Nguoi, and Hadina Habil. "The Use of Video Annotation in Education: A Review." Asian Journal of University Education 17, no. 4 (2021): 84. http://dx.doi.org/10.24191/ajue.v17i4.16208.

Abstract:
Video annotation (VA), a tool which allows commentaries to be synchronized with video content, has recently received significant research attention in education. However, the application contexts of these studies are varied and fragmented. A review was therefore undertaken with the objectives to find out the extent to which the use of VA has been explored for different instructional purposes and summarize the potential affordances of VA in supporting student learning. Articles related to the use of VA in the education context were searched from 2011 to 2020 (Nov). Of the final 32 eligible studies, it was found that VA tools were used predominantly to develop teaching practices, enhance learners’ conceptual understanding of video content and develop workplace skills as well as clinical practices. The five most dominant educational affordances of VA tools were summarized as follows: (1) facilitating learners’ reflection, (2) facilitating the feedback process, (3) enhancing comprehension of video content, (4) promoting students’ learning satisfaction and positive attitude, and (5) convenience and ease. With the outstanding weight of research evidence gained on educational affordances offered by VA, it is convincing that advancing the use of VA in education can further expand the learning opportunities in 21st century classrooms.
Keywords: Affordances, Education, Feedback, Learners’ reflection, Video annotation
27

Douglas, Kathy A., Josephine Lang, and Meg Colasante. "The Challenges of Blended Learning Using a Media Annotation Tool." Journal of University Teaching and Learning Practice 11, no. 2 (2014): 84–103. http://dx.doi.org/10.53761/1.11.2.7.

Abstract:
Blended learning has been evolving as an important approach to learning and teaching in tertiary education. This approach incorporates learning in both online and face-to-face modes and promotes deep learning by incorporating the best of both approaches. An innovation in blended learning is the use of an online media annotation tool (MAT) in combination with face-to-face classes. This tool allows students to annotate their own or teacher-uploaded video adding to their understanding of professional skills in various disciplines in tertiary education. Examination of MAT occurred in 2011 and included nine cohorts of students using the tool. This article canvasses selected data relating to MAT including insights into the use of blended learning focussing on the challenges of combining face-to-face and online learning using a relatively new online tool.
28

Serwatka, Witold, Dominik Rzepka, Hamza Oran, et al. "Tissue Puncture Event Detection in Needle Procedures using Vibroacoustic Signals - ResNet optimised Phantom Results." Current Directions in Biomedical Engineering 10, no. 4 (2024): 579–82. https://doi.org/10.1515/cdbme-2024-2142.

Abstract:
The interaction of moving clinical devices, e.g. an aspiration or biopsy needle, with different tissues generates distinct vibroacoustic signals. These signals can be received via a dedicated audio sensor and subsequently analysed, with the potential to provide information about location, tissue characterisation, and event classification. They could therefore also be used as an additional and complementary guidance tool, particularly for future robot-assisted procedures. In our laboratory research we used different phantoms with animal and artificial tissues, audio pre-processing, and subsequent training and optimisation of a ResNet model, achieving a tissue event detection F1 score of 95.3% when compared to a video-based annotation tool. This result is very encouraging, as several possible improvements have been identified that will be implemented in the next research steps, together with robot-assisted insertion and an automatic video annotation algorithm.
29

Diete, Alexander, Timo Sztyler, and Heiner Stuckenschmidt. "Exploring Semi-Supervised Methods for Labeling Support in Multimodal Datasets." Sensors 18, no. 8 (2018): 2639. http://dx.doi.org/10.3390/s18082639.

Abstract:
Working with multimodal datasets is a challenging task, as it requires annotations which are often time-consuming and difficult to acquire. This includes in particular video recordings, which often need to be watched as a whole before they can be labeled. Additionally, other modalities like acceleration data are often recorded alongside a video. For that purpose, we created an annotation tool that makes it possible to annotate datasets of video and inertial sensor data. In contrast to most existing approaches, we focus on semi-supervised labeling support to infer labels for the whole dataset. This means that after labeling a small set of instances, our system is able to provide labeling recommendations. We aim to rely on the acceleration data of a wrist-worn sensor to support the labeling of a video recording. For that purpose, we apply template matching to identify time intervals of certain activities. We test our approach on three datasets, one containing warehouse picking activities, one consisting of activities of daily living and one about meal preparations. Our results show that the presented method is able to give hints to annotators about possible label candidates.
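The template-matching step described above can be pictured with a minimal 1-D sketch: a labeled snippet of the acceleration signal is slid across the full recording, and positions with high normalised cross-correlation become label suggestions. The function below is an assumption-based illustration, not the authors' tool.

```python
# Minimal sketch of 1-D template matching on an acceleration signal via
# normalised cross-correlation. Illustrative only.
import numpy as np

def match_template(signal, template, threshold=0.8):
    """Return start indices where the normalised cross-correlation between
    the template and the signal window exceeds the threshold."""
    signal = np.asarray(signal, dtype=float)
    template = np.asarray(template, dtype=float)
    n = len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    hits = []
    for start in range(len(signal) - n + 1):
        window = signal[start:start + n]
        w = (window - window.mean()) / (window.std() + 1e-12)
        score = float(np.dot(w, t) / n)   # Pearson-style correlation in [-1, 1]
        if score >= threshold:
            hits.append(start)
    return hits

# Usage: candidate_starts = match_template(full_signal, labeled_snippet, 0.85)
```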
30

Leung, Kim Chau, and Mei Po Shek. "Adoption of video annotation tool in enhancing students’ reflective ability level and communication competence." Coaching: An International Journal of Theory, Research and Practice 14, no. 2 (2021): 151–61. http://dx.doi.org/10.1080/17521882.2021.1879187.

31

McFadden, Justin, Joshua Ellis, Tasneem Anwar, and Gillian Roehrig. "Beginning Science Teachers’ Use of a Digital Video Annotation Tool to Promote Reflective Practices." Journal of Science Education and Technology 23, no. 3 (2013): 458–70. http://dx.doi.org/10.1007/s10956-013-9476-2.

32

Jose, John Anthony C., Meygen D. Cruz, Jefferson James U. Keh, Maverick Rivera, Edwin Sybingco, and Elmer P. Dadios. "Anno-Mate: Human–Machine Collaboration Features for Fast Annotation." Journal of Advanced Computational Intelligence and Intelligent Informatics 25, no. 4 (2021): 404–9. http://dx.doi.org/10.20965/jaciii.2021.p0404.

Abstract:
Large annotated datasets are crucial for training deep machine learning models, but they are expensive and time-consuming to create. There are already numerous public datasets, but a vast amount of unlabeled data, especially video data, can still be annotated and leveraged to further improve the performance and accuracy of machine learning models. Therefore, it is essential to reduce the time and effort required to annotate a dataset to prevent bottlenecks in the development of this field. In this study, we propose Anno-Mate, a pair of features integrated into the Computer Vision Annotation Tool (CVAT). It facilitates human–machine collaboration and reduces the required human effort. Anno-Mate comprises Auto-Fit, which uses an EfficientDet-D0 backbone to tighten an existing bounding box around an object, and AutoTrack, which uses a channel and spatial reliability tracking (CSRT) tracker to draw a bounding box on the target object as it moves through the video frames. Both features exhibit a good speed and accuracy trade-off. Auto-Fit garnered an overall accuracy of 87% and an average processing time of 0.47 s, whereas the AutoTrack feature exhibited an overall accuracy of 74.29% and could process 18.54 frames per second. When combined, these features are proven to reduce the time required to annotate a minute of video by 26.56%.
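As an illustration of the kind of tracker-driven box propagation that AutoTrack performs, the sketch below initialises an OpenCV CSRT tracker on a starting box and propagates it through subsequent frames. It assumes opencv-contrib-python is installed; it is not the Anno-Mate/CVAT integration itself, and the function name and paths are illustrative.

```python
# Minimal sketch of propagating a bounding box with a CSRT tracker, the same
# tracker family used by AutoTrack. Not the Anno-Mate implementation.
import cv2

def track_box(video_path, first_frame_box):
    """Yield (frame_index, (x, y, w, h)) for each frame after initialisation."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError(f"cannot read {video_path}")
    # CSRT lives in cv2 or cv2.legacy depending on the OpenCV version.
    create = getattr(cv2, "TrackerCSRT_create", None) or cv2.legacy.TrackerCSRT_create
    tracker = create()
    tracker.init(frame, first_frame_box)          # (x, y, w, h) in pixels
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        index += 1
        found, box = tracker.update(frame)
        if found:
            yield index, tuple(int(v) for v in box)
    cap.release()

# Usage: for i, box in track_box("clip.mp4", (120, 80, 60, 60)): print(i, box)
```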
33

Verma, Navdeep, Seyum Getenet, Christopher Dann, and Thanveer Shaik. "Characteristics of engaging teaching videos in higher education: a systematic literature review of teachers’ behaviours and movements in video conferencing." Research and Practice in Technology Enhanced Learning 18 (March 21, 2023): 040. http://dx.doi.org/10.58459/rptel.2023.18040.

Abstract:
Online learning is in high demand due to benefits such as convenience, flexibility, cost efficiency, and improved accessibility. In online learning, video conferencing is an effective technology for collaboration and increasing online student engagement. This study is part of a larger study conducted using design-based research (DBR) to develop a video annotation tool using artificial intelligence (AI) methodologies such as machine learning and deep learning. This systematic literature review is the foundation of the process which identifies the characteristics and indicators of engaging teaching videos. The studies included in this systematic literature review have been gathered from seven databases and selected by applying inclusion/exclusion criteria in accordance with the Preferred Reporting Items for Systematic Reviews. From the selected studies, we identified, categorised, and explained the characteristics and indicators of engaging teaching videos based on teachers’ behaviours and movements. In this study, we identified 11 characteristics and 47 associated indicators of the characteristics critical in enhancing student engagement. Teachers and higher education institutions can use these characteristics and indicators as a benchmark to improve the quality of engaging teaching videos and later improve teaching and learning. In the final stage of DBR, the identified indicators can be used to train a machine learning tool, a form of AI. This tool can provide a report on engaging teaching videos by highlighting the teachers’ behaviours and movements.
34

Nayak, Sudhir. "Slim Shadey: A manipulation tool for multiple sequence alignments." Bioinformation 19, no. 5 (2023): 659–62. http://dx.doi.org/10.6026/97320630019659.

Abstract:
The visualization of sequence alignments with the addition of meaningful shading and annotation is critical to convey the importance of structural elements, domains, motifs, and individual residues. Hence, we have developed a Java FX based software package (SlimShadey) with an intuitive graphical user interface that allows for the creation and visualization of features on sequence alignments, as well as trimming and editing of subsequences. SlimShadey will run without modification on Windows 7 (or higher) and will also run on OS X / macOS, most Linux distributions, and servers. SlimShadey features real-time shading and comparison of residues based on user-defined measures of conservation such as frequency, user-selected substitution matrices, composition-based consensus sequence, regular expressions, and hidden Markov models. The software also allows users to generate custom sequence logos, configurable publication quality images of alignments with shading and annotation, and shareable self-contained project files for collaboration. SlimShadey is an open source freely available Java program. Compiled .jar executables, source code, supplementary materials including the user manual, links to video tutorials, and all sample data are available through the URLS at availability.
35

Koulaouzidis, Anastasios, Dimitris Iakovidis, Diana Yung, et al. "KID Project: an internet-based digital video atlas of capsule endoscopy for research purposes." Endoscopy International Open 5, no. 6 (2017): E477–E483. http://dx.doi.org/10.1055/s-0043-105488.

Abstract:
Background and aims: Capsule endoscopy (CE) has revolutionized small-bowel (SB) investigation. Computational methods can enhance diagnostic yield (DY); however, incorporating machine learning algorithms (MLAs) into CE reading is difficult as large amounts of image annotations are required for training. Current databases lack graphic annotations of pathologies and cannot be used. A novel database, KID, aims to provide a reference for research and development of medical decision support systems (MDSS) for CE.
Methods: Open-source software was used for the KID database. Clinicians contribute anonymized, annotated CE images and videos. Graphic annotations are supported by an open-access annotation tool (Ratsnake). We detail an experiment based on the KID database, examining differences in SB lesion measurement between human readers and an MLA. The Jaccard Index (JI) was used to evaluate similarity between annotations by the MLA and human readers.
Results: The MLA performed best in measuring lymphangiectasias with a JI of 81 ± 6 %. The other lesion types were: angioectasias (JI 64 ± 11 %), aphthae (JI 64 ± 8 %), chylous cysts (JI 70 ± 14 %), polypoid lesions (JI 75 ± 21 %), and ulcers (JI 56 ± 9 %).
Conclusion: An MLA can perform as well as human readers in the measurement of SB angioectasias in white light (WL). Automated lesion measurement is therefore feasible. KID is currently the only open-source CE database developed specifically to aid development of MDSS. Our experiment demonstrates this potential.
36

Zakharova, O. V. "Metadata as a tool of the semantic analysis of the complex contents of the big data. The images." PROBLEMS IN PROGRAMMING, no. 1 (January 2023): 58–65. http://dx.doi.org/10.15407/pp2023.01.058.

Abstract:
The purpose of this research is to identify effective approaches for improving the semantic analysis of graphic content in big data, taking images and video scenes as examples of such complex content. The proposed approach takes the special features of this content into account and creates a hybrid annotation model that extends the text annotation model with more specific elements; for visual data, these are visualization characteristics. Determining the similarity of information content is a critical problem in big data tasks: it is the basis for categorizing big data, and it enables the composition of documents, the conversion of unstructured content into relevant knowledge structures, and the visualization of information. Semantic analysis of information content is usually based on metadata, which form the basis of semantic annotations; they are elements of a structured semantic description of the content and the basis for its automated processing. The approach relies on ontologies to define semantic annotations. Ontologies provide various sources of knowledge for measuring semantic similarity and contain a great deal of information about the interpretation of concepts and other semantic relationships, organized in a hierarchical structure based on hyponymy relations. In recent years, however, the number of images and video resources has grown rapidly, and the available visual information has become significantly richer. From a visual point of view, it is easier to understand whether two concepts are similar. Therefore, integrating the semantic and visual information of an image optimizes the ontological methods for similarity estimation and yields similarity metrics that are more consistent with human perception. De facto, such assessments of the complex semantic similarity of concepts are defined by the composition of two functions: the first is, in fact, an ontological measure of similarity, and the second is built on the basis of a complex feature vector, a concatenation of semantic and visual characteristics with an established weight balance between these two types of features. The combination of visualization features with the semantic and ontological characteristics of the content in the similarity metrics is the central idea of this study.
37

Schreyer, Verena, Marco Xaver Bornschlegl, and Matthias Hemmje. "Toward Annotation, Visualization, and Reproducible Archiving of Human–Human Dialog Video Recording Applications." Information 16, no. 5 (2025): 349. https://doi.org/10.3390/info16050349.

Abstract:
The COVID-19 pandemic increased the number of video conferences, for example, through online teaching and home office meetings. Even in the medical environment, consultation sessions are now increasingly conducted in the form of video conferencing. This includes sessions between psychotherapists and one or more call participants (individual/group calls). To subsequently document and analyze patient conversations, as well as any other human–human dialog, it is possible to record these video conferences. This allows experts to concentrate better on the conversation during the dialog and to perform analysis afterward. Artificial intelligence (AI) and its machine learning approach, which has already been used extensively for innovations, can provide support for subsequent analyses. Among other things, emotion recognition algorithms can be used to determine dialog participants’ emotions and record them automatically. This can alert experts to any noticeable sections of the conversation during subsequent analysis, thus simplifying the analysis process. As a result, experts can identify the cause of such sections based on emotion sequence data and exchange ideas with other experts within the context of an analysis tool.
38

Pandeya, Yagya Raj, Bhuwan Bhattarai, Usman Afzaal, Jong-Bok Kim, and Joonwhoan Lee. "A monophonic cow sound annotation tool using a semi-automatic method on audio/video data." Livestock Science 256 (February 2022): 104811. http://dx.doi.org/10.1016/j.livsci.2021.104811.

39

Bouten, Arne, Leen Haerens, Nele Van Doren, Sofie Compernolle, and Katrien De Cocker. "An online video annotation tool for optimizing secondary teachers’ motivating style: Acceptability, usability, and feasibility." Teaching and Teacher Education 134 (November 2023): 104307. http://dx.doi.org/10.1016/j.tate.2023.104307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Mirriahi, Negin, Daniyal Liaqat, Shane Dawson, and Dragan Gašević. "Uncovering student learning profiles with a video annotation tool: reflective learning with and without instructional norms." Educational Technology Research and Development 64, no. 6 (2016): 1083–106. http://dx.doi.org/10.1007/s11423-016-9449-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Bernad-Mechó, Edgar, and Carolina Girón-García. "A multimodal analysis of humour as an engagement strategy in YouTube research dissemination videos." European Journal of Humour Research 11, no. 1 (2023): 46–66. http://dx.doi.org/10.7592/ejhr.2023.11.1.760.

Full text
Abstract:
Science popularisation has received widespread interest in the last decade. With the rapid evolution from print to digital modes of information, science outreach has been seen to cross educational boundaries and become integrated into wider contexts such as YouTube. One of the main features of the success of research dissemination videos on YouTube is the ability to establish a meaningful connection with the audience. In this regard, humour may be used as a strategy for engagement. Most studies on humour, however, are conducted solely from a purely linguistic perspective, obviating the complex multimodal reality of communication in the digital era. Considering this background, we set out to explore how humour is used from a multimodal point of view as an engagement strategy in YouTube research dissemination. We selected three research dissemination videos from three distinct YouTube channels to fulfil this aim. After an initial viewing, 22 short humoristic fragments that were particularly engaging were selected. These fragments were further explored using Multimodal Analysis - Video (MAV) [1], a multi-layered annotation tool that allows for fine-grained multimodal analysis. Humoristic strategies and contextual features were explored, as well as two main types of modes: embodied and filmic. Results show the presence of 9 linguistic strategies to introduce humour in YouTube science dissemination videos which are always accompanied by heterogeneous combinations of embodied and filmic modes that contribute to fully achieving humoristic purposes.

[1] Multi-layer annotation software used to describe the use of semiotic modes in video files. By using this software, researchers may analyse, for instance, how gestures, gaze, proxemics, head movements, facial expression, etc. are employed in a given file.
APA, Harvard, Vancouver, ISO, and other styles
42

Lauricella, Sharon, Christopher Craig, and Robin Kay. "Shifting Reading into a Socially Constructed Activity: A Case Study on the Benefits and Challenges of Using Perusall." Journal of Educational Informatics 4, no. 2 (2024): 32–44. http://dx.doi.org/10.51357/jei.v4i2.231.

Full text
Abstract:
Perusall is a social annotation tool that engages students in digital course materials. The system facilitates student interaction via posts consisting of questions, responses, and comments on video, written, and audio sources. This paper considers student perceptions of Perusall in an upper-year social science course (n = 28). Students described Perusall as fun and engaging because they enjoyed positive communication with classmates and found the system easy to use. Challenges of using the tool include the need for students to learn a new interface and technology glitches such as disappearing comments. The paper closes with suggestions for instructors wishing to use Perusall.
APA, Harvard, Vancouver, ISO, and other styles
43

Hardison, Debra M. "Visualizing the acoustic and gestural beats of emphasis in multimodal discourse." Journal of Second Language Pronunciation 4, no. 2 (2018): 232–59. http://dx.doi.org/10.1075/jslp.17006.har.

Full text
Abstract:
Perceivers’ attention is entrained to the rhythm of a speaker’s gestural and acoustic beats. When different rhythms (polyrhythms) occur across the visual and auditory modalities of speech simultaneously, attention may be heightened, enhancing memorability of the sequence. In this three-stage study, Stage 1 analyzed video recordings of native English-speaking instructors, focusing on frame-by-frame analysis of time-aligned annotations from Praat and Anvil (video annotation tool) of polyrhythmic sequences. Stage 2 explored the perceivers’ perspective on the sequences’ discourse role. Stage 3 analyzed 10 international teaching assistants’ gestures, and implemented a multistep technology-assisted program to enhance verbal and nonverbal communication skills. Findings demonstrated (a) a dynamic temporal gesture-speech relationship involving perturbations of beat intervals surrounding pitch-accented vowels, (b) the sequences’ important role as highlighters of information, and (c) improvement of ITA confidence, teaching effectiveness, and ability to communicate important points. Findings support the joint production of gesture and prosodically prominent features.
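A minimal sketch of the kind of interval measurement such an analysis involves: given gesture-beat times (e.g., exported from Anvil) and pitch-accented vowel onsets (e.g., from a Praat TextGrid), it pairs each beat with its nearest accent and reports the offset. The function name and the example times are hypothetical, not data from the study.

def beat_accent_offsets(gesture_beats, accent_onsets):
    # gesture_beats:  times (s) of gestural beat apexes, e.g. from Anvil tracks.
    # accent_onsets:  times (s) of pitch-accented vowel onsets, e.g. from Praat.
    # Returns (beat, nearest accent, offset) triples; a negative offset means
    # the gesture beat precedes the accented vowel.
    offsets = []
    for beat in gesture_beats:
        nearest = min(accent_onsets, key=lambda t: abs(t - beat))
        offsets.append((beat, nearest, beat - nearest))
    return offsets

# Hypothetical annotation times (seconds) for one short sequence.
beats = [1.20, 1.85, 2.60]
accents = [1.28, 1.90, 2.55]
for beat, accent, offset in beat_accent_offsets(beats, accents):
    print(f"beat {beat:.2f}s -> accent {accent:.2f}s (offset {offset:+.2f}s)")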
APA, Harvard, Vancouver, ISO, and other styles
44

Lauer, Tobias, Rainer Müller, and Thomas Ottmann. "Animations for Teaching Purposes: Now and Tomorrow." JUCS - Journal of Universal Computer Science 7, no. 5 (2001): 420–33. https://doi.org/10.3217/jucs-007-05-0420.

Full text
Abstract:
Animation is commonly seen as an ideal tool for teaching dynamic phenomena. While there have been very few studies testing this hypothesis, animations are used extensively in teaching, particularly in the field of algorithms. We highlight features that we consider important for animation systems, describe the development of algorithm animation by examples, and present a new Java-based system supporting annotation and recording of animations. We also outline a way to annotate animations and movies given in the MPEG video format. By listing several case studies we describe new ways and possibilities of how animation systems may be used in the future.
APA, Harvard, Vancouver, ISO, and other styles
45

Bystedt, Mattias, and Jens Edlund. "New applications of gaze tracking in speech science." Digital Humanities in the Nordic and Baltic Countries Publications 2, no. 1 (2019): 73–78. http://dx.doi.org/10.5617/dhnbpub.11082.

Full text
Abstract:
We present an overview of speech research applications of gaze tracking technology, where gaze behaviours are exploited as a tool for analysis rather than as a primary object of study. The methods presented are all in their infancy, but can greatly assist the analysis of digital audio and video as well as unlock the relationship between writing and other encodings on the one hand, and natural language, such as speech, on the other. We discuss three directions in this type of gaze tracking application: modelling of text that is read aloud, evaluation and annotation with naïve informants, and evaluation and annotation with expert annotators. In each of these areas, we use gaze tracking information to gauge the behaviour of people when working with speech and conversation, rather than when reading text aloud or partaking in conversations, in order to learn something about how the speech may be analysed from a human perspective.
APA, Harvard, Vancouver, ISO, and other styles
46

Niehorster, Diederick C., Roy S. Hessels, and Jeroen S. Benjamins. "GlassesViewer: Open-source software for viewing and analyzing data from the Tobii Pro Glasses 2 eye tracker." Behavior Research Methods 52, no. 3 (2020): 1244–53. http://dx.doi.org/10.3758/s13428-019-01314-1.

Full text
Abstract:
We present GlassesViewer, open-source software for viewing and analyzing eye-tracking data of the Tobii Pro Glasses 2 head-mounted eye tracker as well as the scene and eye videos and other data streams (pupil size, gyroscope, accelerometer, and TTL input) that this headset can record. The software provides the following functionality written in MATLAB: (1) a graphical interface for navigating the study- and recording structure produced by the Tobii Glasses 2; (2) functionality to unpack, parse, and synchronize the various data and video streams comprising a Glasses 2 recording; and (3) a graphical interface for viewing the Glasses 2’s gaze direction, pupil size, gyroscope and accelerometer time-series data, along with the recorded scene and eye camera videos. In this latter interface, segments of data can furthermore be labeled through user-provided event classification algorithms or by means of manual annotation. Lastly, the toolbox provides integration with the GazeCode tool by Benjamins et al. (2018), enabling a completely open-source workflow for analyzing Tobii Pro Glasses 2 recordings.
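GlassesViewer itself is MATLAB code; the sketch below is only a language-agnostic illustration of one step such a tool performs, aligning a gaze time series with scene-video frames by timestamp. The function and variable names are assumptions for illustration, not GlassesViewer's actual API.

import bisect

def gaze_for_frames(frame_times, gaze_times, gaze_xy):
    # frame_times: timestamps (s) of scene-video frames.
    # gaze_times:  timestamps (s) of gaze samples, assumed sorted ascending.
    # gaze_xy:     (x, y) gaze positions matching gaze_times.
    # Returns, for each frame, the gaze sample nearest in time.
    matched = []
    for t in frame_times:
        i = bisect.bisect_left(gaze_times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(gaze_times)]
        best = min(candidates, key=lambda j: abs(gaze_times[j] - t))
        matched.append(gaze_xy[best])
    return matched

# Hypothetical 25 fps video frames and 50 Hz gaze samples.
frames = [i / 25 for i in range(5)]
samples = [i / 50 for i in range(10)]
positions = [(0.5 + 0.01 * i, 0.5) for i in range(10)]
print(gaze_for_frames(frames, samples, positions))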
APA, Harvard, Vancouver, ISO, and other styles
47

Gayathri, Balasubramaniam, Raksha Vedavyas, P. Sharanya, and K. Karthik. "Effectiveness of reflective learning in skill-based teaching among postgraduate anesthesia students: An outcome-based study using video annotation tool." Medical Journal Armed Forces India 77 (February 2021): S202–S207. http://dx.doi.org/10.1016/j.mjafi.2020.12.028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Ramanathan, Vishal, Mohammad Zaidi Ariffin, Guo Dong Goh, et al. "The Design and Development of Instrumented Toys for the Assessment of Infant Cognitive Flexibility." Sensors 23, no. 5 (2023): 2709. http://dx.doi.org/10.3390/s23052709.

Full text
Abstract:
The first years of an infant’s life represent a sensitive period for neurodevelopment where one can see the emergence of nascent forms of executive function (EF), which are required to support complex cognition. Few tests exist for measuring EF during infancy, and the available tests require painstaking manual coding of infant behaviour. In modern clinical and research practice, human coders collect data on EF performance by manually labelling video recordings of infant behaviour during toy or social interaction. Besides being extremely time-consuming, video annotation is known to be rater-dependent and subjective. To address these issues, starting from existing cognitive flexibility research protocols, we developed a set of instrumented toys to serve as a new type of task instrumentation and data collection tool suitable for infant use. A commercially available device comprising a barometer and an inertial measurement unit (IMU) embedded in a 3D-printed lattice structure was used to detect when and how the infant interacts with the toy. The data collected using the instrumented toys provided a rich dataset that described the sequence of toy interaction and individual toy interaction patterns, from which EF-relevant aspects of infant cognition can be inferred. Such a tool could provide an objective, reliable, and scalable method of collecting early developmental data in socially interactive contexts.
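As an illustration of how such sensor streams can replace manual video coding, the sketch below extracts candidate interaction episodes from IMU acceleration magnitude with a simple threshold and gap-merging rule. The sampling rate, threshold, and data are assumed values for illustration, not parameters from the study.

import math

def interaction_episodes(accel_xyz, fs=50.0, threshold=1.5, min_gap=0.5):
    # accel_xyz:  list of (ax, ay, az) samples in g from the toy's IMU.
    # fs:         sampling rate in Hz (assumed).
    # threshold:  acceleration magnitude (g) above which the toy is
    #             considered to be handled (assumed).
    # min_gap:    merge episodes separated by less than this many seconds.
    active = [math.sqrt(ax**2 + ay**2 + az**2) > threshold
              for ax, ay, az in accel_xyz]
    episodes = []
    start = None
    for i, on in enumerate(active):
        if on and start is None:
            start = i
        elif not on and start is not None:
            episodes.append([start / fs, i / fs])
            start = None
    if start is not None:
        episodes.append([start / fs, len(active) / fs])
    # Merge episodes separated by short gaps (e.g. brief pauses in handling).
    merged = []
    for ep in episodes:
        if merged and ep[0] - merged[-1][1] < min_gap:
            merged[-1][1] = ep[1]
        else:
            merged.append(ep)
    return merged

# Hypothetical 4-second recording: a resting toy, then two seconds of handling.
samples = [(0, 0, 1.0)] * 100 + [(1.2, 0.8, 1.1)] * 100
print(interaction_episodes(samples))  # [[2.0, 4.0]]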
APA, Harvard, Vancouver, ISO, and other styles
49

Suh, Jennifer, Melissa A. Gallagher, Laurie Capen, and Sara Birkhead. "Enhancing teachers' noticing around mathematics teaching practices through video-based lesson study with peer coaching." International Journal for Lesson & Learning Studies 10, no. 2 (2021): 150–67. http://dx.doi.org/10.1108/ijlls-09-2020-0073.

Full text
Abstract:
Purpose: The purpose of this study is to examine what teachers notice in their own enactment of eight high-leverage practices, as well as the patterns of interaction between teachers and their peers when participating in video-based lesson study.
Design/methodology/approach: Each teacher taught and uploaded video from one lesson to a platform that allowed video annotation for their lesson study team. There were nine lesson study teams. The study used a qualitative design to examine the teachers' comments on their own videos as well as the patterns in the comments between peers on lesson study teams.
Findings: Teachers noticed both positive instantiations and opportunities for growth in their enactment of using and connecting mathematical representations, posing purposeful questions, and supporting students' productive struggle. Analysis displayed a pattern of exchanges in which peers coached, validated, empathized, and pushed each other beyond their comfort zone as critical peers.
Research limitations/implications: Although not all lesson study teams were school-based and the teachers shared short recordings of their teaching, this research contributes to the understanding of how adapting lesson study by using video can help teachers notice their instantiation of teaching practices and how peers can support and push one another towards ambitious instruction. Future research could extend this work by investigating the impact of video-based lesson study on teachers in isolated areas who may not have professional learning networks.
Practical implications: Video-based LS may help overcome barriers to the implementation of lesson study, such as the challenge of scheduling a common release time for lesson observation and the financial burden of funding substitute teachers for release time.
Originality/value: The current realities of COVID-19 create an opportunity for mathematics educators to reimagine teacher professional development (PD) in ways that push the field forward. In light of this disruption, the authors propose an innovative model of video-based Lesson Study (LS; Lewis, 2002) with peer coaching to offer PD opportunities, with methodological considerations for both mathematics researchers and teacher practitioners. The authors document and analyze a collection of online LSs that were taught by a focal teacher and recorded for the peers in the LS group. The video-based LS PD structure allowed the authors to examine how this online model of LS can be leveraged to analyze student thinking and to learn about teaching rich tasks in an online environment using the eight teaching practices. The paper details the necessary features of online LS, specifically the use of a video annotation tool such as GoReact, and how video can be used to enhance professional learning of the mathematics teaching practices (MTPs; NCTM, 2014) and the noticing of student thinking (Jacobs et al., 2010; Sherin and van Es, 2009; van Es and Sherin, 2002, 2008). In addition, the authors document the norms established in the online LS community that shaped the collaboration of LS teams and developed strong peer-coaching relationships. The online LS PD design also supports the collaboration of teachers from varying contexts, promotes professional growth, and demonstrates how educators might leverage peer coaches as social capital within their schools to develop teachers along the professional continuum.
APA, Harvard, Vancouver, ISO, and other styles
50

Girón-García, Carolina. "YouTube videos to develop multimodal literacy." Elia, no. 24 (2024): 209–44. https://doi.org/10.12795/elia.2024.i24.7.

Full text
Abstract:
Teaching and learning in English have been growing recently (Mitchell, 2016), and the general trend towards internationalisation in English for Specific Purposes (ESP) contexts in Higher Education (HE) has led to an increased emphasis on English language instruction (Dafouz & Smit, 2020). Several studies in applied linguistics have focused on the analysis of digital genres (i.e., Internet and videos) in the ESP classroom (Bernad-Mechó & Girón-García, 2023; Girón-García & Fortanet-Gómez, 2023). Recently, the digitisation of materials, resources, and teaching activities has grown exponentially. Previous studies have examined how digital genres (Shepherd & Watters, 1998; Luzón, et al., 2010) develop and take advantage of the potential of the Internet in the digital era (Kress, 2010). They have proved that a stronger digital presence and more diversity of semiotic resources and communication channels are increasingly in demand by 21st-century learners to address their learning needs. The aim of this study is to analyse the multimodal nature of YouTube videos in a Legal English classroom by (1) raising students’ level of awareness of the multimodal characteristics present in YouTube videos, and (2) carrying out a multimodal discourse analysis of an extract of one video used in an ESP Law course at HE to unveil the features students must be made aware of. A multi-layered annotation tool (Multimodal Analysis – Video (MAV)) (O’Halloran et al., 2012) was used to attain the second aim. The results derived from this study may broaden students’ comprehension of how multimodal communication occurs (i.e., how to acquire multimodal awareness) to become multimodally literate (i.e., to acquire multimodal literacy).
APA, Harvard, Vancouver, ISO, and other styles