Selection of scholarly literature on the topic "Video text"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Video text".

Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and a bibliographic reference for the selected work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are included in the metadata.

Journal articles on the topic "Video text"

1

Huang, Bin, Xin Wang, Hong Chen, Houlun Chen, Yaofei Wu, and Wenwu Zhu. "Identity-Text Video Corpus Grounding." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 4 (2025): 3608–16. https://doi.org/10.1609/aaai.v39i4.32375.

Abstract:
Video corpus grounding (VCG), which aims to retrieve relevant video moments from a video corpus, has attracted significant attention in the multimedia research community. However, the existing VCG setting focuses primarily on matching textual descriptions with videos and ignores the distinct visual identities in the videos, resulting in an inaccurate understanding of video content and degraded retrieval performance. To address this limitation, we introduce a novel task, Identity-Text Video Corpus Grounding (ITVCG), which simultaneously utilizes textual descriptions and visual identities
2

Bhute, Avinash N., and B. B. Meshram. "Text Based Approach for Indexing and Retrieval of Image and Video: A Review." Advances in Vision Computing: An International Journal (AVC) 1, no. 1 (2014): 27–38. https://doi.org/10.5281/zenodo.3554868.

Abstract:
Text data present in multimedia contain useful information for automatic annotation and indexing. The extracted information is used to recognize overlay or scene text in a given video or image, and the extracted text can then be used to retrieve those videos and images. In this paper, we first discuss the different techniques for text extraction from images and videos; second, we review techniques for indexing and retrieving images and videos using the extracted text.
3

Bhute, Avinash N., and B. B. Meshram. "Text Based Approach for Indexing and Retrieval of Image and Video: A Review." Advances in Vision Computing: An International Journal (AVC) 1, no. 1 (2014): 27–38. https://doi.org/10.5281/zenodo.3357696.

Abstract:
Text data present in multimedia contain useful information for automatic annotation and indexing. The extracted information is used to recognize overlay or scene text in a given video or image, and the extracted text can then be used to retrieve those videos and images. In this paper, we first discuss the different techniques for text extraction from images and videos; second, we review techniques for indexing and retrieving images and videos using the extracted text.
4

Divya V., Prithica G., and Savija J. "Text Summarization for Education in Vernacular Languages." International Journal for Research in Applied Science and Engineering Technology 11, no. 7 (2023): 175–78. http://dx.doi.org/10.22214/ijraset.2023.54589.

Abstract:
This project proposes a video summarizing system based on natural language processing (NLP) and machine learning to summarize YouTube video transcripts without losing the key elements. The quantity of videos available on web platforms is steadily expanding. The content is made available globally, primarily for educational purposes. Additionally, educational content is available on YouTube, Facebook, Google, and Instagram. A significant issue in extracting information from videos is that, unlike an image, where data can be collected from a single frame, a viewer must watch the enti
5

Dave, Namrata, and Mehfuza S. Holia. "News Story Retrieval Based on Textual Query." International Journal of Engineering and Advanced Technology (IJEAT) 9, no. 3 (2021): 2918–22. https://doi.org/10.5281/zenodo.5589205.

Abstract:
This paper presents news video retrieval using text queries for Gujarati-language news videos. Because broadcast video in India lacks metadata such as closed captioning and transcriptions, retrieval of videos based on text data is a non-trivial task for most Indian-language video. Retrieving a specific story from a text query in a regional language is the key idea behind our approach. Broadcast video is segmented to get shots representing small news stories. To represent each shot efficiently, key frame extraction using singular value decomposition and rank o
6

Doran, Michael, Adrian Barnett, Joan Leach, William Lott, Katie Page, and Will Grant. "Can video improve grant review quality and lead to more reliable ranking?" Research Ideas and Outcomes 3 (February 1, 2017): e11931. https://doi.org/10.3897/rio.3.e11931.

Abstract:
Multimedia video is rapidly becoming mainstream, and many studies indicate that it is a more effective communication medium than text. In this project we aim to test whether videos can be used, in place of text-based grant proposals, to improve communication and increase the reliability of grant ranking. We will test whether video improves reviewer comprehension (Aim 1), whether external reviewer grant scores are more consistent with video (Aim 2), and whether mock Australian Research Council (ARC) panels award more consistent scores when grants are presented as videos (Aim 3). This will be the first study to eva
7

Jiang, Ai Wen, and Gao Rong Zeng. "Multi-information Integrated Method for Text Extraction from Videos." Advanced Materials Research 225-226 (April 2011): 827–30. http://dx.doi.org/10.4028/www.scientific.net/amr.225-226.827.

Abstract:
Video text provides important semantic information for video content analysis. However, video text with a complex background yields poor OCR recognition performance. Most previous approaches to extracting overlay text from videos are based on traditional binarization and pay little attention to multi-information integration, especially fusing background information. This paper presents an effective method to precisely extract characters from videos so that OCR achieves good recognition performance. The proposed method combines multiple sources of information, including background
8

Ma, Fan, Xiaojie Jin, Heng Wang, Jingjia Huang, Linchao Zhu, and Yi Yang. "Stitching Segments and Sentences towards Generalization in Video-Text Pre-training." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (2024): 4080–88. http://dx.doi.org/10.1609/aaai.v38i5.28202.

Abstract:
Video-language pre-training models have recently achieved remarkable results on various multi-modal downstream tasks. However, most of these models rely on contrastive learning or masked modeling to align global features across modalities, neglecting the local associations between video frames and text tokens. This limits the model's ability to perform fine-grained matching and generalization, especially for tasks that involve selecting segments in long videos based on query texts. To address this issue, we propose a novel stitching and matching pre-text task for video-language pre-training that enco
9

Liu, Yang, Shudong Huang, Deng Xiong, and Jiancheng Lv. "Learning Dynamic Similarity by Bidirectional Hierarchical Sliding Semantic Probe for Efficient Text Video Retrieval." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 6 (2025): 5667–75. https://doi.org/10.1609/aaai.v39i6.32604.

Abstract:
Text-video retrieval is a foundational task in multi-modal research that aims to align texts and videos in the embedding space. The key challenge is to learn the similarity between videos and texts. A conventional approach directly aligns video-text pairs using cosine similarity. However, due to the disparity in the information conveyed by videos and texts, i.e., a single video can be described from multiple perspectives, the retrieval accuracy is suboptimal. An alternative approach employs cross-modal interaction to enable videos to dynamically acquire distinct features from various
10

Sun, Shangkun, Xiaoyu Liang, Songlin Fan, Wenxu Gao, and Wei Gao. "VE-Bench: Subjective-Aligned Benchmark Suite for Text-Driven Video Editing Quality Assessment." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 7 (2025): 7105–13. https://doi.org/10.1609/aaai.v39i7.32763.

Abstract:
Text-driven video editing has recently experienced rapid development. Despite this, evaluating edited videos remains a considerable challenge. Current metrics tend not to align with human perception, and effective quantitative metrics for video editing are still notably absent. To address this, we introduce VE-Bench, a benchmark suite tailored to the assessment of text-driven video editing. This suite includes VE-Bench DB, a video quality assessment (VQA) database for video editing. VE-Bench DB encompasses a diverse set of source videos featuring various motions and subjects, along with m
More sources

Dissertations on the topic "Video text"

1

Sidevåg, Emmilie. "Användarmanual text vs video." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-17617.

2

Salway, Andrew. "Video annotation : the role of specialist text." Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/843350/.

Abstract:
Digital video is among the most information-intensive modes of communication. The retrieval of video from digital libraries, along with sound and text, is a major challenge for the computing community in general and for the artificial intelligence community specifically. The advent of digital video has set some old questions in a new light. Questions relating to aesthetics and to the role of surrogates - image for reality and text for image - invariably touch upon the link between vision and language. Dealing with this link computationally is important for the artificial intelligence enterprise
3

Smith, Gregory. "VIDEO SCENE DETECTION USING CLOSED CAPTION TEXT." VCU Scholars Compass, 2009. http://scholarscompass.vcu.edu/etd/1932.

Abstract:
Issues in automatic video biography editing are similar to those in video scene detection and topic detection and tracking (TDT). The techniques of video scene detection and TDT can be applied to interviews to reduce the time necessary to edit a video biography. The system addresses the problems of video text extraction, story segmentation, and correlation. This thesis project was divided into three parts: extraction, scene detection, and correlation. The project successfully detected scene breaks in series television episodes and displayed scenes that had similar content.
4

Zhang, Jing. "Extraction of Text Objects in Image and Video Documents." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4266.

Abstract:
The popularity of digital image and video is increasing rapidly. To help users navigate libraries of images and video, Content-Based Information Retrieval (CBIR) systems that can automatically index image and video documents are needed. However, due to the semantic gap between low-level machine descriptors and high-level semantic descriptors, existing CBIR systems are still far from perfect. Text embedded in multimedia data, as a well-defined model of concepts for human communication, contains much semantic information related to the content. This text information can provide a much truer
5

Sjölund, Jonathan. "Detection of Frozen Video Subtitles Using Machine Learning." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158239.

Abstract:
When subtitles are burned into a video, an encoder error can sometimes result in the same subtitle being burned into several frames, so that subtitles become frozen. This thesis provides a way to detect frozen video subtitles with the help of an implemented text detector and classifier. Two types of classifiers, naïve classifiers and machine learning classifiers, are tested and compared on a variety of different videos to see how much a machine learning approach can improve performance. The naïve classifiers are evaluated using ground truth data to gain an underst
6

Chen, Datong. "Text detection and recognition in images and video sequences /." [S.l.] : [s.n.], 2003. http://library.epfl.ch/theses/?display=detail&nr=2863.

7

Štindlová, Marie. "Museli to založit." Master's thesis, Vysoké učení technické v Brně. Fakulta výtvarných umění, 2015. http://www.nusl.cz/ntk/nusl-232451.

8

Bird, Paul. "Elementary students' comprehension of computer presented text." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29187.

Abstract:
The study investigated grade 6 students' comprehension of narrative text when presented on a computer and as printed words on paper. A set of comprehension tests was developed for three stories of varying length (382, 1047, and 1933 words) using a skills hierarchy protocol. The text for each story was prepared for presentation on a Macintosh computer, using a program written for the study, and as print in the form of exact copies of the computer screen. Students from two grade 6 classes in a suburban elementary school were randomly assigned to read one of the stories in either print
9

Sharma, Nabin. "Multi-lingual Text Processing from Videos." Thesis, Griffith University, 2015. http://hdl.handle.net/10072/367489.

Abstract:
Advances in digital technology have produced low-priced portable imaging devices such as digital cameras attached to mobile phones, camcorders, PDAs, etc. These devices can easily be used to capture videos and images, which can be shared through the internet and other communication media. In the commercial domain, cameras are used to create news, advertisement videos and other forms of material for information communication. The use of multiple languages to create information for targeted audiences is quite common in countries having mul
10

Fraz, Muhammad. "Video content analysis for intelligent forensics." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/18065.

Abstract:
The networks of surveillance cameras installed in public places and private territories continuously record video data with the aim of detecting and preventing unlawful activities. This enhances the importance of video content analysis applications, whether for real-time (i.e. analytic) or post-event (i.e. forensic) analysis. In this thesis, the primary focus is on four key aspects of video content analysis, namely: 1. moving object detection and recognition; 2. correction of colours in video frames and recognition of the colours of moving objects; 3. make and model recognition of vehicles and
More sources

Books on the topic "Video text"

1

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. Video Text Detection. Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6.

2

Wilde, Rod, ed. Volleyball Essentials: Video-Text. Total Health Publications, 2014.

3

Shivakumara, Palaiahnakote, and Umapada Pal. Cognitively Inspired Video Text Processing. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-7069-5.

4

Atkins, Barry, and Tanya Krzywinska, eds. Videogame, Player, Text. Manchester University Press, 2007.

5

Peterson, Tara. Should kids play video games?: A persuasive text. Mondo, 2006.

6

Stark, James H., ed. The Practice of Mediation: A Video-Integrated Text. 2nd ed. Wolters Kluwer Law & Business, 2012.

7

Chen, Datong. Text detection and recognition in images and video sequences. EPFL, 2003.

8

Szuprowicz, Bohdan O. Multimedia technology: Combining sound, text, computing, graphics, and video. Computer Technology Research Corp., 1992.

9

Grannell, Mike. Self-managed study in mathematics using text and video. Open Learning Foundation, 1996.

10

Griggs, Yvonne. Shakespeare's King Lear: The relationship between text and film. Methuen Drama, 2009.

More sources

Book chapters on the topic "Video text"

1

Weik, Martin H. "video text." In Computer Science and Communications Dictionary. Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_20796.

2

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Video Preprocessing." In Video Text Detection. Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_2.

3

Shivakumara, Palaiahnakote, and Umapada Pal. "Video Text Recognition." In Cognitive Intelligence and Robotics. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-7069-5_9.

4

Shivakumara, Palaiahnakote, and Umapada Pal. "Video Text Detection." In Cognitive Intelligence and Robotics. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-7069-5_4.

5

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Video Caption Detection." In Video Text Detection. Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_3.

6

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Video Text Detection Systems." In Video Text Detection. Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_7.

7

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Introduction to Video Text Detection." In Video Text Detection. Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_1.

8

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Performance Evaluation." In Video Text Detection. Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_10.

9

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Text Detection from Video Scenes." In Video Text Detection. Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_4.

10

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Post-processing of Video Text Detection." In Video Text Detection. Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_5.


Conference papers on the topic "Video text"

1

Lin, Xing, Langxi Liu, Pengjun Zhai, and Yu Fang. "Entity-Aware Video-Text Interaction for Contextualised Video Caption in News Video." In 2024 9th International Conference on Intelligent Computing and Signal Processing (ICSP). IEEE, 2024. http://dx.doi.org/10.1109/icsp62122.2024.10743775.

2

Sridhar, Bodanapu, Gourishetti Saivishnu, Varla ManiShanker, D. Dhana Lakshmi, and Shanmugasundaram Hariharan. "Summarization of Video into Text and Text to Braille Script." In 2024 International Conference on Knowledge Engineering and Communication Systems (ICKECS). IEEE, 2024. http://dx.doi.org/10.1109/ickecs61492.2024.10617121.

3

Y K, Anupama, Neelam Neha, Dinky Verma, S. B. Sankeerthana, and Medha Jha. "Compilation of Text to Video models." In 2024 International Conference on IoT, Communication and Automation Technology (ICICAT). IEEE, 2024. https://doi.org/10.1109/icicat62666.2024.10923075.

4

Zhao, Heng, Zhao Yinjie, Bihan Wen, Yew-Soon Ong, and Joey Tianyi Zhou. "Video-Text Prompting for Weakly Supervised Spatio-Temporal Video Grounding." In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.emnlp-main.1086.

5

Jin, Xiaojie, Bowen Zhang, Weibo Gong, et al. "MV-Adapter: Multimodal Video Transfer Learning for Video Text Retrieval." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.02563.

6

Menapace, Willi, Aliaksandr Siarohin, Ivan Skorokhodov, et al. "Snap Video: Scaled Spatiotemporal Transformers for Text-to-Video Synthesis." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.00672.

7

Wang, Jiamian, Pichao Wang, Guohao Sun, et al. "Text Is MASS: Modeling as Stochastic Embedding for Text-Video Retrieval." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.01566.

8

Nixon, Lyndon, Damianos Galanopoulos, and Vasileios Mezaris. "Finding Video Shots for Immersive Journalism Through Text-to-Video Search." In 2024 International Conference on Content-Based Multimedia Indexing (CBMI). IEEE, 2024. https://doi.org/10.1109/cbmi62980.2024.10859220.

9

Zu, Xinyan, Haiyang Yu, Bin Li, and Xiangyang Xue. "Towards Accurate Video Text Spotting with Text-wise Semantic Reasoning." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/206.

Abstract:
Video text spotting (VTS) aims at extracting texts from videos, where text detection, tracking and recognition are conducted simultaneously. Some existing works can tackle VTS; however, they may ignore the underlying semantic relationships among texts within a frame. We observe that the texts within a frame usually share similar semantics, which suggests that, if one text is predicted incorrectly by a text recognizer, it still has a chance to be corrected via semantic reasoning. In this paper, we propose an accurate video text spotter, VLSpotter, that reads texts visually, linguist
10

Shen, Xiaobo, Qianxin Huang, Long Lan, and Yuhui Zheng. "Contrastive Transformer Cross-Modal Hashing for Video-Text Retrieval." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/136.

Abstract:
As video-based social networks continue to grow exponentially, there is rising interest in video retrieval using natural language. Cross-modal hashing, which learns compact hash codes for encoding multi-modal data, has proven widely effective in large-scale cross-modal retrieval, e.g., image-text retrieval, primarily due to its computational and storage efficiency. However, when applied to video-text retrieval, existing cross-modal hashing methods generally extract features at the frame or word level for videos and texts individually, thereby ignoring their long-term dependencies. To add

Reports of organizations on the topic "Video text"

1

Li, Huiping, David Doermann, and Omid Kia. Automatic Text Detection and Tracking in Digital Video. Defense Technical Information Center, 1998. http://dx.doi.org/10.21236/ada458675.

2

Olsson, Justin. Real-time Underwater Fish Identification and Biomonitoring via Machine Learning-Based Compression of Video to Text. Experiment, 2025. https://doi.org/10.18258/77387.

3

Kuzmin, Vyacheslav, Alebai Sabitov, Andrei Reutov, Vladimir Amosov, Lidiia Neupokeva, and Igor Chernikov. Electronic training manual "Providing first aid to the population". SIB-Expertise, 2024. http://dx.doi.org/10.12731/er0774.29012024.

Abstract:
First aid comprises the simplest urgent measures necessary to save the lives of victims of injuries, accidents and sudden illnesses. Providing first aid greatly increases the chances of survival in cases of bleeding, injury, and cardiac and respiratory arrest, and prevents complications such as shock, massive blood loss, additional displacement of bone fragments, and injury to large nerve trunks and blood vessels. This electronic educational resource consists of four theoretical educational modules: legal aspects of providing first aid to victims and work safety when providing first aid; providing
4

Sharova, Iryna. WAYS OF PROMOTING UKRANIAN PUBLISHING HOUSES ON FACEBOOK DURING QUARANTINE. Ivan Franko National University of Lviv, 2021. http://dx.doi.org/10.30970/vjo.2021.49.11076.

Abstract:
The article reviews and analyzes the promotion of Ukrainian publishing houses on Facebook during quarantine in 2020. The study's main focus is content and the types of content used for representation on Facebook. We found that going live and posting text with a picture were most popular. The phenomenon of live video is tightly connected to the quarantine phenomenon, though not every publishing house was able to go live permanently, or at least regularly. However, simple text with a picture is the least complicated content to post and the most popular. Ukrainian publishers also use UGC (
5

Baluk, Nadia, Natalia Basij, Larysa Buk, and Olha Vovchanska. VR/AR-TECHNOLOGIES – NEW CONTENT OF THE NEW MEDIA. Ivan Franko National University of Lviv, 2021. http://dx.doi.org/10.30970/vjo.2021.49.11074.

Abstract:
The article analyzes the peculiarities of media content shaping and transformation in the convergent dimension of cross-media, taking into account the possibilities of augmented reality. Guided by the principles of objectivity, complexity and reliability in scientific research, a number of general scientific and special methods are used: analysis, synthesis, generalization, monitoring, observation, and problem-thematic, typological and discursive methods. According to the form of information presentation, such types of media content as visual, audio, verbal and combi
6

Prudkov, Mikhail, Vasily Ermolaev, Elena Shurygina, and Eduard Mikaelyan. Electronic educational resource "Hospital Surgery for 5th year students of the Faculty of Pediatrics". SIB-Expertise, 2024. http://dx.doi.org/10.12731/er0780.29012024.

Abstract:
This electronic educational resource was created for the independent work of 5th-year students of the pediatric faculty studying the discipline "Hospital Surgery". The possibility of monitoring by the teacher is provided. This EER includes an introductory module, a topic module, and a quality assessment module. The structure of each topic in the EER (there are 19 topics in total) consists of the following sections: educational and methodological tasks on the topic, an abstract of the topic, control tests on the topic, clinical situational tasks on the topic, and a list of references. The section "Summar
7

Felix, Juri, and Laura Webb. Use of artificial intelligence in education delivery and assessment. Parliamentary Office of Science and Technology, 2024. http://dx.doi.org/10.58248/pn712.

Abstract:
This POSTnote considers how artificial intelligence (AI) technologies can be used by educators and learners in schools, colleges and universities. Artificial intelligence technologies that can be used in education have developed rapidly in recent years, driven in part by advances in generative AI, which is now capable of performing a wide range of tasks, including the production of realistic content such as text, images, audio and video. Artificial intelligence tools have the potential to provide different ways of learning and to help educators with lesson planning, marking an
8

Krull, R. 8mm video tape test. Office of Scientific and Technical Information (OSTI), 1990. http://dx.doi.org/10.2172/6375254.

9

Rekstad, Gary. Development of a Video Tape to Test Video Codecs Operating at 64KBPS. Defense Technical Information Center, 1989. http://dx.doi.org/10.21236/ada228157.

10

Crandall, Rob. Airborne Separation Video System Government Suitability Test. Defense Technical Information Center, 1999. http://dx.doi.org/10.21236/ada368478.
