Academic literature on the topic 'The Caption'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'The Caption.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "The Caption"

1

Lai, Hongling, Dianjian Wang, and Xiancai Ou. "The Effects of Different Caption Modes on Chinese English Learners' Content and Vocabulary Comprehension." International Journal of Computer-Assisted Language Learning and Teaching 11, no. 4 (2021): 54–68. http://dx.doi.org/10.4018/ijcallt.2021100104.

Full text
Abstract:
This empirical study investigates the effects of different caption modes on the content and vocabulary comprehension by Chinese English learners with different levels of English proficiency. The results show that the full captioned group performed better on content comprehension than the keyword group, while no significant difference was found on vocabulary comprehension between the two captioned groups. For the beginning-level learners, the full captioned groups did better both in content and vocabulary comprehension than the keyword caption group; meanwhile, for the advanced learners, both f
2

Butler, Janine. "The Visual Experience of Accessing Captioned Television and Digital Videos." Television & New Media 21, no. 7 (2019): 679–96. http://dx.doi.org/10.1177/1527476418824805.

Full text
Abstract:
The increase in video-based communication has made different caption styles more apparent to audiences, including hearing viewers who watch social media videos with colorful open captions. To explore how viewers respond to a variety of caption styles, this article shares findings from three focus group discussions with twenty deaf and hard-of-hearing college students. This article begins by discussing the accessibility of captioned television and digital media and how captions can influence the viewing experience. This article then analyzes deaf and hard-of-hearing focus group participants’ st
3

Cárdenas, Monica, and Daniela Rocio Ramirez Orellana. "Progressive Reduction of Captions in Language Learning." Journal of Information Technology Education: Innovations in Practice 23 (2024): 002. http://dx.doi.org/10.28945/5263.

Full text
Abstract:
Aim/Purpose: This exploratory qualitative case study examines the perceptions of high-school learners of English regarding a pedagogical intervention involving progressive reduction of captions (full, sentence-level, keyword captions, and no-captions) in enhancing language learning. Background: Recognizing the limitations of caption usage in fostering independent listening comprehension in non-captioned environments, this research builds upon and extends the foundational work of Vanderplank (2016), who highlighted the necessity of a comprehensive blend of tasks, strategies, focused viewing, an
4

Muehlbradt, Annika, and Shaun K. Kane. "What's in an ALT Tag? Exploring Caption Content Priorities through Collaborative Captioning." ACM Transactions on Accessible Computing 15, no. 1 (2022): 1–32. http://dx.doi.org/10.1145/3507659.

Full text
Abstract:
Evaluating the quality of accessible image captions with human raters is difficult, as it may be difficult for a visually impaired user to know how comprehensive a caption is, whereas a sighted assistant may not know what information a user will need from a caption. To explore how image captioners and caption consumers assess caption content, we conducted a series of collaborative captioning sessions in which six pairs, consisting of a blind person and their sighted partner, worked together to discuss, create, and evaluate image captions. By making captioning a collaborative task, we were able
5

Hsu, Hui-Tzu. "Incidental professional vocabulary acquisition of EFL business learners: Effect of captioned video with glosses as a multimedia annotation." JALT CALL Journal 14, no. 2 (2018): 119–42. http://dx.doi.org/10.29140/jaltcall.v14n2.j227.

Full text
Abstract:
Use of captioned video in classrooms has gained considerable attention in second and foreign language learning. However, the effect of application of captioned video embedded with glosses on incidental vocabulary enhancement has not been explored. This study aims to examine the effect of video captions with glosses on EFL students' incidental business vocabulary acquisition; 50 students from a college of management served as participants. A pretest was adopted to ensure participants lacked familiarity with the target vocabulary. All participants watched three video clips presented in three
6

Li, Yan. "Listen or Read? The Impact of Proficiency and Visual Complexity on Learners’ Reliance on Captions." Behavioral Sciences 15, no. 4 (2025): 542. https://doi.org/10.3390/bs15040542.

Full text
Abstract:
This study investigates how Chinese EFL (English as a foreign language) learners of low- and high-proficiency levels allocate attention between captions and audio while watching videos, and how visual complexity (single- vs. multi-speaker content) influences caption reliance. The study employed a novel paused transcription method to assess real-time processing. A total of 64 participants (31 low-proficiency [A1–A2] and 33 high-proficiency [C1–C2] learners) viewed single- and multi-speaker videos with English captions. Misleading captions were inserted to objectively measure reliance on caption
7

Hsu, Ching-Kun. "Learning motivation and adaptive video caption filtering for EFL learners using handheld devices." ReCALL 27, no. 1 (2014): 84–103. http://dx.doi.org/10.1017/s0958344014000214.

Full text
Abstract:
The aim of this study was to provide adaptive assistance to improve the listening comprehension of eleventh grade students. This study developed a video-based language learning system for handheld devices, using three levels of caption filtering adapted to student needs. Elementary level captioning excluded 220 English sight words (see Section 1 for definition), but provided captions and Chinese translations for the remaining words. Intermediate level excluded 1000 high frequency English words, but provided captions for the remaining words, and 2200 high frequency English words were ex
8

Suh, Hyesun, Jiyeon Kim, Jinsoo So, and Jongjin Jung. "A core region captioning framework for automatic video understanding in story video contents." International Journal of Engineering Business Management 14 (January 2022): 184797902210781. http://dx.doi.org/10.1177/18479790221078130.

Full text
Abstract:
Due to the rapid increase in images and image data, research examining the visual analysis of such unstructured data has recently come to be actively conducted. One of the representative image caption models, the DenseCap model, extracts various regions in an image and generates region-level captions. However, since the existing DenseCap model does not consider priority for region captions, it is difficult to identify relatively significant region captions that best describe the image. There has also been a lack of research into captioning focusing on the core areas for story content, such as im
9

Li, Hongxiang, Meng Cao, Xuxin Cheng, Yaowei Li, Zhihong Zhu, and Yuexian Zou. "Exploiting Auxiliary Caption for Video Grounding." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (2024): 18508–16. http://dx.doi.org/10.1609/aaai.v38i17.29812.

Full text
Abstract:
Video grounding aims to locate a moment of interest matching the given query sentence from an untrimmed video. Previous works ignore the sparsity dilemma in video annotations, which fails to provide the context information between potential events and query sentences in the dataset. In this paper, we contend that exploiting easily available captions which describe general actions, i.e., auxiliary captions defined in our paper, will significantly boost the performance. To this end, we propose an Auxiliary Caption Network (ACNet) for video grounding. Specifically, we first introduce dense video
10

Yang, Jie Chi, and Peichin Chang. "Captions and reduced forms instruction: The impact on EFL students’ listening comprehension." ReCALL 26, no. 1 (2013): 44–61. http://dx.doi.org/10.1017/s0958344013000219.

Full text
Abstract:
For many EFL learners, listening poses a grave challenge. The difficulty in segmenting a stream of speech and limited capacity in short-term memory are common weaknesses for language learners. Specifically, reduced forms, which frequently appear in authentic informal conversations, compound the challenges in listening comprehension. Numerous interventions have been implemented to assist EFL language learners, and of these, the application of captions has been found highly effective in promoting learning. Few studies have examined how different modes of captions may enhance listening co

Dissertations / Theses on the topic "The Caption"

1

Dulle, John David. "A caption-based natural-language interface handling descriptive captions for a multimedia database system." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA236533.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, June 1990. Thesis Advisor(s): Lum, Vincent Y.; Rowe, Neil C. "June 1990." Description based on signature page. DTIC Identifiers: Interfaces, natural language, databases, theses. Author(s) subject terms: Natural language processing, multimedia database system, natural language interface, descriptive captions. Includes bibliographical references (p. 27).
2

Selvatici, Carolina. "Closed Caption: Achievements e Issues." Pontifícia Universidade Católica do Rio de Janeiro, 2010. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=16105@1.

Full text
Abstract:
Coordenação de Aperfeiçoamento do Pessoal de Ensino Superior; Conselho Nacional de Desenvolvimento Científico e Tecnológico. Closed captions are a type of subtitle conceived to give deaf and hard-of-hearing people access to programs, commercials, and films shown on television, on video, and on DVD. Created in the United States, closed captioning also serves other segments of society, such as elderly people who have lost their hearing, foreigners learning the language, semi-literate people, and children learning to read, and it makes the audio understandable in places where the TV sound is inaudible.
3

Feng, Yansong. "Automatic caption generation for news images." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/5291.

Full text
Abstract:
This thesis is concerned with the task of automatically generating captions for images, which is important for many image-related applications. Automatic description generation for video frames would help security authorities manage more efficiently and utilize large volumes of monitoring data. Image search engines could potentially benefit from image description in supporting more accurate and targeted queries for end users. Importantly, generating image descriptions would aid blind or partially sighted people who cannot access visual information in the same way as sighted people can. However
4

Zhou, Mingjie. "Deep networks for sign language video caption." HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/848.

Full text
Abstract:
In the hearing-loss community, sign language is a primary tool for communicating, although there is a communication gap between people with hearing loss and people with normal hearing. Sign language is different from spoken language. It has its own vocabulary and grammar. Recent works concentrate on sign language video captioning, which consists of sign language recognition and sign language translation. Continuous sign language recognition, which can bridge the communication gap, is a challenging task because of the weakly supervised ordered annotations where no frame-level label is provided. To
5

Smith, Gregory. "VIDEO SCENE DETECTION USING CLOSED CAPTION TEXT." VCU Scholars Compass, 2009. http://scholarscompass.vcu.edu/etd/1932.

Full text
Abstract:
Issues in Automatic Video Biography Editing are similar to those in Video Scene Detection and Topic Detection and Tracking (TDT). The techniques of Video Scene Detection and TDT can be applied to interviews to reduce the time necessary to edit a video biography. The system has attacked the problems of extraction of video text, story segmentation, and correlation. This thesis project was divided into three parts: extraction, scene detection, and correlation. The project successfully detected scene breaks in series television episodes and displayed scenes that had similar content.
6

Keisala, Simon. "Using a Character-Based Language Model for Caption Generation." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-163001.

Full text
Abstract:
Using AI to automatically describe images is a challenging task. The aim of this study has been to compare the use of character-based language models with one of the current state-of-the-art token-based language models, im2txt, to generate image captions, with focus on morphological correctness. Previous work has shown that character-based language models are able to outperform token-based language models in morphologically rich languages. Other studies show that simple multi-layered LSTM-blocks are able to learn to replicate the syntax of its training data. To study the usability of character
7

Mirzaei, Maryam Sadat. "Partial and Synchronized Caption to Foster Second Language Listening based on Automatic Speech Recognition Clues." 京都大学 (Kyoto University), 2017. http://hdl.handle.net/2433/225730.

Full text
8

Lin, Xiao. "Leveraging Multimodal Perspectives to Learn Common Sense for Vision and Language Tasks." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/79521.

Full text
Abstract:
Learning and reasoning with common sense is a challenging problem in Artificial Intelligence (AI). Humans have the remarkable ability to interpret images and text from different perspectives in multiple modalities, and to use large amounts of commonsense knowledge while performing visual or textual tasks. Inspired by that ability, we approach commonsense learning as leveraging perspectives from multiple modalities for images and text in the context of vision and language tasks. Given a target task (e.g., textual reasoning, matching images with captions), our system first represents input imag
9

Kadmark, Louise. "Intimitet och emotionell kommunikation via Instagram : en kvalitativ studie om influencers sätt att kommunicera." Thesis, Stockholms universitet, JMK, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-165564.

Full text
Abstract:
Social media has become an integrated part of our everyday lives, and profiles constantly work on the creation of identity, representation, and interaction. A search of previous research revealed a lack of studies on the design and impact of communication on Instagram, above all of the intimate and emotional kind broadcast via influencers. The study aims to provide an understanding of the self-representational aspects, but also of how emotion and the humanization of the channel contribute to the otherwise distanced close contact with followers. The thesis is meant to highlight the importance of interaction in
10

Soler, Edilaine Martins. "Otimização dos custos de energia elétrica na programação do armazenamento e distribuição de água em redes urbanas." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-02042008-105417/.

Full text
Abstract:
The problem addressed in this research consists of distributing water in urban networks to meet known demands, with the objective of minimizing the cost of the electricity required to operate hydraulic pumps. The pumps are used to draw water from artesian wells or water treatment stations to supply reservoirs distributed across a city's neighborhoods, from which the population is served by gravity. Since the cost of electricity varies over the course of the day, the operation of the pumps must be scheduled so that

Books on the topic "The Caption"

1

Nicolin, Paola, Hans Ulrich Obrist, and Padiglione d'arte contemporanea (Milan, Italy), eds. Alberto Garutti: Didascalia/caption. Mousse Publishing, 2012.

Find full text
2

Rowe, Neil C. Efficient caption-based retrieval of multimedia information. Naval Postgraduate School, 1994.

Find full text
3

Verlinde, Ruth. How to write and caption for deaf people. T.J. Publishers, 1986.

Find full text
4

United States. Interstate Commerce Commission. Office of Public Assistance., ed. Sample caption summaries and standard transportation commodity codes. The Office, 1987.

Find full text
5

Tom, Fox, and Ken Wiederhorn. Return of the living dead. Warner Home Video, 2004.

Find full text
6

Hariman, Robert. No caption needed: Iconic photographs, public culture, and liberal democracy. University of Chicago Press, 2007.

Find full text
7

Educational Resources Information Center (U.S.), ed. Caption speed and viewer comprehension of television programs: Final report. U.S. Dept. of Education, Office of Educational Research and Improvement, Educational Resources Information Center, 1999.

Find full text
8

Stickley, Michael. Stickley optical family: Four optical sizes: display, headline, text, caption. Multiple styles & weights. P22 Type Foundry, 2013.

Find full text
9

United States. Congress. Senate. Committee on Commerce, Science, and Transportation. Television Decoder Circuitry Act of 1990: Report of the Senate Committee on Commerce, Science, and Transportation on S. 1974. U.S. G.P.O., 1990.

Find full text
10

United States. Congress. Senate. Committee on Commerce, Science, and Transportation. Subcommittee on Communications. TV Decoder Circuitry Act of 1989: Hearing before the Subcommittee on Communications of the Committee on Commerce, Science, and Transportation, United States Senate, One Hundred First Congress, second session on S. 1974 ... June 20, 1990. U.S. G.P.O., 1990.

Find full text

Book chapters on the topic "The Caption"

1

Alm, Antonie, and Yuki Watanabe. "Caption Literacy." In The Palgrave Encyclopedia of Computer-Assisted Language Learning. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-51447-0_263-1.

Full text
2

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Video Caption Detection." In Video Text Detection. Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_3.

Full text
3

Deepika, B., S. Pushpanjali Reddy, S. Gouthami Satya, and K. Rushil Kumar. "Image Caption Generator." In Atlantis Highlights in Computer Sciences. Atlantis Press International BV, 2023. http://dx.doi.org/10.2991/978-94-6463-314-6_35.

Full text
4

Costa, Karen. "Caption Your Videos." In 99 Tips for Creating Simple and Sustainable Educational Videos. Routledge, 2023. http://dx.doi.org/10.4324/9781003442691-96.

Full text
5

Muñoz, Carmen, and Imma Miralpeix. "More pieces in the puzzle about language learning through audiovisual input." In Language Learning & Language Teaching. John Benjamins Publishing Company, 2024. http://dx.doi.org/10.1075/lllt.61.10mun.

Full text
Abstract:
In this concluding chapter, we bring together findings from the studies in this volume and place them within the context of prior research on audiovisual input, particularly within the broader framework of the SUBTiLL project. The findings are organized into three sections: captioned viewing, learning outcomes across various language dimensions, and individual differences. The first section addresses several concerns regarding captions, including their appropriateness for use with primary school children, a comparison with L1 subtitles, and caption enhancement. The second section delves into t
6

Sato, Yuri, Ayaka Suzuki, and Koji Mineshima. "Building a Large Dataset of Human-Generated Captions for Science Diagrams." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-71291-3_32.

Full text
Abstract:
Human-generated captions for photographs, particularly snapshots, have been extensively collected in recent AI research. They play a crucial role in the development of systems capable of multimodal information processing that combines vision and language. Recognizing that diagrams may serve a distinct function in thinking and communication compared to photographs, we shifted our focus from snapshot photographs to diagrams. We provided humans with text-free diagrams and collected data on the captions they generated. The diagrams were sourced from AI2D-RST, a subset of AI2D. This subset
7

Wang, Zhen, Long Chen, Wenbo Ma, et al. "Explicit Image Caption Editing." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20059-5_7.

Full text
8

Arroyo Chavez, Mariana, Bernard Thompson, Molly Feanny, et al. "Customization of Closed Captions via Large Language Models." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-62849-8_7.

Full text
Abstract:
This study investigates the feasibility of employing artificial intelligence and large language models (LLMs) to customize closed captions/subtitles to match the personal needs of deaf and hard of hearing viewers. Drawing on recorded live TV samples, it compares user ratings of caption quality, speed, and understandability across five experimental conditions: unaltered verbatim captions, slowed-down verbatim captions, moderately and heavily edited captions via ChatGPT, and lightly edited captions by an LLM optimized for TV content by AppTek, LLC. Results across 16 deaf and hard of hear
9

Tanti, Marc, Albert Gatt, and Adrian Muscat. "Pre-gen Metrics: Predicting Caption Quality Metrics Without Generating Captions." In Lecture Notes in Computer Science. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-11018-5_10.

Full text
10

Zhou, Chang, Yuzhao Mao, and Xiaojie Wang. "Topic-Specific Image Caption Generation." In Lecture Notes in Computer Science. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69005-6_27.

Full text

Conference papers on the topic "The Caption"

1

Jaswanth, Pasupuleti, Chakravaram Hari Priya, Krithin Thota, and Manju Khanna. "Live Transcription and Closed Caption." In 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2024. http://dx.doi.org/10.1109/icccnt61001.2024.10724562.

Full text
2

Ben-Kish, Assaf, Moran Yanuka, Morris Alper, Raja Giryes, and Hadar Averbuch-Elor. "Mitigating Open-Vocabulary Caption Hallucinations." In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.emnlp-main.1263.

Full text
3

Bhadane, Prathamesh, Pratik Bhil, Atharv Dhup, and Ankita k. Patel. "Caption Driven Video Event Detection." In 2024 4th International Conference on Intelligent Technologies (CONIT). IEEE, 2024. http://dx.doi.org/10.1109/conit61985.2024.10626468.

Full text
4

Sasidhar, Chalcheema, Madan Lal Saini, Medarametla Charan, Avula Venkata Shivanand, and Vijay Mohan Shrimal. "Image Caption Generator Using LSTM." In 2024 4th International Conference on Technological Advancements in Computational Sciences (ICTACS). IEEE, 2024. https://doi.org/10.1109/ictacs62700.2024.10841294.

Full text
5

Jagtap, Adhiraj Ajaykumar, Jitendra Musale, Sambhaji Nawale, Parth Takate, Indra Kale, and Saurabh Waghmare. "Image Caption Generator with CLIP Interrogator." In 2025 IEEE International Conference on Interdisciplinary Approaches in Technology and Management for Social Innovation (IATMSI). IEEE, 2025. https://doi.org/10.1109/iatmsi64286.2025.10985030.

Full text
6

Roy, Aniket, Anshul Shah, Ketul Shah, Anirban Roy, and Rama Chellappa. "Cap2Aug: Caption Guided Image data Augmentation." In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2025. https://doi.org/10.1109/wacv61041.2025.00884.

Full text
7

Yun, Youngsik. "Culturally-aware Image Captioning." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/975.

Full text
Abstract:
The primary research challenge lies in mitigating and measuring geographical and demographic biases in generative models, which is crucial for ensuring fairness in AI applications. Existing models trained on web-crawled datasets like LAION-400M often perpetuate harmful stereotypes and biases, especially concerning minority groups or less-represented regions. To address this, I proposed a framework called CIC (Culturally-aware Image Caption) to generate culturally-aware image captions. This framework leverages visual question answering (VQA) to extract cultural visual elements from images. It p
8

Peng, Qingsong, Yousheng Zhang, Ronggui Wang, and Shuli Zheng. "Detecting caption using caption histograms." In Second International Conference on Image and Graphics, edited by Wei Sui. SPIE, 2002. http://dx.doi.org/10.1117/12.477208.

Full text
9

Wu, Aming, Yahong Han, and Yi Yang. "Video Interactive Captioning with Human Prompts." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/135.

Full text
Abstract:
Video captioning aims at generating a proper sentence to describe the video content. As a video often includes rich visual content and semantic details, different people may be interested in different views. Thus the generated sentence always fails to meet the ad hoc expectations. In this paper, we make a new attempt that, we launch a round of interaction between a human and a captioning agent. After generating an initial caption, the agent asks for a short prompt from the human as a clue of his expectation. Then, based on the prompt, the agent could generate a more accurate caption. We name t
10

Su, Yih-Ming, and Chaur-Heh Hsieh. "A Novel Caption Extraction Scheme for Various Sports Captions." In 18th International Conference on Pattern Recognition (ICPR'06). IEEE, 2006. http://dx.doi.org/10.1109/icpr.2006.135.

Full text

Reports on the topic "The Caption"

1

Suratwala, T. Figure and caption for LDRD annual report. Office of Scientific and Technical Information (OSTI), 2017. http://dx.doi.org/10.2172/1410015.

Full text
2

Samuelson, Magdalen. Captive Still Life. Portland State University Library, 2000. http://dx.doi.org/10.15760/etd.1343.

Full text
3

Pauly, T., and D. Thakore, eds. Captive Portal API. RFC Editor, 2020. http://dx.doi.org/10.17487/rfc8908.

Full text
4

Larose, K., D. Dolson, and H. Liu. Captive Portal Architecture. RFC Editor, 2020. http://dx.doi.org/10.17487/rfc8952.

Full text
5

Rowe, Neil C., and Eugene J. Guglielmo. Exploiting Captions in Retrieval of Multimedia Data. Defense Technical Information Center, 1992. http://dx.doi.org/10.21236/ada255184.

Full text
6

Tetzlaff, Sasha, Jinelle Sperry, Bruce Kingsburg, and Brett DeGregorio. Captive-rearing duration may be more important than environmental enrichment for enhancing turtle head-starting success. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/41800.

Full text
Abstract:
Raising captive animals past critical mortality stages for eventual release (head-starting) is a common conservation tactic. Counterintuitively, post-release survival can be low. Post-release behavior affecting survival could be influenced by captive-rearing duration and housing conditions. Practitioners have adopted environmental enrichment to promote natural behaviors during head-starting such as raising animals in naturalistic enclosures. Using 32 captive-born turtles (Terrapene carolina), half of which were raised in enriched enclosures, we employed a factorial design to explore how enrich
7

Taylor, Charles Edward, Richard G. Van De Water, David M. Lee, et al. Captain Electronics Technical Report. Office of Scientific and Technical Information (OSTI), 2018. http://dx.doi.org/10.2172/1487349.

Full text
8

Esson, Maggie, and Kelly Jacobs. The European Captive Population of Jaguar. American Museum of Natural History, 2009. http://dx.doi.org/10.5531/cbc.ncep.0162.

Full text
Abstract:
This case study examines some of the goals and challenges of conservation breeding in a European zoo, as a complement to the NCEP module The Management of Conservation Breeding Programs in Zoos and Aquariums. The set-up involves discussions between the director of the zoo and the fundraising officer concerning a breeding program for jaguars, Panthera onca. The subsequent scenarios and data, on topics such as studbooks and selective breeding, are based on real events that occurred as part of the jaguar conservation breeding program developed by the North of England Zoological Society Chester Zo
9

Berejikian, Barry. Research on Captive Broodstock Programs for Pacific Salmon; Assessment of Captive Broodstock Technologies, Annual Report 2002-2003. Office of Scientific and Technical Information (OSTI), 2004. http://dx.doi.org/10.2172/963079.

Full text
10

Meyers, John J. Robert E. Lee, Great Captain of History. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada222308.

Full text