Academic literature on the topic 'Leakage in image captioning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Leakage in image captioning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Leakage in image captioning"

1

Baratov, Farrukh, Göksenin Yüksel, Darie Petcu, and Jan Bakker. "[Re] Reproducibility Study of "Quantifying Societal Bias Amplification in Image Captioning"." ReScience C 9, no. 2 (2023): #26. https://doi.org/10.5281/zenodo.8173715.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bahl, Vasudha, Nidhi Sengar, Gaurav Joshi, and Amita Goel. "Image Captioning System." International Journal for Modern Trends in Science and Technology 6, no. 12 (2020): 40–44. http://dx.doi.org/10.46501/ijmtst061208.

Full text
Abstract:
Deep Learning is a relatively new field, and it has grabbed a lot of attention because it recognizes objects with a higher level of accuracy than ever before. NLP is another field that has created a huge impact on our lives. NLP has come a long way, from producing readable summaries of texts to the analysis of mental illness, which shows its impact. The image captioning task combines both NLP and Deep Learning. Describing images in a meaningful way can be done using image captioning. Describing an image does not just mean recognizing objects; to describe an image properly we first need to id
APA, Harvard, Vancouver, ISO, and other styles
3

Upadhyay, Mukund, and Shallu Bashambu. "Image Captioning Bot." International Journal for Modern Trends in Science and Technology 6, no. 12 (2020): 348–54. http://dx.doi.org/10.46501/ijmtst061265.

Full text
Abstract:
Image captioning means automatically generating a caption for an image. With the development of deep learning, the combination of computer vision and natural language processing has attracted great attention in the last few years. Image captioning is a representative of this field, which makes the computer learn to use one or more sentences to understand the visual content of an image. The meaningful description generation process of high-level image semantics requires not only the recognition of the object and the scene, but also the ability to analyze the state, the attributes and the relationship amon
APA, Harvard, Vancouver, ISO, and other styles
4

Hou, Xiaohan. "To describe the content of image: The view from image captioning." Applied and Computational Engineering 5, no. 1 (2023): 1–10. http://dx.doi.org/10.54254/2755-2721/5/20230511.

Full text
Abstract:
The aim of developing the technology of "image captioning," which integrates natural language and computer processing, is to have the machine itself automatically give descriptions for photographs. The work can be separated into two parts, which depend on correctly comprehending both language and images from a semantic and syntactic perspective. In light of the growing body of information on the subject, it is getting harder to stay abreast of the most recent advancements in the area of image captioning. Nevertheless, the review papers that are now available don't go into enough detail about th
APA, Harvard, Vancouver, ISO, and other styles
5

Jaiswal, Sushma, Harikumar Pallthadka, Rajesh P. Chinchewadi, and Tarun Jaiswal. "Optimized Image Captioning: Hybrid Transformers Vision Transformers and Convolutional Neural Networks: Enhanced with Beam Search." International Journal of Intelligent Systems and Applications 16, no. 2 (2024): 53–61. http://dx.doi.org/10.5815/ijisa.2024.02.05.

Full text
Abstract:
Deep learning has improved image captioning. Transformer, a neural network architecture built for natural language processing, excels at image captioning and other computer vision applications. This paper reviews Transformer-based image captioning methods in detail. Convolutional neural networks (CNNs) extracted image features and RNNs or LSTM networks generated captions in traditional image captioning. This method often has information bottlenecks and trouble capturing long-range dependencies. Transformer architecture revolutionized natural language processing with its attention strategy and
APA, Harvard, Vancouver, ISO, and other styles
6

Al-Malla, Muhammad Abdelhadie, Assef Jafar, and Nada Ghneim. "Pre-trained CNNs as Feature-Extraction Modules for Image Captioning." ELCVIA Electronic Letters on Computer Vision and Image Analysis 21, no. 1 (2022): 1–16. http://dx.doi.org/10.5565/rev/elcvia.1436.

Full text
Abstract:
In this work, we present a thorough experimental study about feature extraction using Convolutional Neural Networks (CNNs) for the task of image captioning in the context of deep learning. We perform a set of 72 experiments on 12 image classification CNNs pre-trained on the ImageNet [29] dataset. The features are extracted from the last layer after removing the fully connected layer and fed into the captioning model. We use a unified captioning model with a fixed vocabulary size across all the experiments to study the effect of changing the CNN feature extractor on image captioning quality. The sco
APA, Harvard, Vancouver, ISO, and other styles
7

Muzaffar, Rimsha, Syed Yasser Arafat, Junaid Rashid, Jungeun Kim, and Usman Naseem. "UICD: A new dataset and approach for Urdu image captioning." PLOS ONE 20, no. 6 (2025): e0320701. https://doi.org/10.1371/journal.pone.0320701.

Full text
Abstract:
Advancements in deep learning have revolutionized numerous real-world applications, including image recognition, visual question answering, and image captioning. Among these, image captioning has emerged as a critical area of research, with substantial progress achieved in Arabic, Chinese, Uyghur, Hindi, and predominantly English. However, despite Urdu being a morphologically rich and widely spoken language, research in Urdu image captioning remains underexplored due to a lack of resources. This study creates a new Urdu Image Captioning Dataset (UCID) called UC-23-RY to fill in the gaps in Urd
APA, Harvard, Vancouver, ISO, and other styles
8

Fudholi, Dhomas Hatta, Umar Abdul Aziz Al-Faruq, Royan Abida N. Nayoan, and Annisa Zahra. "A study on attention-based deep learning architecture model for image captioning." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 1 (2024): 23. http://dx.doi.org/10.11591/ijai.v13.i1.pp23-34.

Full text
Abstract:
Image captioning has been widely studied due to its ability in visual scene understanding. Automatic visual scene understanding is useful for remote monitoring systems and visually impaired people. Attention-based models, including the transformer, are the current state-of-the-art architectures used in developing image captioning models. This study examines the works in the development of image captioning models, especially models that are developed based on the attention mechanism. The architecture, the dataset, and the evaluation metrics analysis are done on the collected wor
APA, Harvard, Vancouver, ISO, and other styles
9

Nursikuwagus, Agus, Rinaldi Munir, and Masayu Layla Khodra. "Image Captioning menurut Scientific Revolution Kuhn dan Popper." Jurnal Manajemen Informatika (JAMIKA) 10, no. 2 (2020): 110–21. http://dx.doi.org/10.34010/jamika.v10i2.2630.

Full text
Abstract:
The development of giving a caption to an image is a new area of development in the field of artificial intelligence. Image captioning is a combination of several fields, such as computer vision, natural language, and machine learning. The aspect of concern in the field of image captioning is the accuracy of the neural network architecture that is modeled to obtain results as close as possible to the ground truth given by a person. Several studies that have been carried out still obtain sentences that are far from that ground truth. The problem that
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Leakage in image captioning"

1

Hoxha, Genc. "IMAGE CAPTIONING FOR REMOTE SENSING IMAGE ANALYSIS." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/351752.

Full text
Abstract:
Image Captioning (IC) aims to generate a coherent and comprehensive textual description that summarizes the complex content of an image. It is a combination of computer vision and natural language processing techniques to encode the visual features of an image and translate them into a sentence. In the context of remote sensing (RS) analysis, IC has been emerging as a new research area of high interest since it not only recognizes the objects within an image but also describes their attributes and relationships. In this thesis, we propose several IC methods for RS image analysis. We focus on t
APA, Harvard, Vancouver, ISO, and other styles
2

Hossain, Md Zakir. "Deep Learning Techniques for Image Captioning." PhD thesis, Murdoch University, 2020. https://researchrepository.murdoch.edu.au/id/eprint/60782/.

Full text
Abstract:
Generating a description of an image is called image captioning. Image captioning is a challenging task because it involves the understanding of the main objects, their attributes, and their relationships in an image. It also involves the generation of syntactically and semantically meaningful descriptions of the images in natural language. A typical image captioning pipeline comprises an image encoder and a language decoder. Convolutional Neural Networks (CNNs) are widely used as the encoder while Long short-term memory (LSTM) networks are used as the decoder. A variety of LSTMs and CNNs incl
APA, Harvard, Vancouver, ISO, and other styles
3

Elguendouze, Sofiane. "Explainable Artificial Intelligence Approaches for Image Captioning." PhD diss., Université d'Orléans, 2024. http://www.theses.fr/2024ORLE1003.

Full text
Abstract:
The rapid evolution of image captioning models, driven by the integration of deep learning techniques combining the image and text modalities, has led to increasingly complex systems. However, these models often operate as black boxes, unable to provide transparent explanations of their decisions. This thesis addresses the explainability of image captioning systems based on Encoder-Attention-Decoder architectures, through four aspects. First, it explores the concept of latent space, moving away from the appro
APA, Harvard, Vancouver, ISO, and other styles
4

Tu, Guoyun. "Image Captioning On General Data And Fashion Data : An Attribute-Image-Combined Attention-Based Network for Image Captioning on Multi-Object Images and Single-Object Images." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-282925.

Full text
Abstract:
Image captioning is a crucial field across computer vision and natural language processing. It could be widely applied to high-volume web images, such as conveying image content to visually impaired users. Many methods are adopted in this area, such as attention-based methods and semantic-concept based models. These achieve excellent performance on general image datasets such as the MS COCO dataset. However, it is still left unexplored on single-object images. In this paper, we propose a new attribute-information-combined attention-based network (AIC-AB Net). At each time step, attribute informati
APA, Harvard, Vancouver, ISO, and other styles
5

Karayil, Tushar. "Affective Image Captioning: Extraction and Semantic Arrangement of Image Information with Deep Neural Networks." PhD diss., Technische Universität Kaiserslautern, 2020. http://d-nb.info/1214640958/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gennari, Riccardo. "End-to-end Deep Metric Learning con Vision-Language Model per il Fashion Image Captioning." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25772/.

Full text
Abstract:
Image captioning is a machine learning task that consists of generating a caption describing the characteristics of an image given as input. This can be applied, for example, to describe in detail the products for sale on an e-commerce site, improving the website's accessibility and allowing customers with visual impairments to make a more informed purchase. Generating accurate descriptions for online fashion items is important not only to improve customers' shopping experiences, but also to increase online sales. In addition
APA, Harvard, Vancouver, ISO, and other styles
7

Kan, Jichao. "Visual-Text Translation with Deep Graph Neural Networks." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23759.

Full text
Abstract:
Visual-text translation is to produce textual descriptions in natural languages from images and videos. In this thesis, we investigate two topics in the field: image captioning and continuous sign language recognition, by exploring structural representations of visual content. Image captioning is to generate text descriptions for a given image. Deep learning based methods have achieved impressive performance on this topic. However, the relations among objects in an image have not been fully explored. Thus, a topic-guided local-global graph neural network is proposed to extract graph propertie
APA, Harvard, Vancouver, ISO, and other styles
8

Ma, Yufeng. "Going Deeper with Images and Natural Language." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/99993.

Full text
Abstract:
One aim in the area of artificial intelligence (AI) is to develop a smart agent with high intelligence that is able to perceive and understand the complex visual environment around us. More ambitiously, it should be able to interact with us about its surroundings in natural languages. Thanks to the progress made in deep learning, we've seen huge breakthroughs towards this goal over the last few years. The developments have been extremely rapid in visual recognition, in which machines now can categorize images into multiple classes, and detect various objects within an image, with an ability t
APA, Harvard, Vancouver, ISO, and other styles
9

MIORINI, Rinaldo Luigi. "Investigations on the flow inside pumps by means of 2D particle image velocimetry." Doctoral thesis, Università degli studi di Bergamo, 2010. http://hdl.handle.net/10446/615.

Full text
Abstract:
This work is concerned with the sampling, evaluation and critical interpretation of fluid dynamics phenomena inside two radically different pumps. Large scale flow structures are investigated in the vaned diffuser of a centrifugal pump and in the rotor passage of a water jet axial pump by means of two-dimensional particle image velocimetry (2D PIV). In the first part of the work, a centrifugal pump is run at various capacities to derive information about the flow around the diffuser vanes. Preliminarily, time resolved pressure measurements have indicated the presence of very large scale non-pe
APA, Harvard, Vancouver, ISO, and other styles
10

Nair, Prashant. "Designing low power SRAM system using energy compression." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47663.

Full text
Abstract:
The power consumption in commercial processors and application specific integrated circuits increases with decreasing technology nodes. Power saving techniques have become a first class design point for current and future VLSI systems. These systems employ large on-chip SRAM memories. Reducing memory leakage power while maintaining data integrity is a key criterion for modern day systems. Unfortunately, state of the art techniques like power-gating can only be applied to logic as these would destroy the contents of the memory if applied to a SRAM system. Fortunately, previous works have noted
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Leakage in image captioning"

1

Smith, L. M., and United States National Aeronautics and Space Administration, eds. Final report for grant titled: SSME propellant path leak detection real-time. National Aeronautics and Space Administration, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Photographic analysis technique for assessing external tank foam loss events. National Aeronautics and Space Administration, Marshall Space Flight Center, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Leakage in image captioning"

1

Sarang, Poornachandra. "Image Captioning." In Artificial Neural Networks with TensorFlow 2. Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6150-7_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bagheri, Roxana. "Satellite Image Captioning." In Advances in Intelligent Systems and Computing. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-89063-5_43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

He, Sen, Wentong Liao, Hamed R. Tavakoli, Michael Yang, Bodo Rosenhahn, and Nicolas Pugeault. "Image Captioning Through Image Transformer." In Computer Vision – ACCV 2020. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69538-5_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Deng, Chaorui, Ning Ding, Mingkui Tan, and Qi Wu. "Length-Controllable Image Captioning." In Computer Vision – ECCV 2020. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58601-0_42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yang, Huan, Dandan Song, and Lejian Liao. "Image Captioning with Relational Knowledge." In Lecture Notes in Computer Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-97310-4_43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Ziwei, Zi Huang, and Yadan Luo. "PAIC: Parallelised Attentive Image Captioning." In Lecture Notes in Computer Science. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-39469-1_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Meng, Zihang, David Yang, Xuefei Cao, Ashish Shah, and Ser-Nam Lim. "Object-Centric Unsupervised Image Captioning." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20059-5_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Adithya, Paluvayi Veera, Mourya Viswanadh Kalidindi, Nallani Jyothi Swaroop, and H. N. Vishwas. "Image Captioning Using Deep Learning." In Communications in Computer and Information Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-64070-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Joshi, Akshay, Kartik Kalal, Dhiraj Bhandare, Vaishnavi Patil, Uday Kulkarni, and S. M. Meena. "Image Captioning Using CNN-LSTM." In Lecture Notes in Electrical Engineering. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-7633-1_32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sabri, My Abdelouahed, Hamza El Madhoune, Chaimae Zouitni, and Abdellah Aarab. "Image Captioning: An Understanding Study." In Digital Technologies and Applications. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-29860-8_49.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Leakage in image captioning"

1

Saraswat, Mala, Challa Vivekananda Reddy, and Garandal Yashwanth Singh. "Image Captioning Using NLP." In 2024 First International Conference on Technological Innovations and Advance Computing (TIACOMP). IEEE, 2024. http://dx.doi.org/10.1109/tiacomp64125.2024.00096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Swaraj, Aman, Ravi Raj, Aniket Shaw, Richa Dubey, Prianka Dey, and Sagarika Chowdhury. "Automated Image Captioning Systems." In 2025 International Conference on Inventive Computation Technologies (ICICT). IEEE, 2025. https://doi.org/10.1109/icict64420.2025.11004999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Adhikari, Aashish, and Sushil Ghimire. "Nepali Image Captioning." In 2019 Artificial Intelligence for Transforming Business and Society (AITB). IEEE, 2019. http://dx.doi.org/10.1109/aitb48515.2019.8947436.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Aneja, Jyoti, Aditya Deshpande, and Alexander G. Schwing. "Convolutional Image Captioning." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00583.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Feng, Yang, Lin Ma, Wei Liu, and Jiebo Luo. "Unsupervised Image Captioning." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.00425.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ge, Xuri, Fuhai Chen, Chen Shen, and Rongrong Ji. "Colloquial Image Captioning." In 2019 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2019. http://dx.doi.org/10.1109/icme.2019.00069.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Puscasiu, Adela, Alexandra Fanca, Dan-Ioan Gota, and Honoriu Valean. "Automated image captioning." In 2020 IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR). IEEE, 2020. http://dx.doi.org/10.1109/aqtr49680.2020.9129930.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Panigrahi, Lipismita, Raghab Ranjan Panigrahi, and Saroj Kumar Chandra. "Hybrid Image Captioning Model." In 2022 OPJU International Technology Conference on Emerging Technologies for Sustainable Development (OTCON). IEEE, 2023. http://dx.doi.org/10.1109/otcon56053.2023.10113957.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yan, Xu, Zhengcong Fei, Zekang Li, Shuhui Wang, Qingming Huang, and Qi Tian. "Semi-Autoregressive Image Captioning." In MM '21: ACM Multimedia Conference. ACM, 2021. http://dx.doi.org/10.1145/3474085.3475179.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ramos, Rita, Desmond Elliott, and Bruno Martins. "Retrieval-augmented Image Captioning." In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.eacl-main.266.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Leakage in image captioning"

1

Decroux, Agnes, Kassem Kalo, and Keith Swinden. PR-393-205100-R01 IRIS X-Ray CT Qualification for Flexible Pipe Inspection (Phase 1). Pipeline Research Council International, Inc. (PRCI), 2021. http://dx.doi.org/10.55274/r0012068.

Full text
Abstract:
There are several techniques available to inspect single wall carbon steel pipelines including; Magnetic flux leakage (MFL), ultrasonic testing (UT), Electro-Magnetic Acoustic Transducer (EMAT), Phased Array, guide wave testing (GWT), etc. However, for more complex structures such as flexible pipelines the technology available to inspect them is far more limited. PRCI commissioned a program (SPIM 2-1) under the Subsea TC (2017-2020) to evaluate all known and suspected technologies that could be used to provide a detailed subsea inspection of a flexible riser. PRCI produced four samples of flex
APA, Harvard, Vancouver, ISO, and other styles