Academic literature on the topic 'Visual grounding of text'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Visual grounding of text.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Visual grounding of text"

1

Wang, Chao, Wei Luo, Jia-Rui Zhu, Ying-Chun Xia, Jin He, and Li-Chuan Gu. "End-to-end Visual Grounding Based on Query Text Guidance and Multi-stage Reasoning." 電腦學刊 (Journal of Computers) 35, no. 1 (2024): 83–95. http://dx.doi.org/10.53106/199115992024023501006.

Full text
Abstract:
Visual grounding locates target objects or areas in the image based on natural language expression. Most current methods extract visual features and text embeddings independently, and then carry out complex fusion reasoning to locate target objects mentioned in the query text. However, such independently extracted visual features often contain many features that are irrelevant to the query text or misleading, thus affecting the subsequent multimodal fusion module, and deteriorating target localization. This study introduces a combined network model based on the transformer architecture…
APA, Harvard, Vancouver, ISO, and other styles
2

Regneri, Michaela, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. "Grounding Action Descriptions in Videos." Transactions of the Association for Computational Linguistics 1 (December 2013): 25–36. http://dx.doi.org/10.1162/tacl_a_00207.

Full text
Abstract:
Recent work has shown that the integration of visual information into text-based models can substantially improve model predictions, but so far only visual information extracted from static images has been used. In this paper, we consider the problem of grounding sentences describing actions in visual information extracted from videos. We present a general purpose corpus that aligns high quality videos with multiple natural language descriptions of the actions portrayed in the videos, together with an annotation of how similar the action descriptions are to each other. Experimental results demonstrate…
APA, Harvard, Vancouver, ISO, and other styles
3

Zhan, Yang, Yuan Yuan, and Zhitong Xiong. "Mono3DVG: 3D Visual Grounding in Monocular Images." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (2024): 6988–96. http://dx.doi.org/10.1609/aaai.v38i7.28525.

Full text
Abstract:
We introduce a novel task of 3D visual grounding in monocular RGB images using language descriptions with both appearance and geometry information. Specifically, we build a large-scale dataset, Mono3DRefer, which contains 3D object targets with their corresponding geometric text descriptions, generated by ChatGPT and refined manually. To foster this task, we propose Mono3DVG-TR, an end-to-end transformer-based network, which takes advantage of both the appearance and geometry information in text embeddings for multi-modal learning and 3D object localization. Depth predictor is designed to expl…
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, Qianjun, and Jin Yuan. "Semantic-Aligned Cross-Modal Visual Grounding Network with Transformers." Applied Sciences 13, no. 9 (2023): 5649. http://dx.doi.org/10.3390/app13095649.

Full text
Abstract:
Multi-modal deep learning methods have achieved great improvements in visual grounding; their objective is to localize text-specified objects in images. Most of the existing methods can localize and classify objects with significant appearance differences but suffer from the misclassification problem for extremely similar objects, due to inadequate exploration of multi-modal features. To address this problem, we propose a novel semantic-aligned cross-modal visual grounding network with transformers (SAC-VGNet). SAC-VGNet integrates visual and textual features with semantic alignment to highlight…
APA, Harvard, Vancouver, ISO, and other styles
5

Shen, Haozhan, Tiancheng Zhao, Mingwei Zhu, and Jianwei Yin. "GroundVLP: Harnessing Zero-Shot Visual Grounding from Vision-Language Pre-training and Open-Vocabulary Object Detection." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 5 (2024): 4766–75. http://dx.doi.org/10.1609/aaai.v38i5.28278.

Full text
Abstract:
Visual grounding, a crucial vision-language task involving the understanding of the visual context based on the query expression, necessitates the model to capture the interactions between objects, as well as various spatial and attribute information. However, the annotation data of visual grounding task is limited due to its time-consuming and labor-intensive annotation process, resulting in the trained models being constrained from generalizing its capability to a broader domain. To address this challenge, we propose GroundVLP, a simple yet effective zero-shot method that harnesses visual grounding…
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Shilong, Shijia Huang, Feng Li, et al. "DQ-DETR: Dual Query Detection Transformer for Phrase Extraction and Grounding." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (2023): 1728–36. http://dx.doi.org/10.1609/aaai.v37i2.25261.

Full text
Abstract:
In this paper, we study the problem of visual grounding by considering both phrase extraction and grounding (PEG). In contrast to the previous phrase-known-at-test setting, PEG requires a model to extract phrases from text and locate objects from image simultaneously, which is a more practical setting in real applications. As phrase extraction can be regarded as a 1D text segmentation problem, we formulate PEG as a dual detection problem and propose a novel DQ-DETR model, which introduces dual queries to probe different features from image and text for object prediction and phrase mask prediction…
APA, Harvard, Vancouver, ISO, and other styles
7

Cheng, Zesen, Kehan Li, Peng Jin, et al. "Parallel Vertex Diffusion for Unified Visual Grounding." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (2024): 1326–34. http://dx.doi.org/10.1609/aaai.v38i2.27896.

Full text
Abstract:
Unified visual grounding (UVG) capitalizes on a wealth of task-related knowledge across various grounding tasks via one-shot training, which curtails retraining costs and task-specific architecture design efforts. Vertex generation-based UVG methods achieve this versatility by unified modeling object box and contour prediction and provide a text-powered interface to vast related multi-modal tasks, e.g., visual question answering and captioning. However, these methods typically generate vertexes sequentially through autoregression, which is prone to be trapped in error accumulation and heavy co…
APA, Harvard, Vancouver, ISO, and other styles
8

Feng, Steven Y., Kevin Lu, Zhuofu Tao, et al. "Retrieve, Caption, Generate: Visual Grounding for Enhancing Commonsense in Text Generation Models." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (2022): 10618–26. http://dx.doi.org/10.1609/aaai.v36i10.21306.

Full text
Abstract:
We investigate the use of multimodal information contained in images as an effective method for enhancing the commonsense of Transformer models for text generation. We perform experiments using BART and T5 on concept-to-text generation, specifically the task of generative commonsense reasoning, or CommonGen. We call our approach VisCTG: Visually Grounded Concept-to-Text Generation. VisCTG involves captioning images representing appropriate everyday scenarios, and using these captions to enrich and steer the generation process. Comprehensive evaluation and analysis demonstrate that VisCTG noticeably…
APA, Harvard, Vancouver, ISO, and other styles
9

Jia, Meihuizi, Lei Shen, Xin Shen, et al. "MNER-QG: An End-to-End MRC Framework for Multimodal Named Entity Recognition with Query Grounding." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (2023): 8032–40. http://dx.doi.org/10.1609/aaai.v37i7.25971.

Full text
Abstract:
Multimodal named entity recognition (MNER) is a critical step in information extraction, which aims to detect entity spans and classify them to corresponding entity types given a sentence-image pair. Existing methods either (1) obtain named entities with coarse-grained visual clues from attention mechanisms, or (2) first detect fine-grained visual regions with toolkits and then recognize named entities. However, they suffer from improper alignment between entity types and visual regions or error propagation in the two-stage manner, which finally imports irrelevant visual information into texts…
APA, Harvard, Vancouver, ISO, and other styles
10

Shi, Zhan, Yilin Shen, Hongxia Jin, and Xiaodan Zhu. "Improving Zero-Shot Phrase Grounding via Reasoning on External Knowledge and Spatial Relations." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (2022): 2253–61. http://dx.doi.org/10.1609/aaai.v36i2.20123.

Full text
Abstract:
Phrase grounding is a multi-modal problem that localizes a particular noun phrase in an image referred to by a text query. In the challenging zero-shot phrase grounding setting, the existing state-of-the-art grounding models have limited capacity in handling the unseen phrases. Humans, however, can ground novel types of objects in images with little effort, significantly benefiting from reasoning with commonsense. In this paper, we design a novel phrase grounding architecture that builds multi-modal knowledge graphs using external knowledge and then performs graph reasoning and spatial relatio…
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Visual grounding of text"

1

Engilberge, Martin. "Deep Inside Visual-Semantic Embeddings." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS150.

Full text
Abstract:
Artificial intelligence (AI) is now omnipresent in our society. The recent development of learning methods based on deep neural networks, also known as "deep learning", has led to a marked improvement in visual and textual representation models. This thesis addresses the question of learning multimodal embeddings to jointly represent visual and semantic data. It is a central problem in the current context of AI and deep learning, which notably holds very strong potential for the interpretability of models…
APA, Harvard, Vancouver, ISO, and other styles
2

Emmott, Stephen J. "The visual processing of text." Thesis, University of Stirling, 1993. http://hdl.handle.net/1893/1837.

Full text
Abstract:
The results of an investigation into the nature of the visual information obtained from pages of text and used in the visual processing of text during reading are reported. An initial investigation into the visual processing of text by applying a computational model of early vision (MIRAGE: Watt & Morgan, 1985; Watt, 1988) to pages of text (Computational Analysis 1) is shown to extract a range of features from a text image in the representation it delivers, which are organised across a range of spatial scales similar to those spanning human vision. The features the model extracts are capable…
APA, Harvard, Vancouver, ISO, and other styles
3

Mi, Jinpeng [Verfasser], and Jianwei [Akademischer Betreuer] Zhang. "Natural Language Visual Grounding via Multimodal Learning / Jinpeng Mi ; Betreuer: Jianwei Zhang." Hamburg: Staats- und Universitätsbibliothek Hamburg, 2020. http://d-nb.info/1205070885/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Prince, Md Enamul Hoque. "Visual text analytics for online conversations." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/61772.

Full text
Abstract:
With the proliferation of Web-based social media, asynchronous conversations have become very common for supporting online communication and collaboration. Yet the increasing volume and complexity of conversational data often make it very difficult to get insights about the discussions. This dissertation posits that by integrating natural language processing and information visualization techniques in a synergistic way, we can better support the user's task of exploring and analyzing conversations. Unlike most previous systems, which do not consider the specific characteristics of online conversations…
APA, Harvard, Vancouver, ISO, and other styles
5

Chauhan, Aneesh. "Grounding human vocabulary in robot perception through interaction." Doctoral thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/12841.

Full text
Abstract:
Doctorate in Informatics Engineering. This thesis addresses the problem of word learning in computational agents. The motivation behind this work lies in the need to support language-based communication between service robots and their human users, as well as grounded reasoning using symbols relevant for the assigned tasks. The research focuses on the problem of grounding human vocabulary in a robotic agent's sensori-motor perception. Words have to be grounded in bodily experiences, which emphasizes the role of appropriate embodiments. On the other hand, language is a cultural product…
APA, Harvard, Vancouver, ISO, and other styles
6

Sabir, Ahmed. "Enhancing scene text recognition with visual context information." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/670286.

Full text
Abstract:
This thesis addresses the problem of improving text spotting systems, which aim to detect and recognize text in unrestricted images (e.g. a street sign, an advertisement, a bus destination, etc.). The goal is to improve the performance of off-the-shelf vision systems by exploiting the semantic information derived from the image itself. The rationale is that knowing the content of the image or the visual context can help to decide which words are the correct candidate words. For example, the fact that an image shows a coffee shop makes it more likely that a word on a signboard reads as Dunkin…
APA, Harvard, Vancouver, ISO, and other styles
7

Willems, Heather Marie. "Writing the written: text as a visual image." The Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=osu1382952227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kan, Jichao. "Visual-Text Translation with Deep Graph Neural Networks." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23759.

Full text
Abstract:
Visual-text translation is to produce textual descriptions in natural languages from images and videos. In this thesis, we investigate two topics in the field: image captioning and continuous sign language recognition, by exploring structural representations of visual content. Image captioning is to generate text descriptions for a given image. Deep learning based methods have achieved impressive performance on this topic. However, the relations among objects in an image have not been fully explored. Thus, a topic-guided local-global graph neural network is proposed to extract graph properties…
APA, Harvard, Vancouver, ISO, and other styles
9

Shmueli, Yael. "Integrating speech and visual text in multimodal interfaces." Thesis, University College London (University of London), 2005. http://discovery.ucl.ac.uk/1446688/.

Full text
Abstract:
This work systematically investigates when and how combining speech output and visual text may facilitate processing and comprehension of sentences. It is proposed that a redundant multimodal presentation of speech and text has the potential for improving sentence processing but also for severely disrupting it. The effectiveness of the presentation is assumed to depend on the linguistic complexity of the sentence, the memory demands incurred by the selected multimodal configuration and the characteristics of the user. The thesis employs both theoretical and empirical methods to examine this claim…
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Visual grounding of text"

1

Wyman, Jessica, ed. Pro forma: Language, text, visual art. YYZ Books, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Strassner, Erich. Text-Bild-Kommunikation - Bild-Text-Kommunikation. Niemeyer, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Harms, Wolfgang, and Deutsche Forschungsgemeinschaft, eds. Text und Bild, Bild und Text: DFG-Symposion 1988. J.B. Metzler, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Text und Bild: Grundfragen der Beschreibung von Text-Bild-Kommunikationen aus sprachwissenschaftlicher Sicht. Narr, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Leidner, Jochen L. Toponym resolution in text: Annotation, evaluation and applications of spatial grounding of place names. Dissertation.com, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ranai, K., ed. Visual editing on Unix. World Scientific, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

John, Samuel G., and Institute of Asian Studies (Madras, India), eds. The Great penance at Māmallapuram: Deciphering a visual text. Institute of Asian Studies, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

The Bible as visual culture: When text becomes image. Sheffield Phoenix Press, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Drake, Michael V., ed. The visual fields: Text and atlas of clinical perimetry. 6th ed. Mosby, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Finney, Gail, ed. Visual culture in twentieth-century Germany: Text as spectacle. Indiana University Press, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Visual grounding of text"

1

Min, Seonwoo, Nokyung Park, Siwon Kim, Seunghyun Park, and Jinkyu Kim. "Grounding Visual Representations with Texts for Domain Generalization." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19836-6_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hong, Tao, Ya Wang, Xingwu Sun, Xiaoqing Li, and Jinwen Ma. "CMMix: Cross-Modal Mix Augmentation Between Images and Texts for Visual Grounding." In Communications in Computer and Information Science. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8148-9_37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hendricks, Lisa Anne, Ronghang Hu, Trevor Darrell, and Zeynep Akata. "Grounding Visual Explanations." In Computer Vision – ECCV 2018. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01216-8_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Johari, Kritika, Christopher Tay Zi Tong, Vigneshwaran Subbaraju, Jung-Jae Kim, and U.-Xuan Tan. "Gaze Assisted Visual Grounding." In Social Robotics. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-90525-5_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Xiao, Junbin, Xindi Shang, Xun Yang, Sheng Tang, and Tat-Seng Chua. "Visual Relation Grounding in Videos." In Computer Vision – ECCV 2020. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58539-6_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Goy, Anna. "Grounding Meaning in Visual Knowledge." In Spatial Language. Springer Netherlands, 2002. http://dx.doi.org/10.1007/978-94-015-9928-3_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Silberer, Carina. "Grounding the Meaning of Words with Visual Attributes." In Visual Attributes. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50077-5_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mazaheri, Amir, and Mubarak Shah. "Visual Text Correction." In Computer Vision – ECCV 2018. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01261-8_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wainer, Howard. "Integrating Figures and Text." In Visual Revelations. Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-2282-8_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kittler, Josef, Mikhail Shevchenko, and David Windridge. "Visual Bootstrapping for Unsupervised Symbol Grounding." In Advanced Concepts for Intelligent Vision Systems. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11864349_94.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Visual grounding of text"

1

Zhang, Yimeng, Xin Chen, Jinghan Jia, Sijia Liu, and Ke Ding. "Text-Visual Prompting for Efficient 2D Temporal Video Grounding." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01421.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wu, Yanmin, Xinhua Cheng, Renrui Zhang, Zesen Cheng, and Jian Zhang. "EDA: Explicit Text-Decoupling and Dense Alignment for 3D Visual Grounding." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01843.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Endo, Ko, Masaki Aono, Eric Nichols, and Kotaro Funakoshi. "An Attention-based Regression Model for Grounding Textual Phrases in Images." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/558.

Full text
Abstract:
Grounding, or localizing, a textual phrase in an image is a challenging problem that is integral to visual language understanding. Previous approaches to this task typically make use of candidate region proposals, where end performance depends on that of the region proposal method and additional computational costs are incurred. In this paper, we treat grounding as a regression problem and propose a method to directly identify the region referred to by a textual phrase, eliminating the need for external candidate region prediction. Our approach uses deep neural networks to combine image and text…
APA, Harvard, Vancouver, ISO, and other styles
4

Conser, Erik, Kennedy Hahn, Chandler Watson, and Melanie Mitchell. "Revisiting Visual Grounding." In Proceedings of the Second Workshop on Shortcomings in Vision and Language. Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/w19-1804.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kim, Yongmin, Chenhui Chu, and Sadao Kurohashi. "Flexible Visual Grounding." In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop. Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.acl-srw.22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Du, Ye, Zehua Fu, Qingjie Liu, and Yunhong Wang. "Visual Grounding with Transformers." In 2022 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2022. http://dx.doi.org/10.1109/icme52920.2022.9859880.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jing, Chenchen, Yuwei Wu, Mingtao Pei, Yao Hu, Yunde Jia, and Qi Wu. "Visual-Semantic Graph Matching for Visual Grounding." In MM '20: The 28th ACM International Conference on Multimedia. ACM, 2020. http://dx.doi.org/10.1145/3394171.3413902.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Deng, Chaorui, Qi Wu, Qingyao Wu, Fuyuan Hu, Fan Lyu, and Mingkui Tan. "Visual Grounding via Accumulated Attention." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00808.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lee, Jason, Kyunghyun Cho, and Douwe Kiela. "Countering Language Drift via Visual Grounding." In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-1447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sun, Yuxi, Shanshan Feng, Xutao Li, Yunming Ye, Jian Kang, and Xu Huang. "Visual Grounding in Remote Sensing Images." In MM '22: The 30th ACM International Conference on Multimedia. ACM, 2022. http://dx.doi.org/10.1145/3503161.3548316.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Visual grounding of text"

1

Steed, Chad A., Christopher T. Symons, James K. Senter, and Frank A. DeNap. Guided Text Search Using Adaptive Visual Analytics. Office of Scientific and Technical Information (OSTI), 2012. http://dx.doi.org/10.2172/1055105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Beiker, Sven, ed. Unsettled Issues Regarding Visual Communication Between Automated Vehicles and Other Road Users. SAE International, 2021. http://dx.doi.org/10.4271/epr2021016.

Full text
Abstract:
As automated road vehicles begin their deployment into public traffic, they will need to interact with human-driven vehicles, pedestrians, bicyclists, etc. This requires some form of communication between those automated vehicles (AVs) and other road users. Some of these communication modes (e.g., auditory, motion) were discussed in “Unsettled Issues Regarding Communication of Automated Vehicles with Other Road Users.” Unsettled Issues Regarding Visual Communication Between Automated Vehicles and Other Road Users focuses on visual communication and its balance of reach, clarity, and intuitiveness…
APA, Harvard, Vancouver, ISO, and other styles
3

Дирда, І. А., and З. П. Бакум. Linguodidactic fundamentals of the development of foreign students’ polycultural competence during the Ukrainian language training. Association 1901 "SEPIKE", 2016. http://dx.doi.org/10.31812/123456789/2994.

Full text
Abstract:
The paper shows the analysis of scientists’ views on the definitions of the terms “approaches to studying”, “principles”, “methods”, and “techniques”. The development of foreign students’ polycultural competence is realized in particular approaches (competence, activity approach, personal oriented, polycultural approach); principles (communicative principle, principles of humanism, scientific nature, visual methods, systematicness and succession, consciousness, continuity and availability, individualization, text centrism, native language consideration, connection between theory and practice); usage…
APA, Harvard, Vancouver, ISO, and other styles
4

Бакум, З. П., and І. А. Дирда. Linguodidactic Fundamentals of the Development of Foreign Students' Polycultural Competence During the Ukrainian Language Training. Криворізький державний педагогічний університет, 2016. http://dx.doi.org/10.31812/0564/398.

Full text
Abstract:
The paper shows the analysis of scientists' views on the definitions of the terms "approaches to studying", "principles", "methods", and "techniques". The development of foreign students' polycultural competence is realized in particular approaches (competence, activity approach, personal oriented, polycultural approach); principles (communicative principle, principles of humanism, scientific nature, visual methods, systematicness and succession, consciousness, continuity and availability, individualization, text centrism, native language consideration, connection between theory and practice); usage…
APA, Harvard, Vancouver, ISO, and other styles
5

Figueredo, Luisa, Liliana Martinez, and Joao Paulo Almeida. Current role of Endoscopic Endonasal Approach for Craniopharyngiomas. A 10-year Systematic review and Meta-Analysis Comparison with the Open Transcranial Approach. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, 2023. http://dx.doi.org/10.37766/inplasy2023.1.0045.

Full text
Abstract:
Review question / Objective: To identify and review studies published in the last ten years, presenting the efficacy and outcomes of EEA and TCA for patients with craniopharyngiomas. Eligibility criteria: Studies meeting the following criteria were included: (a) retrospective and prospective studies and (b) observational studies (i.e., cross-sectional, case-control, case-series). The outcomes included visual outcomes (improvement, no changes, worsening), endocrinological outcomes (permanent diabetes insipidus and hypopituitarism), operatory site infection, meningitis, cerebrospinal fluid leak…
APA, Harvard, Vancouver, ISO, and other styles
6

Yatsymirska, Mariya. Мова війни і «контрнаступальна» лексика у стислих медійних текстах [The language of war and "counteroffensive" vocabulary in compressed media texts]. Ivan Franko National University of Lviv, 2023. http://dx.doi.org/10.30970/vjo.2023.52-53.11742.

Full text
Abstract:
The article examines the language of the russian-ukrainian war of the 21st century based on the materials of compressed media texts; the role of political narratives and psychological-emotional markers in the creation of new lexemes is clarified; the verbal expression of forecasts of ukrainian and foreign analysts regarding the course of hostilities on the territory of Ukraine is shown. Compressed media texts reflect the main meanings of the language of the russian-ukrainian war in relation to the surrounding world. First of all, the media vocabulary was supplemented with neologisms – aggressi…
APA, Harvard, Vancouver, ISO, and other styles
7

Baluk, Nadia, Natalia Basij, Larysa Buk, and Olha Vovchanska. VR/AR-TECHNOLOGIES – NEW CONTENT OF THE NEW MEDIA. Ivan Franko National University of Lviv, 2021. http://dx.doi.org/10.30970/vjo.2021.49.11074.

Full text
Abstract:
The article analyzes the peculiarities of the media content shaping and transformation in the convergent dimension of cross-media, taking into account the possibilities of augmented reality. With the help of the principles of objectivity, complexity and reliability in scientific research, a number of general scientific and special methods are used: method of analysis, synthesis, generalization, method of monitoring, observation, problem-thematic, typological and discursive methods. According to the form of information presentation, such types of media content as visual, audio, verbal and combined…
APA, Harvard, Vancouver, ISO, and other styles
8

Makhachashvili, Rusudan K., Svetlana I. Kovpik, Anna O. Bakhtina, and Ekaterina O. Shmeltser. Technology of presentation of literature on the Emoji Maker platform: pedagogical function of graphic mimesis. [n.p.], 2020. http://dx.doi.org/10.31812/123456789/3864.

Full text
Abstract:
The article deals with the technology of visualizing fictional text (poetry) with the help of emoji symbols in the Emoji Maker platform that not only activates students’ thinking, but also develops creative attention, makes it possible to reproduce the meaning of poetry in a succinct way. The application of this technology has yielded the significance of introducing a computer being emoji in the study and mastering of literature is absolutely logical: an emoji, phenomenologically, logically and eidologically installed in the digital continuum, is separated from the natural language provided by…
APA, Harvard, Vancouver, ISO, and other styles
9

Yatsymirska, Mariya. SOCIAL EXPRESSION IN MULTIMEDIA TEXTS. Ivan Franko National University of Lviv, 2021. http://dx.doi.org/10.30970/vjo.2021.49.11072.

Full text
Abstract:
The article investigates functional techniques of extralinguistic expression in multimedia texts; the effectiveness of figurative expressions as a reaction to modern events in Ukraine and their influence on the formation of public opinion is shown. Publications of journalists, broadcasts of media resonators, experts, public figures, politicians, readers are analyzed. The language of the media plays a key role in shaping the worldview of the young political elite in the first place. The essence of each statement is a focused thought that reacts to events in the world or in one’s own country. Th…
APA, Harvard, Vancouver, ISO, and other styles