Academic literature on the topic 'Multimodale Annotation'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multimodale Annotation.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Multimodale Annotation"
Kleida, Danae. "Entering a dance performance through multimodal annotation: annotating with scores." International Journal of Performance Arts and Digital Media 17, no. 1 (January 2, 2021): 19–30. http://dx.doi.org/10.1080/14794713.2021.1880182.
Debras, Camille. "How to prepare the video component of the Diachronic Corpus of Political Speeches for multimodal analysis." Research in Corpus Linguistics 9, no. 1 (2021): 132–51. http://dx.doi.org/10.32714/ricl.09.01.08.
Chou, Chien-Li, Hua-Tsung Chen, and Suh-Yin Lee. "Multimodal Video-to-Near-Scene Annotation." IEEE Transactions on Multimedia 19, no. 2 (February 2017): 354–66. http://dx.doi.org/10.1109/tmm.2016.2614426.
Saneiro, Mar, Olga C. Santos, Sergio Salmeron-Majadas, and Jesus G. Boticario. "Towards Emotion Detection in Educational Scenarios from Facial Expressions and Body Movements through Multimodal Approaches." Scientific World Journal 2014 (2014): 1–14. http://dx.doi.org/10.1155/2014/484873.
Ladilova, Anna. "Multimodal Metaphors of Interculturereality / Metaforas multimodais da interculturealidade." REVISTA DE ESTUDOS DA LINGUAGEM 28, no. 2 (May 5, 2020): 917. http://dx.doi.org/10.17851/2237-2083.28.2.917-955.
Hardison, Debra M. "Visualizing the acoustic and gestural beats of emphasis in multimodal discourse." Journal of Second Language Pronunciation 4, no. 2 (December 31, 2018): 232–59. http://dx.doi.org/10.1075/jslp.17006.har.
Diete, Alexander, Timo Sztyler, and Heiner Stuckenschmidt. "Exploring Semi-Supervised Methods for Labeling Support in Multimodal Datasets." Sensors 18, no. 8 (August 11, 2018): 2639. http://dx.doi.org/10.3390/s18082639.
Zhu, Songhao, Xiangxiang Li, and Shuhan Shen. "Multimodal deep network learning‐based image annotation." Electronics Letters 51, no. 12 (June 2015): 905–6. http://dx.doi.org/10.1049/el.2015.0258.
Brunner, Marie-Louise, and Stefan Diemer. "Multimodal meaning making: The annotation of nonverbal elements in multimodal corpus transcription." Research in Corpus Linguistics 9, no. 1 (2021): 63–88. http://dx.doi.org/10.32714/ricl.09.01.05.
Da Fonte, Renata Fonseca Lima, and Késia Vanessa Nascimento da Silva. "MULTIMODALIDADE NA LINGUAGEM DE CRIANÇAS AUTISTAS: O "NÃO" EM SUAS DIVERSAS MANIFESTAÇÕES." PROLÍNGUA 14, no. 2 (May 6, 2020): 250–62. http://dx.doi.org/10.22478/ufpb.1983-9979.2019v14n2.48829.
Dissertations / Theses on the topic "Multimodale Annotation"
Völkel, Thorsten. "Multimodale Annotation geographischer Daten zur personalisierten Fußgängernavigation." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1239804877252-19609.
Mobility-impaired pedestrians such as wheelchair users, blind and visually impaired people, or elderly people impose specific requirements upon the calculation of appropriate routes: the shortest path might not be the best. Within this thesis, the concept of multimodal annotation is developed, which allows users to extend the geographical base data. Further concepts are developed that allow the acquired data to be applied to the calculation of personalized routes based on the requirements of the individual user. The concept of multimodal annotation was successfully evaluated with 35 users and may serve as the basis for further research in the area.
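The personalization idea in this abstract lends itself to a brief illustration: a shortest-path search in which annotation tags attached to edges scale traversal costs according to a per-user profile, so that a wheelchair user's cheapest route avoids a shorter but stair-blocked segment. The sketch below is a minimal, hypothetical rendering of that idea, not the thesis's implementation; the tag names, profile weights, and toy graph are invented.

```python
import heapq

# Hypothetical per-user profile: penalty multipliers for annotated
# obstacle tags (illustrative names, not taken from the thesis).
WHEELCHAIR_PROFILE = {"stairs": float("inf"), "cobblestone": 3.0, "curb": 5.0}

def personalized_route(graph, start, goal, profile):
    """Dijkstra over an annotated graph; an edge's cost is its length
    scaled by the user's penalty for each annotation on that edge."""
    def cost(edge):
        c = edge["length"]
        for tag in edge.get("annotations", ()):
            c *= profile.get(tag, 1.0)  # unknown tags add no extra cost
        return c

    queue, seen = [(0.0, start, [start])], set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, edge in graph.get(node, {}).items():
            step = cost(edge)
            if step != float("inf"):  # impassable for this profile
                heapq.heappush(queue, (dist + step, nxt, path + [nxt]))
    return float("inf"), None

# Toy network: the direct edge A->B is shorter but annotated with stairs,
# so the wheelchair profile routes around it via C.
graph = {
    "A": {"B": {"length": 100, "annotations": ["stairs"]},
          "C": {"length": 80}},
    "C": {"B": {"length": 90}},
}
print(personalized_route(graph, "A", "B", WHEELCHAIR_PROFILE))  # detour via C
```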
Znaidia, Amel. "Handling Imperfections for Multimodal Image Annotation." Phd thesis, Ecole Centrale Paris, 2014. http://tel.archives-ouvertes.fr/tel-01012009.
Tayari Meftah, Imen. "Modélisation, détection et annotation des états émotionnels à l'aide d'un espace vectoriel multidimensionnel." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00838803.
Nguyen, Nhu Van. "Représentations visuelles de concepts textuels pour la recherche et l'annotation interactives d'images." Phd thesis, Université de La Rochelle, 2011. http://tel.archives-ouvertes.fr/tel-00730707.
Budnik, Mateusz. "Active and deep learning for multimedia." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM011.
The main topics of this thesis include the use of active learning-based methods and deep learning in the context of retrieval of multimodal documents. The contributions proposed during this thesis address both of these topics. An active learning framework was introduced, which allows for a more efficient annotation of broadcast TV videos thanks to the propagation of labels, the use of multimodal data, and selection strategies. Several different scenarios and experiments were considered in the context of person identification in videos, including the use of different modalities (such as faces, speech segments, and overlaid text) and different selection strategies. The whole system was additionally validated in a dry run involving real human annotators. A second major contribution was the investigation and use of deep learning (in particular convolutional neural networks) for video retrieval. A comprehensive study was made using different neural network architectures and training techniques such as fine-tuning or using separate classifiers like SVMs. A comparison was made between learned features (the output of neural networks) and engineered features. Despite the lower performance of the engineered features, fusion between these two types of features increases overall performance. Finally, the use of convolutional neural networks for speaker identification using spectrograms is explored. The results are compared to other state-of-the-art speaker identification systems. Different fusion approaches are also tested. The proposed approach obtains results comparable to some of the other tested approaches and offers an increase in performance when fused with the output of the best system.
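The active learning component described in this abstract follows the general pattern of selection strategies over an unlabeled pool. As a rough, generic illustration (not the thesis's multimodal label-propagation system), the sketch below shows least-confidence sampling with scikit-learn: the current model scores the pool, and the items it is least certain about are queued for human annotation. All data here is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def least_confident_batch(model, pool_X, batch_size=5):
    """Return indices of the unlabeled samples whose top predicted
    probability is lowest, i.e. where the model is least certain."""
    proba = model.predict_proba(pool_X)
    uncertainty = 1.0 - proba.max(axis=1)
    return np.argsort(-uncertainty)[:batch_size]

# One round of the loop: fit on the labeled seed set, query the pool;
# an annotator would then label the selected items, which move to the
# labeled set before the next round.
rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(20, 8)), rng.integers(0, 2, 20)
X_pool = rng.normal(size=(200, 8))

model = LogisticRegression().fit(X_lab, y_lab)
to_annotate = least_confident_batch(model, X_pool)
print("query for labels:", to_annotate)
```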
Nag, Chowdhury Sreyasi [Verfasser]. "Text-image synergy for multimodal retrieval and annotation / Sreyasi Nag Chowdhury." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2021. http://d-nb.info/1240674139/34.
Abrilian, Sarkis. "Représentation de comportements emotionnels multimodaux spontanés : perception, annotation et synthèse." Phd thesis, Université Paris Sud - Paris XI, 2007. http://tel.archives-ouvertes.fr/tel-00620827.
Oram, Louise Carolyn. "Scrolling in radiology image stacks : multimodal annotations and diversifying control mobility." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/45508.
Silva, Miguel Marinhas da. "Automated image tagging through tag propagation." Master's thesis, Faculdade de Ciências e Tecnologia, 2011. http://hdl.handle.net/10362/5963.
Today, more and more data is becoming available on the Web. In particular, we have recently witnessed an exponential increase of multimedia content within various content sharing websites. While this content is widely available, great challenges have arisen in effectively searching and browsing such a vast amount of content. A solution to this problem is to annotate information, a task that without computer aid requires a large-scale human effort. The goal of this thesis is to automate the task of annotating multimedia information with machine learning algorithms. We propose the development of a machine learning framework capable of automated image annotation in large-scale consumer photo collections. To this end, a study of state-of-the-art algorithms was conducted, which concluded with a baseline implementation of a k-nearest neighbor algorithm. This baseline was used to implement a more advanced algorithm capable of annotating images in situations with few training images and a large set of test images, i.e., a semi-supervised approach. Further studies were conducted on the feature spaces used to describe images, toward their successful integration in the developed framework. We first analyzed the semantic gap between visual feature spaces and the concepts present in an image, and how to avoid or mitigate this gap. Moreover, we examined how users perceive images by performing a statistical analysis of the tags they insert. A linguistic and statistical expansion of image tags was also implemented. The developed framework withstands the uneven data distributions that occur in consumer datasets and scales accordingly, requiring little previously annotated data. The principal mechanism that allows easier scaling is the propagation of information between annotated and unannotated data.
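The k-nearest neighbor baseline described here is the classic starting point for tag propagation: a new image inherits the tags of its visually closest annotated neighbors, weighted by similarity. The following minimal sketch illustrates that neighbor-vote idea; the feature values and tags are invented, and it is not the thesis's actual implementation.

```python
import numpy as np

def propagate_tags(query_feat, feats, tag_sets, k=3, top_m=2):
    """Vote tags from the k visually nearest annotated images onto a
    new image, weighting each neighbor's tags by inverse distance."""
    dists = np.linalg.norm(feats - query_feat, axis=1)
    neighbors = np.argsort(dists)[:k]
    scores = {}
    for i in neighbors:
        w = 1.0 / (dists[i] + 1e-9)  # closer neighbors vote more strongly
        for tag in tag_sets[i]:
            scores[tag] = scores.get(tag, 0.0) + w
    return sorted(scores, key=scores.get, reverse=True)[:top_m]

# Toy example: 4 annotated images described by 5-D "visual" features.
feats = np.array([[0.9, 0.1, 0.0, 0.2, 0.1],
                  [0.8, 0.2, 0.1, 0.1, 0.0],
                  [0.1, 0.9, 0.8, 0.0, 0.2],
                  [0.0, 0.8, 0.9, 0.1, 0.1]])
tag_sets = [{"beach", "sea"}, {"beach", "sun"}, {"city", "night"}, {"city"}]

# The query resembles the first two images, so "beach" wins the vote.
print(propagate_tags(np.array([0.85, 0.15, 0.05, 0.15, 0.05]), feats, tag_sets))
```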
Book chapters on the topic "Multimodale Annotation"
Cassidy, Steve, and Thomas Schmidt. "Tools for Multimodal Annotation." In Handbook of Linguistic Annotation, 209–27. Dordrecht: Springer Netherlands, 2017. http://dx.doi.org/10.1007/978-94-024-0881-2_7.
Steininger, Silke, Florian Schiel, and Susen Rabold. "Annotation of Multimodal Data." In SmartKom: Foundations of Multimodal Dialogue Systems, 571–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/3-540-36678-4_35.
Schmidt, Thomas, Susan Duncan, Oliver Ehmer, Jeffrey Hoyt, Michael Kipp, Dan Loehr, Magnus Magnusson, Travis Rose, and Han Sloetjes. "An Exchange Format for Multimodal Annotations." In Multimodal Corpora, 207–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04793-0_13.
Colletta, Jean-Marc, Ramona N. Kunene, Aurélie Venouil, Virginie Kaufmann, and Jean-Pascal Simon. "Multi-track Annotation of Child Language and Gestures." In Multimodal Corpora, 54–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04793-0_4.
Blache, Philippe, Roxane Bertrand, Gaëlle Ferré, Berthille Pallaud, Laurent Prévot, and Stéphane Rauzy. "The Corpus of Interactional Data: A Large Multimodal Annotated Resource." In Handbook of Linguistic Annotation, 1323–56. Dordrecht: Springer Netherlands, 2017. http://dx.doi.org/10.1007/978-94-024-0881-2_51.
Grassi, Marco, Christian Morbidoni, and Francesco Piazza. "Towards Semantic Multimodal Video Annotation." In Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. Theoretical and Practical Issues, 305–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-18184-9_25.
Cavicchio, Federica, and Massimo Poesio. "Multimodal Corpora Annotation: Validation Methods to Assess Coding Scheme Reliability." In Multimodal Corpora, 109–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04793-0_7.
Gibbon, Dafydd, Inge Mertins, and Roger K. Moore. "Representation and annotation of dialogue." In Handbook of Multimodal and Spoken Dialogue Systems, 1–101. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4615-4501-9_1.
Johnston, Michael. "Extensible Multimodal Annotation for Intelligent Interactive Systems." In Multimodal Interaction with W3C Standards, 37–64. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-42816-1_3.
Bunt, Harry, Volha Petukhova, David Traum, and Jan Alexandersson. "Dialogue Act Annotation with the ISO 24617-2 Standard." In Multimodal Interaction with W3C Standards, 109–35. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-42816-1_6.
Conference papers on the topic "Multimodale Annotation"
Xing, Yuying, Guoxian Yu, Jun Wang, Carlotta Domeniconi, and Xiangliang Zhang. "Weakly-Supervised Multi-view Multi-instance Multi-label Learning." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/432.
Thomas, Martin. "Querying multimodal annotation." In the Linguistic Annotation Workshop. Morristown, NJ, USA: Association for Computational Linguistics, 2007. http://dx.doi.org/10.3115/1642059.1642069.
Podlasov, A., K. O'Halloran, S. Tan, B. Smith, and A. Nagarajan. "Developing novel multimodal and linguistic annotation software." In the Third Linguistic Annotation Workshop. Morristown, NJ, USA: Association for Computational Linguistics, 2009. http://dx.doi.org/10.3115/1698381.1698404.
Blache, Philippe. "A general scheme for broad-coverage multimodal annotation." In the Third Linguistic Annotation Workshop. Morristown, NJ, USA: Association for Computational Linguistics, 2009. http://dx.doi.org/10.3115/1698381.1698414.
Barz, Michael, Mohammad Mehdi Moniri, Markus Weber, and Daniel Sonntag. "Multimodal multisensor activity annotation tool." In UbiComp '16: The 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2968219.2971459.
Wieschebrink, Stephan. "Collaborative editing of multimodal annotation data." In the 11th ACM symposium. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2034691.2034706.
Froumentin, Max. "Extensible multimodal annotation markup language (EMMA)." In Proceedings of the Workshop on NLP and XML (NLPXML-2004): RDF/RDFS and OWL in Language Technology. Morristown, NJ, USA: Association for Computational Linguistics, 2004. http://dx.doi.org/10.3115/1621066.1621071.
Zang, Xiaoxue, Ying Xu, and Jindong Chen. "Multimodal Icon Annotation For Mobile Applications." In MobileHCI '21: 23rd International Conference on Mobile Human-Computer Interaction. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3447526.3472064.
Cabral, Diogo, Urândia Carvalho, João Silva, João Valente, Carla Fernandes, and Nuno Correia. "Multimodal video annotation for contemporary dance creation." In the 2011 annual conference extended abstracts. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/1979742.1979930.
Seta, L., G. Chiazzese, G. Merlo, S. Ottaviano, G. Ciulla, M. Allegra, V. Samperi, and G. Todaro. "Multimodal Annotation to Support Web Learning Activities." In 2008 19th International Conference on Database and Expert Systems Applications (DEXA). IEEE, 2008. http://dx.doi.org/10.1109/dexa.2008.68.