Academic literature on the topic 'Multimodal'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multimodal.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
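The style switch described above amounts to re-rendering one metadata record through different formatting templates. A minimal sketch of that idea, with illustrative field names and heavily simplified templates (real citation styles handle many more cases):

```python
# Render one bibliographic record in several citation styles.
# The record fields and the style templates are illustrative only.

record = {
    "authors": "Wong, May",
    "year": 2019,
    "title": "Multimodal Communication",
    "place": "Cham",
    "publisher": "Springer International Publishing",
}

STYLES = {
    "APA": "{authors} ({year}). {title}. {place}: {publisher}.",
    "MLA": "{authors}. {title}. {place}, {publisher}, {year}.",
    "Chicago": "{authors}. {title}. {place}: {publisher}, {year}.",
}

def format_reference(rec, style):
    """Fill the chosen style's template with the record's metadata."""
    return STYLES[style].format(**rec)

for style in STYLES:
    print(format_reference(record, style))
```

The same record is formatted three ways; adding a style is just adding a template.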
Journal articles on the topic "Multimodal"
Jakobsen, Ingrid Karoline. "Inspired by image: A multimodal analysis of 10th grade English school-leaving written examinations set in Norway (2014-2018)." Acta Didactica Norge 13, no. 1 (March 15, 2019): 5. http://dx.doi.org/10.5617/adno.6248.
Xiong, Ao, Yuanzheng Tong, Shaoyong Guo, Yanru Wang, Sujie Shao, and Lin Mei. "An Optimal Allocation Method of Power Multimodal Network Resources Based on NSGA-II." Wireless Communications and Mobile Computing 2021 (October 26, 2021): 1–10. http://dx.doi.org/10.1155/2021/9632277.
Da Silva Martins, Liviane. "Leitura multimodal e o processo de construção de sentido em charges." Revista Leitura, no. 65 (March 26, 2020): 10–23. http://dx.doi.org/10.28998/2317-9945.202065.10-23.
Holsting, Alexandra, Cindie Aaen Maagaard, and Nina Nørgaard. "Kan man lære at skifte gear? En multimodal tilgang til plot i den litterære tekst." NyS, Nydanske Sprogstudier, no. 56 (May 27, 2019): 52–76. http://dx.doi.org/10.7146/nys.v1i56.112649.
Rita, Rita. "Penyusunan Peooman Pemberian Izin Badan Usaha Angkutan Multimoda (BUAM)." Warta Penelitian Perhubungan 24, no. 6 (May 14, 2019): 567. http://dx.doi.org/10.25104/warlit.v24i6.1041.
Pinheiro, Michelle Soares. "Nas teias da formação docente continuada: letramento multimodal crítico nos livros didáticos." REVISTA INTERSABERES 18 (June 1, 2023): e023do3001. http://dx.doi.org/10.22169/revint.v18.e023do3001.
Torres-Orihuela, Guido, Miluska Anggie Barriga Huamán, and Alexander Ramiro Arenas Cano. "Percepciones de alfabetización multimodal en estudiantes universitarios del área de ingeniería." Telos: Revista de Estudios Interdisciplinarios en Ciencias Sociales 25, no. 2 (May 12, 2023): 266–82. http://dx.doi.org/10.36390/telos252.04.
Schmitt, Briane, Gabriel Da Silva Ribas, and Ernani Cesar Freitas. "O letramento multimodal na escola: estabelecendo uma relação entre o eu e a sociedade." Diálogo das Letras 7, no. 1 (June 5, 2018): 96–112. http://dx.doi.org/10.22297/dl.v7i1.2972.
Santamaría, Dulce M. "La Saga Contemporánea: Apreciación desde la lectura intercódigo y multimodal." GACETA DE PEDAGOGÍA, no. 41 (December 6, 2021): 197–211. http://dx.doi.org/10.56219/rgp.vi41.944.
Borgfeldt, Eva, and Anna Lyngfelt. "”Jag ritade först sen skrev jag” – elevperspektiv på multimodal textproduktion i årskurs 3." Forskning om undervisning och lärande 5, no. 1 (March 6, 2017): 64–89. http://dx.doi.org/10.61998/forskul.v5i1.27469.
Dissertations / Theses on the topic "Multimodal"
Yovera, Solano Luis Ángel, and Cárdenas Julio César Luna. "Multimodal interaction." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2017. http://hdl.handle.net/10757/621880.
This research aims to identify the advances, research, and proposals surrounding this technology, ranging from developing trends to bolder but innovative proposed solutions. To understand the mechanisms that enable this interaction, it is also necessary to know the best practices and standards stipulated by the W3C (World Wide Web Consortium) and the ACM (Association for Computing Machinery). Once these advances and proposals are identified, the mechanisms of NLP (Natural Language Processing), facial recognition, and touch, along with their respective requirements, become known, allowing a more natural interaction between the user and the system. Having identified the existing developments in this technology and the mechanisms and requirements that enable their use, a proposed, implementable system based on multimodal interaction is defined.
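The combination of speech, facial, and touch input described in this abstract can be illustrated with a simple late-fusion step: each recognizer emits a hypothesis with a confidence score, and an integrator selects among them. This is only a sketch of the general pattern, with illustrative modality names and scores; the W3C multimodal interaction framework itself is considerably richer.

```python
# Late fusion of per-modality hypotheses: each recognizer proposes an
# interpretation with a confidence; the integrator keeps the best one.
# Modality names, intents, and scores below are illustrative.

def fuse(hypotheses):
    """Return the highest-confidence interpretation across modalities."""
    return max(hypotheses, key=lambda h: h["confidence"])

hypotheses = [
    {"modality": "speech", "intent": "open_menu", "confidence": 0.62},
    {"modality": "touch", "intent": "open_menu", "confidence": 0.95},
    {"modality": "face", "intent": "smile", "confidence": 0.40},
]

best = fuse(hypotheses)
print(best["modality"], best["intent"])  # touch open_menu
```

A weighted vote over intents would be the natural next step when modalities disagree.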
Hoffmann, Grasiele Fernandes. "Retextualização multimodal." Repositório Institucional da UFSC, 2015. https://repositorio.ufsc.br/xmlui/handle/123456789/158434.
Abstract: Instructional designers (ID) act on courses mediated by Information and Communication Technologies (ICTs), performing actions such as the retextualization (adaptation and adequacy) of educational and instructional content for other textual genres and semiotic modalities. It was within this context of relations between designer and translator that we became interested in verifying whether the design movement of transforming the base text into another, new text happens through a process of multimodal translation/retextualization. To perform this investigation, we based our study on the theoretical principles of Functionalist Translation (REISS, [1984]1996; VERMEER, [1978]1986; [1984]1996; and NORD, [1988]1991; [1997]2014; 2006), on Retextualization perspectives (TRAVAGLIA, 2003; MARCUSCHI, 2001; MATÊNCIO, 2002; 2003; DELL'ISOLA, 2007), and on the textual multimodality approach (HODGE and KRESS, 1988; KRESS and van LEEUWEN, 2001; 2006; JEWITT, 2009; KRESS, 2010). The study analyzes a printed textbook (base text) and the eBook (target text) produced for the Distance Education course Problem Prevention in Drug Use - A Course for Counselors and Community Leadership (6th edition), promoted by the Brazilian National Secretariat for Drug Policy (Secretaria Nacional de Políticas sobre Drogas - SENAD), linked to the Ministry of Justice, and developed by Universidade Federal de Santa Catarina (UFSC) through its educational technology center (Núcleo Multiprojetos de Tecnologia Educacional - NUTE). The eBook presents the most important concepts from the printed textbook, as well as some information available in the VLE. In order to collate and analyze the translational and retextualization moves performed by the designer, the textual analysis model for translation proposed by Nord ([1988]1991) was used.
Results demonstrate that: 1) the retextualization performed by the designer includes, during the translation process, other modes and semiotic resources that compose the multimodal text; 2) the intratextual factors listed by Nord focused basically on linguistic elements and did not treat all the multiple semiotic modalities that compose the multimodal text on an equal footing - hence the need to add other semiotic modalities to Nord's proposed model; and 3) the instructional designer's work is equivalent to the work of a translator, as there is an intent to produce a multimodal text from an informative source. In this context, it is possible to note the need to: 1) broaden the concept of retextualization, extending the process to the study and analysis of the other semiotic modalities that compose multimodal texts; 2) add other factors of analysis to Nord's framework, broadening her model into a textual analysis applied to multimodal retextualization; and 3) recognize that the designer does indeed perform translation in transforming a text into another, new multimodal text. In this sense, the main objective of this study was achieved, proving within the Functionalist theory of Translation that the designer transforms the source text into another, new text through a process of multimodal translation/retextualization, and that designers thus become translators/retextualizers.
Contreras, Lizarraga Adrián Arturo. "Multimodal microwave filters." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/134931.
Guilbeault, Douglas Richard. "Multimodal rhetorical figures." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/53978.
Full textArts, Faculty of
English, Department of
Graduate
Bazo, Rodríquez Alfredo, and Rosado Vitaliano Delgado. "Eje multimodal Amazonas." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2013. http://hdl.handle.net/10757/273520.
Full textKim, Hana 1980. "Multimodal animation control." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29661.
Includes bibliographical references (leaf 44).
In this thesis, we present a multimodal animation control system. Our approach is based on a human-centric computing model proposed by Project Oxygen at the MIT Laboratory for Computer Science. Our system allows the user to create and control animation in real time using a speech interface developed with SpeechBuilder. The user can also fall back to traditional input modes should the speech interface fail. We assume that the user has no prior knowledge of or experience with animation, yet the system enables them to create interesting and meaningful animation naturally and fluently. We argue that our system can be used in a number of applications ranging from PowerPoint presentations to simulations to children's storytelling tools.
M.Eng. thesis.
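The fallback behavior this abstract describes (use speech when recognition succeeds, otherwise traditional input) can be sketched as a simple dispatcher. All function and command names here are hypothetical, not the thesis's actual API:

```python
# Prefer the speech interface; fall back to a traditional input mode
# (e.g., a menu selection) when speech recognition fails.
# Every name below is illustrative.

def parse_speech(utterance):
    """Toy speech recognizer: map an utterance to a command, or None."""
    commands = {"jump": "character.jump", "spin": "character.spin"}
    return commands.get(utterance)

def get_command(utterance, fallback_command):
    """Use the speech result if recognition succeeded, else fall back."""
    return parse_speech(utterance) or fallback_command

print(get_command("jump", "menu:jump"))    # character.jump
print(get_command("mumble", "menu:jump"))  # menu:jump
```

The `or` chain makes the modality priority explicit: speech first, traditional input as the guaranteed fallback.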
Caglayan, Ozan. "Multimodal Machine Translation." Thesis, Le Mans, 2019. http://www.theses.fr/2019LEMA1016/document.
Machine translation aims at automatically translating documents from one language to another without human intervention. With the advent of deep neural networks (DNN), neural approaches to machine translation started to dominate the field, reaching state-of-the-art performance in many languages. Neural machine translation (NMT) also revived interest in interlingual machine translation, since it naturally fits the task into an encoder-decoder framework that produces a translation by decoding a latent source representation. Combined with the architectural flexibility of DNNs, this framework paved the way for further research in multimodality, with the objective of augmenting the latent representations with other modalities such as vision or speech. This thesis focuses on a multimodal machine translation (MMT) framework that integrates a secondary visual modality to achieve better and visually grounded language understanding. I specifically worked with a dataset containing images and their translated descriptions, where visual context can be useful for word sense disambiguation, missing word imputation, or gender marking when translating from a language with gender-neutral nouns to one with a grammatical gender system, as is the case from English to French. I propose two main approaches to integrate the visual modality: (i) a multimodal attention mechanism that learns to take into account both sentence and convolutional visual representations, and (ii) a method that uses global visual feature vectors to prime the sentence encoders and the decoders. Through automatic and human evaluation conducted on multiple language pairs, the proposed approaches were demonstrated to be beneficial.
Finally, I further show that by systematically removing certain linguistic information from the input sentences, the true strength of both methods emerges: they successfully impute missing nouns and colors, and can even translate when parts of the source sentences are completely removed.
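The first approach in the abstract, an attention that attends jointly over textual and convolutional visual states, can be sketched in NumPy: the decoder query scores both the word states and the visual region states, and the context is a convex combination of all of them. The shapes and the dot-product scoring are simplified assumptions, not the thesis's exact architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multimodal_attention(query, text_states, visual_states):
    """Attend jointly over textual and visual annotation vectors.

    query:         (d,)   current decoder state
    text_states:   (n, d) one encoder state per source word
    visual_states: (m, d) projected convolutional region features
    Returns a context vector of shape (d,).
    """
    states = np.vstack([text_states, visual_states])  # (n + m, d)
    scores = states @ query                           # dot-product scores
    weights = softmax(scores)                         # attention distribution
    return weights @ states                           # weighted context

rng = np.random.default_rng(0)
d = 8
context = multimodal_attention(
    rng.normal(size=d),
    rng.normal(size=(5, d)),   # 5 source words
    rng.normal(size=(3, d)),   # 3 visual regions
)
print(context.shape)  # (8,)
```

When a source noun is masked, probability mass can shift toward the visual rows, which is the intuition behind the imputation results mentioned above.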
Hewa, Thondilege Akila Sachinthani Pemasiri. "Multimodal Image Correspondence." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/235433/1/Akila%2BHewa%2BThondilege%2BThesis%281%29.pdf.
Bruni, Elia. "Multimodal Distributional Semantics." Doctoral thesis, University of Trento, 2013. http://eprints-phd.biblio.unitn.it/1075/1/EliaBruniThesis.pdf.
Campagnaro, Filippo. "Multimodal underwater networks." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3422716.
Books on the topic "Multimodal"
Pandey, Shyam B., and Santosh Khadka. Multimodal Composition. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003163220.
Forceville, Charles J., and Eduardo Urios-Aparisi, eds. Multimodal Metaphor. Berlin, New York: Mouton de Gruyter, 2009. http://dx.doi.org/10.1515/9783110215366.
Bernsen, Niels Ole, and Laila Dybkjær. Multimodal Usability. London: Springer London, 2010. http://dx.doi.org/10.1007/978-1-84882-553-6.
Wong, May. Multimodal Communication. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15428-8.
Kipp, Michael, Jean-Claude Martin, Patrizia Paggio, and Dirk Heylen, eds. Multimodal Corpora. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04793-0.
Stanley, Lim, Sim Thomas, and Singapore Logistics Association, eds. Multimodal transport. Singapore: SNP Reference, 2006.
Forceville, Ch., and Eduardo Urios-Aparisi, eds. Multimodal metaphor. Berlin: Mouton de Gruyter, 2009.
Spanjaart, Michiel. Multimodal Transport Law. New York: Routledge, 2017. http://dx.doi.org/10.4324/9781315213699.
Silingardi, Gabriele. El transporte multimodal. Bogotá: Universidad Externado de Colombia, 1998.
Book chapters on the topic "Multimodal"
Turner, Mark. "Multimodal body, multimodal mind, multimodal communication." In Metaphor in Language, Cognition, and Communication, 95–108. Amsterdam: John Benjamins Publishing Company, 2022. http://dx.doi.org/10.1075/milcc.9.05tur.
Wong, May. "Social Semiotics: Setting the Scene." In Multimodal Communication, 1–9. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15428-8_1.
Wong, May. "Slim Arms, Waist, Thighs and Hips, but Not the Breasts: Portrayal of Female Body Image in Hong Kong’s Magazine Advertisements." In Multimodal Communication, 13–53. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15428-8_2.
Wong, May. "Postage Stamps as Windows on Social Changes and Identity in Postcolonial Hong Kong." In Multimodal Communication, 55–80. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15428-8_3.
Wong, May. "Emotional Branding in Multimodal Personal Loan TV Advertisements: Analysing Voices and Engagement." In Multimodal Communication, 83–106. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15428-8_4.
Wong, May. "The Discourse of Advertising for Luxury Residences in Hong Kong: A Multimodal Critical Discourse Analysis." In Multimodal Communication, 107–30. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15428-8_5.
Wong, May. "Digital Photography and Identity of Hong Kong Females: A Case Study of Facebook Images." In Multimodal Communication, 131–55. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15428-8_6.
Wong, May. "Significance of Social Semiotic Research." In Multimodal Communication, 157–62. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15428-8_7.
Bernsen, Niels Ole, and Laila Dybkjær. "Structure, Usability, Readership." In Multimodal Usability, 1–19. London: Springer London, 2009. http://dx.doi.org/10.1007/978-1-84882-553-6_1.
Bernsen, Niels Ole, and Laila Dybkjær. "Observation of Users." In Multimodal Usability, 209–31. London: Springer London, 2009. http://dx.doi.org/10.1007/978-1-84882-553-6_10.
Conference papers on the topic "Multimodal"
Caravaca Aguirre, Antonio Miguel M., Sakshi Singh, Simon Labouesse, Rafael Piestun, and Emmanuel Bossy. "Multimodal imaging through a multimode fiber." In Opto-Acoustic Methods and Applications in Biophotonics, edited by Vasilis Ntziachristos and Roger Zemp. SPIE, 2019. http://dx.doi.org/10.1117/12.2525988.
Caravaca-Aguirre, Antonio M. "Multimodal endo-microscopy using multimode fibers." In Computational Optical Sensing and Imaging. Washington, D.C.: OSA, 2020. http://dx.doi.org/10.1364/cosi.2020.ctu5a.1.
Sanchez-Rada, J. Fernando, Carlos A. Iglesias, Hesam Sagha, Bjorn Schuller, Ian Wood, and Paul Buitelaar. "Multimodal multimodel emotion analysis as linked data." In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). IEEE, 2017. http://dx.doi.org/10.1109/aciiw.2017.8272599.
Liu, Kunzan, Tong Qiu, Honghao Cao, and Sixian You. "Adaptive Fiber Source for High-Speed Label-Free Multimodal Multiphoton Microscopy." In Imaging Systems and Applications. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/isa.2023.itu5e.4.
Hua, Xian-Sheng. "Session details: Multimodal-1 (Multimodal Reasoning)." In MM '18: ACM Multimedia Conference. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3286923.
Zhu, Junnan, Haoran Li, Tianshang Liu, Yu Zhou, Jiajun Zhang, and Chengqing Zong. "MSMO: Multimodal Summarization with Multimodal Output." In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/d18-1448.
"Session details: Multimodal-1 (Multimodal Reasoning)." In 2018 ACM Multimedia Conference, chair Xian-Sheng Hua. New York, New York, USA: ACM Press, 2018. http://dx.doi.org/10.1145/3240508.3286923.
Yao, Shaowei, and Xiaojun Wan. "Multimodal Transformer for Multimodal Machine Translation." In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.acl-main.400.
Tsai, Yao-Hung Hubert, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. "Multimodal Transformer for Unaligned Multimodal Language Sequences." In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/p19-1656.
Zhang, Heng, Vishal M. Patel, and Rama Chellappa. "Hierarchical Multimodal Metric Learning for Multimodal Classification." In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017. http://dx.doi.org/10.1109/cvpr.2017.312.
Reports on the topic "Multimodal"
Cheung, Steven W., and Srikantan S. Nagarajan. Tinnitus Multimodal Imaging. Fort Belvoir, VA: Defense Technical Information Center, October 2014. http://dx.doi.org/10.21236/ada613544.
Gelperin, Alan, Boris Shraiman, and Daniel D. Lee. Multimodal Olfactory Scene Analysis. Fort Belvoir, VA: Defense Technical Information Center, July 2006. http://dx.doi.org/10.21236/ada455446.
Cohen, Philip R. Multimodal Interaction for Virtual Environments. Fort Belvoir, VA: Defense Technical Information Center, January 1999. http://dx.doi.org/10.21236/ada413862.
Nadimi, Sohail, Edward Hong, and Bir Bhanu. Multimodal Human Identification for Computer Security. Fort Belvoir, VA: Defense Technical Information Center, March 2005. http://dx.doi.org/10.21236/ada430881.
Perzanowski, Dennis, Alan C. Schultz, William Adams, Elaine Marsh, and Magda Bugajska. Building a Multimodal Human-Robot Interface. Fort Belvoir, VA: Defense Technical Information Center, January 2001. http://dx.doi.org/10.21236/ada434941.
Mitra, Sabayachi, Bhuwan Bhaskar Agrawal, Hoe Yun Jeong, Shreyans Jain, Kavita Iyengar, and Atul Sanganeria. Developing Multimodal Logistics Parks in India. Asian Development Bank, June 2020. http://dx.doi.org/10.22617/brf200189-2.
Linville, Lisa M., Joshua James Michalenko, and Dylan Zachary Anderson. Multimodal Data Fusion via Entropy Minimization. Office of Scientific and Technical Information (OSTI), March 2020. http://dx.doi.org/10.2172/1614682.
Cohen, Philip R., and David R. McGee. Multimodal Command Interaction: Scientific and Technical. Fort Belvoir, VA: Defense Technical Information Center, November 2003. http://dx.doi.org/10.21236/ada418922.
Gazzaniga, Michael S. Multimodal Interactions in Sensory-Motor Processing. Fort Belvoir, VA: Defense Technical Information Center, June 1992. http://dx.doi.org/10.21236/ada255780.
Chen, Fang. Robust Multimodal Cognitive Load Measurement (RMCLM). Fort Belvoir, VA: Defense Technical Information Center, March 2013. http://dx.doi.org/10.21236/ada582471.