Contents
A selection of scholarly literature on the topic "Multimodal"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Multimodal".
Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work is formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, if the relevant parameters are available in the metadata.
Journal articles on the topic "Multimodal"
Jakobsen, Ingrid Karoline. "Inspired by image: A multimodal analysis of 10th grade English school-leaving written examinations set in Norway (2014-2018)". Acta Didactica Norge 13, no. 1 (15.03.2019): 5. http://dx.doi.org/10.5617/adno.6248.
Xiong, Ao, Yuanzheng Tong, Shaoyong Guo, Yanru Wang, Sujie Shao and Lin Mei. "An Optimal Allocation Method of Power Multimodal Network Resources Based on NSGA-II". Wireless Communications and Mobile Computing 2021 (26.10.2021): 1–10. http://dx.doi.org/10.1155/2021/9632277.
Da Silva Martins, Liviane. "Leitura multimodal e o processo de construção de sentido em charges". Revista Leitura, no. 65 (26.03.2020): 10–23. http://dx.doi.org/10.28998/2317-9945.202065.10-23.
Holsting, Alexandra, Cindie Aaen Maagaard and Nina Nørgaard. "Kan man lære at skifte gear? En multimodal tilgang til plot i den litterære tekst". NyS, Nydanske Sprogstudier, no. 56 (27.05.2019): 52–76. http://dx.doi.org/10.7146/nys.v1i56.112649.
Rita, Rita. "Penyusunan Pedoman Pemberian Izin Badan Usaha Angkutan Multimoda (BUAM)". Warta Penelitian Perhubungan 24, no. 6 (14.05.2019): 567. http://dx.doi.org/10.25104/warlit.v24i6.1041.
Pinheiro, Michelle Soares. "Nas teias da formação docente continuada: letramento multimodal crítico nos livros didáticos". REVISTA INTERSABERES 18 (01.06.2023): e023do3001. http://dx.doi.org/10.22169/revint.v18.e023do3001.
Torres-Orihuela, Guido, Miluska Anggie Barriga Huamán and Alexander Ramiro Arenas Cano. "Percepciones de alfabetización multimodal en estudiantes universitarios del área de ingeniería". Telos: Revista de Estudios Interdisciplinarios en Ciencias Sociales 25, no. 2 (12.05.2023): 266–82. http://dx.doi.org/10.36390/telos252.04.
Schmitt, Briane, Gabriel Da Silva Ribas and Ernani Cesar Freitas. "O letramento multimodal na escola: estabelecendo uma relação entre o eu e a sociedade". Diálogo das Letras 7, no. 1 (05.06.2018): 96–112. http://dx.doi.org/10.22297/dl.v7i1.2972.
Santamaría, Dulce M. "La Saga Contemporánea: Apreciación desde la lectura intercódigo y multimodal". GACETA DE PEDAGOGÍA, no. 41 (06.12.2021): 197–211. http://dx.doi.org/10.56219/rgp.vi41.944.
Borgfeldt, Eva, and Anna Lyngfelt. "'Jag ritade först sen skrev jag' – elevperspektiv på multimodal textproduktion i årskurs 3". Forskning om undervisning och lärande 5, no. 1 (06.03.2017): 64–89. http://dx.doi.org/10.61998/forskul.v5i1.27469.
Dissertations on the topic "Multimodal"
Yovera Solano, Luis Ángel, and Julio César Luna Cárdenas. "Multimodal interaction". Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2017. http://hdl.handle.net/10757/621880.
This research aims to identify the advances, research, and proposals concerning this technology, ranging from developing trends to bolder yet innovative proposed solutions. Likewise, in order to understand the mechanisms that make this interaction possible, it is necessary to know the best practices and standards stipulated by the W3C (World Wide Web Consortium) and the ACM (Association for Computing Machinery). Once these advances and proposals have been identified, the relevant mechanisms (NLP, i.e. Natural Language Processing, facial recognition, and touch) and their respective requirements can be established so as to allow a more natural interaction between the user and the system. Having identified the existing developments in this technology, along with the mechanisms and requirements that enable their use, a proposal for a system built on multimodal interaction is defined.
Hoffmann, Grasiele Fernandes. "Retextualização multimodal". Repositório Institucional da UFSC, 2015. https://repositorio.ufsc.br/xmlui/handle/123456789/158434.
The educational designer (ED) is a professional who works on courses mediated by information and communication technologies, carrying out, among various duties, the retextualization (adequacy and adaptation) of educational and instructional content into other textual genres and semiotic modalities. It was in this context, in the relation between this activity performed by the ED and that performed by the translator, that our interest arose in verifying whether the movement performed by the ED in transforming the source text into another/new text occurs through a process of multimodal translation/retextualization. To carry out this investigation, we relied on the theoretical principles of Functionalist Translation (REISS, [1984]1996; VERMEER, [1978]1986; [1984]1996; and NORD, [1988]1991; [1997]2014; 2006), on the perspective of Retextualization (TRAVAGLIA, 2003; MARCUSCHI, 2001; MATÊNCIO, 2002; 2003; DELL'ISOLA, 2007), and on the approach of textual multimodality (HODGE and KRESS, 1988; KRESS and van LEEUWEN, 2001; 2006; JEWITT, 2009; KRESS, 2010). In this study we analyzed the printed textbook (source text) and the e-book (target text) produced for the distance-learning course Prevenção dos Problemas Relacionados ao Uso de Drogas - Capacitação para Conselheiros e Lideranças Comunitárias (6th edition), promoted by the Secretaria Nacional de Políticas sobre Drogas (under the Ministry of Justice) and delivered by the Universidade Federal de Santa Catarina through the Núcleo Multiprojetos de Tecnologia Educacional. The e-book synthesizes the most important concepts presented in the printed textbook, along with some information contained in the virtual learning environment (AVEA). To compare and analyze this corpus and identify the translation/retextualization moves performed by the ED, we used the model of translation-oriented text analysis proposed by Nord ([1988]1991).
The results showed that: 1) the retextualization activity performed by the ED encompasses, during the translation process, other semiotic modes and resources that make up the multimodal text; 2) the intratextual factors listed by Nord focused essentially on linguistic elements and did not treat on an equal footing all the multiple semiotic modalities that compose the multimodal text, hence the need to add other semiotic modalities to her model; and 3) the work of the ED is comparable to that of the translator, since the retextualization activity involves an intentional action of producing a multimodal text from a source informative offer. In this context, we found the need to: 1) broaden the concept of retextualization, extending the process to the study and analysis of the other semiotic modalities that make up multimodal texts; 2) add further factors of analysis to Nord's framework, expanding the model into a text analysis applied to multimodal retextualization; and 3) recognize that the ED does indeed perform translation work when transforming a text into another/new multimodal text. We thus achieved the overall objective of our research and demonstrated, on the basis of the Functionalist theory of Translation, that the movement performed by the ED in transforming the source text into another/new text occurs through a process of multimodal translation/retextualization and that, in this specific function, the ED becomes a translator/retextualizer.
Abstract: Instructional designers (ID) act on courses mediated by Information and Communication Technologies (ICTs), performing actions such as the retextualization (adaptation and adequacy) of educational and instructional content for other textual genres and semiotic modalities. Within this context of relations between designer and translator, we became interested in verifying whether the designer's movement in transforming the base text into another new text is done through a process of multimodal translation/retextualization. In order to perform this investigation, we have based our study on the theoretical principles of Functionalist Translation (REISS, [1984]1996; VERMEER, [1978]1986; [1984]1996; and NORD, [1988]1991; [1997]2014; 2006), on Retextualization perspectives (TRAVAGLIA, 2003; MARCUSCHI, 2001; MATÊNCIO, 2002; 2003; DELL'ISOLA, 2007), and on the textual multimodality approach (HODGE and KRESS, 1988; KRESS and van LEEUWEN, 2001; 2006; JEWITT, 2009; KRESS, 2010). For this study, a printed textbook (base text) and its eBook (target text) were analyzed, produced for a Distance Education course on Problem Prevention in Drug Use - A Course for Counselors and Community Leadership (6th edition), promoted by the Brazilian office of politics on drugs (Secretaria Nacional de Políticas sobre Drogas - SENAD) with the Ministry of Justice, and developed by Universidade Federal de Santa Catarina (UFSC) through the multi-project center for educational technology (Núcleo Multiprojetos de Tecnologia Educacional - NUTE). This eBook presents the most important concepts from the printed textbook, as well as some information available in the VLE. In order to collate and analyze the translational and retextualization moves performed by the designer, the textual analysis model for translation proposed by Nord ([1988]1991) was utilized.
Results demonstrate that: 1) retextualization performed by the designer includes, during the translation process, other modes and semiotic resources composing the multimodal text; 2) the intratextual factors listed by Nord focused basically on linguistic elements, not covering all the multiple semiotic modalities that comprise the text on an equal footing - hence the need to add other semiotic modalities to Nord's proposed model; and 3) instructional design is equivalent to the work of a translator, as there is an intent to produce a multimodal text from an informative source. In this context, it is possible to note the need to: 1) broaden the concept of retextualization, extending the process to study and analyze the other semiotic modalities that compose multimodal texts; 2) add other factors of analysis to Nord's framework, broadening her model into a textual analysis applied to multimodal retextualization; and 3) recognize that the designer does indeed perform translation in transforming a text into another new multimodal text. In this sense, the main objective of this study was achieved, thus proving within the Functionalist theory of Translation that the designer transforms the source text into another new text through a process of multimodal translation/retextualization, and that designers thus become translators/retextualizers.
Contreras Lizarraga, Adrián Arturo. "Multimodal microwave filters". Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/134931.
Guilbeault, Douglas Richard. "Multimodal rhetorical figures". Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/53978.
Bazo Rodríquez, Alfredo, and Vitaliano Delgado Rosado. "Eje multimodal Amazonas". Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2013. http://hdl.handle.net/10757/273520.
Kim, Hana. "Multimodal animation control". Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29661.
Includes bibliographical references (leaf 44).
In this thesis, we present a multimodal animation control system. Our approach is based on the human-centric computing model proposed by Project Oxygen at the MIT Laboratory for Computer Science. Our system allows the user to create and control animation in real time through a speech interface developed with SpeechBuilder; the user can also fall back on traditional input modes should the speech interface fail. We assume that the user has no prior knowledge of or experience in animation, yet the system enables them to create interesting and meaningful animation naturally and fluently. We argue that our system can be used in a number of applications, ranging from PowerPoint presentations to simulations to children's storytelling tools.
Caglayan, Ozan. "Multimodal Machine Translation". Thesis, Le Mans, 2019. http://www.theses.fr/2019LEMA1016/document.
Machine translation aims at automatically translating documents from one language to another without human intervention. With the advent of deep neural networks (DNN), neural approaches to machine translation started to dominate the field, reaching state-of-the-art performance in many languages. Neural machine translation (NMT) also revived interest in interlingual machine translation because it naturally fits the task into an encoder-decoder framework which produces a translation by decoding a latent source representation. Combined with the architectural flexibility of DNNs, this framework paved the way for further research in multimodality, with the objective of augmenting the latent representations with other modalities such as vision or speech. This thesis focuses on a multimodal machine translation (MMT) framework that integrates a secondary visual modality to achieve better and visually grounded language understanding. I specifically worked with a dataset containing images and their translated descriptions, where visual context can be useful for word sense disambiguation, missing word imputation, or gender marking when translating from a language with gender-neutral nouns into one with a grammatical gender system, as is the case from English to French. I propose two main approaches to integrate the visual modality: (i) a multimodal attention mechanism that learns to take into account both sentence and convolutional visual representations, and (ii) a method that uses global visual feature vectors to prime the sentence encoders and the decoders. Through automatic and human evaluation conducted on multiple language pairs, the proposed approaches were shown to be beneficial.
Finally, I further show that by systematically removing certain linguistic information from the input sentences, the true strength of both methods emerges, as they successfully impute missing nouns and colors and can even translate when parts of the source sentences are completely removed.
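The two integration strategies named in this abstract can be sketched in a few lines of NumPy. This is an illustrative sketch only: the array shapes, weight names, and the additive fusion rule are assumptions chosen for exposition, not the thesis's actual implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multimodal_attention(dec_state, txt_states, vis_states, W_txt, W_vis):
    """Approach (i): attend separately over textual encoder states and
    convolutional visual region features, then merge the two contexts."""
    att_txt = softmax(txt_states @ (W_txt @ dec_state))   # (src_len,)
    att_vis = softmax(vis_states @ (W_vis @ dec_state))   # (n_regions,)
    ctx_txt = att_txt @ txt_states                        # (d,)
    ctx_vis = att_vis @ vis_states                        # (d,)
    return ctx_txt + ctx_vis                              # fused context, (d,)

def prime_with_vision(global_vis_feat, W_init, b_init):
    """Approach (ii): project a global visual feature vector (e.g. a CNN
    pooling-layer output) into the hidden-state space to initialize
    (prime) the sentence encoder or the decoder."""
    return np.tanh(W_init @ global_vis_feat + b_init)     # (d,)
```

In a full model the fused context would feed the decoder's output layer at each step, while the primed state would replace the usual zero initialization of the recurrent encoder/decoder.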
Hewa Thondilege, Akila Sachinthani Pemasiri. "Multimodal Image Correspondence". Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/235433/1/Akila%2BHewa%2BThondilege%2BThesis%281%29.pdf.
Bruni, Elia. "Multimodal Distributional Semantics". Doctoral thesis, University of Trento, 2013. http://eprints-phd.biblio.unitn.it/1075/1/EliaBruniThesis.pdf.
Campagnaro, Filippo. "Multimodal underwater networks". Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3422716.
Books on the topic "Multimodal"
Pandey, Shyam B., and Santosh Khadka. Multimodal Composition. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003163220.
Forceville, Charles J., and Eduardo Urios-Aparisi, eds. Multimodal Metaphor. Berlin, New York: Mouton de Gruyter, 2009. http://dx.doi.org/10.1515/9783110215366.
Bernsen, Niels Ole, and Laila Dybkjær. Multimodal Usability. London: Springer London, 2010. http://dx.doi.org/10.1007/978-1-84882-553-6.
Wong, May. Multimodal Communication. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15428-8.
Kipp, Michael, Jean-Claude Martin, Patrizia Paggio and Dirk Heylen, eds. Multimodal Corpora. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04793-0.
Lim, Stanley, Thomas Sim and Singapore Logistics Association, eds. Multimodal Transport. Singapore: SNP Reference, 2006.
Forceville, Charles, and Eduardo Urios-Aparisi, eds. Multimodal Metaphor. Berlin: Mouton de Gruyter, 2009.
Dybkjær, Laila, ed. Multimodal Usability. Berlin: Springer, 2009.
Spanjaart, Michiel. Multimodal Transport Law. New York: Routledge, 2017. http://dx.doi.org/10.4324/9781315213699.
Silingardi, Gabriele. El transporte multimodal. Bogotá: Universidad Externado de Colombia, 1998.
Book chapters on the topic "Multimodal"
Turner, Mark. "Multimodal body, multimodal mind, multimodal communication". In Metaphor in Language, Cognition, and Communication, 95–108. Amsterdam: John Benjamins Publishing Company, 2022. http://dx.doi.org/10.1075/milcc.9.05tur.
Wong, May. "Social Semiotics: Setting the Scene". In Multimodal Communication, 1–9. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15428-8_1.
Wong, May. "Slim Arms, Waist, Thighs and Hips, but Not the Breasts: Portrayal of Female Body Image in Hong Kong's Magazine Advertisements". In Multimodal Communication, 13–53. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15428-8_2.
Wong, May. "Postage Stamps as Windows on Social Changes and Identity in Postcolonial Hong Kong". In Multimodal Communication, 55–80. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15428-8_3.
Wong, May. "Emotional Branding in Multimodal Personal Loan TV Advertisements: Analysing Voices and Engagement". In Multimodal Communication, 83–106. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15428-8_4.
Wong, May. "The Discourse of Advertising for Luxury Residences in Hong Kong: A Multimodal Critical Discourse Analysis". In Multimodal Communication, 107–30. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15428-8_5.
Wong, May. "Digital Photography and Identity of Hong Kong Females: A Case Study of Facebook Images". In Multimodal Communication, 131–55. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15428-8_6.
Wong, May. "Significance of Social Semiotic Research". In Multimodal Communication, 157–62. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15428-8_7.
Bernsen, Niels Ole, and Laila Dybkjær. "Structure, Usability, Readership". In Multimodal Usability, 1–19. London: Springer London, 2009. http://dx.doi.org/10.1007/978-1-84882-553-6_1.
Bernsen, Niels Ole, and Laila Dybkjær. "Observation of Users". In Multimodal Usability, 209–31. London: Springer London, 2009. http://dx.doi.org/10.1007/978-1-84882-553-6_10.
Conference papers on the topic "Multimodal"
Caravaca Aguirre, Antonio Miguel M., Sakshi Singh, Simon Labouesse, Rafael Piestun and Emmanuel Bossy. "Multimodal imaging through a multimode fiber". In Opto-Acoustic Methods and Applications in Biophotonics, edited by Vasilis Ntziachristos and Roger Zemp. SPIE, 2019. http://dx.doi.org/10.1117/12.2525988.
Caravaca-Aguirre, Antonio M. "Multimodal endo-microscopy using multimode fibers". In Computational Optical Sensing and Imaging. Washington, D.C.: OSA, 2020. http://dx.doi.org/10.1364/cosi.2020.ctu5a.1.
Sanchez-Rada, J. Fernando, Carlos A. Iglesias, Hesam Sagha, Björn Schuller, Ian Wood and Paul Buitelaar. "Multimodal multimodel emotion analysis as linked data". In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). IEEE, 2017. http://dx.doi.org/10.1109/aciiw.2017.8272599.
Liu, Kunzan, Tong Qiu, Honghao Cao and Sixian You. "Adaptive Fiber Source for High-Speed Label-Free Multimodal Multiphoton Microscopy". In Imaging Systems and Applications. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/isa.2023.itu5e.4.
Hua, Xian-Sheng. "Session details: Multimodal-1 (Multimodal Reasoning)". In MM '18: ACM Multimedia Conference. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3286923.
Zhu, Junnan, Haoran Li, Tianshang Liu, Yu Zhou, Jiajun Zhang and Chengqing Zong. "MSMO: Multimodal Summarization with Multimodal Output". In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/d18-1448.
"Session details: Multimodal-1 (Multimodal Reasoning)". In 2018 ACM Multimedia Conference, chaired by Xian-Sheng Hua. New York, New York, USA: ACM Press, 2018. http://dx.doi.org/10.1145/3240508.3286923.
Yao, Shaowei, and Xiaojun Wan. "Multimodal Transformer for Multimodal Machine Translation". In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.acl-main.400.
Tsai, Yao-Hung Hubert, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency and Ruslan Salakhutdinov. "Multimodal Transformer for Unaligned Multimodal Language Sequences". In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/p19-1656.
Zhang, Heng, Vishal M. Patel and Rama Chellappa. "Hierarchical Multimodal Metric Learning for Multimodal Classification". In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017. http://dx.doi.org/10.1109/cvpr.2017.312.
Organization reports on the topic "Multimodal"
Cheung, Steven W., and Srikantan S. Nagarajan. Tinnitus Multimodal Imaging. Fort Belvoir, VA: Defense Technical Information Center, October 2014. http://dx.doi.org/10.21236/ada613544.
Gelperin, Alan, Boris Shraiman and Daniel D. Lee. Multimodal Olfactory Scene Analysis. Fort Belvoir, VA: Defense Technical Information Center, July 2006. http://dx.doi.org/10.21236/ada455446.
Cohen, Philip R. Multimodal Interaction for Virtual Environments. Fort Belvoir, VA: Defense Technical Information Center, January 1999. http://dx.doi.org/10.21236/ada413862.
Nadimi, Sohail, Edward Hong and Bir Bhanu. Multimodal Human Identification for Computer Security. Fort Belvoir, VA: Defense Technical Information Center, March 2005. http://dx.doi.org/10.21236/ada430881.
Perzanowski, Dennis, Alan C. Schultz, William Adams, Elaine Marsh and Magda Bugajska. Building a Multimodal Human-Robot Interface. Fort Belvoir, VA: Defense Technical Information Center, January 2001. http://dx.doi.org/10.21236/ada434941.
Mitra, Sabayachi, Bhuwan Bhaskar Agrawal, Hoe Yun Jeong, Shreyans Jain, Kavita Iyengar and Atul Sanganeria. Developing Multimodal Logistics Parks in India. Asian Development Bank, June 2020. http://dx.doi.org/10.22617/brf200189-2.
Linville, Lisa M., Joshua James Michalenko and Dylan Zachary Anderson. Multimodal Data Fusion via Entropy Minimization. Office of Scientific and Technical Information (OSTI), March 2020. http://dx.doi.org/10.2172/1614682.
Cohen, Philip R., and David R. McGee. Multimodal Command Interaction: Scientific and Technical. Fort Belvoir, VA: Defense Technical Information Center, November 2003. http://dx.doi.org/10.21236/ada418922.
Gazzaniga, Michael S. Multimodal Interactions in Sensory-Motor Processing. Fort Belvoir, VA: Defense Technical Information Center, June 1992. http://dx.doi.org/10.21236/ada255780.
Chen, Fang. Robust Multimodal Cognitive Load Measurement (RMCLM). Fort Belvoir, VA: Defense Technical Information Center, March 2013. http://dx.doi.org/10.21236/ada582471.