Dissertations / Theses on the topic 'Multimodal'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Multimodal.'
You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Yovera, Solano Luis Ángel, and Cárdenas Julio César Luna. "Multimodal interaction." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2017. http://hdl.handle.net/10757/621880.
This research aims to identify the advances, research and proposals around this technology, ranging from developing trends to bolder but innovative proposed solutions. Likewise, in order to understand the mechanisms that allow this interaction, it is necessary to know the best practices and standards stipulated by the W3C (World Wide Web Consortium) and the ACM (Association for Computing Machinery). Once the advances and proposals have been identified, the mechanisms involved (NLP (Natural Language Processing), facial recognition and touch) and their respective requirements are described so that they can be used to allow a more natural interaction between the user and the system. Having identified all existing developments in this technology, together with the mechanisms and requirements that allow their use, a proposal for a system built on multimodal interaction is defined.
Hoffmann, Grasiele Fernandes. "Retextualização multimodal." reponame:Repositório Institucional da UFSC, 2015. https://repositorio.ufsc.br/xmlui/handle/123456789/158434.
O designer educacional (DE) é o profissional que atua em cursos mediados pelas tecnologias da informação e comunicação realizando, em meio a várias atribuições, a retextualização (adequação e adaptação) de conteúdos educativos e instrucionais para outros gêneros textuais e modalidades semióticas. Foi neste contexto, na relação entre esta atividade desenvolvida pelo DE e a realizada pelo tradutor, que surgiu nosso interesse em verificar se o movimento realizado pelo DE ao transformar o texto base em um outro/novo texto se dá por meio de um processo de tradução/retextualização multimodal. Para realizar essa investigação nos apoiamos nos princípios teóricos da Tradução Funcionalista (REISS, [1984]1996; VERMEER, [1978]1986; [1984]1996; e NORD, [1988]1991; [1997]2014; 2006), na perspectiva da Retextualização (TRAVAGLIA, 2003; MARCUSCHI, 2001; MATÊNCIO, 2002; 2003; DELL?ISOLA, 2007) e na abordagem da multimodalidade textual (HODGE e KRESS, 1988; KRESS e van LEEUWEN, 2001; 2006; JEWITT, 2009; KRESS, 2010). Neste estudo analisamos o livro-texto impresso (texto base) e o e-book (texto meta) produzido para o curso a distância Prevenção dos Problemas Relacionados ao Uso de Drogas - Capacitação para Conselheiros e Lideranças Comunitárias (6ª edição), promovido pela Secretaria Nacional de Políticas sobre Drogas (vinculada ao Ministério da Justiça) e realizado pela Universidade Federal de Santa Catarina, por meio do Núcleo Multiprojetos de Tecnologia Educacional. No e-book estão sintetizados os conceitos mais importantes apresentados no livro-texto impresso, além de algumas informações contidas no AVEA. Para realizar o cotejamento e a análise deste corpus e identificar os movimentos tradutórios/retextualização realizados pelo DE, utilizamos o modelo de análise textual aplicado à tradução proposto por Nord ([1988]1991). Os resultados demonstraram que: 1) a atividade de retextualização realizada pelo DE contempla, durante o processo tradutório, outros modos e recursos semióticos que compõem o texto multimodal; 2) os fatores intratextuais relacionados por Nord enfocavam basicamente os elementos linguísticos e não compreendiam em um nível de igualdade todas as múltiplas modalidades semióticas que compõem o texto multimodal, daí a necessidade de acrescentar outras modalidades semióticas no modelo proposto pela teórica; e 3) o trabalho desenvolvido pelo DE se equipara ao realizado pelo tradutor, pois existe na atividade de retextualização realizada por ele uma ação intencional de produzir um texto multimodal a partir de uma oferta informativa base. Neste contexto, constatamos a necessidade de: 1) ampliar o conceito de retextualização, estendendo o processo para o estudo e a análise das demais modalidades semióticas que compõem os textos multimodais; 2) acrescentar ao quadro de Nord outros fatores de análise, ampliando o modelo para a análise textual aplicada à retextualização multimodal; e 3) o DE realiza sim um trabalho de tradução ao transformar um texto em um outro/novo texto multimodal. Dessa forma, atingimos o objetivo geral de nossa pesquisa e comprovamos, com base na teoria Funcionalista da Tradução, que o movimento realizado pelo DE ao transformar o texto base em outro/novo texto se dá por meio de um processo de tradução/retextualização multimodal e que, por esta razão, nesta função específica, ele se torna um tradutor/retextualizador.
Abstract : Instructional designers (ID) act on courses mediated by Information and Communication Technologies (ICTs), performing actions such as the retextualization (adaptation and adequacy) of educational and instructional content for other textual genres and semiotic modalities. Within this context of relations between designer and translator, we acquired an interest in verifying whether the movement performed by the designer in transforming the base text into another new text occurs through a process of multimodal translation/retextualization. In order to perform this investigation, we have based our study on the theoretical principles of Functionalist Translation (REISS, [1984]1996; VERMEER, [1978]1986; [1984]1996; and NORD, [1988]1991; [1997]2014; 2006), on Retextualization perspectives (TRAVAGLIA, 2003; MARCUSCHI, 2001; MATÊNCIO, 2002; 2003; DELL'ISOLA, 2007), and on the textual multimodality approach (HODGE and KRESS, 1988; KRESS and van LEEUWEN, 2001; 2006; JEWITT, 2009; KRESS, 2010). For this study, we analyzed a printed textbook (base text) and its eBook (target text) produced for a Distance Education course on Problem Prevention in Drug Use - A Course for Counselors and Community Leadership (6th edition), promoted by the Brazilian office of politics on drugs (Secretaria Nacional de Políticas sobre Drogas - SENAD), linked to the Ministry of Justice, and developed by Universidade Federal de Santa Catarina (UFSC) through the multi-project center for educational technology (Núcleo Multiprojetos de Tecnologia Educacional - NUTE). In this eBook, the most important concepts from the printed textbook are presented, as well as some information available in the VLE. In order to collate and analyze the translational and retextualization moves performed by the designer, the textual analysis model for translation proposed by Nord ([1988]1991) was utilized. Results demonstrate that: 1) retextualization performed by the designer includes, during the translation process, other modes and semiotic resources composing the multimodal text; 2) the intratextual factors listed by Nord focused basically on linguistic elements and did not encompass, on an equal footing, all the multiple semiotic modalities that comprise the multimodal text - hence the need to add other semiotic modalities to Nord's proposed model; and 3) the instructional design work is equivalent to the work of a translator, as there is an intent to produce a multimodal text from an informative source. In this context, it is possible to note the need to: 1) broaden the concept of retextualization, extending the process to the study and analysis of the other semiotic modalities that compose multimodal texts; 2) add other factors of analysis to Nord's framework, broadening her model into a textual analysis applied to multimodal retextualization; and 3) recognize that the designer does indeed perform translation in transforming a text into another new multimodal text. In this sense, the main objective of this study was achieved, thus proving within the Functionalist theory of Translation that the designer transforms the source text into another new text through a process of multimodal translation/retextualization, and that designers thus become translators/retextualizers.
Contreras, Lizarraga Adrián Arturo. "Multimodal microwave filters." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/134931.
Guilbeault, Douglas Richard. "Multimodal rhetorical figures." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/53978.
Full textArts, Faculty of
English, Department of
Graduate
Bazo, Rodríquez Alfredo, and Rosado Vitaliano Delgado. "Eje multimodal Amazonas." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2013. http://hdl.handle.net/10757/273520.
Kim, Hana 1980. "Multimodal animation control." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29661.
Full textIncludes bibliographical references (leaf 44).
In this thesis, we present a multimodal animation control system. Our approach is based on a human-centric computing model proposed by Project Oxygen at MIT Laboratory for Computer Science. Our system allows the user to create and control animation in real time using the speech interface developed using SpeechBuilder. The user can also fall back to traditional input modes should the speech interface fail. We assume that the user has no prior knowledge and experience in animation and yet enable him to create interesting and meaningful animation naturally and fluently. We argue that our system can be used in a number of applications ranging from PowerPoint presentations to simulations to children's storytelling tools.
Caglayan, Ozan. "Multimodal Machine Translation." Thesis, Le Mans, 2019. http://www.theses.fr/2019LEMA1016/document.
Machine translation aims at automatically translating documents from one language to another without human intervention. With the advent of deep neural networks (DNN), neural approaches to machine translation started to dominate the field, reaching state-of-the-art performance in many languages. Neural machine translation (NMT) also revived the interest in interlingual machine translation due to how it naturally fits the task into an encoder-decoder framework which produces a translation by decoding a latent source representation. Combined with the architectural flexibility of DNNs, this framework paved the way for further research in multimodality with the objective of augmenting the latent representations with other modalities such as vision or speech, for example. This thesis focuses on a multimodal machine translation (MMT) framework that integrates a secondary visual modality to achieve better and visually grounded language understanding. I specifically worked with a dataset containing images and their translated descriptions, where visual context can be useful for word sense disambiguation, missing word imputation, or gender marking when translating from a language with gender-neutral nouns to one with a grammatical gender system, as is the case with English to French. I propose two main approaches to integrate the visual modality: (i) a multimodal attention mechanism that learns to take into account both sentence and convolutional visual representations, (ii) a method that uses global visual feature vectors to prime the sentence encoders and the decoders. Through automatic and human evaluation conducted on multiple language pairs, the proposed approaches were demonstrated to be beneficial. Finally, I further show that by systematically removing certain linguistic information from the input sentences, the true strength of both methods emerges as they successfully impute missing nouns, colors and can even translate when parts of the source sentences are completely removed.
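To make the first of the two approaches concrete, the following is a minimal, self-contained sketch of a multimodal attention step of the kind described above: a decoder state attends separately over textual annotations and spatial convolutional features, and the two context vectors are then fused. All shapes, parameter names and the fusion-by-projection step are illustrative assumptions, not the implementation used in the thesis.

```python
# Minimal sketch of multimodal attention: a decoder state attends both to source-word
# encodings and to spatial CNN features (names and shapes are illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys, W_q, W_k, v):
    # Additive (Bahdanau-style) attention: score each key against the query.
    scores = np.array([v @ np.tanh(W_q @ query + W_k @ k) for k in keys])
    weights = softmax(scores)
    context = weights @ keys          # weighted sum of the keys
    return context, weights

d_state, d_txt, d_img, att = 128, 256, 512, 100
decoder_state = rng.normal(size=d_state)
text_annotations = rng.normal(size=(12, d_txt))    # 12 source-word encodings
image_annotations = rng.normal(size=(64, d_img))   # 8x8 spatial CNN features, flattened

# Separate attention parameters per modality.
txt_ctx, txt_w = attend(decoder_state, text_annotations,
                        rng.normal(size=(att, d_state)), rng.normal(size=(att, d_txt)),
                        rng.normal(size=att))
img_ctx, img_w = attend(decoder_state, image_annotations,
                        rng.normal(size=(att, d_state)), rng.normal(size=(att, d_img)),
                        rng.normal(size=att))

# Fuse the two context vectors (here by projection and sum) before predicting the next word.
W_t = rng.normal(size=(d_state, d_txt))
W_i = rng.normal(size=(d_state, d_img))
multimodal_context = W_t @ txt_ctx + W_i @ img_ctx
print(multimodal_context.shape, txt_w.sum(), img_w.sum())
```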
Hewa, Thondilege Akila Sachinthani Pemasiri. "Multimodal Image Correspondence." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/235433/1/Akila%2BHewa%2BThondilege%2BThesis%281%29.pdf.
Bruni, Elia. "Multimodal Distributional Semantics." Doctoral thesis, University of Trento, 2013. http://eprints-phd.biblio.unitn.it/1075/1/EliaBruniThesis.pdf.
Campagnaro, Filippo. "Multimodal underwater networks." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3422716.
Servajean, Philippe. "Approche "système unique" de la (méta)cognition." Thesis, Montpellier 3, 2018. http://www.theses.fr/2018MON30062.
There is today a broad consensus that the cognitive system is capable of having activities on itself; we are talking about metacognition. Although several studies have focused on the mechanisms underlying this metacognition, to our knowledge, none has done so in a "sensorimotor and integrative" perspective of cognitive functioning such as the one we propose. Thus, the thesis we defend in this work is the following: metacognitive information, especially fluency, has strictly the same status as any cognitive information (i.e., sensory and motor). In a first chapter, we propose a model of cognition respecting this principle. Then, in the next two chapters, we test our hypothesis through experiments and simulations using the mathematical model we have developed. This work focused more specifically on phenomena related to three original possibilities predicted by our hypothesis: the possibility of meta-metacognition, the possibility of integration between sensory information and metacognitive information, and the possibility of metacognitive abstraction.
Orta, de la Garza María Rebeca. "Nodo multimodal de transferencias." Thesis, Universidad de las Américas Puebla, 2003. http://catarina.udlap.mx/u_dl_a/tales/documentos/lar/orta_d_mr/.
Aas, Asbjørn. "Brukerforsøk med multimodal demonstrator." Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10283.
Angelica, Lim. "MEI: Multimodal Emotional Intelligence." 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/188869.
Marcollo, Hayden 1972. "Multimodal vortex-induced vibration." Monash University, Dept. of Mechanical Engineering, 2002. http://arrow.monash.edu.au/hdl/1959.1/7674.
Qvarfordt, Pernilla. "Eyes on multimodal interaction /." Linköping : Univ, 2004. http://www.bibl.liu.se/liupubl/disp/disp2004/tek893s.pdf.
Kernchen, Jochen Ralf. "Mobile multimodal user interfaces." Thesis, University of Surrey, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.531385.
Nyamapfene, Abel. "Unsupervised multimodal neural networks." Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/844064/.
Danielsson, Oscar. "Multimodal Brain Age Estimation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281834.
Full textMaskininlärningsmodeller tränade på MR-data av friska personer kan användas för att estimera ålder. Noggrann uppskattning hjärnans ålder är viktigt för att pålitligt upptäcka onormalt åldrande av hjärnan. Ett sätt att öka noggrannheten är genom att använda multimodal data. Tidigare forskning gjord med multimodal data har till stor del inte varit baserad på djupinlärning; i detta examensarbete undersöker vi en djupinlärningsmodell som effektivt kan utnyttja flera modaliteter. Tre basmodeller tränades. Två använde T1-viktad respektive T2-viktad data. Den tredje modellen tränades på både T1- och T2-viktad data genom högnivå-fusion. Vi fann att användning av multimodal data minskade det genomsnittliga absoluta felet för estimerade åldrar. En fjärde modell använde separering (eng. disentanglement) för att skapa en representation som är robust vid avsaknad av T1- eller T2-viktad data. Resultaten var lika för denna modell och basmodellerna, vilket innebär att modellen är robust mot avsaknad av data, utan någon betydande försämring i noggranhet.
Sioson, Allan A. "Multimodal Networks in Biology." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/29995.
Fernández, Carbonell Marcos. "Automated Multimodal Emotion Recognition." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-282534.
Full textAtt kunna läsa och tolka affektiva tillstånd spelar en viktig roll i det mänskliga samhället. Detta är emellertid svårt i vissa situationer, särskilt när information är begränsad till antingen vokala eller visuella signaler. Många forskare har undersökt de så kallade grundläggande känslorna på ett övervakat sätt. Det här examensarbetet innehåller resultaten från en multimodal övervakad och oövervakad studie av ett mer realistiskt antal känslor. För detta ändamål extraheras ljud- och videoegenskaper från GEMEP-data med openSMILE respektive OpenFace. Det övervakade tillvägagångssättet inkluderar jämförelse av flera lösningar och visar att multimodala pipelines kan överträffa unimodala sådana, även med ett större antal affektiva tillstånd. Den oövervakade metoden omfattar en konservativ och en utforskande metod för att hitta meningsfulla mönster i det multimodala datat. Den innehåller också ett innovativt förfarande för att bättre förstå resultatet av klustringstekniker.
Alabau, Gonzalvo Vicente. "Multimodal interactive structured prediction." Doctoral thesis, Universitat Politècnica de València, 2014. http://hdl.handle.net/10251/35135.
Alabau Gonzalvo, V. (2014). Multimodal interactive structured prediction [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/35135
Theissing, Simon. "Supervision en transport multimodal." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLN076/document.
Without any doubt, modern multimodal transportation systems are vital to the ecological sustainability and the economic prosperity of urban agglomerations, and in doing so to the quality of life of their many inhabitants. Moreover it is known that a well-functioning interoperability of the different modes and lines in such networked systems is key to their acceptance given the fact that (i) many if not most trips between different origin/destination pairs require transfers, and (ii) costly infrastructure investments targeting the creation of more direct links through the construction of new or the extension of existing lines are not open to debate. Thus, a better understanding of how the different modes and lines in these systems interact through passenger transfers is of utmost importance. However, acquiring this understanding is particularly tricky in degraded situations where some or all transportation services cannot be provided as planned due to e.g. some passenger incident, and/or where the demand for these scheduled services deviates from any long-term statistical planning. Here, the development for and integration of sophisticated mathematical models into the operation of such systems may provide remedy, where model-predictive supervision seems to be one very promising area of application which we consider here. Model-predictive supervision can take several forms. In this work, we focus on the model-based impact analysis of different actions, such as the delayed departure of some vehicle from a stop, applied to the operation of the considered transportation system when some downgrading situation occurs for which statistical data is lacking. For this purpose, we introduce a new stochastic hybrid automaton model, and show how this mathematically profound model can be used to forecast the passenger numbers in and the vehicle operational state of this transportation system starting from estimations of all passenger numbers and an exact knowledge of the vehicle operational state at the time of the incident occurrence. Our new automaton model brings under the same roof all passengers who demand fixed-route transportation services, and all vehicles which provide them. It explicitly accounts for all capacity limits and the fact that passengers do not necessarily follow efficient paths which must be mapped to some simple-to-understand cost function. Instead, every passenger has a trip profile which defines a fixed route in the infrastructure of the transportation system, and a preference for the different transportation services along this route. Moreover, our model does not abstract away from all vehicle movements but explicitly includes them in its dynamics, which latter property is crucial to the impact analysis of any vehicle movement-related action. In addition, our model accounts for uncertainty resulting from unknown initial passenger numbers and unknown passenger arrival flows. Compared to classical modelling approaches for hybrid automata, our Petri net-styled approach does not require the end user to specify our model's many differential equation systems by hand. Instead, all these systems can be derived from the model's predominantly graphical specification in a fully automated manner for the discrete-time computation of any forecast. This latter property of our model in turn reduces the risk of man-made specification and thus forecasting errors.
Besides introducing our new model, we also develop in this report some algorithmic bricks which target two major bottlenecks which are likely to occur during its forecast-producing simulation, namely the numerical integration of the many high-dimensional systems of stochastic differential equations and the combinatorial explosion of its discrete state. Moreover, we prove the computational feasibility and show the prospective benefits of our approach in the form of a simplistic test case and a more realistic use case.
Perez, Lloret Marta. "Photoactivable Multimodal Antimicrobial Nanoconstructs." Doctoral thesis, Università di Catania, 2017. http://hdl.handle.net/10761/3999.
Adebayo, Kolawole John <1986>. "Multimodal Legal Information Retrieval." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amsdottorato.unibo.it/8634/1/ADEBAYO-JOHN-tesi.pdf.
Medjahed, Hamid. "Distress situation identification by multimodal data fusion for home healthcare telemonitoring." Thesis, Evry, Institut national des télécommunications, 2010. http://www.theses.fr/2010TELE0002/document.
The population age increases in all societies throughout the world. In Europe, for example, the life expectancy for men is about 71 years and for women about 79 years. For North America, the life expectancy is currently about 75 for men and 81 for women. Moreover, the elderly prefer to preserve their independence, autonomy and way of life by living at home as long as possible. The current healthcare infrastructures in these countries are widely considered to be inadequate to meet the needs of an increasingly older population. Home healthcare monitoring is a solution to deal with this problem and to ensure that elderly people can live safely and independently in their own homes for as long as possible. Automatic in-home healthcare monitoring is a technological approach which helps people age in place through continuous telemonitoring. In this thesis, we explore automatic in-home healthcare monitoring by conducting a study of professionals who currently perform in-home healthcare monitoring, and by combining and synchronizing various telemonitoring modalities under a data synchronization and multimodal data fusion platform, FL-EMUTEM (Fuzzy Logic Multimodal Environment for Medical Remote Monitoring). This platform incorporates algorithms that process each modality and provide a technique of multimodal data fusion which can ensure pervasive in-home health monitoring for elderly people based on fuzzy logic. The originality of this thesis, which is the combination of various modalities in the home, about its inhabitant and their surroundings, will constitute an interesting benefit and impact for the elderly person suffering from loneliness. This work complements the stationary smart home environment in bringing to bear its capability for integrative continuous observation and detection of critical situations.
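A toy sketch of fuzzy-logic fusion across telemonitoring modalities in the spirit of the platform described above; the variables, membership functions and rules are invented for illustration and are not the FL-EMUTEM rule base.

```python
# Toy sketch of fuzzy-logic fusion of home-telemonitoring modalities into a distress score.
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def distress_level(heart_rate, minutes_without_motion, sound_db):
    # Fuzzify each modality (physiological, motion, acoustic).
    hr_high = tri(heart_rate, 90, 130, 180)
    still_long = tri(minutes_without_motion, 20, 60, 240)
    loud_noise = tri(sound_db, 60, 85, 120)

    # Combine rules with min (AND) / max (OR), Mamdani-style.
    rule_fall = min(loud_noise, still_long)      # loud event followed by no motion
    rule_cardiac = min(hr_high, still_long)      # high heart rate while motionless
    return max(rule_fall, rule_cardiac)          # overall distress degree in [0, 1]

# Fused readings from the different sensors:
print(distress_level(heart_rate=125, minutes_without_motion=75, sound_db=90))  # high
print(distress_level(heart_rate=70, minutes_without_motion=5, sound_db=40))    # near zero
```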
Sundström, Jessica. "Multimodalt skrivande - förutsättningar och lärandemöjligheter : Litteraturstudie om mellanstadieelevers lärandemöjligheter vid multimodalt skrivande inom svenskämnet och förutsättningar för en multimodal skrivundervisning." Thesis, Högskolan Dalarna, Pedagogiskt arbete, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:du-18956.
Malmberg, Lovisa, and Sara Stensils. "Multimodala texter i skolan : En multimodal läromedelsanalys av läseböcker i svenskämnet för årskurs F-3." Thesis, Uppsala universitet, Institutionen för pedagogik, didaktik och utbildningsstudier, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-434737.
Chen, Jianan. "Deep Learning Based Multimodal Retrieval." Electronic Thesis or Diss., Rennes, INSA, 2023. http://www.theses.fr/2023ISAR0019.
Multimodal tasks play a crucial role in the progression towards achieving general artificial intelligence (AI). The primary goal of multimodal retrieval is to employ machine learning algorithms to extract relevant semantic information, bridging the gap between different modalities such as visual images, linguistic text, and other data sources. It is worth noting that the information entropy associated with heterogeneous data for the same high-level semantics varies significantly, posing a significant challenge for multimodal models. Deep learning-based multimodal network models provide an effective solution to tackle the difficulties arising from substantial differences in information entropy. These models exhibit impressive accuracy and stability in large-scale cross-modal information matching tasks, such as image-text retrieval. Furthermore, they demonstrate strong transfer learning capabilities, enabling a well-trained model from one multimodal task to be fine-tuned and applied to a new multimodal task, even in scenarios involving few-shot or zero-shot learning. In our research, we develop a novel generative multimodal multi-view database specifically designed for the multimodal referential segmentation task. Additionally, we establish a state-of-the-art (SOTA) benchmark and multi-view metric for referring expression segmentation models in the multimodal domain. The results of our comparative experiments are presented visually, providing clear and comprehensive insights.
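The cross-modal matching at the heart of such retrieval systems can be sketched as a shared embedding space queried by cosine similarity; the random linear "encoders" below are placeholders for trained deep networks and make no claim about the architecture used in the thesis.

```python
# Minimal sketch of cross-modal retrieval in a shared embedding space: image and text
# features are projected into a common space and matched by cosine similarity.
import numpy as np

rng = np.random.default_rng(7)
d_img, d_txt, d_shared = 2048, 768, 256

W_img = rng.normal(size=(d_shared, d_img)) / np.sqrt(d_img)   # stand-in image projection head
W_txt = rng.normal(size=(d_shared, d_txt)) / np.sqrt(d_txt)   # stand-in text projection head

def embed(features, W):
    z = features @ W.T
    return z / np.linalg.norm(z, axis=1, keepdims=True)        # L2-normalise for cosine similarity

image_feats = rng.normal(size=(100, d_img))    # e.g. pooled CNN features of 100 images
caption_feats = rng.normal(size=(100, d_txt))  # e.g. encoded captions

img_emb = embed(image_feats, W_img)
txt_emb = embed(caption_feats, W_txt)

# Text-to-image retrieval: rank all images by similarity to a query caption.
query = txt_emb[0]
scores = img_emb @ query
top5 = np.argsort(-scores)[:5]
print("top-5 image indices for caption 0:", top5)
```

In a trained system the two projection heads would be learned jointly (e.g. with a contrastive objective) so that matching image-caption pairs land close together in the shared space.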
Gutiérrez, Aldrete Mariana. "El tratamiento del feminicidio en medios de comunicación en México." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/670554.
Esta investigación analiza el tratamiento del tema Feminicidio en la prensa mexicana. El objetivo es estudiar cuales son los aspectos del conflicto que se destacan en la información de prensa. Medimos la atención mediática y la comparamos con un lapso de tiempo al principio de la investigación y al final. Se extrajeron los marcos periodísticos multimodales, las fallas de contexto y los discursos ideológicos diseminados. Además analizamos la representación de los movimientos sociales contra el feminicidio como actor y las oportunidades discursivas alcanzadas, en comparación con la representación de las autoridades. Escogimos tres periódicos de circulación nacional tomando en cuenta las preferencias de la audiencia para elegir los más leídos en su versión impresa y por medios electrónicos. Durante un periodo de 41 meses se recabaron todos los artículos que informan feminicidio como tema principal o secundario, y los artículos sobre asesinatos de mujeres en los que no se ha comprobado si tenían motivos de género o no. Obtuvimos 2,527 artículos y se codificaron todos manualmente. Se utilizó la metodología de análisis de contenido cuantitativo textual y análisis de las imágenes para extraer los elementos de los marcos periodísticos multimodales, de acuerdo con la teoría de Entman: la denominación del problema, actores principales, la evaluación moral, la atribución de responsabilidad y el tratamiento recomendado. Cada elemento contiene diversas variables que se agruparon en conglomerados por orden de incidencia. La representación de los movimientos sociales se midió con las características del ‘paradigma de la protesta’ y analizamos el grado en que se adhiere a esta teoría. Utilizamos las mismas variables para medir las oportunidades discursivas del movimiento. Encontramos que la atención mediática al conflicto ha aumentado considerablemente en los 3 diarios diseminando la idea de que la severidad del problema también aumenta; sin embargo, la representación de las víctimas tiende a ser negativa y se reproducen discursos discriminatorios.
This research analyzes the treatment of the Feminicide issue in the Mexican press. The objective is to study which aspects of the conflict are highlighted in the press information. We measure media attention and compare it with a period of time at the beginning of the investigation and at the end. Multimodal journalistic frameworks, context failures and disseminated ideological discourses were extracted. We also analyze the representation of social movements against femicide as an actor and the discursive opportunities achieved, compared to the representation of the authorities. We chose three newspapers of national circulation taking into account the preferences of the audience to choose the most read in its printed version and electronically. During a period of 41 months, all articles that report femicide as the main or secondary topic were collected, and articles on murders of women in which it was not proven whether they had gender motives or not. We obtained 2,527 articles and all were manually coded. The methodology of textual quantitative content analysis and image analysis was used to extract the elements of multimodal journalistic frameworks, according to Entman's theory: the name of the problem, main actors, moral evaluation, attribution of responsibility and The recommended treatment. Each element contains several variables that were grouped into clusters in order of incidence. The representation of social movements was measured with the characteristics of the 'protest paradigm' and we analyzed the degree to which it adheres to this theory. We use the same variables to measure the discursive opportunities of the movement. We found that media attention to the conflict has increased considerably in the 3 newspapers disseminating the idea that the severity of the problem also increases; however, the representation of victims tends to be negative and discriminatory discourses are reproduced.
Specker, Elizabeth. "L1/L2 Eye Movement Reading of Closed Captioning: A Multimodal Analysis of Multimodal Use." Diss., The University of Arizona, 2008. http://hdl.handle.net/10150/194820.
García, Guerra Carlos Enrique. "Multimodal eye's optical quality (MEOQ)." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/397198.
Within the visual system, the optics of the eye is responsible for forming images of external objects on the fundus of the eye for their photoreception and neural interpretation. However, the eye is not perfect, and its capabilities can be limited by the presence of aberrations or scattered light. Therefore, quantifying the optical factors that affect the eye is important for diagnostic and monitoring purposes. In this context, this document summarizes the work carried out during the implementation of the Multimodal Eye's Optical Quality (MEOQ) system, a measurement device that integrates a double-pass (DP) instrument and a Hartmann-Shack (HS) sensor to provide information not only on aberrations but also on the scattering that occurs in the human eye. An open-field binocular design allows measurements under natural viewing conditions. In addition, the system is able to compensate for both spherical and astigmatic refractive errors by using devices of configurable optical power. The MEOQ system has been used to quantify scattering in the human eye based on the differences between DP and HS estimates. Furthermore, the DP information has been used to measure intraocular scattering with a new quantification method. Finally, the configurable properties of the spherical refraction corrector have been used to explore a method for reducing speckle noise in systems based on light reflections from the ocular fundus.
Innerhalb des visuellen Systems ist die Optik des Auges verantwortlich für die Abbildung externer Objekte auf dem Fundus des Auges, damit Licht umgewandelt und neural interpretiert wird. Dennoch ist das Auge nicht perfekt und seine Möglichkeiten sind durch Abbildungsfehler und Streuung begrenzt. Daraus ergibt sich, dass die Quantifizierung der optischen Faktoren, welche das Auge betreffen, wichtig für die Diagnose und Überwachung sind. Innerhalb dieses Rahmens fasst dieses Dokument die Arbeit zusammen, welche die Implementierung eines System zur multimodalen Bestimmung der optischen Qualität des Auges (MEOQ), bestehend aus einem Doppelpass-Instument (DP) und einem Hartmann-Shack-Sensor (HS), beschreibt, um nicht nur Informationen über Abbildungsfehler, sondern auch über Streuung im menschlichen Auge zu erhalten. Ein biokulares Freisicht-Design ermöglicht natürliche Sehverhältnisse. Darüberhinaus ist das System in der Lage sphärische und astigmatische Brechungsfehler mit einem Gerät einstellbarer optischer Leistung zu korrigieren. Das MEOQ System wurde genutzt um Streuung im menschlichen Auge mit Hilfe der Unterschiede der bschätzungen des DP und des HS zu quantifizieren. Darüberhinaus wurden die DP Informationen angewandt um intraokulare Streuung durch eine neue Methode der Quantifizierung zu messen. Schließlich wurden die konfigurierbaren Einstellungen des sphärischen Brechungsfehlerskorrektor genutzt um eine Methode zur Reduzierung von Speckle in Systemen, welche auf Reflektionen von Licht vom Fundus des Auges basieren, zu untersuchen.
Sheikhi, Shoshtari Ava. "Multimodal assessment of neurodegenerative diseases." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/58324.
Full textApplied Science, Faculty of
Graduate
Black, Cory A. "Supramolecular complexes of multimodal ligands." University of Otago. Department of Chemistry, 2007. http://adt.otago.ac.nz./public/adt-NZDU20070518.091104.
Treviranus, Jutta. "Multimodal access to written communication." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ28724.pdf.
Ropinski, Timo, Ivan Viola, Martin Biermann, Helwig Hauser, and Klaus Hinrichs. "Multimodal Visualization with Interactive Closeups." University of Münster, Germany, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93205.
Correia, Rose Mary. "Legal aspects of multimodal telecommunications." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23309.
This study begins in Chapter I with an examination of the emerging technologies and recent market trends which challenge traditional regulation, as well as the importance of upholding regulation in the emerging ISDN telecommunications environment. Chapter II discusses the recent market developments in Canada, the legal implications of emerging technologies for the current regulatory regime, and the need for comprehensive policy and regulation. Chapter III discusses the role of satellites in the emerging global ISDN environment, the mandate of INTELSAT in terms of spectrum/orbit resource management, the regulation of multimodal telecommunications under the INTELSAT Agreement, the challenges to INTELSAT represented by ISDN development, the role of the ITU in the regulation of the emerging global ISDN environment, and the future of INTELSAT in light of competition, technological progress, and regulatory trends. This is followed by a conclusion in Chapter IV.
Fatukasi, Omolara O. "Multimodal fusion of biometric experts." Thesis, University of Surrey, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.493242.
Lingam, Sumanth (Sumanth Kumar) 1978. "User interfaces for multimodal systems." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8614.
Full textIncludes bibliographical references (leaves 68-69).
As computer systems become more powerful and complex, efforts to make computer interfaces simpler and more natural become increasingly important. Natural interfaces should be designed to facilitate communication in ways people are already accustomed to using. Such interfaces allow users to concentrate on the tasks they are trying to accomplish, not worry about what they must do to control the interface. Multimodal systems process combined natural input modes (such as speech, pen, touch, manual gestures, gaze, and head and body movements) in a coordinated manner with multimedia system output. The initiative at the W3C is to make the development of interfaces simple and to make it easy to distribute applications across the Internet in an XML development environment. The languages designed at the W3C so far, such as HTML, target a particular platform and are not portable to other platforms. User Interface Markup Language (UIML) has been designed to develop cross-platform interfaces. It will be shown in this thesis that UIML can be used not only to develop multi-platform interfaces but also to create multimodal interfaces. A survey of existing multimodal applications is performed and an efficient and easy-to-develop methodology is proposed. It will also be shown that the proposed methodology satisfies a major set of requirements laid down by the W3C for multimodal dialogs.
Adler, Aaron D. (Aaron Daniel) 1979. "MIDOS : Multimodal Interactive DialOgue System." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/52776.
Interactions between people are typically conversational, multimodal, and symmetric. In conversational interactions, information flows in both directions. In multimodal interactions, people use multiple channels. In symmetric interactions, both participants communicate multimodally, with the integration of and switching between modalities basically effortless. In contrast, consider typical human-computer interaction. It is almost always unidirectional: we're telling the machine what to do; it's almost always unimodal (can you type and use the mouse simultaneously?); and it's symmetric only in the disappointing sense that when you type, it types back at you. There are a variety of things wrong with this picture. Perhaps chief among them is that if communication is unidirectional, it must be complete and unambiguous, exhaustively anticipating every detail and every misinterpretation. In brief, it's exhausting. This thesis examines the benefits of creating multimodal human-computer dialogues that employ sketching and speech, aimed initially at the task of describing early stage designs of simple mechanical devices. The goal of the system is to be a collaborative partner, facilitating design conversations. Two initial user studies provided key insights into multimodal communication: simple questions are powerful, color choices are deliberate, and modalities are closely coordinated. These observations formed the basis for our multimodal interactive dialogue system, or Midos. Midos makes possible a dynamic dialogue, i.e., one in which it asks questions to resolve uncertainties or ambiguities.
The benefits of a dialogue in reducing the cognitive overhead of communication have long been known. We show here that having the system able to ask questions is good, but for an unstructured task like describing a design, knowing what questions to ask is crucial. We describe an architecture that enables the system to accept partial information from the user, then request details it considers relevant, noticeably lowering the cognitive overhead of communicating. The multimodal questions Midos asks are in addition purposefully designed to use the same multimodal integration pattern that people exhibited in our study. Our evaluation of the system showed that Midos successfully engages the user in a dialogue and produces the same conversational features as our initial human-human conversation studies.
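The core interaction pattern (accept a partial multimodal description, then ask only the questions that are still relevant) can be sketched with a simple slot-filling loop; the slots and questions below are invented for illustration and are not Midos' actual dialogue model.

```python
# Toy sketch of a dialogue loop that asks follow-up questions only for missing details.
from typing import Optional

REQUIRED_SLOTS = {
    "bodies": "What parts does the device have?",    # could come from sketch strokes
    "joints": "How are those parts connected?",
    "motion": "Which part moves, and how?",          # could come from speech
}

def next_question(description: dict) -> Optional[str]:
    """Return the most relevant follow-up question, or None if the description is complete."""
    for slot, question in REQUIRED_SLOTS.items():
        if slot not in description:
            return question
    return None

# The user sketches two bodies and says nothing else:
design = {"bodies": ["pendulum", "anchor"]}
while (q := next_question(design)) is not None:
    print("SYSTEM:", q)
    # In a real system the answer would come from speech/sketch understanding;
    # here we simply fill the first missing slot to end the loop.
    missing = next(s for s in REQUIRED_SLOTS if s not in design)
    design[missing] = "user answer"
print("Design description complete:", design)
```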
Zhao, Yang. "Multimodal transport and competing regimes." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612135.
Mukherjee, Sankha Subhra. "Multimodal headpose estimation and applications." Thesis, Heriot-Watt University, 2017. http://hdl.handle.net/10399/3338.
Upadhaya, Taman. "Multimodal radiomics in neuro-oncology." Thesis, Brest, 2017. http://www.theses.fr/2017BRES0036/document.
Glioblastoma multiforme (GBM) is a WHO grade IV tumor that represents 49% of all brain tumours. Despite aggressive treatment modalities (radiotherapy, chemotherapy and surgical resections) the prognosis is poor, as median overall survival (OS) is 12-14 months. GBM's neuroimaging (non-invasive) features can provide opportunities for subclassification, prognostication, and the development of targeted therapies that could advance the clinical practice. This thesis focuses on developing a prognostic model based on multimodal MRI-derived (T1 pre- and post-contrast, T2 and FLAIR) radiomics in GBM. The proposed methodological framework consists in i) registering the available 3D multimodal MR images and segmenting the tumor volume, ii) extracting radiomics, and iii) building and validating a prognostic model using machine learning algorithms applied to multicentric clinical cohorts of patients. The core component of the framework relies on extracting radiomics (including intensity, shape and textural metrics) and building prognostic models using two different machine learning algorithms (Support Vector Machine (SVM) and Random Forest (RF)) that were compared by selecting, ranking and combining optimal features. The potential benefits and respective impact of several MRI pre-processing steps (spatial resampling of the voxels, intensity quantization and normalization, segmentation) for reliable extraction of radiomics were thoroughly assessed. Moreover, the standardization of the radiomics features among methodological teams was pursued by contributing to the "Multicentre Initiative for Standardisation of Radiomics". The accuracy obtained on the independent test dataset using SVM and RF reached up to 83%-77% when combining all available features and up to 87%-77% when using only reliable features previously identified as robust, depending on the number of features and modality. In this thesis, I developed a framework for building a comprehensive prognostic model for patients with GBM from multimodal MRI-derived radiomics and machine learning. The future work will consist in building a unified prognostic model exploiting other contextual data such as genomics. In case of new algorithm development, we look forward to developing ensemble models and deep learning-based techniques.
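A rough sketch of the prognostic-modelling step described above (feature ranking/selection followed by SVM and Random Forest classifiers), run here on synthetic data standing in for multimodal MRI radiomics; cohort sizes, feature counts and parameters are assumptions for illustration only.

```python
# Sketch of radiomics-based prognosis: rank/select features, then compare SVM and RF
# with cross-validation on synthetic "radiomics" data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_patients, n_features = 120, 200          # e.g. intensity/shape/texture features from T1, T1c, T2, FLAIR
X = rng.normal(size=(n_patients, n_features))
y = rng.integers(0, 2, n_patients)         # e.g. short vs. long overall survival
X[y == 1, :10] += 0.8                      # make a few features weakly informative

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    model = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=20),   # rank and keep the top-scoring features
                          clf)
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {acc:.2f}")
```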
Valero-Mas, Jose J. "Towards Interactive Multimodal Music Transcription." Doctoral thesis, Universidad de Alicante, 2017. http://hdl.handle.net/10045/71275.
Rezaei, Masoud. "Multimodal implantable neural interfacing microsystem." Doctoral thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/36437.
Studying brain functionality to help patients suffering from neurological diseases needs a fully implantable brain interface to enable access to neural activities as well as to read and analyze them. In this thesis, ultra-low power implantable brain-machine interfaces (BMIs) that are based on several innovations in circuits and systems are studied for use in neural recording applications. Such a system is intended to collect information on neural activity emitted by several hundreds of neurons, while activating them on demand using actuating means like electro- and/or photo-stimulation. Such a system must provide several recording channels, while consuming very low energy, and have an extremely small size for safety and biocompatibility. Typically, a brain interfacing microsystem includes several building blocks, such as an analog front-end (AFE), an analog-to-digital converter (ADC), digital signal processing modules, and a wireless data transceiver. A BMI extracts neural signals from noise, digitizes them, and transmits them to a base station without interfering with the natural behavior of the subject. This thesis focuses on ultra-low power front-ends to be utilized in a BMI, and presents front-ends with several innovative strategies to consume less power, while enabling high resolution and high quality of data. First, we present a new front-end structure using a current-reuse scheme. This structure is scalable to huge numbers of recording channels, owing to its small implementation silicon area and its low power consumption. The proposed current-reuse AFE, which includes a low-noise amplifier (LNA) and a programmable gain amplifier (PGA), employs a new fully differential current-mirror topology using fewer transistors. This is an improvement in several design parameters, in terms of power consumption and noise, over previous current-reuse amplifier circuit implementations. In the second part of this thesis, we propose a new multi-channel sigma-delta converter that converts several channels independently using a single op-amp and several charge storage capacitors. Compared to conventional techniques, this method applies a new interleaved multiplexing scheme, which does not need any reset phase for the integrator while it switches to a new channel; this enhances its resolution. When the chip area is not a priority, other approaches can be more attractive, and we propose a new power-efficient strategy based on a new in-channel ultra-low power sigma-delta converter designed to further decrease power consumption. This new converter uses a low-voltage architecture based on an innovative feed-forward topology that minimizes the nonlinearity associated with a low-voltage supply.
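For readers unfamiliar with sigma-delta conversion, the following discrete-time simulation of a textbook first-order, single-channel modulator illustrates the oversampling and noise-shaping principle behind the converters discussed above; it does not reproduce the thesis' multi-channel, interleaved or in-channel designs.

```python
# Generic first-order sigma-delta modulator simulated in discrete time:
# an integrator accumulates the input minus the fed-back 1-bit decision,
# and a comparator produces the output bitstream.
import numpy as np

fs, f_in, n = 100_000, 440.0, 4096           # sampling rate, input tone, number of samples
t = np.arange(n) / fs
x = 0.6 * np.sin(2 * np.pi * f_in * t)        # band-limited input, |x| < 1

integrator, bits = 0.0, np.empty(n)
for i, sample in enumerate(x):
    # Feedback of the previous 1-bit decision (sign of the integrator so far).
    integrator += sample - (1.0 if integrator >= 0 else -1.0)
    bits[i] = 1.0 if integrator >= 0 else -1.0   # 1-bit quantizer output

# A simple moving-average (decimation) filter roughly recovers the input from the bitstream.
recovered = np.convolve(bits, np.ones(64) / 64, mode="same")
print("RMS error after decimation:", np.sqrt(np.mean((recovered - x) ** 2)))
```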
Bensaid, Eden. "Multimodal generative models for storytelling." Thesis, Massachusetts Institute of Technology, 2021. https://hdl.handle.net/1721.1/130680.
Storytelling is an open-ended task that entails creative thinking and requires a constant flow of ideas. Generative models have recently gained momentum thanks to their ability to identify complex data's inner structure and learn efficiently from unlabeled data [34]. Natural language generation (NLG) for storytelling is especially challenging because it requires the generated text to follow an overall theme while remaining creative and diverse to engage the reader [26]. Competitive story generation models still suffer from repetition [19], are unable to consistently condition on a theme [51] and struggle to produce a grounded, evolving storyboard [43]. Published story visualization architectures that generate images require a descriptive text to depict the scene to illustrate [30]. Therefore, it seems promising to evaluate an interactive multimodal generative platform that collaborates with writers to face the complex story-generation task. With co-creation, writers contribute their creative thinking, while generative models contribute to their constant workflow. In this work, we introduce a system and a web-based demo, FairyTailor, for machine-in-the-loop visual story co-creation. Users can create a cohesive children's story by weaving generated texts and retrieved images with their input. FairyTailor adds another modality and modifies the text generation process to produce a coherent and creative sequence of text and images. To our knowledge, this is the first dynamic tool for multimodal story generation that allows interactive co-creation of both texts and images. It allows users to give feedback on co-created stories and share their results. We release the demo source code for other researchers' use.
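The machine-in-the-loop co-creation loop can be sketched schematically as alternating user text, generated text and retrieved images; the generator and retriever below are canned stubs, and the class layout is an assumption rather than FairyTailor's actual architecture.

```python
# Schematic co-creation loop: the user and the model take turns extending the story,
# and each generated passage is paired with a retrieved illustration.
from dataclasses import dataclass, field

def generate_text(prompt: str) -> str:
    # Stub standing in for a neural text generator.
    return "... and so the fox set off toward the silver forest."

def retrieve_image(text: str) -> str:
    # Stub standing in for an image-retrieval model; returns a placeholder description.
    return f"<image matching: '{text[:40]}...'>"

@dataclass
class Story:
    blocks: list = field(default_factory=list)   # alternating (role, content) pairs

    def add_user_text(self, text: str) -> None:
        self.blocks.append(("user", text))

    def extend_with_model(self) -> None:
        context = " ".join(text for role, text in self.blocks if role in ("user", "model"))
        generated = generate_text(context)
        self.blocks.append(("model", generated))
        self.blocks.append(("image", retrieve_image(generated)))

story = Story()
story.add_user_text("Once upon a time, a small fox dreamed of the moon.")
story.extend_with_model()          # the user can accept, edit, or regenerate each block
story.add_user_text("But the forest was guarded by a sleepy owl.")
story.extend_with_model()
for role, content in story.blocks:
    print(f"[{role}] {content}")
```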
Klimas, Matthew L. "Argent Sound Recordings: Multimodal Storytelling." VCU Scholars Compass, 2008. http://scholarscompass.vcu.edu/etd/795.
Rönnqvist, Kim. "Multimodal characterisation of sensorimotor oscillations." Thesis, Aston University, 2013. http://publications.aston.ac.uk/19564/.
Dickson-LaPrade, Daniel. "Charles Darwin’s Multimodal Scientific Invention." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/882.
McGee, David R. "Augmenting environments with multimodal interaction /." Full text open access at:, 2003. http://content.ohsu.edu/u?/etd,222.