Academic literature on the topic 'Multimodal behaviour generation'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multimodal behaviour generation.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Multimodal behaviour generation"
Tella, Akin. "Humour generation and multimodal framing of political actor in the 2015 Nigerian presidential election campaign memes." European Journal of Humour Research 6, no. 4 (December 30, 2018): 95. http://dx.doi.org/10.7592/ejhr2018.6.4.tella.
Wehr, Franka, and Martin Luccarelli. "Using Personas in the Design Process. Towards the Development of Green Product Personality for In-Car User Interfaces." Proceedings of the Design Society: International Conference on Engineering Design 1, no. 1 (July 2019): 2911–20. http://dx.doi.org/10.1017/dsi.2019.298.
Dock, Stephanie, Liza Cohen, Jonathan D. Rogers, Jamie Henson, Rachel Weinberger, Jason Schrieber, and Karina Ricks. "Methodology to Gather Multimodal Urban Trip Generation Data." Transportation Research Record: Journal of the Transportation Research Board 2500, no. 1 (January 2015): 48–58. http://dx.doi.org/10.3141/2500-06.
Marchetti, Marco, Enrico Baria, Riccardo Cicchi, and Francesco Saverio Pavone. "Custom Multiphoton/Raman Microscopy Setup for Imaging and Characterization of Biological Samples." Methods and Protocols 2, no. 2 (June 20, 2019): 51. http://dx.doi.org/10.3390/mps2020051.
Braddock, Barbara A., Jane Hilton, and Filip Loncke. "Multimodal Behaviors in Autism Spectrum: Insights From Typical Development Inform AAC." Perspectives of the ASHA Special Interest Groups 2, no. 12 (January 2017): 116–26. http://dx.doi.org/10.1044/persp2.sig12.116.
YAN, GAO-WEI, and ZHAN-JU HAO. "A NOVEL OPTIMIZATION ALGORITHM BASED ON ATMOSPHERE CLOUDS MODEL." International Journal of Computational Intelligence and Applications 12, no. 01 (March 2013): 1350002. http://dx.doi.org/10.1142/s1469026813500028.
KOPP, STEFAN, KIRSTEN BERGMANN, and IPKE WACHSMUTH. "MULTIMODAL COMMUNICATION FROM MULTIMODAL THINKING — TOWARDS AN INTEGRATED MODEL OF SPEECH AND GESTURE PRODUCTION." International Journal of Semantic Computing 02, no. 01 (March 2008): 115–36. http://dx.doi.org/10.1142/s1793351x08000361.
Huang, Hung-Hsuan, Seiya Kimura, Kazuhiro Kuwabara, and Toyoaki Nishida. "Generation of Head Movements of a Robot Using Multimodal Features of Peer Participants in Group Discussion Conversation." Multimodal Technologies and Interaction 4, no. 2 (April 29, 2020): 15. http://dx.doi.org/10.3390/mti4020015.
Sun, Shih-Wei, Ting-Chen Mou, and Pao-Chi Chang. "Deadlift Recognition and Application based on Multiple Modalities using Recurrent Neural Network." Electronic Imaging 2020, no. 17 (January 26, 2020): 2–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.17.3dmp-a17.
BREITFUSS, WERNER, HELMUT PRENDINGER, and MITSURU ISHIZUKA. "AUTOMATIC GENERATION OF GAZE AND GESTURES FOR DIALOGUES BETWEEN EMBODIED CONVERSATIONAL AGENTS." International Journal of Semantic Computing 02, no. 01 (March 2008): 71–90. http://dx.doi.org/10.1142/s1793351x0800035x.
Full textDissertations / Theses on the topic "Multimodal behaviour generation"
Stokes, Michael James. "Multimodal Behaviour Generation Frameworks in Virtual Heritage Applications: A Virtual Museum at Sverresborg." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9014.
This master's thesis proposes that multimodal behaviour generation frameworks are an appropriate way to increase the believability of animated characters in virtual heritage applications. To investigate this proposal, an existing virtual museum guide application developed by the author is extended by integrating the Behavioural Markup Language (BML) and the open-source BML realiser SmartBody. The architectural and implementation decisions involved in this process are catalogued and discussed. The integration of BML and SmartBody results in a dramatic improvement in the quality of character animation in the application, as well as greater flexibility and extensibility, including the ability to create scripted sequences of behaviour for multiple characters in the virtual museum. The successful integration confirms that multimodal behaviour generation frameworks have a place in virtual heritage applications.
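The abstract above describes driving the SmartBody realiser with BML scripts. As a rough illustration only, not taken from the thesis, the following is a minimal BML block coordinating speech, gaze, and a beat gesture; the ids, gaze target, and gesture lexeme are invented, and the namespace follows the BML 1.0 draft convention:

```xml
<!-- Illustrative BML block (ids, target, and lexeme are invented). -->
<bml id="greet" xmlns="http://www.bml-initiative.org/bml/bml-1.0">
  <speech id="s1">
    <text>Welcome to the <sync id="tm1"/> Sverresborg museum.</text>
  </speech>
  <!-- Look at the visitor when the utterance starts. -->
  <gaze id="g1" target="visitor" start="s1:start"/>
  <!-- Beat gesture whose stroke aligns with the marked word. -->
  <gesture id="ges1" lexeme="BEAT" stroke="s1:tm1"/>
</bml>
```

The realiser resolves the cross-references between sync points (here `s1:start` and `s1:tm1`), which is what lets a single script keep speech, gaze, and gesture temporally aligned.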
Faggi, Simone. "An Evaluation Model For Speech-Driven Gesture Synthesis." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22844/.
Full textMihoub, Alaeddine. "Apprentissage statistique de modèles de comportement multimodal pour les agents conversationnels interactifs." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT079/document.
Face-to-face interaction is one of the most fundamental forms of human communication. It is a complex, coupled multimodal dynamic system involving not only speech but also numerous body segments, among which gaze, the orientation of the head, chest, and body, and facial and brachiomanual movements. Understanding and modeling this type of communication is a crucial step in designing interactive agents capable of engaging in credible conversations with human partners. Concretely, a model of multimodal behavior for interactive social agents faces the complex task of generating gestural scores given an analysis of the scene and an incremental estimation of the joint objectives pursued during the conversation. The objective of this thesis is to develop models of multimodal behavior that allow artificial agents to engage in relevant co-verbal communication with a human partner. While the vast majority of work in the field of human-agent interaction (HAI) is scripted using rule-based models, our approach relies on training statistical models from tracks collected during exemplary interactions demonstrated by human trainers. In this context, we introduce "sensorimotor" models of behavior, which perform both the recognition of joint cognitive states and the generation of social signals in an incremental way. In particular, the proposed models of behavior have to estimate the current interaction unit (IU) in which the interlocutors are jointly engaged and to predict the co-verbal behavior of the human trainer given the behavior of the interlocutor(s). The proposed models are all graphical models, namely Hidden Markov Models (HMM) and Dynamic Bayesian Networks (DBN). The models were trained and evaluated, in particular compared with classic classifiers, using datasets collected during two different interactions. Both interactions were carefully designed so as to collect, in a minimum amount of time, a sufficient number of exemplars of mutual attention and multimodal deixis of objects and places. Our contributions are completed by original methods for the interpretation and comparative evaluation of the properties of the proposed models. By comparing the output of the models with the original scores, we show that the HMM, thanks to its sequential modeling properties, outperforms the simple classifiers in terms of performance. The semi-Markovian models (HSMM) further improve the estimation of sensorimotor states thanks to duration modeling. Finally, thanks to a rich structure of dependencies between variables learnt from the data, the DBN demonstrates both the best performance and the most faithful multimodal coordination with the original multimodal events.
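As a minimal sketch of the kind of HMM-based interaction-unit estimation the abstract describes, and not the thesis's actual pipeline, one could fit a Gaussian HMM over recorded multimodal tracks; the feature layout, state count, and stand-in data below are invented, and the example assumes the hmmlearn Python library:

```python
# Sketch: estimating interaction units (IUs) from multimodal feature
# tracks with a Gaussian HMM. Feature layout, state count, and data
# are invented; only the general approach follows the abstract.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

# Stand-in for recorded tracks: one row per frame,
# e.g. [gaze_x, gaze_y, head_yaw, voice_activity] (assumed features).
X = rng.normal(size=(500, 4))
lengths = [250, 250]  # two demonstration sessions concatenated in X

# One hidden state per joint sensorimotor state / interaction unit.
model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
model.fit(X, lengths)

states = model.predict(X)  # most likely IU label for each frame
print(states[:20])
```

The semi-Markov (HSMM) and DBN variants the thesis compares would, respectively, add explicit state-duration models and richer learned dependencies between the observed variables.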
Books on the topic "Multimodal behaviour generation"
Thompson, Rosemary. Nurturing future generations: Promoting resilience in children and adolescents through social, emotional, and cognitive skills. 2nd ed. New York: Routledge, 2005.
Rojc, Matej, and Izidor Mlakar. Expressive Conversational-Behavior Generation Models for Advanced Interaction Within Multimodal User Interfaces. Nova Science Publishers, Incorporated, 2016.
Thompson, Rosemary. Nurturing Future Generations: Promoting Resilience in Children and Adolescents Through Social, Emotional, and Cognitive Skills, Second Edition. Brunner-Routledge, 2006.
Find full textBook chapters on the topic "Multimodal behaviour generation"
Kopp, Stefan, Brigitte Krenn, Stacy Marsella, Andrew N. Marshall, Catherine Pelachaud, Hannes Pirker, Kristinn R. Thórisson, and Hannes Vilhjálmsson. "Towards a Common Framework for Multimodal Generation: The Behavior Markup Language." In Intelligent Virtual Agents, 205–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11821830_17.
Breitfuss, Werner, Helmut Prendinger, and Mitsuru Ishizuka. "Automatic Generation of Non-verbal Behavior for Agents in Virtual Worlds: A System for Supporting Multimodal Conversations of Bots and Avatars." In Online Communities and Social Computing, 153–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02774-1_17.
Santosh, Paramala J. "Medication for children and adolescents: current issues." In New Oxford Textbook of Psychiatry, 1793–99. Oxford University Press, 2012. http://dx.doi.org/10.1093/med/9780199696758.003.0236.
Full textConference papers on the topic "Multimodal behaviour generation"
Kucherenko, Taras. "Data Driven Non-Verbal Behavior Generation for Humanoid Robots." In ICMI '18: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3242969.3264970.
Ivanovic, Boris, Edward Schmerling, Karen Leung, and Marco Pavone. "Generative Modeling of Multimodal Multi-Human Behavior." In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018. http://dx.doi.org/10.1109/iros.2018.8594393.
Dermouche, Soumia, and Catherine Pelachaud. "Generative Model of Agent’s Behaviors in Human-Agent Interaction." In ICMI '19: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3340555.3353758.
Grimaldi, Michele, and Catherine Pelachaud. "Generation of Multimodal Behaviors in the Greta platform." In IVA '21: ACM International Conference on Intelligent Virtual Agents. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3472306.3478368.
Huang, Hung-Hsuan, Masato Fukuda, and Toyoaki Nishida. "An Investigation on the Effectiveness of Multimodal Fusion and Temporal Feature Extraction in Reactive and Spontaneous Behavior Generative RNN Models for Listener Agents." In HAI '19: 7th International Conference on Human-Agent Interaction. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3349537.3351908.
Full textVelázquez Romera, Guillermo, and Andrés Monzón. "PUBLIC TRANSPORT USERS' PREFERENCES AND WILLINGNESS TO PAY FOR A PUBLIC TRANSPORTATION MOBILE APP IN MADRID." In CIT2016. Congreso de Ingeniería del Transporte. Valencia: Universitat Politècnica València, 2016. http://dx.doi.org/10.4995/cit2016.2016.3498.