Academic literature on the topic 'Multimodal behaviour generation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multimodal behaviour generation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Multimodal behaviour generation"

1. Tella, Akin. "Humour generation and multimodal framing of political actor in the 2015 Nigerian presidential election campaign memes." European Journal of Humour Research 6, no. 4 (December 30, 2018): 95. http://dx.doi.org/10.7592/ejhr2018.6.4.tella.

Abstract:
Internet memes significantly constitute an outlet for extensive popular political participation in election contexts. They instantiate humour and represent political candidates so as to affect voters’ behaviour. Few studies on memes in political contexts exist (Shifman et al. 2007; Chen 2013; Tay 2014; Adegoju & Oyebode 2015; Huttington 2016; Dzanic & Berberovic 2017). These studies have not intensively examined the integrative deployment of visual and verbal resources afforded by internet memes to generate humour and to construct specific frames for election candidates in the campaign context of an emerging democracy. Therefore, this study investigates the use of language and visuals for humour generation and for the creation of definite frames for the two major presidential candidates in internet memes created in the course of the 2015 Nigerian presidential election campaigns. The theoretical insights for the study are derived from Attardo’s (1997) set-up-incongruity-resolution theory of humour, Kuypers’ (1997, 2002, 2009, 2010) model of rhetorical framing analysis, Bauman & Briggs’ (1990) concept of entextualisation, Kress & van Leeuwen’s (1996) socio-semiotic model for visual analysis, and Sperber & Wilson’s (1986) relevance theory. The analysis indicates that meme producers generate humour and frame candidates through the entextualisation of verbal and visual texts, explicatures and implicatures. The memes construct seven individuated frames and one collective frame for the two major presidential candidates in the sampled memes using visual and linguistic resources. The study concludes that supporters of election candidates use humorous internet memes for the negative portrayal of opponents and the positive representation of the favoured candidate. These negative other-representations serve the purpose of depreciating the electoral values of the opponents and indirectly increasing the electoral chances of their own candidates.

2. Wehr, Franka, and Martin Luccarelli. "Using Personas in the Design Process. Towards the Development of Green Product Personality for In-Car User Interfaces." Proceedings of the Design Society: International Conference on Engineering Design 1, no. 1 (July 2019): 2911–20. http://dx.doi.org/10.1017/dsi.2019.298.

Abstract:
The desire to combine advanced user-friendly interfaces with a product personality communicating environmental friendliness to customers poses new challenges for car interior designers, as little research has been carried out in this field to date. In this paper, the creation of three personas aimed at defining key German car users with pro-environmental behaviour is presented. After collecting ethnographic data on potential drivers through a literature review, information about generation and Euro car segment led to the definition of three key user groups. The resulting personas were applied to determine the most important interaction points in the car interior. Finally, present design cues of eco-friendly product personality developed in the field of automotive design were explored. Our work presents three strategic directions for the design development of future in-car user interfaces: a) foster multimodal mobility; b) emphasize the interlinkage between economy and sustainable driving; and c) highlight new technological developments. The presented results are meant as an impulse for developers to meet the needs of green customers and drivers when designing user-friendly HMI components.

3. Dock, Stephanie, Liza Cohen, Jonathan D. Rogers, Jamie Henson, Rachel Weinberger, Jason Schrieber, and Karina Ricks. "Methodology to Gather Multimodal Urban Trip Generation Data." Transportation Research Record: Journal of the Transportation Research Board 2500, no. 1 (January 2015): 48–58. http://dx.doi.org/10.3141/2500-06.

Abstract:
Assessments of the impact of new land use development on the transportation network often rely on the ITE Trip Generation Manual informational report. Current ITE rates generally represent travel behavior for separated, single-use developments in low-density suburban areas. However, a more compact urban form, access to transit, and a greater mix of uses are known to generate fewer and shorter vehicle trips—and quite possibly more trips overall, especially in heavily urbanized areas like Washington, D.C. Local and national interest exists for generating data that expand upon existing trip rates (and similar parking generation rates) to include sites in diverse, dense contexts. The lack of adequate data on multimodal urban trip generation led the District Department of Transportation in Washington, D.C., to develop and test a streamlined methodology that meets the needs of practitioners who are evaluating the transportation impacts of new developments in dense, multimodal environments. This methodology focuses on capturing all trips to and from a site and the mode of all travelers, not just personal vehicle trips. The methodology was tested at mixed-use multifamily residential buildings but is intended for future use at a wide range of sites. This paper presents the methodology and rationale for a robust national data collection effort.

4. Marchetti, Marco, Enrico Baria, Riccardo Cicchi, and Francesco Saverio Pavone. "Custom Multiphoton/Raman Microscopy Setup for Imaging and Characterization of Biological Samples." Methods and Protocols 2, no. 2 (June 20, 2019): 51. http://dx.doi.org/10.3390/mps2020051.

Abstract:
Modern optics offers several label-free microscopic and spectroscopic solutions which are useful for both imaging and pathological assessments of biological tissues. The possibility to obtain similar morphological and biochemical information with fast and label-free techniques is highly desirable, but no single optical modality is capable of obtaining all of the information provided by histological and immunohistochemical analyses. Integrated multimodal imaging offers the possibility of combining morphological with functional-chemical information in a label-free modality, complementing simple observation with multiple specific contrast mechanisms. Here, we developed a custom laser-scanning microscopic platform that combines confocal Raman spectroscopy with multimodal non-linear imaging, including Coherent Anti-Stokes Raman Scattering, Second-Harmonic Generation, Two-Photon Excited Fluorescence, and Fluorescence Lifetime Imaging Microscopy. The experimental apparatus is capable of high-resolution morphological imaging of the specimen, while also providing specific information about molecular organization, functional behavior, and molecular fingerprint. The system was successfully tested in the analysis of ex vivo tissues affected by urothelial carcinoma and by atherosclerosis, allowing us to multimodally characterize the investigated specimens. Our results show a proof-of-principle demonstrating the potential of the presented multimodal approach, which could serve in a wide range of biological and biomedical applications.

5. Braddock, Barbara A., Jane Hilton, and Filip Loncke. "Multimodal Behaviors in Autism Spectrum: Insights From Typical Development Inform AAC." Perspectives of the ASHA Special Interest Groups 2, no. 12 (January 2017): 116–26. http://dx.doi.org/10.1044/persp2.sig12.116.

Abstract:
Individuals with Autism Spectrum Disorder (ASD) who have limited natural speech may communicate using unaided and/or aided augmentative and alternative communication (AAC) and may combine potentially communicative behaviors in multimodal ways. Unaided AAC refers to the use of an alternative and augmentative system of communication that does not require aids external to the communicator's body. Aided AAC relies on the use of aids external to the body, such as pictures or a speech-generating device (SGD). Potential communicative acts refer to any behavior that others interpret as meaningful, including informal (unconventional) behaviors, such as body or hand movement, as well as a few words or (conventional) symbols, such as pointing to pictures. Foundational skills, such as communicative gesture and joint attention, can inform multimodal AAC practices in young children with or at risk for ASD. A data tracker of motor hand, oral-motor/vocal/verbal behaviors, and AAC is provided based on past research in children with or at risk for ASD. The data tracker highlights behaviors ranging from informal to conventional communication forms that may be produced in multimodal ways.
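
The "data tracker" the article describes is, at its core, a structured log of potential communicative acts. As a rough illustration only (the field names and category strings below are paraphrased assumptions, not the authors' instrument), such a tracker could be sketched in Python as:

```python
# Illustrative sketch of a behavior data tracker: each observed act is logged
# with its modality and whether the form is informal (unconventional) or
# conventional. Categories are paraphrased from the abstract.
from dataclasses import dataclass, field

@dataclass
class BehaviorLog:
    entries: list = field(default_factory=list)

    def record(self, modality, form, description):
        # modality: "motor_hand" | "oral_motor_vocal_verbal" | "aided_AAC"
        # form: "informal" | "conventional"
        self.entries.append({"modality": modality, "form": form,
                             "description": description})

    def summary(self):
        # Count acts per (modality, form) pair.
        counts = {}
        for e in self.entries:
            key = (e["modality"], e["form"])
            counts[key] = counts.get(key, 0) + 1
        return counts

log = BehaviorLog()
log.record("motor_hand", "informal", "reaches toward object")
log.record("aided_AAC", "conventional", "points to picture symbol")
print(log.summary())
```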

6. Yan, Gao-Wei, and Zhan-Ju Hao. "A Novel Optimization Algorithm Based on Atmosphere Clouds Model." International Journal of Computational Intelligence and Applications 12, no. 01 (March 2013): 1350002. http://dx.doi.org/10.1142/s1469026813500028.

Abstract:
This paper introduces a novel numerical stochastic optimization algorithm inspired by the behavior of clouds in the natural world, designated the atmosphere clouds model optimization (ACMO) algorithm. It simulates, in a simple way, the generation, movement, and spreading behaviors of clouds. The ACMO algorithm has been tested on a set of benchmark functions in comparison with two other evolutionary algorithms: the particle swarm optimization (PSO) algorithm and the genetic algorithm (GA). The results demonstrate that the proposed algorithm has certain advantages in solving multimodal functions, while the PSO algorithm gives better results in terms of convergence accuracy. In conclusion, the ACMO algorithm is an effective method for solving optimization problems.
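
The abstract does not reproduce ACMO's update rules, but the evaluation protocol it describes (stochastic, population-based optimizers compared on multimodal benchmark functions) can be sketched as below. Rastrigin is a standard multimodal benchmark; the drift-and-spread search loop is an illustrative stand-in for ACMO, not the authors' algorithm:

```python
# Sketch of the benchmark protocol described in the abstract: a stochastic
# optimizer is run on a multimodal test function and its best value reported.
import numpy as np

def rastrigin(x):
    # Classic multimodal benchmark: global minimum 0 at x = 0.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def cloud_like_search(f, dim=10, pop=30, iters=500, bounds=(-5.12, 5.12), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pts = rng.uniform(lo, hi, size=(pop, dim))
    best = min(pts, key=f)
    for _ in range(iters):
        # Spread new candidates around the incumbent, loosely analogous to a
        # cloud drifting and diffusing over promising regions of the space.
        cand = np.clip(best + rng.normal(scale=0.3, size=(pop, dim)), lo, hi)
        best = min(list(cand) + [best], key=f)
    return best, f(best)

best_x, best_f = cloud_like_search(rastrigin)
print(f"best value found: {best_f:.4f}")
```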

7. Kopp, Stefan, Kirsten Bergmann, and Ipke Wachsmuth. "Multimodal Communication from Multimodal Thinking — Towards an Integrated Model of Speech and Gesture Production." International Journal of Semantic Computing 02, no. 01 (March 2008): 115–36. http://dx.doi.org/10.1142/s1793351x08000361.

Abstract:
A computational model for the automatic production of combined speech and iconic gesture is presented. The generation of multimodal behavior is grounded in processes of multimodal thinking, in which a propositional representation interacts and interfaces with an imagistic representation of visuo-spatial imagery. An integrated architecture for this is described, in which the planning of content and the planning of form across both modalities proceed in an interactive manner. Results from an empirical study are reported that inform the on-the-spot formation of gestures.

8. Huang, Hung-Hsuan, Seiya Kimura, Kazuhiro Kuwabara, and Toyoaki Nishida. "Generation of Head Movements of a Robot Using Multimodal Features of Peer Participants in Group Discussion Conversation." Multimodal Technologies and Interaction 4, no. 2 (April 29, 2020): 15. http://dx.doi.org/10.3390/mti4020015.

Abstract:
In recent years, companies have been seeking communication skills from their employees. A growing number of companies have adopted group discussions during their recruitment process to evaluate the applicants’ communication skills. However, the opportunity to improve communication skills in group discussions is limited because of the lack of partners. To address this issue as a long-term goal, the aim of this study is to build an autonomous robot that can participate in group discussions, so that its users can repeatedly practice with it. This robot, therefore, has to perform humanlike behaviors with which the users can interact. In this study, the focus was on the generation of two of these behaviors regarding the head of the robot. One is directing its attention to either of the following targets: the other participants or the materials placed on the table. The second is determining the timings of the robot’s nods. These generation models are considered in three situations: when the robot is speaking, when the robot is listening, and when no participant including the robot is speaking. The research question is whether these behaviors can be generated end-to-end from, and only from, the features of peer participants. This work is based on a data corpus containing 2.5 h of the discussion sessions of 10 four-person groups. Multimodal features, including the attention of other participants, voice prosody, head movements, and speech turns extracted from the corpus, were used to train support vector machine models for the generation of the two behaviors. The attentional-focus generation models achieved F-measures between 0.4 and 0.6. The nodding model had an accuracy of approximately 0.65. Both experiments were conducted in the setting of leave-one-subject-out cross validation. To measure the perceived naturalness of the generated behaviors, a subject experiment was conducted in which the proposed data-driven models were compared with two baselines: (1) a simple statistical model based on behavior frequency and (2) raw experimental data. The evaluation was based on the observation of video clips, in which one of the subjects was replaced by a robot performing head movements in the above-mentioned three conditions. The experimental results showed no significant difference from the original human behaviors in the data corpus, supporting the effectiveness of the proposed models.
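
A minimal sketch of the evaluation protocol named in the abstract: a support vector machine trained on multimodal features and scored with leave-one-subject-out cross-validation, here via scikit-learn's LeaveOneGroupOut. The random features and labels are placeholders for the corpus features (peer attention, prosody, head movements, speech turns):

```python
# Leave-one-subject-out evaluation of an SVM behavior-generation model.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_frames, n_features, n_subjects = 400, 12, 10
X = rng.normal(size=(n_frames, n_features))          # multimodal feature vectors
y = rng.integers(0, 2, size=n_frames)                # e.g., nod vs. no nod
groups = rng.integers(0, n_subjects, size=n_frames)  # subject ID of each frame

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, groups=groups,
                         cv=LeaveOneGroupOut(), scoring="f1")
print(f"per-subject F1: mean {scores.mean():.2f}")
```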

9. Sun, Shih-Wei, Ting-Chen Mou, and Pao-Chi Chang. "Deadlift Recognition and Application based on Multiple Modalities using Recurrent Neural Network." Electronic Imaging 2020, no. 17 (January 26, 2020): 2–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.17.3dmp-a17.

Abstract:
To improve workout efficiency and to provide body-movement suggestions to users in a “smart gym” environment, we propose to use a depth camera to capture a user’s body parts and to mount multiple inertial sensors on the user’s body parts, generating deadlift behavior models with a recurrent neural network structure. The contribution of this paper is threefold: 1) the multimodal sensing signals obtained from multiple devices are fused to generate the deadlift behavior classifiers, 2) the recurrent neural network structure can analyze the information from the synchronized skeletal and inertial sensing data, and 3) a Vaplab dataset is generated for evaluating the deadlift behavior recognition capability of the proposed method.
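
A sketch of the fusion idea described in the abstract, assuming PyTorch and made-up dimensions: synchronized skeletal and inertial features are concatenated frame by frame and fed to a recurrent network that classifies the deadlift behavior of a clip. This illustrates the general technique, not the authors' exact architecture:

```python
# Frame-level fusion of skeletal and inertial sequences into an LSTM classifier.
import torch
import torch.nn as nn

class DeadliftRNN(nn.Module):
    def __init__(self, skel_dim=75, imu_dim=18, hidden=128, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(skel_dim + imu_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, skel, imu):
        x = torch.cat([skel, imu], dim=-1)   # fuse modalities per frame
        _, (h, _) = self.lstm(x)             # h: (num_layers, batch, hidden)
        return self.head(h[-1])              # class logits per sequence

model = DeadliftRNN()
skel = torch.randn(4, 60, 75)   # 4 clips, 60 frames, 25 joints x 3 coords
imu = torch.randn(4, 60, 18)    # e.g., 3 IMUs x (accel + gyro) x 3 axes
print(model(skel, imu).shape)   # -> torch.Size([4, 5])
```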

10. Breitfuss, Werner, Helmut Prendinger, and Mitsuru Ishizuka. "Automatic Generation of Gaze and Gestures for Dialogues Between Embodied Conversational Agents." International Journal of Semantic Computing 02, no. 01 (March 2008): 71–90. http://dx.doi.org/10.1142/s1793351x0800035x.

Abstract:
In this paper we introduce a system that automatically adds different types of non-verbal behavior to a given dialogue script between two virtual embodied agents. It allows us to transform a dialogue in text format into an agent behavior script enriched by eye gaze and conversational gesture behavior. The agents' gaze behavior is informed by theories of human face-to-face gaze behavior. Gestures are generated based on the analysis of linguistic and contextual information of the input text. The resulting annotated dialogue script is then transformed into the Multimodal Presentation Markup Language for 3D agents (MPML3D), which controls the multi-modal behavior of animated life-like agents, including facial and body animation and synthetic speech. Using our system makes it very easy to add appropriate non-verbal behavior to a given dialogue text, a task that would otherwise be very cumbersome and time consuming. In order to test the quality of gaze generation, we conducted an empirical study. The results showed that by using our system, the naturalness of the agents' behavior was not increased when compared to randomly selected gaze behavior, but the quality of the communication between the two agents was perceived as significantly enhanced.
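
A toy sketch of the kind of pipeline the abstract describes: dialogue turns are enriched with gaze and gesture annotations derived from simple linguistic cues, then serialized to a markup script. The single deictic-word rule and the element names below are simplified placeholders, not the paper's linguistic analysis or the actual MPML3D schema:

```python
# Rule-based annotation of a dialogue script with non-verbal behavior.
from xml.sax.saxutils import escape

DEICTIC_WORDS = {"this", "that", "here", "there"}

def annotate(speaker, listener, text):
    acts = [f'<gaze from="{speaker}" to="{listener}"/>']  # face the addressee
    if DEICTIC_WORDS & set(text.lower().split()):
        # A deictic word in the utterance triggers a pointing gesture.
        acts.append(f'<gesture agent="{speaker}" type="deictic"/>')
    acts.append(f'<speak agent="{speaker}">{escape(text)}</speak>')
    return "\n".join(acts)

dialogue = [("Anna", "Ben", "Look at this exhibit."),
            ("Ben", "Anna", "It is beautiful.")]
print("\n".join(annotate(s, l, t) for s, l, t in dialogue))
```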

Dissertations / Theses on the topic "Multimodal behaviour generation"

1. Stokes, Michael James. "Multimodal Behaviour Generation Frameworks in Virtual Heritage Applications: A Virtual Museum at Sverresborg." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9014.

Abstract:
This master's thesis proposes that multimodal behaviour generation frameworks are an appropriate way to increase the believability of animated characters in virtual heritage applications. To investigate this proposal, an existing virtual museum guide application developed by the author is extended by integrating the Behavior Markup Language (BML) and the open-source BML realiser SmartBody. The architectural and implementation decisions involved in this process are catalogued and discussed. The integration of BML and SmartBody results in a dramatic improvement in the quality of character animation in the application, as well as greater flexibility and extensibility, including the ability to create scripted sequences of behaviour for multiple characters in the virtual museum. The successful integration confirms that multimodal behaviour generation frameworks have a place in virtual heritage applications.
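
For readers unfamiliar with BML, the integration pattern is roughly: the application composes a <bml> block of coordinated behaviors and hands it to a realiser such as SmartBody. A minimal sketch of composing such a block in Python follows; the element and attribute names are modeled on common BML examples but should be treated as illustrative rather than a validated BML document:

```python
# Composing a BML-style behavior block for a realiser.
import xml.etree.ElementTree as ET

bml = ET.Element("bml", id="guide-intro", character="MuseumGuide")
ET.SubElement(bml, "gaze", id="g1", target="Visitor")        # look at visitor
ET.SubElement(bml, "gesture", id="g2", lexeme="BEAT",
              start="g1:end")                                # gesture after gaze
speech = ET.SubElement(bml, "speech", id="s1")
ET.SubElement(speech, "text").text = "Welcome to Sverresborg."
print(ET.tostring(bml, encoding="unicode"))                  # send to realiser
```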

2. Faggi, Simone. "An Evaluation Model For Speech-Driven Gesture Synthesis." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22844/.

Abstract:
The research and development of embodied agents with advanced relational capabilities is constantly evolving. In recent years, the development of behavioural signal generation models to be integrated in social robots and virtual characters, is moving from rule-based to data-driven approaches, requiring appropriate and reliable evaluation techniques. This work proposes a novel machine learning approach for the evaluation of speech-to-gestures models that is independent from the audio source. This approach enables the measurement of the quality of gestures produced by these models and provides a benchmark for their evaluation. Results show that the proposed approach is consistent with evaluations made through user studies and, furthermore, that its use allows for a reliable comparison of speech-to-gestures state-of-the-art models.
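
One way to read the proposed approach: a model trained on natural motion can score new gesture sequences without reference to the audio. The sketch below illustrates that general idea only, with random placeholder features and an ordinary scikit-learn classifier standing in for the thesis's evaluation model:

```python
# Learned, audio-independent scoring of generated gesture sequences: train a
# discriminator on natural vs. generated motion, then score a new model's
# output by how often it is judged "natural".
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
natural = rng.normal(0.0, 1.0, size=(300, 20))    # motion-feature vectors
generated = rng.normal(0.3, 1.2, size=(300, 20))  # output of some baseline model

X = np.vstack([natural, generated])
y = np.array([1] * 300 + [0] * 300)               # 1 = natural
clf = GradientBoostingClassifier().fit(X, y)

new_batch = rng.normal(0.1, 1.1, size=(50, 20))   # gestures from a new model
score = clf.predict_proba(new_batch)[:, 1].mean() # higher = more natural-looking
print(f"naturalness score: {score:.2f}")
```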

3. Mihoub, Alaeddine. "Apprentissage statistique de modèles de comportement multimodal pour les agents conversationnels interactifs." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT079/document.

Abstract:
Face-to-face interaction is one of the most fundamental forms of human communication. It is a highly complex, coupled, and dynamic multimodal system, involving not only speech but numerous segments of the body, including gaze, the orientation of the head, chest, and body, and facial and brachiomanual gestures. Understanding and modeling this type of communication is a crucial step in designing interactive agents capable of engaging in credible conversations with human partners. Concretely, a multimodal behavior model for interactive social agents faces the complex task of generating multimodal behavior given an analysis of the scene and an incremental estimation of the joint goals pursued during the conversation. The objective of this thesis is to develop multimodal behavior models that allow artificial agents to conduct relevant co-verbal communication with a human partner. While the vast majority of work in the field of human-agent interaction relies on rule-based models, our approach is based on the statistical modeling of social interactions from traces collected during exemplary interactions demonstrated by human tutors. In this context, we introduce so-called "sensorimotor" behavior models, which perform both the recognition of joint cognitive states and the generation of social signals in an incremental way. In particular, the proposed behavior models aim to estimate the interaction unit (IU) in which the interlocutors are jointly engaged and to generate the co-verbal behavior of the human tutor given the observed behavior of the interlocutor(s). The proposed models are mainly probabilistic graphical models based on hidden Markov models (HMMs) and dynamic Bayesian networks (DBNs). The models were trained and evaluated, in particular compared with classic classifiers, on datasets collected during two different face-to-face interactions. Both interactions were carefully designed so as to collect, in a minimum of time, a sufficient number of exemplars of mutual attention management and multimodal deixis of objects and places. Our contributions are complemented by original methods for interpreting and evaluating the properties of the proposed models. Comparing all models against the true interaction traces shows that the HMM, thanks to its sequential modeling properties, outperforms simple classifiers in terms of performance. Hidden semi-Markov models (HSMMs) were also tested and achieved a better sensorimotor loop thanks to their modeling of state durations. Finally, thanks to a rich dependency structure learned from the data, the DBN shows the most convincing performance and also demonstrates the multimodal coordination most faithful to the original multimodal events.
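
A minimal sketch of the recognition half of such a sensorimotor model, using the hmmlearn package: a hidden Markov model is fit to multimodal feature sequences, and its decoded latent state plays the role of the interaction unit (IU). The features and dimensions are synthetic placeholders, not the thesis's data:

```python
# Fitting an HMM to multimodal interaction traces and decoding latent IUs.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Two recorded interactions, each a sequence of 4-D multimodal features
# (e.g., gaze direction, speech activity, head orientation, gesture energy).
seq1 = rng.normal(size=(200, 4))
seq2 = rng.normal(size=(150, 4))
X = np.vstack([seq1, seq2])
lengths = [len(seq1), len(seq2)]

model = GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
model.fit(X, lengths)         # unsupervised fit of 5 latent interaction units
states = model.predict(seq1)  # frame-by-frame decoding of the latent state
print(states[:20])
```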

Books on the topic "Multimodal behaviour generation"

1. Nurturing future generations: Promoting resilience in children and adolescents through social, emotional, and cognitive skills. 2nd ed. New York: Routledge, 2005.

2. Rojc, Matej, and Izidor Mlakar. Expressive Conversational-Behavior Generation Models for Advanced Interaction Within Multimodal User Interfaces. Nova Science Publishers, Incorporated, 2016.

3. Thompson, Rosemary. Nurturing Future Generations: Promoting Resilience in Children and Adolescents Through Social, Emotional, and Cognitive Skills, Second Edition. Brunner-Routledge, 2006.

Book chapters on the topic "Multimodal behaviour generation"

1. Kopp, Stefan, Brigitte Krenn, Stacy Marsella, Andrew N. Marshall, Catherine Pelachaud, Hannes Pirker, Kristinn R. Thórisson, and Hannes Vilhjálmsson. "Towards a Common Framework for Multimodal Generation: The Behavior Markup Language." In Intelligent Virtual Agents, 205–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11821830_17.

2. Breitfuss, Werner, Helmut Prendinger, and Mitsuru Ishizuka. "Automatic Generation of Non-verbal Behavior for Agents in Virtual Worlds: A System for Supporting Multimodal Conversations of Bots and Avatars." In Online Communities and Social Computing, 153–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02774-1_17.

3. Santosh, Paramala J. "Medication for children and adolescents: current issues." In New Oxford Textbook of Psychiatry, 1793–99. Oxford University Press, 2012. http://dx.doi.org/10.1093/med/9780199696758.003.0236.

Abstract:
Problems of mental health and behaviour in children are multidisciplinary in nature, and optimal treatment is often multimodal. This article focuses on aspects of psychopharmacology that have special relevance in children and adolescents, especially the recent controversies. In general, this article provides information about classes of medication and not detailed information about specific medicines. Treatment recommendations for the specific disorders are dealt with in the appropriate chapters. The use of psychotropic medication in children is higher in the United States than in many other countries, and polypharmacy is common. About 1 per cent of overall medical consultation visits by children and adolescents in 2003–2004 in the US resulted in a second-generation antipsychotic (SGA) prescription. The majority of the visits involving antipsychotics were by Caucasian boys aged over nine years, visiting specialists, without private insurance, with a diagnosis of bipolar disorder, psychosis, depression, disruptive disorder, or anxiety. Pre-school (2 to 4 year olds) psychotropic medication use increased across the US between 1995 and 2001 for stimulants, antipsychotics, and antidepressants, while the use of anxiolytics, sedatives, hypnotics, and anticonvulsants remained stable across these years, suggesting non-psychiatric medical usage. Ethnicity may influence differential prescription rates; for example, compared to Caucasian youths, African-American youths are less likely to be prescribed psychotropic medications, especially methylphenidate.

Conference papers on the topic "Multimodal behaviour generation"

1. Kucherenko, Taras. "Data Driven Non-Verbal Behavior Generation for Humanoid Robots." In ICMI '18: International Conference on Multimodal Interaction. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3242969.3264970.

2. Ivanovic, Boris, Edward Schmerling, Karen Leung, and Marco Pavone. "Generative Modeling of Multimodal Multi-Human Behavior." In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018. http://dx.doi.org/10.1109/iros.2018.8594393.

3. Dermouche, Soumia, and Catherine Pelachaud. "Generative Model of Agent’s Behaviors in Human-Agent Interaction." In ICMI '19: International Conference on Multimodal Interaction. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3340555.3353758.

4. Grimaldi, Michele, and Catherine Pelachaud. "Generation of Multimodal Behaviors in the Greta platform." In IVA '21: ACM International Conference on Intelligent Virtual Agents. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3472306.3478368.

5. Huang, Hung-Hsuan, Masato Fukuda, and Toyoaki Nishida. "An Investigation on the Effectiveness of Multimodal Fusion and Temporal Feature Extraction in Reactive and Spontaneous Behavior Generative RNN Models for Listener Agents." In HAI '19: 7th International Conference on Human-Agent Interaction. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3349537.3351908.

6. Velázquez Romera, Guillermo, and Andrés Monzón. "Public Transport Users' Preferences and Willingness to Pay for a Public Transportation Mobile App in Madrid." In CIT2016. Congreso de Ingeniería del Transporte. Valencia: Universitat Politècnica València, 2016. http://dx.doi.org/10.4995/cit2016.2016.3498.

Abstract:
Today, smart cities are presented as a solution for achieving more sustainable urban development while increasing the quality of life of their citizens through the use of new technologies (Neirotti, 2013). Smart Mobility is based on innovative and sustainable ways of providing transport for the inhabitants of cities, enhancing the use of fuels or vehicle propulsion systems that respect the environment, supported by technological tools and proactive behaviour by citizens (Neirotti, 2013). In urban mobility, the purpose of smart cities is to develop flexible systems for real-time information to support decision-making in the use and management of different transport modes, generating a positive impact, saving users time, and improving efficiency and quality of service. In this context, several types of solution are being introduced in the world’s cities. They improve the abovementioned factors by acting on the demand side, resulting in more efficient journeys for individual travellers and improved satisfaction with the service (Skelley et al., 2013), with a lower level of investment than infrastructure deployment or an increase in the level of service. One of the most widespread solutions is the use of mobile apps to provide the user with contextualized static and real-time transport information. The study is based on a survey carried out among users of public transport in Madrid under the European OPTICITIES project of the 7th Research Framework Programme. The survey contained items on their transportation habits, their level of skills and technological capabilities, and their main expectations about the possibility of using a new application, the main desired capabilities, and their willingness to pay for use. The study results show the preferences of public transport users regarding static and real-time search capabilities and in-app services for a multimodal real-time application, and their willingness to pay for this service, analyzed across different user segments. The results also establish the basis for an estimate of the usefulness of these applications for users of public transport.