Academic literature on the topic 'Assistance multimodale'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Assistance multimodale.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Assistance multimodale"
Weiss, Manfred, and Manfred Laube. "Assistenz und Delegation mit mobilen Softwareagenten - Das Leitprojekt MAP (Assistance and Delegation using Software Agents - Lead Project MAP)." i-com 2, no. 2/2003 (February 1, 2003): 4–12. http://dx.doi.org/10.1524/icom.2.2.4.19593.
Tiferes, Judith, Ann M. Bisantz, Matthew L. Bolton, D. Jeffery Higginbotham, Ryan P. O’Hara, Nicole K. Wawrzyniak, Justen D. Kozlowski, Basel Ahmad, Ahmed A. Hussein, and Khurshid A. Guru. "Multimodal team interactions in Robot-Assisted Surgery." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 60, no. 1 (September 2016): 518–22. http://dx.doi.org/10.1177/1541931213601118.
Rocha, Ana Patrícia, Maksym Ketsmur, Nuno Almeida, and António Teixeira. "An Accessible Smart Home Based on Integrated Multimodal Interaction." Sensors 21, no. 16 (August 13, 2021): 5464. http://dx.doi.org/10.3390/s21165464.
Schultz, Carl, and Mehul Bhatt. "Multimodal spatial data access for architecture design assistance." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 26, no. 2 (April 20, 2012): 177–203. http://dx.doi.org/10.1017/s0890060412000066.
Herfet, Thorsten, Thomas Kirste, and Michael Schnaider. "EMBASSI multimodal assistance for infotainment and service infrastructures." Computers & Graphics 25, no. 4 (August 2001): 581–92. http://dx.doi.org/10.1016/s0097-8493(01)00086-3.
Eisenmann, U., R. Metzner, C. R. Wirtz, and H. Dickhaus. "Integrating multimodal information for intraoperative assistance in neurosurgery." Current Directions in Biomedical Engineering 1, no. 1 (September 1, 2015): 188–91. http://dx.doi.org/10.1515/cdbme-2015-0047.
Gao, Yixing, Hyung Jin Chang, and Yiannis Demiris. "User Modelling Using Multimodal Information for Personalised Dressing Assistance." IEEE Access 8 (2020): 45700–45714. http://dx.doi.org/10.1109/access.2020.2978207.
Kirchner, Elsa Andrea, Marc Tabie, and Anett Seeland. "Multimodal Movement Prediction - Towards an Individual Assistance of Patients." PLoS ONE 9, no. 1 (January 8, 2014): e85060. http://dx.doi.org/10.1371/journal.pone.0085060.
Djaid, Nadia Touileb, Nadia Saadia, and Amar Ramdane-Cherif. "Multimodal Fusion Engine for an Intelligent Assistance Robot Using Ontology." Procedia Computer Science 52 (2015): 129–36. http://dx.doi.org/10.1016/j.procs.2015.05.041.
Baeza, Rianna R., and Anil R. Kumar. "Perceived Usefulness of Multimodal Voice Assistant Technology." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (November 2019): 1560–64. http://dx.doi.org/10.1177/1071181319631031.
Full textDissertations / Theses on the topic "Assistance multimodale"
Kieffer, Suzanne. "Assistance multimodale à l'exploration de visualisations 2D interactives." PhD thesis, Université Henri Poincaré - Nancy I, 2005. http://tel.archives-ouvertes.fr/tel-00011312.
We adopted an experimental approach to determine the influence of spoken spatial cues on the speed and accuracy of target detection, and to assess the subjective satisfaction of potential users of this form of assistance to visual exploration.
The studies carried out showed, on the one hand, that multimodal presentations facilitate and improve users' visual search performance, in terms of target selection time and accuracy. They showed, on the other hand, that in the absence of audio messages, the strategies used to visually explore the displays depend on the spatial organization of the information within the graphical display.
Ullah, Sehat. "Multi-modal assistance for collaborative 3D interaction : study and analysis of performance in collaborative work." Thesis, Evry-Val d'Essonne, 2011. http://www.theses.fr/2011EVRY0003.
The recent advances in high-quality computer graphics and the capability of inexpensive computers to render realistic 3D scenes have made it possible to develop virtual environments where two or more users can co-exist and work collaboratively to achieve a common goal. Such environments are called Collaborative Virtual Environments (CVEs). The potential application domains of CVEs are many, such as military, medical, assembly, computer-aided design, teleoperation, education, games and social networks. One of the problems related to CVEs is the user's low level of awareness of the status, actions and intentions of his/her collaborator, which not only reduces the user's performance but also leads to unsatisfactory results. In addition, collaborative tasks carried out without proper computer-generated assistance are very difficult to perform and are more prone to errors. The basic theme of this thesis is to provide assistance for collaborative 3D interaction in CVEs. In this context, we study and develop the concept of multimodal (audio, visual and haptic) assistance of a user or group of users. Our study focuses on how we can assist users to collaboratively interact with the entities of CVEs. We propose to study and analyze the contribution of multimodal assistance in collaborative (synchronous and asynchronous) interaction with objects in the virtual environment. Indeed, we propose and implement various multimodal virtual guides. These guides are evaluated through a series of experiments in which a selection/manipulation task is carried out by users in both synchronous and asynchronous modes. The experiments were carried out in the LISA (Laboratoire d'Ingénierie et Systèmes Automatisés) lab at the University of Angers and the IBISC (Informatique, Biologie Intégrative et Systèmes complexes) lab at the University of Evry. In these experiments users were asked to perform a task under various conditions (with and without guides). Analysis was done on the basis of task completion time, errors and users' learning. Questionnaires were used for subjective evaluations. The findings of this research work can contribute to the development of collaborative systems for teleoperation, assembly tasks, e-learning, rehabilitation, computer-aided design and entertainment.
Ullah, Sehat. "Assistance multimodale pour l'interaction 3D collaborative : étude et analyse des performances pour le travail collaboratif." PhD thesis, Université d'Evry-Val d'Essonne, 2011. http://tel.archives-ouvertes.fr/tel-00562081.
Chapelier, Laurent. "Dialogue d'assistance dans une interface homme-machine multimodale." Nancy 1, 1996. http://www.theses.fr/1996NAN10117.
Man-machine interfaces for a wide range of users still remain difficult to use in spite of their attractive aspect and their many functionalities. Thus, a novice user has to learn how to use the interface. In this thesis, we present our results on the assistance dialogue with a user interacting with a multimodal intelligent interface. We conducted an experimental study using the Wizard of Oz paradigm to observe the behaviour of users in front of such a system. The psycho-social study of the corpus made it possible to identify and formalise dialogue schemes corresponding to different types of help. From these results we propose a model of a multi-agent architecture for a multimodal intelligent interface. We present a prototype of the assistance dialogue system which shows the use of the dialogue schemes in a multi-agent system.
Mollaret, Christophe. "Perception multimodale de l'homme pour l'interaction Homme-Robot." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30225/document.
This work is about human multimodal perception for human-robot interaction (HRI). It was financed by the RIDDLE ANR Contint project (2012-2015), which focuses on the development of an assisting robot for the elderly who experience small losses of memory. The project aims at coping with a growing need in care for elderly people living alone: in France the population is aging, and around 33% of the estimated population will be more than 60 years old by 2060. The goal is therefore to program an interactive robot (with perceptive capabilities) able to learn the relationship between the user and a set of selected objects in their shared environment. In this field, many problems remain in terms of: (i) shared human-environment perception, (ii) integration on a robotic platform, and (iii) the validation of scenarios involving usual objects, the robot and the elderly person. The aim is for the robot to answer the user's questions about ten objects (defined by a preliminary study) with appropriate actions. For example, the robot will indicate the position of an object by moving towards it, grasping it, or giving oral indications if it is not reachable. The RIDDLE project was formed by a consortium including Magellium, the gerontology center of Toulouse, the MINC team of the LAAS-CNRS laboratory and Aldebaran Robotics. The final demonstrations will be carried out on the Roméo platform. This thesis was co-directed by Frédéric Lerasle and Isabelle Ferrané, respectively from the RAP team of LAAS-CNRS and the SAMoVA team of IRIT. During the project, in partnership with the gerontology center, a robot scenario was defined in three major steps. During the first one, the "Monitoring" step, the robot is far from the user and waits for an intention of interaction. The "Proximal interaction" step is reached when the robot interacts with the user from a close position. Finally, the "Transition" step allows the robot to move between the two previous ones. This scenario was built in order to create a proactive yet non-intrusive robot. The non-intrusiveness is materialized by the "Monitoring" step. The proactivity is achieved by the creation of a detector of user intention, allowing the robot to understand non-verbal information about the user's will to communicate with it. The scientific contributions of this thesis cover various aspects: the robotic scenario, the detector of user intention, a filtering technique based on a particle swarm optimization algorithm, and finally a Bayesian scheme built to improve the word error rate given distance information. The thesis is divided into four chapters: the first is about the detector of user intention, the second covers the filtering technique, the third focuses on the proximal interaction and the techniques employed, and the last deals with the robotic implementations.
Courtial, Nicolas. "Fusion d’images multimodales pour l’assistance de procédures d’électrophysiologie cardiaque." Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S015.
Cardiac electrophysiology procedures have proved to be efficient in suppressing arrhythmia and heart failure symptoms. Their success rate depends on knowledge of the patient's heart condition, including its electrical and mechanical function and tissue quality, which is a major clinical concern for these therapies. This work focuses on the development of patient-specific multimodal models to plan and assist radio-frequency ablation (RFA) and cardiac resynchronization therapy (CRT). First, segmentation, registration and fusion methods were developed to create these models, allowing these interventional procedures to be planned. For each therapy, specific means of integration within the surgical room were established for assistance purposes. Finally, a new multimodal descriptor was synthesized during a post-procedure analysis, aiming to predict the response to CRT depending on the left ventricular stimulation site. These studies were applied and validated on patients who were candidates for CRT and RFA. They showed the feasibility and interest of integrating such multimodal models into the clinical workflow to assist these procedures.
Morin, Philippe. "Partner, un système de dialogue homme-machine multimodal pour applications finalisées à forte composante orale." Nancy 1, 1994. http://www.theses.fr/1994NAN10423.
Lutkewitte, Claire E. "Multimodality is-- : a survey investigating how graduate teaching assistants and instructors teach multimodal assignments in first-year composition courses." CardinalScholar 1.0, 2010. http://liblink.bsu.edu/uhtbin/catkey/1560841.
Department of English
Coue, Christophe. "Modèle bayésien pour l'analyse multimodale d'environnements dynamiques et encombrés : Application à l'assistance à la conduite en milieu urbain." PhD thesis, 2003. http://tel.archives-ouvertes.fr/tel-00005527.
Zhang, Ting. "Multimodal Digital Image Exploration with Synchronous Intelligent Assistance for the Blind." Thesis, 2020.
Books on the topic "Assistance multimodale"
Directory of services for technical assistance in shipping, ports, and multimodal transport to developing countries. New York: United Nations, 1989.
Book chapters on the topic "Assistance multimodale"
Hortal, Enrique. "Multimodal Assistance System." In Brain-Machine Interfaces for Assistance and Rehabilitation of People with Reduced Mobility, 23–34. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-95705-0_2.
Cifuentes, Carlos A., and Anselmo Frizera. "Multimodal Interface for Human Mobility Assistance." In Springer Tracts in Advanced Robotics, 81–100. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-34063-0_5.
Ferrer, Gonzalo, Anaís Garrell, Michael Villamizar, Iván Huerta, and Alberto Sanfeliu. "Robot Interactive Learning through Human Assistance." In Multimodal Interaction in Image and Video Applications, 185–203. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-35932-3_11.
Cremers, Anita H. M., Maaike Duistermaat, Peter L. M. Groenewegen, and Jacomien G. M. de Jong. "Making Remote ‘Meeting Hopping’ Work: Assistance to Initiate, Join and Leave Meetings." In Machine Learning for Multimodal Interaction, 315–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-85853-9_29.
Papageorgiou, Xanthi S., Costas S. Tzafestas, Petros Maragos, Georgios Pavlakos, Georgia Chalvatzaki, George Moustris, Iasonas Kokkinos, et al. "Advances in Intelligent Mobility Assistance Robot Integrating Multimodal Sensory Processing." In Universal Access in Human-Computer Interaction. Aging and Assistive Environments, 692–703. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07446-7_66.
Bohus, Dan, and Alexander I. Rudnicky. "LARRI: A Language-Based Maintenance and Repair Assistant." In Spoken Multimodal Human-Computer Dialogue in Mobile Environments, 203–18. Dordrecht: Springer Netherlands, 2005. http://dx.doi.org/10.1007/1-4020-3075-4_12.
Alaçam, Özge, Christopher Habel, and Cengiz Acartürk. "Towards Designing Audio Assistance for Comprehending Haptic Graphs: A Multimodal Perspective." In Universal Access in Human-Computer Interaction. Design Methods, Tools, and Interaction Techniques for eInclusion, 409–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39188-0_44.
Liao, Lizi, Lyndon Kennedy, Lynn Wilcox, and Tat-Seng Chua. "Crowd Knowledge Enhanced Multimodal Conversational Assistant in Travel Domain." In MultiMedia Modeling, 405–18. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37731-1_33.
Wasinger, Rainer, Antonio Krüger, and Oliver Jacobs. "Integrating Intra and Extra Gestures into a Mobile and Multimodal Shopping Assistant." In Lecture Notes in Computer Science, 297–314. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11428572_18.
Niculescu, Andreea I., Ridong Jiang, Seokhwan Kim, Kheng Hui Yeo, Luis F. D’Haro, Arthur Niswar, and Rafael E. Banchs. "SARA: Singapore’s Automated Responsive Assistant, A Multimodal Dialogue System for Touristic Information." In Mobile Web Information Systems, 153–64. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10359-4_13.
Conference papers on the topic "Assistance multimodale"
Leclercq, Pierre, Geneviève Martin, Catherine Deshayes, and François Guena. "Vers une interface multimodale pour une assistance à la conception architecturale." In the 16th conference. New York, New York, USA: ACM Press, 2004. http://dx.doi.org/10.1145/1148613.1148629.
Bertoldi, Eduardo, and Lucia Filgueiras. "Multimodal advanced driver assistance systems." In the 2nd international workshop. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/2002368.2002370.
Rodriguez, B. Helena, Jean-Claude Moissinac, and Isabelle Demeure. "Multimodal instantiation of assistance services." In the 12th International Conference. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1967486.1967652.
Ortega, Fabio J. M., Sergio I. Giraldo, and Rafael Ramirez. "Bowing modeling for violin students assistance." In ICMI '17: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3139513.3139525.
Makula, Pooja, Anurag Mishra, Akshay Kumar, Krit Karan, and V. K. Mittal. "Multimodal smart robotic assistant." In 2015 International Conference on Signal Processing, Computing and Control (ISPCC). IEEE, 2015. http://dx.doi.org/10.1109/ispcc.2015.7374991.
Brun, Damien. "Multimodal and Context-Aware Interaction in Augmented Reality for Active Assistance." In ICMI '18: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3242969.3264966.
Guo, Wei, and Shiwei Cheng. "An Approach to Reading Assistance with Eye Tracking Data and Text Features." In ICMI '19: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3351529.3360659.
Mangipudi, Vidyavisal, and Raj Tumuluri. "Context-Aware Multimodal Robotic Health Assistant." In ICMI '14: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2663204.2669627.
Oh, Jin-hwan, Sudhakar Sah, Jihoon Kim, Yoori Kim, Jeonghwa Lee, Wooseung Lee, Myeongsoo Shin, Jaeyon Hwang, and Seongwon Kim. "Hang Out with the Language Assistant." In ICMI '19: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3340555.3358659.
Johnston, Michael, John Chen, Patrick Ehlen, Hyuckchul Jung, Jay Lieske, Aarthi Reddy, Ethan Selfridge, Svetlana Stoyanchev, Brant Vasilieff, and Jay Wilpon. "MVA: The Multimodal Virtual Assistant." In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL). Stroudsburg, PA, USA: Association for Computational Linguistics, 2014. http://dx.doi.org/10.3115/v1/w14-4335.
Reports on the topic "Assistance multimodale"
Ehlen, Patrick. Multimodal Meeting Capture and Understanding with the CALO Meeting Assistant. Fort Belvoir, VA: Defense Technical Information Center, January 2007. http://dx.doi.org/10.21236/ada506397.