
Dissertations / Theses on the topic 'Human Robot Interaction (HRI)'

Consult the top 50 dissertations / theses for your research on the topic 'Human Robot Interaction (HRI).'

1

Hüttenrauch, Helge. "From HCI to HRI : Designing Interaction for a Service Robot." Doctoral thesis, KTH, Numerisk Analys och Datalogi, NADA, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4255.

Abstract:
Service robots are mobile, embodied artefacts that operate in co-presence with their users. This is a challenge for human-robot interaction (HRI) design: the robot's interfaces must support users in understanding the system's current state and possible next actions. One aspect of designing such interaction is to understand users' preferences and expectations by involving them in the design process. This thesis takes a user-centered design (UCD) perspective and tries to understand the different user roles that exist in service robotics in order to consider possible design implications. Another important aim of the thesis is to understand the spatial management that occurs in face-to-face encounters between humans and robotic systems. The Cero robot is an office "fetch-and-carry" robot that supports a user in the transportation of light objects in an office environment. The iterative, user-centered design of the graphical user interface (GUI) for the Cero robot is presented in Paper I. It is based upon the findings from multiple prototype design and evaluation iterations. The GUI is one of the robot's interfacing components, i.e., it is to be seen in the overall interplay of the robot's physical design and other interface modalities developed in parallel with the GUI. As an interaction strategy for the GUI, a graphical representation that simplifies the graphical elements and hides the robot system's complexity in sensing and mission execution is recommended. The usage of the Cero robot by a motion-impaired user over a period of three months is presented in Paper II. This longitudinal user study aims to gain insights into the daily usage of such an assistive robot. The approach complements the described GUI design and development process, as it allows empirically investigating situated use of the Cero robot as a novel service application over a longer period of time with the provided interfaces. Findings from this trial show that the robot and its interfaces benefit the user in the transport of light objects and also imply increased independence. The long-term study also reveals further aspects of the Cero robot system's usage as part of a workplace setting, including the social context that such a mobile, embodied system needs to be designed for. During the long-term user study, bystanders in the operation area of the Cero robot were observed attempting to interact with it. To better understand how such bystander users may shape the interaction with a service robot system, an experimental study in Paper III investigates this special type and role of robot users. A scenario is described in which the Cero robot addresses invited trial subjects and asks them for a cup of coffee. The findings show that the level of occupation significantly influences bystander users' willingness to assist the Cero robot with its request.

The joint handling of space is an important part of HRI, as both users and service robots are mobile and often co-present during interaction. To inform the development of future robot locomotion behaviors and interaction design strategies, a Wizard-of-Oz (WOZ) study that explores the role of posture and positioning in HRI is presented in Paper IV. The interpersonal distances and spatial formations observed during this trial are quantified and analyzed in a joint interaction task between a robot and its users in Paper V. Findings show that a face-to-face spatial formation and a distance between ~46 and ~122 cm are dominant while initiating a robot mission or instructing the robot about an object or place. Paper VI investigates another aspect of the role of spatial management in the joint task between a robot and its user, based upon the study described in Papers IV and V. Taking the dynamics of interaction into account, the findings are that users structure their activities with the robot and that this organizing is observable as small movements in interaction. These small adaptations in posture and orientation signify the transition between different episodes of interaction and prepare for the next interaction exchange in the shared space. Understanding these spatial management behaviors allows designing human-robot interaction with such awareness, using the active handling of space as a structuring interaction element.
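The ~46 to ~122 cm band reported in Paper V corresponds to the "personal" zone in Hall's classic proxemics. As a minimal illustration (not code from the thesis), a measured human-robot distance can be classified against these zones:

    # Minimal sketch (not from the thesis): classify a measured human-robot
    # distance against Hall's proxemic zones. The 46-122 cm band is the range
    # reported as dominant in Paper V; zone boundaries follow Hall's values.

    HALL_ZONES = [            # (upper bound in cm, zone name)
        (46, "intimate"),
        (122, "personal"),
        (366, "social"),
        (float("inf"), "public"),
    ]

    def proxemic_zone(distance_cm: float) -> str:
        """Return the Hall zone that a human-robot distance falls into."""
        for upper, name in HALL_ZONES:
            if distance_cm <= upper:
                return name
        return "public"

    def in_dominant_instruction_band(distance_cm: float) -> bool:
        """True if the distance lies in the ~46-122 cm band from Paper V."""
        return 46.0 <= distance_cm <= 122.0

    if __name__ == "__main__":
        for d in (30.0, 80.0, 200.0):
            print(d, proxemic_zone(d), in_dominant_instruction_band(d))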
2

Wang, Yan. "Gendering Human-Robot Interaction: exploring how a person's gender impacts attitudes toward and interaction with robots." Association for Computing Machinery, 2014. http://hdl.handle.net/1993/24446.

Abstract:
Developing an improved understanding and awareness of how gender impacts perceptions of robots and interactions with them is crucial for the ongoing advancement of the human-robot interaction (HRI) field, as a lack of awareness of gender issues increases the risk of robot rejection and poor performance. This thesis provides a theoretical grounding for gender studies in HRI, and contributes to the understanding of how gender affects attitudes toward and interaction with robots via the findings from an online survey and a laboratory user study. We envision that this work will provide HRI designers with a foundation and an exemplary account of how gender can influence attitudes toward and interaction with robots, serving as a resource and a sensitizing discussion for gender studies in HRI.
3

Toris, Russell C. "Bringing Human-Robot Interaction Studies Online via the Robot Management System." Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-theses/1058.

Abstract:
"Human-Robot Interaction (HRI) is a rapidly expanding field of study that focuses on allowing non-roboticist users to naturally and effectively interact with robots. The importance of conducting extensive user studies has become a fundamental component of HRI research; however, due to the nature of robotics research, such studies often become expensive, time consuming, and limited to constrained demographics. This work presents the Robot Management System, a novel framework for bringing robotic experiments to the web. A detailed description of the open source system, an outline of new security measures, and a use case study of the RMS as a means of conducting user studies is presented. Using a series of navigation and manipulation tasks with a PR2 robot, three user study conditions are compared: users that are co-present with the robot, users that are recruited to the university lab but control the robot from a different room, and remote web-based users. The findings show little statistical differences between usability patterns across these groups, further supporting the use of web-based crowdsourcing techniques for certain types of HRI evaluations."
APA, Harvard, Vancouver, ISO, and other styles
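Web-based robot control of this kind typically rides on a websocket bridge between the browser and ROS. A minimal sketch of the idea, assuming a running rosbridge server and illustrative topic and host names (this is not the RMS implementation itself):

    # Hedged sketch: a remote client drives a ROS robot by sending JSON
    # messages over a rosbridge websocket. Requires the third-party
    # 'websocket-client' package and a rosbridge server on the robot side.
    import json
    import websocket  # pip install websocket-client

    ws = websocket.create_connection("ws://robot.example.org:9090")

    # Announce the topic, then publish one velocity command (geometry_msgs/Twist).
    ws.send(json.dumps({"op": "advertise", "topic": "/cmd_vel",
                        "type": "geometry_msgs/Twist"}))
    ws.send(json.dumps({"op": "publish", "topic": "/cmd_vel",
                        "msg": {"linear": {"x": 0.2, "y": 0.0, "z": 0.0},
                                "angular": {"x": 0.0, "y": 0.0, "z": 0.5}}}))
    ws.close()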
4

Pai, Abhishek. "Distance-Scaled Human-Robot Interaction with Hybrid Cameras." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563872095430977.

5

Ponsler, Brett. "Recognizing Engagement Behaviors in Human-Robot Interaction." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/109.

Abstract:
Based on analysis of human-human interactions, we have developed an initial model of engagement for human-robot interaction which includes the concept of connection events, consisting of: directed gaze, mutual facial gaze, conversational adjacency pairs, and backchannels. We implemented the model in the open source Robot Operating System and conducted a human-robot interaction experiment to evaluate it.
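To make the model's vocabulary concrete, the sketch below (illustrative names, not the thesis code) encodes the four connection-event types and a trivial detector for one of them, mutual facial gaze:

    # Minimal sketch of the engagement model's connection events, plus a toy
    # detector for mutual facial gaze: both parties looking at each other's
    # face at (nearly) the same time. Names and thresholds are assumptions.
    from dataclasses import dataclass
    from enum import Enum, auto

    class ConnectionEvent(Enum):
        DIRECTED_GAZE = auto()       # one party looks at an object, other follows
        MUTUAL_FACIAL_GAZE = auto()  # both look at each other's face
        ADJACENCY_PAIR = auto()      # e.g. a question followed by an answer
        BACKCHANNEL = auto()         # a nod or "uh-huh" while the other speaks

    @dataclass
    class GazeSample:
        t: float       # timestamp in seconds
        target: str    # what this agent is looking at, e.g. "face", "cup"

    def mutual_facial_gaze(human: GazeSample, robot: GazeSample,
                           max_skew: float = 0.5) -> bool:
        """Both agents look at each other's face within max_skew seconds."""
        return (human.target == "face" and robot.target == "face"
                and abs(human.t - robot.t) <= max_skew)

    print(mutual_facial_gaze(GazeSample(10.0, "face"), GazeSample(10.2, "face")))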
6

Juri, Michael J. "Design and Implementation of a Modular Human-Robot Interaction Framework." DigitalCommons@CalPoly, 2021. https://digitalcommons.calpoly.edu/theses/2327.

Abstract:
With the increasing longevity that accompanies advances in medical technology comes a host of other age-related disabilities. Among these are neuro-degenerative diseases such as Alzheimer's disease, Parkinson's disease, and stroke, which significantly reduce the motor and cognitive ability of affected individuals. As these diseases become more prevalent, there is a need for further research and innovation in the field of motor rehabilitation therapy to accommodate these individuals in a cost-effective manner. In recent years, the implementation of social agents has been proposed to alleviate the burden on in-home human caregivers. Socially assistive robotics (SAR) is a new subfield of research derived from human-robot interaction that aims to provide hands-off interventions for patients with an emphasis on social rather than physical interaction. As these SAR systems are very new within the medical field, there is no standardized approach to developing such systems for different populations and therapeutic outcomes. The primary aim of this project is to provide a standardized method for developing such systems by introducing a modular human-robot interaction software framework upon which future implementations can be built. The framework is modular in nature, allowing for a variety of hardware and software additions and modifications, and is designed to provide a task-oriented training structure with augmented feedback given to the user in a closed-loop format. The framework utilizes the ROS (Robot Operating System) middleware suite which supports multiple hardware interfaces and runs primarily on Linux operating systems. These design requirements are validated through testing and analysis of two unique implementations of the framework: a keyboard input reaction task and a reaching-to-grasp task. These implementations serve as example use cases for the framework and provide a template for future designs. This framework will provide a means to streamline the development of future SAR systems for research and rehabilitation therapy.
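As an illustration of the closed-loop, task-oriented structure described above (a sketch under assumed parameters, not the thesis code), the keyboard reaction task with augmented feedback reduces to a few lines:

    # A minimal closed-loop reaction task in the spirit of the framework's
    # example implementation: prompt at an unpredictable moment, measure the
    # response time, and give feedback. The target time is a made-up value.
    import random
    import time

    TARGET_S = 0.6  # hypothetical response-time target for feedback

    def reaction_trial() -> float:
        time.sleep(random.uniform(1.0, 3.0))   # unpredictable delay
        start = time.time()
        input("Press Enter now! ")
        return time.time() - start

    for trial in range(3):
        rt = reaction_trial()
        feedback = "Great!" if rt <= TARGET_S else "Try to respond faster."
        print(f"trial {trial + 1}: {rt:.3f}s - {feedback}")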
7

Syrdal, Dag Sverre. "The impact of social expectation towards robots on human-robot interactions." Thesis, University of Hertfordshire, 2018. http://hdl.handle.net/2299/20962.

Abstract:
This work is presented in defence of the thesis that it is possible to measure the social expectations and perceptions that humans have of robots in an explicit and succinct manner, and that these measures are related to how humans interact with, and evaluate, these robots. There are many ways of understanding how humans may respond to, or reason about, robots as social actors, but the approach adopted within this body of work focused on interaction-specific expectations rather than expectations regarding the true nature of the robot. These expectations were investigated using a questionnaire-based tool, the University of Hertfordshire Social Roles Questionnaire, which was developed as part of the work presented in this thesis and tested on a sample of 400 visitors to an exhibition in the Science Gallery in Dublin. This study suggested that responses to this questionnaire loaded on two main dimensions: one related to the degree of social equality the participants expected the interactions with the robots to have, and the other related to the degree of control they expected to exert upon the robots within the interaction. A single item, related to pet-like interactions, loaded on both and was considered a separate, third dimension. This questionnaire was deployed as part of a proxemics study, which found that the degree to which participants accepted particular proxemic behaviours was correlated with initial social expectations of the robot. If participants expected the robot to be more of a social equal, they preferred the robot to approach from the front, while participants who viewed the robot more as a tool preferred it to approach from a less obtrusive angle. The questionnaire was also deployed in two long-term studies. In the first study, which involved one interaction a week over a period of two months, participants' social expectations of the robots prior to the beginning of the study impacted not only how they evaluated open-ended interactions with the robots throughout the two-month period, but also how they collaborated with the robots in task-oriented interactions. In the second study, participants interacted with the robots twice a week over a period of six weeks. This study replicated the findings of the previous study, in that initial expectations impacted evaluations of interactions throughout the long-term study. In addition, this study used the questionnaire to measure post-interaction perceptions of the robots in terms of social expectations. The results suggest that while initial social expectations of robots impact how participants evaluate the robots in terms of interactional outcomes, social perceptions of robots are more closely related to the social/affective experience of the interaction.
8

Holroyd, Aaron. "Generating Engagement Behaviors in Human-Robot Interaction." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/328.

Abstract:
Based on a study of the engagement process between humans, I have developed models for four types of connection events involving gesture and speech: directed gaze, mutual facial gaze, adjacency pairs and backchannels. I have developed and validated a reusable Robot Operating System (ROS) module that supports engagement between a human and a humanoid robot by generating appropriate connection events. The module implements policies for adding gaze and pointing gestures to referring phrases (including deictic and anaphoric references), performing end-of-turn gazes, responding to human-initiated connection events and maintaining engagement. The module also provides an abstract interface for receiving information from a collaboration manager using the Behavior Markup Language (BML) and exchanges information with a previously developed engagement recognition module. This thesis also describes a BML realizer that has been developed for use in robotic applications. Instead of the existing fixed-timing algorithms used with virtual agents, this realizer uses an event-driven architecture, based on Petri nets, to ensure each behavior is synchronized in the presence of unpredictable variability in robot motor systems. The implementation is robot independent, open-source and uses ROS.
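The event-driven synchronization idea can be illustrated with a toy Petri net (a sketch, not the thesis realizer): a behavior's start transition fires only once all of its input places have received tokens from the speech and motor subsystems, however long the hardware actually takes:

    # Toy Petri net: a transition fires only when every input place holds a
    # token, so a behavior starts only after all preceding speech/motor events
    # have actually happened. Names and the example scenario are illustrative.

    class Place:
        def __init__(self, name):
            self.name, self.tokens = name, 0

    class Transition:
        def __init__(self, name, inputs, on_fire):
            self.name, self.inputs, self.on_fire = name, inputs, on_fire

        def try_fire(self):
            if all(p.tokens > 0 for p in self.inputs):
                for p in self.inputs:
                    p.tokens -= 1
                self.on_fire()
                return True
            return False

    speech_ready = Place("speech_sync_point_reached")
    gaze_ready = Place("gaze_motion_finished")
    start_point = Transition("start_pointing", [speech_ready, gaze_ready],
                             lambda: print("pointing gesture starts"))

    speech_ready.tokens += 1       # event arrives from the TTS engine
    print(start_point.try_fire())  # False: still waiting for the arm
    gaze_ready.tokens += 1         # event arrives from the motor system
    print(start_point.try_fire())  # True: both events observed, behavior fires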
9

Chadalavada, Ravi Teja. "Human Robot Interaction for Autonomous Systems in Industrial Environments." Thesis, Chalmers University of Technology, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-55277.

Abstract:
The upcoming new generation of autonomous vehicles for transporting materials in industrial environments will be more versatile, flexible and efficient than traditional Automatic Guided Vehicles (AGVs), which simply follow pre-defined paths. However, freely navigating vehicles can appear unpredictable to human workers and thus cause stress and render joint use of the available space inefficient. This work addresses the problem of providing information regarding a service robot's intention to humans co-populating the environment. The overall goal is to make humans feel safer and more comfortable, even when they are in close vicinity of the robot. A spatial Augmented Reality (AR) system for robot intention communication, which projects proxemic information onto the shared floor space, is developed by equipping a robotic forklift with an LED projector. This helps in visualizing internal state information and intents on the shared floor space. The robot's ability to communicate its intentions is evaluated in realistic situations where test subjects meet the robotic forklift. A Likert scale-based evaluation, which also includes comparisons to human-human intention communication, was performed. The results show that adding even simple information, such as the trajectory and the space to be occupied by the robot in the near future, effectively improves human response to the robot. This kind of synergistic human-robot interaction in a work environment is expected to increase the robot's acceptability in industry.
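At the core of such floor projection is a mapping from planned floor-frame points to projector pixels. A hedged sketch of that geometry (the calibration matrix below is made up for illustration; a real one would come from calibrating projector pixels against known floor positions):

    # Sketch: map planned trajectory points in the robot's floor frame to
    # projector pixels with a 3x3 homography. H is a hypothetical calibration.
    import numpy as np

    H = np.array([[120.0, 0.0, 640.0],
                  [0.0, -120.0, 900.0],
                  [0.0, 0.0, 1.0]])

    def floor_to_pixels(points_xy: np.ndarray) -> np.ndarray:
        """Apply the homography to Nx2 floor points (meters) -> Nx2 pixels."""
        homog = np.hstack([points_xy, np.ones((len(points_xy), 1))])
        proj = homog @ H.T
        return proj[:, :2] / proj[:, 2:3]

    trajectory = np.array([[0.5, 0.0], [1.0, 0.2], [1.5, 0.5]])  # planned path
    print(floor_to_pixels(trajectory))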
10

Michalland, Arthur-Henri. "Main et Cognition : les relations bi-directionnelles entre processus cognitifs et motricité manuelle." Thesis, Montpellier 3, 2019. http://www.theses.fr/2019MON30012.

Abstract:
This thesis argues that the haptic sense influences human cognitive processes. We were interested in mnesic, perceptive, and motor processes, relying on two concepts from computational and embodied theories of motor control: recurrent sensorimotor patterns and the sensory anticipation that emerges from them. Our first line of research focused on the connections between anticipation of the haptic properties of a gesture, object recognition, and grasp selection. The second line focused both on the link between haptic anticipation and action lateralization and on the role of this anticipation in taking spatial and emotional cues into account when selecting and initiating an action. The third line focused on the motor strategies participants adopt depending on the precision of their haptic anticipation, and tried to identify parameters that may facilitate human-robot interaction. Overall, this work shows that the haptic sense accompanies movement through perception-action loops of different temporal extents, the longest running from action selection to its terminal sensory consequences, the shortest from haptic afference to efference toward the alpha motoneurons. The haptic sense underlies these loops of varied temporal extent and plays a role in major cognitive functions.
11

Rehfeld, Sherri. "THE IMPACT OF MENTAL TRANSFORMATION TRAINING ACROSS LEVELS OF AUTOMATION ON SPATIAL AWARENESS IN HUMAN-ROBOT INTERACTION." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3762.

Abstract:
One of the problems affecting robot operators' spatial awareness involves their ability to infer a robot's location based on the views from on-board cameras and other electro-optic systems. To understand the vehicle's location, operators typically need to translate images from a vehicle's camera into some other coordinates, such as a location on a map. This translation requires operators to relate the view by mentally rotating it along a number of axes, a task that is both attention-demanding and workload-intensive, and one that is likely affected by individual differences in operator spatial abilities. Because building and maintaining spatial awareness is attention-demanding and workload-intensive, any variable that changes operator workload and attention should be investigated for its effects on operator spatial awareness. One of these variables is the use of automation (i.e., assigning functions to the robot). According to Malleable Attentional Resource Theory (MART), variation in workload across levels of automation affects an operator's attentional capacity to process critical cues like those that enable an operator to understand the robot's past, current, and future location. The study reported here focused on performance aspects of human-robot interaction involving ground robots (i.e., unmanned ground vehicles, or UGVs) during reconnaissance tasks. In particular, this study examined how differences in operator spatial ability and in operator workload and attention interacted to affect spatial awareness during human-robot interaction (HRI). Operator spatial abilities were systematically manipulated through the use of mental transformation training. Additionally, operator workload and attention were manipulated via the use of three different levels of automation (i.e., manual control, decision support, and full automation). Operator spatial awareness was measured by the size of errors made by the operators, when they were tasked to infer the robot's location from on-board camera views at three different points in a sequence of robot movements through a simulated military operation in urban terrain (MOUT) environment. The results showed that mental transformation training increased two areas of spatial ability, namely mental rotation and spatial visualization. Further, spatial ability in these two areas predicted performance in vehicle localization during the reconnaissance task. Finally, assistive automation showed a benefit with respect to operator workload, situation awareness, and subsequently performance. Together, the results of the study have implications with respect to the design of robots, function allocation between robots and operators, and training for spatial ability. Future research should investigate the interactive effects on operator spatial awareness of spatial ability, spatial ability training, and other variables affecting operator workload and attention.
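The camera-to-map translation that operators perform mentally reduces, in the planar case, to a rotation by the robot's heading plus a translation by its position. A small worked example (illustrative numbers, not from the study):

    # The coordinate translation operators perform mentally, as a sketch:
    # a point seen in the robot's body frame is rotated by the robot's heading
    # and translated by its position to obtain map coordinates.
    import math

    def robot_to_map(px: float, py: float,
                     rx: float, ry: float, heading_rad: float):
        """Rotate a body-frame point by the robot heading, then translate."""
        mx = rx + px * math.cos(heading_rad) - py * math.sin(heading_rad)
        my = ry + px * math.sin(heading_rad) + py * math.cos(heading_rad)
        return mx, my

    # Robot at (10 m, 5 m) facing 90 degrees; an object 2 m straight ahead.
    print(robot_to_map(2.0, 0.0, 10.0, 5.0, math.radians(90)))  # -> (10.0, 7.0)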
Ph.D., Department of Psychology
12

Tozadore, Daniel Carnieto. "Aplicação de um robô humanoide autônomo por meio de reconhecimento de imagem e voz em sessões pedagógicas interativas." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-04102016-110603/.

Abstract:
Educational Robotics uses robots for the practical application of theoretical content discussed in the classroom. However, the most commonly used robots lack interaction with users, which can be improved by introducing humanoid robots. This dissertation combines computer vision techniques, social robotics, and speech recognition and synthesis to build an interactive system that supports pedagogical sessions through a humanoid robot. The system can be trained on different content to be addressed autonomously to users by the robot. Its application targets the system as a support tool for teaching mathematics to children. For a first approach, the system was trained to interact with children and recognize 3D geometric figures. The proposed scheme is based on modules, each responsible for a specific function and containing a group of features: in total there are four modules, the Central Module, Dialog Module, Vision Module and Motor Module. The chosen robot is the humanoid NAO. For the Vision Module, the LEGION network and the VOCUS2 system were compared for object detection, and SVM and MLP for image classification. Google Speech Recognition and the NAOqi API voice synthesizer are used for spoken interaction. An interaction study using the Wizard-of-Oz technique was also conducted to analyze the children's behavior and adapt the methods for better application results. Tests of the complete system showed that small calibrations are sufficient for an interaction session with few errors. Children who experienced greater interactivity with the robot felt more engaged and comfortable in the interactions, both in the experiments and when studying at home for the following sessions, compared to children exposed to a lower level of interactivity. Alternating challenging and encouraging robot behaviors produced better results in the interaction with the children than a constant behavior.
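As a rough illustration of the Vision Module's classifier comparison (a sketch on stand-in data, not the thesis experiments or features), SVM and MLP can be evaluated side by side on the same split:

    # Sketch: compare SVM and MLP on the same train/test split. The digits
    # dataset stands in for the thesis's geometric-figure features.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for name, clf in [("SVM", SVC(kernel="rbf", gamma="scale")),
                      ("MLP", MLPClassifier(hidden_layer_sizes=(64,),
                                            max_iter=500, random_state=0))]:
        clf.fit(X_tr, y_tr)
        print(name, "test accuracy:", round(clf.score(X_te, y_te), 3))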
13

Strineholm, Philippe. "Exploring Human-Robot Interaction Through Explainable AI Poetry Generation." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54606.

Abstract:
As the field of Artificial Intelligence continues to evolve into a tool of societal impact, a need arises to break out of its initial boundaries as a computer science discipline and also include different humanistic fields. The work presented in this thesis revolves around the role that explainable artificial intelligence has in human-robot interaction, through the study of poetry generators. To better understand the scope of the project, a study of poetry generators presents the steps involved in the development process and the evaluation methods. In the algorithmic development of poetry generators, the shift from traditional disciplines to transdisciplinarity is identified. In collaboration with researchers from the Research Institutes of Sweden, state-of-the-art generators are tested to showcase the power of artificially enhanced artifacts. A development plateau is discovered, and with the inclusion of Design Thinking methods, potential future human-robot interaction development is identified. A physical prototype capable of verbal interaction on top of a poetry generator is created, with the new feature of changing the corpora to any given audio input. Lastly, the strengths of transdisciplinarity are connected with the open-source community in regard to creativity and self-expression, producing an online tool to address future improvements and introduce non-experts to the steps required to self-build an intelligent robotic companion, thus also encouraging public technological literacy. Explainable AI is shown to help with user involvement in the process of creation, alteration and deployment of AI-enhanced applications.
14

Aspernäs, Andreas. "Human-like Crawling for Humanoid Robots : Gait Evaluation on the NAO robot." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78761.

Abstract:
Human-robot interaction (HRI) is the study of how we as humans interact and communicate with robots, and one of its subfields works on how we can improve the collaboration between humans and robots. We need robots that are more user-friendly and easier to understand, and a key aspect of this is human-like movement and behavior. This project targets a specific set of motions, locomotion, and tests them on the humanoid NAO robot. A human-like crawling gait was developed for the NAO robot and compared to the built-in walking gait through three kinds of experiments: the first to compare the speed of the two gaits, the second to estimate their stability, and the third to examine how long they can operate, by measuring power consumption and temperatures in the joints. The results showed that the robot was significantly slower when crawling compared to walking, and that when stationary the robot was more stable standing than on all fours. The power consumption remained essentially the same, but the crawling gait ended up having a shorter operational time due to a higher temperature increase in the joints. While the crawling gait has the benefit of a lower profile than the walking gait, and could therefore more easily pass under low-hanging obstacles, it has major issues that need to be addressed to become a viable solution. These are therefore important factors to consider when developing gaits and designing robots, and they motivate further research to try to solve these problems.
15

Vasalya, Ashesh. "Human and humanoid robot co-workers : motor contagions and whole-body handover." Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS112.

Abstract:
The work done in this thesis concerns the interactions between humans and the humanoid robot HRP-2Kai as co-workers in industrial scenarios. The research topics fall into two categories. In the context of non-physical human-robot interaction, the studies conducted in the first part of this thesis are motivated by social interactions between human and humanoid robot co-workers, and deal with the implicit behavioural and cognitive aspects of interaction. In the context of physical human-robot interaction (pHRI), the second part of this thesis is motivated by physical manipulation during object handover between human and humanoid robot co-workers in close proximity, using a whole-body control framework and locomotion.

We designed a paradigm and a repetitive task inspired by industrial Pick-n-Place movements. In the first HRI study, we examine the effect of motor contagions induced in participants during (on-line contagions) and after (off-line contagions) the observation of the same movements performed by a human or a humanoid robot co-worker. The results suggest that off-line contagions affect participants' movement velocity while on-line contagions affect their movement frequency. Interestingly, the nature of the co-worker (human or robot) tends to influence the off-line contagions significantly more than the on-line contagions.

Under the same paradigm and repetitive industrial task, we systematically varied the robot's behaviour and observed whether and how the performance of a human participant is affected by the presence of the humanoid robot. We also investigated the effect of the humanoid co-worker's physical form, covering the torso and head so that only the moving arm was visible to the human participants. We then compared these behaviours with a human co-worker and examined how the observed behavioural effects scale with experience of robots. Our results show that both human and humanoid robot co-workers affected the performance frequencies of the participants while their task accuracy remained unaffected. With the robot co-worker, however, this holds only when the robot's head and torso were visible and the robot made biological movements.

Next, in the pHRI study, we designed an intuitive bi-directional object handover routine between a human and a biped humanoid robot co-worker using whole-body control and locomotion. We designed models to predict and estimate the handover position in advance, and to estimate the grasp configuration of the object and of the active human hand during handover trials. We also designed a model to minimize the interaction forces during the handover of an object of unknown mass, and to minimize the total duration of the handover routine. We focused on three key questions of the handover: when (timing), where (position in space), and how (orientation and interaction forces). We present a generalized handover controller in which both the human and the robot can select either hand to hand over and exchange the object. Furthermore, by utilizing a whole-body control configuration, the controller allows the robot to use both hands simultaneously, depending on the shape and size of the object to be transferred. Finally, we explored the full capabilities of a biped humanoid robot and added a scenario in which the robot proactively takes a few steps in order to hand over or exchange the object with its human co-worker. We tested this scenario on the real humanoid robot HRP-2Kai, with the human-robot dyad using either a single hand or both hands simultaneously.
16

Vogt, David. "Learning Continuous Human-Robot Interactions from Human-Human Demonstrations." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2018. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-233262.

Abstract:
This dissertation develops a data-driven method for machine learning of human-robot interactions from human-human demonstrations. During a training phase, the movements of two interaction partners are captured via motion capture and learned in a two-person interaction model. At runtime, the model is used both to recognize the movements of the human interaction partner and to generate adapted robot movements. The performance of the approach is evaluated in three complex applications, each requiring continuous motion coordination between human and robot. The result of the dissertation is a learning method that enables intuitive, goal-directed and safe collaboration with robots.
17

OPERTO, STEFANIA. "HRI: l’interazione tra esseri umani e macchine. Dall’interazione sociale all’interazione sociotecnica." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1057911.

Abstract:
Interaction is a foundation of sociology, a concept whose complexity has produced over time numerous theoretical approaches and methods for its definition and analysis. It is a process of varying duration between two or more actors who direct their actions at one another, influencing the motivations and performance of these actions and producing reciprocal effects. The present research work analyzes a particular form of interaction, that between humans and machines: Human-Robot Interaction (HRI). The interdisciplinary character of the relationship between humans and machines, and its connection with many sociological concepts, is evident from the name itself: Human-Robot Interaction, a union of technological aspects concerning the robot, an artifact produced by human ingenuity, and human aspects related to social interaction. Science fiction has always been attracted to robots, often placed in a dystopian future in which they increase unemployment or reduce human beings to slavery. Robotics, more than other emerging technologies, stimulates symbolic imagery because of its form: the body of the robot moving through space and interacting with humans. The shift from recognizable machines to anthropomorphic robots amplifies these aspects, generating in people contradictory attitudes of enthusiasm and fear at the same time. During the last decade, the pervasiveness of these artifacts and their increasing diffusion in human environments and daily routines have increasingly motivated researchers to study the impact, perception, acceptability and acceptance of robotics. The analyses conducted here aim to contribute in this direction, in part through an articulated survey. The robot, as a social agent and actor, activates in the interaction with humans a complex system of symbols and categories. Science has opened new scenarios for describing how human beings, as well as robots, learn; the haptic dimension and the importance of touch as a primary sense place the dimension of the body, embodiment, at the center. The dimension of corporeality thus assumes considerable importance, and two orientations emerge: the recognition of the robot in its machine form, on the one hand, and in its aspect closer to a human being, on the other. The analyses show that the interaction between humans and robots is a very complex phenomenon that activates social processes capable of influencing the representation of robots and their degree of acceptability. The results confirm that the word robot recalls a complex, multidimensional structure and carries a range of polysemic meanings belonging to different domains. The process of socialization with the robot, overall and in every phase, appears to be influenced by symbolic systems, values, representations, mechanisms of memory and mind functioning, and cognitive biases. Attitudes toward robotics are multifaceted and highly interconnected. Like many new phenomena, robots will require a process of socialization and integration whose outcomes are not at all obvious at the moment. Perhaps, after a period of transition, the integration of robots into society will become a fact. This also seems to depend on the ability of research to reduce the divide between disciplines and fully consider the ethical and social aspects underlying robotics.
18

Miners, William Ben. "Toward Understanding Human Expression in Human-Robot Interaction." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/789.

Abstract:
Intelligent devices are quickly becoming necessities to support our activities during both work and play. We are already bound in a symbiotic relationship with these devices. An unfortunate effect of the pervasiveness of intelligent devices is the substantial investment of our time and effort to communicate intent. Even though our increasing reliance on these intelligent devices is inevitable, the limits of conventional methods for devices to perceive human expression hinders communication efficiency. These constraints restrict the usefulness of intelligent devices to support our activities. Our communication time and effort must be minimized to leverage the benefits of intelligent devices and seamlessly integrate them into society. Minimizing the time and effort needed to communicate our intent will allow us to concentrate on tasks in which we excel, including creative thought and problem solving.

An intuitive method to minimize human communication effort with intelligent devices is to take advantage of our existing interpersonal communication experience. Recent advances in speech, hand gesture, and facial expression recognition provide alternate viable modes of communication that are more natural than conventional tactile interfaces. Use of natural human communication eliminates the need to adapt and invest time and effort using less intuitive techniques required for traditional keyboard and mouse based interfaces.

Although the state of the art in natural but isolated modes of communication achieves impressive results, significant hurdles must be conquered before communication with devices in our daily lives will feel natural and effortless. Research has shown that combining information between multiple noise-prone modalities improves accuracy. Leveraging this complementary and redundant content will improve communication robustness and relax current unimodal limitations.

This research presents and evaluates a novel multimodal framework to help reduce the total human effort and time required to communicate with intelligent devices. This reduction is realized by determining human intent using a knowledge-based architecture that combines and leverages conflicting information available across multiple natural communication modes and modalities. The effectiveness of this approach is demonstrated using dynamic hand gestures and simple facial expressions characterizing basic emotions. It is important to note that the framework is not restricted to these two forms of communication. The framework presented in this research provides the flexibility necessary to include additional or alternate modalities and channels of information in future research, including improving the robustness of speech understanding.

The primary contributions of this research include the leveraging of conflicts in a closed-loop multimodal framework, explicit use of uncertainty in knowledge representation and reasoning across multiple modalities, and a flexible approach for leveraging domain specific knowledge to help understand multimodal human expression. Experiments using a manually defined knowledge base demonstrate an improved average accuracy of individual concepts and an improved average accuracy of overall intents when leveraging conflicts as compared to an open-loop approach.
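As a toy illustration of late multimodal fusion with conflict detection (far simpler than the knowledge-based architecture described above, and not the thesis code), per-modality class probabilities can be combined by a confidence-weighted average while flagging disagreement between the modalities' top hypotheses:

    # Sketch: fuse intent probabilities from a gesture recognizer and a facial
    # expression recognizer. The weight and the example numbers are assumptions.

    def fuse(gesture: dict, face: dict, w_gesture: float = 0.6):
        intents = set(gesture) | set(face)
        fused = {i: w_gesture * gesture.get(i, 0.0)
                    + (1 - w_gesture) * face.get(i, 0.0) for i in intents}
        # Conflict: the two modalities disagree on the most likely intent.
        conflict = max(gesture, key=gesture.get) != max(face, key=face.get)
        return fused, conflict

    gesture_probs = {"greet": 0.7, "stop": 0.3}  # from hand-gesture recognizer
    face_probs = {"greet": 0.4, "stop": 0.6}     # from facial-expression recognizer

    belief, conflict = fuse(gesture_probs, face_probs)
    print(belief)
    print("conflict:", conflict, "-> fused intent:", max(belief, key=belief.get))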
19

Rudqwist, Lucas. "Designing an interface for a teleoperated vehicle which uses two cameras for navigation." Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231914.

Abstract:
The Swedish fire service has been wanting a robot that can be sent into situations where it is too dangerous to send firefighters. A teleoperated vehicle is being developed for exactly this purpose. This thesis builds on previous research in Human-Robot Interaction and interface design for teleoperated vehicles. In this study, a prototype was developed to simulate the experience of driving a teleoperated vehicle: it visualised the intended operator interface and simulated the operating experience. Development followed a user-centered design process and was evaluated with users. After the final evaluation, a design proposal based on previous research and user feedback was presented. The study discusses the issues discovered when designing an interface for a teleoperated vehicle that uses two cameras for manoeuvring. One challenge was how to fully utilise the two video feeds and create an interplay between them. The evaluations showed that users could keep better focus with one larger, designated main feed and the second placed where it can easily be glanced at. Simplicity, and where to display sensor data, were also shown to be important aspects to consider when trying to lower the mental load on the operator. Further modifications to the vehicle and the interface have to be made to increase the operator's awareness and confidence when manoeuvring the vehicle.
20

Khan, Mubasher Hassan, and Tayyab Laique. "An Evaluation of Gaze and EEG-Based Control of a Mobile Robot." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4625.

Abstract:
Context: Patients with conditions such as locked-in syndrome or motor neuron disease are paralyzed and need special care. To reduce the cost of that care, systems need to be designed where human involvement is minimal and affected people can perform their daily activities independently. To assess the feasibility and robustness of combinations of input modalities, navigation of a mobile robot (Spinosaurus) is controlled by a combination of eye gaze tracking and other input modalities. Objectives: Our aim is to control the robot using EEG brain signals and eye gaze tracking simultaneously. Different combinations of input modalities are used to control robot and turret movement, to find out which combination of control technique mapped to control command is most effective. Methods: The method includes developing the interface and control software. An experiment involving 15 participants was conducted to evaluate control of the mobile robot using a combination of an eye tracker and other input modalities. Subjects were required to drive the mobile robot from a starting point to a goal along a pre-defined path. At the end of the experiment, a sense-of-presence questionnaire was distributed among the participants for their feedback. Finally, a qualitative pilot study was performed to find out how a low-cost commercial EEG headset, the Emotiv EPOC, can be used for motion control of a mobile robot. Results: Our results showed that the Mouse/Keyboard combination was the most effective for controlling the robot motion and the turret-mounted camera, respectively. In the experimental evaluation, the Keyboard/Eye Tracker combination improved performance by 9%. 86% of participants found that the turret-mounted camera was useful and provided great assistance in robot navigation. Our qualitative pilot study of the Emotiv EPOC demonstrated different ways to train the headset for different actions. Conclusions: We concluded that different combinations of control techniques can be used to control devices such as a mobile robot or a powered wheelchair. Gaze-based control was found to be comparable with the use of a mouse and keyboard; EEG-based control required a lot of training time and was difficult to train. Our pilot study suggested that using facial expressions to train the Emotiv EPOC was an efficient and effective way to do so.
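The control-technique-to-command mappings compared in the study can be thought of as a dispatch table. A minimal sketch (the bindings below are illustrative, not the study's exact assignments):

    # Sketch: dispatch (modality, event) pairs to drive or turret commands, so
    # combinations such as keyboard-for-driving with eye-tracker-for-turret
    # are easy to reconfigure and compare.

    COMMAND_MAP = {
        ("keyboard", "arrow_up"): ("drive", "forward"),
        ("keyboard", "arrow_left"): ("drive", "turn_left"),
        ("eye_tracker", "gaze_left"): ("turret", "pan_left"),
        ("eye_tracker", "gaze_right"): ("turret", "pan_right"),
        ("eeg", "push_thought"): ("drive", "forward"),
    }

    def dispatch(modality: str, event: str):
        action = COMMAND_MAP.get((modality, event))
        if action is None:
            return None  # unmapped events are ignored
        subsystem, command = action
        print(f"{subsystem} <- {command} (from {modality}:{event})")
        return action

    dispatch("keyboard", "arrow_up")
    dispatch("eye_tracker", "gaze_left")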
21

Wagner, Alan Richard. "The role of trust and relationships in human-robot social interaction." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31776.

Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2010.
Committee Chair: Arkin, Ronald C.; Committee Member: Christensen, Henrik I.; Committee Member: Fisk, Arthur D.; Committee Member: Ram, Ashwin; Committee Member: Thomaz, Andrea. Part of the SMARTech Electronic Thesis and Dissertation Collection.
22

Pandey, Amit kumar. "Towards Socially Intelligent Robots in Human Centered Environment." Thesis, Toulouse, INSA, 2012. http://www.theses.fr/2012ISAT0032/document.

Abstract:
Robots will no longer work in isolation from us. They are entering our day-to-day life to cooperate, assist, help, serve, learn, teach and play with us. In this context, it is important that the presence of robots does not put the human on the compromising side. To achieve this, beyond basic safety requirements, robots should take into account various factors, ranging from human effort, comfort, preferences and desires to social norms, in their planning and decision-making strategies. They should behave, navigate, manipulate, interact and learn in a way that is expected, accepted and understandable by us humans. This thesis begins by exploring and identifying the basic yet key ingredients of such socio-cognitive intelligence. We then develop generic frameworks and concepts from an HRI perspective to address these additional challenges and to elevate the robot's capabilities towards being socially intelligent.
23

Förster, Frank. "Robots that say 'no' : acquisition of linguistic behaviour in interaction games with humans." Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/20781.

Full text
Abstract:
Negation is a part of language that humans engage in virtually from the onset of speech. Negation appears at first glance to be harder to grasp than object or action labels, yet this thesis explores how this family of 'concepts' could be acquired in a meaningful way by a humanoid robot, based solely on unconstrained dialogue with a human conversation partner. The earliest forms of negation appear to be linked to the affective or motivational state of the speaker. We therefore developed a behavioural architecture that contains a motivational system. This motivational system feeds its state simultaneously to other subsystems for the purpose of symbol grounding, and also leads to the expression of the robot's motivational state via a facial display of emotions and motivationally congruent body behaviours. In order to achieve the grounding of negative words, we examine two different mechanisms which provide an alternative to the established grounding via ostension with or without joint attention. Two large experiments were conducted to test these two mechanisms. One of these mechanisms is so-called negative intent interpretation; the other is a combination of physical and linguistic prohibition. Both mechanisms have been described in the literature on early child language development but have never been used in human-robot interaction for the purpose of symbol grounding. As we show, both mechanisms may operate simultaneously, and we can exclude neither as a potential ontogenetic origin of negation.
APA, Harvard, Vancouver, ISO, and other styles
24

Thunberg, Sofia. "Can You Read My Mind? : A Participatory Design Study of How a Humanoid Robot Can Communicate Its Intent and Awareness." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158033.

Full text
Abstract:
Communication between humans and interactive robots will benefit if people have a clear mental model of the robots' intent and awareness. The aim of this thesis was to investigate how human-robot interaction is affected by the manipulation of social cues on the robot. The research questions were: How do social cues affect mental models of the Pepper robot, and how can a participatory design method be used for investigating how the Pepper robot could communicate intent and awareness? The hypothesis for the second question was that nonverbal cues would be preferred over verbal cues. An existing standard platform was used, SoftBank's Pepper, as well as state-of-the-art tasks from the RoboCup@Home challenge. The rule book and observations from the 2018 competition were thematically coded, and the themes gave rise to eight scenarios. A participatory design method called PICTIVE was used in a design study, where five student participants went through three phases (label, sketch and interview) to create a design for how the robot should communicate intent and awareness. The use of PICTIVE was a suitable way to extract a lot of design ideas. However, not all scenarios were optimal for the task. The design study confirmed the use of mediating physical attributes to alter the mental model of a humanoid robot to reach common ground. Further, it did not confirm the hypothesis that nonverbal cues would be preferred over verbal cues, though it did show that verbal cues alone would not be enough. This, however, needs to be further tested in live interactions.
APA, Harvard, Vancouver, ISO, and other styles
25

de Greeff, Joachim. "Interactive concept acquisition for embodied artificial agents." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1587.

Full text
Abstract:
An important capacity that is still lacking in intelligent systems such as robots is the ability to use concepts in a human-like manner. Indeed, the use of concepts has been recognised as fundamental to a wide range of cognitive skills, including classification, reasoning and memory. Intricately intertwined with language, concepts are at the core of human cognition; but despite a large body of research, their functioning is as yet not well understood. Nevertheless it remains clear that if intelligent systems are to achieve a level of cognition comparable to humans, they will have to possess the ability to deal with the fundamental role that concepts play in cognition. A promising manner in which conceptual knowledge can be acquired by an intelligent system is through ongoing, incremental development. In this view, a system is situated in the world and gradually acquires skills and knowledge through interaction with its social and physical environment. Important in this regard is the notion that cognition is embodied. As such, both the physical body and the environment shape the manner in which cognition, including the learning and use of concepts, operates. Through active partaking in the interaction, an intelligent system might influence its learning experience so as to make it more effective. This work presents experiments which illustrate how these notions of interaction and embodiment can influence the learning process of artificial systems. It shows how an artificial agent can benefit from interactive learning: rather than passively absorbing knowledge, the system actively partakes in its learning experience, yielding improved learning. Next, the influence of embodiment on perception is further explored in a case study concerning colour perception, which results in an alternative explanation for the question of why human colour experience is very similar amongst individuals despite physiological differences. Finally, experiments in which an artificial agent is embodied in a novel robot tailored for human-robot interaction illustrate how active strategies are also beneficial in an HRI setting in which the robot learns from a human teacher.
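The "active partaking" idea corresponds to what machine learning calls active learning. A generic uncertainty-sampling round, sketched below with scikit-learn as a stand-in (the thesis's actual learner differs), shows the core mechanism: the agent queries the teacher about the item it is least certain of.

```python
# Generic active-learning round via least-confident sampling; the classifier
# and data representation are illustrative, not the thesis's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_round(model, X_labeled, y_labeled, X_pool):
    """Fit on labeled data, then pick the pool item the model is least sure of."""
    model.fit(X_labeled, y_labeled)
    probs = model.predict_proba(X_pool)
    uncertainty = 1.0 - probs.max(axis=1)      # low top-class probability = unsure
    return int(np.argmax(uncertainty))         # index of the item to ask the teacher about

# Usage: model = LogisticRegression(max_iter=1000)
#        query_idx = active_learning_round(model, X_lab, y_lab, X_pool)
#        ...label X_pool[query_idx] via the teacher, move it to the labeled set...
```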
APA, Harvard, Vancouver, ISO, and other styles
26

Krzewska, Weronika. "ZERROR : Provoking ethical discussions of humanoid robots through speculative animation." Thesis, Malmö universitet, Institutionen för konst, kultur och kommunikation (K3), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-45975.

Full text
Abstract:
Robotics engineers' ongoing quest to create human-like robots has raised profound questions about the lack of attention to their ethical implications. The rapid progress and growth of humanoid robots is expected to have a significant impact on society and human psychology in the near future. Interaction Design is a multidisciplinary field in which designers are often encouraged to engage in important conversations and find solutions to complex problems. Animators, on the other hand, often use animated videos as metaphors to reflect on important matters present in our cultural and societal spheres. This study investigates the use of animation in Speculative Design settings as a material to bridge two communities - animators and roboticists - in order to foster ethical behaviors and influence future technology. The main result of the design process is a concept for a mobile platform that stimulates discussions on the ethical considerations of human relationships with humanoid robots through speculative animation. Moreover, the interactive platform enhances imagination, creativity and learning processes among its users.
APA, Harvard, Vancouver, ISO, and other styles
27

Thellman, Sam. "Social Dimensions of Robotic versus Virtual Embodiment, Presence and Influence." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-130645.

Full text
Abstract:
Robots and virtual agents grow rapidly in behavioural sophistication and complexity. They become better learners and teachers, cooperators and communicators, workers and companions. These artefacts – whose behaviours are not always readily understood by human intuition nor comprehensibly explained in terms of mechanism – will have to interact socially. Moving beyond artificial rational systems to artificial social systems means having to engage with fundamental questions about agenthood, sociality, intelligence, and the relationship between mind and body. It also means having to revise our theories about these things in the course of continuously assessing the social sufficiency of existing artificial social agents. The present thesis presents an empirical study investigating the social influence of physical versus virtual embodiment on people's decisions in the context of a bargaining task. The results indicate that agent embodiment did not affect the social influence of the agent or the extent to which it was perceived as a social actor. However, participants' perception of the agent as a social actor did influence their decisions. This suggests that experimental results from studies comparing different robot embodiments should not be over-generalised beyond the particular task domain in which the studied interactions took place.
APA, Harvard, Vancouver, ISO, and other styles
28

Bengtsson, Camilla, and Caroline Englund. "“Do you want to take a short survey?” : Evaluating and improving the UX and VUI of a survey skill in the social robot Furhat: a qualitative case study." Thesis, Linnéuniversitetet, Institutionen för informatik (IK), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-76923.

Full text
Abstract:
The purpose of this qualitative case study is to evaluate an early-stage survey skill developed for the social robot Furhat, and to look into how the user experience (UX) and voice user interface (VUI) of that skill can be improved. Several methods have been used: expert evaluations using heuristics for human-robot interaction (HRI), user evaluations including observations and interviews, as well as a quantitative questionnaire (RoSAS – Robotic Social Attributes Scale). The empirical findings have been classified into the USUS Evaluation Framework for Human-Robot Interaction. The user evaluations were performed in two modes: one group of informants talked and interacted with Furhat with the support of a graphical user interface (GUI), and the other group without the GUI. A positive user experience was identified in both modes, showing that the informants found interacting with Furhat a fun, engaging and interesting experience. The mode with the supportive GUI could be suitable in noisy environments and for longer surveys with many response alternatives to choose from, whereas the other mode could work better for less noisy environments and for shorter surveys. General improvements that can contribute to a better user experience in both modes were found, such as having the robot adopt a more human-like character when it comes to dialogue, facial expressions and movements, along with addressing a number of technical and usability issues.
APA, Harvard, Vancouver, ISO, and other styles
29

Marpaung, Andreas. "TOWARD BUILDING A SOCIAL ROBOT WITH AN EMOTION-BASED INTERNAL CONTROL." Master's thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3901.

Full text
Abstract:
In this thesis, we aim at modeling some aspects of the functional role of emotions in an autonomous embodied agent. We begin by describing our robotic prototype, Cherry--a robot with the task of being a tour guide and an office assistant for the Computer Science Department at the University of Central Florida. Cherry did not have a formal emotion representation of internal states, but did have the ability to express emotions through her multimodal interface. The thesis presents the results of a survey we performed via our social informatics approach, where we found that: (1) the idea of having emotions in a robot was warmly accepted by Cherry's users, and (2) the intended users were pleased with our initial interface design and functionalities. Guided by these results, we transferred our previous code to a human-height and more robust robot--Petra, the PeopleBot™--where we began to build a formal emotion mechanism and representation for internal states to correspond to the external expressions of Cherry's interface. We describe our overall three-layered architecture and propose a design for the sensory-motor level (the first layer of the three-layered architecture) inspired by the Multilevel Process Theory of Emotion on the one hand, and hybrid robotic architectures on the other. The sensory-motor level receives and processes incoming stimuli with fuzzy logic and produces emotion-like states without any further willful planning or learning. We discuss how Petra has been equipped with sonar and vision for obstacle avoidance, as well as vision for face recognition, which are used when she roams around the hallway to engage in social interactions with humans. We hope that the sensory-motor level in Petra can serve as a foundation for further work in modeling the three-layered architecture of the Emotion State Generator.
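To give a flavour of how fuzzy logic can turn raw stimuli into emotion-like states at a sensory-motor level, here is a toy sketch; the membership functions, inputs and emotion labels are invented for illustration and do not reproduce Petra's implementation.

```python
# Toy fuzzy-logic mapping from normalised sensor readings to emotion-like
# activations; all shapes and labels are invented for illustration.
def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def emotion_state(obstacle_proximity, face_detected):
    """Blend readings in [0, 1] into a dictionary of emotion-like activations."""
    return {
        "fear":     triangular(obstacle_proximity, 0.5, 1.0, 1.5),
        "interest": triangular(face_detected, 0.2, 0.8, 1.2),
        "calm":     triangular(obstacle_proximity, -0.5, 0.0, 0.6),
    }

# e.g. emotion_state(0.9, 0.1) -> high "fear", low "interest", no "calm"
```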
M.S.
School of Computer Science
Engineering and Computer Science
Computer Science
APA, Harvard, Vancouver, ISO, and other styles
30

Stival, Francesca. "Subject-Independent Frameworks for Robotic Devices: Applying Robot Learning to EMG Signals." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3426704.

Full text
Abstract:
The capability of having humans and robots cooperate has increased the interest in controlling robotic devices by means of physiological human signals. In order to achieve this goal it is crucial to be able to capture the human intention of movement and to translate it into a coherent robot action. Up to now, the classical approach when considering physiological signals, and in particular EMG signals, has been to focus on the specific subject performing the task, given the great complexity of these signals. This thesis aims to expand the state of the art by proposing a general subject-independent framework, able to extract the common constraints of human movement by looking at several demonstrations by many different subjects. The variability introduced into the system by multiple demonstrations from many different subjects allows the construction of a robust model of human movement, able to cope with small variations and signal deterioration. Furthermore, the obtained framework can be used by any subject with no need for long training sessions. The signals undergo an accurate preprocessing phase in order to remove noise and artefacts. Following this procedure, we are able to extract significant information to be used in online processes. The human movement can be estimated by using well-established statistical methods from Robot Programming by Demonstration applications; in particular, the input can be modelled using a Gaussian Mixture Model (GMM). The performed movement can be continuously estimated with a Gaussian Mixture Regression (GMR) technique, or it can be identified among a set of possible movements with a Gaussian Mixture Classification (GMC) approach. We improved the results by incorporating prior information in the model, in order to enrich the knowledge of the system. In particular, we considered the hierarchical information provided by a quantitative taxonomy of hand grasps. To this end, we developed the first quantitative taxonomy of hand grasps, considering both muscular and kinematic information from 40 subjects. The results proved the feasibility of a subject-independent framework, even when considering physiological signals, like EMG, from a large number of participants. The proposed solution has been used in two different kinds of applications: (I) the control of prosthetic devices, and (II) an Industry 4.0 facility, in order to allow humans and robots to work alongside each other or to cooperate. Indeed, a crucial aspect in making humans and robots work together is their mutual knowledge and anticipation of each other's task, and physiological signals can provide a signal even before the movement has started. In this thesis we also propose an application of Robot Programming by Demonstration in a real industrial facility, in order to optimize the production of electric motor coils. The task was part of the European Robotic Challenge (EuRoC), and the goal was divided into phases of increasing complexity. This solution exploits Machine Learning algorithms, like GMM, and its robustness was ensured by considering demonstrations of the task from many subjects. We have been able to apply an advanced research topic to a real factory, achieving promising results.
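For readers unfamiliar with GMR, the following sketch shows the standard construction: fit a joint GMM over stacked input-output samples, then regress the output as the mixture-weighted conditional mean. scikit-learn and SciPy are used here as stand-ins for the thesis's actual tooling.

```python
# Minimal Gaussian Mixture Regression (GMR): predict E[y | x] from a GMM
# fit on stacked [x, y] samples. Dimensions and data are placeholders.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def gmr_predict(gmm, x, d_in):
    """gmm: GaussianMixture fit on np.hstack([X, Y]); x: (d_in,) query input."""
    weights = np.empty(gmm.n_components)
    cond_means = []
    for k in range(gmm.n_components):
        mu, cov = gmm.means_[k], gmm.covariances_[k]
        mu_x, mu_y = mu[:d_in], mu[d_in:]
        cov_xx, cov_yx = cov[:d_in, :d_in], cov[d_in:, :d_in]
        # Responsibility of component k for this input.
        weights[k] = gmm.weights_[k] * multivariate_normal.pdf(x, mu_x, cov_xx)
        # Conditional mean of y given x under component k.
        cond_means.append(mu_y + cov_yx @ np.linalg.solve(cov_xx, x - mu_x))
    weights /= weights.sum()
    return np.sum(weights[:, None] * np.array(cond_means), axis=0)

# Usage: gmm = GaussianMixture(5, covariance_type="full").fit(np.hstack([X, Y]))
#        y_hat = gmr_predict(gmm, x_new, d_in=X.shape[1])
```

GMC follows the same machinery: one GMM per movement class, with classification by the highest likelihood of the observed signal.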
APA, Harvard, Vancouver, ISO, and other styles
31

Schaffert, Carolin. "Safety system design in human-robot collaboration : Implementation for a demonstrator case in compliance with ISO/TS 15066." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263900.

Full text
Abstract:
A close collaboration between humans and robots is one approach to achieving flexible production flows and a high degree of automation at the same time. In human-robot collaboration, both entities work alongside each other in a fenceless, shared environment. These workstations combine human flexibility, tactile sense and intelligence with robotic speed, endurance and accuracy. This leads to improved ergonomic working conditions for the operator, better quality and higher efficiency. However, the widespread adoption of human-robot collaboration is limited by current safety legislation. Robots are powerful machines, and without spatial separation from the operator the risks increase drastically. The technical specification ISO/TS 15066 serves as a guideline for collaborative operations and supplements the international standard ISO 10218 for industrial robots. Because ISO/TS 15066 represents the first draft of a coming standard, companies have to gain knowledge in applying it. Currently, the guideline prohibits collisions with the head in transient contact. In this thesis work, a safety system is designed that complies with ISO/TS 15066 and uses certified safety technologies. Four theoretical safety system designs with a laser scanner as a presence-sensing device and a collaborative robot, the KUKA LBR iiwa, are proposed. The system either stops the robot motion, reduces the robot's speed and then triggers a stop, or only activates a stop after a collision between the robot and the human has occurred. In system 3, the size of the stop zone is decreased by combining the speed and separation monitoring principle with the power- and force-limiting safeguarding mode. The safety zones are static and are calculated according to the protective separation distance in ISO/TS 15066. A risk assessment is performed to reduce all risks to an acceptable level, leading to the final safety system design after three iterations. As a proof of concept, the final safety system design is implemented for a demonstrator in a laboratory environment at Scania. With a feasibility study, the implementation differences between theory and practice for the four proposed designs are identified and a feasible safety system behavior is developed. The robot reaction is realized through the safety configuration of the robot: three ESM states are defined to use the internal safety functions of the robot and to integrate the laser scanner signal. The laser scanner is connected as a digital input to the discrete safety interface of the robot controller. To sum up, this thesis work describes the safety system design with all implementation details.
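The protective separation distance mentioned above can be illustrated with a simplified, constant-speed reading of the ISO/TS 15066 speed-and-separation-monitoring formula; the numeric values in the usage comment are illustrative, not Scania's.

```python
# Simplified protective separation distance S_p for speed-and-separation
# monitoring (constant-speed approximation of the ISO/TS 15066 terms).
def protective_separation_distance(
    v_h,   # directed speed of the human towards the robot [m/s]
    v_r,   # directed speed of the robot towards the human [m/s]
    t_r,   # robot system reaction time [s]
    t_s,   # robot stopping time [s]
    c,     # intrusion distance (reach of a body part) [m]
    z_d,   # position uncertainty of the human (sensing) [m]
    z_r,   # position uncertainty of the robot [m]
):
    s_h = v_h * (t_r + t_s)   # distance the human covers until the robot stands still
    s_r = v_r * t_r           # distance the robot covers before it starts braking
    s_s = v_r * t_s           # robot stopping distance (worst-case approximation)
    return s_h + s_r + s_s + c + z_d + z_r

# e.g. protective_separation_distance(1.6, 0.5, 0.1, 0.3, 0.2, 0.1, 0.05) ≈ 1.19 m
```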
APA, Harvard, Vancouver, ISO, and other styles
32

Wåhlin, Peter. "Enhanching the Human-Team Awareness of a Robot." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-16371.

Full text
Abstract:
The use of autonomous robots in our society is increasing every day, and a robot is no longer seen as a tool but as a team member. Robots now work side by side with us and provide assistance during dangerous operations where humans would otherwise be at risk. This development has in turn increased the need for robots with more human-awareness. Therefore, this master thesis aims at contributing to the enhancement of human-aware robotics. Specifically, we investigate the possibilities of equipping autonomous robots with the capability of assessing and detecting activities in human teams. This capability could, for instance, be used in the robot's reasoning and planning components to create better plans that would ultimately result in improved human-robot teamwork performance. We propose to improve existing teamwork activity recognizers by adding intangible features, such as stress, motivation and focus, originating from human behavior models. Hidden Markov models have earlier proven very efficient for activity recognition and have therefore been utilized in this work as a method for classification of behaviors. In order for a robot to provide effective assistance to a human team, it must consider not only spatio-temporal parameters of the team members but also psychological ones. To assess psychological parameters, this master thesis suggests using the body signals of team members, such as heart rate and skin conductance. Combined with the body signals, we investigate the possibility of using System Dynamics models to interpret the current psychological states of the human team members, thus enhancing the human-awareness of a robot.
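A minimal sketch of the HMM classification scheme described here: train one HMM per behavior class on feature sequences (e.g. positions plus heart rate and skin conductance) and label a new sequence by maximum log-likelihood. hmmlearn is assumed as a stand-in library, not the thesis's actual tooling.

```python
# HMM-based behavior classification: one GaussianHMM per class, argmax
# log-likelihood at recognition time. Feature choice is illustrative.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(sequences_per_class):
    """sequences_per_class: {label: [ (T_i, d) np.ndarray, ... ]}."""
    models = {}
    for label, seqs in sequences_per_class.items():
        X = np.vstack(seqs)                  # concatenated observations
        lengths = [len(s) for s in seqs]     # per-sequence lengths for fitting
        models[label] = GaussianHMM(n_components=4).fit(X, lengths)
    return models

def classify(models, sequence):
    """Return the behavior label whose HMM best explains the sequence."""
    return max(models, key=lambda label: models[label].score(sequence))
```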

The thesis work was conducted in Kista, Stockholm, at the Department of Informatics and Aero Systems at the Swedish Defence Research Agency.

APA, Harvard, Vancouver, ISO, and other styles
33

Ajulo, Morenike. "Interactive text response for assistive robotics in the home." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34725.

Full text
Abstract:
In a home environment, there are many tasks that a human may need to accomplish. These activities, which range from picking up a telephone to clearing rooms in the house, share the common element of fetching. Such tasks can only be completed correctly with consideration of many things, including an understanding of what the human wants, recognition of the correct item in the environment, and manipulation and grasping of the object of interest. The focus of this work is on addressing one aspect of this problem: decomposing an image scene such that a task-specific object of interest can be identified. In this work, communication between human and robot is represented using a feedback formalism. This involves the back-and-forth transfer of textual information between the human and the robot such that the robot receives all information necessary to recognize the task-specific object of interest. We name this new communication mechanism Interactive Text Response (ITR), which we believe provides a novel contribution to the field of Human-Robot Interaction. The methodology employed involves capturing a view of the scene that contains an object of interest. The robot then makes inquiries based on its current understanding of the scene to disambiguate between objects in the scene. In this work, we discuss the development of ITR in human-robot interaction, and an understanding of the variability, ease of recognition, clutter, and workload needed to develop an interactive robot system.
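The back-and-forth inquiry loop of ITR can be caricatured as attribute-based filtering of candidate objects; the attribute vocabulary and question wording below are invented for illustration and are not the thesis's actual dialogue policy.

```python
# Toy disambiguation loop: ask yes/no attribute questions until one
# candidate object remains. `ask` poses a question and returns a bool.
def disambiguate(candidates, ask):
    """candidates: objects as attribute dicts, e.g. {"color": "red", "shape": "mug", ...}."""
    for attribute in ("color", "shape", "location"):
        if len(candidates) <= 1:
            break
        values = {obj[attribute] for obj in candidates}
        for value in values:
            if len(values) > 1 and ask(f"Is the object you want {value}?"):
                candidates = [o for o in candidates if o[attribute] == value]
                break
    return candidates[0] if candidates else None
```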
APA, Harvard, Vancouver, ISO, and other styles
34

Papadopoulos, Fotios. "Socially interactive robots as mediators in human-human remote communication." Thesis, University of Hertfordshire, 2012. http://hdl.handle.net/2299/9151.

Full text
Abstract:
This PhD work was partially supported by the European LIREC project (Living with robots and interactive companions), a collaboration of 10 EU partners that aims to develop a new generation of interactive and emotionally intelligent companions capable of establishing and maintaining long-term relationships with humans. The project takes a multi-disciplinary approach towards investigating methods that allow robotic companions to perceive, remember and react to people in order to enhance the companion's awareness of sociability in domestic environments (e.g. remind a user and provide useful information, carry heavy objects etc.). One of the project's scenarios concerns remote human-human communication enhancement utilising autonomous robots as social mediators, which is the focus of this PhD thesis. This scenario involves a remote communication situation between two distant users who wish to utilise their robot companions in order to enhance their communication and interaction experience with each other over the internet. The scenario derives from the need for communication between people who are separated from their relatives and friends due to work commitments or other personal obligations. Even for people who live close by, communication mediated by modern technologies has become widespread. However, even with the use of video communication, users are still missing an important medium of interaction that has received much less attention over the past years: touch. The purpose of this thesis was to develop autonomous robots as social mediators in a remote human-human communication scenario in order to allow the users to use touch and other modalities on the robots. This thesis addressed the following research questions: Can an autonomous robot be a social mediator in human-human remote communication? How does an autonomous robotic mediator compare to a conventional computer interface in facilitating users' remote communication? Which methodology should be used for qualitative and quantitative measurements of local user-robot and user-user social remote interactions? In order to answer these questions, three different communication platforms were developed during this research, each addressing a number of research questions. The first platform (AIBOcom) allowed two distant users to collaborate in a virtual environment by utilising their autonomous robotic companions during their communication. Two pet-like robots, which interact individually with two remotely communicating users, allowed the users to play an interactive game cooperatively. The study tested two experimental conditions, characterised by two different modes of synchronisation between the robots that were located locally with each user. In one mode the robots incrementally affected each other's behaviour, while in the other mode the robots mirrored each other's behaviour. This study aimed to identify users' preferences for robot-mediated human-human interactions in these two modes, as well as investigating users' overall acceptance of such communication media. Findings indicated that users preferred the mirroring mode and that, in this pilot study, robot-assisted remote communication was considered desirable and acceptable to the users. The second platform (AiBone) explored the effects of an autonomous robot on human-human remote communication and studied participants' preferences in comparison with a communication system not involving robots.
We developed a platform for remote human-human communication in the context of a collaborative computer game. The exploratory study involved twenty pairs of participants who communicated using video conference software. Participants expressed more social cues and shared more of their game experiences with each other when using the robot. However, analysis of the interactions of the participants with each other and with the robot shows that it is difficult for participants to familiarise themselves quickly with the robot, while they can perform the same task more efficiently with conventional devices. Finally, our third platform (AIBOStory) was based on remote interactive storytelling software that allowed users to create and share common stories through an integrated, autonomous robot companion acting as a social mediator between two people. The behaviour of the robot was inspired by dog behaviour and used a simple computational memory model. An initial pilot study evaluated the proposed system's use and acceptance by the users. Five pairs of participants were exposed to the system, with the robot acting as a social mediator, and the results suggested an overall positive acceptance response. The main study involved long-term interactions of 20 participants in order to compare their preferences between two modes: using the game enhanced with an autonomous robot, and a non-robot mode. The data was analysed using quantitative and qualitative techniques to measure user preference and Human-Robot Interaction. The statistical analysis suggests user preferences towards the robot mode. Furthermore, results indicate that users utilised the memory feature, which was an integral part of the robot's control architecture, increasingly more as the sessions progressed. Results derived from the three main studies supported our argument that domestic robots can be used as social mediators in remote human-human communication and offered an enhanced experience during the users' interactions with both the robots and each other. Additionally, it was found that the presence of intelligent robots in the communication can increase the number of social cues exhibited between the users, and that such robots are preferred over conventional interactive devices such as computer keyboard and mouse.
APA, Harvard, Vancouver, ISO, and other styles
35

Saleh, Diana. "Interaction Design for Remote Control of Military Unmanned Ground Vehicles." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-174074.

Full text
Abstract:
The fast technological development of military unmanned ground vehicles (UGVs) has created a considerable demand to explore the soldier's role in an interactive UGV system. This thesis explores how to design interactive UGV systems for infantry soldiers in the Swedish Armed Forces. This was done through a user-centered design approach in three steps: (1) identifying the design drivers of the targeted military context through qualitative observations and user interviews, (2) using the design drivers to investigate concepts for controlling the UGV, and (3) creating and evaluating a prototype of an interactive UGV system design. Results from the interviews indicated that the design drivers depend on the physical and psychological context of the intended soldiers. In addition, exploring the different concepts showed that early conceptual designs helped the users express their needs of a non-existing system. Furthermore, the results indicate that an interactive UGV system does not necessarily need to be at the highest level of autonomy in order to be useful for soldiers in the field. The final prototype of an interactive UGV system was evaluated using a demonstration video, a Technology Acceptance Model (TAM), and semi-structured user interviews. Results from this evaluation suggested that the soldiers see the potential usefulness of an interactive UGV system but are not entirely convinced. In conclusion, this thesis argues that in order to design an interactive UGV system, the most critical aspect is the soldiers' acceptance of the new system. Moreover, for soldiers to accept the concept of military UGVs, it is necessary to understand the context of use and the needs of the soldiers. This is done by involving the soldiers from the conceptual design process onwards and then throughout the development phases.
APA, Harvard, Vancouver, ISO, and other styles
36

Velor, Tosan. "A Low-Cost Social Companion Robot for Children with Autism Spectrum Disorder." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41428.

Full text
Abstract:
Robot-assisted therapy is becoming increasingly popular. Research has proven it can be of benefit to persons dealing with a variety of disorders, such as Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD), and it can also provide a source of emotional support, e.g. to persons living in seniors' residences. The advancement of technology and the decrease in cost of products related to consumer electronics, computing and communication have enabled the development of more advanced social robots at a lower cost. This brings us closer to developing such tools at a price that makes them affordable to lower-income individuals and families. Currently, in several cases, treatment for patients with certain disorders that is intensive enough to be effective is practically unattainable through the public health system, due to resource limitations and a large existing backlog. Pursuing treatment through the private sector is expensive and unattainable for those with a lower income, placing them at a disadvantage. Design and effective integration of technology, such as using social robots in treatment, reduces the cost considerably, potentially making it financially accessible to lower-income individuals and families in need. The objective of the research reported in this manuscript is to design and implement a social robot that meets the low-cost criteria while also providing the functions required to support children with ASD. The design builds on knowledge acquired in past research involving the use of various types of technology for the treatment of mental and/or emotional disabilities.
APA, Harvard, Vancouver, ISO, and other styles
37

Hansson, Emmeli. "Investigating Augmented Reality for Improving Child-Robot Interaction." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-258009.

Full text
Abstract:
Communication in HRI, both verbal and non-verbal, can be hard for a robot to interpret and to convey, which can lead to misinterpretations by both the human and the robot. In this thesis we address the question of whether AR can be used to improve the communication of a social robot's intentions when interacting with children. We looked at behaviors such as getting children to pick up a cube, place a cube, give the cube to another child, tap the cube and shake the cube. We found that picking up the cube was the most successful and reliable behavior and that most behaviors were slightly better with AR. Additionally, endorsement behavior was found to be necessary to engage the children; however, it needs to be quicker, more responsive and clearer. In conclusion, there is potential for using AR to improve the intent communication of a robot, but in many cases the robot behavior alone was already quite clear. A larger study would need to be conducted to explore this further.
APA, Harvard, Vancouver, ISO, and other styles
38

Lindelöf, Gabriel Trim Olof. "Moraliska bedömningar av autonoma systems beslut." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166543.

Full text
Abstract:
Society is developing in a direction where humans work in ever closer collaboration with artificial agents. For this collaboration to be on the user's terms, it is important to understand how humans perceive and relate to these systems. How these agents are judged morally is one component of this understanding. Malle et al. (2015) conducted one of the first studies on how norms and blame are applied to humans versus robots. The same article called for more research on which agent factors influence moral judgments. The present study took this question as its starting point and aimed to investigate how moral acceptability and blame attribution differed depending on whether the agent was a human, a humanoid robot, or a disembodied autonomous intelligent system (AIS). A between-groups experiment (N = 119) was used to examine how the agents were judged for their decisions in three different moral dilemmas. Participants' justifications for their judgments, as well as their adoption of the intentional stance, were explored as explanatory models for any differences. The intentional stance refers to Dennett's (1971) theory of whether an agent is understood in terms of mental properties. The results showed that the human and the robot received similar acceptability ratings for their decisions, while the AIS received significantly lower averages. The degree of blame attributed did not differ significantly between the agents. The analysis of participants' justifications indicated that blame judgments of the artificial agents were not based on the kind of information assumed to underlie such judgments. Several justifications also pointed out that someone other than the artificial agents bore the blame for the decisions. Further analyses indicated that participants adopted the intentional stance towards the human to the greatest extent, followed by the robot and then the AIS. The study raises questions about whether blame as a phenomenon can be applied to artificial agents and to what extent distributed blame is a factor when artificial agents are judged.
APA, Harvard, Vancouver, ISO, and other styles
39

Kruse, Thibault. "Planning for human robot interaction." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30059/document.

Full text
Abstract:
The recent advances in robotics inspire visions of household and service robots making our lives easier and more comfortable. Such robots will be able to perform several object manipulation tasks required for household chores, autonomously or in cooperation with humans. In that role of human companion, the robot has to satisfy many additional requirements compared to the well-established fields of industrial robotics. The purpose of planning for robots is to achieve robot behavior that is goal-directed and establishes correct results. But in human-robot interaction, robot behavior cannot merely be judged in terms of correct results; it must also be agreeable to human stakeholders. This means that the robot behavior must satisfy additional quality criteria. It must be safe, comfortable for humans, and intuitively understood. There are established practices to ensure safety and provide comfort by keeping sufficient distances between the robot and nearby persons. However, providing behavior that is intuitively understood remains a challenge. This challenge greatly increases in cases of dynamic human-robot interactions, where the future actions of the human are unpredictable and the robot needs to constantly adapt its plans to changes. This thesis provides novel approaches to improve the legibility of robot behavior in such dynamic situations. Key to the approach is not merely to consider the quality of a single plan, but the behavior of the robot as a result of replanning multiple times during an interaction. For navigation planning, this thesis introduces directional cost functions that avoid problems in conflict situations. For action planning, this thesis provides an approach for local replanning of transport actions based on navigational costs, to provide opportunistic behavior. Both measures help human observers understand the robot's beliefs and intentions during interactions and reduce confusion.
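One simple way to realise a directional cost function of the kind described (the thesis's exact formulation differs and is not reproduced here) is to penalise grid moves that head towards a nearby human, scaled by proximity:

```python
# Illustrative directional cost for grid navigation around a person: a move
# costs more the more directly it heads towards the human, and the closer
# the human is. Weights and form are invented for illustration.
import math

def directional_cost(cell, move_dir, human_pos, base_cost=1.0, w=2.0):
    """cell, human_pos: (x, y); move_dir: unit vector of the robot's motion."""
    to_human = (human_pos[0] - cell[0], human_pos[1] - cell[1])
    dist = math.hypot(*to_human) or 1e-6
    # Cosine between motion direction and direction to the human.
    heading = (move_dir[0] * to_human[0] + move_dir[1] * to_human[1]) / dist
    penalty = w * max(0.0, heading) / dist   # only approaching motion is penalised
    return base_cost + penalty
```

A planner such as A* can then use this cost per grid transition, so that paths skirting a person sideways stay cheap while head-on approaches become expensive.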
APA, Harvard, Vancouver, ISO, and other styles
40

Bodiroža, Saša. "Gestures in human-robot interaction." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17705.

Full text
Abstract:
Gestures consist of movements of body parts and are a means of communication that conveys information or intentions to an observer. Therefore, they can be effectively used in human-robot interaction, or in general in human-machine interaction, as a way for a robot or a machine to infer a meaning. In order for people to intuitively use gestures and understand robot gestures, it is necessary to define mappings between gestures and their associated meanings -- a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary displays which robot gestures are deemed fitting for a particular meaning. Effective use of vocabularies depends on techniques for gesture recognition, which concerns the classification of body motion into discrete gesture classes, relying on pattern recognition and machine learning. This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, focusing on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained, and experimental results are presented. As a result of the robot gesture vocabulary experiment, an evolutionary approach for the refinement of robot gestures is introduced, based on interactive genetic algorithms. A robust and well-performing gesture recognition algorithm based on dynamic time warping has been developed. Most importantly, it employs one-shot learning, meaning that it can be trained using a low number of training samples and employed in real-life scenarios, lowering the effect of environmental constraints and gesture features. Finally, an approach for learning a relation between self-motion and pointing gestures is presented.
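A one-shot DTW recogniser of the kind described can be written compactly; this sketch uses plain NumPy and nearest-template classification, without the optimisations (warping windows, normalisation) a production recogniser would need.

```python
# One-shot gesture recognition: dynamic time warping (DTW) distance between
# a query trajectory and a single stored template per gesture class.
import numpy as np

def dtw_distance(a, b):
    """a, b: (T, d) trajectories, e.g. hand positions over time."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])     # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(query, templates):
    """templates: {gesture_label: one recorded example} -- one-shot learning."""
    return min(templates, key=lambda g: dtw_distance(query, templates[g]))
```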
APA, Harvard, Vancouver, ISO, and other styles
41

Akan, Batu. "Human Robot Interaction Solutions for Intuitive Industrial Robot Programming." Licentiate thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-14315.

Full text
Abstract:
Over the past few decades the use of industrial robots has increased the efficiency as well as the competitiveness of many companies. Despite this, robot automation investments are in many cases considered technically challenging. In addition, for most small and medium sized enterprises (SME) this process is associated with high costs. Due to continuously changing product lines, reprogramming costs are likely to exceed installation costs by a large margin. Furthermore, traditional programming methods for industrial robots are too complex for an inexperienced robot programmer, so assistance from a robot programming expert is often needed. We hypothesize that in order to make industrial robots more common within the SME sector, the robots should be reprogrammable by technicians or manufacturing engineers rather than robot programming experts. In this thesis we propose a high-level natural language framework for interacting with industrial robots through an instructional programming environment for the user. The ultimate goal of this thesis is to bring robot programming to a stage where it is as easy as working together with a colleague. We mainly address two issues. The first is to make interaction with a robot easier and more natural through a multimodal framework. The proposed language architecture makes it possible to manipulate, pick, or place objects in a scene through high-level commands. Interaction with simple voice commands and gestures enables the manufacturing engineer to focus on the task itself rather than on programming issues of the robot. This approach shifts the focus of industrial robot programming from the coordinate-based programming paradigm, which currently dominates the field, to an object-based programming scheme. The second issue is a general framework for implementing multimodal interfaces. There have been numerous efforts to implement multimodal interfaces for computers and robots, but there is no general standard framework for developing them. The framework proposed in this thesis is designed to perform natural language understanding, multimodal integration, and semantic analysis with an incremental pipeline, and includes a novel multimodal grammar language used for multimodal presentation and semantic meaning generation.
robot colleague project
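As a hedged illustration of the object-based (rather than coordinate-based) programming scheme described in this abstract, the sketch below maps a simple voice command onto objects in a scene model; the command verbs, scene representation, and fallback are assumptions, not the thesis's actual grammar:

    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        name: str
        pose: tuple  # object coordinates maintained by the perception system

    def parse_command(utterance, scene):
        # Resolve a spoken command against the scene model, so the engineer
        # refers to objects ("pick the valve") instead of coordinates.
        tokens = utterance.lower().split()
        verbs = {"pick": "PICK", "grasp": "PICK", "place": "PLACE", "put": "PLACE"}
        action = next((verbs[t] for t in tokens if t in verbs), None)
        target = next((obj for obj in scene if obj.name in tokens), None)
        if action and target:
            return (action, target)
        return None  # ambiguous command: ask the user for clarification

    # Example: parse_command("pick the valve", [SceneObject("valve", (0.4, 0.1, 0.2))])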
APA, Harvard, Vancouver, ISO, and other styles
42

Topp, Elin Anna. "Human-Robot Interaction and Mapping with a Service Robot : Human Augmented Mapping." Doctoral thesis, Stockholm : School of computer science and communication, KTH, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Huang, Chien-Ming. "Joint attention in human-robot interaction." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/41196.

Full text
Abstract:
Joint attention, a crucial component in interaction and an important milestone in human development, has recently drawn much attention from the robotics community. Robotics researchers have studied and implemented joint attention for robots to achieve natural human-robot interaction and to facilitate social learning. Most previous work on realizing joint attention in robotics has focused only on responding to joint attention and/or initiating joint attention. Responding to joint attention is the ability to follow another's direction of gaze and gestures in order to share common experience. Initiating joint attention is the ability to direct another's attention to a focus of interest in order to share experience. A third important component of joint attention is ensuring, whereby the initiator verifies that the responder has shifted their attention. However, to the best of our knowledge, no work has explicitly addressed the ability of a robot to ensure that joint attention is reached by the interacting agents. We refer to this ability as ensuring joint attention and recognize its importance in human-robot interaction. We propose a computational model of joint attention consisting of three parts: responding to joint attention, initiating joint attention, and ensuring joint attention. This modular decomposition is supported by psychological findings and matches the human developmental timeline. Infants start with the skill of following a caregiver's gaze, and then exhibit imperative and declarative pointing gestures to get a caregiver's attention. Importantly, as they age and their social skills mature, initiating actions often come with an ensuring behavior: looking back and forth between the caregiver and the referred object to check whether the caregiver is attending to it. We conducted two experiments to investigate joint attention in human-robot interaction. The first explored the effects of responding to joint attention. We hypothesized that humans would find robots that respond to joint attention more transparent, more competent, and more socially interactive. Transparency helps people understand a robot's intention, facilitating better human-robot interaction, and positive perception of a robot improves the human-robot relationship. Our hypotheses were supported by quantitative data, questionnaire results, and behavioral observations. The second experiment studied the importance of ensuring joint attention. The results confirmed our hypotheses that robots that ensure joint attention yield better performance in interactive human-robot tasks and that ensuring behaviors are perceived as natural by humans. The findings suggest that social robots should use ensuring joint attention behaviors.
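The ensuring component lends itself to a simple control loop. Below is a hedged sketch, assuming hypothetical robot and partner interfaces with gaze primitives (point_at, look_at, gaze_target); the thesis's computational model is richer than this:

    def ensure_joint_attention(robot, partner, target, max_checks=3):
        # Initiate joint attention, then ensure it: alternate gaze between the
        # partner and the referent until the partner is seen attending to it.
        robot.point_at(target)
        for _ in range(max_checks):
            robot.look_at(partner)
            if partner.gaze_target() == target:  # responder has shifted attention
                return True
            robot.look_at(target)  # re-cue the referent and check again
        return False  # joint attention not reached; escalate, e.g. speak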
APA, Harvard, Vancouver, ISO, and other styles
44

Bremner, Paul. "Conversational gestures in human-robot interaction." Thesis, University of the West of England, Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.557106.

Full text
Abstract:
Humanoid service robotics is a rapidly developing field of research. One desired purpose of such service robots is for them to interact and cooperate with people. In order to do so successfully they need to communicate effectively. One way of achieving this is for humanoid robots to communicate in a human-like way, resulting in easier, more familiar, and ultimately more successful human-robot interaction. An integral part of human communication is co-verbal gesture; thus, this thesis investigates a means of producing such gestures and whether they engender the desired effects. In order for gestures to be produced using BERTI (Bristol and Elumotion Robotic Torso I), the robot designed and built for this work, a means of coordinating the joints to produce the required hand motions was necessary. A relatively simple method for doing so is proposed which produces motion sharing characteristics with proposed mathematical models of human arm movement, i.e., smooth and direct motion. It was then investigated whether, as hypothesised, gestures produced using this method were recognisable and positively perceived by users. A series of user studies showed that the gestures were indeed as recognisable as their human counterparts, and positively perceived. To enable users to form more confident opinions of the gestures, to investigate whether improvements in human-likeness would affect user perceptions, and to enable investigation of the effects of robotic gestures on listener behaviour, methods for producing gesture sequences were developed. Sufficient procedural information for gesture production was not present in the anthropological literature, so empirical evidence was sought from monologue performances. This resulted in a novel set of rules for the production of beat gestures (a key type of co-verbal gesture), as well as other important procedural methods; these were used to produce a two-minute monologue with accompanying gestures. A user study carried out using this monologue reinforced the previous finding that positively perceived gestures were produced. It also showed that gesture sequences using beat gestures generated with the rules were not significantly preferred over those containing only naively selected pre-scripted beat gestures, demonstrating that minor improvements in human-likeness offered no significant benefit in user perception. Gestures have been shown in anthropological studies to have positive effects on listener engagement and memory of the accompanied speech. This thesis investigated the hypothesis that similar effects would be observed when BERTI performed co-verbal gestures. A highly significant improvement in user engagement was found, as well as a significant improvement in the certainty of recalled information. Thus, some of the expected effects of co-verbal gesture were observed.
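The "smooth and direct" property mentioned above is commonly captured by the minimum-jerk model of human arm movement. As an illustration only (the thesis's joint-coordination method may differ), a sketch:

    import numpy as np

    def minimum_jerk(q0, q1, steps=100):
        # Minimum-jerk interpolation between joint configurations q0 and q1:
        # zero velocity and acceleration at both endpoints, smooth in between.
        t = np.linspace(0.0, 1.0, steps)
        s = 10 * t**3 - 15 * t**4 + 6 * t**5  # smooth 0 -> 1 blend
        return q0 + np.outer(s, q1 - q0)      # one row of joint angles per step

    # Example: minimum_jerk(np.zeros(6), np.array([0.5, 0.2, 0.0, 1.0, 0.3, 0.1]))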
APA, Harvard, Vancouver, ISO, and other styles
45

Fiore, Michelangelo. "Decision Making in Human-Robot Interaction." Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0049/document.

Full text
Abstract:
There has been increasing interest in recent years in robots that are able to cooperate with humans not only as simple tools, but as full agents, able to execute collaborative activities in a natural and efficient way. In this work, we have developed an architecture for human-robot interaction able to execute joint activities with humans. We have applied this architecture to three different problems, which we call the robot observer, the robot coworker, and the robot teacher. After giving an overview of the main aspects of human-robot cooperation and of the architecture of our system, we detail these problems. In the observer problem the robot monitors the environment, analyzing perceptual data through geometric reasoning to produce symbolic information. We show how the system is able to infer humans' actions and intentions by linking physical observations, obtained by reasoning on humans' motions and their relationships with the environment, with planning and humans' mental beliefs, through a framework based on Markov decision processes and Bayesian networks. We show, in a user study, that this model approaches the human capacity to infer intentions. We also discuss the possible reactions the robot can execute after inferring a human's intention, and identify two proactive behaviors: correcting the human's belief by giving information that helps him correctly accomplish his goal, and physically helping him accomplish it. In the coworker problem the robot has to execute a cooperative task with a human. In this part we introduce the Human-Aware Task Planner, used in different experiments, and detail our plan management component. The robot is able to cooperate with humans in three different modalities: robot leader, human leader, and equal partners. We introduce the problem of task monitoring, where the robot observes human activities to understand whether they still follow the shared plan. We then describe how our robot executes actions in a safe and robust way, taking humans into account, and present a framework for achieving joint actions by continuously estimating the partner's activities and reacting accordingly. This framework uses hierarchical mixed observability Markov decision processes, which allow us to estimate variables such as the human's commitment to the task and to react accordingly, splitting the decision process across different levels. We present an example of a collaborative planner for the handover problem, followed by a set of laboratory experiments for a robot coworker scenario. Additionally, we introduce a novel multi-agent probabilistic planner based on Markov decision processes and discuss how it could enhance our plan management component. In the robot teacher problem we explain how the system's plan explanation and monitoring can be adapted to the user's knowledge of the task to perform. Using this idea, the robot explains in less detail tasks that the user has already performed several times, going into more depth on new tasks. We show, in a user study, that this adaptive behavior is perceived better by users than a system without this capacity. Finally, we present a case study of a human-aware robot guide. This robot is able to guide users with adaptive and proactive behaviors, changing its speed to adapt to their needs, proposing a new pace to better suit the task's objectives, and directly engaging users to offer help. This system was integrated with other components to deploy a robot at Schiphol Airport in Amsterdam, guiding groups of passengers to their flight gates. We performed user studies both in the laboratory and in the airport, demonstrating the robot's capacities and showing that it is appreciated by users.
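The intention-inference idea, linking observations of human motion to beliefs over goals, reduces at its core to Bayesian filtering. A minimal sketch, with hypothetical intentions and likelihoods standing in for the thesis's Bayesian network:

    def update_intention_belief(belief, likelihoods):
        # One Bayesian filtering step over hypothesised human intentions:
        # posterior(i) is proportional to prior(i) * P(observation | intention i).
        posterior = {i: belief[i] * likelihoods[i] for i in belief}
        z = sum(posterior.values())
        return {i: p / z for i, p in posterior.items()}

    # Example: a motion towards the cupboard raises P("fetch cup"):
    # update_intention_belief({"fetch cup": 0.5, "leave room": 0.5},
    #                         {"fetch cup": 0.8, "leave room": 0.2})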
APA, Harvard, Vancouver, ISO, and other styles
46

Alanenpää, Madelene. "Gaze detection in human-robot interaction." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428387.

Full text
Abstract:
The aim of this thesis is to track gaze direction in a human-robot interaction scenario. The interaction consisted of a participant playing a geographic game with three important objects on which participants could focus: a tablet, a shared touchscreen, and a robot (called Furhat). During the game, the participant was equipped with eye-tracking glasses. These collected a first-person view video as well as annotations consisting of the participant's center of gaze. In this thesis, I aim to use this data to detect the three important objects described above from the first-person video stream and discriminate whether the gaze of the person fell on one of the objects of importance and for how long. To achieve this, I trained an accurate and fast state-of-the-art object detector called YOLOv4. To ascertain that this was the correct object detector for this thesis, I compared YOLOv4 with its previous version, YOLOv3, in terms of accuracy and run time. YOLOv4 was trained with a data set of 337 images consisting of various pictures of tablets, television screens, and the Furhat robot. The trained program was used to extract the relevant objects for each frame of the eye-tracking video, and a parser was used to discriminate whether the gaze of the participant fell on the relevant objects and for how long. The result is a system that could determine, with an accuracy of 90.03%, what object the participant is looking at and for how long the participant is looking at that object.
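The parsing step described above, deciding whether the gaze point falls inside a detected bounding box and accumulating dwell time, can be sketched as follows; the tuple format for detections is an assumption about a typical YOLO-style output, not the thesis's exact interface:

    from collections import Counter

    def gaze_on_object(gaze_xy, detections):
        # Label of the first detected box containing the gaze point, else None.
        # detections: (label, x1, y1, x2, y2) tuples, e.g. from YOLOv4 output.
        x, y = gaze_xy
        for label, x1, y1, x2, y2 in detections:
            if x1 <= x <= x2 and y1 <= y <= y2:
                return label
        return None

    def dwell_times(frames, fps):
        # Per-object gaze durations; 'frames' yields (gaze_xy, detections) pairs.
        counts = Counter(gaze_on_object(g, d) for g, d in frames)
        counts.pop(None, None)  # frames where gaze fell on no tracked object
        return {label: n / fps for label, n in counts.items()}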
APA, Harvard, Vancouver, ISO, and other styles
47

Almeida, Luís Miguel Martins. "Human-robot interaction for object transfer." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/22374.

Full text
Abstract:
Master's in Mechanical Engineering
Robots come into physical contact with humans under a variety of circumstances to perform useful work. This thesis has the ambitious aim of contriving a solution for a simple case of physical human-robot interaction: an object transfer task. The work first presents a review of current research within the field of human-robot interaction, where two approaches are distinguished but simultaneously required: a pre-contact approach and interaction by contact. To achieve the proposed objectives, this dissertation addresses three major problems: (1) controlling the robot to perform the movements inherent in the transfer task, (2) the human-robot pre-contact interaction, and (3) the interaction by contact. The capabilities of a 3D sensor and of force/tactile sensors are explored in order to prepare the robot to hand over an object and to control the robot gripper actions, respectively. The software development is supported by the Robot Operating System (ROS) framework. Finally, experimental tests are conducted to validate the proposed solutions and to evaluate the system's performance. An object transfer task is achieved, even if refinements, improvements, and extensions are required to improve the solution's performance and range.
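The contact phase of such a handover is often implemented as a force-triggered release. A hedged sketch, assuming placeholder force_sensor and gripper driver objects (e.g., thin wrappers around ROS interfaces) rather than the dissertation's actual code:

    import time

    def handover_release(force_sensor, gripper, pull_threshold=2.0, timeout=10.0):
        # Hold the object until the measured pull force suggests the human has
        # taken hold, then open the gripper; give up after 'timeout' seconds.
        start = time.time()
        while time.time() - start < timeout:
            if force_sensor.read() > pull_threshold:  # pull force in newtons
                gripper.open()
                return True
            time.sleep(0.01)
        return False  # nobody took the object; keep holding or retract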
APA, Harvard, Vancouver, ISO, and other styles
48

Kaupp, Tobias. "Probabilistic Human-Robot Information Fusion." Thesis, The University of Sydney, 2008. http://hdl.handle.net/2123/2554.

Full text
Abstract:
This thesis is concerned with combining the perceptual abilities of mobile robots and human operators to execute tasks cooperatively. It is generally agreed that a synergy of human and robotic skills offers an opportunity to enhance the capabilities of today’s robotic systems, while also increasing their robustness and reliability. Systems which incorporate both human and robotic information sources have the potential to build complex world models, essential for both automated and human decision making. In this work, humans and robots are regarded as equal team members who interact and communicate on a peer-to-peer basis. Human-robot communication is addressed using probabilistic representations common in robotics. While communication can in general be bidirectional, this work focuses primarily on human-to-robot information flow. More specifically, the approach advocated in this thesis is to let robots fuse their sensor observations with observations obtained from human operators. While robotic perception is well-suited for lower level world descriptions such as geometric properties, humans are able to contribute perceptual information on higher abstraction levels. Human input is translated into the machine representation via Human Sensor Models. A common mathematical framework for humans and robots reinforces the notion of true peer-to-peer interaction. Human-robot information fusion is demonstrated in two application domains: (1) scalable information gathering, and (2) cooperative decision making. Scalable information gathering is experimentally demonstrated on a system comprised of a ground vehicle, an unmanned air vehicle, and two human operators in a natural environment. Information from humans and robots was fused in a fully decentralised manner to build a shared environment representation on multiple abstraction levels. Results are presented in the form of information exchange patterns, qualitatively demonstrating the benefits of human-robot information fusion. The second application domain adds decision making to the human-robot task. Rational decisions are made based on the robots’ current beliefs which are generated by fusing human and robotic observations. Since humans are considered a valuable resource in this context, operators are only queried for input when the expected benefit of an observation exceeds the cost of obtaining it. The system can be seen as adjusting its autonomy at run-time based on the uncertainty in the robots’ beliefs. A navigation task is used to demonstrate the adjustable autonomy system experimentally. Results from two experiments are reported: a quantitative evaluation of human-robot team effectiveness, and a user study to compare the system to classical teleoperation. Results show the superiority of the system with respect to performance, operator workload, and usability.
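The core fusion step, combining a robot observation with a human operator's report expressed through a human sensor model, can be illustrated for the Gaussian case; this is the standard product-of-Gaussians update, not necessarily the thesis's exact representation:

    def fuse_gaussian(mu_robot, var_robot, mu_human, var_human):
        # Product of two Gaussian estimates of the same quantity: a robot sensor
        # reading and a human report mapped through a human sensor model. Each
        # source is weighted by its inverse variance (its confidence).
        w = var_human / (var_robot + var_human)
        mu = w * mu_robot + (1.0 - w) * mu_human
        var = (var_robot * var_human) / (var_robot + var_human)
        return mu, var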
APA, Harvard, Vancouver, ISO, and other styles
49

Kaupp, Tobias. "Probabilistic Human-Robot Information Fusion." University of Sydney, 2008. http://hdl.handle.net/2123/2554.

Full text
Abstract:
PhD
APA, Harvard, Vancouver, ISO, and other styles
50

Ali, Muhammad. "Contribution to decisional human-robot interaction: towards collaborative robot companions." PhD thesis, INSA de Toulouse, 2012. http://tel.archives-ouvertes.fr/tel-00719684.

Full text
Abstract:
Human-robot interaction is entering an interesting phase where the relationship between a human and a robot is envisioned as a partnership rather than a simple master-slave relation. For this to become a reality, the robot needs to understand human behavior: it is not enough for it to react appropriately, it must also be socially proactive. To put such behavior into practice, the roboticist must draw on the already rich socio-cognitive science literature on humans. In this work, we identify the key elements of such an interaction in the context of a joint task, with a particular focus on how humans should collaborate to successfully carry out a joint action. We show how these elements can be applied to a robotic system in order to enrich social human-robot interaction for decision making. In this regard, a contribution to the management of the robot's high-level goals and to proactive behavior is presented, along with the description of a decisional collaboration model for a collaborative task with a human. The human-robot interaction study also shows the value of choosing well the moment of a communicative action during joint activities with a human.
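The timing question, when a communicative act is worth an interruption, can be phrased as a simple expected-utility test. A hedged one-liner, with all quantities hypothetical:

    def should_communicate(p_misbelief, benefit_correct, cost_interrupt):
        # Speak only when the expected gain from correcting the partner's belief
        # outweighs the cost of interrupting the joint activity.
        return p_misbelief * benefit_correct > cost_interrupt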
APA, Harvard, Vancouver, ISO, and other styles