Dissertations / Theses on the topic 'Human Robot Interaction (HRI)'
The following is a list of the top 50 dissertations and theses for research on the topic 'Human Robot Interaction (HRI)'.
Hüttenrauch, Helge. "From HCI to HRI : Designing Interaction for a Service Robot." Doctoral thesis, KTH, Numerisk Analys och Datalogi, NADA, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4255.
Wang, Yan. "Gendering Human-Robot Interaction: exploring how a person's gender impacts attitudes toward and interaction with robots." Association for Computing Machinery, 2014. http://hdl.handle.net/1993/24446.
Toris, Russell C. "Bringing Human-Robot Interaction Studies Online via the Robot Management System." Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-theses/1058.
Pai, Abhishek. "Distance-Scaled Human-Robot Interaction with Hybrid Cameras." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563872095430977.
Ponsler, Brett. "Recognizing Engagement Behaviors in Human-Robot Interaction." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/109.
Juri, Michael J. "Design and Implementation of a Modular Human-Robot Interaction Framework." DigitalCommons@CalPoly, 2021. https://digitalcommons.calpoly.edu/theses/2327.
Syrdal, Dag Sverre. "The impact of social expectation towards robots on human-robot interactions." Thesis, University of Hertfordshire, 2018. http://hdl.handle.net/2299/20962.
Holroyd, Aaron. "Generating Engagement Behaviors in Human-Robot Interaction." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/328.
Chadalavada, Ravi Teja. "Human Robot Interaction for Autonomous Systems in Industrial Environments." Thesis, Chalmers University of Technology, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-55277.
Full textMichalland, Arthur-Henri. "Main et Cognition : les relations bi-directionnelles entre processus cognitifs et motricité manuelle." Thesis, Montpellier 3, 2019. http://www.theses.fr/2019MON30012.
Full textThis thesis suggests that the haptic sense influences human cognitive processes. We were interested in mnesic, perceptive, and motor processes, and relied on two concepts from computational and embodied theories : recurrent sensorimotor patterns and the sensory anticipation that emerges from them. Our first line of research focused on the connections between anticipation of haptic features of a gesture, object recognition, and grip selection. The second line focused both on the link between haptic anticipation and action lateralization and on the impact of this anticipation on taking spatial and emotional clues into account to select and initiate an action. The third line focused on the motor strategies used by participants depending on the precision of their haptic anticipation, and tries to define control parameters that may facilitate human-robot interactions. Overall, this work shows that the haptic sense accompanies perception-action cycles of different durations, the longest being from action selection to its sensory terminal feedback, the shortest from the haptic afferent to alpha neuron efferent. The haptic sense is at the foundation of these cycles, and play a role in major cognitive functions
Rehfeld, Sherri. "THE IMPACT OF MENTAL TRANSFORMATION TRAINING ACROSS LEVELS OF AUTOMATION ON SPATIAL AWARENESS IN HUMAN-ROBOT INTERACTION." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3762.
Ph.D. dissertation, Department of Psychology, College of Sciences.
Tozadore, Daniel Carnieto. "Aplicação de um robô humanoide autônomo por meio de reconhecimento de imagem e voz em sessões pedagógicas interativas." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-04102016-110603/.
Educational Robotics is a growing area that uses robots to apply theoretical concepts discussed in class. However, robots usually offer limited interaction with users, which can be improved by using humanoid robots. This dissertation presents a project that combines computer vision techniques, social robotics, and speech synthesis and recognition to build an interactive system that leads educational sessions through a humanoid robot. The system can be trained with different content to be presented autonomously to users by the robot. Its application covers the use of the system as a tool in mathematics teaching for children. As a first approach, the system was trained to interact with children and recognize 3D geometric figures. The proposed scheme is based on modules, where each module is responsible for a specific function and includes a group of features for this purpose. In total there are four modules: Central Module, Dialog Module, Vision Module and Motor Module. The chosen robot was the humanoid NAO. For the Vision Module, the LEGION network and the VOCUS2 system were compared for object detection, and SVM and MLP for image classification. Google Speech Recognition and the NAOqi voice synthesizer API are used for spoken interaction. An interaction study was conducted with the Wizard-of-Oz technique to analyze the behavior of children and adapt the methods for better application results. Full system testing showed that small calibrations are sufficient for an interactive session with few errors. Children who experienced greater degrees of interaction from the robot felt more engaged and comfortable during the interactions, both in the experiments and when studying at home for the next sessions, compared to children who had contact with a lower level of interactivity. Alternating challenging and supportive behaviors brought better interaction results than a constant behavior.
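As a purely illustrative aside (not code from the dissertation): the Vision Module comparison between SVM and MLP classifiers described above could be sketched with scikit-learn roughly as follows, with synthetic feature vectors standing in for real image features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=300)                 # three shape classes, e.g. cube / sphere / pyramid
X = rng.normal(size=(300, 32)) + y[:, None]      # synthetic, loosely class-separable feature vectors

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)                                  # support vector machine
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X_tr, y_tr)  # small multilayer perceptron

print("SVM accuracy:", svm.score(X_te, y_te))
print("MLP accuracy:", mlp.score(X_te, y_te))
```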
Strineholm, Philippe. "Exploring Human-Robot Interaction Through Explainable AI Poetry Generation." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54606.
Aspernäs, Andreas. "Human-like Crawling for Humanoid Robots : Gait Evaluation on the NAO robot." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78761.
Full textVasalya, Ashesh. "Human and humanoid robot co-workers : motor contagions and whole-body handover." Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS112.
The work done in this thesis concerns the interactions between a human and the humanoid robot HRP-2Kai as co-workers in industrial scenarios. The research topics in the thesis are divided into two categories. In the context of non-physical human-robot interaction, the studies conducted in the first part of this thesis are mostly motivated by social interactions between human and humanoid robot co-workers, and deal with the implicit behavioural and cognitive aspects of interaction. In the context of physical human-robot interaction, the second part of this thesis is motivated by physical manipulation during object handover between human and humanoid robot co-workers in close proximity, using a whole-body control framework and locomotion.
We designed a paradigm and a repetitive task inspired by the industrial pick-and-place movement task. In the first HRI study, we examine the effect of motor contagions induced in participants during (which we call on-line contagions) and after (off-line contagions) the observation of the same movements performed by a human or a humanoid robot co-worker. The results from this study suggest that off-line contagions affect participants' movement velocity, while on-line contagions affect their movement frequency. Interestingly, our findings suggest that the nature of the co-worker (human or robot) tends to influence the off-line contagions significantly more than the on-line contagions.
Under the same paradigm and repetitive industrial task, we systematically varied the robot behaviour and observed whether and how the performance of a human participant is affected by the presence of the humanoid robot. We also investigated the effect of the physical form of the humanoid robot co-worker, where the torso and head were covered and only the moving arm was visible to the human participants. Later, we compared these behaviours with a human co-worker and examined how the observed behavioural effects scale with experience of robots. Our results show that both the human and the humanoid robot co-worker were able to affect the performance frequencies of the participants, while their task accuracy remained undisturbed. However, with the robot co-worker, this held only when the robot head and torso were visible and the robot made biological movements.
Next, in the pHRI study, we designed an intuitive bi-directional object handover routine between a human and a biped humanoid robot co-worker using whole-body control and locomotion. We designed models to predict and estimate the handover position in advance, along with estimating the grasp configuration of the object and of the active human hand during handover trials. We also designed a model to minimize the interaction forces during the handover of an object of unknown mass, along with the timing of the handover routine. We mainly focused on three key features of the handover, answering the questions of when (timing), where (position in space), and how (orientation and interaction forces) the handover takes place. We present a generalized handover controller in which both the human and the robot can select either hand to hand over and exchange the object. Furthermore, by utilizing a whole-body control configuration, our handover controller allows the robot to use both hands simultaneously during the object handover, depending upon the shape and size of the object that needs to be transferred. Finally, we explored the full capabilities of a biped humanoid robot and added a scenario in which the robot needs to proactively take a few steps in order to hand over or exchange the object with its human co-worker. We tested this scenario on the real humanoid robot HRP-2Kai, both when the human-robot dyad uses a single hand and when it uses both hands simultaneously.
Vogt, David. "Learning Continuous Human-Robot Interactions from Human-Human Demonstrations." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2018. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-233262.
Full textOPERTO, STEFANIA. "HRI: l’interazione tra esseri umani e macchine. Dall’interazione sociale all’interazione sociotecnica." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1057911.
Miners, William Ben. "Toward Understanding Human Expression in Human-Robot Interaction." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/789.
Full textAn intuitive method to minimize human communication effort with intelligent devices is to take advantage of our existing interpersonal communication experience. Recent advances in speech, hand gesture, and facial expression recognition provide alternate viable modes of communication that are more natural than conventional tactile interfaces. Use of natural human communication eliminates the need to adapt and invest time and effort using less intuitive techniques required for traditional keyboard and mouse based interfaces.
Although the state of the art in natural but isolated modes of communication achieves impressive results, significant hurdles must be conquered before communication with devices in our daily lives will feel natural and effortless. Research has shown that combining information between multiple noise-prone modalities improves accuracy. Leveraging this complementary and redundant content will improve communication robustness and relax current unimodal limitations.
This research presents and evaluates a novel multimodal framework to help reduce the total human effort and time required to communicate with intelligent devices. This reduction is realized by determining human intent using a knowledge-based architecture that combines and leverages conflicting information available across multiple natural communication modes and modalities. The effectiveness of this approach is demonstrated using dynamic hand gestures and simple facial expressions characterizing basic emotions. It is important to note that the framework is not restricted to these two forms of communication. The framework presented in this research provides the flexibility necessary to include additional or alternate modalities and channels of information in future research, including improving the robustness of speech understanding.
The primary contributions of this research include the leveraging of conflicts in a closed-loop multimodal framework, explicit use of uncertainty in knowledge representation and reasoning across multiple modalities, and a flexible approach for leveraging domain specific knowledge to help understand multimodal human expression. Experiments using a manually defined knowledge base demonstrate an improved average accuracy of individual concepts and an improved average accuracy of overall intents when leveraging conflicts as compared to an open-loop approach.
Rudqwist, Lucas. "Designing an interface for a teleoperated vehicle which uses two cameras for navigation." Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231914.
The Swedish fire and rescue service has been in need of a robot that can be used in situations where it is too risky to send in firefighters. A remotely operated vehicle is being developed for exactly this purpose. This work builds on earlier research in Human-Computer Interaction and interface design for teleoperated vehicles. In this study, a prototype was developed to simulate the experience of driving a teleoperated vehicle; it visualized the intended operator interface and simulated the driving experience. Development followed a user-centred design process and was evaluated with users. After the final evaluation, a design proposal was presented based on previous research and user feedback. The study discusses the problems that arise when designing an interface for a teleoperated vehicle that uses two cameras for navigation. One challenge was how to make full use of the two camera views and create an interplay between them. The evaluations showed that users could maintain focus better with one larger, dedicated camera view and a smaller secondary view that can easily be glanced at. Simplicity, and where sensor data is placed, also proved to be important for reducing the operator's mental load. Further modifications to the vehicle and the interface are needed to increase the operator's awareness and confidence when manoeuvring.
Khan, Mubasher Hassan, and Tayyab Laique. "An Evaluation of Gaze and EEG-Based Control of a Mobile Robot." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4625.
Wagner, Alan Richard. "The role of trust and relationships in human-robot social interaction." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31776.
Committee Chair: Arkin, Ronald C.; Committee Member: Christensen, Henrik I.; Committee Member: Fisk, Arthur D.; Committee Member: Ram, Ashwin; Committee Member: Thomaz, Andrea. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Pandey, Amit Kumar. "Towards Socially Intelligent Robots in Human Centered Environment." Thesis, Toulouse, INSA, 2012. http://www.theses.fr/2012ISAT0032/document.
Robots will no longer work isolated from us. They are entering our day-to-day lives to cooperate, assist, help, serve, learn, teach and play with us. In this context, it is important that the presence of robots does not put the human on the compromising side. To achieve this, beyond the basic safety requirements, robots should take into account various factors, ranging from human effort, comfort, preferences and desires to social norms, in their planning and decision-making strategies. They should behave, navigate, manipulate, interact and learn in a way that is expected, accepted and understandable by us, the humans. This thesis begins by exploring and identifying the basic yet key ingredients of such socio-cognitive intelligence. We then develop generic frameworks and concepts from an HRI perspective to address these additional challenges and to elevate the robot's capabilities towards being socially intelligent.
Förster, Frank. "Robots that say 'no' : acquisition of linguistic behaviour in interaction games with humans." Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/20781.
Thunberg, Sofia. "Can You Read My Mind? : A Participatory Design Study of How a Humanoid Robot Can Communicate Its Intent and Awareness." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158033.
De Greeff, Joachim. "Interactive concept acquisition for embodied artificial agents." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1587.
Krzewska, Weronika. "ZERROR : Provoking ethical discussions of humanoid robots through speculative animation." Thesis, Malmö universitet, Institutionen för konst, kultur och kommunikation (K3), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-45975.
Thellman, Sam. "Social Dimensions of Robotic versus Virtual Embodiment, Presence and Influence." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-130645.
Bengtsson, Camilla, and Caroline Englund. "“Do you want to take a short survey?” : Evaluating and improving the UX and VUI of a survey skill in the social robot Furhat: a qualitative case study." Thesis, Linnéuniversitetet, Institutionen för informatik (IK), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-76923.
The purpose of this qualitative case study is to evaluate a survey skill for the social robot Furhat. Besides evaluating this skill, which is at an early stage of development, the aim is also to investigate how the user experience (UX) and the voice user interface (VUI) can be improved. Several qualitative methods were used: expert evaluations with heuristics for HRI (human-robot interaction), user evaluations consisting of observations and interviews, and a quantitative questionnaire (RoSAS, the Robot Social Attribution Scale). The results were placed within the USUS Evaluation Framework for Human-Robot Interaction. The user evaluations were carried out in two groups: one group talked to and interacted with Furhat with the support of a graphical user interface (GUI), the other had no GUI. A positive user experience was found in both groups: the informants found it fun, engaging and interesting to interact with Furhat. Having a supporting GUI may be better suited to noisy environments and to longer surveys with many answer options to choose from, while a GUI is not needed in calmer environments and for shorter surveys. General improvements that could help raise the user experience were found in both groups, for example making the robot act more human-like in its dialogue, facial expressions and movements, and fixing a number of technical and usability issues.
Marpaung, Andreas. "TOWARD BUILDING A SOCIAL ROBOT WITH AN EMOTION-BASED INTERNAL CONTROL." Master's thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3901.
M.S. thesis, School of Computer Science, College of Engineering and Computer Science.
Stival, Francesca. "Subject-Independent Frameworks for Robotic Devices: Applying Robot Learning to EMG Signals." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3426704.
The possibility of collaboration between robots and humans has increased interest in developing techniques for controlling robotic devices through physiological signals from the human body. To achieve this goal, it is essential to be able to capture the human's intention to move and translate it into a corresponding robot movement. Until now, when considering physiological signals, and EMG signals in particular, the classical approach has been to focus on the single subject performing the task, because of the considerable complexity of this kind of data. The aim of this thesis is to extend the state of the art by proposing a generic, subject-independent framework able to extract the characteristics of human movement by observing many demonstrations performed by a large number of different subjects. The variability introduced into the system by the different subjects and the different repetitions of the task allows the construction of a model of human movement that is robust to small variations and to possible signal deterioration. Moreover, the resulting framework can be used by any subject without long training sessions. The signals undergo a careful preprocessing phase to remove noise and artifacts; following this procedure, it is possible to extract meaningful information to be used for online signal processing. Human movement can be estimated using statistical techniques that are widespread in Robot Programming by Demonstration applications; in particular, the input information can be represented with a Gaussian Mixture Model (GMM). The movement performed by the subject can be estimated continuously with regression techniques, such as Gaussian Mixture Regression (GMR), or it can be selected from a set of possible movements with classification techniques, such as Gaussian Mixture Classification (GMC). The results were improved by incorporating prior information into the model in order to enrich it. In particular, the hierarchical information provided by a quantitative taxonomy of hand grasps was considered. The first quantitative taxonomy of hand grasps was also built, considering both muscular and kinematic information from 40 subjects. The results obtained demonstrate the possibility of building a subject-independent framework even when using physiological signals such as EMG from a large number of participants. The proposed solution has been used in two different kinds of application: (I) the control of prosthetic devices, and (II) an Industry 4.0 solution aimed at allowing humans and robots to work together or collaborate. Indeed, a crucial aspect of humans and robots working together is their ability to anticipate each other's task, and physiological signals provide a cue before the actual movement occurs. This thesis also proposes an application of Robot Programming by Demonstration in a real factory producing electric motors, with the aim of optimizing production. The task was part of the European Robotic Challenge (EuRoC), in which the final goal was divided into phases of increasing complexity.
The proposed solution employs machine learning techniques, such as GMM, while the robustness of the approach is ensured by considering demonstrations from many different subjects. The system was tested in an industrial context, obtaining promising results.
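As an illustrative sketch of the Gaussian Mixture Classification (GMC) idea mentioned in the abstract (not the thesis implementation; the class names, feature dimension and data below are invented): fit one GMM per grasp class on pooled multi-subject EMG feature windows and classify a new window by maximum likelihood.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n_feat = 8                                     # hypothetical EMG feature dimension
classes = ["power_grasp", "pinch", "lateral"]  # hypothetical grasp classes
train = {c: rng.normal(loc=i, size=(100, n_feat)) for i, c in enumerate(classes)}

# One GMM per grasp class, trained on feature windows pooled from many subjects.
models = {c: GaussianMixture(n_components=2, random_state=0).fit(X) for c, X in train.items()}

def classify(window):
    """Pick the class whose mixture assigns the highest log-likelihood to the window."""
    return max(models, key=lambda c: models[c].score(window.reshape(1, -1)))

print(classify(rng.normal(loc=1, size=n_feat)))   # expected: "pinch"
```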
Schaffert, Carolin. "Safety system design in human-robot collaboration : Implementation for a demonstrator case in compliance with ISO/TS 15066." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263900.
Close collaboration between humans and robots is one way to achieve flexible production flows and a high degree of automation at the same time. In human-robot collaboration, both parties work together in a shared environment without safety fences. Such workstations combine human flexibility, tactile sense and intelligence with robot speed, endurance and accuracy, which leads to improved ergonomic working conditions for the operator, better quality and higher efficiency. However, the broad adoption of human-robot collaboration is limited by current safety legislation. Robots are powerful machines, and without spatial separation from the operator the risks increase drastically. The technical specification ISO/TS 15066 serves as a guideline for collaborative operation and complements the international standard ISO 10218 for industrial robots. Since ISO/TS 15066 represents a first draft of a coming standard, companies need to build up knowledge of how to apply it. At present, the guideline prohibits collisions with the head under transient contact. In this thesis, a safety system is designed that complies with ISO/TS 15066 and uses certified safety technology. Four theoretical safety system designs are proposed, using a laser scanner as presence sensor and a collaborative robot, the KUKA LBR iiwa. The system either stops the robot motion, reduces the robot's speed and then triggers a stop, or only triggers a stop after a collision between the robot and the human has occurred. In design 3, the size of the stop zone is reduced by combining the speed and separation monitoring principle with the power- and force-limiting mode of operation. The safety zones are static and are calculated according to the protective separation distance in ISO/TS 15066. A risk assessment is carried out to reduce all risks to an acceptable level and leads to the final safety system design after three iterations. As a proof of concept, the final safety system design is implemented for a demonstrator in a laboratory environment at Scania. Through a feasibility study, the implementation differences between theory and practice are identified for the four proposed designs, and a feasible safety system behaviour is developed. The robot reaction is realized through the robot's safety configuration, where three ESM states are defined to use the robot's internal safety functions and to integrate the laser scanner signal. The laser scanner is connected as a digital input to the discrete safety interface of the robot controller. In summary, this thesis describes the safety system design with all implementation details.
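For illustration only: a simplified protective separation distance in the spirit of the speed-and-separation-monitoring formula in ISO/TS 15066 could be computed as below. The structure follows the standard's general form (human and robot motion during reaction and stopping, plus braking, intrusion and uncertainty terms), but all parameter values are invented placeholders, not figures from the thesis or the standard.

```python
def protective_distance(v_h, v_r, t_react, t_stop, d_brake, d_intrusion, z_h=0.0, z_r=0.0):
    """Simplified minimum human-robot separation for speed and separation monitoring."""
    return (v_h * (t_react + t_stop)   # human travel during the reaction and stopping time
            + v_r * t_react            # robot travel before the stop is triggered
            + d_brake                  # robot travel while braking
            + d_intrusion              # intrusion distance of the sensing system
            + z_h + z_r)               # position measurement uncertainties (human, robot)

# Placeholder values in metres and seconds (not from the thesis or the standard):
print(protective_distance(v_h=1.6, v_r=0.5, t_react=0.1, t_stop=0.3,
                          d_brake=0.2, d_intrusion=0.1, z_h=0.05, z_r=0.02))
```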
Wåhlin, Peter. "Enhanching the Human-Team Awareness of a Robot." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-16371.
The use of autonomous robots in our society is increasing every day, and a robot is no longer seen as a tool but as a team member. Robots now work side by side with us and support us in dangerous work where humans would otherwise be exposed to risk. This development has in turn increased the need for robots with greater human-awareness. The goal of this thesis is therefore to contribute to strengthening the human-awareness of robots. Specifically, we investigate the possibilities of equipping autonomous robots with the ability to assess and detect different behaviours in human teams. This ability could, for example, be used in the robot's reasoning and planning to make decisions and, in turn, improve human-robot collaboration. We propose enhancing existing activity recognizers with the ability to interpret intangible human behaviours such as stress, motivation and focus. Being able to distinguish team activities within a human team is fundamental for a robot that is to support the team. Hidden Markov models have previously proven very effective for activity recognition and have therefore been used in this work. For a robot to provide effective support to a human team, it must take into account not only the spatial parameters of the team members but also the psychological ones. To interpret psychological parameters in humans, this thesis advocates the use of human body signals, such as heart rate and skin conductance. Combined with body signals, we demonstrate the possibility of using system dynamics models to interpret intangible behaviours, which in turn can strengthen the human-awareness of a robot.
The thesis work was conducted in Kista, Stockholm, at the Department of Informatics and Aero Systems at the Swedish Defence Research Agency.
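A minimal sketch of the hidden-Markov-model activity recognition mentioned above (invented numbers, not the thesis models): score an observed symbol sequence under one small HMM per candidate team activity with the scaled forward algorithm, and pick the most likely activity.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM with initial pi, transitions A, emissions B)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Two invented 2-state HMMs over 3 discrete symbols (symbols might encode quantized
# team-member positions or actions in a real system).
hmm_search = dict(pi=np.array([0.9, 0.1]),
                  A=np.array([[0.8, 0.2], [0.2, 0.8]]),
                  B=np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]]))
hmm_escort = dict(pi=np.array([0.5, 0.5]),
                  A=np.array([[0.5, 0.5], [0.5, 0.5]]),
                  B=np.array([[0.3, 0.4, 0.3], [0.3, 0.4, 0.3]]))

obs = [0, 0, 1, 2, 2]   # an observed symbol sequence
scores = {name: forward_loglik(obs, **m)
          for name, m in {"search": hmm_search, "escort": hmm_escort}.items()}
print(max(scores, key=scores.get), scores)   # team activity with the highest likelihood
```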
Ajulo, Morenike. "Interactive text response for assistive robotics in the home." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34725.
Papadopoulos, Fotios. "Socially interactive robots as mediators in human-human remote communication." Thesis, University of Hertfordshire, 2012. http://hdl.handle.net/2299/9151.
Saleh, Diana. "Interaction Design for Remote Control of Military Unmanned Ground Vehicles." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-174074.
Velor, Tosan. "A Low-Cost Social Companion Robot for Children with Autism Spectrum Disorder." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41428.
Hansson, Emmeli. "Investigating Augmented Reality for Improving Child-Robot Interaction." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-258009.
In human-robot interaction, both verbal and non-verbal communication can be difficult for a robot to understand and convey, which can lead to misunderstandings on both the human's and the robot's side. In this report, we want to answer the question of whether AR can be used to improve how a social robot communicates its intent when interacting with children. The behaviours we examined were getting a child to pick up a cube, place it, give it to another child, knock on it, and shake it. The results were that picking up the cube was the most successful and reliable behaviour, and that most behaviours were marginally better with AR. Beyond that, we also found that acknowledging behaviours were needed to engage the children, but they had to be faster, more responsive and clearer. In summary, there is potential for using AR, but in many cases the robot's behaviours alone were already very clear. A larger study would be needed to explore this further.
Lindelöf, Gabriel Trim Olof. "Moraliska bedömningar av autonoma systems beslut." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166543.
Kruse, Thibault. "Planning for human robot interaction." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30059/document.
Recent advances in robotics inspire visions of household and service robots making our lives easier and more comfortable. Such robots will be able to perform several object manipulation tasks required for household chores, autonomously or in cooperation with humans. In that role of human companion, the robot has to satisfy many additional requirements compared to well-established fields of industrial robotics. The purpose of planning for robots is to achieve robot behavior that is goal-directed and establishes correct results. But in human-robot interaction, robot behavior cannot merely be judged in terms of correct results; it must also be agreeable to human stakeholders. This means that the robot behavior must satisfy additional quality criteria: it must be safe, comfortable for humans, and intuitively understood. There are established practices to ensure safety and provide comfort by keeping sufficient distances between the robot and nearby persons. However, providing behavior that is intuitively understood remains a challenge. This challenge greatly increases in dynamic human-robot interactions, where the future actions of the human are unpredictable and the robot needs to constantly adapt its plans to changes. This thesis provides novel approaches to improve the legibility of robot behavior in such dynamic situations. Key to this approach is to consider not merely the quality of a single plan, but the behavior of the robot as a result of replanning multiple times during an interaction. For navigation planning, the thesis introduces directional cost functions that avoid problems in conflict situations. For action planning, it provides local replanning of transport actions based on navigational costs, to produce opportunistic behavior. Both measures help human observers understand the robot's beliefs and intentions during interactions and reduce confusion.
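As an illustration of the flavour of a direction-dependent navigation cost (an invented form, not the cost functions actually proposed in the thesis): cells ahead of a person, in their direction of motion, can be made more expensive than cells behind them, so that repeated replanning avoids cutting across the person's path.

```python
import numpy as np

def directional_cost(cell, person_pos, person_vel, w_front=2.0, sigma=1.5):
    """Cost of a grid cell near a person: Gaussian falloff with distance,
    inflated for cells lying ahead of the person's direction of motion."""
    rel = np.asarray(cell, float) - np.asarray(person_pos, float)
    dist = np.linalg.norm(rel)
    heading = np.asarray(person_vel, float)
    heading = heading / (np.linalg.norm(heading) + 1e-9)
    frontness = max(0.0, float(np.dot(rel, heading)) / (dist + 1e-9))  # 1 directly ahead, 0 behind
    return (1.0 + w_front * frontness) * np.exp(-dist**2 / (2.0 * sigma**2))

print(directional_cost((1.0, 0.0), (0.0, 0.0), (1.0, 0.0)))   # ahead of the person: higher cost
print(directional_cost((-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)))  # behind the person: lower cost
```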
Bodiroža, Saša. "Gestures in human-robot interaction." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17705.
Gestures consist of movements of body parts and are a means of communication that conveys information or intentions to an observer. Therefore, they can be effectively used in human-robot interaction, or in general in human-machine interaction, as a way for a robot or a machine to infer a meaning. In order for people to intuitively use gestures and understand robot gestures, it is necessary to define mappings between gestures and their associated meanings -- a gesture vocabulary. A human gesture vocabulary defines which gestures a group of people would intuitively use to convey information, while a robot gesture vocabulary displays which robot gestures are deemed fitting for a particular meaning. Effective use of vocabularies depends on techniques for gesture recognition, which concerns the classification of body motion into discrete gesture classes, relying on pattern recognition and machine learning. This thesis addresses both research areas, presenting the development of gesture vocabularies as well as gesture recognition techniques, focusing on hand and arm gestures. Attentional models for humanoid robots were developed as a prerequisite for human-robot interaction and a precursor to gesture recognition. A method for defining gesture vocabularies for humans and robots, based on user observations and surveys, is explained and experimental results are presented. As a result of the robot gesture vocabulary experiment, an evolutionary-based approach for the refinement of robot gestures is introduced, based on interactive genetic algorithms. A robust and well-performing gesture recognition algorithm based on dynamic time warping has been developed. Most importantly, it employs one-shot learning, meaning that it can be trained using a low number of training samples and employed in real-life scenarios, lowering the effect of environmental constraints and gesture features. Finally, an approach for learning a relation between self-motion and pointing gestures is presented.
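A minimal sketch of one-shot gesture recognition with dynamic time warping, in the spirit of the approach described above (not the thesis implementation; the trajectories are invented 1-D examples): keep one recorded template per gesture class and assign an incoming trajectory to the nearest template under the DTW distance.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two trajectories."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.atleast_1d(a[i - 1] - b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# One template per gesture class ("one-shot"); hypothetical 1-D hand trajectories.
templates = {
    "wave":  np.array([0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0]),
    "point": np.array([0.0, 0.5, 1.0, 1.0, 1.0]),
}

query = np.array([0.0, 0.9, 0.1, -0.8, 0.1, 1.0, 0.0])
print(min(templates, key=lambda g: dtw_distance(query, templates[g])))   # -> "wave"
```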
Akan, Batu. "Human Robot Interaction Solutions for Intuitive Industrial Robot Programming." Licentiate thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-14315.
Robot colleague project.
Topp, Elin Anna. "Human-Robot Interaction and Mapping with a Service Robot : Human Augmented Mapping." Doctoral thesis, Stockholm : School of computer science and communication, KTH, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4899.
Huang, Chien-Ming. "Joint attention in human-robot interaction." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/41196.
Bremner, Paul. "Conversational gestures in human-robot interaction." Thesis, University of the West of England, Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.557106.
Fiore, Michelangelo. "Decision Making in Human-Robot Interaction." Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0049/document.
There has been increasing interest in recent years in robots that are able to cooperate with humans not only as simple tools, but as full agents, able to execute collaborative activities in a natural and efficient way. In this work, we have developed an architecture for human-robot interaction able to execute joint activities with humans. We have applied this architecture to three different problems, which we call the robot observer, the robot coworker, and the robot teacher. After giving a quick overview of the main aspects of human-robot cooperation and of the architecture of our system, we detail these problems.
In the observer problem, the robot monitors the environment, analyzing perceptual data through geometrical reasoning to produce symbolic information. We show how the system is able to infer humans' actions and intentions by linking physical observations, obtained by reasoning on humans' motions and their relationships with the environment, with planning and humans' mental beliefs, through a framework based on Markov Decision Processes and Bayesian Networks. We show, in a user study, that this model approaches the capacity of humans to infer intentions. We also discuss the possible reactions that the robot can execute after inferring a human's intention. We identify two possible proactive behaviors: correcting the human's belief, by giving information to help him correctly accomplish his goal, and physically helping him to accomplish the goal.
In the coworker problem, the robot has to execute a cooperative task with a human. In this part we introduce the Human-Aware Task Planner, used in different experiments, and detail our plan management component. The robot is able to cooperate with humans in three different modalities: robot leader, human leader, and equal partners. We introduce the problem of task monitoring, where the robot observes human activities to understand whether they are still following the shared plan. After that, we describe how our robot is able to execute actions in a safe and robust way, taking humans into account. We present a framework used to achieve joint actions by continuously estimating the robot partner's activities and reacting accordingly. This framework uses hierarchical Mixed Observability Markov Decision Processes, which allow us to estimate variables, such as the human's commitment to the task, and to react accordingly, splitting the decision process into different levels. We present an example of a collaborative planner for the handover problem, and then a set of laboratory experiments for a robot coworker scenario. Additionally, we introduce a novel multi-agent probabilistic planner, based on Markov Decision Processes, and discuss how we could use it to enhance our plan management component.
In the robot teacher problem, we explain how we can adapt the system's plan explanation and monitoring to the users' knowledge of the task to perform. Using this idea, the robot explains in less detail tasks that the user has already performed several times, and goes more in depth on new tasks. We show, in a user study, that this adaptive behavior is perceived better by users than a system without this capacity.
Finally, we present a case study for a human-aware robot guide. This robot is able to guide users with adaptive and proactive behaviors, changing its speed to adapt to their needs, proposing a new pace to better suit the task's objectives, and directly engaging users to propose help. This system was integrated with other components to deploy a robot at Schiphol Airport in Amsterdam, to guide groups of passengers to their flight gates. We performed user studies both in a laboratory and at the airport, demonstrating the robot's capacities and showing that it is appreciated by users.
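For illustration of the kind of intention inference described above (invented goals, actions and probabilities, not the thesis models): a recursive Bayesian update over candidate human goals, where each observed action is more or less likely under each goal.

```python
intentions = ["fetch_cup", "fetch_book"]
belief = {g: 0.5 for g in intentions}        # uniform prior over candidate goals

# Hypothetical observation model: likelihood[(action, goal)] = P(action | goal)
likelihood = {
    ("move_to_kitchen", "fetch_cup"): 0.8, ("move_to_kitchen", "fetch_book"): 0.2,
    ("reach_shelf", "fetch_cup"): 0.3,     ("reach_shelf", "fetch_book"): 0.7,
}

def update(belief, action):
    """One recursive Bayes step: posterior(goal) is proportional to prior(goal) * P(action | goal)."""
    post = {g: belief[g] * likelihood[(action, g)] for g in belief}
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

for action in ["move_to_kitchen", "reach_shelf"]:
    belief = update(belief, action)
    print(action, belief)
```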
Alanenpää, Madelene. "Gaze detection in human-robot interaction." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-428387.
Almeida, Luís Miguel Martins. "Human-robot interaction for object transfer." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/22374.
Robots come into physical contact with humans under a variety of circumstances to perform useful work. This thesis has the ambitious aim of contriving a solution that leads to a simple case of physical human-robot interaction, an object transfer task. Firstly, this work presents a review of the current research within the field of human-robot interaction, where two approaches are distinguished but simultaneously required: a pre-contact approximation and an interaction by contact. Further, to achieve the proposed objectives, this dissertation addresses a possible answer to three major problems: (1) the robot control needed to perform the movements inherent in the transfer assignment, (2) the human-robot pre-interaction, and (3) the interaction by contact. The capabilities of a 3D sensor and of force/tactile sensors are explored in order to prepare the robot to hand over an object and to control the robot gripper actions, respectively. The complete software development is supported by the Robot Operating System (ROS) framework. Finally, some experimental tests are conducted to validate the proposed solutions and to evaluate the system's performance. A possible transfer task is achieved, even if some refinements, improvements and extensions are required to improve the solution's performance and range.
Kaupp, Tobias. "Probabilistic Human-Robot Information Fusion." Thesis, The University of Sydney, 2008. http://hdl.handle.net/2123/2554.
This thesis is concerned with combining the perceptual abilities of mobile robots and human operators to execute tasks cooperatively. It is generally agreed that a synergy of human and robotic skills offers an opportunity to enhance the capabilities of today's robotic systems, while also increasing their robustness and reliability. Systems which incorporate both human and robotic information sources have the potential to build complex world models, essential for both automated and human decision making. In this work, humans and robots are regarded as equal team members who interact and communicate on a peer-to-peer basis. Human-robot communication is addressed using probabilistic representations common in robotics. While communication can in general be bidirectional, this work focuses primarily on human-to-robot information flow. More specifically, the approach advocated in this thesis is to let robots fuse their sensor observations with observations obtained from human operators. While robotic perception is well-suited for lower level world descriptions such as geometric properties, humans are able to contribute perceptual information on higher abstraction levels. Human input is translated into the machine representation via Human Sensor Models. A common mathematical framework for humans and robots reinforces the notion of true peer-to-peer interaction. Human-robot information fusion is demonstrated in two application domains: (1) scalable information gathering, and (2) cooperative decision making. Scalable information gathering is experimentally demonstrated on a system comprised of a ground vehicle, an unmanned air vehicle, and two human operators in a natural environment. Information from humans and robots was fused in a fully decentralised manner to build a shared environment representation on multiple abstraction levels. Results are presented in the form of information exchange patterns, qualitatively demonstrating the benefits of human-robot information fusion. The second application domain adds decision making to the human-robot task. Rational decisions are made based on the robots' current beliefs which are generated by fusing human and robotic observations. Since humans are considered a valuable resource in this context, operators are only queried for input when the expected benefit of an observation exceeds the cost of obtaining it. The system can be seen as adjusting its autonomy at run-time based on the uncertainty in the robots' beliefs. A navigation task is used to demonstrate the adjustable autonomy system experimentally. Results from two experiments are reported: a quantitative evaluation of human-robot team effectiveness, and a user study to compare the system to classical teleoperation. Results show the superiority of the system with respect to performance, operator workload, and usability.
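A minimal sketch of the core fusion idea above (invented classes and numbers, not the thesis framework): a human operator's report is converted into a likelihood through a human sensor model and fused with a robot sensor likelihood in the same Bayesian update.

```python
states = ["person", "tree", "vehicle"]                 # hypothetical object classes
belief = {s: 1.0 / len(states) for s in states}        # prior over what the observed object is

robot_likelihood = {"person": 0.5, "tree": 0.3, "vehicle": 0.2}  # e.g. from an onboard classifier
human_likelihood = {"person": 0.8, "tree": 0.1, "vehicle": 0.1}  # operator report via a human sensor model

def fuse(belief, likelihood):
    """Single Bayesian update; the same operation serves robot and human observations."""
    post = {s: belief[s] * likelihood[s] for s in belief}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

belief = fuse(belief, robot_likelihood)   # fuse a robotic observation
belief = fuse(belief, human_likelihood)   # fuse the human operator's observation
print(belief)
```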
Ali, Muhammad. "Contribution to decisional human-robot interaction: towards collaborative robot companions." PhD thesis, INSA de Toulouse, 2012. http://tel.archives-ouvertes.fr/tel-00719684.