
Dissertations / Theses on the topic 'Manipulation robotics'


1

Huckaby, Jacob O. "Knowledge transfer in robot manipulation tasks." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51902.

Abstract:
Technology today has progressed to the point that the true potential of robotics is beginning to be realized. However, programming robots to be robust across varied environments and objectives, in a way that is accessible and intuitive to most users, is still a difficult task. There remain a number of unmet needs. For example, many existing solutions today are proprietary, which makes widespread adoption of a single solution difficult to achieve. Also, most approaches are highly targeted to a specific implementation. But it is not clear that these approaches will generalize to a wider range of problems and applications. To address these issues, we define the Interaction Space, or the space created by the interaction between robots and humans. This space is used to classify relevant existing work, and to conceptualize these unmet needs. GTax, a knowledge transfer framework, is presented as a solution that is able to span the Interaction Space. The framework is based on SysML, a standard used in many different systems, which provides a formalized representation and verification. Through this work, we demonstrate that by generalizing across the Interaction Space, we can simplify robot programming and enable knowledge transfer between processes, systems and application domains.
2

Berenson, Dmitry. "Constrained Manipulation Planning." Research Showcase @ CMU, 2011. http://repository.cmu.edu/dissertations/172.

Abstract:
Every planning problem in robotics involves constraints. Whether the robot must avoid collision or joint limits, there are always states that are not permissible. Some constraints are straightforward to satisfy while others can be so stringent that feasible states are very difficult to find. What makes planning with constraints challenging is that, for many constraints, it is impossible or impractical to provide the planning algorithm with the allowed states explicitly; it must discover these states as it plans. The goal of this thesis is to develop a framework for representing and exploring feasible states in the context of manipulation planning. Planning for manipulation gives rise to a rich variety of tasks that include constraints on collision avoidance, torque, balance, closed-chain kinematics, and end-effector pose. While many researchers have developed representations and strategies to plan with a specific constraint, the goal of this thesis is to develop a broad representation of constraints on a robot's configuration and identify general strategies to manage these constraints during the planning process. Some of the most important constraints in manipulation planning are functions of the pose of the manipulator's end-effector, so we devote a large part of this thesis to end-effector placement for grasping and transport tasks. We present an efficient approach to generating paths that uses Task Space Regions (TSRs) to specify manipulation tasks which involve end-effector pose goals and/or path constraints. We show how to use TSRs for path planning using the Constrained BiDirectional RRT (CBiRRT2) algorithm and describe several extensions of the TSR representation. Among them are methods to plan with object pose uncertainty, find optimal base placements, and handle more complex pose constraints by chaining TSRs together.
We also explore the problem of automatically generating end-effector pose constraints for grasping tasks and present two grasp synthesis algorithms that can generate lists of grasps in extremely cluttered environments. We then describe how to convert these lists of grasps to TSRs so they can be used with CBiRRT2. We have applied our framework to a wide range of problems for several robots, both in simulation and in the real world. These problems include grasping in cluttered environments, lifting heavy objects, two-armed manipulation, and opening doors, to name a few. These example problems demonstrate our framework's practicality, and our proof of probabilistic completeness gives our approach a theoretical foundation. In addition to the above framework, we have also developed the Constellation algorithm for finding configurations that satisfy multiple stringent constraints where other constraint-satisfaction strategies fail. We also present the GradienT-RRT algorithm for planning with soft constraints, which outperforms the state-of-the-art approach to high-dimensional path planning with costs.
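The TSR machinery summarized in this abstract lends itself to a compact sketch. Assuming Berenson's usual notation (T0_w places the TSR frame in the world, Tw_e offsets the end-effector from that frame, and Bw is a 6x2 bounds matrix over translation and roll-pitch-yaw), a distance-to-TSR check might look like the following; the details here are an illustration, not the thesis's implementation:

```python
import numpy as np

def pose_to_displacement(T0_s, T0_w, Tw_e):
    """Express a candidate end-effector pose T0_s as a 6-D displacement
    (translation + roll-pitch-yaw) in the TSR frame."""
    Tw_s = np.linalg.inv(T0_w) @ T0_s @ np.linalg.inv(Tw_e)
    d = np.zeros(6)
    d[:3] = Tw_s[:3, 3]
    R = Tw_s[:3, :3]
    # One conventional ZYX Euler extraction; other conventions are possible.
    d[3] = np.arctan2(R[2, 1], R[2, 2])           # roll
    d[4] = -np.arcsin(np.clip(R[2, 0], -1, 1))    # pitch
    d[5] = np.arctan2(R[1, 0], R[0, 0])           # yaw
    return d

def tsr_distance(T0_s, T0_w, Tw_e, Bw):
    """How far each displacement component lies outside the bounds
    Bw (6x2, columns = lower/upper).  Zero means the pose satisfies
    the constraint."""
    d = pose_to_displacement(T0_s, T0_w, Tw_e)
    excess = np.maximum(d - Bw[:, 1], 0.0) + np.maximum(Bw[:, 0] - d, 0.0)
    return np.linalg.norm(excess)
```

A constrained planner in the CBiRRT2 family projects sampled configurations until this distance reaches zero, which is how path-wide pose constraints are maintained.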
3

Ziesmer, Jacob Ames. "Reconfigurable End Effector Allowing For In-Hand Manipulation Without Finger Gaiting Or Regrasping." [Milwaukee, Wis.] : e-Publications@Marquette, 2009. http://epublications.marquette.edu/theses_open/2.

4

Güler, Püren. "Learning Object Properties From Manipulation for Manipulation." Doctoral thesis, KTH, Robotik, perception och lärande, RPL, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-207154.

Abstract:
The world contains objects with various properties - rigid, granular, liquid, elastic or plastic. As humans, while interacting with objects, we plan our manipulation by considering their properties. For instance, while holding a rigid object such as a brick, we adapt our grasp based on its centre of mass so as not to drop it. On the other hand, while manipulating a deformable object, we may consider properties in addition to the centre of mass, such as elasticity and brittleness, for grasp stability. Therefore, knowing object properties is an integral part of skilled manipulation of objects. For manipulating objects skillfully, robots should be able to predict object properties as humans do. To predict the properties, interactions with objects are essential. These interactions give rise to distinct sensory signals that contain information about the object properties. The signals coming from a single sensory modality may give ambiguous information or noisy measurements. Hence, by integrating multi-sensory modalities (vision, touch, audio or proprioceptive), a manipulated object can be observed from different aspects, and this can decrease the uncertainty in the observed properties. By analyzing the perceived sensory signals, a robot reasons about the object properties and adjusts its manipulation based on this information. During this adjustment, the robot can make use of a simulation model to predict the object behavior and plan the next action. For instance, if an object is assumed to be rigid before interaction and exhibits deformable behavior after interaction, an internal simulation model can be used to predict the load force exerted on the object, so that appropriate manipulation can be planned in the next action. Thus, learning about object properties can be defined as an active procedure.
The robot explores the object properties actively and purposefully by interacting with the object, and adjusting its manipulation based on the sensory information and predicted object behavior through an internal simulation model. This thesis investigates the necessary mechanisms mentioned above for learning object properties: (i) multi-sensory information, (ii) simulation and (iii) active exploration. In particular, we investigate these three mechanisms as different and complementary ways of extracting a certain object property, the deformability of objects. Firstly, we investigate the feasibility of using visual and/or tactile data to classify the content of a container based on the deformation observed when a robotic hand squeezes and deforms the container. According to our results, both visual and tactile sensory data individually give high accuracy rates when classifying the content type based on the deformation. Next, we investigate the use of a simulation model to estimate the object deformability that is revealed through a manipulation. The proposed method accurately identifies the deformability of the test objects in synthetic and real-world data. Finally, we investigate the integration of the deformation simulation in a robotic active perception framework to extract the heterogeneous deformability properties of an environment through physical interactions. In experiments on real-world objects, we illustrate that the active perception framework can map the heterogeneous deformability properties of a surface.
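As an illustration of the multi-sensory classification step described in this abstract, fused visual and tactile deformation features could be labeled with something as simple as a nearest-centroid rule. The classifier choice, feature shapes and class names below are our own simplification, not the method used in the thesis:

```python
import numpy as np

def fuse(visual, tactile):
    """Concatenate visual and tactile feature vectors into one
    multi-sensory descriptor of an observed deformation."""
    return np.concatenate([np.asarray(visual, float),
                           np.asarray(tactile, float)])

def fit_centroids(features, labels):
    """One centroid per content class (e.g. 'rice', 'water')."""
    return {c: np.mean([f for f, l in zip(features, labels) if l == c],
                       axis=0)
            for c in set(labels)}

def classify(feature, centroids):
    """Assign the content class whose centroid is nearest."""
    return min(centroids,
               key=lambda c: np.linalg.norm(feature - centroids[c]))
```

The point of the sketch is the fusion step: combining modalities into one descriptor lets ambiguity in one channel be compensated by the other, which is the uncertainty-reduction argument made in the abstract.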

5

McEachern, Wendy A. "Manipulation strategies for applications in rehabilitation robotics." Thesis, University of Cambridge, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389955.

6

Arnekvist, Isac. "Reinforcement learning for robotic manipulation." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-216386.

Abstract:
Reinforcement learning was recently used successfully for real-world robotic manipulation tasks, without the need for human demonstration, using a normalized advantage function algorithm (NAF). Limitations on the shape of the advantage function, however, raise doubts about what kinds of policies can be learned with this method. For similar tasks, convolutional neural networks have been used for pose estimation from images taken with fixed-position cameras. For some applications, however, this might not be a valid assumption, and it has been shown that the quality of policies for robotic tasks deteriorates severely with small camera offsets. This thesis investigates the use of NAF for a pushing task with clear multimodal properties. The results are compared with using a deterministic policy with minimal constraints on the Q-function surface. Methods for pose estimation using convolutional neural networks are further investigated, especially with regard to randomly placed cameras with unknown offsets. By defining the coordinate frame of objects with respect to some visible feature, it is hypothesized that relative pose estimation can be accomplished even when the camera is not fixed and the offset is unknown. NAF is successfully implemented to solve a simple reaching task on a real robotic system where data collection is distributed over several robots and learning is done on a separate server. Using NAF to learn a pushing task fails to converge to a good policy, both on the real robots and in simulation. Deep deterministic policy gradient (DDPG) is used instead in simulation and successfully learns to solve the task. The learned policy is then applied on the real robots and solves the task in the real setting as well. Pose estimation from fixed-position camera images is learned, and the policy is still able to solve the task using these estimates.
By defining a coordinate frame from an object visible to the camera, in this case the robot arm, a neural network learns to regress the pushable object's pose in this frame without assuming a fixed camera. However, the predictions were too inaccurate to be used for solving the pushing task. With further modifications, this approach could nevertheless prove a feasible solution for randomly placed cameras with unknown poses.
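The NAF limitation discussed in this abstract is easy to see in code: the advantage term is a quadratic bowl around a single action mu, so the learned Q-function can only represent unimodal policies. A minimal sketch (in practice mu, L and v are neural-network outputs conditioned on the state; here they are plain arrays):

```python
import numpy as np

def naf_q_value(a, mu, L, v):
    """NAF Q-value: state value v plus a quadratic advantage around
    the policy action mu.  L is lower-triangular, so P = L @ L.T is
    positive semi-definite; the advantage is therefore <= 0 and the
    greedy action argmax_a Q(s, a) is always mu itself."""
    P = L @ L.T
    diff = a - mu
    advantage = -0.5 * diff @ P @ diff
    return v + advantage
```

That single-peak shape is precisely why the multimodal pushing task resists NAF in the thesis, while DDPG, which places no such constraint on the Q-surface, succeeds.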
7

Jentoft, Leif Patrick. "Sensing and Control for Robust Grasping with Simple Hardware." Thesis, Harvard University, 2014. http://dissertations.umi.com/gsas.harvard:11657.

Abstract:
Robots can move, see, and navigate in the real world outside carefully structured factories, but they cannot yet grasp and manipulate objects without human intervention. Two key barriers are the complexity of current approaches, which require complicated hardware or precise perception to function effectively, and the challenge of understanding system performance in a tractable manner given the wide range of factors that impact successful grasping. This thesis presents sensors and simple control algorithms that relax the requirements on robot hardware, and a framework to understand the capabilities and limitations of grasping systems.
Engineering and Applied Sciences
8

Dogar, Mehmet R. "Physics-Based Manipulation Planning in Cluttered Human Environments." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/310.

Abstract:
This thesis presents a series of planners and algorithms for manipulation in cluttered human environments. The focus is on using physics-based predictions, particularly for pushing operations, as an effective way to address the manipulation challenges posed by these environments. We introduce push-grasping, a physics-based action to grasp an object first by pushing it and then closing the fingers. We analyze the mechanics of push-grasping and demonstrate its effectiveness under clutter and object pose uncertainty. We integrate a planning system based on push-grasping with the geometric planners traditionally used in grasping. We then show that a similar approach can be used to perform manipulation with environmental contact in cluttered environments. We present a planner where the robot can simultaneously push multiple obstacles out of the way while grasping an object through clutter. In the second part of this thesis we focus on planning a sequence of actions to manipulate clutter. We present a planning framework to rearrange clutter using prehensile and nonprehensile primitives. We show that our planner succeeds in environments where planners which only use prehensile primitives fail. We then explore the problem of manipulating clutter to search for a hidden object. We formulate the problem as minimizing the expected time to find the target, present two algorithms, and analyze their complexity and optimality.
9

Dong, Shen. "Virtual manipulation." School of Electrical, Computer and Telecommunications Engineering - Faculty of Informatics, 2008. http://ro.uow.edu.au/theses/141.

Abstract:
Empirical research on developing a new paradigm for programming a robotic manipulator to perform complex constrained-motion tasks is carried out in this thesis. The teaching of manipulation skills to the machine commences by demonstrating those skills in a haptic-rendered virtual environment. This is in contrast to the conventional approach, in which a robotic manipulator is programmed to perform a particular task. A manipulation skill consists of a number of basic skills that, when sequenced and integrated, can perform a desired manipulation task. Manipulation here means the ability to transfer, physically transform or mate a part with another part. Haptic rendering augments the effectiveness of computer simulation by providing force feedback for the user. This increases the quality of human-computer interaction, provides an attractive augmentation to visual display, and significantly enhances the level of immersion in a virtual environment. The study is conducted on the peg-in-hole application, as it concisely represents a constrained-motion, force-sensitive manufacturing task with all the attendant issues of jamming, tight clearances, and the need for quick assembly times, reliability, etc. The state recognition approach is used to identify and classify the skills acquired from the virtual environment. A human operator demonstrates both good and bad examples of the desired behaviour in the haptic virtual environment. Position, contact force and torque data, as well as orientation data generated in the virtual environment, combined with a priori knowledge about the task, are used to identify and learn the skills in the newly demonstrated tasks and then to reproduce them in the robotic system. The robot evaluates the controller's performance and thus learns the best way to produce that behaviour.
The data obtained from the virtual environment is classified into different cluster sets using the Hidden Markov Model (HMM), Fuzzy Gustafson-Kessel (FGK) and Competitive Agglomeration (CA) algorithms respectively. Each cluster represents a contact state between the peg and the hole. The clusters in the optimum cluster set are tuned using a Locally Weighted Regression (LWR) algorithm to produce prediction models for the robot trajectory performing the physical assembly, based on the force/position information received from the rig. The significance of the work is highlighted, and the approach developed and the outcomes achieved are reported. The development of the haptic-rendered virtual peg-in-hole model and the structure of the physical experimental rig are described. The approach is validated through experimental work, and the results are critically evaluated. Keywords: Haptic, PHANToM, ReachIn, Virtual Reality, Peg-in-hole, Skill acquisition.
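The Locally Weighted Regression step mentioned in this abstract can be sketched in a few lines: fit a weighted linear model around each query point and read off its prediction. The feature and target names are illustrative; the thesis's actual models are built per contact-state cluster:

```python
import numpy as np

def lwr_predict(query, X, y, h=1.0):
    """Locally weighted regression with a Gaussian kernel of
    bandwidth h.  X: (n, d) inputs (e.g. contact force/position
    features), y: (n,) targets (e.g. one trajectory coordinate)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])            # affine term
    q = np.append(np.asarray(query, float), 1.0)
    w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2.0 * h ** 2))
    W = np.diag(w)
    # Weighted least squares with a tiny ridge for stability.
    beta = np.linalg.solve(Xb.T @ W @ Xb + 1e-8 * np.eye(Xb.shape[1]),
                           Xb.T @ W @ y)
    return q @ beta
```

Because each prediction refits locally, LWR adapts to the nonlinear force/position relationships of different contact states without a single global model, which is presumably why it suits this application.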
10

Lu, Su. "Subtask Automation in Robotic Surgery: Needle Manipulation for Surgical Suturing." Case Western Reserve University School of Graduate Studies / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=case1607429591883517.

11

Cochran, Nigel B. "The Development of a Sensitive Manipulation Platform." Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-theses/861.

Abstract:
"This thesis presents an extension of sensitive manipulation which transforms tactile sensors away from end effectors and closer to whole body sensory feedback. Sensitive manipulation is a robotics concept which more closely replicates nature by employing tactile sensing to interact with the world. While traditional robotic arms are specifically designed to avoid contact, biological systems actually embrace and intentionally contact the environment. This arm is inspired by these biological systems and therefore has compliant joints and a tactile shell surrounding the two primary links of the arm. The manipulator has also been designed to be capable of both industrial and humanoid style manipulation. There are an untold number of applications for an arm with increased tactile feedback primarily in dynamic environments such as in industrial, humanoid, and prosthetic applications. The arm developed for this thesis is intended to be a desktop research platform, however, one of the most influential applications for increased tactile feedback is in prosthetics which are operate in ever changing and contact ridden environments while continuously interacting with humans. This thesis details the simulation, design, analysis, and evaluation of a the first four degrees of freedom of a robotic arm with particular attention given to the design of modular series elastic actuators in each joint as well as the incorporation of a shell of tactile sensors. "
12

Frenette, Réal. "Evaluation of video-camera controls for remote manipulation." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25093.

Abstract:
The control of the video-camera plays an important role in the overall efficiency of a teleoperator system. A computer-based video-camera control system has been designed to compare and evaluate four different modes of control. A situation where an operator does not have a free hand for the control of the video-camera has been selected: such a situation can be found in subsea applications where the operator is required to steer a submarine and to manipulate a robot arm. The four modes are:
• manual control mode: The operator's right hand is used to control both the robot arm and the camera system. The orientation of the camera (with close-up lens) is performed by pressing push buttons.
• automatic tracking mode: The camera (with close-up lens) automatically tracks the end effector of the slave arm, without direction from the operator.
• voice-operated mode: The orientation of the camera (with close-up lens) is accomplished by spoken commands.
• fixed-camera-position mode: A wide-angle lens is used in this mode. The camera constantly remains in a straight-ahead position and no controls are required.
A tracking task and a pick-and-drop task were performed during the experiments. Measures of speed and accuracy were taken and analyzed; subjective remarks were also gathered. Results showed significant differences between the modes. Specifically, the automatic tracking mode and the voice-operated mode were found to offer the best ergonomic environment for the operator in terms of the speed-accuracy tradeoff.
Faculty of Applied Science, Department of Electrical and Computer Engineering (Graduate)
13

Bullock, Ian Merrill. "Understanding Human Hand Functionality| Classification, Whole-Hand Usage, and Precision Manipulation." Thesis, Yale University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10584937.

Abstract:

A better understanding of human hand functionality can help improve robotic and prosthetic hand capability, as well as having benefits for rehabilitation or device design. While the human hand has been studied extensively in various fields, fewer existing works study the human hand within frameworks which can be easily applied to robotic applications, or attempt to quantify complex human hand functionality in real-world environments or with tasks approaching real-world complexity. This dissertation presents a study of human hand functionality from the multiple angles of high level classification methods, whole-hand grasp usage, and precision manipulation, where a small object is repositioned in the fingertips.

Our manipulation classification work presents a motion-centric scheme which can be applied to any human or hand-based robotic manipulation task. Most previous classifications are domain specific and cannot easily be applied to both robotic and human tasks, or can only be applied to a certain subset of manipulation tasks. We present a number of criteria which can be used to describe manipulation tasks and understand differences in the hand functionality used. These criteria are then applied to a number of real world example tasks, including a description of how the classification state can change over time during a dynamic manipulation task.

Next, our study of real-world grasping contributes to an understanding of whole-hand usage. Using head mounted camera video from two housekeepers and two machinists, we analyze the grasps used in their natural work environments. By tagging both grasp state and objects involved, we can measure the prevalence of each grasp and also understand how the grasp is typically used. We then use the grasp-object relationships to select small sets of versatile grasps which can still handle a wide variety of objects, which are promising candidates for implementation in robotic or prosthetic manipulators.

Following the discussion of overall hand shapes, we then present a study of precision manipulation, or how people reposition small objects in the fingertips. Little prior work was found which experimentally measures human capabilities with a full multi-finger precision manipulation task. Our work reports the size and shape for the precision manipulation workspace, and finds that the overall workspace is small, but also has a certain axis along which more object movement is possible. We then show the effect of object size and the number of fingers used on the resulting workspace volume – an ideal object size range is determined, and it is shown that adding additional fingers will reduce workspace volume, likely due to the additional kinematic constraints. Using similar methods to our main precision manipulation investigation, but with a spherical object rolled in the fingertips, we also report the overall fingertip surface usage for two- and three-fingered manipulation, and show a shift in typical fingertip area used between the two and three finger cases.

The experimental precision manipulation data is then used to refine the design of an anthropomorphic precision manipulator. The human precision manipulation workspace is used to select suitable spring ratios for the robotic fingers, and the resulting hand is shown to achieve about half of the average human workspace, despite using only three actuators.

Overall, we investigate multiple aspects of human hand function, as well as constructing a new framework for analyzing human and robotic manipulation. This work contributes to an improved understanding of human grasp usage in real-world environments, as well as human precision manipulation workspace. We provide a demonstration of how some of the studied aspects of human hand function can be applied to anthropomorphic manipulator design, but we anticipate that the results will also be of interest in other fields, such as by helping to design devices matched to hand capabilities and typical usage, or providing inspiration for future methods to rehabilitate hand function.
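The workspace anisotropy this dissertation reports (a dominant axis along which more object movement is possible) is the kind of property a principal-component analysis of recorded object positions exposes directly. The following sketch is our illustration, not the dissertation's analysis pipeline:

```python
import numpy as np

def workspace_axes(object_positions):
    """PCA of recorded object positions (n, 3).  Returns eigenvalues
    in descending order and the matching eigenvectors as columns;
    the first eigenvector is the axis along which the manipulated
    object moved the most."""
    P = np.asarray(object_positions, float)
    C = np.cov((P - P.mean(axis=0)).T)   # 3x3 covariance of positions
    evals, evecs = np.linalg.eigh(C)
    order = np.argsort(evals)[::-1]      # largest variance first
    return evals[order], evecs[:, order]
```

The spread of the eigenvalues quantifies the anisotropy: a strongly dominant first eigenvalue corresponds to the elongated workspace shape described in the abstract.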

14

Scholz, Jonathan. "Physics-based reinforcement learning for autonomous manipulation." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54366.

Abstract:
With recent research advances, the dream of bringing domestic robots into our everyday lives has become more plausible than ever. Domestic robotics has grown dramatically in the past decade, with applications ranging from house cleaning to food service to health care. To date, the majority of the planning and control machinery for these systems is carefully designed by human engineers. A large portion of this effort goes into selecting the appropriate models and control techniques for each application, and these skills take years to master. Relieving the burden on human experts is therefore a central challenge for bringing robot technology to the masses. This work addresses this challenge by introducing a physics engine as a model space for an autonomous robot, and defining procedures for enabling robots to decide when and how to learn these models. We also present an appropriate space of motor controllers for these models, and introduce ways to intelligently select when to use each controller based on the estimated model parameters. We integrate these components into a framework called Physics-Based Reinforcement Learning, which features a stochastic physics engine as the core model structure. Together these methods enable a robot to adapt to unfamiliar environments without human intervention. The central focus of this thesis is on fast online model learning for objects with under-specified dynamics. We develop our approach across a diverse range of domestic tasks, starting with a simple table-top manipulation task, followed by a mobile manipulation task involving a single utility cart, and finally an open-ended navigation task with multiple obstacles impeding robot progress. We also present simulation results illustrating the efficiency of our method compared to existing approaches in the learning literature.
15

Verbryke, Matthew R. "Preliminary Implementation of a Modular Control System for Dual-Arm Manipulation with a Humanoid Robot." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1543838768677697.

16

Jain, Advait. "Mobile manipulation in unstructured environments with haptic sensing and compliant joints." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45788.

Abstract:
We make two main contributions in this thesis. First, we present our approach to robot manipulation, which emphasizes the benefits of making contact with the world across all the surfaces of a manipulator with whole-arm tactile sensing and compliant actuation at the joints. In contrast, many current approaches to mobile manipulation assume most contact is a failure of the system, restrict contact to only occur at well-modeled end effectors, and use stiff, precise control to avoid contact. We develop a controller that enables robots with whole-arm tactile sensing and compliant actuation at the joints to reach to locations in high clutter while regulating contact forces. We assume that low contact forces are benign, and our controller does not place any penalty on contact forces below a threshold. Our controller only requires haptic sensing, handles multiple contacts across the surface of the manipulator, and does not need an explicit model of the environment prior to contact. It uses model predictive control with a time horizon of length one, and a linear quasi-static mechanical model that it constructs at each time step. We show that our controller enables both real and simulated robots to reach goal locations in high clutter with low contact forces. While doing so, the robots bend, compress, slide, and pivot around objects. To enable experiments on real robots, we also developed an inexpensive, flexible, and stretchable tactile sensor and covered large surfaces of two robot arms with these sensors. With an informal experiment, we show that our controller and sensor have the potential to enable robots to manipulate in close proximity to, and in contact with, humans while keeping contact forces low. Second, we present an approach to give robots common sense about everyday forces in the form of probabilistic, data-driven, object-centric models of haptic interactions. These models can be shared by different robots for improved manipulation performance.
We use pulling open doors, an important task for service robots, as an example to demonstrate our approach. Specifically, we capture and model the statistics of forces while pulling open doors and drawers. Using a portable custom force and motion capture system, we create a database of forces as human operators pull open doors and drawers in six homes and one office. We then build data-driven models of the expected forces while opening a mechanism, given knowledge of either its class (e.g., refrigerator) or the mechanism identity (e.g., a particular cabinet in Advait's kitchen). We demonstrate that these models can enable robots to detect anomalous conditions, such as a locked door or collisions between the door and the environment, faster and with lower excess force applied to the door compared to methods that do not use a database of forces.
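The anomaly-detection idea above can be sketched with a toy expected-force model: learn the mean and spread of pulling forces per door-angle bin from recorded trials, then flag measurements that deviate too far. All names, bin sizes, and thresholds here are illustrative assumptions, not the thesis's actual model.

```python
# Hypothetical sketch: flag anomalous door-opening forces against a
# data-driven model of expected force. The model is just a per-angle-bin
# mean and standard deviation learned from successful opening trials.

def build_force_model(trials, bin_deg=5):
    """trials: list of (angle_deg, force_N) samples from successful openings."""
    bins = {}
    for angle, force in trials:
        bins.setdefault(int(angle // bin_deg), []).append(force)
    model = {}
    for b, forces in bins.items():
        mean = sum(forces) / len(forces)
        var = sum((f - mean) ** 2 for f in forces) / len(forces)
        model[b] = (mean, var ** 0.5)
    return model

def is_anomalous(model, angle_deg, force_N, n_sigma=3.0, bin_deg=5):
    """True if the measured force deviates too far from the expected force."""
    b = int(angle_deg // bin_deg)
    if b not in model:
        return True  # no data for this configuration: treat as anomalous
    mean, std = model[b]
    return abs(force_N - mean) > n_sigma * max(std, 1e-6)
```

A locked door would produce forces far above the learned mean early in the trajectory and be flagged immediately, before large excess force is applied.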
APA, Harvard, Vancouver, ISO, and other styles
17

Al-Gallaf, Ebrahim Abdulla. "Task space robot hand manipulation and optimal distribution of fingertip force functions." Thesis, University of Reading, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Atherton, John A. "Supporting Remote Manipulation: An Ecological Approach." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/1895.

Full text
Abstract:
User interfaces for remote robotic manipulation widely lack sufficient support for situation awareness and, consequently, can induce high mental workload. With poor situation awareness, operators may fail to notice task-relevant features in the environment often leading the robot to collide with the environment. With high workload, operators may not perform well over long periods of time and may feel stressed. We present an ecological visualization that improves operator situation awareness. Our user study shows that operators using the ecological interface collided with the environment on average half as many times compared with a typical interface, even with a poorly calibrated 3D sensor; however, users performed more quickly with the typical interface. The primary benefit of the user study is identifying several changes to the design of the user interface; preliminary results indicate that these changes improve the usability of the manipulator.
APA, Harvard, Vancouver, ISO, and other styles
19

Grier, Michael Anthony 1956. "Control of modular robotic fingers toward dexterous manipulation with sliding contacts." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/276988.

Full text
Abstract:
Control and other issues related to the use of modular robotic fingers to perform dexterous manipulation are considered. The specific manipulation strategy to be implemented, which focuses on parts acquisition and takes advantage of the sliding contacts that exist between the fingers and the object being manipulated, is described. The results of early implementation efforts, in which a standard individual-actuator PID control approach was used, are discussed, and problems encountered in these efforts, related to friction and other effects, are identified. A computed-torque control scheme that provides adaptive friction compensation is proposed for future use with the fingers. Simulations were performed to help determine whether this proposed approach will improve system tracking performance in the presence of a variety of disturbances like those that will affect the fingers during actual operation; their results and the implications for future implementation efforts are discussed.
APA, Harvard, Vancouver, ISO, and other styles
20

Luo, Guoliang. "Evaluation of a Model-free Approach to Object Manipulation in Robotics." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-156434.

Full text
Abstract:
Action Recognition is crucial for object manipulation in robotics. In recent years, Programming by Demonstration has been proposed as a way for a robot to learn tasks from human demonstrations. Based on this concept, a model-free approach for object manipulation has recently been proposed in [1]. In this thesis, this model-free approach is evaluated for Action Recognition. Specifically, the approach classifies actions by observing object-interaction changes in video. Applying image segmentation to video presents various difficulties, such as motion blur, complex environments, and over- and under-segmentation. This thesis investigates and simulates these image segmentation errors in a controllable manner. Based on the simulation, two different similarity measures are evaluated: the Substring Match (SSM) and the Bhattacharyya Distance (B-Distance). The results show that the B-Distance method is more consistent and able to classify actions at higher noise levels than the SSM method. Further, we propose an action representation using a kernel method. The evaluation shows that the novel representation improves the Action Recognition rate significantly.
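The Bhattacharyya distance mentioned above is a standard measure of similarity between two discrete distributions, e.g. histograms of object-interaction features. A minimal sketch (function name is our own; the thesis's exact feature encoding is not reproduced here):

```python
import math

# Bhattacharyya distance between two histograms over the same bins.
# Distance 0 means identical distributions; larger means less similar.

def bhattacharyya_distance(p, q):
    """p, q: non-negative histograms (need not be normalized)."""
    sp, sq = sum(p), sum(q)
    bc = sum(math.sqrt((pi / sp) * (qi / sq)) for pi, qi in zip(p, q))
    # Clamp for numerical safety: bc may drift past [0, 1] by rounding.
    return -math.log(min(max(bc, 1e-12), 1.0))
```

Classification then amounts to assigning an observed histogram the label of the nearest stored exemplar under this distance.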
APA, Harvard, Vancouver, ISO, and other styles
21

De, La Bourdonnaye François. "Learning sensori-motor mappings using little knowledge : application to manipulation robotics." Thesis, Université Clermont Auvergne‎ (2017-2020), 2018. http://www.theses.fr/2018CLFAC037/document.

Full text
Abstract:
The thesis is focused on learning a complex manipulation robotics task using little knowledge. More precisely, the concerned task consists in reaching an object with a serial arm, and the objective is to learn it without camera calibration parameters, forward kinematics, handcrafted features, or expert demonstrations. Deep reinforcement learning algorithms suit this objective well. Indeed, reinforcement learning allows learning sensori-motor mappings while dispensing with dynamics models, and deep learning allows dispensing with handcrafted features for the state space representation. However, it is difficult to specify the objectives of the learned task without requiring human supervision. Some solutions rely on expert demonstrations or shaping rewards to guide robots towards the objective; the latter are generally computed using forward kinematics and handcrafted visual modules. Another class of solutions consists in decomposing the complex task. Learning from easy missions can be used, but this requires the knowledge of a goal state. Decomposing the whole complex task into simpler sub-tasks can also be utilized (hierarchical learning), but does not necessarily imply a lack of human supervision. Alternate approaches that use several agents in parallel to increase the probability of success can be used but are costly. In our approach, we decompose the whole reaching task into three simpler sub-tasks while taking inspiration from human behavior: humans first look at an object before reaching for it. The first learned task is an object fixation task aimed at localizing the object in 3D space. This is learned using deep reinforcement learning and a weakly supervised reward function. The second task consists in jointly learning end-effector binocular fixations and a hand-eye coordination function. This is learned using a similar set-up and is aimed at localizing the end-effector in 3D space.
The third task uses the two previously learned skills to learn to reach an object, under the same requirements as the two prior tasks: it requires very little supervision. In addition, without using additional priors, an object reachability predictor is learned in parallel. The main contribution of this thesis is the learning of a complex robotic task with weak supervision.
APA, Harvard, Vancouver, ISO, and other styles
22

Coleman, Catherine. "The Development of a Sensitive Manipulation End Effector." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/160.

Full text
Abstract:
This thesis designed and realized a two-degree-of-freedom wrist and two-finger manipulator that completes the six-degree-of-freedom Sensitive Manipulation Platform, the arm of which was previously developed. This platform extends previous research in the field of robotics by covering not only the end effector with deformable tactile sensors, but also the links of the arm. Having tactile sensors on the arm will improve the dynamic model of the system during contact with its environment and will allow research in contact navigation to be explored. This type of research is intended for developing algorithms for exploring dynamic environments. Unlike traditional robots that focus on collision avoidance, this platform is designed to seek out contact and use it to gather important information about its surroundings. This small desktop platform was designed to have proportions and properties similar to those of a small human arm, including compliant joints and tactile sensitivity along the length of the arm. The primary applications for the completed platform will be research in contact navigation and manipulation in dynamic environments. However, there are countless potential applications for a compliant arm with increased tactile feedback, including prosthetics and domestic robotics. This thesis covers the details behind the design, analysis, and evaluation of the two degrees of freedom of the wrist and the two two-link fingers, with particular attention given to the integration of series elastic actuators, the decoupling of the fingers from the wrist, and the incorporation of tactile sensors in both the forearm motor module and the fingers.
APA, Harvard, Vancouver, ISO, and other styles
23

Erdogan, Can. "Planning in constraint space for multi-body manipulation tasks." Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/54978.

Full text
Abstract:
Robots are inherently limited by physical constraints on their link lengths, motor torques, battery power and structural rigidity. To thrive in circumstances that push these limits, such as in search and rescue scenarios, intelligent agents can use the available objects in their environment as tools. Reasoning about arbitrary objects and how they can be placed together to create useful structures such as ramps, bridges or simple machines is critical to push beyond one's physical limitations. Unfortunately, the solution space is combinatorial in the number of available objects and the configuration space of the chosen objects and the robot that uses the structure is high dimensional. To address these challenges, we propose using constraint satisfaction as a means to test the feasibility of candidate structures and adopt search algorithms in the classical planning literature to find sufficient designs. The key idea is that the interactions between the components of a structure can be encoded as equality and inequality constraints on the configuration spaces of the respective objects. Furthermore, constraints that are induced by a broadly defined action, such as placing an object on another, can be grouped together using logical representations such as Planning Domain Definition Language (PDDL). Then, a classical planning search algorithm can reason about which set of constraints to impose on the available objects, iteratively creating a structure that satisfies the task goals and the robot constraints. To demonstrate the effectiveness of this framework, we present both simulation and real robot results with static structures such as ramps, bridges and stairs, and quasi-static structures such as lever-fulcrum simple machines.
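The feasibility testing described above can be illustrated with a deliberately tiny example: a "place A on B" action induces an inequality constraint on A's configuration (here reduced to one dimension: the centre of mass must lie over the support interval), and the planner's search simply looks for a candidate satisfying it. The scenario, numbers, and function names are invented for illustration; the thesis encodes full multi-body constraints.

```python
# Toy constraint-satisfaction feasibility check for a placement action.

def placement_feasible(com_x, support_lo, support_hi, margin=0.05):
    """Inequality constraint induced by 'place A on B' (1-D version):
    A's centre of mass must lie over B's support interval, with margin."""
    return support_lo + margin <= com_x <= support_hi - margin

def first_feasible_placement(com_candidates, support):
    """Tiny stand-in for the planner's search over candidate placements."""
    lo, hi = support
    for x in com_candidates:
        if placement_feasible(x, lo, hi):
            return x
    return None
```

In the full framework, many such constraints (one group per PDDL-style action) are imposed jointly, and the search composes actions until the structure satisfies the task goals.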
APA, Harvard, Vancouver, ISO, and other styles
24

Fehlberg, Mark Allan. "Improving large workspace precision manipulation through use of an active handrest." Thesis, The University of Utah, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3619812.

Full text
Abstract:

Humans generally have difficulty performing precision tasks with their unsupported hands. To compensate for this difficulty, people often seek to support or rest their hand and arm on a fixed surface. However, when the precision task needs to be performed over a workspace larger than what can be reached from a fixed position, a fixed support is no longer useful.

This dissertation describes the development of the Active Handrest, a device that expands its user's dexterous workspace by providing ergonomic support and precise repositioning motions over a large workspace. The prototype Active Handrest is a planar computer-controlled support for the user's hand and arm. The device can be controlled through force input from the user, position input from a grasped tool, or a combination of inputs. The control algorithm of the Active Handrest converts the input(s) into device motions through admittance control where the device's desired velocity is calculated proportionally to the input force or its equivalent.

A robotic 2-axis admittance device was constructed as the initial Planar Active Handrest, or PAHR, prototype. Experiments were conducted to optimize the device's control input strategies. Large workspace shape tracing experiments were used to compare the PAHR to unsupported, fixed support, and passive moveable support conditions. The Active Handrest was found to reduce task error and provide better speed-accuracy performance.

Next, virtual fixture strategies were explored for the device. From the options considered, a virtual spring fixture strategy was chosen based on its effectiveness. An experiment was conducted to compare the PAHR with its virtual fixture strategy to traditional virtual fixture techniques for a grasped stylus. Virtual fixtures implemented on the Active Handrest were found to be as effective as fixtures implemented on a grasped tool.

Finally, a higher degree-of-freedom Enhanced Planar Active Handrest, or E-PAHR, was constructed to provide support for large workspace precision tasks while more closely following the planar motions of the human arm. Experiments were conducted to investigate appropriate control strategies and device utility. The E-PAHR was found to provide a skill level equal to that of the PAHR with reduced user force input and lower perceived exertion.
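The admittance law described above (desired velocity proportional to the user's input force) can be sketched in a few lines. The deadband and saturation values below are illustrative assumptions, not the Active Handrest's actual parameters.

```python
# Minimal 1-axis admittance controller sketch: map input force [N] to a
# commanded velocity [m/s], with a deadband and a speed limit.

def admittance_velocity(force_N, admittance=0.05, deadband_N=0.5, v_max=0.2):
    if abs(force_N) < deadband_N:
        return 0.0                      # ignore sensor noise / hand tremor
    v = admittance * force_N            # velocity proportional to force
    return max(-v_max, min(v_max, v))   # saturate for safety
```

Running this per axis at the control rate yields the proportional repositioning behavior: a light push produces a slow, precise motion, and a firm push a capped, faster one.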

APA, Harvard, Vancouver, ISO, and other styles
25

Ejdeholm, Dawid, and Jacob Harsten. "Manipulation Action Recognition and Reconstruction using a Deep Scene Graph Network." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-42405.

Full text
Abstract:
Convolutional neural networks have been used successfully in action recognition but are usually restricted to operating on Euclidean data, such as images. In recent years there has been an increase in research devoted to finding generalized models that operate on non-Euclidean data (e.g. graphs), and manipulation action recognition on graphs is still a very novel subject. In this thesis a novel graph-based deep neural network is developed for predicting manipulation actions and reconstructing graphs from a lower-dimensional representation. The network is trained on two manipulation action datasets, using their respective previous work on action prediction as baselines. In addition, a modular perception pipeline is developed that takes RGBD images as input and outputs a scene graph, consisting of objects and their spatial relations, which can then be fed to the network for online action prediction. The network manages to outperform both baselines when trained for action prediction, and achieves comparable results when trained end-to-end to perform both action prediction and graph reconstruction simultaneously. Furthermore, to test the scalability of our model, the network is evaluated on input graphs derived from our scene graph generator, in which the subject performs 7 different demonstrations of the learned action types in a new scene context with novel objects.
APA, Harvard, Vancouver, ISO, and other styles
26

Pence, William Garrett. "Autonomous Mobility and Manipulation of a 9-DoF WMRA." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3288.

Full text
Abstract:
The wheelchair-mounted robotic arm (WMRA) is a 9-degree-of-freedom (DoF) assistive system that consists of a 2-DoF modified commercial power wheelchair and a custom 7-DoF robotic arm. Kinematics and control methodology for the 9-DoF system that combine mobility and manipulation have been previously developed and implemented. This combined control allows the wheelchair and robotic arm to follow a single trajectory based on weighted optimizations. However, for the execution of activities of daily living (ADL) in real-world environments, modified control techniques have been implemented. In order to execute macro ADL tasks, such as a "go to and pick up" task, this work has implemented several control algorithms on the WMRA system. Visual servoing based on template matching and feature extraction allows the mobile platform to approach the desired goal object. Feature extraction based on the scale-invariant feature transform (SIFT) gives the system object detection capabilities to recommend actions to the user and to orient the arm to grasp the goal object using visual servoing. Finally, a collision avoidance system is implemented to detect and avoid obstacles while the wheelchair platform moves toward the goal object. These implementations allow the WMRA system to operate autonomously from the beginning of the task, where the user selects the goal object, through to its full completion.
APA, Harvard, Vancouver, ISO, and other styles
27

Tariq, Usama. "Robotic Grasping of Large Objects for Collaborative Manipulation." Thesis, Luleå tekniska universitet, Rymdteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-65866.

Full text
Abstract:
In the near future, robots are envisioned to work alongside humans in professional and domestic environments without significant restructuring of the workspace. Robotic systems in such setups must be adept at observation, analysis and rational decision making. To coexist in an environment, humans and robots will need to interact and cooperate for multiple tasks. A fundamental such task is the manipulation of large objects in work environments, which requires cooperation between multiple manipulating agents for load sharing. Collaborative manipulation has been studied in the literature with the focus on multi-agent planning and control strategies. However, for a collaborative manipulation task, grasp planning also plays a pivotal role in cooperation and task completion. In this work, a novel approach is proposed for collaborative grasping and manipulation of large unknown objects. The manipulation task was defined as a sequence of poses and expected external wrench acting on the target object. In a two-agent manipulation task, the proposed approach selects a grasp for the second agent after observing the grasp location of the first agent. The solution is computed in a way that it minimizes the grasp wrenches by load sharing between both agents. To verify the proposed methodology, an online system for human-robot manipulation of unknown objects was developed. The system utilized depth information from a fixed Kinect sensor for perception and decision making for a human-robot collaborative lift-up. Experiments with multiple objects substantiated that the proposed method results in an optimal load sharing despite limited information and partial observability.
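The grasp-selection idea above can be reduced to a toy 1-D case: a rigid bar of known weight is held at two points, static balance determines the force each agent carries, and the second grasp is chosen to minimize the larger of the two support forces. The geometry and candidate points are invented for the example; the thesis reasons over full 6-D wrenches.

```python
# Toy load-sharing sketch: pick the second grasp point on a rigid bar so
# the worse-loaded agent carries as little as possible.

def support_forces(weight, com, x1, x2):
    """Static force/moment balance for a bar held at x1 and x2.
    Returns (f1, f2) with f1 + f2 == weight."""
    f2 = weight * (com - x1) / (x2 - x1)
    return weight - f2, f2

def pick_second_grasp(weight, com, x1, candidates):
    """Choose x2 minimizing the larger support-force magnitude."""
    return min(
        candidates,
        key=lambda x2: max(abs(f) for f in support_forces(weight, com, x1, x2)),
    )
```

Note how a candidate close to the first grasp produces a large (even negative, i.e. pull-down) force, so the farther, balanced grasp wins.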
APA, Harvard, Vancouver, ISO, and other styles
28

Sandberg, Robert D. "Use of tactile and vision sensing for recognition of overlapping parts for robot manipulation." Thesis, Georgia Institute of Technology, 1985. http://hdl.handle.net/1853/18910.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Ozguner, Orhan. "VISUALLY GUIDED ROBOT CONTROL FOR AUTONOMOUS LOW-LEVEL SURGICAL MANIPULATION TASKS." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1568138320331765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Venator, Edward Stephen. "A Low-cost Mobile Manipulator for Industrial and Research Applications." Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1370512665.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Newall, Geoffrey Charles. "Manipulation of composite sheet material for automatic handling and lay-up." Thesis, University of Bristol, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386276.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Jäkel, Rainer [Verfasser], and R. [Akademischer Betreuer] Dillmann. "Learning of Generalized Manipulation Strategies in Service Robotics / Rainer Jäkel. Betreuer: R. Dillmann." Karlsruhe : KIT-Bibliothek, 2013. http://d-nb.info/1032243201/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Boonvisut, Pasu. "Active Exploration of Deformable Object Boundary Constraints and Material Parameters Through Robotic Manipulation Data." Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1369078402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Ezequiel, Carlos Favis. "Real-Time Map Manipulation for Mobile Robot Navigation." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4481.

Full text
Abstract:
Mobile robots are gaining increased autonomy due to advances in sensor and computing technology. In their current form however, robots still lack algorithms for rapid perception of objects in a cluttered environment and can benefit from the assistance of a human operator. Further, fully autonomous systems will continue to be computationally expensive and costly for quite some time. Humans can visually assess objects and determine whether a certain path is traversable, but need not be involved in the low-level steering around any detected obstacles as is necessary in remote-controlled systems. If only used for rapid perception tasks, the operator could potentially assist several mobile robots performing various tasks such as exploration, surveillance, industrial work and search and rescue operations. There is a need to develop better human-robot interaction paradigms that would allow the human operator to effectively control and manage one or more mobile robots. This paper proposes a method of enhancing user effectiveness in controlling multiple mobile robots through real-time map manipulation. An interface is created that would allow a human operator to add virtual obstacles to the map that represents areas that the robot should avoid. A video camera is connected to the robot that would allow a human user to view the robot's environment. The combination of real-time map editing and live video streaming enables the robot to take advantage of human vision, which is still more effective at general object identification than current computer vision technology. Experimental results show that the robot is able to plan a faster path around an obstacle when the user marks the obstacle on the map, as opposed to allowing the robot to navigate on its own around an unmapped obstacle. Tests conducted on multiple users suggest that the accuracy in placing obstacles on the map decreases with increasing distance of the viewing apparatus from the obstacle. 
Despite this, the user can take advantage of landmarks found in the video and in the map in order to determine an obstacle's position on the map.
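The effect of the operator's map edit can be shown with a minimal planner: marking a virtual obstacle on the occupancy grid forces the path around it. Plain breadth-first search on a small grid; the map, coordinates, and function names are invented for illustration.

```python
from collections import deque

# Sketch: a user-added virtual obstacle changes the planned path.

def shortest_path_len(grid, start, goal):
    """BFS path length on a 4-connected occupancy grid (0 = free)."""
    rows, cols = len(grid), len(grid[0])
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        (r, c), d = frontier.popleft()
        if (r, c) == goal:
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), d + 1))
    return None  # goal unreachable

def add_virtual_obstacle(grid, cells):
    """Operator marks cells to avoid; returns an edited copy of the map."""
    new = [row[:] for row in grid]
    for r, c in cells:
        new[r][c] = 1
    return new
```

Replanning on the edited map yields a detour around the marked cell, which is the "faster path around an obstacle" behavior reported in the experiments.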
APA, Harvard, Vancouver, ISO, and other styles
35

Khokar, Karan. "Laser assisted telerobotic control for remote manipulation activities." [Tampa, Fla] : University of South Florida, 2009. http://purl.fcla.edu/usf/dc/et/SFE0003128.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Ye, Zhou. "Local Flow Manipulation by Rotational Motion of Magnetic Micro-Robots and Its Applications." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/429.

Full text
Abstract:
Magnetic micro-robots are small robots under 1 mm in size, made of magnetic materials, with relatively simple structures and functionalities. Such micro-robots can be actuated and controlled remotely by externally applied magnetic fields, and hence have the potential to access small and enclosed spaces. Most of the existing magnetic micro-robots can operate in wet environments. When the robots are actuated by the applied magnetic field to move inside a viscous liquid, they induce flows around them. The induced flows are relatively local, as their velocity decays rapidly with the distance from a moving robot, and the flow patterns are highly correlated with the motions of the micro-robots, which are controllable by the applied magnetic field. Therefore, it is possible to generate local flow patterns that cannot be easily produced using other microfluidic techniques. In this work we propose to use the rotational motion of magnetic micro-robots for local manipulation of flows. We employ electromagnetic techniques to deliver actuation and motion control to the micro-robots. A rotational magnetic field is applied to induce rotational motion of the micro-robots, both when they are near a surface and when they are suspended in the liquid, generating local rotational flows in their vicinity. Three major applications using the flows generated by the rotating micro-robots are demonstrated in this work: 1) two-dimensional (2D) non-contact manipulation of micro-objects; 2) three-dimensional (3D) propulsion that allows a micro-robot to swim in a liquid; and 3) size-based sorting of micro-particles in microfluidic channels under continuous flow. The first two applications occur in otherwise quiescent liquid, while the third requires the presence of a non-zero background flow.
For the first application, we propose two methods to achieve precise positioning of the micro-robots on a surface: 1) using visual feedback control to adjust the rotation of a single micro-robot, which can then be precisely positioned at any location on the surface; and 2) using a specially prepared surface with magnetic micro-docks embedded in it, which act as local magnetic traps allowing multiple micro-robots to hold their positions and operate in parallel. Physical models are established for both the micro-robot and the micro-objects present in the induced rotational flow, and the rotational flows induced by rotating micro-robots are studied with numerical simulations. Experimental demonstrations are first given at sub-millimeter scale to verify the proposed method. Micro-manipulation of polymer beads is performed with both position-control methods, and automated micro-manipulation is achieved using visual feedback. Micro-manipulation at the micron scale is then performed to demonstrate the scalability and versatility of the proposed method: non-contact manipulation is achieved for various micro-objects, including biological samples, using a single spherical micro-robot. Inspired by flagellated microorganisms in nature, we explore the hydrodynamics of an elastic rod-like structure, the artificial flagellum, and verify by both simulation and experiments that rotation and deformation of such a structure result in a propulsive force on the micro-robot it is attached to. The flagellum geometry is optimized for a single flagellum. A swimming micro-robot design with multiple flexible flagella is proposed and fabricated via an inexpensive micro-fabrication process involving photolithography, micro-molding and manual assembly. Experiments are performed to characterize the propulsive force generation and the resulting swimming performance of the fabricated micro-robots.
It is demonstrated that the swimming speed can be improved by increasing the number of attached flagella. For the size-based sorting application, we integrate the micro-robots into microfluidic channels using a substrate embedded with magnetic micro-docks, which hold the spinning robots in place under continuous flow inside the channels. A numerical analysis of the flows inside the microfluidic channel in the presence of rotating micro-robots is carried out, and a physical model for size-based lateral migration of spherical micro-objects inside the induced rotational flows is established and discussed. Experimental demonstrations show the induced rotational flows diverting the trajectories of micro-particles based on their sizes under continuous flow. In addition, we propose using the two-photon polymerization (TPP) technique to fabricate magnetic micro-robots with complex shapes; the method could also yield arrays of micro-robots for more sophisticated applications. However, experimental results show that TPP is insufficient to produce magnetic micro-robots that meet our needs for the size-based sorting application, due to physical limitations of the materials. Despite that, it is potentially powerful and suitable for fabricating micro-robots with complex structures at small scales.
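The locality of the induced flow claimed above can be illustrated with the classical Stokes-flow result for a rotlet (a point torque): in the rotor's equatorial plane the flow speed decays as 1/r². This is a textbook idealization we introduce for illustration, with an arbitrary torque value; it is not the thesis's simulation model.

```python
import math

# Flow speed around a rotlet (point torque) in Stokes flow:
# |u| = T / (8 * pi * mu * r^2) in the equatorial plane.

def rotlet_speed(torque, r, viscosity=1.0e-3):
    """Flow speed [m/s] at distance r [m] for torque T [N*m] in a fluid
    of dynamic viscosity mu [Pa*s] (default: roughly water)."""
    return torque / (8.0 * math.pi * viscosity * r ** 2)
```

Doubling the distance from the rotating robot cuts the induced flow speed to a quarter, which is why the flow patterns stay local to each robot.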
APA, Harvard, Vancouver, ISO, and other styles
37

Bohg, Jeannette. "Multi-Modal Scene Understanding for Robotic Grasping." Doctoral thesis, KTH, Datorseende och robotik, CVAP, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-49062.

Full text
Abstract:
Current robotics research is largely driven by the vision of creating an intelligent being that can perform dangerous, difficult or unpopular tasks. These can for example be exploring the surface of planet Mars or the bottom of the ocean, maintaining a furnace or assembling a car. They can also be more mundane, such as cleaning an apartment or fetching groceries. This vision has been pursued since the 1960s, when the first robots were built. Some of the tasks mentioned above, especially those in industrial manufacturing, are already frequently performed by robots. Others are still completely out of reach. In particular, household robots are far from being deployable as general-purpose devices. Although advancements have been made in this research area, robots are not yet able to perform household chores robustly in unstructured and open-ended environments given unexpected events and uncertainty in perception and execution. In this thesis, we analyze which perceptual and motor capabilities are necessary for the robot to perform common tasks in a household scenario. In that context, an essential capability is to understand the scene that the robot has to interact with. This involves separating objects from the background but also from each other. Once this is achieved, many other tasks become much easier. The configuration of objects can be determined; they can be identified or categorized; their pose can be estimated; free and occupied space in the environment can be outlined. This kind of scene model can then inform grasp planning algorithms to finally pick up objects. However, scene understanding is not a trivial problem, and even state-of-the-art methods may fail. 
Given an incomplete, noisy and potentially erroneously segmented scene model, the questions remain how suitable grasps can be planned and how they can be executed robustly. In this thesis, we propose to equip the robot with a set of prediction mechanisms that allow it to hypothesize about parts of the scene it has not yet observed. Additionally, the robot can also quantify how uncertain it is about this prediction, allowing it to plan actions for exploring the scene at specifically uncertain places. We consider multiple modalities including monocular and stereo vision, haptic sensing and information obtained through a human-robot dialog system. We also study several scene representations of different complexity and their applicability to a grasping scenario. Given an improved scene model from this multi-modal exploration, grasps can be inferred for each object hypothesis. Depending on whether the objects are known, familiar or unknown, different methodologies for grasp inference apply. In this thesis, we propose novel methods for each of these cases. Furthermore, we demonstrate the execution of these grasps in both a closed- and open-loop manner, showing the effectiveness of the proposed methods in real-world scenarios.

APA, Harvard, Vancouver, ISO, and other styles
38

Nguyen, Hai Dai. "Constructing mobile manipulation behaviors using expert interfaces and autonomous robot learning." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50206.

Full text
Abstract:
With current state-of-the-art approaches, development of a single mobile manipulation capability can be a labor-intensive process that presents an impediment to the creation of general-purpose household robots. At the same time, we expect that involving a larger community of non-roboticists can accelerate the creation of novel behaviors. We introduce the use of a software authoring environment called ROS Commander (ROSCo), allowing end-users to create, refine, and reuse robot behaviors with complexity similar to those currently created by roboticists. Akin to Photoshop, which provides end-users with interfaces for advanced computer vision algorithms, our environment provides interfaces to mobile manipulation algorithmic building blocks that can be combined and configured to suit the demands of new tasks and their variations. As our system can be more demanding of users than alternatives such as kinesthetic guidance or learning from demonstration, we performed a user study with 11 able-bodied participants and one person with quadriplegia to determine whether computer-literate non-roboticists would be able to learn to use our tool. In our study, all participants were able to successfully construct functional behaviors after being trained. Furthermore, participants were able to produce behaviors that demonstrated a variety of creative manipulation strategies, showing the power of enabling end-users to author robot behaviors. Additionally, we show how autonomous robot learning, where the robot captures its own training data, can complement human authoring of behaviors by freeing users from the repetitive task of capturing data for learning. By taking advantage of the robot's embodiment, our method creates classifiers that predict, from visual appearance, the 3D locations on home mechanisms where user-constructed behaviors will succeed. With active learning, we show that such classifiers can be learned using a small number of examples. 
We also show that this learning system works with behaviors constructed by non-roboticists in our user study. As far as we know, this is the first instance of perception learning with behaviors not hand-crafted by roboticists.
APA, Harvard, Vancouver, ISO, and other styles
39

Huang, Bidan. "The use of modular approaches for robots to learn grasping and manipulation." Thesis, University of Bath, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.665394.

Full text
Abstract:
Modular approaches are widely used methods in AI and engineering. Such an approach reduces the difficulty of solving a complex problem by subdividing it into several smaller parts, i.e. modules, and tackling each independently. In this dissertation, we show how modular approaches can simplify grasping and manipulation problems for service robots. We use the modular approach to tame the difficulties in solving three main research problems in this field: grasp planning, object manipulation and reaching motion planning. Unlike controlled industrial environments, service robots have to handle abrupt changes and uncertainties occurring in dynamic and cluttered human-centered environments. Planning behaviours in such an environment needs to be fast and adaptive to changing context. Programming a robot with adaptive behaviours is usually a difficult and time-consuming task. By adopting modular approaches, both the task difficulty and the programming time are reduced. The proposed approach is based on the method of imitation learning, sometimes referred to as Programming by Demonstration (PbD). In this framework, we first let a human or robot demonstrate possible solutions to the problem. After collecting the demonstrations, we extract multiple modules from the data. Each module represents a part of the system, and its corresponding demonstrations are modeled with a statistical method. According to the environmental conditions, a set of appropriate modules is chosen to provide the final solution. In this dissertation, we present three different modular approaches tackling three subareas of robot grasping and manipulation: grasp planning, adaptive control for object manipulation, and planning reaching motions. In Chapter 3, we propose a fast method for computing grasps for known objects and extend this method by a modular approach to work with novel objects. 
We implemented this method with two different robot hands, the Barrett hand and the iCub hand, and show that the computation time is always on the millisecond scale. In Chapter 4, we present our modular approach to extracting adaptive control strategies from human demonstrations of object manipulation tasks. We successfully implement this method to teach a robot a manipulation task: opening bottle caps. In Chapter 5, we present a method to model reaching motion primitives that allows humans to modulate robot motions by verbal commands. This method is implemented to perform a bimanual lifting task. We show that the method can generate new motions to lift boxes of different sizes and at different locations. These three studies show that robot grasping and manipulation problems can indeed be divided into modules, the solutions of which can be combined to provide a whole solution to the original problems. With modular approaches, new solutions for novel scenarios can be integrated into the original solution without difficulty. This approach allows robots to accumulate skills. In summary, we contribute three hybrid modular learning methods in this dissertation: (1) a fast method for grasp planning; (2) a method to extract human manipulation skills from demonstrations of object manipulation; (3) a method to recognize motions and generate new motions according to human commands.
APA, Harvard, Vancouver, ISO, and other styles
40

Raiola, Gennaro. "Co-manipulation with a library of virtual guides." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLY001/document.

Full text
Abstract:
Les robots ont un rôle fondamental dans la fabrication industrielle. Non seulement ils augmentent l'efficacité et la qualité des lignes de production, mais aussi diminuent considérablement la charge de travail des humains. Cependant, en raison des limites des robots industriels en termes de flexibilité, de perception et de sécurité, leur utilisation est limitée à un environnement structuré bien connu. En outre, il n'est pas toujours rentable d'utiliser des robots autonomes industriels dans de petites usines à faibles volumes de production. Cela signifie que des travailleurs humains sont encore nécessaires dans de nombreuses chaînes d'assemblage pour exécuter des tâches spécifiques. Par conséquent, ces dernières années, une grande impulsion a été donnée à la co-manipulation homme-robot. En permettant aux humains et aux robots de travailler ensemble, il est possible de combiner les avantages des deux : la compréhension des tâches abstraites et la perception robuste typiques d'un être humain avec la précision et la force d'un robot industriel. Une approche réussie pour faciliter la co-manipulation homme-robot est l'approche des guides virtuels, qui contraint le mouvement du robot sur seulement certaines trajectoires pertinentes. Le guide virtuel ainsi réalisé agit comme un outil passif qui améliore les performances de l'utilisateur en termes de temps de tâche, de charge de travail mentale et d'erreurs. L'aspect innovant de notre travail est de présenter une bibliothèque de guides virtuels qui permet à l'utilisateur de facilement sélectionner, générer et modifier les guides grâce à une interaction haptique intuitive avec le robot. Nous avons démontré, dans deux tâches industrielles, que ces innovations fournissent une interface novatrice et intuitive pour l'accomplissement des tâches par les humains et les robots.
Robots have a fundamental role in industrial manufacturing. They not only increase the efficiency and the quality of production lines, but also drastically decrease the workload carried out by humans. However, due to the limitations of industrial robots in terms of flexibility, perception and safety, their use is limited to well-known, structured environments. Moreover, it is not always cost-effective to use autonomous industrial robots in small factories with low production volumes. This means that human workers are still needed in many assembly lines to carry out specific tasks. Therefore, in recent years, a big impulse has been given to human-robot co-manipulation. By allowing humans and robots to work together, it is possible to combine the advantages of both: the abstract task understanding and robust perception typical of human beings with the accuracy and the strength of industrial robots. One successful method to facilitate human-robot co-manipulation is the Virtual Guides approach, which constrains the motion of the robot along only certain task-relevant trajectories. The resulting virtual guide acts as a passive tool that improves the performance of the user in terms of task time, mental workload and errors. The innovative aspect of our work is to present a library of virtual guides that allows the user to easily select, generate and modify the guides through an intuitive haptic interaction with the robot. We demonstrated in two industrial tasks that these innovations provide a novel and intuitive interface for joint human-robot completion of tasks.
APA, Harvard, Vancouver, ISO, and other styles
41

Valencia, Angel. "3D Shape Deformation Measurement and Dynamic Representation for Non-Rigid Objects under Manipulation." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40718.

Full text
Abstract:
Dexterous robotic manipulation of non-rigid objects is a challenging but necessary problem to explore, as robots increasingly interact with more complex environments in which such objects are frequently present. In particular, common manipulation tasks, such as molding clay to a target shape or picking fruits and vegetables for use in the kitchen, require a high-level understanding of the scene and objects. Commonly, the behavior of non-rigid objects is described by a model. However, well-established modeling techniques are difficult to apply in robotic tasks, since objects and their properties are unknown in such unstructured environments. This work proposes a sensing and modeling framework to measure the 3D shape deformation of non-rigid objects. Unlike traditional methods, this framework explores data-driven learning techniques focused on shape representation and deformation dynamics prediction using a graph-based approach. The proposal is validated experimentally by analyzing the performance of the representation model in capturing the current state of the non-rigid object's shape. In addition, the performance of the prediction model is analyzed in terms of its ability to produce future states of the non-rigid object's shape due to the manipulation actions of the robotic system. The results suggest that the representation model is able to produce graphs that closely capture the deformation behavior of the non-rigid object, while the prediction model produces visually plausible graphs when short-term predictions are required.
APA, Harvard, Vancouver, ISO, and other styles
42

Deyle, Travis. "Ultra high frequency (UHF) radio-frequency identification (RFID) for robot perception and mobile manipulation." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42903.

Full text
Abstract:
Personal robots with autonomy, mobility, and manipulation capabilities have the potential to dramatically improve quality of life for various user populations, such as older adults and individuals with motor impairments. Unfortunately, unstructured environments present many challenges that hinder robot deployment in ordinary homes. This thesis seeks to address some of these challenges through a new robotic sensing modality that leverages a small amount of environmental augmentation in the form of Ultra High Frequency (UHF) Radio-Frequency Identification (RFID) tags. Previous research has demonstrated the utility of infrastructure tags (affixed to walls) for robot localization; in this thesis, we specifically focus on tagging objects. Owing to their low cost and passive (battery-free) operation, users can apply UHF RFID tags to hundreds of objects throughout their homes. The tags provide two valuable properties for robots: a unique identifier and a received signal strength indicator (RSSI, the strength of a tag's response). This thesis explores robot behaviors and radio frequency perception techniques using robot-mounted UHF RFID readers that enable a robot to efficiently discover, locate, and interact with UHF RFID tags applied to objects and people of interest. The behaviors and algorithms explicitly rely on the robot's mobility and manipulation capabilities to provide multiple opportunistic views of the complex electromagnetic landscape inside a home environment. The electromagnetic properties of RFID tags change when applied to common household objects. Objects can have varied material properties, can be placed in diverse orientations, and can be relocated to completely new environments. We present a new class of optimization-based techniques for RFID sensing that are robust to the variation in tag performance caused by these complexities. 
We discuss a hybrid global-local search algorithm where a robot employing long-range directional antennas searches for tagged objects by maximizing expected RSSI measurements; that is, the robot attempts to position itself (1) near a desired tagged object and (2) oriented towards it. The robot first performs a sparse, global RFID search to locate a pose in the neighborhood of the tagged object, followed by a series of local search behaviors (bearing estimation and RFID servoing) to refine the robot's state within the local basin of attraction. We report on RFID search experiments performed in Georgia Tech's Aware Home (a real home). Our optimization-based approach yields superior performance compared to state-of-the-art tag localization algorithms, does not require RF sensor models, is easy to implement, and generalizes to other short-range RFID sensor systems embedded in a robot's end effector. We demonstrate proof-of-concept applications, such as medication delivery and multi-sensor fusion, using these techniques. Through our experimental results, we show that UHF RFID is a complementary sensing modality that can assist robots in unstructured human environments.
APA, Harvard, Vancouver, ISO, and other styles
43

Abi-Farraj, Firas. "Contributions aux architectures de contrôle partagé pour la télémanipulation avancée." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S120/document.

Full text
Abstract:
Bien que la pleine autonomie dans des environnements inconnus soit encore loin, les architectures de contrôle partagé où l'humain et un contrôleur autonome travaillent ensemble pour atteindre un objectif commun peuvent constituer un « terrain intermédiaire » pragmatique. Dans cette thèse, nous avons abordé les différents problèmes des algorithmes de contrôle partagé pour les applications de saisie et de manipulation. En particulier, le travail s'inscrit dans le projet H2020 Romans dont l'objectif est d'automatiser le tri et la ségrégation des déchets nucléaires en développant des architectures de contrôle partagées permettant à un opérateur humain de manipuler facilement les objets d'intérêt. La thèse propose des architectures de contrôle partagé différentes pour manipulation à double bras avec un équilibre opérateur / autonomie différent en fonction de la tâche à accomplir. Au lieu de travailler uniquement sur le contrôle instantané du manipulateur, nous proposons des architectures qui prennent en compte automatiquement les tâches de pré-saisie et de post-saisie permettant à l'opérateur de se concentrer uniquement sur la tâche à accomplir. La thèse propose également une architecture de contrôle partagée pour contrôler un humanoïde à deux bras où l'utilisateur est informé de la stabilité de l'humanoïde grâce à un retour haptique. En plus, un nouvel algorithme d'équilibrage permettant un contrôle optimal de l'humanoïde lors de l'interaction avec l'environnement est également proposé
While full autonomy in unknown environments is still far out of reach, shared-control architectures, where the human and an autonomous controller work together to achieve a common objective, may be a pragmatic "middle ground". In this thesis, we have tackled the different issues of shared-control architectures for grasping and sorting applications. In particular, the work is framed in the H2020 RoMaNS project, whose goal is to automate the sorting and segregation of nuclear waste by developing shared-control architectures allowing a human operator to easily manipulate the objects of interest. The thesis proposes several shared-control architectures for dual-arm manipulation with different operator/autonomy balances depending on the task at hand. While most of the approaches provide an instantaneous interface, we also propose architectures which automatically account for the pre-grasp and post-grasp trajectories, allowing the operator to focus only on the task at hand (e.g., grasping). The thesis also proposes a shared-control architecture for controlling a force-controlled humanoid robot, in which the user is informed about the stability of the humanoid through haptic feedback. A new balancing algorithm allowing for the optimal control of the humanoid under high interaction forces is also proposed.
APA, Harvard, Vancouver, ISO, and other styles
44

Hasan, Md Rakibul. "Modelling and interactional control of a multi-fingered robotic hand for grasping and manipulation." Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/8941.

Full text
Abstract:
In this thesis, the synthesis of a grasping and manipulation controller for the Barrett hand, which is an archetypal example of a multi-fingered robotic hand, is investigated in some detail. This synthesis involves not only the dynamic modelling of the robotic hand but also the control of the joint and workspace dynamics, as well as the interaction of the hand with the object it is grasping and the environment it is operating in. Grasping and manipulation of an object by a robotic hand is always challenging due to the uncertainties associated with the non-linearities of the robot dynamics, the unknown location and stiffness parameters of objects which are not structured in any sense, and the unknown contact mechanics during the interaction of the hand's fingers and the object. To address these challenges, the fundamental task is to establish the mathematical model of the robot hand, model the body dynamics of the object and establish the contact mechanics between the hand and the object. A Lagrangian-based mathematical model of the Barrett hand is developed for controller implementation. A physical SimMechanics-based model of the Barrett hand is also developed in the MATLAB/Simulink environment. A computed torque controller and an adaptive sliding mode controller are designed for the hand, and their performance is assessed both in the joint space and in the workspace. Stability analysis of the controllers is carried out before developing the control laws. Higher-order sliding mode controllers are developed for position control in the presence of uncertainties. These controllers also enhance performance by reducing chattering of the control torques applied to the robot hand. A contact model is developed for the Barrett hand as its fingers grasp an object in the operating environment. The contact forces during the simulated interaction of the fingers with the object were monitored for objects with different stiffness values. 
Position- and force-based impedance controllers are developed to optimise the contact force. To deal with the unknown stiffness of the environment, adaptation is implemented by identifying the impedance. An evolutionary algorithm is also used to estimate the desired impedance parameters of the dynamics of the coupled robot and compliant object. A Newton-Euler based model is developed for the rigid object body. A grasp map and a hand Jacobian are defined for the Barrett hand grasping an object. A fixed contact model with friction is considered for the grasping and manipulation control. The compliant dynamics of the Barrett hand and the object are developed, and the control problem is defined in terms of the contact force. An adaptive control framework is developed and implemented for different grasps and manipulation trajectories of the Barrett hand. The adaptive controller is developed in two stages: first, the unknown robot and object dynamics are estimated, and second, the contact force is computed from the estimated dynamics. The stability of the controllers is ensured by applying Lyapunov's direct method.
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Yuxin. "Transfer of manipulation skills from human to machine through demonstration in a haptic rendered virtual environment." School of Electrical, Computer and Telecommunications Engineering - Faculty of Informatics, 2005. http://ro.uow.edu.au/theses/283.

Full text
Abstract:
Robots are widely used as automation tools to improve productivity in industry. Force-sensitive manipulation is a generic requirement for a large number of industrial tasks, especially those associated with assembly. One of the major factors preventing greater use of robots in assembly tasks to date has been the lack of fast and reliable methods of programming robots to carry out such tasks. Hence robots have in practice been unable to economically replicate the complex force- and torque-sensitive capabilities of human operators. A new approach is explored to transfer human manipulation skills to a robotic system. The teaching of human skills to the machine starts by demonstrating those skills in a haptic-rendered virtual environment. The experience is close to real operation, as the forces and torques generated during the interaction of the parts are sensed by the operator. A skill acquisition algorithm utilizes the position and contact force/torque data generated in the virtual environment, combined with a priori knowledge about the task, to generate the skills required to perform such a task. These skills are translated into actual robotic trajectories for implementation in real time. The peg-in-hole insertion problem is used as a case study. A haptic-rendered 3D virtual model of the peg-in-hole insertion process is developed. The haptic or tactile rendering is provided through a haptic device. A multi-layer method is developed to derive and learn the basic manipulation skills from the virtual manipulation carried out by a human operator. The force and torque data generated through virtual manipulation are used for skill acquisition. The skill acquisition algorithm primarily learns the actions which result in a proper change of contact states. Both optimum sequences and normal operation rules are learned and stored in a skill database. 
If the contact state is not among or near any state in the optimum sequences stored in the skill database, a corrective strategy is applied until a state in or near the optimal space is produced. On-line incremental learning is also used for new cases encountered during physical manipulation. The approach is fully validated through an experimental rig set up for this purpose, and the results are reported.
APA, Harvard, Vancouver, ISO, and other styles
46

Leborne, François. "Contributions à la commande de bras manipulateurs de robot sous-marin pour la manipulation à grande profondeur d'échantillons biologiques déformables." Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS044/document.

Full text
Abstract:
Dans le cadre de la collecte sous-marine d'échantillons biologiques et minéraux pour la recherche scientifique par un robot sous-marin équipé de bras manipulateurs, ce projet de thèse a pour but principal le développement de nouvelles techniques de manipulation des échantillons, plus fiables, permettant d'en assurer l'intégrité physique et leur exploitabilité par les chercheurs. Les nouvelles techniques de manipulation proposées prennent en compte l'actionnement particulier des nouveaux bras électriques sous-marins équipant les engins récents, afin d'augmenter la précision du positionnement des outils embarqués par le manipulateur. Un outil amovible, compliant, et mesurant les efforts d'interaction entre les bras du sous-marin et leur environnement est aussi proposé, et des méthodes permettant de tirer partie des caractéristiques de cet outil sont développées et testées expérimentalement. L'engin sous-marin hybride HROV Ariane, équipé de deux bras électriques hétérogènes, offre la plateforme opérationnelle pour la validation expérimentale des solutions proposées
The research carried out in the scope of this doctorate aims to develop innovative techniques to improve the collection of biological and mineral samples underwater using robotic manipulators. The end goal is to enhance handling by robotic means in order to maximise the quality of the samples provided to marine scientists. The proposed techniques are based on an in-depth analysis of the robotic arm actuators used in the most recent underwater intervention vehicles, in order to improve the accuracy of the positioning of the tools held by the manipulator arms. An instrumented tool has also been developed with the aim of measuring the reaction forces and adapting the interaction between the arm's end-effector and its environment to improve sample handling. These methods and the other contributions described in this thesis have been experimentally validated using Ifremer's hybrid ROV Ariane, equipped with two electrically actuated heterogeneous robotic arms.
APA, Harvard, Vancouver, ISO, and other styles
47

Hosoe, Shigeyuki, Yoshikazu Hayakawa, and Zakarya Zyada. "Fuzzy Nonprehensile Manipulation Control of a Two-Rigid-Link Object by Two Cooperative Arms." International Federation of Automatic Control (IFAC), 2011. http://hdl.handle.net/2237/20767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Iglesias, José. "A force control based strategy for extrinsic in-hand object manipulation through prehensile-pushing primitives." Thesis, KTH, Robotik, perception och lärande, RPL, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-220136.

Full text
Abstract:
Object manipulation is a complex task for robots. It often implies a compromise between the degrees of freedom of the hand and its fingers (dexterity) and the cost and complexity of control. One strategy to increase the dexterity of robotic hands with low dexterity, called extrinsic manipulation, exploits additional accelerations of the object caused by external forces. We propose a force-control-based method for performing extrinsic in-hand object manipulation with force-torque feedback. For this purpose, we use a prehensile pushing action, which consists of pushing the object against an external surface under quasistatic assumptions. By using a control strategy, we also achieve robustness to parameter uncertainty (such as friction) and perturbations that are not completely captured by mathematical models of the system. The force control strategy is performed in two different ways: the contact force generated by the interaction between the object and the external surface is controlled using an admittance controller, while the gripping force applied by the gripper on the object is controlled through a PI controller. A Kalman filter is used for the estimation of the state of the object, based on force-torque measurements from a sensor at the wrist of the robot. We validate our approach by conducting experiments on a PR2 robot, available at the Robotics, Perception, and Learning lab at KTH Royal Institute of Technology.
Att greppa och manipulera objekt är en komplex uppgift för robotar. Det innebär ofta en kompromiss mellan hand och fingrars frihetsgrader (fingerfärdighet) mot reglersystemets kostnad och komplexitet. Extrinsic manipulation är en strategi för att öka fingerfärdigheten hos robothänder, och dess princip är att utnyttja accelerationer på objektet som orsakas av yttre krafter. Vi föreslår en metod baserad på att reglera kraft för hantering av objekt i handen, genom en återkoppling av kraftmomentet. För detta ändamål använder vi en prehensile pushing action, där objektet puttas mot en yta, under kvasistatiska antaganden. Genom att använda en reglerstrategi får vi en robusthet mot parametrars osäkerhet (som friktion) och störningar, vilka inte beskrivs av systemets modell. Kraftkontrollstrategin utförs på två olika sätt: kraften mellan objektet och den yttre ytan styrs med en admittance controller medan en ytterligare styrning av applicerad gripkraft på objektet görs med en PI-reglerare. Ett Kalman filter används för att estimera objektets tillstånd, baserat på mätningar av kraftmoment via en sensor vid robotens handled. Vi utvärderar vårt tillvägagångssätt genom att utföra experiment på en PR2-robot vid KTHs Robotics, Perception och Learning Lab.
APA, Harvard, Vancouver, ISO, and other styles
49

Frasnedi, Alessio. "Optimization and convergence of manipulation tasks in the priority-level control framework." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
Redundant robots, in particular mobile manipulators, have become increasingly important in recent years. Thanks to their flexibility, they can interact effectively with the surrounding environment by exploiting their high number of degrees of freedom. For the same reason, however, solving the kinematics problem (in particular the inverse kinematics equations) becomes rather complex, and many studies have been dedicated to this research field. One of these control strategies, called priority-level control, has the great advantage of establishing a priority-based ordering of the different tasks the robot has to perform. In this way, the robot can execute several complex functions concurrently, with no task perturbing the accomplishment of tasks of higher priority. This technique constitutes the core of this thesis, also because a task can be deactivated whenever conditions allow it, while mitigating any discontinuity in the robot's behavior. This project, strictly based on the priority-level control strategy, deals with three possible improvements of the related framework. First, a finite-time control technique is described; then the implementation of a manipulability task and of a conflict-avoidance approach is presented. Finally, a series of simulations is carried out to verify the correctness of the proposed theory.
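The core mechanism of priority-level control is commonly realized by projecting each lower-priority task into the null space of the higher-priority ones. A minimal two-level sketch (a generic textbook formulation, not necessarily the exact scheme of this thesis) with randomly generated task Jacobians:

```python
import numpy as np

def priority_solve(J1, dx1, J2, dx2):
    """Two-level task-priority resolution: the secondary task velocity is
    projected into the null space of the primary task, so it cannot
    perturb the primary task's execution."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1        # null-space projector of task 1
    dq = J1_pinv @ dx1 + N1 @ np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ J1_pinv @ dx1)
    return dq

rng = np.random.default_rng(0)
J1 = rng.standard_normal((2, 6))   # primary task Jacobian (e.g. end-effector position)
J2 = rng.standard_normal((3, 6))   # secondary task Jacobian (e.g. posture)
dx1 = rng.standard_normal(2)
dx2 = rng.standard_normal(3)

dq = priority_solve(J1, dx1, J2, dx2)
print(np.allclose(J1 @ dq, dx1))   # True: the primary task is met exactly
```

Since `J1 @ N1 = 0`, the secondary contribution is invisible to the primary task, which is exactly the "does not perturb higher-priority tasks" property the abstract describes.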
APA, Harvard, Vancouver, ISO, and other styles
50

Saut, Jean-Philippe. "Planification de Mouvement Pour la Manipulation Dextre d'Objets Rigides." PhD thesis, Université Pierre et Marie Curie - Paris VI, 2007. http://tel.archives-ouvertes.fr/tel-00715477.

Full text
Abstract:
This thesis addresses the planning of manipulation tasks performed by a robotic hand. The goal is to design a system that automatically computes the trajectories that the fingers and the manipulated object must follow to move from a given initial configuration to a given final one. The method proposed in this thesis relies on an original formulation of the planning problem, based on a study of the connectivity of the grasp configuration spaces. These spaces are explored by means of probabilistic roadmaps. In particular, a roadmap is built to explore GSn, the space of n-finger grasp configurations, n being the number of fingers of the hand. The edges of this roadmap are linear paths in GSn. Using such paths avoids computing grasp-reconfiguration motions and thus reduces the computation time and memory required to build the roadmap. These paths are not kinematically feasible, since the object pose and the contact positions cannot change independently, but their use is made possible by generalizing the reduction property introduced by Alami et al. The regrasping motions that do need to be computed explicitly during roadmap construction are handled in a step that merges the connected components of the roadmap. These merges are performed with elementary paths that respect the kinematics of coordinated manipulation, called "regrasp paths" and "transfer paths". Once the initial and final configurations belong to the same connected component of the roadmap, the paths in GSn are decomposed into a sequence of kinematically feasible object-displacement and grasp-reconfiguration motions (transfer and regrasp paths).
To guarantee the stability of the constructed paths, a grasp stability criterion (force closure) is checked along the paths during their construction. To validate this approach, a simulation platform was developed and used to plan various dexterous manipulation tasks with a four-fingered hand. The planner offers very good performance in terms of computation time and solved complex problems for which no results of comparable difficulty had previously been reported. The proposed method applies to any type of hand, whatever its number of fingers, but since it explores only GSn and GS{n-1}, it may miss solutions if the robotic hand and the finger-object contact model allow grasps with a different number of fingers. To remedy this, we proposed a slightly different method, applied to a five-fingered hand, which builds a roadmap to explore each of the five connected components of GS4 using linear paths in that space, and attempts to merge the different roadmaps using linear paths in GS5 or transfer-regrasp paths (in GS3). Finally, a variant of the proposed method was developed to take into account the rolling of the contact surfaces on each other during object manipulation. The necessary modifications, concerning the representation of grasps and the computation of transfer paths, are presented in detail.
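The roadmap construction and component-merging machinery described above can be caricatured in a generic 2-D configuration space (not the thesis's GSn grasp spaces): sample collision-free configurations, connect nearby ones with collision-checked straight-line edges, and track connected components with a union-find structure so a query reduces to a component test. The obstacle, sample count, and neighbor count below are illustrative assumptions.

```python
import numpy as np

def collision_free(q, obstacle=(0.5, 0.5), radius=0.2):
    """A configuration is valid if it lies outside a disc obstacle."""
    return np.linalg.norm(np.asarray(q) - np.asarray(obstacle)) > radius

def edge_free(a, b, n=20):
    """Validate a straight-line edge by sampling intermediate configurations."""
    return all(collision_free(a + t * (b - a)) for t in np.linspace(0.0, 1.0, n))

def build_prm(n_samples=200, k=8, seed=1):
    rng = np.random.default_rng(seed)
    nodes = [q for q in rng.random((n_samples, 2)) if collision_free(q)]
    parent = list(range(len(nodes)))

    def find(i):                          # union-find over roadmap components
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, q in enumerate(nodes):
        dists = [np.linalg.norm(q - p) for p in nodes]
        for j in np.argsort(dists)[1:k + 1]:          # k nearest neighbours
            j = int(j)
            if find(i) != find(j) and edge_free(q, nodes[j]):
                parent[find(j)] = find(i)             # merge connected components
    return nodes, find

nodes, find = build_prm()
# A query succeeds iff the nodes nearest to start and goal share a component.
start, goal = np.array([0.05, 0.05]), np.array([0.95, 0.95])
i = int(np.argmin([np.linalg.norm(start - p) for p in nodes]))
j = int(np.argmin([np.linalg.norm(goal - p) for p in nodes]))
print(find(i) == find(j))
```

In the thesis the analogous roles are played by linear paths in GSn (edges) and by transfer/regrasp paths (the component merges); the reduction from "find a path" to "are the two configurations in the same connected component" is the same.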
APA, Harvard, Vancouver, ISO, and other styles
