
Dissertations / Theses on the topic 'Cloud Robotics'



Consult the top 50 dissertations / theses for your research on the topic 'Cloud Robotics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Forsman, Mona. "Point cloud densification." Thesis, Umeå universitet, Institutionen för fysik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-39980.

Full text
Abstract:
Several automatic methods exist for creating 3D point clouds extracted from 2D photos. In many cases, the result is a sparse point cloud, unevenly distributed over the scene. After determining the coordinates of the same point in two images of an object, the 3D position of that point can be calculated using knowledge of camera data and relative orientation. A model created from an unevenly distributed point cloud may lose detail and precision in the sparse areas. The aim of this thesis is to study methods for densification of point clouds. This thesis contains a literature study of different methods for extracting matched point pairs, and an implementation of Least Square Template Matching (LSTM) with a set of improvement techniques. The implementation is evaluated on a set of different scenes of varying difficulty. LSTM is implemented by working on a dense grid of points in an image, and Wallis filtering is used to enhance contrast. The matched point correspondences are evaluated with parameters from the optimization in order to keep good matches and discard bad ones. The purpose is to find details close to a plane in the images, or on plane-like surfaces. A set of extensions to LSTM is implemented with the aim of improving the quality of the matched points. The seed points are improved by Transformed Normalized Cross Correlation (TNCC) and Multiple Seed Points (MSP) for the same template, which are then tested to see if they converge to the same result. Wallis filtering is used to increase the contrast in the image. The quality of the extracted points is evaluated with respect to correlation with other optimization parameters and comparison of the standard deviation in the x- and y-directions. If a point is rejected, there is an option to try again with a larger template size, called Adaptive Template Size (ATS).
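The matching step at the heart of this abstract can be illustrated with a small sketch. The snippet below is not the author's implementation: it only shows plain normalized cross-correlation matching of fixed-size templates around grid seed points, the simpler criterion that the thesis refines with least-squares optimisation and Wallis filtering. Image shapes, seed spacing, and thresholds are illustrative assumptions.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_seed_points(img_left, img_right, spacing=20, half=7, search=10, min_ncc=0.8):
    """Brute-force NCC matching of a dense grid of seed points.

    For every seed on a regular grid in the left image, the best correlating
    patch within a small search window in the right image is kept if its
    score exceeds `min_ncc` (an illustrative stand-in for least-squares
    template matching).
    """
    matches = []
    h, w = img_left.shape
    for y in range(half + search, h - half - search, spacing):
        for x in range(half + search, w - half - search, spacing):
            template = img_left[y - half:y + half + 1, x - half:x + half + 1]
            best_score, best_xy = -1.0, None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    patch = img_right[yy - half:yy + half + 1, xx - half:xx + half + 1]
                    score = ncc(template, patch)
                    if score > best_score:
                        best_score, best_xy = score, (xx, yy)
            if best_score >= min_ncc:
                matches.append(((x, y), best_xy, best_score))
    return matches
```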
APA, Harvard, Vancouver, ISO, and other styles
2

Bruse, Andreas. "Exploiting Cloud Resources For Semantic Scene Understanding On Mobile Robots." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169116.

Full text
Abstract:
Modern day mobile robots are constrained in the resources available to them. Only so much hardware can be fit onto the robotic frame, and at the same time they are required to perform tasks that require lots of computational resources, access to massive amounts of data, and the ability to share knowledge with other robots around them. This thesis explores the cloud robotics approach, in which complex computations can be offloaded to a cloud service that can have a huge amount of computational resources and access to massive data sets. The Robot Operating System, ROS, is extended to allow the robot to communicate with a high-powered cluster, and this system is used to test our approach on a task as complex as semantic scene understanding. The benefits of the cloud approach are utilized to connect to a cloud-based object detection system and to build a categorization system relying on large-scale datasets and a parallel computation model. Finally, a method is proposed for building a consistent scene description by exploiting semantic relationships between objects.
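For orientation, the offloading pattern described here can be reduced to a thin client on the robot and a heavy model in the cloud. The sketch below is not the cluster interface built in the thesis; it posts an image to a hypothetical HTTP detection endpoint (`https://example.org/detect` is a placeholder) using only the standard library.

```python
import base64
import json
import urllib.request

CLOUD_DETECT_URL = "https://example.org/detect"  # hypothetical endpoint, not from the thesis

def offload_detection(image_bytes, timeout=5.0):
    """Send a camera frame to a remote detection service and return its answer.

    Sketch of the offloading pattern only: the robot keeps a thin client,
    while the heavy object-detection model runs in the cloud.
    """
    payload = json.dumps({"image": base64.b64encode(image_bytes).decode("ascii")}).encode()
    request = urllib.request.Request(
        CLOUD_DETECT_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=timeout) as response:
        return json.loads(response.read().decode())

# Usage (assuming the endpoint returns a JSON list of detections):
# with open("frame.png", "rb") as f:
#     detections = offload_detection(f.read())
```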
APA, Harvard, Vancouver, ISO, and other styles
3

Liu, Yuwei. "OpenMP based Action Entropy Active Sensing in Cloud Computing." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1584809369789769.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bhal, Siddharth. "Fog computing for robotics system with adaptive task allocation." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78723.

Full text
Abstract:
The evolution of cloud computing has finally started to affect robotics. Indeed, there have been several real-time cloud applications making their way into robotics as of late. Inherent benefits of cloud robotics include providing virtually infinite computational power and enabling collaboration among a multitude of connected devices. However, its drawbacks include higher latency and overall higher energy consumption. Moreover, local devices in proximity incur higher latency when communicating among themselves via the cloud. At the same time, the cloud is a single point of failure in the network. Fog computing is an extension of the cloud computing paradigm that provides data, compute, storage, and application services to end users on a so-called edge layer. Its distinguishing characteristics are support for mobility and dense geographical distribution. We propose to study the implications of applying fog computing concepts in robotics by developing a middleware solution, the Robotic Fog Computing Cluster, for enabling adaptive distributed computation in heterogeneous multi-robot systems interacting with the Internet of Things (IoT). The developed middleware has a modular plug-in architecture based on micro-services and facilitates communication of IoT devices with the multi-robot systems. In addition, the developed middleware supports different load-balancing and task-allocation algorithms. In particular, we establish that we can enhance the performance of the distributed system by decreasing overall system latency, using established multi-criteria decision-making algorithms like TOPSIS and TODIM together with naive Q-learning and with neural-network-based Q-learning.
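TOPSIS is named above as one of the multi-criteria decision-making algorithms used for task allocation. The sketch below is a generic textbook TOPSIS ranking in NumPy, not the middleware from the thesis; the criteria matrix, weights, and cost/benefit flags are made-up illustrations of choosing where to run a task.

```python
import numpy as np

def topsis_rank(criteria, weights, benefit):
    """Rank alternatives with TOPSIS.

    criteria : (n_alternatives, n_criteria) matrix of raw scores.
    weights  : per-criterion weights, summing to 1.
    benefit  : boolean per criterion, True if larger is better.
    Returns closeness scores in [0, 1]; higher is better.
    """
    criteria = np.asarray(criteria, dtype=float)
    norm = criteria / np.linalg.norm(criteria, axis=0)      # vector normalisation
    weighted = norm * np.asarray(weights, dtype=float)

    benefit = np.asarray(benefit, dtype=bool)
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti_ideal = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))

    d_best = np.linalg.norm(weighted - ideal, axis=1)
    d_worst = np.linalg.norm(weighted - anti_ideal, axis=1)
    return d_worst / (d_best + d_worst)

# Example: choose where to run a task given (latency ms, CPU load, battery %).
# Latency and load are costs, battery is a benefit -- all values are invented.
nodes = [[40.0, 0.7, 80.0],    # local robot
         [120.0, 0.2, 100.0],  # cloud
         [60.0, 0.4, 100.0]]   # fog node
scores = topsis_rank(nodes, weights=[0.5, 0.3, 0.2], benefit=[False, False, True])
print(scores.argmax())  # index of the preferred execution target
```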
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
5

Nagrath, Vineet. "Software architectures for cloud robotics : the 5 view Hyperactive Transaction Meta-Model (HTM5)." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS005/document.

Full text
Abstract:
Software development for cloud connected robotic systems is a complex software engineering endeavour. These systems are often an amalgamation of one or more robotic platforms, standalone computers, mobile devices, server banks, virtual machines, cameras, network elements and ambient intelligence. An agent oriented approach represents robots and other auxiliary systems as agents in the system. Software development for distributed and diverse systems like cloud robotic systems requires special software modelling processes and tools. Model driven software development for such complex systems will increase flexibility, reusability, cost effectiveness and overall quality of the end product. The proposed 5-view meta-model has separate meta-models for specifying structure, relationships, trade, system behaviour and hyperactivity in a cloud robotic system. The thesis describes the anatomy of the 5-view Hyperactive Transaction Meta-Model (HTM5) in computation independent, platform independent and platform specific layers. The thesis also describes a domain specific language for computation independent modelling in HTM5. The thesis presents a complete meta-model for agent oriented cloud robotic systems and several simulated and real experiment-projects justifying HTM5 as a feasible meta-model.
APA, Harvard, Vancouver, ISO, and other styles
6

Toris, Russell C. "Spatial and Temporal Learning in Robotic Pick-and-Place Domains via Demonstrations and Observations." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-dissertations/135.

Full text
Abstract:
Traditional methods for Learning from Demonstration require users to train the robot through the entire process, or to provide feedback throughout a given task. These previous methods have proved to be successful in a selection of robotic domains; however, many are limited by the ability of the user to effectively demonstrate the task. In many cases, noisy demonstrations or a failure to understand the underlying model prevent these methods from working with a wider range of non-expert users. My insight is that in many mobile pick-and-place domains, teaching is done at too fine-grained a level. In many such tasks, users are solely concerned with the end goal. This implies that the complexity and time associated with training and teaching robots through the entirety of the task is unnecessary. The robotic agent needs to know (1) a probable search location to retrieve the task's objects and (2) how to arrange the items to complete the task. This thesis work develops new techniques for obtaining such data from high-level spatial and temporal observations and demonstrations which can later be applied in new, unseen environments. This thesis makes the following contributions: (1) This work is built on a crowd robotics platform and, as such, we contribute the development of efficient data streaming techniques to further these capabilities. By doing so, users can more easily interact with robots on a number of platforms. (2) The presentation of new algorithms that can learn pick-and-place tasks from a large corpus of goal templates. My work contributes algorithms that produce a metric which ranks the appropriate frame of reference for each item based solely on spatial demonstrations. (3) An algorithm which can enhance the above templates with ordering constraints using coarse and noisy temporal information. Such a method eliminates the need for a user to explicitly specify such constraints and searches for an optimal ordering and placement of items. (4) A novel algorithm which is able to learn probable search locations of objects based solely on sparsely made temporal observations. For this, we introduce persistence models of objects customized to a user's environment.
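One of the listed contributions is a metric that ranks candidate frames of reference for an item from spatial demonstrations alone. The following sketch shows one plausible variance-based ranking under assumed inputs; it is a simplification, not the algorithm from the dissertation, and the demonstration values are invented.

```python
import numpy as np

def rank_reference_frames(placements_by_frame):
    """Rank candidate reference frames for an item from demonstrations.

    placements_by_frame maps a frame name to an (n_demos, 2) array of the
    item's (x, y) position expressed in that frame.  A frame in which the
    demonstrated placements cluster tightly (low total variance) better
    explains where the item 'belongs'.
    """
    scores = {}
    for frame, positions in placements_by_frame.items():
        positions = np.asarray(positions, dtype=float)
        scores[frame] = float(positions.var(axis=0).sum())
    return sorted(scores, key=scores.get)  # best (lowest variance) first

# Toy demonstrations: a fork placed relative to the plate vs. the table origin.
demos = {
    "plate": [[-0.12, 0.00], [-0.11, 0.01], [-0.13, -0.01]],
    "table": [[0.45, 0.80], [0.90, 0.20], [0.10, 0.55]],
}
print(rank_reference_frames(demos))  # ['plate', 'table']
```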
APA, Harvard, Vancouver, ISO, and other styles
7

Trowbridge, Michael Aaron. "Autonomous 3D Model Generation of Orbital Debris using Point Cloud Sensors." Thesis, University of Colorado at Boulder, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1558774.

Full text
Abstract:

A software prototype for autonomous 3D scanning of uncooperatively rotating orbital debris using a point cloud sensor is designed and tested. The software successfully generated 3D models under conditions that simulate some on-orbit challenges, including relative motion between observer and target, inconsistent target visibility and a target with more than one plane of symmetry. The model scanning software performed well against an irregular object with one plane of symmetry but was weak against objects with two planes of symmetry.

The suitability of point cloud sensors and algorithms for space is examined. Terrestrial Graph SLAM is adapted for an uncooperatively rotating orbital debris scanning scenario. A joint EKF attitude estimate and shape similarity loop closure heuristic for orbital debris is derived and experimentally tested. The binary Extended Fast Point Feature Histogram (EFPFH) is defined and analyzed as a binary quantization of the floating-point EFPFH. Both the binary and floating-point EFPFH are experimentally tested and compared as part of the joint loop closure heuristic.
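The binary EFPFH is described as a binary quantization of the floating-point histogram. As a rough illustration of that idea only (the actual EFPFH construction is in the thesis), the snippet below thresholds each histogram bin against the mean bin value to produce a compact bit string that can be compared with Hamming distance.

```python
import numpy as np

def binarize_histogram(hist):
    """Quantize a floating-point feature histogram to bits (1 = above-average bin)."""
    hist = np.asarray(hist, dtype=float)
    return (hist > hist.mean()).astype(np.uint8)

def hamming_distance(bits_a, bits_b):
    """Dissimilarity between two binary descriptors."""
    return int(np.count_nonzero(bits_a != bits_b))

# Two made-up 8-bin descriptors for neighbouring scans of the same surface patch.
a = binarize_histogram([0.02, 0.40, 0.31, 0.05, 0.01, 0.12, 0.06, 0.03])
b = binarize_histogram([0.03, 0.38, 0.29, 0.07, 0.02, 0.11, 0.07, 0.03])
print(hamming_distance(a, b))  # small distance -> likely the same local shape
```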

APA, Harvard, Vancouver, ISO, and other styles
8

He, Linbo. "Improving 3D Point Cloud Segmentation Using Multimodal Fusion of Projected 2D Imagery Data." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157705.

Full text
Abstract:
Semantic segmentation is a key approach to comprehensive image data analysis. It can be applied to analyze 2D images, videos, and even point clouds that contain 3D data points. On the first two problems, CNNs have achieved remarkable progress, but on point cloud segmentation, the results are less satisfactory due to challenges such as limited memory resources and difficulties in 3D point annotation. One of the research studies carried out by the Computer Vision Lab at Linköping University aimed to ease the semantic segmentation of 3D point clouds. The idea is that by first projecting 3D data points to 2D space and then focusing only on the analysis of 2D images, we can reduce the overall workload for the segmentation process as well as exploit the existing well-developed 2D semantic segmentation techniques. In order to improve the performance of CNNs for 2D semantic segmentation, the study used input data derived from different modalities. However, how different modalities can be optimally fused is still an open question. Based on the above-mentioned study, this thesis aims to improve the multistream framework architecture. More concretely, we investigate how different singlestream architectures impact the multistream framework with a given fusion method, and how different fusion methods contribute to the overall performance of a given multistream framework. As a result, our proposed fusion architecture outperformed all the investigated traditional fusion methods. Along with the best singlestream candidate and a few additional training techniques, our final proposed multistream framework obtained a relative gain of 7.3% mIoU compared to the baseline on the Semantic3D point cloud test set, increasing the ranking from 12th to 5th position on the benchmark leaderboard.
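The underlying projection step, mapping 3D points into 2D image space so that 2D CNNs can be reused, can be sketched with a pinhole camera model. The intrinsics below are placeholders, and this shows only the geometric idea, not the multistream fusion network studied in the thesis.

```python
import numpy as np

def project_points(points_xyz, fx, fy, cx, cy):
    """Project 3D points (camera frame, z forward) to 2D pixel coordinates.

    Returns (n, 2) pixel coordinates and a mask of points in front of the camera.
    """
    points_xyz = np.asarray(points_xyz, dtype=float)
    in_front = points_xyz[:, 2] > 1e-6
    z = np.where(in_front, points_xyz[:, 2], 1.0)  # avoid division by zero for masked points
    u = fx * points_xyz[:, 0] / z + cx
    v = fy * points_xyz[:, 1] / z + cy
    return np.stack([u, v], axis=1), in_front

# Placeholder intrinsics for a 640x480 camera.
pixels, valid = project_points([[0.5, 0.1, 2.0], [-0.3, 0.2, 1.5]],
                               fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(pixels[valid])
```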
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Chen. "Connectivity, Security and Integration for Cloud Manufacturing." Thesis, KTH, Industriell produktion, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-226522.

Full text
Abstract:
This master thesis project aims to connect an industrial robot to a cloud platform and evaluate the connectivity and security. To realize better connectivity, security and integration, a modified Cloud Manufacturing System (CRS) architecture is proposed, which is characterized by high modularity, standardization and composability. The architecture's specific applications in private, public and hybrid clouds are discussed as well. Then, a system architecture with detailed software composition is designed for Cloud Robotics. Based on the proposed system architecture, possible security threat sources and corresponding solutions are presented. During the project, a Universal Robot 5 (UR5) is utilized as a practical robot instance to develop a communication routine between the KTH Cloud and robots. An Application Program Interface (API) written in Python for Universal Robots and the server is established. The API consists of two modularized parts, the Gateway Agent and the Application Package. The Gateway Agent realizes the connection between the Universal Robot 5 (UR5) and the cloud, while the Application Package can be customized according to specific applications and requirements. In this project, three main functions are developed in the Application Package, including data acquisition, data visualization and remote control. Besides, to evaluate connectivity and stability, a private robotics cloud system and a public robotics cloud system are simulated with the KTH Cloud. The hybrid robotics cloud system is discussed as well. Through the results of case studies, the connectivity and integration of the Cloud Manufacturing System are verified.
APA, Harvard, Vancouver, ISO, and other styles
10

Chleborad, Aaron A. "Grasping unknown novel objects from single view using octant analysis." Thesis, Manhattan, Kan. : Kansas State University, 2010. http://hdl.handle.net/2097/4089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Uhlíř, Jan. "Kalibrace robotického pracoviště." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403205.

Full text
Abstract:
This work is concerned with the issue of calibrating a robotic workplace, including the localization of a calibration object for the purpose of calibrating a 2D or 3D camera, a robotic arm and the scene of the robotic workplace. First, the problems related to the calibration of the aforementioned elements were studied. Then, an analysis of suitable methods for performing these calibrations was carried out. The result of this work is an application for the ROS robotic system providing methods for three different types of calibration programs, whose functionality is experimentally verified at the end of this work.
APA, Harvard, Vancouver, ISO, and other styles
12

Yousif, Robert. "A Practical Approach of an Internet of Robotic Things Platform." Thesis, KTH, Mekatronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-244412.

Full text
Abstract:
This thesis aims to design and develop a platform based on a novel concept, the Internet of Robotic Things (IoRT), constructed from a robotic platform, an Internet of Things (IoT) platform and cloud computing services. A robotic platform enables hardware abstraction, facilitating the management of input/output between software, mechanical devices and electronic systems. The IoT platform is a global network enabling a massive number of devices, known as things, to communicate with each other and transfer data over the Internet. Cloud computing is a shared pool of scalable hardware usually provisioned as cloud services by third-party cloud vendors. The integration of these concepts constitutes the core of the IoRT platform, a global infrastructure facilitating robots to interconnect over the Internet using common communication technology. Moreover, the pool of cloud resources shared by the connected robots enables scalable storage and processing power. The IoRT platform developed in this study consists firstly of the Amazon Web Services (AWS) IoT Core serving as the IoT platform. Secondly, it incorporates the Robot Operating System (ROS) as the robotic platform, and thirdly the cloud services Amazon DynamoDB and AWS Lambda for data storage and data processing respectively. The platform was evaluated in terms of delays, utilization and visualization capabilities. The platform demonstrates promising results in terms of delays exchanging small packages of data: round-trip delays on the order of 50-60 ms were obtained between a robot placed in Stockholm and the communication platform AWS IoT placed in Dublin, Ireland. Most of the delay is due to the travelling distance, where a round-trip ping between Stockholm and Dublin takes around 50 ms. The platform's ability to visualize streaming data from the robots enables an operator to visualize selected data from any service in the platform over the Internet in near real time, with round-trip delays on the order of 250-300 ms where the data propagates through multiple cloud services. In conclusion, this report illustrates the feasibility of merging two major platforms, ROS and AWS IoT, and moreover the accessibility to exploit the power and potential enabled by modern data centers.
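The latency figures above come from timing message round trips between the robot and the IoT platform. A minimal way to reproduce that kind of measurement against a generic MQTT broker is sketched below, using the paho-mqtt 1.x client API; the broker address and topic are placeholders, not the AWS IoT endpoint or the platform built in the thesis.

```python
import time
import paho.mqtt.client as mqtt

BROKER = "test.mosquitto.org"   # placeholder broker, not the AWS IoT endpoint from the thesis
TOPIC = "iort/latency-test"

round_trips = []

def on_message(client, userdata, message):
    # The payload carries the send timestamp, so receipt time gives the round trip.
    sent_at = float(message.payload.decode())
    round_trips.append((time.time() - sent_at) * 1000.0)  # milliseconds

client = mqtt.Client()  # paho-mqtt 1.x style; newer versions also take a callback API version
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.subscribe(TOPIC)
client.loop_start()

for _ in range(10):
    client.publish(TOPIC, str(time.time()))
    time.sleep(0.5)

client.loop_stop()
client.disconnect()
if round_trips:
    print(f"mean round trip: {sum(round_trips) / len(round_trips):.1f} ms")
```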
APA, Harvard, Vancouver, ISO, and other styles
13

Dorotovič, Viktor. "Detekce pohyblivých objektů v prostředí mobilního robota." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363889.

Full text
Abstract:
This work's aim is movement detection in the environment of a robot that may itself be moving. A 2D occupancy grid representation is used, containing only the currently visible environment, without filtering in time. Motion detection is based on a grid-based particle filter introduced by Tanzmeister et al. in Grid-based Mapping and Tracking in Dynamic Environments using a Uniform Evidential Environment Representation. The system was implemented in the Robot Operating System, which allows re-use of the modules the solution is composed of. The KITTI Visual Odometry dataset was chosen as a source of LiDAR data for experiments, along with ground-truth pose information. Ground segmentation based on Loopy Belief Propagation was used to filter the point clouds. The implemented motion detector is able to distinguish between static and dynamic vehicles in this dataset. Further tests in a simulated environment have shown some shortcomings in the detection of large continuous moving objects.
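The grid-based particle filter itself is too large to reproduce here; as a much simpler stand-in for the same question ("which occupied cells are moving?"), the sketch below compares two consecutive binary occupancy grids assumed to be expressed in the same world frame, i.e. with ego-motion already compensated using the pose information.

```python
import numpy as np

def classify_cells(prev_grid, curr_grid):
    """Label occupied cells of the current grid as 'static' or 'dynamic'.

    prev_grid, curr_grid : boolean occupancy grids in a common world frame.
    A currently occupied cell that was also occupied before is treated as
    static; one that was previously observed free is treated as dynamic.
    (Naive frame differencing -- the thesis instead uses a grid-based
    particle filter over an evidential representation.)
    """
    prev_grid = np.asarray(prev_grid, dtype=bool)
    curr_grid = np.asarray(curr_grid, dtype=bool)
    static = curr_grid & prev_grid
    dynamic = curr_grid & ~prev_grid
    return static, dynamic

prev = np.zeros((5, 5), dtype=bool); prev[2, 1] = True            # object at column 1
curr = np.zeros((5, 5), dtype=bool); curr[2, 2] = True; curr[0, 0] = True
static, dynamic = classify_cells(prev, curr)
print(np.argwhere(dynamic))  # cells newly occupied this frame
```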
APA, Harvard, Vancouver, ISO, and other styles
14

Feydt, Austin Pack. "A Higher-Fidelity Approach to Bridging the Simulation-Reality Gap for 3-D Object Classification." Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1558355175360648.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Tosello, Elisa. "Cognitive Task Planning for Smart Industrial Robots." Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3421918.

Full text
Abstract:
This research work presents a novel Cognitive Task Planning framework for Smart Industrial Robots. The framework makes an industrial mobile manipulator robot cognitive by applying Semantic Web technologies. It also introduces a novel Navigation Among Movable Obstacles (NAMO) algorithm for robots navigating and manipulating inside a firm. The objective of Industrie 4.0 is the creation of Smart Factories: modular firms provided with cyber-physical systems able to strongly customize products under the condition of highly flexible mass production. Such systems should communicate and cooperate with each other and with humans in real time via the Internet of Things. They should intelligently adapt to changing surroundings and autonomously navigate inside a firm while moving obstacles that occlude free paths, even if seen for the first time. Finally, in order to accomplish all these tasks efficiently, they should learn from their own actions and from those of other agents. Most existing industrial mobile robots navigate along pre-generated trajectories. They follow electrified wires embedded in the ground or lines painted on the floor. When there is no expectation of environment changes and cycle times are critical, this planning is functional. When workspaces and tasks change frequently, it is better to plan dynamically: robots should autonomously navigate without relying on modifications of their environments. Consider human behavior: humans reason about the environment and consider the possibility of moving obstacles if a certain goal cannot be reached or if moving objects may significantly shorten the path to it. This problem is named Navigation Among Movable Obstacles and is mostly known in rescue robotics. This work transposes the problem to an industrial scenario and addresses its two challenges: the high dimensionality of the state space and the treatment of uncertainty. The proposed NAMO algorithm aims to focus exploration on less explored areas. For this reason it extends the Kinodynamic Motion Planning by Interior-Exterior Cell Exploration algorithm. The extension does not impose obstacle avoidance: it assigns an importance to each cell by combining the effort necessary to reach it and that needed to free it from obstacles. The obtained algorithm is scalable because of its independence from the size of the map and from the number, shape, and pose of obstacles. It does not impose restrictions on the actions to be performed: the robot can both push and grasp every object. Currently, the algorithm assumes full world knowledge, but the environment is reconfigurable and the algorithm can easily be extended to solve NAMO problems in unknown environments. The algorithm handles sensor feedback and corrects uncertainties. Robotics usually separates motion planning and manipulation problems. NAMO forces their combined treatment by introducing the need to manipulate multiple, often unknown, objects while navigating. Adopting standard precomputed grasps is not sufficient to deal with the large number of different existing objects. A Semantic Knowledge Framework is proposed in support of the algorithm, giving robots the ability to learn to manipulate objects and disseminate the information gained during the fulfillment of tasks. The Framework is composed of an Ontology and an Engine. The Ontology extends the IEEE Standard Ontologies for Robotics and Automation and contains descriptions of learned manipulation tasks and detected objects.
It is accessible from any robot connected to the Cloud. It can be considered a data store for the efficient and reliable execution of repetitive tasks, and a Web-based repository for the exchange of information between robots and for speeding up the learning phase. No other manipulation ontology exists that respects the IEEE Standard and, regardless of the standard, the proposed ontology differs from the existing ones in the type of features saved and the efficient way in which they can be accessed: through a very fast Cascade Hashing algorithm. The Engine computes and stores the manipulation actions when they are not present in the Ontology. It is based on Reinforcement Learning techniques that avoid massive training on large-scale databases and favor human-robot interaction. The overall system is flexible and easily adaptable to different robots operating in different industrial environments. It is characterized by a modular structure where each software block is completely reusable. Every block is based on the open-source Robot Operating System. Not all industrial robot controllers are designed to be ROS-compliant. This thesis presents the method adopted during this research in order to open industrial robot controllers and create a ROS-Industrial interface for them.
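The abstract states that each cell's importance combines the effort to reach it with the effort to clear it of obstacles. A toy version of that scoring on a 2D grid is sketched below; the actual algorithm extends KPIECE and is far richer, and the linear weighting used here is purely an assumption for illustration.

```python
import numpy as np

def cell_importance(reach_cost, clear_cost, alpha=0.5):
    """Combine reaching effort and clearing effort into one exploration score.

    reach_cost : grid of travel costs from the robot to each cell.
    clear_cost : grid of estimated costs to free each cell of movable obstacles
                 (0 for already free cells).
    Lower scores mark cells worth expanding first.  The linear blend with
    `alpha` is an illustrative choice, not the weighting from the thesis.
    """
    reach = np.asarray(reach_cost, dtype=float)
    clear = np.asarray(clear_cost, dtype=float)
    return alpha * reach + (1.0 - alpha) * clear

reach = np.array([[0.0, 1.0, 2.0],
                  [1.0, 2.0, 3.0]])
clear = np.array([[0.0, 0.0, 4.0],      # a movable box sits in the top-right cell
                  [0.0, 0.0, 0.0]])
scores = cell_importance(reach, clear)
print(np.unravel_index(scores.argmin(), scores.shape))  # cheapest cell to expand next
```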
APA, Harvard, Vancouver, ISO, and other styles
16

Smith, Michael. "Non-parametric workspace modelling for mobile robots using push broom lasers." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:50224eb9-73e8-4c8a-b8c5-18360d11e21b.

Full text
Abstract:
This thesis is about the intelligent compression of large 3D point cloud datasets. The non-parametric method that we describe simultaneously generates a continuous representation of the workspace surfaces from discrete laser samples and decimates the dataset, retaining only locally salient samples. Our framework attains decimation factors in excess of two orders of magnitude without significant degradation in fidelity. The work presented here has a specific focus on gathering and processing laser measurements taken from a moving platform in outdoor workspaces. We introduce a somewhat unusual parameterisation of the problem and look to Gaussian Processes as the fundamental machinery in our processing pipeline. Our system compresses laser data in a fashion that is naturally sympathetic to the underlying structure and complexity of the workspace. In geometrically complex areas, compression is lower than that in geometrically bland areas. We focus on this property in detail and it leads us well beyond a simple application of non-parametric techniques. Indeed, towards the end of the thesis we develop a non-stationary GP framework whereby our regression model adapts to the local workspace complexity. Throughout we construct our algorithms so that they may be efficiently implemented. In addition, we present a detailed analysis of the proposed system and investigate model parameters, metric errors and data compression rates. Finally, we note that this work is predicated on a substantial amount of robotics engineering which has allowed us to produce a high quality, peer reviewed, dataset - the first of its kind.
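The regression machinery described above can be illustrated with an off-the-shelf Gaussian Process in one dimension: fit a continuous profile to sparse, noisy range samples and query it anywhere. This uses scikit-learn with a stationary kernel rather than the thesis's own non-stationary framework, and all numbers are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic 1D "push-broom" profile: sparse range readings along the direction of travel.
rng = np.random.default_rng(0)
x_samples = rng.uniform(0.0, 10.0, size=40).reshape(-1, 1)
range_samples = np.sin(x_samples).ravel() + rng.normal(0.0, 0.05, size=40)

# A stationary RBF kernel with a noise term; the thesis goes further with a
# non-stationary model that adapts to local workspace complexity.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05**2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x_samples, range_samples)

# The continuous representation can now be queried densely, or thinned by
# keeping only samples where the predictive uncertainty is high (locally salient ones).
x_query = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
mean, std = gp.predict(x_query, return_std=True)
print(mean.shape, std.max())
```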
APA, Harvard, Vancouver, ISO, and other styles
17

Montelli, Francesco. "Design and Implementation of a Data Platform for Stream Analysis: WeLASER as a Case Study." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24928/.

Full text
Abstract:
Precision agriculture is a management strategy for agricultural activities based on data-driven decisions. This enables smarter usage of the available resources (e.g., water and crops) and ensures higher productivity. Following the integration of precision farming with the internet of things, big data, and artificial intelligence, we are witnessing the rise of "Agriculture 5.0". In this context, WeLASER is a European project that aims to create a system for managing weeding tasks through the adoption of robots equipped with laser technology that recognizes and burns weeds; this avoids the usage of chemical pesticides that can cause environmental damage. Such an application involves the joint usage of robotic agents and data from IoT devices (e.g., weather stations) to perform effective weeding tasks. The goal of this thesis is to design, create, and test a data platform that enables the interoperability of IoT devices and robotic agents as well as data-intensive analytics on streaming data. Such a data platform provides unified interfaces to collect, integrate, and analyze real-time data as well as to manage historical data.
APA, Harvard, Vancouver, ISO, and other styles
18

Bizhuta, Ermal, and Dhespina Carhoshi. "Applicability Study of Software Architectures in the Discrete Manufacturing Domain." Thesis, Mälardalens högskola, Inbyggda system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44705.

Full text
Abstract:
Manufacturing, under the umbrella of the latest industrial revolution, has gone through enormous changes in the last decades, later evolving into what we now know as smart manufacturing. Different companies and entities have developed their own versions of architectures for intelligent and digitalized manufacturing systems. Ideating a flexible and safe architecture is one of the first steps towards a system that is intended to be applicable in different environments, regardless of the vast variety of possibilities available. For this purpose, the following thesis presents an investigation of the state-of-the-art solutions among the most recent digitalized cloud-based system architectures in the domain of discrete manufacturing. Based on an initial system architecture conceived by the company ABB, an evaluation of this architecture was conducted, taking into consideration the existing systematic approaches to the digitalization of this industry. In this thesis work, we investigate, describe and evaluate the limitations and strengths of the most recent and well-known architectural approaches to cloud robotics. Finally, a few key remarks are made regarding ABB's initial solution but also the industry in general.
PADME
APA, Harvard, Vancouver, ISO, and other styles
19

Nyman, Jonas. "Faster Environment Modelling and Integration into Virtual Reality Simulations." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19800.

Full text
Abstract:
The use of virtual reality in engineering tasks, such as in virtual commissioning, has increased steadily in recent years, where a robot, machine or object of interest can be simulated and visualized. Yet, for a more immersive experience, an environment for the object in question needs to be constructed. However, the process of creating an accurate environment for a virtual simulation has remained a costly and lengthy endeavour. Because of this, many digital simulations are performed either with no environment at all, or with a very basic and abstract representation of the intended environment. The aim of this thesis is to investigate whether technologies such as LiDAR and digital photogrammetry could shorten the environment creation process. Therefore, a demonstrative virtual environment was created and analysed, in which the different technologies were investigated and presented in the form of a comprehensive review of the current state of the technologies within digital recreation. Lastly, a technique-specific evaluation of the time requirement, cost and user difficulty was conducted. As the field of LiDAR and digital photogrammetry is too vast to investigate all forms thereof within one project, this thesis is limited to the investigation of static laser scanners and wide-lens camera photogrammetry. A semi-industrial locale was chosen for digital replication, which through static laser scans and photographs would generate semi-automated 3D models. The resulting 3D models leave much to be desired, as large holes were present throughout, since certain surfaces are not suitable for either replication process. Transparent and reflective surfaces lead to ripple effects within the 3D models' geometry and textures. Moreover, certain surfaces, such as blank areas for photogrammetry or black coloration for laser scanners, led to missing features and model distortions. Yet despite these abnormalities, the majority of the test environment was successfully re-created. An evaluation of the created environments was performed, which lists and illustrates with tables and figures the attributes, strengths and weaknesses of each technique. Moreover, technique-specific limitations and a spatial analysis were examined. The results seemingly illustrate that photogrammetry creates more visually accurate 3D models in comparison to the laser scanner, yet the laser scanner produces a more spatially accurate result. As such, a selective combination of the techniques can be suggested. Observations and interviews seem to indicate that a full-scale application, in which an accurate 3D model is re-created without much effort, does not currently exist, as both photogrammetry and static laser scanning require great effort, skill and time in order to create a seemingly perfect solid model. Yet, utilizing either or both techniques as a template for 3D object creation could reduce the time to create an environment significantly. Furthermore, methods such as digital 3D sculpting could be used to remove imperfections and create what is missing from the digitally constructed 3D models, thereby achieving an accurate result.
APA, Harvard, Vancouver, ISO, and other styles
20

Konradsson, Albin, and Gustav Bohman. "3D Instance Segmentation of Cluttered Scenes : A Comparative Study of 3D Data Representations." Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177598.

Full text
Abstract:
This thesis provides a comparison between instance segmentation methods using point clouds and depth images. Specifically, their performance on cluttered scenes of irregular objects in an industrial environment is investigated. Recent work by Wang et al. [1] has suggested potential benefits of a point cloud representation when performing deep learning on data from 3D cameras. However, little work has been done to enable quantifiable comparisons between methods based on different representations, particularly on industrial data. Generating synthetic data provides accurate grayscale, depth map, and point cloud representations for a large number of scenes and can thus be used to compare methods regardless of datatype. The datasets in this work are created using a tool provided by SICK. They simulate postal packages on a conveyor belt scanned by a LiDAR, closely resembling a common industry application. Two datasets are generated. One dataset has low complexity, containing only boxes. The other has higher complexity, containing a combination of boxes and multiple types of irregularly shaped parcels. State-of-the-art instance segmentation methods are selected based on their performance on existing benchmarks. We chose PointGroup by Jiang et al. [2], which uses point clouds, and Mask R-CNN by He et al. [3], which uses images. The results support that there may be benefits of using a point cloud representation over depth images. PointGroup performs better in terms of the chosen metric on both datasets. On low complexity scenes, the inference times are similar between the two methods tested. However, on higher complexity scenes, Mask R-CNN is significantly faster.
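The two data representations compared above are related by a simple back-projection: each depth pixel becomes a 3D point through the camera intrinsics. A small sketch of that conversion is given below with placeholder intrinsics; the segmentation networks themselves are out of scope here.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) to an (n, 3) point cloud in the camera frame.

    Pixels with zero depth (no return) are dropped.
    """
    depth = np.asarray(depth, dtype=float)
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Tiny synthetic depth map with placeholder intrinsics.
depth = np.zeros((4, 4)); depth[1:3, 1:3] = 1.2
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (4, 3)
```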
APA, Harvard, Vancouver, ISO, and other styles
21

Lef, Annette. "CAD-Based Pose Estimation - Algorithm Investigation." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157776.

Full text
Abstract:
One fundamental task in robotics is random bin-picking, where it is important to be able to detect an object in a bin and estimate its pose to plan the motion of a robotic arm. For this purpose, this thesis work aimed to investigate and evaluate algorithms for 6D pose estimation when the object was given by a CAD model. The scene was given by a point cloud illustrating a partial 3D view of the bin with multiple instances of the object. Two algorithms were thus implemented and evaluated. The first algorithm was an approach based on Point Pair Features, and the second was Fast Global Registration. For evaluation, four different CAD models were used to create synthetic data with ground truth annotations. It was concluded that the Point Pair Feature approach provided a robust localization of objects and can be used for bin-picking. The algorithm appears to be able to handle different types of objects, however, with small limitations when the object has flat surfaces and weak texture or many similar details. The disadvantage with the algorithm was the execution time. Fast Global Registration, on the other hand, did not provide a robust localization of objects and is thus not a good solution for bin-picking.
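The first evaluated method builds on Point Pair Features. For orientation, the standard four-component feature of an oriented point pair (a distance plus three angles) can be computed as below; the full matching pipeline with hashing and voting is not shown, and this is a generic formulation rather than the thesis code.

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """Classic 4D point pair feature F = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)).

    p1, p2 : 3D points; n1, n2 : their unit surface normals.
    """
    p1, n1, p2, n2 = (np.asarray(v, dtype=float) for v in (p1, n1, p2, n2))
    d = p2 - p1
    dist = np.linalg.norm(d)
    d_unit = d / dist if dist > 0 else d

    def angle(a, b):
        return float(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

    return np.array([dist, angle(n1, d_unit), angle(n2, d_unit), angle(n1, n2)])

f = point_pair_feature([0, 0, 0], [0, 0, 1], [0.1, 0, 0], [0, 0, 1])
print(np.round(f, 3))  # [0.1, pi/2, pi/2, 0.0]
```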
APA, Harvard, Vancouver, ISO, and other styles
22

Schubert, Stefan. "Optimierter Einsatz eines 3D-Laserscanners zur Point-Cloud-basierten Kartierung und Lokalisierung im In- und Outdoorbereich." Master's thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-161415.

Full text
Abstract:
The mapping and localization of a mobile robot in its environment is an important prerequisite for its autonomy. This work investigates the use of a 3D laser scanner to fulfil these tasks. Through the optimized arrangement of a rotating 2D laser scanner, high-resolution regions are specified. In addition, mapping and localization at standstill are performed with the help of ICP. In the discussion of improving the motion estimate, an approach for localization during motion with 3D scans is also presented. The presented algorithms are evaluated through experiments with real hardware.
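The mapping and localization described above rest on ICP alignment of scans. A bare-bones point-to-point ICP iteration (nearest neighbours plus an SVD-based rigid fit) is sketched below for orientation; it ignores the scanner arrangement and all robustness details handled in the thesis, and the toy data is synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, iterations=20):
    """Point-to-point ICP: align `source` to `target`, both (n, 3) arrays."""
    source, target = np.asarray(source, float), np.asarray(target, float)
    tree = cKDTree(target)
    current = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(current)            # nearest target point for every source point
        R, t = best_rigid_transform(current, target[idx])
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Toy check: recover a small known translation.
pts = np.random.default_rng(1).normal(size=(200, 3))
R_est, t_est = icp(pts, pts + np.array([0.1, 0.0, 0.05]))
print(np.round(t_est, 3))
```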
APA, Harvard, Vancouver, ISO, and other styles
23

Al, Hakim Ezeddin. "3D YOLO: End-to-End 3D Object Detection Using Point Clouds." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234242.

Full text
Abstract:
For safe and reliable driving, it is essential that an autonomous vehicle can accurately perceive the surrounding environment. Modern sensor technologies used for perception, such as LiDAR and RADAR, deliver a large set of 3D measurement points known as a point cloud. There is a huge need to interpret point cloud data to detect other road users, such as vehicles and pedestrians. Many research studies have proposed image-based models for 2D object detection. This thesis takes it a step further and aims to develop a LiDAR-based 3D object detection model that operates in real-time, with emphasis on autonomous driving scenarios. We propose 3D YOLO, an extension of YOLO (You Only Look Once), which is one of the fastest state-of-the-art 2D object detectors for images. The proposed model takes point cloud data as input and outputs 3D bounding boxes with class scores in real-time. Most of the existing 3D object detectors use hand-crafted features, while our model follows the end-to-end learning fashion, which removes manual feature engineering. The 3D YOLO pipeline consists of two networks: (a) the Feature Learning Network, an artificial neural network that transforms the input point cloud to a new feature space; (b) 3DNet, a novel convolutional neural network architecture based on YOLO that learns the shape description of the objects. Our experiments on the KITTI dataset show that 3D YOLO has high accuracy and outperforms the state-of-the-art LiDAR-based models in efficiency. This makes it a suitable candidate for deployment in autonomous vehicles.
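Feature learning networks of this kind typically start by grouping the raw points into voxels. A minimal voxelization step is sketched below as a generic illustration, not the 3D YOLO code; the voxel size is an arbitrary assumption.

```python
import numpy as np
from collections import defaultdict

def voxelize(points, voxel_size=0.2):
    """Group an (n, 3) point cloud into voxels of edge length `voxel_size` (metres).

    Returns a dict mapping integer voxel indices (i, j, k) to the points inside,
    which is the form a per-voxel feature-learning network would consume.
    """
    points = np.asarray(points, dtype=float)
    indices = np.floor(points / voxel_size).astype(int)
    voxels = defaultdict(list)
    for idx, pt in zip(map(tuple, indices), points):
        voxels[idx].append(pt)
    return {idx: np.array(pts) for idx, pts in voxels.items()}

cloud = np.random.default_rng(0).uniform(0.0, 1.0, size=(1000, 3))
voxels = voxelize(cloud, voxel_size=0.25)
print(len(voxels), "occupied voxels")
```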
APA, Harvard, Vancouver, ISO, and other styles
24

Stålberg, Martin. "Reconstruction of trees from 3D point clouds." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-316833.

Full text
Abstract:
The geometrical structure of a tree can consist of thousands, even millions, of branches, twigs and leaves in complex arrangements. The structure contains a lot of useful information and can be used for example to assess a tree's health or calculate parameters such as total wood volume or branch size distribution. Because of the complexity, capturing the structure of an entire tree used to be nearly impossible, but the increased availability and quality of particularly digital cameras and Light Detection and Ranging (LIDAR) instruments is making it increasingly possible. A set of digital images of a tree, or a point cloud of a tree from a LIDAR scan, contains a lot of data, but the information about the tree structure has to be extracted from this data through analysis. This work presents a method of reconstructing 3D models of trees from point clouds. The model is constructed from cylindrical segments which are added one by one. Bayesian inference is used to determine how to optimize the parameters of model segment candidates and whether or not to accept them as part of the model. A Hough transform for finding cylinders in point clouds is presented, and used as a heuristic to guide the proposals of model segment candidates. Previous related works have mainly focused on high density point clouds of sparse trees, whereas the objective of this work was to analyze low resolution point clouds of dense almond trees. The method is evaluated on artificial and real datasets and works rather well on high quality data, but performs poorly on low resolution data with gaps and occlusions.
APA, Harvard, Vancouver, ISO, and other styles
25

Serra, Sabina. "Deep Learning for Semantic Segmentation of 3D Point Clouds from an Airborne LiDAR." Thesis, Linköpings universitet, Datorseende, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-168367.

Full text
Abstract:
Light Detection and Ranging (LiDAR) sensors have many different application areas, from revealing archaeological structures to aiding navigation of vehicles. However, it is challenging to interpret and fully use the vast amount of unstructured data that LiDARs collect. Automatic classification of LiDAR data would ease the utilization, whether it is for examining structures or aiding vehicles. In recent years, there have been many advances in deep learning for semantic segmentation of automotive LiDAR data, but there is less research on aerial LiDAR data. This thesis investigates the current state-of-the-art deep learning architectures, and how well they perform on LiDAR data acquired by an Unmanned Aerial Vehicle (UAV). It also investigates different training techniques for class imbalanced and limited datasets, which are common challenges for semantic segmentation networks. Lastly, this thesis investigates if pre-training can improve the performance of the models. The LiDAR scans were first projected to range images and then a fully convolutional semantic segmentation network was used. Three different training techniques were evaluated: weighted sampling, data augmentation, and grouping of classes. No improvement was observed by the weighted sampling, neither did grouping of classes have a substantial effect on the performance. Pre-training on the large public dataset SemanticKITTI resulted in a small performance improvement, but the data augmentation seemed to have the largest positive impact. The mIoU of the best model, which was trained with data augmentation, was 63.7% and it performed very well on the classes Ground, Vegetation, and Vehicle. The other classes in the UAV dataset, Person and Structure, had very little data and were challenging for most models to classify correctly. In general, the models trained on UAV data performed similarly as the state-of-the-art models trained on automotive data.
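Projecting a LiDAR scan to a range image, the preprocessing step mentioned above, amounts to binning each point by its azimuth and elevation. A compact sketch follows; the image resolution and field of view are placeholder values, not those of the UAV sensor in the thesis.

```python
import numpy as np

def spherical_projection(points, width=512, height=32, fov_up_deg=15.0, fov_down_deg=-15.0):
    """Project an (n, 3) LiDAR cloud to a (height, width) range image.

    Each point is placed by azimuth (column) and elevation (row); the pixel
    stores the range, keeping the closest return if several points collide.
    """
    points = np.asarray(points, dtype=float)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ranges = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                       # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(ranges, 1e-9), -1.0, 1.0))

    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    u = ((yaw + np.pi) / (2.0 * np.pi) * width).astype(int) % width
    v = np.clip(((fov_up - pitch) / (fov_up - fov_down) * height).astype(int), 0, height - 1)

    image = np.full((height, width), np.inf)
    for col, row, r in zip(u, v, ranges):
        image[row, col] = min(image[row, col], r)  # keep the nearest return per pixel
    image[np.isinf(image)] = 0.0                   # empty pixels marked with 0
    return image

cloud = np.random.default_rng(0).normal(size=(5000, 3)) * [10.0, 10.0, 1.0]
print(spherical_projection(cloud).shape)  # (32, 512)
```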
APA, Harvard, Vancouver, ISO, and other styles
26

Staniaszek, Michal. "Feature-Feature Matching For Object Retrieval in Point Clouds." Thesis, KTH, Datorseende och robotik, CVAP, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-170475.

Full text
Abstract:
In this project, we implement a system for retrieving instances of objects from point clouds using feature based matching techniques. The target dataset of point clouds consists of approximately 80 full scans of office rooms over a period of one month. The raw clouds are reprocessed to remove regions which are unlikely to contain objects. Using locations determined by one of several possible interest point selection methods, one of a number of descriptors is extracted from the processed clouds. Descriptors from a target cloud are compared to those from a query object using a nearest neighbour approach. The nearest neighbours of each descriptor in the query cloud are used to vote for the position of the object in a 3D grid overlaid on the room cloud. We apply clustering in the voting space and rank the clusters according to the number of votes they contain. The centroid of each of the clusters is used to extract a region from the target cloud which, in the ideal case, corresponds to the query object. We perform an experimental evaluation of the system using various parameter settings in order to investigate factors affecting the usability of the system, and the efficacy of the system in retrieving correct objects. In the best case, we retrieve approximately 50% of the matching objects in the dataset. In the worst case, we retrieve only 10%. We find that the best approach is to use a uniform sampling over the room clouds, and to use a descriptor which factors in both colour and shape information to describe points.
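A minimal sketch of the voting stage described above is given below (the grid resolution, descriptor dimensionality and all names are illustrative assumptions): each query descriptor is matched to its nearest neighbour in the target cloud and casts a vote for the object position in a coarse 3D grid over the room.

```python
# Illustrative sketch (not the thesis implementation): descriptor
# nearest-neighbour matching followed by voting in a 3D grid.
import numpy as np
from scipy.spatial import cKDTree

def vote_for_object(scene_keypoints, scene_descriptors,
                    query_keypoints, query_descriptors,
                    query_centroid, room_min, room_max, cell=0.25):
    """Return the centre of the grid cell that received the most votes."""
    tree = cKDTree(scene_descriptors)
    _, idx = tree.query(query_descriptors, k=1)
    # Each match votes for where the object centroid would be in the scene.
    offsets = query_keypoints - query_centroid
    votes = scene_keypoints[idx] - offsets
    shape = np.ceil((room_max - room_min) / cell).astype(int)
    grid = np.zeros(shape, dtype=int)
    cells = np.floor((votes - room_min) / cell).astype(int)
    inside = np.all((cells >= 0) & (cells < shape), axis=1)
    for c in cells[inside]:
        grid[tuple(c)] += 1
    best = np.unravel_index(np.argmax(grid), grid.shape)
    return room_min + (np.array(best) + 0.5) * cell

# Toy usage with random descriptors standing in for real shape/colour features.
rng = np.random.default_rng(2)
scene_kp = rng.uniform(0, 5, (300, 3)); scene_desc = rng.normal(size=(300, 32))
query_kp = rng.uniform(0, 1, (40, 3));  query_desc = rng.normal(size=(40, 32))
print(vote_for_object(scene_kp, scene_desc, query_kp, query_desc,
                      query_kp.mean(axis=0), np.zeros(3), np.full(3, 5.0)))
```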
APA, Harvard, Vancouver, ISO, and other styles
27

Hamraz, Hamid. "AUTOMATED TREE-LEVEL FOREST QUANTIFICATION USING AIRBORNE LIDAR." UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/69.

Full text
Abstract:
Traditional forest management relies on a small field sample and interpretation of aerial photography that not only are costly to execute but also yield inaccurate estimates of the entire forest in question. Airborne light detection and ranging (LiDAR) is a remote sensing technology that records point clouds representing the 3D structure of a forest canopy and the terrain underneath. We present a method for segmenting individual trees from the LiDAR point clouds without making prior assumptions about tree crown shapes and sizes. We then present a method that vertically stratifies the point cloud to an overstory and multiple understory tree canopy layers. Using the stratification method, we modeled the occlusion of higher canopy layers with respect to point density. We also present a distributed computing approach that enables processing the massive data of an arbitrarily large forest. Lastly, we investigated using deep learning for coniferous/deciduous classification of point cloud segments representing individual tree crowns. We applied the developed methods to the University of Kentucky Robinson Forest, a natural, majorly deciduous, closed-canopy forest. 90% of overstory and 47% of understory trees were detected with false positive rates of 14% and 2% respectively. Vertical stratification improved the detection rate of understory trees to 67% at the cost of increasing their false positive rate to 12%. According to our occlusion model, a point density of about 170 pt/m² is needed to segment understory trees located in the third layer as accurately as overstory trees. Using our distributed processing method, we segmented about two million trees within a 7400-ha forest in 2.5 hours using 192 processing cores, showing a speedup of ~170. Our deep learning experiments showed high classification accuracies (~82% coniferous and ~90% deciduous) without the need to manually assemble the features. In conclusion, the methods developed are steps forward to remote, accurate quantification of large natural forests at the individual tree level.
APA, Harvard, Vancouver, ISO, and other styles
28

Törnblom, Nils. "Underwater 3D Surface Scanning using Structured Light." Thesis, Uppsala universitet, Centrum för bildanalys, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-138205.

Full text
Abstract:
In this thesis project, an underwater 3D scanner based on structured light has been constructed and developed. Two other scanners, based on stereoscopy and a line-swept laser, were also tested. The target application is to examine objects inside the water filled reactor vessel of nuclear power plants. Structured light systems (SLS) use a projector to illuminate the surface of the scanned object, and a camera to capture the surface's reflection. By projecting a series of specific line-patterns, the pixel columns of the digital projector can be identified off the scanned surface. 3D points can then be triangulated using ray-plane intersection. These points form the basis of the final 3D model. To construct an accurate 3D model of the scanned surface, both the projector and the camera need to be calibrated. In the implemented 3D scanner, this was done using the Camera Calibration Toolbox for Matlab. The codebase of this scanner comes from the Matlab implementation by Lanman & Taubin at Brown University. The code has been modified and extended to meet the needs of this project. An examination of the effects of the underwater environment has been performed, both theoretically and experimentally. The performance of the scanner has been analyzed, and different 3D model visualization methods have been tested. In the constructed scanner, a small pico projector was used together with a high pixel count DSLR camera. Because these are both consumer-level products, the cost of this system is just a fraction of that of commercial counterparts, which use professional components. Yet, thanks to the use of a high pixel count camera, the measurement resolution of the scanner is comparable to the high-end of industrial structured light scanners.
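The triangulation step can be illustrated with a few lines of Python (the intrinsics, plane parameters and names are made up for the example; this is not the Lanman & Taubin code): the decoded projector column defines a plane in space, the camera pixel defines a ray, and their intersection yields the 3D point.

```python
# Minimal sketch of ray-plane intersection for structured light triangulation.
import numpy as np

def triangulate(pixel, K_cam, plane_n, plane_d):
    """Intersect the camera ray through `pixel` with the projector plane
    n . X + d = 0, both expressed in the camera frame."""
    u, v = pixel
    # Back-project the pixel to a ray direction (camera centre is the origin).
    ray = np.linalg.inv(K_cam) @ np.array([u, v, 1.0])
    t = -plane_d / (plane_n @ ray)          # solve n . (t * ray) + d = 0
    return t * ray

# Toy usage: an assumed 3x3 intrinsic matrix and a plane roughly facing the camera.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
n = np.array([0.5, 0.0, -0.2]); d = 0.4
print(triangulate((350, 260), K, n, d))
```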
APA, Harvard, Vancouver, ISO, and other styles
29

Gasslander, Maja. "Segmentation of Clouds in Satellite Images." Thesis, Linköpings universitet, Datorseende, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-128802.

Full text
Abstract:
The usage of 3D modelling is increasing fast, both for civilian and military areas, such as navigation, targeting and urban planning. When creating a 3D model from satellite images, clouds can be problematic. Thus, automatic detection of clouds in the images is of great use. This master thesis was carried out at Vricon, who produces 3D models of the earth from satellite images. This thesis aimed to investigate if Support Vector Machines could classify pixels into cloud or non-cloud, with a combination of texture and color as features. To solve the stated goal, the task was divided into several subproblems, where the first part was to extract features from the images. Then the images were preprocessed before being fed to the classifier. After that, the classifier was trained, and finally evaluated. The two methods that gave the best results in this thesis had approximately 95 % correctly classified pixels. This result is better than the existing cloud segmentation method at Vricon, for the tested terrain and cloud types.
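A hedged sketch of such a classification setup is shown below (the feature choice, patch size and SVM parameters are assumptions, and the data is synthetic): the colour of a pixel plus a simple local texture measure is fed to an SVM that labels it as cloud or non-cloud.

```python
# Hedged sketch: SVM pixel classification from colour + texture features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def pixel_features(rgb_patch):
    """Colour of the centre pixel plus the per-channel standard deviation
    over the patch (a crude texture cue)."""
    centre = rgb_patch[rgb_patch.shape[0] // 2, rgb_patch.shape[1] // 2]
    texture = rgb_patch.reshape(-1, 3).std(axis=0)
    return np.concatenate([centre, texture])

# Toy training data: bright low-texture patches as clouds, the rest as ground.
rng = np.random.default_rng(3)
cloud = rng.normal(220, 10, (200, 7, 7, 3)).clip(0, 255)
ground = rng.normal(90, 40, (200, 7, 7, 3)).clip(0, 255)
X = np.array([pixel_features(p) for p in np.concatenate([cloud, ground])])
y = np.array([1] * 200 + [0] * 200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.score(X, y))   # training accuracy on the toy data
```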
APA, Harvard, Vancouver, ISO, and other styles
30

Biain, Galdos Ander, and Oiarbide Iñaki Ordoki. "Cloud-based monitor and control of industrial robots." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19871.

Full text
Abstract:
Nowadays, interconnectivity is becoming increasingly important. According to industry, the Industrial Internet of Things (IIoT) is the core technology of Industry 4.0. Connectivity between different processes or components in the industry makes each process aware of the others, making systems more intelligent and self-sufficient. This thesis proposes various ways to control and monitor an industrial process with the cloud, explaining step by step the connections between the industrial equipment and the cloud. Furthermore, one sub-objective of this thesis is to make an industrial process that was previously PLC-based entirely cloud-based. For the latter work, simulation software is used to create a production line, as there is a lack of equipment. Consequently, this also tests the flexibility of the system to move between the virtual and the real environment, which is another challenge of Industry 4.0 and digitalisation. In the discussion part, the thesis addresses the shift from PLC-based to PC-based systems in the industry, mentioning the limitations and advantages.
APA, Harvard, Vancouver, ISO, and other styles
31

FONTANA, SIMONE. "Robust Point Clouds Registration." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2017. http://hdl.handle.net/10281/180707.

Full text
Abstract:
Point cloud registration is a very well studied problem, with many different and efficient solutions. Nevertheless, the approaches in the literature rely heavily on a good initialization and on a good set of parameters. These approaches can be roughly divided into two categories: those based on features and the so-called closest-point-based. The first category aims at aligning two point clouds by first detecting some salient points, the keypoints, and calculating their descriptors so that they can be compared, in the same way it is usually done with 2D images. On the other hand, the latter category approximates correspondences by iteratively choosing the closest point, without the need for any kind of feature. The most important algorithm in this category is Iterative Closest Point (ICP). Most other algorithms are variants of ICP, as is one of the proposed approaches. In this work we introduce two novel solutions to point cloud registration. The first one is a variant of ICP, with a different data association, derived from a probabilistic model. The experiments show that it is very effective at aligning a sparse point cloud with a dense one, one of the issues we had to face during this work. On the other hand, it also showed very good results on standard alignment problems, often better than other popular state-of-the-art algorithms. We show that, for the most common approaches, the quality of the result is heavily dependent on some parameters that therefore need to be carefully calibrated before the algorithms can be used in real applications. Moreover, a new calibration is usually required when facing a new scenario. For these reasons we propose this innovative technique, which aims, besides being capable of aligning two generic point clouds independently of their density, at being more robust with respect to wrong parameter sets. The second technique we developed is a global point cloud registration algorithm. ICP-like techniques require, in order to converge to the right solution, an initial estimate of the transformation between the two point clouds. Without a proper initial guess, the algorithm will probably remain stuck in a local minimum. On the other hand, feature-based techniques do not require any initial estimate but are not applicable to sparse point clouds, because these do not contain enough information to extract meaningful descriptors. The approach we developed combines the advantages of both approaches. It is based on a soft-computing technique, Particle Swarm Optimization, which is known for being able to escape from local optima. The result is an algorithm capable of aligning any kind of point cloud, without the need for any initial estimate of the transformation.
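For orientation, the sketch below shows one iteration of the classical closest-point scheme that the proposed variants build on: hard nearest-neighbour association followed by a Kabsch/SVD pose update. It does not implement the probabilistic association or the Particle Swarm Optimization search from the thesis; all names are illustrative.

```python
# Simplified sketch of one classical ICP iteration.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """Associate each source point with its closest target point, then
    compute the best-fit rotation R and translation t (Kabsch/SVD)."""
    tree = cKDTree(target)
    _, idx = tree.query(source, k=1)
    matched = target[idx]
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Toy usage: recover a small known rotation and translation.
rng = np.random.default_rng(4)
target = rng.uniform(-1, 1, (500, 3))
angle = np.radians(5)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
source = target @ R_true.T + np.array([0.1, 0.05, 0.0])
for _ in range(20):
    R, t = icp_step(source, target)
    source = source @ R.T + t
print(np.abs(source - target).max())   # residual should shrink towards zero
```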
APA, Harvard, Vancouver, ISO, and other styles
32

Tosteberg, Patrik. "Semantic Segmentation of Point Clouds Using Deep Learning." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-136793.

Full text
Abstract:
In computer vision, it has in recent years become more popular to use point clouds to represent 3D data. To understand what a point cloud contains, methods like semantic segmentation can be used. Semantic segmentation is the problem of segmenting images or point clouds and understanding what the different segments are. One application for semantic segmentation of point clouds is autonomous driving, where the car needs information about objects in its surroundings. Our approach to the problem is to project the point clouds into 2D virtual images using the Katz projection. Then we use pre-trained convolutional neural networks to semantically segment the images. To get the semantically segmented point clouds, we project the scores from the segmentation back into the point cloud. Our approach is evaluated on the Semantic3D dataset. We find our method is comparable to the state of the art, without any fine-tuning on the Semantic3D dataset.
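The "project back" step can be illustrated as follows (the pinhole camera model, intrinsics and names are assumptions, and random scores stand in for the CNN output): each visible 3D point inherits the class scores of the pixel it projects to in the virtual view.

```python
# Small sketch of back-projecting per-pixel class scores onto 3D points.
import numpy as np

def backproject_labels(points, K, scores):
    """points: (N,3) in the virtual camera frame (z > 0);
    K: 3x3 intrinsics; scores: (H, W, C) per-pixel class scores.
    Returns per-point class indices (-1 for points projecting outside)."""
    h, w, _ = scores.shape
    uvw = (K @ points.T).T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    labels = np.full(len(points), -1, dtype=int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (points[:, 2] > 0)
    labels[inside] = scores[v[inside], u[inside]].argmax(axis=1)
    return labels

# Toy usage with random scores standing in for the segmentation network output.
K = np.array([[500.0, 0, 160], [0, 500.0, 120], [0, 0, 1]])
pts = np.random.default_rng(5).uniform([-1, -1, 2], [1, 1, 6], (1000, 3))
scores = np.random.default_rng(6).random((240, 320, 8))
print(backproject_labels(pts, K, scores)[:10])
```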
APA, Harvard, Vancouver, ISO, and other styles
33

Chitic, Stefan-Gabriel. "Middleware and programming models for multi-robot systems." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI018/document.

Full text
Abstract:
Despite many years of work in robotics, there is still a lack of established software architecture and middleware for multi-robot systems. A robotic middleware should be designed to abstract the low-level hardware architecture and facilitate communication and integration of new software. This PhD thesis focuses on middleware for multi-robot systems and on how we can improve existing frameworks for fleet purposes by adding multi-robot coordination services, development and massive deployment tools. We expect robots to be increasingly useful as they can take advantage of data pushed from other external devices in their decision making instead of just reacting to their local environment (sensors, cooperating robots in a fleet, etc.). This thesis first evaluates one of the most recent middlewares for mobile robots, Robot Operating System (ROS), and continues with a state of the art of the commonly used middlewares in robotics. Based on the conclusions, we propose an original contribution in the multi-robot context, called SDfR (Service Discovery for Robots), a service discovery mechanism for robots. The main goal is to propose a mechanism that allows highly mobile robots to keep track of the reachable peers inside a fleet while using an ad-hoc infrastructure. Another objective is to propose a network configuration negotiation protocol. Due to the mobility of robots, classical peer-to-peer network configuration techniques are not suitable. SDfR is a highly dynamic, adaptive and scalable protocol adapted from the Simple Service Discovery Protocol (SSDP). We conducted a set of experiments, using a fleet of Turtlebot robots, to measure and show that the overhead of SDfR is limited. The last part of the thesis focuses on a programming model based on timed automata. This type of programming has the benefit of having a model that can be verified and simulated before deploying the application on real robots. In order to enrich and facilitate the development of robotic applications, a new programming model based on timed automata state machines is proposed, called ROSMDB (Robot Operating System Model Driven Behaviour). It provides model checking at the development phase and at runtime. This contribution is composed of several components: a graphical interface to create models based on timed automata, an integrated model checker based on UPPAAL and a code skeleton generator. Moreover, a ROS-specific framework is proposed to verify the correctness of the execution of the models and to trigger alerts. Finally, we conduct two experiments: one with a fleet of Parrot drones and a second with Turtlebots, in order to illustrate the proposed model and its ability to check properties.
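As a toy illustration of the announcement mechanism that SSDP-style discovery relies on (this is not the SDfR protocol itself; the multicast group, port and message format are invented for the example), a robot could periodically advertise its identity and services over UDP multicast while its peers listen:

```python
# Toy UDP multicast announce/listen sketch, illustrative only.
import json
import socket
import struct

GROUP, PORT = "239.255.0.1", 5007   # assumed multicast group and port

def announce(robot_id, services):
    """Broadcast one advertisement message on the fleet network."""
    msg = json.dumps({"robot": robot_id, "services": services}).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(msg, (GROUP, PORT))
    sock.close()

def listen_once(timeout=5.0):
    """Wait for one advertisement from any peer, or return None on timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(timeout)
    try:
        data, addr = sock.recvfrom(1024)
        return addr[0], json.loads(data)
    except socket.timeout:
        return None
    finally:
        sock.close()

# A robot would call announce("turtlebot_3", ["map", "battery"]) periodically,
# while its peers run listen_once() in a loop to keep their peer table fresh.
```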
APA, Harvard, Vancouver, ISO, and other styles
34

Pereira, Diego da Silva. "Uma arquitetura de Cloud Robotic baseada em clones para uma equipe de CellBots." Universidade Federal Rural do Semi-Árido, 2016. http://bdtd.ufersa.edu.br:80/tede/handle/tede/666.

Full text
Abstract:
This work describes the implementation of an ad hoc communication architecture for a Cellbots team, used as infrastructure to allow the use of cloud computing. Cellbots are robots with a low financial cost because they use smartphones as a central control unit and require few electronic components for their construction. The goal is to expand the horizons of applications for these devices through an architecture that enables communication between robots, allowing them to work as a team, cooperatively, like a Multi-Robot System (MRS), performing different tasks, such as searching for objects. Another objective of the proposed architecture is to provide the Cellbots with resources from cloud computing, an approach treated in the literature as Cloud Robotics. The expected results are to increase the overall processing capacity of the system, to enable knowledge sharing, and to increase the battery autonomy of the robots by reducing the energy spent on local information processing. A clone-based approach is proposed in order to achieve the cited benefits of the Cloud Robotics technology. The Rapyuta framework was used to instantiate clones and set up their parameters. To provide communication between the robots of the team, the AODV (Ad Hoc On-Demand Distance Vector) protocol was adopted. At the end of the manuscript, some experiments are shown to validate the proposed architecture.
APA, Harvard, Vancouver, ISO, and other styles
35

Ospina, Eslava David Mauricio, and Avendaño Flores Santiago. "Virtual Commissioning of Robotic Cell Using Cloud-based Technologies and Advanced Visualization System." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19793.

Full text
Abstract:
The manufacturing industry is quickly adapting to new technologies. Some of these trending technologies are virtual commissioning, virtual reality, and cloud-based technologies. This project summarizes these three technologies and aims to create a commissioning tool adapted to Industry 4.0. The project's methodology was to analyse a problem and consequently create a solution that solves it. The process of designing and developing was repeated iteratively, with an evaluation made at each iteration. The final product developed has shown that it is worth spending time introducing cloud-based technologies into many applications, since it saves time and allows working remotely. Applying virtual reality to virtual commissioning has proven to add efficiency. At the same time, it gives an immersive experience with a real-time display of quantitative data and of the process itself in a visual mode, without interfering with the actual production. With these two technologies, virtual commissioning evolves and goes a step further. This project also showed that the user experience and interface in this type of immersive application require much attention in order to create a comfortable interface that does not fatigue the user or cause rejection.
APA, Harvard, Vancouver, ISO, and other styles
36

Kulkarni, Amey S. "Motion Segmentation for Autonomous Robots Using 3D Point Cloud Data." Digital WPI, 2020. https://digitalcommons.wpi.edu/etd-theses/1370.

Full text
Abstract:
Achieving robot autonomy is an extremely challenging task and it starts with developing algorithms that help the robot understand how humans perceive the environment around them. Once the robot understands how to make sense of its environment, it is easy to make efficient decisions about safe movement. It is hard for robots to perform tasks that come naturally to humans, like understanding signboards, classifying traffic lights, planning paths around dynamic obstacles, etc. In this work, we take up one such challenge: motion segmentation using Light Detection and Ranging (LiDAR) point clouds. Motion segmentation is the task of classifying a point as either moving or static. As the ego-vehicle moves along the road, it needs to detect moving cars with very high certainty, as they are the areas of interest which provide cues to the ego-vehicle to plan its motion. Motion segmentation algorithms segregate moving cars from static cars to give more importance to dynamic obstacles. In contrast to the usual LiDAR scan representations like range images and regular grids, this work uses a modern representation of LiDAR scans using permutohedral lattices. This representation gives ease of representing unstructured LiDAR points in an efficient lattice structure. We propose a machine learning approach to perform motion segmentation. The network architecture takes in two sequential point clouds and performs convolutions on them to estimate if 3D points from the first point cloud are moving or static. Using two temporal point clouds helps the network learn what features constitute motion. We have trained and tested our learning algorithm on the FlyingThings3D dataset and a modified KITTI dataset with simulated motion.
APA, Harvard, Vancouver, ISO, and other styles
37

Shiel, Michael P. "Multi-level information fusion for environment aware robotic navigation." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/61955/1/Michael_Shiel_Thesis.pdf.

Full text
Abstract:
This thesis develops the hardware and software framework for an integrated navigation system. Dynamic data fusion algorithms are used to develop a system with a high level of resistance to the typical problems that affect standard navigation systems.
APA, Harvard, Vancouver, ISO, and other styles
38

Wiklander, Marcus. "Classification of tree species from 3D point clouds using convolutional neural networks." Thesis, Umeå universitet, Institutionen för fysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-174662.

Full text
Abstract:
In forest management, knowledge about a forest's distribution of tree species is key. Being able to automate tree species classification for large forest areas is of great interest, since doing it manually is tedious and costly labour. In this project, the aim was to investigate the efficiency of classifying individual tree species (pine, spruce and deciduous forest) from 3D point clouds acquired by airborne laser scanning (ALS), using convolutional neural networks. Raw data consisted of 3D point clouds and photographic images of forests in northern Sweden, collected from a helicopter flying at low altitudes. The point cloud of each individual tree was connected to its representation in the photos, which allowed for manual labeling of training data to be used for training of convolutional neural networks. The training data consisted of labels and 2D projections created from the point clouds, represented as images. Two different convolutional neural networks were trained and tested: an adaptation of the LeNet architecture and the ResNet architecture. Both networks reached an accuracy close to 98 %, with the LeNet adaptation having a slightly lower loss score for both validation and test data compared to that of ResNet. Confusion matrices for both networks showed similar F1 scores for all tree species, between 97 % and 98 %. The accuracies computed for both networks were found to be higher than those achieved in similar studies using ALS data to classify individual tree species. However, the results in this project were never tested against a true population sample to confirm the accuracy. To conclude, the use of convolutional neural networks is indeed an efficient method for classification of tree species, but further studies on unbiased data are needed to validate these results.
APA, Harvard, Vancouver, ISO, and other styles
39

Arlotti, Luca. "Studio di fattibilità tecnico economico per l'automazione di un reparto presse tramite l'applicazione di cobot." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16184/.

Full text
Abstract:
This thesis presents applied research on the implementation of a collaborative robot, with the aim of relieving operators from repetitive tasks and using their time to improve the quality of the production and of the final products. After an introductory part describing the key points of Industry 4.0 and the options that the market offers regarding collaborative robots, the specific case of ASA San Marino is considered: the analysis of the production process of the press department focused attention on the tasks that the individual operator is required to perform. The problems highlighted did not concern the whole department, but only a group of 6 machines that operate constantly over several shifts. To solve these problems, a collaborative robotic implementation was proposed, one that could guarantee interaction between human and machine, that would not invade the department layout with safety cages and that, above all, would be quick and easy to install. Starting from the experience gained during the internship in cobot programming, the objective of the thesis is to reach the actual installation of the cobot at the end of the press production process, using it to palletize the products automatically, collaborating with the operator in reaching the common goal. To achieve this, the characteristics of the cobot were first analysed, framing its requirements and defining its limits; the various problems that arose were then solved, and the system was implemented so that the robot could carry out its tasks without the help of other machinery that would require changes to the layout.
APA, Harvard, Vancouver, ISO, and other styles
40

Azhari, Faris. "Automated crack detection and characterisation from 3D point clouds of unstructured surfaces." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/234510/1/Faris_Azhari_Thesis.pdf.

Full text
Abstract:
This thesis proposes a novel automated crack detection and characterisation method for unstructured surfaces using 3D point clouds. Crack detection on unstructured surfaces poses a challenge compared to flat surfaces such as pavements and concrete, which typically utilise image-based sensors. The detection method utilises a point cloud-based deep learning method to perform point-wise classification. The detected points are then automatically characterised to estimate the detected cracks' properties such as width profile, orientation, and length. The proposed method enables the deployment of autonomous systems to conduct reliable surveys in environments risky to humans.
APA, Harvard, Vancouver, ISO, and other styles
41

Forni, Tommaso. "Autonomous Robotic Arm Object Grasping through Consistent Depth Estimation." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
This thesis proposes an innovative method for consistent depth estimation, focusing on a grasping task scenario. Inferring depth information from the environment through sensors such as a Kinect or a stereo camera is difficult for tasks that involve robotic manipulators. Besides the usually high cost of these sensors, it is also challenging to mount them on the manipulator end effector, due to their weight and significant dimensions. Nowadays, a popular solution is monocular depth estimation, whose goal is to predict the depth value of each pixel given only a single RGB image as input, allowing scene geometry to be inferred from 2D images. However, its quality is limited by the ill-posed nature of the problem and the lack of high-quality datasets. Recent state-of-the-art methods, such as Structure-from-Motion and Multi-View Stereo, are able to generate consistent depth values but, differently from monocular depth estimation, they also require a huge number of images to do so. This means more time spent gathering environment information and consequently more time spent performing the overall grasping procedure. The method proposed in this thesis aims to reduce the amount of data needed to predict depth values, without losing consistency and reliability of the predictions. Firstly, a monocular depth estimator is used to predict raw depth values, which are then refined by a second neural network. The core idea is the implementation of a particular autoencoder neural network, whose loss function is computed by a warping procedure based on the predicted raw depth values themselves. Lastly, a salient object detector is used to remove outliers from the refined depth values provided by the autoencoder. In order to test and validate the proposed method, a UR5 robotic manipulator, along with a Kinect camera, has been implemented in the CoppeliaSim simulator.
APA, Harvard, Vancouver, ISO, and other styles
42

Jelínek, Aleš. "Vektorizovaná mračna bodů pro mobilní robotiku." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-364602.

Full text
Abstract:
This doctoral thesis deals with the processing of point clouds from laser scanners by means of vectorization, and with the subsequent search for correspondences between the obtained approximations for the purposes of simultaneous localization and mapping in mobile robotics. The first new method is designed for segmentation and filtering of the raw data and performs both operations at once in a single algorithm. For vectorization, an optimized algorithm based on the total least squares method is presented, which is currently probably the fastest in its class and thus approaches elimination methods, which, however, produce considerably worse approximations. Innovative analytical methods are also introduced for expressing the similarity between two vectorized scans, for their optimal alignment, and for finding correspondences between them. All presented algorithms are intensively tested and their properties verified by a number of experiments.
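The total least squares building block mentioned above can be sketched as follows (solved here with a plain SVD on a 2D segment rather than the optimized incremental formulation developed in the thesis; all names are illustrative): the fitted line minimises orthogonal point-to-line distances.

```python
# Short sketch of a total least squares (orthogonal) line fit in 2D.
import numpy as np

def tls_line(points):
    """Fit a line to (N,2) points. Returns a point on the line, its unit
    direction, and the orthogonal residuals of the points."""
    centroid = points.mean(axis=0)
    centred = points - centroid
    # The line direction is the principal component of the centred points.
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    direction = Vt[0]
    normal = np.array([-direction[1], direction[0]])
    residuals = centred @ normal
    return centroid, direction, residuals

# Toy usage: noisy laser-scan-like points along y = 0.5 x + 1.
rng = np.random.default_rng(7)
x = np.linspace(0, 5, 100)
pts = np.c_[x, 0.5 * x + 1 + rng.normal(0, 0.02, x.size)]
c, d, r = tls_line(pts)
print(d, np.abs(r).max())
```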
APA, Harvard, Vancouver, ISO, and other styles
43

Nordlund, Fredrik Hans. "Enabling Network-Aware Cloud Networked Robots with Robot Operating System : A machine learning-based approach." Thesis, KTH, Radio Systems Laboratory (RS Lab), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-160877.

Full text
Abstract:
In recent years, a new area called Cloud Networked Robotics (CNR) has evolved from conventional robotics, thanks to the increasing availability of cheap robot systems and steady improvements in the area of cloud computing. Cloud networked robots are robots with the ability to offload computation-heavy modules to a cloud, in order to make use of storage, scalable computation power, and other functionalities enabled by a cloud, such as shared knowledge between robots on a global level. However, these cloud robots face a problem with reachability and QoS of crucial modules that are offloaded to the cloud when operating in unstable network environments. Under such conditions, the robots might lose the connection to the cloud at any moment; in the worst case, leaving the robots "brain-dead". This thesis project proposes a machine learning-based, network-aware framework for a cloud robot that can choose the most efficient module placement based on location, task, and the network condition. The proposed solution was implemented upon a cloud robot prototype based on the TurtleBot 2 robot development kit, running Robot Operating System (ROS). A continuous experiment was conducted where the cloud robot was ordered to execute a simple task in the laboratory corridor under various network conditions. The proposed solution was evaluated by comparing the results from the continuous experiment with measurements taken from the same robot, with all modules placed locally, doing the same task. The results show that the proposed framework can potentially decrease the battery consumption by 10% while improving the efficiency of the task by 2.4 seconds (2.8%). However, there is an inherent bottleneck in the proposed solution, in that each new robot would need 2 months to accumulate enough data for the training set in order to show good performance. The proposed solution can potentially benefit the area of CNR if connected and integrated with a shared-knowledge platform, which can enable new robots to skip the training phase by downloading the existing knowledge from the cloud.
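To illustrate the kind of placement decision such a framework has to make (the analytic cost model below is a stand-in assumption for the learned model, and all names and numbers are invented), a module is offloaded only when the predicted cloud execution time under the current network conditions beats the local one:

```python
# Hedged sketch of a network-aware local-vs-cloud placement decision.
from dataclasses import dataclass

@dataclass
class NetworkState:
    bandwidth_mbps: float   # measured uplink bandwidth
    rtt_ms: float           # measured round-trip time

def predicted_cloud_time(data_mb, cloud_compute_s, net: NetworkState):
    """Transfer time plus round trip plus the (faster) remote compute time."""
    transfer_s = 8.0 * data_mb / max(net.bandwidth_mbps, 1e-3)
    return transfer_s + net.rtt_ms / 1000.0 + cloud_compute_s

def choose_placement(local_compute_s, data_mb, cloud_compute_s, net):
    cloud_s = predicted_cloud_time(data_mb, cloud_compute_s, net)
    return "cloud" if cloud_s < local_compute_s else "local"

# With a good link the heavy module is offloaded; on a weak link it stays local.
good = NetworkState(bandwidth_mbps=50.0, rtt_ms=20.0)
weak = NetworkState(bandwidth_mbps=1.0, rtt_ms=300.0)
print(choose_placement(2.0, 5.0, 0.3, good))   # -> cloud
print(choose_placement(2.0, 5.0, 0.3, weak))   # -> local
```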
APA, Harvard, Vancouver, ISO, and other styles
44

Hui, Fei. "Visual Tracking of Deformation and Classification of Object Elasticity with Robotic Hand Probing." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36477.

Full text
Abstract:
Performing tasks with a robotic hand often requires a complete knowledge of the manipulated object, including its properties (shape, rigidity, surface texture) and its location in the environment, in order to ensure safe and efficient manipulation. While well-established procedures exist for the manipulation of rigid objects, as well as several approaches for the manipulation of linear or planar deformable objects such as ropes or fabric, research addressing the characterization of deformable objects occupying a volume remains relatively limited. The fundamental objectives of this research are to track the deformation of non-rigid objects under robotic hand manipulation using RGB-D data, and to automatically classify deformable objects as either rigid, elastic, plastic, or elasto-plastic, based on the material they are made of, and to support recognition of the category of such objects through a robotic probing process in order to enhance manipulation capabilities. The goal is not to attempt to formally model the material of the object, but rather employ a data-driven approach to make decisions based on the observed properties of the object, capture implicitly its deformation behavior, and support adaptive control of a robotic hand for other research in the future. The proposed approach advantageously combines color image and point cloud processing techniques, and proposes a novel combination of the fast level set method with a log-polar mapping of the visual data to robustly detect and track the contour of a deformable object in a RGB-D data stream. Dynamic time warping is employed to characterize the object properties independently from the varying length of the detected contour as the object deforms. The research results demonstrate that a recognition rate over all categories of material of up to 98.3% is achieved based on the detected contour. When integrated in the control loop of a robotic hand, it can contribute to ensure stable grasp, and safe manipulation capability that will preserve the physical integrity of the object.
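A minimal dynamic time warping sketch is given below (a plain O(nm) dynamic program with illustrative names, not the thesis implementation) to show how two contour descriptor sequences of different lengths can be compared:

```python
# Minimal dynamic time warping distance between two sequences.
import numpy as np

def dtw_distance(a, b):
    """a: (n,) or (n,d), b: (m,) or (m,d). Returns the DTW alignment cost."""
    a, b = np.atleast_2d(a).reshape(len(a), -1), np.atleast_2d(b).reshape(len(b), -1)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy usage: the same profile sampled at different resolutions stays close,
# while a different profile does not.
t1, t2 = np.linspace(0, 1, 80), np.linspace(0, 1, 50)
print(dtw_distance(np.sin(2 * np.pi * t1), np.sin(2 * np.pi * t2)))   # small
print(dtw_distance(np.sin(2 * np.pi * t1), np.cos(2 * np.pi * t2)))   # larger
```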
APA, Harvard, Vancouver, ISO, and other styles
45

Dörr, Stefan [Verfasser], and Alexander [Akademischer Betreuer] Verl. "Cloud-based cooperative long-term SLAM for mobile robots in industrial applications / Stefan Dörr ; Betreuer: Alexander Verl." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2020. http://d-nb.info/1223928780/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Pech, Thomas Joel. "A Deep-Learning Approach to Evaluating the Navigability of Off-Road Terrain from 3-D Imaging." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1496377449249936.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Macknojia, Rizwan. "Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23976.

Full text
Abstract:
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that get rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect’s depth sensor capabilities and performance. The study comprises examination of the resolution, quantization error, and random distribution of depth data. In addition, the effects of color and reflectance characteristics of an object are also analyzed. The study examines two versions of Kinect sensors, one dedicated to operate with the Xbox 360 video game console and the more recent Microsoft Kinect for Windows version. The study of the Kinect sensor is extended to the design of a rapid acquisition system dedicated to large workspaces by the linkage of multiple Kinect units to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed which takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds that is within the range of the depth measurements accuracy permitted by the Kinect technology. The method is developed to calibrate all Kinect units with respect to a reference Kinect. The internal calibration of the sensor in between the color and depth measurements is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is also extended to formally estimate its configuration with respect to the base of a manipulator robot, therefore allowing for seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robotic system defines the comprehensive calibration of reference Kinect with the robot. The latter can then be used to interact under visual guidance with large objects, such as vehicles, that are positioned within a significantly enlarged field of view created by the network of RGB-D sensors. The proposed design and calibration method is validated in a real world scenario where five Kinect sensors operate collaboratively to rapidly and accurately reconstruct a 180 degrees coverage of the surface shape of various types of vehicles from a set of individual acquisitions performed in a semi-controlled environment, that is an underground parking garage. The vehicle geometrical properties generated from the acquired 3D data are compared with the original dimensions of the vehicle.
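The chaining of extrinsic calibrations to a reference sensor can be illustrated with simple homogeneous-transform bookkeeping (the target poses, frame names and example numbers below are assumptions; this shows only the composition step, not the full calibration method): once each Kinect's pose with respect to a common target is known, every sensor can be expressed in the reference Kinect's frame.

```python
# Conceptual sketch of chaining sensor-to-target poses into a common frame.
import numpy as np

def make_T(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def sensor_to_reference(T_ref_target, T_k_target):
    """T_ref<-k = T_ref<-target @ inv(T_k<-target)."""
    return T_ref_target @ np.linalg.inv(T_k_target)

def transform_cloud(points, T):
    """Apply a 4x4 transform to an (N,3) point cloud."""
    homog = np.c_[points, np.ones(len(points))]
    return (homog @ T.T)[:, :3]

# Toy usage: the target is 2 m in front of the reference Kinect and 1.5 m in
# front of another Kinect that is rotated 90 degrees about the vertical axis.
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
T_ref_target = make_T(np.eye(3), np.array([0, 0, 2.0]))
T_k_target = make_T(Rz, np.array([0, 0, 1.5]))
T_ref_k = sensor_to_reference(T_ref_target, T_k_target)
cloud_k = np.random.default_rng(8).uniform(-1, 1, (100, 3))
print(transform_cloud(cloud_k, T_ref_k).shape)
```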
APA, Harvard, Vancouver, ISO, and other styles
48

Contreras, Samamé Luis Federico. "SLAM collaboratif dans des environnements extérieurs." Thesis, Ecole centrale de Nantes, 2019. http://www.theses.fr/2019ECDN0012/document.

Full text
Abstract:
This thesis proposes a large-scale mapping model of urban and rural environments using 3D data acquired by several robots. The work contributes in two main ways to the research field of mapping. The first contribution is the creation of a new framework, CoMapping, which allows 3D maps to be generated in a cooperative way. This framework applies to outdoor environments with a decentralized approach. The CoMapping functionality includes the following elements: First of all, each robot builds a map of its environment in point cloud format. To do this, the mapping system was set up on computers dedicated to each vehicle, processing distance measurements from a 3D LiDAR moving in six degrees of freedom (6-DOF). Then, the robots share their local maps and individually merge the point clouds to improve their local map estimation. The second key contribution is the set of metrics that make it possible to analyse the map merging and map sharing processes between robots. We present experimental results to validate the CoMapping framework together with its metrics. All tests were carried out in urban outdoor environments on the campus of the École Centrale de Nantes and its surroundings, as well as in rural areas.
APA, Harvard, Vancouver, ISO, and other styles
49

Sylvan, Andreas. "Internet of Things in Surface Mount TechnologyElectronics Assembly." Thesis, KTH, Medieteknik och interaktionsdesign, MID, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209243.

Full text
Abstract:
Currently, manufacturers in the European Surface Mount Technology (SMT) industry see production changeover, machine downtime and process optimization as their biggest challenges. They also see a need for collecting data and sharing information between machines, people and systems involved in the manufacturing process. Internet of Things (IoT) technology provides an opportunity to make this happen. This research project gives answers to the question of what the potentials and challenges of IoT implementation are in European SMT manufacturing. First, key IoT concepts are introduced. Then, through interviews with experts working in SMT manufacturing, the current standpoint of the SMT industry is defined. The study pinpoints obstacles in SMT IoT implementation and proposes a solution. Firstly, local data collection and sharing needs to be achieved through the use of standardized IoT protocols and APIs. Secondly, because SMT manufacturers do not trust that sensitive data will remain secure in the Cloud, a separation of proprietary data and statistical data is needed in order to take a step further and collect Big Data in a Cloud service. This will allow for new services to be offered by equipment manufacturers.
APA, Harvard, Vancouver, ISO, and other styles
50

Avanthey, Loïca. "Acquisition et reconstruction de données 3D denses sous-marines en eau peu profonde par des robots d'exploration." Thesis, Paris, ENST, 2016. http://www.theses.fr/2016ENST0055/document.

Full text
Abstract:
Our planet is mostly covered by seas and oceans. However, our knowledge of the seabed is far more restricted than that of the land surface. In this thesis, we seek to develop a system dedicated to precise thematic mapping, able to produce on demand a dense point cloud of an underwater area by using three-dimensional reconstruction. The complex nature of this type of system leads us to favor a multidisciplinary approach. We examine in particular the issues raised by studying small shallow water areas at the scale of individual objects. The first set of problems concerns the effective in situ acquisition of stereo pairs with logistics adapted to the sizes of the observed areas: for this, we propose an agile, affordable microsystem which is sufficiently automated to provide reproducible and comparable data. The second set of problems relates to the reliable extraction of three-dimensional information from the acquired data: we outline the algorithms we have developed to take into account the particular characteristics of the aquatic environment (such as its dynamics or its light absorption). We therefore discuss in detail the issues encountered in the underwater environment concerning dense matching, calibration, in situ acquisition, data registration and redundancy.
APA, Harvard, Vancouver, ISO, and other styles