Academic literature on the topic 'Deep learning, convolutional neural networks, classification, object detection'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Deep learning, convolutional neural networks, classification, object detection.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Dissertations / Theses on the topic "Deep learning, convolutional neural networks, classification, object detection"

1. Lidberg, Love. "Object Detection using deep learning and synthetic data." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-150555.

Abstract:
This thesis investigates how synthetic data can be utilized when training convolutional neural networks to detect flags with threatening symbols. The synthetic data used in this thesis consisted of rendered 3D flags with different textures and of flags cut out from real images. Training on synthetic data alone achieved an accuracy above 80%, compared to the 88% accuracy achieved by a data set containing only real images. The highest accuracy was achieved by combining real and synthetic data, showing that synthetic data can be used as a complement to real data. Some attempts to improve the accuracy were made using generative adversarial networks, but without any encouraging results.
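
As an illustration of the idea of complementing real images with synthetic ones, a training set can be built by simply concatenating two image datasets. The sketch below assumes PyTorch/torchvision and hypothetical directory names; it is not taken from the thesis.

```python
# Minimal sketch of mixing real and synthetic training images, assuming
# PyTorch/torchvision; directory names are hypothetical placeholders.
import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# One ImageFolder per source; class subfolders must match in both.
real_data = datasets.ImageFolder("data/real", transform=transform)
synthetic_data = datasets.ImageFolder("data/synthetic", transform=transform)

# Concatenating the two datasets lets the loader sample from both,
# so synthetic images act as a complement to the real ones.
combined = ConcatDataset([real_data, synthetic_data])
loader = DataLoader(combined, batch_size=32, shuffle=True)

for images, labels in loader:
    pass  # forward/backward pass of the detector or classifier goes here
```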

2. Arsenović, Marko. "Detekcija bolesti biljaka tehnikama dubokog učenja [Detection of plant diseases using deep learning techniques]." PhD thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2020. https://www.cris.uns.ac.rs/record.jsf?recordId=114816&source=NDLTD&language=en.

Abstract:
The research presented in this thesis was aimed at developing a novel method based on deep convolutional neural networks for automated plant disease detection from leaf images. The experimental part of the thesis reviews the approaches to automatic plant disease detection available in the literature, as well as the limitations of the resulting models when they are used under natural conditions. A specialized two-phase deep neural network method is introduced that addresses the limitations of state-of-the-art plant disease detection methods and makes practical use of the newly developed model possible. In addition, a new dataset of leaf images was introduced, currently the largest in number of images compared with publicly available datasets, and a GAN-based augmentation approach on leaf images was experimentally confirmed.

3. Hřebíček, Zdeněk. "Klasifikace obrazů s pomocí hlubokého učení [Image classification using deep learning]." Master's thesis, Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-241140.

Abstract:
This thesis deals with image object detection and its classification into classes. Classification is provided by models of the deep learning framework BVLC/Caffe. Object detection is provided by the AlpacaDB/selectivesearch and belltailjp/selective_search_py algorithms. One of the results of this thesis is the modification and usage of the deep convolutional neural network AlexNet in the BVLC/Caffe framework. This model was trained to a precision of 51.75% for classification into 1,000 classes. It was then modified and trained for classification into 20 classes with a precision of 75.50%. A contribution of this thesis is the implementation of a graphical interface for object detection and classification into classes, realized as a web-server-based application in Python. The application integrates the object detection algorithms mentioned above with classification using BVLC/Caffe. The resulting application can be used both for object detection (and classification) and for fast verification of any BVLC/Caffe classification model. The application was published on GitHub under the Apache 2.0 license so that it can be further developed and used.
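
The thesis performs the 1,000-class to 20-class modification in BVLC/Caffe. As a hedged illustration of the same idea, the sketch below replaces AlexNet's final 1,000-way layer with a 20-way head using torchvision as a stand-in framework; the training details are hypothetical.

```python
# Sketch of adapting a pretrained AlexNet from 1000 to 20 output classes.
# The thesis does this in BVLC/Caffe; torchvision is used here as a stand-in.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Replace the final fully connected layer (1000 outputs) with a 20-class head.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 20)

# Freeze the convolutional features and fine-tune only the classifier layers.
for param in model.features.parameters():
    param.requires_grad = False

optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()
```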

4. Norrstig, Andreas. "Visual Object Detection using Convolutional Neural Networks in a Virtual Environment." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-156609.

Abstract:
Visual object detection is a popular computer vision task that has been intensively investigated using deep learning on real data. However, data from virtual environments have not received the same attention. A virtual environment enables generating data for locations that are not easily reachable for data collection, e.g. aerial environments. In this thesis, we study the problem of object detection in virtual environments, more specifically an aerial virtual environment. We use a simulator to generate a synthetic data set of 16 different types of vehicles captured from an airplane. To study the performance of existing methods in virtual environments, we train and evaluate two state-of-the-art detectors on the generated data set. Experiments show that both detectors, You Only Look Once version 3 (YOLOv3) and the Single Shot MultiBox Detector (SSD), reach performance comparable to that previously reported in the literature on real data sets. In addition, we investigate different fusion techniques between detectors trained on two different subsets of the data set, in this case a subset containing cars with fixed colors and a subset containing cars with varying colors. Experiments show that it is possible to train multiple instances of the detector on different subsets of the data set and combine these detectors in order to boost performance.
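
One simple way to combine two detectors trained on different subsets is to pool their predicted boxes and apply non-maximum suppression. The sketch below illustrates that generic idea with torchvision's NMS operator and dummy detections; the fusion techniques actually studied in the thesis may differ.

```python
# Sketch of fusing detections from two detectors via pooled non-maximum
# suppression; the detector outputs here are hypothetical placeholders.
import torch
from torchvision.ops import nms

def fuse_detections(boxes_a, scores_a, boxes_b, scores_b, iou_threshold=0.5):
    """Pool boxes/scores from two detectors and keep the highest-scoring,
    non-overlapping ones. Boxes are (N, 4) tensors in (x1, y1, x2, y2)."""
    boxes = torch.cat([boxes_a, boxes_b], dim=0)
    scores = torch.cat([scores_a, scores_b], dim=0)
    keep = nms(boxes, scores, iou_threshold)
    return boxes[keep], scores[keep]

# Example with dummy detections from "detector A" and "detector B".
boxes_a = torch.tensor([[10., 10., 50., 50.]])
scores_a = torch.tensor([0.9])
boxes_b = torch.tensor([[12., 11., 52., 49.], [100., 100., 150., 150.]])
scores_b = torch.tensor([0.8, 0.7])
fused_boxes, fused_scores = fuse_detections(boxes_a, scores_a, boxes_b, scores_b)
```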

5. Dickens, James. "Depth-Aware Deep Learning Networks for Object Detection and Image Segmentation." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42619.

Abstract:
The rise of convolutional neural networks (CNNs) in computer vision has occurred in tandem with advances in depth sensing technology. Depth cameras yield two-dimensional arrays that store, at each pixel, the distance from the sensor to objects and surfaces in the scene; aligned with a regular color image, these form so-called RGBD images. Inspired by prior models in the literature, this work develops a suite of RGBD CNN models to tackle the challenging tasks of object detection, instance segmentation, and semantic segmentation. Prominent architectures for object detection and image segmentation are modified to incorporate dual-backbone approaches that take RGB and depth images as input, combining features from both modalities through novel fusion modules. For each task, the models developed are competitive with state-of-the-art RGBD architectures. In particular, the proposed RGBD object detection approach achieves 53.5% mAP on the SUN RGBD 19-class object detection benchmark, while the proposed RGBD semantic segmentation architecture yields 69.4% accuracy on the SUN RGBD 37-class semantic segmentation benchmark. An original 13-class RGBD instance segmentation benchmark is introduced for the SUN RGBD dataset, for which the proposed model achieves 38.4% mAP. Additionally, an original depth-aware panoptic segmentation model is developed, trained, and tested on new benchmarks conceived for the NYUDv2 and SUN RGBD datasets. These benchmarks offer researchers a baseline for the task of RGBD panoptic segmentation on these datasets, on which the novel depth-aware model outperforms a comparable RGB counterpart.
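
The dual-backbone design described above combines RGB and depth features through fusion modules. As a generic stand-in for such a module (not the thesis's actual design), the sketch below concatenates the two feature maps and projects them with a 1×1 convolution.

```python
# Generic sketch of fusing RGB and depth feature maps from two backbones;
# concatenation + 1x1 convolution is a common baseline, not the thesis's module.
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    def __init__(self, rgb_channels, depth_channels, out_channels):
        super().__init__()
        self.project = nn.Sequential(
            nn.Conv2d(rgb_channels + depth_channels, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, rgb_features, depth_features):
        # Both feature maps are assumed to share the same spatial resolution.
        return self.project(torch.cat([rgb_features, depth_features], dim=1))

fusion = ConcatFusion(rgb_channels=256, depth_channels=256, out_channels=256)
rgb = torch.randn(1, 256, 32, 32)    # features from the RGB backbone
depth = torch.randn(1, 256, 32, 32)  # features from the depth backbone
fused = fusion(rgb, depth)           # (1, 256, 32, 32)
```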

6. Tang, Yuxing. "Weakly supervised learning of deformable part models and convolutional neural networks for object detection." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEC062/document.

Abstract:
In this dissertation we address the problem of weakly supervised object detection, where the goal is to recognize and localize objects in weakly labeled images whose object-level annotations are incomplete during training. To this end, we propose two methods which learn two different models for the objects of interest. In our first method, we enhance weakly supervised Deformable Part-based Models (DPMs) by emphasizing the importance of the location and size of the initial class-specific root filter. We first compute a candidate pool that represents the potential locations of the object as this root filter estimate, by exploiting a generic objectness measurement (region proposals) to combine the most salient regions and "good" region proposals. We then propose learning the latent class label of each candidate window as a binary classification problem, by training category-specific classifiers used to coarsely classify a candidate window into either a target object or a non-target class. Furthermore, we improve detection by incorporating the contextual information from image classification scores. Finally, we design a flexible enlarging-and-shrinking post-processing procedure to modify the DPM outputs, which can effectively match the approximate object aspect ratios and further improve final accuracy. Second, we investigate how knowledge about object similarities from both visual and semantic domains can be transferred to adapt an image classifier to an object detector in a semi-supervised setting on a large-scale database, where only a subset of the object categories is annotated with bounding boxes. We propose to transform deep convolutional neural network (CNN)-based image-level classifiers into object detectors by modeling the differences between the two on categories with both image-level and bounding box annotations, and transferring this information to convert classifiers into detectors for categories without bounding box annotations. We have evaluated both approaches extensively on several challenging detection benchmarks, e.g. PASCAL VOC, ImageNet ILSVRC, and Microsoft COCO. Both approaches compare favorably to the state of the art and show significant improvement over several other recent weakly supervised detection methods.
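
The step of learning each candidate window's latent label as a binary classification problem can be illustrated generically: a linear classifier is trained on fixed per-window feature vectors to separate likely target-object windows from background. The sketch below uses synthetic features and scikit-learn as placeholders; the thesis's actual features, DPM machinery, and training procedure are omitted.

```python
# Generic sketch of classifying candidate windows (region proposals) as
# target object vs. background from fixed feature vectors; features and
# labels here are synthetic placeholders, not the thesis's pipeline.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
proposal_features = rng.normal(size=(500, 128))          # one row per candidate window
is_target = (proposal_features[:, 0] > 0.5).astype(int)  # stand-in "latent" labels

# Category-specific binary classifier: target object vs. non-target.
classifier = LinearSVC(C=1.0)
classifier.fit(proposal_features, is_target)

# Score new candidate windows; high scores mark likely object locations.
new_windows = rng.normal(size=(10, 128))
scores = classifier.decision_function(new_windows)
ranked = np.argsort(-scores)  # best candidates first
```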

7. Schennings, Jacob. "Deep Convolutional Neural Networks for Real-Time Single Frame Monocular Depth Estimation." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-336923.

Abstract:
Vision-based active safety systems have become increasingly common in modern vehicles, estimating the depth of objects ahead for autonomous driving (AD) and advanced driver-assistance systems (ADAS). In this thesis a lightweight deep convolutional neural network performing real-time depth estimation on single monocular images is implemented and evaluated. Many of the vision-based automatic brake systems in modern vehicles only detect pre-trained object types such as pedestrians and vehicles. These systems fail to detect general objects such as road debris and roadside obstacles. In stereo vision systems the problem is resolved by calculating a disparity image from the stereo image pair to extract depth information. The distance to an object can also be determined using radar and LiDAR systems. By using this depth information the system performs the actions necessary to avoid collisions with objects that are determined to be too close. However, these systems are more expensive than a regular mono camera system and are therefore not very common in the average consumer car. By implementing robust depth estimation in mono vision systems, the benefits of active safety systems could be extended to a larger segment of the vehicle fleet. This could drastically reduce traffic accidents related to human error and possibly save many lives. The network architecture evaluated in this thesis is more lightweight than other CNN architectures previously used for monocular depth estimation and is therefore preferable on computationally constrained systems. The network solves a supervised regression problem during training in order to produce a pixel-wise depth estimation map. It was trained using sparse ground truth images with spatially incoherent and discontinuous data and outputs a dense, spatially coherent and continuous depth map prediction. The spatially incoherent ground truth posed a problem of discontinuity that was addressed by a masked loss function with regularization. The network was able to predict dense depth estimates on the KITTI dataset with close to state-of-the-art performance.
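
The masked loss idea, computing the regression error only at pixels where sparse ground-truth depth exists, can be sketched as follows; the exact loss and regularization used in the thesis are not reproduced, and the smoothness term below is a generic stand-in.

```python
# Sketch of a masked depth-regression loss: the error is computed only at
# pixels with valid (non-zero) sparse ground truth. This is a generic
# illustration, not the thesis's exact loss or regularization.
import torch

def masked_l1_loss(prediction, sparse_target, smoothness_weight=0.001):
    """prediction, sparse_target: (B, 1, H, W); zeros in the target mean 'no label'."""
    valid = sparse_target > 0
    data_term = torch.abs(prediction[valid] - sparse_target[valid]).mean()

    # Simple smoothness regularizer encouraging a spatially coherent prediction.
    dx = torch.abs(prediction[:, :, :, 1:] - prediction[:, :, :, :-1]).mean()
    dy = torch.abs(prediction[:, :, 1:, :] - prediction[:, :, :-1, :]).mean()
    return data_term + smoothness_weight * (dx + dy)

pred = torch.rand(2, 1, 64, 64, requires_grad=True)
target = torch.rand(2, 1, 64, 64) * (torch.rand(2, 1, 64, 64) > 0.9)  # ~10% labeled
loss = masked_l1_loss(pred, target)
loss.backward()
```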

8. Kastberg, Maria. "Using Convolutional Neural Networks to Detect People Around Wells in South Sudan." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160325.

Abstract:
The organization International Aid Services (IAS) provides people in East Africa with clean water through well drilling. The wells are located in surroundings far away for the investors to inspect, and IAS therefore wishes to be able to monitor its wells to get a better overview of whether different types of improvements need to be made. Of particular interest is seeing the load on different water sources at different times of the day and during the year, and knowing how many people visit the wells. In this paper, a method is proposed for counting people around the wells. The goal is to choose a suitable method for detecting humans in images and to evaluate how it performs. Counting humans in images is not a new topic, but the situation implies some restrictions: a Raspberry Pi with an associated camera is used, a small embedded system that cannot handle large and complex software, and there is a limited amount of data in the project. The method proposed in this project uses a pre-trained convolutional neural network based object detector called the Single Shot Detector, which is adapted to suit smaller devices and applications. The pre-trained network it is based on is called MobileNet, a network developed to be used on smaller systems. To see how well the chosen detector performs, it is compared with some other models, among them a detector based on the Inception network, a significantly larger network than MobileNet. The base network is modified by transfer learning. Results show that a fine-tuned and modified network can achieve better results, from an F1-score of 0.49 for a non-fine-tuned model to 0.66 for the fine-tuned one.
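
Loading a lightweight pretrained SSD-style detector with a MobileNet backbone, as a starting point for the kind of transfer learning described above, might look like the sketch below; torchvision's SSDlite model is used here as a stand-in for the exact detector used in the thesis.

```python
# Sketch of loading a lightweight pretrained SSD-style detector (MobileNet
# backbone) and running it on one image; torchvision's SSDlite stands in for
# the thesis's exact model, and the input image is a placeholder.
import torch
from torchvision.models.detection import (
    ssdlite320_mobilenet_v3_large, SSDLite320_MobileNet_V3_Large_Weights,
)

weights = SSDLite320_MobileNet_V3_Large_Weights.DEFAULT
model = ssdlite320_mobilenet_v3_large(weights=weights)
model.eval()

image = torch.rand(3, 320, 320)           # placeholder for a camera frame
with torch.no_grad():
    detections = model([image])[0]        # dict with 'boxes', 'labels', 'scores'

people = detections["boxes"][detections["labels"] == 1]  # COCO class 1 = person
print(f"Detected {len(people)} person boxes")
```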

9. Rönnqvist, Johannes, and Johannes Sjölund. "A Deep Learning Approach to Detection and Classification of Small Defects on Painted Surfaces : A Study Made on Volvo GTO, Umeå." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160194.

Abstract:
In this thesis we conclude that convolutional neural networks, together with phase-measuring deflectometry techniques, can be used to create models which detect and classify defects on painted surfaces very well, even compared to experienced humans. Further, we show which preprocessing measures enhance the performance of the models. We see that standardisation increases the classification accuracy of the models. We demonstrate that cleaning the data through relabelling and removing faulty images improves classification accuracy and especially the models' ability to distinguish between different types of defects. We show that oversampling might be a feasible method to improve accuracy by increasing and balancing the data set through augmentation of existing observations. Lastly, we find that combining many images with different patterns heavily increases the classification accuracy of the models. Our proposed approach is demonstrated to work well in a real-time factory environment. An automated quality control of the painted surfaces of Volvo Truck cabins could give great benefits in cost and quality. The automated quality control could provide data for root-cause analysis and a quick and efficient alarm system. This could significantly streamline production and at the same time reduce costs and errors in production. Corrections and optimisation of the processes could be made at earlier stages and with higher precision than today.
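
Oversampling a rare defect class to balance the training set, as discussed above, is often implemented with a weighted sampler that draws minority-class examples more frequently; the sketch below assumes PyTorch, and the dataset and class counts are hypothetical.

```python
# Sketch of class balancing via oversampling with a weighted sampler,
# assuming PyTorch; the dataset and class counts are hypothetical.
import torch
from collections import Counter
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Dummy imbalanced dataset: 90 "no defect" (0) vs 10 "defect" (1) samples.
features = torch.randn(100, 16)
labels = torch.tensor([0] * 90 + [1] * 10)
dataset = TensorDataset(features, labels)

# Weight each sample inversely to its class frequency, then sample with
# replacement so minority-class examples are effectively oversampled.
counts = Counter(labels.tolist())
sample_weights = torch.tensor([1.0 / counts[int(y)] for y in labels])
sampler = WeightedRandomSampler(sample_weights, num_samples=len(dataset),
                                replacement=True)
loader = DataLoader(dataset, batch_size=16, sampler=sampler)

for batch_features, batch_labels in loader:
    pass  # roughly balanced batches; augmentation would be applied here
```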

10. Melcherson, Tim. "Image Augmentation to Create Lower Quality Images for Training a YOLOv4 Object Detection Model." Thesis, Uppsala universitet, Signaler och system, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-429146.

Abstract:
Research in the Arctic is of ever-growing importance, and modern technology is used in new ways to map and understand this very complex region and how it is affected by climate change. Here, animals and vegetation are tightly coupled with their environment in a fragile ecosystem, and when the environment undergoes rapid changes these ecosystems risk being severely damaged. Understanding what kinds of data have the potential to be used in artificial intelligence can be important, as many research stations have data archives from decades of work in the Arctic. In this thesis, a YOLOv4 object detection model has been trained on two classes of images to investigate the performance impact of disturbances in the training data set. An expanded data set was created by augmenting the initial data to contain various disturbances. A model was successfully trained on the augmented data set, and a correlation between worse performance and the presence of noise was detected, but changes in saturation and altered colour levels seemed to have less impact than expected. Reducing noise in gathered data is seemingly of greater importance than enhancing images with lacking colour levels. Further investigations with a larger and more thoroughly processed data set are required to gain a clearer picture of the impact of the various disturbances.
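
The kinds of disturbances studied in the thesis, such as added noise and altered saturation or colour levels, can be generated with standard image augmentation. The sketch below assumes Pillow and NumPy, and the parameter values are hypothetical examples rather than those used in the thesis.

```python
# Sketch of creating lower-quality training images by adding Gaussian noise
# and altering saturation; parameter values are hypothetical examples.
import numpy as np
from PIL import Image, ImageEnhance

def degrade(image, noise_std=15.0, saturation_factor=0.5):
    """Return a copy of `image` with reduced saturation and added noise."""
    # Alter colour saturation (factor < 1 desaturates, > 1 oversaturates).
    image = ImageEnhance.Color(image).enhance(saturation_factor)

    # Add zero-mean Gaussian noise to every pixel.
    array = np.asarray(image, dtype=np.float32)
    noisy = array + np.random.normal(0.0, noise_std, size=array.shape)
    return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))

original = Image.new("RGB", (416, 416), color=(120, 180, 90))  # placeholder image
augmented = degrade(original, noise_std=25.0, saturation_factor=0.3)
augmented.save("augmented_example.png")
```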