Dissertations / Theses on the topic 'Deformable Parts Model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 15 dissertations / theses for your research on the topic 'Deformable Parts Model.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Xu, Jiaolong. "Domain adaptation of deformable part-based models." Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/290266.

Full text
Abstract:
On-board pedestrian detection is crucial for Advanced Driver Assistance Systems (ADAS). An accurate classifier is fundamental for vision-based pedestrian detection. The underlying assumption when learning classifiers is that the training set and the deployment environment (testing) follow the same probability distribution with respect to the features used by the classifiers. In practice, however, different factors can break this constancy assumption. Accordingly, reusing existing classifiers by adapting them from the previous training environment (source domain) to the new testing one (target domain) is an approach with increasing acceptance in the computer vision community. In this thesis we focus on the domain adaptation of deformable part-based models (DPMs) for pedestrian detection. As a proof of concept, we use a computer-graphics-based synthetic dataset, i.e. a virtual world, as the source domain, and adapt the virtual-world-trained DPM detector to various real-world datasets. We start by exploiting the maximum detection accuracy of the virtual-world-trained DPM. Even so, when operating on various real-world datasets, the virtual-world-trained detector still suffers from accuracy degradation due to the domain gap between the virtual and real worlds. We then focus on domain adaptation of the DPM. As a first step, we consider single-source, single-target domain adaptation and propose two batch learning methods, namely A-SSVM and SA-SSVM. Later, we further consider leveraging multiple target (sub-)domains for progressive domain adaptation and propose a hierarchical adaptive structured SVM (HA-SSVM) for optimization. Finally, we extend HA-SSVM to the challenging online domain adaptation problem, aiming at making the detector adapt to the target domain online, without any human intervention. None of the methods proposed in this thesis require revisiting source-domain data. The evaluations are done on the Caltech pedestrian detection benchmark. Results show that SA-SSVM slightly outperforms A-SSVM and avoids accuracy drops as high as 15 points when compared with a non-adapted detector. The hierarchical model learned by HA-SSVM further boosts the domain adaptation performance. Finally, the online domain adaptation method has demonstrated that it can achieve accuracy comparable to the batch-learned models while not requiring manually labeled target-domain examples. Domain adaptation for pedestrian detection is of paramount importance and a relatively unexplored area. We humbly hope the work in this thesis can provide foundations for future work in this area.
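A minimal sketch of the adaptation idea described above, assuming a plain linear binary detector rather than the structured, latent DPM setting of the thesis: instead of regularizing the weights toward zero, an A-SSVM-style objective regularizes them toward the source-domain weights, so the detector can be adapted with a modest amount of target-domain data and without revisiting the source domain. All names and hyperparameters below are illustrative.

```python
import numpy as np

def adapt_linear_detector(w_src, X_tgt, y_tgt, C=1.0, lr=1e-3, epochs=200):
    """A-SSVM-flavoured adaptation sketch for a linear binary detector.

    Minimizes 0.5 * ||w - w_src||^2 + C * sum_i hinge(y_i * (w . x_i))
    with sub-gradient descent, so the adapted weights stay close to the
    source-domain detector unless the target data pulls them away.
    """
    w = w_src.copy()
    n = len(y_tgt)
    for _ in range(epochs):
        margins = y_tgt * (X_tgt @ w)
        active = margins < 1.0                      # target samples violating the margin
        grad = (w - w_src) - C * (y_tgt[active, None] * X_tgt[active]).sum(axis=0)
        w -= lr * grad / n
    return w

# toy usage: adapt a 'source' detector with a handful of target-domain samples
rng = np.random.default_rng(0)
w_src = rng.normal(size=16)
X_tgt = rng.normal(size=(50, 16))
y_tgt = np.sign(X_tgt @ w_src + 0.5 * rng.normal(size=50))
w_adapted = adapt_linear_detector(w_src, X_tgt, y_tgt)
```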
2

Danelljan, Martin. "Visual Tracking." Thesis, Linköpings universitet, Datorseende, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-105659.

Full text
Abstract:
Visual tracking is a classical computer vision problem with many important applications in areas such as robotics, surveillance and driver assistance. The task is to follow a target in an image sequence. The target can be any object of interest, for example a human, a car or a football. Humans perform accurate visual tracking with little effort, while it remains a difficult computer vision problem. It poses major challenges, such as appearance changes, occlusions and background clutter. Visual tracking is thus an open research topic, but significant progress has been made in the last few years. The first part of this thesis explores generic tracking, where nothing is known about the target except for its initial location in the sequence. A specific family of generic trackers that exploit the FFT for faster tracking-by-detection is studied. Among these, the CSK tracker has recently been shown to obtain competitive performance at extraordinarily low computational cost. Three contributions are made to this type of tracker. Firstly, a new method for learning the target appearance is proposed and shown to outperform the original method. Secondly, different color descriptors are investigated for the tracking purpose. Evaluations show that the best descriptor greatly improves the tracking performance. Thirdly, an adaptive dimensionality reduction technique is proposed, which adaptively chooses the most important feature combinations to use. This technique significantly reduces the computational cost of the tracking task. Extensive evaluations show that the proposed tracker outperforms state-of-the-art methods in the literature, while operating at a several times higher frame rate. In the second part of this thesis, the proposed generic tracking method is applied to human tracking in surveillance applications. A causal framework is constructed that automatically detects and tracks humans in the scene. The system fuses information from generic tracking and state-of-the-art object detection in a Bayesian filtering framework. In addition, the system incorporates the identification and tracking of specific human parts to achieve better robustness and performance. Tracking results are demonstrated on a real-world benchmark sequence.
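The FFT-based tracking-by-detection family referred to above (of which CSK is a member) learns a correlation filter in the Fourier domain, so training and detection cost only a few FFTs per frame. The sketch below is a minimal single-channel, linear-kernel (MOSSE-like) variant under that assumption; it is not the thesis's tracker, and the synthetic usage at the end is only illustrative.

```python
import numpy as np

def train_filter(patch, target_response, lam=1e-2):
    """Learn a correlation filter H in the Fourier domain (MOSSE-style).

    patch:           grayscale template around the target (2-D array)
    target_response: desired response, e.g. a Gaussian peaked on the target
    """
    X = np.fft.fft2(patch)
    Y = np.fft.fft2(target_response)
    H = (Y * np.conj(X)) / (X * np.conj(X) + lam)   # element-wise, O(N log N) overall
    return H

def detect(H, search_patch):
    """Correlate the filter with a new patch; the response peak gives the target shift."""
    Z = np.fft.fft2(search_patch)
    response = np.real(np.fft.ifft2(H * Z))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return dy, dx, response

# toy usage with a synthetic Gaussian target response
size = 64
yy, xx = np.mgrid[0:size, 0:size]
g = np.exp(-((yy - size // 2) ** 2 + (xx - size // 2) ** 2) / (2 * 2.0 ** 2))
patch = np.random.default_rng(0).normal(size=(size, size))
H = train_filter(patch, g)
dy, dx, _ = detect(H, np.roll(patch, (3, 5), axis=(0, 1)))
```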
3

Martínez, Bertí Enrique. "SEGUIMIENTO DE PERSONAS APLICANDO RESTRICCIONES CINEMÁTICAS BASADAS EN MODELOS DE CUERPOS RÍGIDOS ARTICULADOS." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/86159.

Full text
Abstract:
The present thesis deals with the study of vision techniques for detecting the human pose from the analysis of a single image, as well as tracking these poses along a sequence of images. The human pose is modelled by four kinematic chains that represent the four articulated limbs; these kinematic chains and the head remain attached to the body. Each of the four kinematic chains is composed of three keypoints, so the model initially has a total of 14 parts. In this thesis it is proposed to modify the technique called Deformable Parts Model (DPM) by adding the depth channel. The DPM model was originally defined over three-channel RGB images, whereas in this thesis we propose to work on four-channel RGBD images, so the proposed extension is called 4D-DPM. The experiments performed with 4D-DPM demonstrate an improvement in pose-detection accuracy with respect to the initial DPM model, at the cost of a higher computational cost due to the additional channel. It is then proposed to reduce this computational cost by simplifying the model that defines the human pose. The idea is to reduce the number of variables to be detected with the 4D-DPM model, so that the suppressed variables can be computed from the detected ones using inverse kinematics models based on dual quaternions. In addition, particle filter models are used to further improve the accuracy of human pose detection along a sequence of images. Considering the problem of detecting and tracking the human body pose along a video sequence, this thesis proposes the following method. 1. Camera calibration and RGBD image processing; subtraction of the image background with the MSER method. 2. 4D-DPM: the method used to detect the keypoints (variables of the pose model) within an image. 3. Particle filters: this type of filter is designed to track the keypoints over time and correct the data obtained by the sensor. 4. Inverse kinematic modelling: the kinematic chains are handled with the help of dual quaternions in order to obtain the complete pose model of the human body. The overall contribution of this thesis is a method that, combining the techniques above, is able to improve the accuracy of detecting and tracking the human body pose in a video sequence while also reducing its computational cost. This is possible due to the combination of the 4D-DPM method with inverse kinematics techniques. The original DPM method must detect 14 points of interest on an RGB image to estimate the human pose, whereas the proposed method, in which one point of interest per limb is removed, must detect 10 points of interest on an RGBD image. The 4 eliminated points of interest are subsequently computed from the 10 detected ones using inverse kinematics. To solve the inverse kinematics problem, a dual-quaternion method is proposed for each of the 4 kinematic chains that model the limbs of the human body skeleton. The particle filter is applied over the time sequence of the 10 points of interest of the pose model detected with the 4D-DPM method. To design these particle filters, the following constraints are added to weight the generated particles: 1. Joint-limit constraints. 2. Smoothness constraints. 3. Collision detection. 4. Projection of poly-spheres.
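As a rough illustration of the particle-filter stage described above, the sketch below weights each pose hypothesis by a measurement likelihood multiplied by penalty terms for two of the four listed constraints (joint limits and smoothness). The state parameterization, noise models and penalty functions are simplified placeholders of my own, not the thesis's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def constrained_pf_step(particles, observation, joint_lo, joint_hi,
                        prev_estimate, sigma_obs=0.5, sigma_dyn=0.02):
    """One predict/weight/resample step for a pose state of joint angles.

    particles:    (N, D) joint-angle hypotheses
    observation:  (D,) noisy joint angles derived from the 4D-DPM detections
    joint_lo/hi:  (D,) articular limits used to penalize invalid poses
    """
    n, d = particles.shape
    # 1. predict: simple random-walk dynamics
    particles = particles + rng.normal(scale=sigma_dyn, size=(n, d))
    # 2. weight: measurement likelihood times constraint penalties
    err = np.linalg.norm(particles - observation, axis=1)
    likelihood = np.exp(-0.5 * (err / sigma_obs) ** 2)
    inside = np.all((particles >= joint_lo) & (particles <= joint_hi), axis=1)
    joint_penalty = np.where(inside, 1.0, 1e-6)          # joint-limit constraint
    smooth_penalty = np.exp(-np.linalg.norm(particles - prev_estimate, axis=1))
    w = likelihood * joint_penalty * smooth_penalty + 1e-12
    w /= w.sum()
    # 3. weighted estimate and multinomial resampling
    estimate = (w[:, None] * particles).sum(axis=0)
    resampled = particles[rng.choice(n, size=n, p=w)]
    return resampled, estimate

# toy usage with a 10-dimensional pose vector
particles = rng.uniform(-1.0, 1.0, size=(500, 10))
obs = rng.uniform(-0.5, 0.5, size=10)
particles, pose = constrained_pf_step(particles, obs, joint_lo=-np.ones(10),
                                      joint_hi=np.ones(10), prev_estimate=np.zeros(10))
```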
Martínez Bertí, E. (2017). SEGUIMIENTO DE PERSONAS APLICANDO RESTRICCIONES CINEMÁTICAS BASADAS EN MODELOS DE CUERPOS RÍGIDOS ARTICULADOS [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86159
4

Bui, Manh-Tuan. "Vision-based multi-sensor people detection system for heavy machines." Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP2156/document.

Full text
Abstract:
This thesis has been carried out in the framework of the cooperation between the Compiègne University of Technology (UTC) and the Technical Centre for Mechanical Industries (CETIM). In this work, we present a vision-based multi-sensor people detection system for safety on heavy machines. A perception system composed of a monocular fisheye camera and a Lidar is proposed. The use of fisheye cameras provides the advantage of a wide field of view, but raises the problem of handling strong distortions in the detection stage. To the best of our knowledge, no research work has been dedicated to people detection in fisheye images. For that reason, we focus on investigating and quantifying the impact of strong radial distortions on people's appearance, and on proposing adaptive approaches to handle that specificity. Our propositions are inspired by two state-of-the-art people detection approaches: the Histogram of Oriented Gradients (HOG) and the Deformable Parts Model (DPM). First, by enriching the training data set with artificial fisheye patches, we show that the classifier can take the distortions into account during learning. However, adapting the training samples is not the best solution to handle the deformation of people's appearance. We then decided to adapt the DPM approach to handle the problem explicitly. It turned out that deformable models can be modified to be better adapted to the strong distortions of fisheye images, although such an approach has the drawback of a higher computational cost and complexity. In this thesis, we also present a framework that fuses the Lidar modality with the vision-based people detection algorithm. A sequential Lidar-based fusion architecture is used, which directly addresses the problem of reducing the false detections and the computational cost of a vision-only system. A heavy-machine dataset has also been built, and different experiments have been carried out to evaluate the performance of the system. The results are promising, both in terms of processing speed and detection performance.
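One way to realize the training-set enrichment mentioned above, i.e. warping conventional upright pedestrian patches with a fisheye-like radial distortion before training the classifier, is sketched below. The polynomial distortion model and its coefficient are illustrative choices, not the thesis's camera calibration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def radial_distort(patch, k1=0.4):
    """Warp an image patch with a simple polynomial radial-distortion model.

    For every output pixel we sample the source pixel at
    r_src = r_dst * (1 + k1 * r_dst^2), with radii normalized to [0, 1],
    which mimics the warping of peripheral regions in fisheye images.
    """
    h, w = patch.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    norm = np.hypot(cy, cx)
    dy, dx = (yy - cy) / norm, (xx - cx) / norm
    r = np.hypot(dy, dx)
    scale = 1.0 + k1 * r ** 2
    src_y = cy + dy * scale * norm
    src_x = cx + dx * scale * norm
    if patch.ndim == 2:
        return map_coordinates(patch, [src_y, src_x], order=1, mode='nearest')
    # apply the same warp to each colour channel
    return np.stack([map_coordinates(patch[..., c], [src_y, src_x], order=1,
                                     mode='nearest') for c in range(patch.shape[2])], axis=-1)

# toy usage: distort a synthetic pedestrian-sized patch before adding it to the training set
patch = np.random.default_rng(0).random((128, 64))
distorted = radial_distort(patch, k1=0.6)
```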
5

Tang, Yuxing. "Weakly supervised learning of deformable part models and convolutional neural networks for object detection." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEC062/document.

Full text
Abstract:
In this dissertation we address the problem of weakly supervised object detection, where the goal is to recognize and localize objects in weakly labeled images whose object-level annotations are incomplete during training. To this end, we propose two methods which learn two different models for the objects of interest. In our first method, we enhance weakly supervised Deformable Part-based Models (DPMs) by emphasizing the importance of the location and size of the initial class-specific root filter. We first compute a candidate pool that represents potential locations of the object for this root-filter estimate, by exploring a generic objectness measurement (region proposals) to combine the most salient regions and "good" region proposals. We then learn the latent class label of each candidate window as a binary classification problem, training category-specific classifiers to coarsely classify a candidate window as either a target object or a non-target class. Furthermore, we improve detection by incorporating contextual information from image classification scores. Finally, we design a flexible enlarging-and-shrinking post-processing procedure to modify the DPM outputs, which can effectively match approximate object aspect ratios and further improve final accuracy. Second, we investigate how knowledge about object similarities, from both the visual and the semantic domain, can be transferred to adapt an image classifier into an object detector in a semi-supervised setting on a large-scale database, where only a subset of object categories is annotated with bounding boxes. We propose to transform deep Convolutional Neural Network (CNN)-based image-level classifiers into object detectors by modeling the differences between the two on categories with both image-level and bounding-box annotations, and transferring this information to convert classifiers into detectors for categories without bounding-box annotations. We have evaluated both our approaches extensively on several challenging detection benchmarks, e.g. PASCAL VOC, ImageNet ILSVRC and Microsoft COCO. Both approaches compare favorably to the state of the art and show significant improvement over several other recent weakly supervised detection methods.
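The second contribution above, converting image classifiers into detectors by modelling the classifier-detector difference on categories that have bounding boxes and transferring it to categories that do not, can be caricatured by the sketch below, which transfers only the mean weight difference. The names and the mean-difference rule are deliberate simplifications; the thesis's transfer additionally exploits visual and semantic similarity between categories.

```python
import numpy as np

def classifiers_to_detectors(W_clf_known, W_det_known, W_clf_novel):
    """Convert image classifiers into detectors for novel categories.

    W_clf_known: (K, D) classifier weights for categories WITH box annotations
    W_det_known: (K, D) detector weights learned for those same categories
    W_clf_novel: (M, D) classifier weights for categories WITHOUT boxes
    Returns estimated detector weights for the novel categories.
    """
    # model the classifier-to-detector difference on the annotated categories ...
    delta = (W_det_known - W_clf_known).mean(axis=0)
    # ... and transfer it to the categories that only have image-level labels
    return W_clf_novel + delta

# toy usage
rng = np.random.default_rng(0)
W_clf_known = rng.normal(size=(20, 4096))
W_det_known = W_clf_known + rng.normal(scale=0.1, size=(20, 4096))
W_clf_novel = rng.normal(size=(5, 4096))
W_det_novel = classifiers_to_detectors(W_clf_known, W_det_known, W_clf_novel)
```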
6

Leon, Leissi Margarita Castaneda. "Detecção de objetos em vídeos usando misturas de modelos baseados em partes deformáveis obtidas de um conjunto de imagens." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-31102013-094950/.

Full text
Abstract:
The problem of detecting objects that belong to a specific class in videos is widely studied due to its potential applications. For example, for videos taken from a stationary camera we can mention applications such as security and traffic surveillance; when the video has been taken from a dynamic camera, a possible application is autonomous driving. The literature presents several approaches that treat each of these cases separately and consider only images obtained from a stationary or a dynamic camera to train the detectors. These approaches can lead to poor performance when the techniques are used on image sequences from a different type of camera. The state of the art in detecting objects of a specific class shows a tendency towards histograms and supervised training, and basically follows this structure: construction of the object-class model, detection of candidates in the image/frame, and application of a distance measure to those candidates. Another disadvantage is that some approaches use a separate model for each viewpoint of an object, generating many models and, in some cases, one classifier per viewpoint. In this work we approach the object detection problem using a model of the object class created from a dataset of static images, and we then use this model to detect objects in videos (sequences of images) collected from stationary and dynamic cameras, i.e. in a totally different setting from the one used for training. The model is created in an off-line learning phase, using an image database containing cars in several points of view, PASCAL 2007. The model is based on a mixture of deformable part models (MDPM), originally proposed by Felzenszwalb et al. (2010b) for detection in static images. We do not limit the model to any specific viewpoint. A set of experiments was designed to explore the best number of mixture components, as well as the number of parts of the model. In addition, we performed a comparative study of symmetric and asymmetric MDPMs. We evaluated the proposed method for detecting people and cars in videos obtained by stationary and dynamic cameras. Our results not only show the good performance of the MDPM, with better results than state-of-the-art approaches for object detection in videos obtained from stationary or dynamic cameras, but also indicate the best number of mixture components and parts for the created model. Finally, the results show some differences between symmetric and asymmetric MDPMs in the detection of objects in different videos.
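For reference, the scoring function of the mixture of deformable part models used above, in the formulation of Felzenszwalb et al., can be written as follows (this is the standard published form, stated here as background rather than anything specific to this dissertation):

```latex
% Mixture-of-DPMs score at root location p_0 in a HOG feature pyramid H:
% component c has root filter F_0^c, part filters F_i^c with deformation
% weights d_i^c, and bias b_c.
\mathrm{score}_c(p_0) \;=\; F_0^{c}\cdot\phi(H,p_0)
   \;+\; \sum_{i=1}^{n_c}\,\max_{p_i}\Bigl[\,F_i^{c}\cdot\phi(H,p_i)
   \;-\; d_i^{c}\cdot\bigl(dx_i,\,dy_i,\,dx_i^{2},\,dy_i^{2}\bigr)\Bigr]
   \;+\; b_c,
\qquad
\mathrm{score}(p_0) \;=\; \max_{c}\;\mathrm{score}_c(p_0).
```

Here (dx_i, dy_i) is the displacement of part i from its anchor position relative to the root location p_0, and the outer maximum over components c yields the mixture score that is thresholded for detection.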
7

Trulls, Fortuny Eduard. "Enhancing low-level features with mid-level cues." Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/286325.

Full text
Abstract:
Local features have become an essential tool in visual recognition. Much of the progress in computer vision over the past decade has built on simple, local representations such as SIFT or HOG. SIFT in particular shifted the paradigm in feature representation. Subsequent works have often focused on improving either computational efficiency, or invariance properties. This thesis belongs to the latter group. Invariance is a particularly relevant aspect if we intend to work with dense features. The traditional approach to sparse matching is to rely on stable interest points, such as corners, where scale and orientation can be reliably estimated, enforcing invariance; dense features need to be computed on arbitrary points. Dense features have been shown to outperform sparse matching techniques in many recognition problems, and form the bulk of our work. In this thesis we present strategies to enhance low-level, local features with mid-level, global cues. We devise techniques to construct better features, and use them to handle complex ambiguities, occlusions and background changes. To deal with ambiguities, we explore the use of motion to enforce temporal consistency with optical flow priors. We also introduce a novel technique to exploit segmentation cues, and use it to extract features invariant to background variability. For this, we downplay image measurements most likely to belong to a region different from that where the descriptor is computed. In both cases we follow the same strategy: we incorporate mid-level, "big picture" information into the construction of local features, and proceed to use them in the same manner as we would the baseline features. We apply these techniques to different feature representations, including SIFT and HOG, and use them to address canonical vision problems such as stereo and object detection, demonstrating that the introduction of global cues yields consistent improvements. We prioritize solutions that are simple, general, and efficient. Our main contributions are as follows: (a) An approach to dense stereo reconstruction with spatiotemporal features, which unlike existing works remains applicable to wide baselines. (b) A technique to exploit segmentation cues to construct dense descriptors invariant to background variability, such as occlusions or background motion. (c) A technique to integrate bottom-up segmentation with recognition efficiently, amenable to sliding window detectors.
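A minimal sketch of the "downplay foreign measurements" idea follows: gradient magnitudes are attenuated by a soft mask giving the probability that each pixel belongs to the same segment as the descriptor centre, before an orientation histogram is accumulated. The mask source, binning and normalization are placeholders; the thesis's segmentation-aware descriptors are considerably more elaborate.

```python
import numpy as np

def segmentation_aware_histogram(gray, same_region_prob, n_bins=9):
    """Orientation histogram over a patch, weighted by a soft segmentation mask.

    gray:             (H, W) grayscale patch centred on the descriptor location
    same_region_prob: (H, W) probability that each pixel lies in the same
                      segment as the patch centre (values in [0, 1])
    """
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy) * same_region_prob      # downplay "foreign" pixels
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-9)

# toy usage with a synthetic patch and mask
rng = np.random.default_rng(0)
patch = rng.random((32, 32))
mask = np.ones((32, 32))
mask[:, 20:] = 0.1                                 # pretend the right side is background
descriptor = segmentation_aware_histogram(patch, mask)
```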
8

Andersson, Daniel. "Automatic vertebrae detection and labeling in sagittal magnetic resonance images." Thesis, Linköpings universitet, Medicinsk informatik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-115874.

Full text
Abstract:
Radiologists are often plagued by limited time for completing their work, with an ever increasing workload. A picture archiving and communication system (PACS) is a platform for daily image reviewing that improves their work environment, and on such a platform spinal MR images, for example, can be reviewed. When reviewing spinal images a radiologist wants vertebra labels, and Sectra's PACS platform offers a good opportunity for implementing an automatic method for spine labeling. In this thesis a method for performing automatic spine labeling, called a vertebrae classifier, is presented. This method should remove the need for radiologists to perform manual spine labeling, and could be implemented in Sectra's PACS software to improve radiologists' overall work experience. Spine labeling is the process of marking vertebra centres with a name on a spinal image. The method proposed in this thesis was developed using a machine learning approach for vertebra detection in sagittal MR images. The developed classifier works for both the lumbar and the cervical spine, but is optimized for the lumbar spine. During development, three different methods for vertebra detection were evaluated. Detection is done on multiple sagittal slices. The output of the detection is then labeled using a pictorial-structure-based algorithm, which uses a trained model of the spine to assess the correct labeling. The suggested method achieves 99.6% recall and 99.9% precision for the lumbar spine. The cervical spine achieves slightly worse performance, with 98.1% for both recall and precision. This result was achieved by training the proposed method on 43 images and validating it on 89 images for the lumbar spine; the cervical spine was validated using 26 images. These results are promising, especially for the lumbar spine. However, further evaluation is needed to test the method in a clinical setting.
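The pictorial-structure labeling step described above can be illustrated with a small dynamic-programming sketch over a chain model: each vertebra label is assigned to one of the detected candidate centres so that detection scores and a pairwise spacing prior are jointly maximized. The chain assumption, the Gaussian spacing prior and all numbers are simplifications of mine, not the thesis's trained spine model.

```python
import numpy as np

def label_chain(candidate_pos, unary, mean_gap=30.0, sigma_gap=8.0):
    """Assign L ordered vertebra labels to detection candidates via Viterbi.

    candidate_pos: (N,) vertical positions (e.g. in mm) of detected candidates
    unary:         (L, N) detection score of candidate n for vertebra label l
    A pairwise term prefers consecutive vertebrae to be ~mean_gap apart.
    """
    L, N = unary.shape
    gaps = candidate_pos[None, :] - candidate_pos[:, None]       # gaps[i, j] = pos_j - pos_i
    pairwise = -0.5 * ((gaps - mean_gap) / sigma_gap) ** 2       # log of a Gaussian spacing prior
    score = unary[0].copy()
    backptr = np.zeros((L, N), dtype=int)
    for l in range(1, L):
        total = score[:, None] + pairwise + unary[l][None, :]    # rows: previous candidate
        backptr[l] = np.argmax(total, axis=0)
        score = np.max(total, axis=0)
    labels = [int(np.argmax(score))]
    for l in range(L - 1, 0, -1):                                # backtrack the best path
        labels.append(int(backptr[l, labels[-1]]))
    return labels[::-1]                                          # candidate index per label

# toy usage: 5 labels, 8 candidates roughly 30 mm apart
pos = np.array([0, 29, 31, 61, 90, 118, 150, 200], dtype=float)
unary = np.random.default_rng(0).random((5, 8))
assignment = label_chain(pos, unary)
```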
9

Azizpour, Hossein. "Visual Representations and Models: From Latent SVM to Deep Learning." Doctoral thesis, KTH, Datorseende och robotik, CVAP, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-192289.

Full text
Abstract:
Two important components of a visual recognition system are the representation and the model. Both involve the selection and learning of features that are indicative for recognition and the discarding of features that are uninformative. This thesis, in its general form, proposes different techniques within the frameworks of two learning systems for representation and modeling: latent support vector machines (latent SVMs) and deep learning. First, we propose various approaches to group the positive samples into clusters of visually similar instances. Given a fixed representation, the sampled space of the positive distribution is usually structured. The proposed clustering techniques include a novel similarity measure based on exemplar learning, an approach for using additional annotation, and an augmentation of the latent SVM that automatically finds clusters whose members can be reliably distinguished from the background class. In another effort, a strongly supervised DPM is suggested to study how these models can benefit from privileged information. The extra information comes in the form of semantic part annotations (i.e. their presence and location), which are used to constrain the DPM's latent variables during, or prior to, the optimization of the latent SVM. Its effectiveness is demonstrated on the task of animal detection. Finally, we generalize the formulation of discriminative latent variable models, including DPMs, to incorporate a new set of latent variables representing the structure or properties of negative samples; we therefore term them negative latent variables. We show that this generalization affects state-of-the-art techniques and helps visual recognition by explicitly searching for counter-evidence of an object's presence. Following the resurgence of deep networks, the last works of this thesis focus on deep learning in order to produce a generic representation for visual recognition. A Convolutional Network (ConvNet) is trained on a large annotated image classification dataset, ImageNet, with about 1.3 million images. The activations at each layer of the trained ConvNet can then be treated as the representation of an input image. We show that such a representation is surprisingly effective for various recognition tasks, making it clearly superior to all the handcrafted features previously used in visual recognition (such as HOG in our first works on DPM). We further investigate ways in which this representation can be improved for the task at hand, and propose various factors, applied before or after training the representation, that can improve the efficacy of the ConvNet representation. These factors are analyzed on 16 datasets from various subfields of visual recognition.
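The off-the-shelf ConvNet representation described above is easy to reproduce with current libraries. The sketch below assumes a torchvision ImageNet-pretrained ResNet-50 as a stand-in for the thesis's original network: it extracts penultimate-layer activations and feeds them to a linear SVM, mirroring the generic-representation pipeline; the commented training call at the end presumes a hypothetical target dataset.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import LinearSVC

# ImageNet-pretrained backbone used purely as a feature extractor
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    """Return one 2048-D penultimate-layer activation vector per input PIL image."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    return feature_extractor(batch).flatten(1).numpy()

# transfer to a target recognition task with a simple linear classifier
# (train_images / train_labels are assumed to come from the target dataset)
# X = extract_features(train_images)
# clf = LinearSVC(C=1.0).fit(X, train_labels)
```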

10

Memarzadeh, Milad. "Automated 2D Detection and Localization of Construction Resources in Support of Automated Performance Assessment of Construction Operations." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/76908.

Full text
Abstract:
This study presents two computer vision based algorithms for automated 2D detection of construction workers and equipment from site video streams. State-of-the-art research proposes semi-automated detection methods for tracking construction workers and equipment. Considering the number of active equipment and workers on jobsites and their frequency of appearance in a camera's field of view, the application of semi-automated techniques can be time-consuming. To address this limitation, two new algorithms based on Histograms of Oriented Gradients and Colors (HOG+C) are proposed: 1) a HOG+C sliding detection window technique, and 2) a HOG+C deformable part-based model; their performance is compared to the state-of-the-art algorithm in the computer vision community. Furthermore, a new comprehensive benchmark dataset containing over 8,000 annotated video frames, including equipment and workers from different construction projects, is introduced. This dataset contains a large range of pose, scale, background, illumination, and occlusion variation. The preliminary results, with average performance accuracies of 100%, 92.02%, and 89.69% for workers, excavators, and dump trucks respectively, indicate the applicability of the proposed methods for automated activity analysis of workers and equipment from single video cameras. Unlike other state-of-the-art algorithms in automated resource tracking, these methods in particular detect idle resources and do not need manual or semi-automated initialization of resource locations in 2D video frames.
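The first of the two algorithms, a sliding detection window over combined gradient and colour histograms, can be sketched roughly as below, assuming an RGB float frame with values in [0, 1] and an already-trained linear classifier. Window size, stride, colour binning and the classifier are placeholders rather than the study's actual settings.

```python
import numpy as np
from skimage.feature import hog

def hog_color_descriptor(window, color_bins=8):
    """Concatenate HOG of the grayscale window with a coarse RGB colour histogram."""
    gray = window.mean(axis=2)
    g = hog(gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    c, _ = np.histogramdd(window.reshape(-1, 3), bins=color_bins, range=[(0, 1)] * 3)
    c = c.ravel() / (c.sum() + 1e-9)
    return np.concatenate([g, c])

def sliding_window_detect(frame, classifier, win=(128, 64), step=16, thresh=0.0):
    """Score every window of a video frame and return detections above a threshold."""
    H, W, _ = frame.shape
    detections = []
    for y in range(0, H - win[0] + 1, step):
        for x in range(0, W - win[1] + 1, step):
            feat = hog_color_descriptor(frame[y:y + win[0], x:x + win[1]])
            score = classifier.decision_function(feat[None, :])[0]   # e.g. a trained LinearSVC
            if score > thresh:
                detections.append((x, y, win[1], win[0], score))
    return detections
```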
Master of Science
11

Tsogkas, Stavros. "Mid-level representations for modeling objects." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC012/document.

Full text
Abstract:
In this thesis we propose the use of mid-level representations, and in particular i) medial axes, ii) object parts, and iii)convolutional features, for modelling objects.The first part of the thesis deals with detecting medial axes in natural RGB images. We adopt a learning approach, utilizing colour, texture and spectral clustering features, to build a classifier that produces a dense probability map for symmetry. Multiple Instance Learning (MIL) allows us to treat scale and orientation as latent variables during training, while a variation based on random forests offers significant gains in terms of running time.In the second part of the thesis we focus on object part modeling using both hand-crafted and learned feature representations. We develop a coarse-to-fine, hierarchical approach that uses probabilistic bounds for part scores to decrease the computational cost of mixture models with a large number of HOG-based templates. These efficiently computed probabilistic bounds allow us to quickly discard large parts of the image, and evaluate the exact convolution scores only at promising locations. Our approach achieves a $4times-5times$ speedup over the naive approach with minimal loss in performance.We also employ convolutional features to improve object detection. We use a popular CNN architecture to extract responses from an intermediate convolutional layer. We integrate these responses in the classic DPM pipeline, replacing hand-crafted HOG features, and observe a significant boost in detection performance (~14.5% increase in mAP).In the last part of the thesis we experiment with fully convolutional neural networks for the segmentation of object parts.We re-purpose a state-of-the-art CNN to perform fine-grained semantic segmentation of object parts and use a fully-connected CRF as a post-processing step to obtain sharp boundaries.We also inject prior shape information in our model through a Restricted Boltzmann Machine, trained on ground-truth segmentations.Finally, we train a new fully-convolutional architecture from a random initialization, to segment different parts of the human brain in magnetic resonance image data.Our methods achieve state-of-the-art results on both types of data
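As a loose illustration of the coarse-to-fine pruning idea described in this abstract (the thesis derives probabilistic bounds on part scores; the fixed-slack bound below is only a stand-in), the sketch scores a template cheaply at a coarse resolution, discards locations whose bound falls below the detection threshold, and evaluates the exact correlation only at the surviving locations.

import numpy as np
from scipy.signal import correlate2d

def coarse_to_fine_scores(feature_map, template, threshold, slack=1.0, stride=4):
    # Cheap pass: correlate a subsampled template on a subsampled feature map.
    coarse = correlate2d(feature_map[::stride, ::stride],
                         template[::stride, ::stride], mode='valid')
    upper_bound = coarse * stride * stride + slack   # crude, illustrative bound
    # Exact pass: evaluate the full correlation only where the bound is promising.
    exact = np.full((feature_map.shape[0] - template.shape[0] + 1,
                     feature_map.shape[1] - template.shape[1] + 1), -np.inf)
    ys, xs = np.nonzero(upper_bound >= threshold)
    for cy, cx in zip(ys, xs):
        y, x = cy * stride, cx * stride
        if y < exact.shape[0] and x < exact.shape[1]:
            patch = feature_map[y:y + template.shape[0], x:x + template.shape[1]]
            exact[y, x] = float(np.sum(patch * template))
    return exact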
APA, Harvard, Vancouver, ISO, and other styles
12

Brink, Hanno. "Deformable part model with CNN features for facial landmark detection under occlusion." Thesis, 2018. https://hdl.handle.net/10539/26699.

Full text
Abstract:
Detecting and localizing facial regions in images is a fundamental building block of many applications in affective computing and human-computer interaction, enabling higher-level analysis such as facial expression recognition. Facial expression recognition relies on the effective extraction of relevant facial features, and many techniques have been proposed for robust extraction of these features under a wide variety of poses and occlusion conditions. These techniques include Deformable Part Models (DPMs) and, more recently, deep Convolutional Neural Networks (CNNs). Hybrid models that combine DPMs with CNNs have also been proposed, motivated by the generalization properties of both. In this work we propose a combined system that uses CNN responses as features for a DPM, with a focus on handling occlusion. We also propose a face localization method that allows occluded regions to be detected and explicitly ignored during the detection step.
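A minimal sketch of the occlusion-handling step described above (assumed details, not the thesis code): the snippet computes a dense part response over a feature map and suppresses placements whose support overlaps a detected occlusion mask beyond a chosen fraction, so occluded regions are explicitly ignored during detection.

import numpy as np
from scipy.signal import correlate2d

def occlusion_aware_part_score(feature_map, part_filter, occlusion_mask,
                               max_occluded_fraction=0.3):
    """feature_map, part_filter: 2D arrays; occlusion_mask: 1 = occluded pixel."""
    scores = correlate2d(feature_map, part_filter, mode='valid')
    # Fraction of occluded pixels under each placement of the part filter.
    occ = correlate2d(occlusion_mask.astype(float),
                      np.ones_like(part_filter), mode='valid')
    occ /= part_filter.size
    scores[occ > max_occluded_fraction] = -np.inf   # ignore occluded placements
    return scores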
XL2019
APA, Harvard, Vancouver, ISO, and other styles
13

Hsu, Ming-Hung, and 徐明宏. "Fast Object Detection and Tracking Using Multistage Particle Window Deformable Part Model." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/40192241013104554839.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Computer Science and Information Engineering
Academic year 101
Object detection is one of the fundamental challenges in computer vision; both efficient detection and good accuracy are important for many applications. The deformable part model (DPM) achieves the best performance in the PASCAL VOC detection challenge, but its main computational bottleneck is the response evaluation for all sliding windows. Although the cascade DPM proposed by Felzenszwalb et al. makes detection faster, the computational cost remains high because all sliding windows are still exhaustively examined. We propose a fast object detection framework that combines the cascade deformable part model with a multistage particle window scheme. The detection process is separated into several stages; at each stage we generate particle windows based on a measurement density function estimated from the cascade-DPM responses of the particle windows at the previous stage. The measurement density function indicates where the target object is likely to appear. Three improvements are proposed to further speed up the multistage particle window scheme. Using this scheme, we can rapidly select promising sliding windows instead of examining every window exhaustively. A particle filter tracking technique is further adopted for on-road vehicle detection to accelerate the detection process. In the experiments we evaluate the performance of the proposed method; the results show that it runs 34.5 times faster than the conventional DPM on the PASCAL VOC 2007 dataset.
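The staged sampling described above can be sketched as follows (an illustrative toy version, not the thesis implementation): score_fn stands in for the cascade-DPM response at a candidate window position, and the particle count, stage count, and Gaussian perturbation are assumed values.

import numpy as np

rng = np.random.default_rng(0)

def run_stages(image_shape, score_fn, n_particles=500, n_stages=3, noise=8.0):
    """score_fn(y, x) stands in for the cascade-DPM response at a window position."""
    h, w = image_shape
    # Stage 0: spread particle windows uniformly over the image.
    particles = np.column_stack([rng.uniform(0, h, n_particles),
                                 rng.uniform(0, w, n_particles)])
    for _ in range(n_stages):
        scores = np.array([score_fn(y, x) for y, x in particles])
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                 # estimated measurement density
        # Resample around high-response windows and perturb, so the next stage
        # concentrates its evaluations where the object is likely to appear.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx] + rng.normal(0.0, noise, particles.shape)
        particles[:, 0] = np.clip(particles[:, 0], 0, h - 1)
        particles[:, 1] = np.clip(particles[:, 1], 0, w - 1)
    final_scores = np.array([score_fn(y, x) for y, x in particles])
    return particles, final_scores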
APA, Harvard, Vancouver, ISO, and other styles
14

Cucliciu, Tanase. "Joint HOG-LBP Features for Medical Image Detection Based on Deformable Part Model." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/11428975907725375120.

Full text
Abstract:
Master's thesis
Asia University
Department of Bioinformatics and Medical Engineering
Academic year 105
Most medical image computing algorithms perform operations that fall under broad categories such as image segmentation, image registration, and visualization. What most of these algorithms have in common is that they target a specific body part, organ, or area in order to produce their output, meaning they rely on well-annotated image datasets to perform their task. In this study, we offer a solution by introducing a novel object detector that can confidently label body parts in CT/PET images. Because of the increasingly high number of traumatic brain injuries and diseases, we chose to model the cranium. After investigating which image modalities are best to work with, and which features can be extracted, are relevant, and work well in combination, we chose a joint shape and texture descriptor using HOG and LBP features, respectively. Inspired by deformable part models, we created our own model that accounts for the variations that appear within the medical context of the images. We then used a multiscale cascade detection algorithm together with an AdaBoost classifier to detect the cranium. Our algorithm proved able to detect the cranium with an accuracy of approximately 90%, offering a viable alternative to manually annotated images.
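A minimal sketch of the joint shape/texture descriptor and boosted classifier described above might look as follows; the patch size, LBP neighbourhood, and number of boosting rounds are illustrative assumptions rather than the values used in the thesis.

import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.ensemble import AdaBoostClassifier

LBP_P, LBP_R = 8, 1                          # assumed LBP neighbourhood

def hog_lbp_descriptor(patch):
    """patch: 2D grayscale array (e.g. a resampled 64x64 CT slice window)."""
    hog_feat = hog(patch, orientations=9,
                   pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    lbp = local_binary_pattern(patch, LBP_P, LBP_R, method='uniform')
    # Uniform LBP values range over 0..P+1, hence P+2 histogram bins.
    lbp_hist, _ = np.histogram(lbp, bins=LBP_P + 2,
                               range=(0, LBP_P + 2), density=True)
    return np.concatenate([hog_feat, lbp_hist])

def train_detector(positive_patches, negative_patches):
    """Train an AdaBoost classifier on fused HOG+LBP descriptors."""
    X = np.array([hog_lbp_descriptor(p) for p in positive_patches] +
                 [hog_lbp_descriptor(p) for p in negative_patches])
    y = np.array([1] * len(positive_patches) + [0] * len(negative_patches))
    clf = AdaBoostClassifier(n_estimators=100)
    return clf.fit(X, y)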
APA, Harvard, Vancouver, ISO, and other styles
15

Lin, Zhe-Yi, and 林哲逸. "Nighttime Pedestrian Detection with Far/Near Infrared Feature Level Fusion Based on a Deformable Part Model." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/62909305888178340382.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
Academic year 101
Pedestrian detection is a crucial part of driver assistance systems, but research in this field currently focuses mainly on daylight images, with little work on nighttime, when accidents occur more frequently. Moreover, public nighttime pedestrian databases are difficult for researchers in this field to obtain, which makes database setup very time-consuming. Far infrared and near infrared are two options for night vision that complement each other well and are therefore suitable for image fusion to improve the detection rate of a pedestrian detection system. Among object detectors, the deformable part model is currently the most successful and most thoroughly studied. Accordingly, this thesis builds a public nighttime pedestrian database from far and near infrared cameras, and builds a far/near infrared feature-level fusion nighttime pedestrian detection system based on the deformable part model. Experimental results show that our system's detection rate improves significantly over a single-sensor system and is better than other nighttime pedestrian detection systems in this field.
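The feature-level fusion described above can be sketched as follows (an assumed simplification: plain HOG and a single linear scoring function stand in for the DPM's root and part filters). HOG descriptors from registered far-infrared and near-infrared windows are concatenated so the classifier sees both the thermal silhouette and the reflective detail at once.

import numpy as np
from skimage.feature import hog

def fused_fir_nir_descriptor(fir_window, nir_window):
    """fir_window, nir_window: registered grayscale windows of the same size."""
    fir_feat = hog(fir_window, orientations=9,
                   pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    nir_feat = hog(nir_window, orientations=9,
                   pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([fir_feat, nir_feat])

def score_window(fir_window, nir_window, w, b=0.0):
    """Score with a hypothetical linear model trained on fused descriptors."""
    return float(w @ fused_fir_nir_descriptor(fir_window, nir_window) + b)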
APA, Harvard, Vancouver, ISO, and other styles