To see the other types of publications on this topic, follow the link: Uncontrolled environments.

Dissertations / Theses on the topic 'Uncontrolled environments'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 18 dissertations / theses for your research on the topic 'Uncontrolled environments.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Fu, Y. "Face recognition in uncontrolled environments." Thesis, University College London (University of London), 2015. http://discovery.ucl.ac.uk/1468901/.

Full text
Abstract:
This thesis concerns face recognition in uncontrolled environments in which the images used for training and test are collected from the real world instead of laboratories. Compared with controlled environments, images from uncontrolled environments contain more variation in pose, lighting, expression, occlusion, background, image quality, scale, and makeup. Therefore, face recognition in uncontrolled environments is much more challenging than in controlled conditions. Moreover, many real world applications require good recognition performance in uncontrolled environments. Example applications include social networking, human-computer interaction and electronic entertainment. Therefore, researchers and companies have shifted their interest from controlled environments to uncontrolled environments over the past seven years. In this thesis, we divide the history of face recognition into four stages and list the main problems and algorithms at each stage. We find that face recognition in unconstrained environments is still an unsolved problem although many face recognition algorithms have been proposed in the last decade. Existing approaches have two major limitations. First, many methods do not perform well when tested in uncontrolled databases even when all the faces are close to frontal. Second, most current algorithms cannot handle large pose variation, which has become a bottleneck for improving performance. In this thesis, we investigate Bayesian models for face recognition. Our contributions extend Probabilistic Linear Discriminant Analysis (PLDA) [Prince and Elder 2007]. In PLDA, images are described as a sum of signal and noise components. Each component is a weighted combination of basis functions. We firstly investigate the effect of degree of the localization of these basis functions and find better performance is obtained when the signal is treated more locally and the noise more globally. We call this new algorithm multi-scale PLDA and our experiments show it can handle lighting variation better than PLDA but fails for pose variation. We then analyze three existing Bayesian face recognition algorithms and combine the advantages of PLDA and the Joint Bayesian Face algorithm [Chen et al. 2012] to propose Joint PLDA. We find that our new algorithm improves performance compared to existing Bayesian face recognition algorithms. Finally, we propose Tied Joint Bayesian Face algorithm and Tied Joint PLDA to address large pose variations in the data, which drastically decreases performance in most existing face recognition algorithms. To provide sufficient training images with large pose difference, we introduce a new database called the UCL Multi-pose database. We demonstrate that our Bayesian models improve face recognition performance when the pose of the face images varies.
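As a rough illustration of the PLDA formulation summarised above, the generative model can be sketched as x = mu + Fh + Gw + eps, where F spans the identity (signal) subspace and G the within-identity (noise) subspace. The dimensions and random bases below are placeholders, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

D, d_id, d_noise = 100, 8, 16      # feature, identity and noise subspace dims (illustrative)
mu = rng.normal(size=D)            # global mean
F = rng.normal(size=(D, d_id))     # identity ("signal") basis functions
G = rng.normal(size=(D, d_noise))  # within-identity ("noise") basis functions
sigma = 0.1                        # residual noise level

def sample_face(h):
    """Generate one image feature vector for the identity with latent code h."""
    w = rng.normal(size=d_noise)      # per-image noise latent
    eps = sigma * rng.normal(size=D)  # residual noise
    return mu + F @ h + G @ w + eps

# Two images of the same identity share h; different identities draw different h.
h1, h2 = rng.normal(size=d_id), rng.normal(size=d_id)
same_pair = (sample_face(h1), sample_face(h1))
diff_pair = (sample_face(h1), sample_face(h2))
```

Verification in this family of models amounts to asking whether a pair of vectors is more likely under a shared identity latent than under independent ones, which is the likelihood-ratio view that the Joint Bayesian and Joint PLDA variants build on.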
APA, Harvard, Vancouver, ISO, and other styles
2

Yao, Yi. "Hand gesture recognition in uncontrolled environments." Thesis, University of Warwick, 2014. http://wrap.warwick.ac.uk/74268/.

Full text
Abstract:
For a long time, Human Computer Interaction has relied on mechanical devices to feed information into computers with low efficiency. With the recent developments in image processing and machine learning methods, the computer vision community is ready to develop the next generation of Human Computer Interaction methods, including Hand Gesture Recognition methods. A comprehensive Hand Gesture Recognition based semantic level Human Computer Interaction framework for uncontrolled environments is proposed in this thesis. The framework contains novel methods for Hand Posture Recognition, Hand Gesture Recognition and Hand Gesture Spotting. The Hand Posture Recognition method in the proposed framework is capable of recognising predefined still hand postures from cluttered backgrounds. Texture features are used in conjunction with Adaptive Boosting to form a novel feature selection scheme, which can effectively detect and select discriminative texture features from the training samples of the posture classes. A novel Hand Tracking method called Adaptive SURF Tracking is proposed in this thesis. Texture key points are used to track multiple hand candidates in the scene. This tracking method matches texture key points of hand candidates within adjacent frames to calculate the movement directions of hand candidates. With the gesture trajectories provided by the Adaptive SURF Tracking method, a novel classifier called Partition Matrix is introduced to perform gesture classification for uncontrolled environments with multiple hand candidates. The trajectories of all hand candidates extracted from the original video under different frame rates are used to analyse the movements of hand candidates. An alternative gesture classifier based on a Convolutional Neural Network is also proposed. The input images of the Neural Network are approximate trajectory images reconstructed from the tracking results of the Adaptive SURF Tracking method. For Hand Gesture Spotting, a forward spotting scheme is introduced to detect the starting and ending points of the predefined gestures in continuously signed gesture videos. A Non-Sign Model is also proposed to simulate meaningless hand movements between the meaningful gestures. The proposed framework can perform well with unconstrained scene settings, including frontal occlusions, background distractions and changing lighting conditions. Moreover, it is invariant to changing scales, speeds and locations of the gesture trajectories.
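To illustrate the kind of key-point matching between adjacent frames that the tracking step above relies on, the sketch below estimates a hand candidate's movement direction as the mean displacement of matched key points. ORB is used here as a freely available stand-in for SURF, and the frame file names are hypothetical; this is not the thesis's Adaptive SURF Tracking implementation:

```python
import cv2
import numpy as np

def motion_direction(prev_gray, curr_gray):
    """Mean displacement (dx, dy) of key points matched between two grayscale frames."""
    orb = cv2.ORB_create(500)                       # stand-in for SURF texture key points
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if not matches:
        return None
    disp = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in matches])
    return disp.mean(axis=0)

# Hypothetical usage with two consecutive frames containing a hand candidate:
# prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
# curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
# print(motion_direction(prev, curr))
```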
APA, Harvard, Vancouver, ISO, and other styles
3

Huerta, Casado Ivan. "Foreground Object Segmentation and Shadow Detection for Video Sequences in Uncontrolled Environments." Doctoral thesis, Universitat Autònoma de Barcelona, 2010. http://hdl.handle.net/10803/5797.

Full text
Abstract:
This thesis is mainly divided into two parts. The first presents a study of motion segmentation problems. Based on this study, a novel algorithm for mobile-object segmentation from a static background scene is also presented. This approach is shown to be robust and accurate under most of the common problems in motion segmentation. The second part tackles the problem of shadows in depth. Firstly, a bottom-up approach based on a chromatic shadow detector is presented to deal with umbra shadows. Secondly, a top-down approach based on a tracking system has been developed in order to enhance the chromatic shadow detection.
In our first contribution, a case analysis of motion segmentation problems is presented by taking into account the problems associated with different cues, namely colour, edge and intensity. Our second contribution is a hybrid architecture which handles the main problems observed in such a case analysis, by fusing (i) the knowledge from these three cues and (ii) a temporal difference algorithm. On the one hand, we enhance the colour and edge models to solve both global/local illumination changes (shadows and highlights) and camouflage in intensity. In addition, local information is exploited to cope with the very challenging problem of camouflage in chroma. On the other hand, the intensity cue is also applied when colour and edge cues are not available, such as when they fall outside the dynamic range. Additionally, temporal difference is included to segment motion when these three cues are not available, such as when the background is not visible during the training period. Lastly, the approach is enhanced to allow ghost detection. As a result, our approach obtains very accurate and robust motion segmentation in both indoor and outdoor scenarios, as quantitatively and qualitatively demonstrated in the experimental results, by comparing our approach with the best-known state-of-the-art approaches.
Motion Segmentation has to deal with shadows to avoid distortions when detecting moving objects. Most segmentation approaches dealing with shadow detection are typically restricted to penumbra shadows. Therefore, such techniques cannot cope well with umbra shadows. Consequently, umbra shadows are usually detected as part of moving objects.
Firstly, a bottom-up approach for detection and removal of chromatic moving shadows in surveillance scenarios is proposed. Secondly, a top-down approach based on Kalman filters to detect and track shadows has been developed in order to enhance the chromatic shadow detection. In the bottom-up part, the shadow detection approach applies a novel technique based on gradient and colour models for separating chromatic moving shadows from moving objects.
Well-known colour and gradient models are extended and improved into an invariant colour cone model and an invariant gradient model, respectively, to perform automatic segmentation while detecting potential shadows. Hereafter, the regions corresponding to potential shadows are grouped by considering "a bluish effect" and an edge partitioning. Lastly, (i) temporal similarities between local gradient structures and (ii) spatial similarities between chrominance angle and brightness distortions are analysed for all potential shadow regions in order to finally identify umbra shadows.
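A minimal sketch of the brightness/chromaticity-distortion idea mentioned above: a pixel is a shadow candidate if it is darker than the background but keeps roughly the same chromaticity. The simple per-pixel RGB background model and the thresholds are assumptions for illustration; the thesis uses richer invariant colour-cone and gradient models:

```python
import numpy as np

def shadow_candidates(frame, background, alpha_low=0.4, alpha_high=0.95, cd_max=10.0):
    """Mark pixels darker than the background that preserve its chromaticity."""
    f = frame.astype(np.float64) + 1e-6
    b = background.astype(np.float64) + 1e-6
    # Brightness distortion: scale factor aligning the pixel with the background colour.
    alpha = (f * b).sum(axis=2) / (b * b).sum(axis=2)
    # Chromaticity distortion: residual once the brightness change is removed.
    cd = np.linalg.norm(f - alpha[..., None] * b, axis=2)
    return (alpha > alpha_low) & (alpha < alpha_high) & (cd < cd_max)
```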
In the top-down process, after detection, objects and shadows are both tracked using Kalman filters in order to enhance the chromatic shadow detection when it fails to detect a shadow. Firstly, this implies a data association between the blobs (foreground and shadow) and the Kalman filters. Secondly, an event analysis of the different data-association cases is performed, and occlusion handling is managed by a Probabilistic Appearance Model (PAM). Based on this association, temporal consistency is sought between foregrounds, shadows, and their respective Kalman filters. From this association several cases are studied; as a result, lost chromatic shadows are correctly detected. Finally, the tracking results are used as feedback to improve the shadow and object detection.
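The tracking step can be pictured with a generic constant-velocity Kalman filter on a blob centroid, as sketched below; the data association between blobs and filters and the Probabilistic Appearance Model are not reproduced here:

```python
import numpy as np

class CentroidKalman:
    """Constant-velocity Kalman filter over a 2-D blob centroid, state (x, y, vx, vy)."""

    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * 0.01   # process noise
        self.R = np.eye(2) * 1.0    # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]           # predicted centroid, useful when the detector misses

    def update(self, measured_centroid):
        y = np.asarray(measured_centroid, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```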
Unlike other approaches, our method does not make any a-priori assumptions about camera location, surface geometries, surface textures, shapes and types of shadows, objects, and background. Experimental results show the performance and accuracy of our approach in different shadowed materials and illumination conditions.
APA, Harvard, Vancouver, ISO, and other styles
4

Moore, Kristin Suzanne. "Comparison of eye movement data to direct measures of situation awareness for development of a novel measurement technique in dynamic, uncontrolled test environments." Connect to this title online, 2009. http://etd.lib.clemson.edu/documents/1263402095/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fathi, Kazerouni Masoud [Verfasser], and Klaus-Dieter [Gutachter] Kuhnert. "Fully-automated plant recognition systems in challenging controlled and uncontrolled environments using classical and Deep Learning methods / Masoud Fathi Kazerouni ; Gutachter: Klaus-Dieter Kuhnert." Siegen : Universitätsbibliothek der Universität Siegen, 2019. http://d-nb.info/1208506811/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Svens, Lisa. "Mathematical Analysis of Intensity Based Segmentation Algorithms with Implementations on Finger Images in an Uncontrolled Environment." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-43650.

Full text
Abstract:
The main task of this thesis is to perform image segmentation on images of fingers, partitioning each image into two parts: one containing the fingers and one containing everything that is not fingers. First, we present the theory behind several widely used image segmentation methods, such as SNIC superpixels, the k-means algorithm, and the normalised cut algorithm. These have then been implemented and tested on images of fingers, and the results are shown. The implementations are unfortunately not stable and give segmentations of varying quality.
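As one concrete example of the intensity-based methods named above, a two-cluster k-means segmentation of pixel intensities can be sketched as follows (a toy illustration only; the SNIC superpixel and normalised-cut methods evaluated in the thesis are not shown):

```python
import numpy as np

def kmeans_binary_segmentation(gray, iters=20):
    """Split a grayscale image into two intensity clusters (e.g. finger vs. background)."""
    pixels = gray.reshape(-1).astype(np.float64)
    centers = np.array([pixels.min(), pixels.max()])   # initialise centres at the extremes
    for _ in range(iters):
        labels = np.abs(pixels[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean()
    return labels.reshape(gray.shape)                   # 0/1 label mask
```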
APA, Harvard, Vancouver, ISO, and other styles
7

Braga, Marilita Gnecco de Camargo. "The vehicle driver's perception of attributes of the road environment that influence safety at four-arm uncontrolled junctions." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/47784.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

BLACK, DAVID PAUL. "SYNERGIES IN WITHIN- AND BETWEEN-PERSON INTERLIMB RHYTHMIC COORDINATION: EFFECTS OF COORDINATION STABILITY AND ENVIRONMENTAL ANCHORING." University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1129553094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yu, Xiezhi. "Assessment and bioremediation of soils contaminated by uncontrolled recycling of electronic-waste at Guiyu, SE China." HKBU Institutional Repository, 2008. http://repository.hkbu.edu.hk/etd_ra/876.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Junior, Jozias Rolim de Araújo. "Reconhecimento multibiométrico baseado em imagens de face parcialmente ocluídas." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-24122018-011508/.

Full text
Abstract:
With the advancement of technology, traditional strategies for identifying people have become more susceptible to failure. In order to overcome these difficulties, some approaches have been proposed in the literature. Among these approaches, biometrics stands out. The field of biometrics covers a wide range of technologies used to identify or verify a person's identity by measuring and analyzing physical and/or behavioral aspects of the human being. As a result, biometrics has a wide field of applications in systems that require secure identification of their users. The most popular biometric systems are based on facial recognition or fingerprints. However, there are biometric systems that use the iris, retinal scans, voice, hand geometry, and facial thermograms. Currently, there has been significant progress in automatic face recognition under controlled conditions. In real world applications, facial recognition suffers from a number of problems in uncontrolled scenarios. These problems are mainly due to different facial variations that can greatly change the appearance of the face, including variations in expression, illumination, and pose, as well as partial occlusions. Compared with the large number of papers in the literature regarding problems of expression/illumination/pose variation, the occlusion problem is relatively neglected by the research community. Although little attention has been paid to the occlusion problem in the facial recognition literature, the importance of this problem should be emphasized, since the presence of occlusion is very common in uncontrolled scenarios and may be associated with several security issues. On the other hand, multibiometrics is a relatively new approach to biometric knowledge representation that aims to consolidate multiple sources of information to improve the performance of the biometric system. Multibiometrics is based on the concept that information obtained from different modalities, or from the same modality captured in different ways, is complementary. Accordingly, a suitable combination of such information may be more useful than information obtained from any of the individual modalities alone. In order to improve the performance of facial biometric systems in the presence of partial occlusion, the use of different partial occlusion reconstruction techniques was investigated in order to generate different face images, which were combined at the feature extraction level and used as input to a neural classifier. The results demonstrate that the proposed approach is capable of improving the performance of biometric systems based on partially occluded faces.
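A minimal sketch of feature-level fusion of the kind described above: feature vectors extracted from several occlusion-reconstructed versions of a face are concatenated and fed to a neural classifier. The extractors, dimensions, and the scikit-learn MLP below are placeholders, not the models used in the dissertation:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def fuse_features(feature_vectors):
    """Feature-level fusion: concatenate the vectors from each reconstructed face image."""
    return np.concatenate(feature_vectors)

# Hypothetical training data: each sample fuses 3 reconstructed images,
# each described by a 128-dimensional feature vector.
rng = np.random.default_rng(0)
X = np.stack([fuse_features([rng.normal(size=128) for _ in range(3)]) for _ in range(40)])
y = rng.integers(0, 2, size=40)   # toy subject labels

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
```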
APA, Harvard, Vancouver, ISO, and other styles
11

Picard, François. "Contextualisation & Capture de Gestuelles Utilisateur : Contributions à l'Adaptativité des Applications Interactives Scénarisées." Phd thesis, Université de La Rochelle, 2011. http://tel.archives-ouvertes.fr/tel-00691944.

Full text
Abstract:
Over the past 50 years, the continual evolution of human-computer interaction has led us toward real-time systems that offer the user simple, intuitive interaction and adapt automatically to the observed and interpreted activity. The user can now interact with a computer system, deliberately or unconsciously, through several modalities, as in everyday life. Currently, the most developed systems are those that let the user interact through gestures, whether explicit or implicit. These systems are increasingly embedded in our environment, most of the time invisible to us, and adapt according to a scenario that defines the objectives we must reach. Several application domains drive the development of such systems, for example video games and video surveillance. In this thesis we propose the architecture of an interactive system running a scripted, video-game-style application. Interactivity is driven by the body gestures of a single user, in real time and non-invasively. The system also reacts to various events taking place in the real scene that arise from its dynamics and for which the user is not directly responsible (lighting changes, the presence of spectators, etc.). The user is immersed in a virtual environment that conveys the system's interactive response. The system also responds, in real time and adaptively, to the observed and interpreted activity, adapting both its interactive response and its overall behaviour. Our work rests on the hypothesis that the activity observed in the scene is characterised by the interaction context that surrounds it. Our system therefore recognises activity by modelling the interaction context in which it takes place. Our main contribution is thus the introduction of the notion of context into the interactive and adaptive processes. The interaction-context model we propose served as the basis for modelling an application's scenario. Managing and analysing this context during interaction allows the system to interpret the observed activity and to parameterise the adaptive mechanisms that follow from it. Activity is captured and encoded by an extended system, adopted in line with the industrial framework of this thesis (CIFRE agreement with the company XD Productions). Initially dedicated to invasive capture of the user's movements in a controlled environment, our work extended the process toward a more general, non-invasive capture of overall activity in a dynamic environment. Finally, we developed an application immersing the user in a virtual tennis-training simulation. Through case studies drawn from this scenario, we implemented the interactive and adaptive processes taking place between the system and the user. The system's adaptivity, supported by the application scenario, is realised through specific mechanisms at every level of the architecture, driven by the observed and interpreted activity.
We highlight a set of software loops, called 'virtuous loops', generated by the accumulation of the system's adaptation effects and continuously improving interactivity. Future work concerns the standardisation and high-level management of our context model, the enrichment of our scenario, and the improvement of our capture system for new scripted applications.
APA, Harvard, Vancouver, ISO, and other styles
12

Grancho, Emanuel da Silva. "Ocular recognition in uncontrolled environments." Master's thesis, 2014. http://hdl.handle.net/10400.6/5647.

Full text
Abstract:
Biometrics is a rapidly expanding area and is considered a possible solution in cases where strong authentication is required. Although the area is quite advanced in theoretical terms, using it in practice still presents some problems. The systems available still depend on a high level of cooperation to achieve acceptable performance, which was the backdrop to the development of this project. Building on a study of the state of the art, we propose a new, less cooperative biometric system that reaches acceptable performance levels.
The constant need for higher levels of security, particularly at the authentication level, leads to the study of biometrics as a possible solution. The mechanisms currently used in this area are based on something one knows (a password) or something one possesses (a PIN code). However, this kind of information is easily compromised or circumvented. Biometrics is therefore seen as a more robust solution, since it guarantees that authentication is based on physical or behavioural measures that define something the person is or does ('who you are' or 'what you do'). As biometrics is a very promising approach to authenticating individuals, new biometric systems appear ever more frequently. These systems rely on physical or behavioural measures in order to enable authentication (recognition) with a considerable degree of certainty. Recognition based on the movement of the human body (gait), facial features, or the structural patterns of the iris are some examples of the information sources on which current systems can be based. However, despite performing well as autonomous recognition agents, they still depend heavily on the level of cooperation required. With this in mind, and given what already exists in the field of biometric recognition, the area is taking steps toward making its methods as little dependent on cooperation as possible, thereby extending its goals beyond mere authentication in controlled environments to surveillance and monitoring in non-cooperative settings (e.g. riots, robberies, airports). It is in this context that the present project arises. Through a study of the state of the art, it aims to show that it is possible to build a system capable of operating in less cooperative environments, detecting and recognising a person who comes within its range. The proposed system, PAIRS (Periocular and Iris Recognition System), as its name indicates, performs recognition using information extracted from the iris and the periocular region (the region surrounding the eyes). The system is built around four stages: data capture, preprocessing, feature extraction and recognition. For data capture, a high-resolution image acquisition device capable of capturing in the NIR (near-infrared) spectrum was assembled. Capturing images in this spectrum mainly favours recognition through the iris, since images captured in the visible spectrum would be more sensitive to variations in ambient light. The preprocessing stage then incorporates all the system modules responsible for user detection, image quality assessment and iris segmentation. The detection module triggers the whole process, since it verifies the presence of a person in the scene. Once a person has been detected, the regions of interest corresponding to the iris and the periocular area are located, and the quality of their acquisition is also checked. After these steps, the iris of the left eye is segmented and normalised. Then, using several descriptors, the biometric information of the detected regions of interest is extracted and a biometric feature vector is created.
Finally, the collected biometric data are compared with those already stored in the database, producing a list of biometric similarity levels and thus the system's final answer. Once the system had been implemented, a set of images was acquired with it, with the participation of a group of volunteers. This image set made it possible to run performance tests, verify and tune several parameters, and optimise the system's feature extraction and recognition components. Analysis of the results showed that the proposed system is capable of performing its functions under less cooperative conditions.
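A schematic of the final matching stage described above: the probe template is compared against every enrolled template and a ranked similarity list is returned. Cosine similarity and the dictionary-based gallery are assumptions for illustration, not the actual PAIRS implementation:

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_gallery(probe, gallery):
    """Compare a probe template with every enrolled template and rank by similarity."""
    scores = [(subject_id, cosine_similarity(probe, template))
              for subject_id, template in gallery.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Hypothetical enrolment database of fused periocular + iris templates:
rng = np.random.default_rng(1)
gallery = {f"subject_{i}": rng.normal(size=256) for i in range(5)}
probe = gallery["subject_3"] + 0.05 * rng.normal(size=256)
print(rank_gallery(probe, gallery)[0])   # best match and its score
```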
APA, Harvard, Vancouver, ISO, and other styles
13

Lin, Kuan-Yi, and 林冠儀. "A Robust Recognition System in Uncontrolled Environments." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/29416349745444604888.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Electrical Engineering
Academic year 98 (ROC calendar)
Gender recognition is a challenging task in real-time surveillance videos due to their relatively low resolution, uncontrolled environments, and the viewing angles of human subjects. In this work, a system for real-time gender recognition in surveillance videos is proposed. The contribution of this work is four-fold. In order to make the system robust, a decision-making mechanism based on the combination of surrounding face detection, context-region enhancement, and confidence-based weighting assignment is designed. Considering the spatiotemporal consistency of gender between consecutive faces in successive frames, a belief propagation model is employed to characterize this feature. Experimental results obtained using extensive datasets show that our system is effective and efficient in recognizing gender in real surveillance videos.
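To illustrate the idea of enforcing temporal consistency over a tracked face, the sketch below uses a simple confidence-weighted vote across frames in place of the belief propagation model described in the abstract (an illustrative simplification, not the proposed system):

```python
def fuse_track_decisions(frame_decisions):
    """Combine per-frame (label, confidence) decisions for one tracked face.

    frame_decisions: e.g. [("male", 0.83), ("female", 0.51), ("male", 0.67)]
    """
    votes = {}
    for label, confidence in frame_decisions:
        votes[label] = votes.get(label, 0.0) + confidence
    return max(votes, key=votes.get)

print(fuse_track_decisions([("male", 0.83), ("female", 0.51), ("male", 0.67)]))  # -> male
```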
APA, Harvard, Vancouver, ISO, and other styles
14

Waddington, Christopher. "Adaptive Fringe Pattern Projection Techniques for Image Saturation Avoidance in 3D Surface Measurement." Thesis, 2010. http://hdl.handle.net/10012/5616.

Full text
Abstract:
Fringe-pattern projection (FPP) techniques are commonly used for surface-shape measurement in a wide range of applications including object and scene modeling, part inspection, and reverse engineering. Periodic intensity fringe patterns with a specific amplitude are projected by the projector onto an object and a camera captures images of the fringe patterns, which appear distorted by the object surface from the perspective of the camera. The images are then used to compute the height or depth of the object at each pixel. One of the problems with FPP is that camera sensor saturation may occur if there is a large change in ambient lighting or a large range in surface reflectivity when measuring object surfaces. Camera sensor saturation occurs when the reflected intensity exceeds the maximum quantization level of the camera. A low SNR occurs when there is a low intensity modulation of the fringe pattern compared to the amount of noise in the image. Camera sensor saturation and low SNR can result in significant measurement error. Careful selection of the camera aperture or exposure time can reduce the error due to camera sensor saturation or low SNR. However, this is difficult to perform automatically, which may be necessary when measuring objects in uncontrolled environments where the lighting may change and objects have different surface reflectivity. This research presents three methods to avoid camera sensor saturation when measuring surfaces subject to changes in ambient lighting and objects with a large range in reflectivity. All these methods use the same novel approach of lowering the maximum input gray level (MIGL) to the projector for saturation avoidance. This approach avoids saturation by lowering the reflected intensity so that formerly saturated intensities can be captured by the camera. The first method of saturation avoidance seeks a trade-off between robustness to intensity saturation and low SNR. Measurements of a flat white plate at different MIGL resulted in a trade-off MIGL that yielded the highest accuracy for a single adjustment of MIGL that is uniform within and across the projected images. The second method used several sets of images, taken at constant steps of MIGL, and combined the images pixel-by-pixel into a single set of composite images, by selecting the highest unsaturated intensities at each pixel. White plate measurements using this method had comparable accuracy to the first method but required more images to form the composite image. Measurement of a checkerboard showed a higher accuracy than the first method since the second method maintains a higher SNR when the object has a large range of reflectivity. The last method also used composite images where the step size was determined dynamically, based on the estimated percentage of pixels that would become unsaturated at the next step. In measurements of a flat white plate and a checkerboard the dynamic step size was found to add flexibility to the measurement system compared to the constant steps using the second method. Using dynamic steps, the measurement system was able to measure objects with either a low or high range of reflectivity with high accuracy and without manually adjusting the step size. This permits fully automated measurement of unknown objects with variable reflectivity in unstructured environments with changing lighting conditions. The methods can be used for measurement in uncontrolled environments, for specular surfaces, and those with a large range of reflectivity or luminance. 
This would allow a wider range of measurement applications using FPP techniques.
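The composite-image step of the second and third methods can be sketched as a per-pixel selection of the highest unsaturated intensity across image sets captured at different MIGL values. The 8-bit saturation level and the assumption that the stack is ordered from highest to lowest MIGL are illustrative choices, not details from the thesis:

```python
import numpy as np

def composite_unsaturated(image_stack, saturation_level=255):
    """Per pixel, keep the highest intensity that stays below the saturation level.

    image_stack: array of shape (n_sets, H, W), ordered from highest to lowest MIGL.
    """
    stack = np.asarray(image_stack, dtype=np.float64)
    masked = np.where(stack < saturation_level, stack, -np.inf)  # ignore saturated pixels
    composite = masked.max(axis=0)
    # Pixels saturated in every set fall back to the lowest-MIGL image (assumed last).
    return np.where(np.isinf(composite), stack[-1], composite)
```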
APA, Harvard, Vancouver, ISO, and other styles
15

Chang, Chih-deng, and 張智登. "Inclined License Plate Recognition Systems in Uncontrolled Environment." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/42011976339601779593.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
Academic year 96 (ROC calendar)
In this thesis, a method for the localization and recognition of inclined license plates is proposed. Our algorithm requires only simple mathematical operations, so its computational burden is low. The proposed method consists of two parts: localization and preprocessing. Localization finds the probable position of the vehicle license plate, and preprocessing of the plate image increases character-partition accuracy. 100 images captured in an uncontrolled environment are used to test the effectiveness of our method. These images have different illumination conditions, resolutions, distances, shooting angles, inclination angles of the license plates, and license plate background colors. In addition, the efficiency of each step in our algorithm is analyzed, which identifies the critical steps.
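As a rough illustration of a plate-localisation stage, the sketch below finds regions with dense vertical edges, a generic heuristic rather than the specific operations proposed in the thesis; the kernel size and area threshold are assumptions:

```python
import cv2

def candidate_plate_regions(gray, min_area=1500):
    """Return bounding boxes of regions with dense vertical edges (typical of plate text)."""
    edges = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)            # vertical edge response
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))  # merge characters sideways
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
```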
APA, Harvard, Vancouver, ISO, and other styles
16

Kirsten, Zubaydah. "Detection of airborne Mycobacterium tuberculosis in an uncontrolled environment." Thesis, 2010. http://hdl.handle.net/10210/3130.

Full text
Abstract:
M. Tech.
Internationally, 9.2 million new cases and 1.7 million deaths occurred from tuberculosis (TB) in 2006. The vast majority of TB deaths occurred in the developing world, that is, Asia and Africa (WHO, 2008). The risk of infection is worsened by overcrowding of healthcare facilities which share re-circulated air without high efficiency particulate arrestance (HEPA) filtration or effective decontamination devices. With emerging and re-emerging infectious agents, the importance of the microbial influence on indoor air quality is gaining momentum around the world. The transmission of Mycobacterium tuberculosis (MTB) is a recognized occupational hazard and the mode of airborne transmission in risk settings needs to be investigated. The current study examined the efficacy of the Polymerase Chain Reaction (PCR) for the early detection of airborne MTB using three types of filters: Polytetrafluoroethylene (PTFE), Polycarbonate (PC) and Gelatine, and a sedimentation gel (impaction method). A total of 520 samples, 68 internal positive controls and 68 internal negative controls were tested using two different PCR detection methods. The four different sampling types were each exposed to samples containing an avirulent strain of MTB (H37Ra), and negative controls were exposed to aerosolized distilled water in an uncontrolled environment. The air was filtered at a flow rate of 2.5 L/min for a specified time. The filter membranes and sedimentation gel were removed from their respective holders, washed and analysed using conventional and real time (RT) PCR. An additional step using magnetic bead separation was used to assess its performance in overcoming inhibition. The sampling methods used included an in-house preparation of sedimentation gel. This sampling method was used in a study by Vadrot and colleagues in 2004 for detection of airborne MTB using the PCR method. Commercially available filters that were used as sampling methods included PTFE, PC and Gelatine. The detection methods included conventional PCR, which detected the MTB complex in three hours; RT-PCR, which detected MTB species in 70 minutes; and RT-PCR coupled to magnetic bead separation (RT-PCR M), which detected MTB species in 90 minutes. The magnetic bead separation method purified the nucleic acids in the sample by eliminating the inhibitors that were present. The current study showed that using the magnetic bead capture assay in conjunction with RT-PCR gave excellent results of 100% sensitivity and 100% specificity with the PTFE, PC and Gelatine sampling methods. The sedimentation gel showed results of 90% sensitivity and 90% specificity. The Gelatine sampling method results showed 100% inhibition with conventional PCR. In conclusion, the use of conventional PCR is limited for the detection of airborne MTB, possibly due to inhibition factors. In addition, the PTFE filter demonstrated excellent results for all detection methods used. The sedimentation gel did not perform well with PCR and RT-PCR; however, it gave excellent results with RT-PCR M. The PC filter can be considered the second sampling method of choice after the PTFE filter, showing superior results for all diagnostic parameters using RT-PCR M, followed by RT-PCR and then PCR. The greatest application of using this validated method will be in the area of infection control. Environmental practitioners and occupational hygienists would be able to use this method to evaluate environmental control measures and monitor the air quality in healthcare facilities and other workplaces.
APA, Harvard, Vancouver, ISO, and other styles
17

Huang, Bo-En, and 黃柏恩. "Shape-based Hand Recognition using Moment Invariants in Uncontrolled Environment." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/44120345644423334210.

Full text
Abstract:
Master's thesis
National Chi Nan University
Department of Electrical Engineering
Academic year 98 (ROC calendar)
Biometrics identifies individuals by their physical or behavioral characteristics in order to verify a user's identity. A trait that satisfies universality, uniqueness, permanence, and collectability can serve as a biometric feature, and such features can then be used in verification and identification modes. This thesis discusses a hand shape recognition system based on hand features. In most previous work on hand shape identification, the input images had a simple background or a dedicated platform was provided for capturing them. In this thesis, image acquisition is performed in an uncontrolled environment. We then extract image features from the acquired images to facilitate classification. We choose a feature extraction method called Eigenmoment and compare its recognition rate with those of Zernike moments and Hu moments. In the final part of the system, we choose a support vector machine (SVM) as the classifier. Experiments are conducted on three different databases. With simple backgrounds, the Eigenmoment recognition rate reaches 97%, and it still reaches 85% with complex backgrounds, outperforming the other two moments.
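A compact sketch of shape classification with moment invariants: Hu moments are computed from a binary hand silhouette and fed to an SVM. Hu moments are shown only because they are readily available in OpenCV; the Eigenmoment method studied in the thesis is not reproduced, and the random training data are placeholders:

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def hu_features(binary_mask):
    """Seven Hu moment invariants of a binary silhouette, log-scaled for stability."""
    m = cv2.moments(binary_mask.astype(np.uint8), binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Hypothetical training set: binary masks and their hand-shape labels.
rng = np.random.default_rng(0)
masks = [(rng.random((64, 64)) > 0.5).astype(np.uint8) for _ in range(20)]
labels = rng.integers(0, 3, size=20)
X = np.stack([hu_features(mask) for mask in masks])
clf = SVC(kernel="rbf").fit(X, labels)
```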
APA, Harvard, Vancouver, ISO, and other styles
18

Stacy, Emily Margaret. "Human and algorithm facial recognition performance : face in a crowd." Thesis, 2017. http://hdl.handle.net/10453/116916.

Full text
Abstract:
University of Technology Sydney. Faculty of Science.
Developing a method of identifying persons of interest (POIs) in uncontrolled environments, accurately and rapidly, is paramount in the 21st century. One technique for doing this is to use automated facial recognition systems (FRS). To date, FRS have mainly been tested in controlled laboratory conditions; however, there is little publicly available research to indicate the performance levels, and therefore the feasibility, of using FRS in public, uncontrolled environments, known as face-in-a-crowd (FIAC). This research project was hence directed at determining the feasibility of FIAC technology in uncontrolled, operational environments with the aim of being able to identify POIs. This was done by processing imagery obtained from a range of environments and camera technologies through one of the latest FR algorithms to evaluate the current level of FIAC performance. The hypothesis was that higher resolution imagery would produce better FR results and that FIAC would be feasible in an operational environment when certain variables are controlled, such as camera type (resolution), lighting, and the number of people in the field of view. Key findings from this research revealed that although facial recognition algorithms for FIAC applications have shown improvement over the past decade, the feasibility of their deployment into uncontrolled environments remains unclear. The results support previous literature indicating that the quality of the imagery being processed largely affects FRS performance, as imagery produced from high resolution cameras gave better performance results than imagery produced from CCTV cameras. The results suggest the current FR technology can potentially be viable in a FIAC scenario if the operational environment can be modified to become better suited for optimal image acquisition. However, in areas where the environmental constraints were less controlled, the performance levels decreased significantly. The essential conclusion is that the data should be processed with new versions of the algorithms that can track subjects through the environment, which is expected to vastly increase performance, and that an additional trial should potentially be run in alternative locations to gain a greater understanding of the feasibility of FIAC generically.
APA, Harvard, Vancouver, ISO, and other styles