Dissertations / Theses on the topic 'Visual object recognition'
Consult the top 50 dissertations / theses for your research on the topic 'Visual object recognition.'
Figueroa, Flores Carola. "Visual Saliency for Object Recognition, and Object Recognition for Visual Saliency." Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/671964.
For humans, the recognition of objects is an almost instantaneous, precise and extremely adaptable process. Furthermore, we have the innate capability to learn new object classes from only a few examples. The human brain lowers the complexity of the incoming data by filtering out part of the information and only processing those things that capture our attention. This, combined with our biological predisposition to respond to certain shapes or colors, allows us to recognize at a single glance the most important or salient regions of an image. This mechanism can be observed by analyzing which parts of an image subjects attend to; for example, where they fixate their eyes when an image is shown to them. The most accurate way to record this behavior is to track eye movements while displaying images. Computational saliency estimation aims to identify to what extent regions or objects stand out with respect to their surroundings to human observers. Saliency maps can be used in a wide range of applications including object detection, image and video compression, and visual tracking. The majority of research in the field has focused on automatically estimating saliency maps given an input image. Instead, in this thesis, we set out to incorporate saliency maps in an object recognition pipeline: we want to investigate whether saliency maps can improve object recognition results. In this thesis, we identify several problems related to visual saliency estimation. First, we ask to what extent saliency estimation can be exploited to improve the training of an object recognition model when only scarce training data is available. To solve this problem, we design an image classification network that incorporates saliency information as input. This network processes the saliency map through a dedicated network branch and uses the resulting features to modulate the standard bottom-up visual features of the original image input.
We refer to this technique as saliency-modulated image classification (SMIC). In extensive experiments on standard benchmark datasets for fine-grained object recognition, we show that our proposed architecture can significantly improve performance, especially on datasets with scarce training data. Next, we address the main drawback of the above pipeline: SMIC requires an explicit saliency algorithm that must be trained on a saliency dataset. To solve this, we implement a hallucination mechanism that allows us to incorporate the saliency estimation branch in an end-to-end trained neural network architecture that only needs the RGB image as input. A side effect of this architecture is the estimation of saliency maps. In experiments, we show that this architecture can obtain results on object recognition similar to SMIC, but without requiring ground-truth saliency maps to train the system. Finally, we evaluate the accuracy of the saliency maps that arise as a side effect of object recognition. For this purpose, we use a set of benchmark datasets for saliency evaluation based on eye-tracking experiments. Surprisingly, the estimated saliency maps are very similar to the maps computed from human eye-tracking experiments. Our results show that these saliency maps can obtain competitive results on saliency benchmarks. On one synthetic saliency dataset, this method even obtains state-of-the-art results without ever having seen an actual saliency image during training.
Universitat Autònoma de Barcelona. Programa de Doctorat en Informàtica
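The saliency-modulated classification described in this abstract lends itself to a compact illustration: a saliency branch turns the saliency map into a per-channel gate that multiplicatively modulates the bottom-up image features. Below is a minimal numpy sketch of that modulation step only; the shapes, the random stand-ins for learned parameters, and the sigmoid gate are illustrative assumptions, not the thesis's actual SMIC architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def saliency_modulated_features(image_feats, saliency_map, scale, bias):
    """Gate bottom-up image features with a saliency-derived modulation.

    image_feats: (C, H, W) features from the image branch.
    saliency_map: (H, W) saliency values, roughly in [0, 1].
    scale, bias: (C, 1, 1) per-channel parameters of the saliency branch
                 (random stand-ins here for learned weights).
    """
    # Sigmoid gate in (0, 1): saliency can emphasize or suppress each
    # channel at each location, depending on the per-channel parameters.
    gate = 1.0 / (1.0 + np.exp(-(scale * saliency_map + bias)))
    return image_feats * gate  # multiplicative modulation

C, H, W = 8, 16, 16
feats = rng.normal(size=(C, H, W))
sal = rng.uniform(size=(H, W))
scale = rng.normal(size=(C, 1, 1))
bias = rng.normal(size=(C, 1, 1))
out = saliency_modulated_features(feats, sal, scale, bias)
```

Because the gate lies strictly in (0, 1), the modulated features never exceed the originals in magnitude; in the real network the gate's parameters would be learned jointly with the classifier.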
Fergus, Robert. "Visual object category recognition." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425029.
Wallenberg, Marcus. "Embodied Visual Object Recognition." Doctoral thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-132762.
Breuel, Thomas M. "Geometric Aspects of Visual Object Recognition." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/7342.
Meger, David Paul. "Visual object recognition for mobile platforms." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44682.
Mahmood, Hamid. "Visual Attention-based Object Detection and Recognition." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94024.
Villalba, Michael Joseph. "Fast visual recognition of large object sets." Thesis, Massachusetts Institute of Technology, 1990. http://hdl.handle.net/1721.1/42211.
Lindqvist, Zebh. "Design Principles for Visual Object Recognition Systems." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-80769.
Teynor, Alexandra. "Visual object class recognition using local descriptions." [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:25-opus-62371.
Pemula, Latha. "Low-shot Visual Recognition." Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/73321.
Master of Science
Naha, Shujon. "Zero-shot Learning for Visual Recognition Problems." IEEE, 2015. http://hdl.handle.net/1993/31806.
October 2016
Yang, Fan. "Visual Infrastructure based Accurate Object Recognition and Localization." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492752246062673.
Full textPiñol, Naranjo Mónica. "Reinforcement learning of visual descriptors for object recognition." Doctoral thesis, Universitat Autònoma de Barcelona, 2014. http://hdl.handle.net/10803/283927.
The human visual system is able to recognize the object in an image even if the object is partially occluded, seen from various points of view, in different colors, or independently of the distance to the object. To do this, the eye obtains an image and extracts features that are sent to the brain, where the object is recognized. In computer vision, the object recognition field tries to learn from the behaviour of the human visual system to achieve its goal. Hence, an algorithm is used to identify representative features of the scene (detection), another algorithm is used to describe these points (descriptor), and finally the extracted information is used to classify the object in the scene. Selecting this set of algorithms is a very complicated task and thus a very active research field. In this thesis we focus on selecting/learning the best descriptor for a given image. The state of the art offers several descriptors, but we do not know how to choose the best one, because the choice depends on the scenes we will use (the dataset) and on the algorithm chosen for classification. We propose a framework based on reinforcement learning and bag-of-features to choose the best descriptor for a given image. The system can analyse the behaviour of different learning algorithms and descriptor sets. Furthermore, the proposed framework for improving the classification/recognition ratio can be used, with minor changes, in other computer vision fields, such as video retrieval.
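The descriptor-selection problem in this abstract is, at its core, a sequential decision problem: try a descriptor, observe how well classification goes, and update a preference. As a loose illustration (not the thesis's actual reinforcement-learning framework), here is a self-contained epsilon-greedy sketch, with made-up descriptor names and a toy stochastic reward standing in for per-image classification accuracy.

```python
import random

def epsilon_greedy_descriptor_selection(descriptors, reward_fn,
                                        episodes=2000, eps=0.1, seed=0):
    """Learn which descriptor tends to yield the best recognition reward.

    descriptors: list of arm names (e.g. hypothetical "sift", "surf", ...).
    reward_fn(name, rng) -> stochastic reward in [0, 1].
    Returns the estimated value of each descriptor.
    """
    rng = random.Random(seed)
    q = {d: 0.0 for d in descriptors}  # value estimates
    n = {d: 0 for d in descriptors}    # pull counts
    for _ in range(episodes):
        # Explore with probability eps, otherwise exploit the current best.
        arm = rng.choice(descriptors) if rng.random() < eps else max(q, key=q.get)
        r = reward_fn(arm, rng)
        n[arm] += 1
        q[arm] += (r - q[arm]) / n[arm]  # incremental mean update
    return q

# Toy reward: pretend "sift" classifies correctly 80% of the time, others less.
true_acc = {"sift": 0.8, "surf": 0.6, "brief": 0.5}
reward = lambda arm, rng: 1.0 if rng.random() < true_acc[arm] else 0.0
q = epsilon_greedy_descriptor_selection(list(true_acc), reward)
best = max(q, key=q.get)
```

With enough episodes the value estimates approach the underlying accuracies, so the selection converges on the strongest descriptor for the data at hand, which is the intuition behind learning the choice rather than fixing it by hand.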
Wilson, Susan E. "Perceptual organization and symmetry in visual object recognition." Thesis, University of British Columbia, 1991. http://hdl.handle.net/2429/29802.
Science, Faculty of
Computer Science, Department of
Graduate
Wallenberg, Marcus, and Per-Erik Forssén. "A Research Platform for Embodied Visual Object Recognition." Linköpings universitet, Datorseende, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70769.
Lovell, Kylie Sarah. "Implicit and explicit processes in visual object recognition." Thesis, University of Reading, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.430835.
Full textSudderth, Erik B. (Erik Blaine) 1977. "Graphical models for visual object recognition and tracking." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34023.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 277-301).
We develop statistical methods which allow effective visual detection, categorization, and tracking of objects in complex scenes. Such computer vision systems must be robust to wide variations in object appearance, the often small size of training databases, and ambiguities induced by articulated or partially occluded objects. Graphical models provide a powerful framework for encoding the statistical structure of visual scenes, and developing corresponding learning and inference algorithms. In this thesis, we describe several models which integrate graphical representations with nonparametric statistical methods. This approach leads to inference algorithms which tractably recover high-dimensional, continuous object pose variations, and learning procedures which transfer knowledge among related recognition tasks. Motivated by visual tracking problems, we first develop a nonparametric extension of the belief propagation (BP) algorithm. Using Monte Carlo methods, we provide general procedures for recursively updating particle-based approximations of continuous sufficient statistics. Efficient multiscale sampling methods then allow this nonparametric BP algorithm to be flexibly adapted to many different applications.
As a particular example, we consider a graphical model describing the hand's three-dimensional (3D) structure, kinematics, and dynamics. This graph encodes global hand pose via the 3D position and orientation of several rigid components, and thus exposes local structure in a high-dimensional articulated model. Applying nonparametric BP, we recover a hand tracking algorithm which is robust to outliers and local visual ambiguities. Via a set of latent occupancy masks, we also extend our approach to consistently infer occlusion events in a distributed fashion. In the second half of this thesis, we develop methods for learning hierarchical models of objects, the parts composing them, and the scenes surrounding them. Our approach couples topic models originally developed for text analysis with spatial transformations, and thus consistently accounts for geometric constraints. By building integrated scene models, we may discover contextual relationships, and better exploit partially labeled training images. We first consider images of isolated objects, and show that sharing parts among object categories improves accuracy when learning from few examples.
Turning to multiple object scenes, we propose nonparametric models which use Dirichlet processes to automatically learn the number of parts underlying each object category, and objects composing each scene. Adapting these transformed Dirichlet processes to images taken with a binocular stereo camera, we learn integrated, 3D models of object geometry and appearance. This leads to a Monte Carlo algorithm which automatically infers 3D scene structure from the predictable geometry of known object categories.
by Erik B. Sudderth.
Ph.D.
Craddock, Matthew Peter. "Comparing the attainment of object constancy in haptic and visual object recognition." Thesis, University of Liverpool, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.539615.
Osman, Erol. "Relational Strategies for the Study of Visual Object Recognition." Diss., lmu, 2008. http://nbn-resolving.de/urn:nbn:de:bvb:19-90393.
Full textShotton, Jamie Daniel Joseph. "Contour and texture for visual recognition of object categories." Thesis, University of Cambridge, 2007. https://www.repository.cam.ac.uk/handle/1810/252047.
Wojnowski, Christine. "Reasoning with visual knowledge in an object recognition system /." Online version of thesis, 1990. http://hdl.handle.net/1850/10596.
Carreira, Joao [Verfasser]. "Bottom-up Object Segmentation for Visual Recognition / Joao Carreira." Bonn : Universitäts- und Landesbibliothek Bonn, 2013. http://d-nb.info/1044868961/34.
Corradi, Tadeo. "Integrating visual and tactile robotic perception." Thesis, University of Bath, 2018. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.761005.
Full textWallenberg, Marcus. "Components of Embodied Visual Object Recognition : Object Perception and Learning on a Robotic Platform." Licentiate thesis, Linköpings universitet, Datorseende, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93812.
Gathers, Ann D. "DEVELOPMENTAL FMRI STUDY: FACE AND OBJECT RECOGNITION." Lexington, Ky. : [University of Kentucky Libraries], 2005. http://lib.uky.edu/ETD/ukyanne2005d00276/etd.pdf.
Title from document title page (viewed on November 4, 2005). Document formatted into pages; contains xi, 152 p. : ill. Includes abstract and vita. Includes bibliographical references (p. 134-148).
Loos, Hartmut S. [Verfasser]. "User-Assisted Learning of Visual Object Recognition / Hartmut S Loos." Aachen : Shaker, 2003. http://d-nb.info/117451339X/34.
Lakshmi, Ratan Aparna. "The role of fixation and visual attention in object recognition." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/38734.
Includes bibliographical references (p. 94-97).
by Aparna Lakshmi Ratan.
M.S.
Zoccoli, Sandra L. "Object features and object recognition Semantic memory abilities during the normal aging process /." Ann Arbor, Mich. : ProQuest, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3288933.
Title from PDF title page (viewed Nov. 19, 2009). Source: Dissertation Abstracts International, Volume: 68-11, Section: B, page: 7695. Adviser: Alan S. Brown. Includes bibliographical references.
Wu, Jia Jane. "Comparing Visual Features for Morphing Based Recognition." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/30547.
Leeds, Daniel Demeny. "Searching for the Visual Components of Object Perception." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/313.
Smith, Wendy. "The contribution of meaning in forming holistic and segmented based visual representations." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340325.
Wang, Josiah Kwok-Siang. "Learning visual recognition of fine-grained object categories from textual descriptions." Thesis, University of Leeds, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.597096.
Salama, F. A. O. "The role of depth cues on visual object recognition and naming." Thesis, Swansea University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.638745.
Alter, Tao Daniel. "The role of saliency and error propagation in visual object recognition." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/38055.
Ramanan, Amirthalingam. "Designing a resource-allocating codebook for patch-based visual object recognition." Thesis, University of Southampton, 2010. https://eprints.soton.ac.uk/159175/.
Viau, Claude. "Multispectral Image Analysis for Object Recognition and Classification." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34532.
Collin, Charles Alain. "Effects of spatial frequency overlap on face and object recognition." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=36896.
A second question that is examined concerns the effect of calibration of stimuli on recognition of spatially filtered images. Past studies using non-calibrated presentation methods have inadvertently introduced aberrant frequency content to their stimuli. The effect this has on recognition performance has not been examined, leading to doubts about the comparability of older and newer studies. Examining the impact of calibration on recognition is an ancillary goal of this dissertation.
Seven experiments examining the above questions are reported here. Results suggest that spatial frequency overlap had a strong effect on face recognition and a lesser effect on object recognition. Indeed, contrary to much previous research it was found that the band of frequencies occupied by a face image had little effect on recognition, but that small variations in overlap had significant effects. This suggests that the overlap factor is important in understanding various phenomena in visual recognition. Overlap effects likely contribute to the apparent superiority of certain spatial bands for different recognition tasks, and to the inferiority of line drawings in face recognition. Results concerning the mnemonic representation of faces and objects suggest that these are both encoded in a format that retains spatial frequency information, and do not support certain proposed fundamental differences in how these two stimulus classes are stored. Data on calibration generally shows non-calibration having little impact on visual recognition, suggesting moderate confidence in results of older studies.
Rouhafzay, Ghazal. "3D Object Representation and Recognition Based on Biologically Inspired Combined Use of Visual and Tactile Data." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42122.
Choi, Changhyun. "Visual object perception in unstructured environments." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53003.
Caywood, Matthew Shields. "Approaches to the function of object recognition areas of the visual cortex." Diss., Search in ProQuest Dissertations & Theses. UC Only, 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3378530.
Misra, Navendu. "Comparison of motor-based versus visual sensory representations in object recognition tasks." Texas A&M University, 2005. http://hdl.handle.net/1969.1/2544.
Saifullah, Mohammad. "Biologically-Based Interactive Neural Network Models for Visual Attention and Object Recognition." Doctoral thesis, Linköpings universitet, Institutionen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-79336.
Gosling, Angela. "An electrophysiological investigation of the role of attention in visual object recognition." Thesis, Goldsmiths College (University of London), 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.523119.
Rajalingham, Rishi. "How does the primate ventral visual stream causally support core object recognition?" Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120625.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 161-173).
Primates are able to rapidly, accurately and effortlessly perform the computationally difficult visual task of invariant object recognition - the ability to discriminate between different objects in the face of high variation in object viewing parameters and background conditions. This ability is thought to rely on the ventral visual stream, a hierarchy of visual cortical areas culminating in inferior temporal (IT) cortex. In particular, decades of research strongly suggests that the population of neurons in IT supports invariant object recognition behavior. However, direct causal evidence for this decoding hypothesis has been equivocal to date, especially beyond the specific case of face-selective sub-regions of IT. This research aims to directly test the general causal role of IT in invariant object recognition. To do so, we first characterized human and macaque monkey behavior over a large behavioral domain consisting of binary discriminations between images of basic-level objects, establishing behavioral metrics and benchmarks for computational models of this behavior. This work suggests that, in the domain of basic-level core object recognition, humans and monkeys are remarkably similar in their behavioral responses, while leading models of the visual system significantly diverge from primate behavior. We then reversibly inactivated individual, millimeter-scale regions of IT via injection of muscimol while monkeys performed several interleaved binary object discrimination tasks. We found that inactivating different millimeter-scale regions of primate IT resulted in different patterns of object recognition deficits, each predicted by the local region's neuronal selectivity. Our results provide causal evidence that IT directly underlies primate object recognition behavior in a topographically organized manner. 
Taken together, these results establish quantitative experimental constraints for computational models of the ventral visual stream and object recognition behavior.
by Rishi Rajalingham.
Ph. D.
Durán, Gabriela. "Effects of concurrent task performance on object processing." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2009. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.
Saifullah, Mohammad. "Exploring Biologically-Inspired Interactive Networks for Object Recognition." Licentiate thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-64692.
Wang, Qian. "Zero-shot visual recognition via latent embedding learning." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/zeroshot-visual-recognition-via-latent-embedding-learning(bec510af-6a53-4114-9407-75212e1a08e1).html.
Recktenwald, Eric William. "VISUAL RECOGNITION OF THE STATIONARY ENVIRONMENT IN LEOPARD FROGS." Diss., Temple University Libraries, 2014. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/292229.
Ph.D.
Leopard frogs (Rana pipiens) rely on vision to recognize behaviorally meaningful aspects of their environment. The optic tectum has been shown to mediate the frog's ability to recognize and respond to moving prey and looming objects. Nonetheless, atectal frogs are still able to respond appropriately to non-moving aspects of their environment. There appear to be independent visual systems operating in the frog: one for recognizing moving objects, and another for recognizing stationary objects. Little is known about the neural mechanisms mediating the recognition of stationary objects in frogs. Our laboratory showed that a retino-recipient area in the anterior lateral thalamus--the NB/CG zone--is involved in processing visual information concerning stationary aspects of the environment. This thesis aims to characterize the frog's responses to a range of stationary stimuli, and to elucidate the thalamic visual system that mediates those responses. I tested leopard frogs' responses to different stationary stimuli and found that they respond in stereotypical ways. I discovered that leopard frogs are attracted to dark, stationary, opaque objects, and tested the extent of this attraction under different conditions. I found that frogs' preference to move toward a dark area versus a light source depends on the intensity of the light source relative to the intensity of ambient light. Unilateral lesions applied to the NB/CG zone of the anterior lateral thalamus resulted in temporary deficits in frogs' responses to stationary stimuli presented in the contralateral visual field. Deficits were observed in responses to dark objects, entrances to dark areas, light sources, and gaps between stationary barriers. However, responses to moving prey and looming stimuli were unaffected. Interestingly, these deficits tended to recover after about 6 days in most cases, with recovery times ranging from 2 to 28 days.
The NB/CG zone is anatomically and functionally connected to a structure in the posterior thalamus called the "PMDT." The PMDT has no other connections in the brain. Thus, I have discovered a "satellite" of the NB/CG zone. Preliminary evidence suggests that the PMDT is another component of the visual system mediating stationary object recognition in the frog.
Temple University--Theses
Boisard, Olivier. "Optimization and implementation of bio-inspired feature extraction frameworks for visual object recognition." Thesis, Dijon, 2016. http://www.theses.fr/2016DIJOS016/document.
Industry has growing needs for so-called "intelligent systems", capable of not only acquiring data, but also of analysing it and making decisions accordingly. Such systems are particularly useful for video surveillance, where alarms must be raised in case of an intrusion. For cost-saving and power-consumption reasons, it is better to perform that processing as close to the sensor as possible. A promising approach to this issue is to use bio-inspired frameworks, which consist in applying computational biology models to industrial applications. The work carried out during this thesis consisted in selecting bio-inspired feature extraction frameworks and optimizing them with the aim of implementing them on a dedicated hardware platform, for computer vision applications. First, we propose a generic algorithm, usable in several scenarios, with acceptable complexity and a low memory footprint. Then, we propose optimizations for a more global framework, based on degrading the precision of computations, hence easing its implementation on embedded systems. Results suggest that while the framework we developed may not be as accurate as the state of the art, it is more generic. Furthermore, the optimizations we propose for the more complex framework are fully compatible with other optimizations from the literature, and provide encouraging perspectives for future developments. Finally, both contributions have a scope that goes beyond the frameworks we studied, and may be used in other, more widely used frameworks as well.
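The precision-degradation optimization mentioned in this abstract can be illustrated with a toy uniform quantizer: fewer bits per value means cheaper arithmetic and storage on embedded hardware, at the cost of a bounded approximation error. This numpy sketch is a generic illustration under an assumed uniform quantization scheme, not the thesis's actual method.

```python
import numpy as np

def quantize(x, bits):
    """Uniformly quantize x onto 2**bits levels spanning its own range.

    The worst-case rounding error is half a quantization step,
    i.e. (x.max() - x.min()) / (2 * (2**bits - 1)).
    """
    levels = 2 ** bits - 1
    lo, hi = float(x.min()), float(x.max())
    step = (hi - lo) / levels
    # Snap each value to the nearest grid point.
    return lo + np.round((x - lo) / step) * step

rng = np.random.default_rng(1)
feats = rng.normal(size=1000)  # stand-in for extracted feature values
err8 = np.abs(quantize(feats, 8) - feats).max()
err4 = np.abs(quantize(feats, 4) - feats).max()
```

Going from 8 to 4 bits coarsens the grid by a factor of (2**8 - 1)/(2**4 - 1) = 17, so the error bound grows accordingly; choosing the bit width is exactly the accuracy-versus-cost trade that such hardware-oriented optimizations navigate.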
Farivar-Mohseni, Reza. "Object recognition by integration of information across the dorsal and ventral visual pathways." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=21982.
The brain decomposes visual information into its form and motion components and processes them independently through two anatomically distinct pathways: information about motion and spatial relations through the dorsal pathway, which terminates in the parietal lobe, and information about form through the ventral pathway, which terminates in inferotemporal cortex. Some depth information, such as 3-D structure-from-motion (SFM), is computed almost entirely by the dorsal pathway; yet objects defined by SFM are nonetheless recognized by the ventral pathway. This thesis begins with a theoretical discussion of how depth information computed by the dorsal pathway can contribute to the object recognition mechanisms of the ventral pathway. Results of psychophysical and neuropsychological experiments indicate that SFM information can support the recognition of complex objects, even unfamiliar faces, which may constitute a case of integration between the two independent pathways. Furthermore, the neuropsychological results presented suggest that the perception of 2-D form-from-motion is dissociable from that of 3-D structure-from-motion. Finally, using functional magnetic resonance imaging, we showed that objects defined by SFM do not activate the same cerebral mechanism as photographs of the same objects. Together, the results presented here suggest that visual object recognition may be distributed across the two visual pathways.