Academic literature on the topic "Visual place recognition"

Create a correct reference in APA, MLA, Chicago, Harvard, and various other styles

Select a source:

Consult the thematic lists of journal articles, books, theses, conference papers, and other academic sources on the topic "Visual place recognition".

Next to each source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online, whenever this information is included in the metadata.

Journal articles on the topic "Visual place recognition"

1

Lowry, Stephanie, Niko Sunderhauf, Paul Newman, John J. Leonard, David Cox, Peter Corke, and Michael J. Milford. "Visual Place Recognition: A Survey." IEEE Transactions on Robotics 32, no. 1 (February 2016): 1–19. http://dx.doi.org/10.1109/tro.2015.2496823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Torii, Akihiko, Josef Sivic, Masatoshi Okutomi, and Tomas Pajdla. "Visual Place Recognition with Repetitive Structures." IEEE Transactions on Pattern Analysis and Machine Intelligence 37, no. 11 (November 1, 2015): 2346–59. http://dx.doi.org/10.1109/tpami.2015.2409868.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Grill-Spector, Kalanit, and Nancy Kanwisher. "Visual Recognition." Psychological Science 16, no. 2 (February 2005): 152–60. http://dx.doi.org/10.1111/j.0956-7976.2005.00796.x.

Full text
Abstract:
What is the sequence of processing steps involved in visual object recognition? We varied the exposure duration of natural images and measured subjects' performance on three different tasks, each designed to tap a different candidate component process of object recognition. For each exposure duration, accuracy was lower and reaction time longer on a within-category identification task (e.g., distinguishing pigeons from other birds) than on a perceptual categorization task (e.g., birds vs. cars). However, strikingly, at each exposure duration, subjects performed just as quickly and accurately on the categorization task as they did on a task requiring only object detection: By the time subjects knew an image contained an object at all, they already knew its category. These findings place powerful constraints on theories of object recognition.
APA, Harvard, Vancouver, ISO, and other styles
4

Zeng, Zhiqiang, Jian Zhang, Xiaodong Wang, Yuming Chen, and Chaoyang Zhu. "Place Recognition: An Overview of Vision Perspective." Applied Sciences 8, no. 11 (November 15, 2018): 2257. http://dx.doi.org/10.3390/app8112257.

Full text
Abstract:
Place recognition is one of the most fundamental topics in the computer-vision and robotics communities, where the task is to accurately and efficiently recognize the location of a given query image. Despite years of knowledge accumulated in this field, place recognition still remains an open problem due to the various ways in which the appearance of real-world places may differ. This paper presents an overview of the place-recognition literature. Since condition-invariant and viewpoint-invariant features are essential factors to long-term robust visual place-recognition systems, we start with traditional image-description methodology developed in the past, which exploits techniques from the image-retrieval field. Recently, the rapid advances of related fields, such as object detection and image classification, have inspired a new technique to improve visual place-recognition systems, that is, convolutional neural networks (CNNs). Thus, we then introduce the recent progress of visual place-recognition systems based on CNNs to automatically learn better image representations for places. Finally, we close with discussions and mention of future work on place recognition.
APA, Harvard, Vancouver, ISO, and other styles
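The retrieval formulation this survey describes, recognizing the location of a query image by comparing it against a database of mapped places, reduces in its simplest form to nearest-neighbour search over global image descriptors. The sketch below illustrates that idea only; the 2-D descriptors are toy stand-ins (real systems use high-dimensional CNN or bag-of-words descriptors), not anything from the paper:

```python
import numpy as np

def recognize_place(query_desc, db_descs):
    """Single-image place recognition as nearest-neighbour search:
    return the index of the most similar mapped place and its score."""
    # L2-normalise so that the dot product equals cosine similarity
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q                      # similarity to every mapped place
    best = int(np.argmax(sims))
    return best, float(sims[best])

# Toy database of three 2-D descriptors (stand-ins for CNN embeddings)
db = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.7, 0.7]])
idx, score = recognize_place(np.array([0.1, 0.9]), db)  # closest to place 1
```

Condition- and viewpoint-invariance, the hard part discussed in the abstract, comes from how the descriptor is built; the matching step itself stays this simple.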
5

Masone, Carlo, and Barbara Caputo. "A Survey on Deep Visual Place Recognition." IEEE Access 9 (2021): 19516–47. http://dx.doi.org/10.1109/access.2021.3054937.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Stumm, Elena S., Christopher Mei, and Simon Lacroix. "Building Location Models for Visual Place Recognition." International Journal of Robotics Research 35, no. 4 (April 28, 2015): 334–56. http://dx.doi.org/10.1177/0278364915570140.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Bo, Xin-sheng Wu, An Chen, Chun-yu Chen, and Hai-ming Liu. "The Research Status of Visual Place Recognition." Journal of Physics: Conference Series 1518 (April 2020): 012039. http://dx.doi.org/10.1088/1742-6596/1518/1/012039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Horst, Michael, and Ralf Möller. "Visual Place Recognition for Autonomous Mobile Robots." Robotics 6, no. 2 (April 17, 2017): 9. http://dx.doi.org/10.3390/robotics6020009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Oertel, Amadeus, Titus Cieslewski, and Davide Scaramuzza. "Augmenting Visual Place Recognition With Structural Cues." IEEE Robotics and Automation Letters 5, no. 4 (October 2020): 5534–41. http://dx.doi.org/10.1109/lra.2020.3009077.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Baifan, Xiaoting Song, Hongyu Shen, and Tao Lu. "Hierarchical Visual Place Recognition Based on Semantic-Aggregation." Applied Sciences 11, no. 20 (October 14, 2021): 9540. http://dx.doi.org/10.3390/app11209540.

Full text
Abstract:
A major challenge in place recognition is to be robust against viewpoint changes and appearance changes caused by self and environmental variations. Humans achieve this by recognizing objects and their relationships in the scene under different conditions. Inspired by this, we propose a hierarchical visual place recognition pipeline based on semantic-aggregation and scene understanding for the images. The pipeline contains coarse matching and fine matching. Semantic-aggregation happens in residual aggregation of visual information and semantic information in coarse matching, and semantic association of semantic edges in fine matching. Through the above two processes, we realized a robust coarse-to-fine pipeline of visual place recognition across viewpoint and condition variations. Experimental results on the benchmark datasets show that our method performs better than several state-of-the-art methods, improving the robustness against severe viewpoint changes and appearance changes while maintaining good matching-time performance. Moreover, we prove that it is possible for a computer to realize place recognition based on scene understanding.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Theses on the topic "Visual place recognition"

1

Stumm, Elena. "Location models for visual place recognition." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30341/document.

Full text
Abstract:
This thesis deals with the task of appearance-based mapping and place recognition for mobile robots. More specifically, this work aims to identify how location models can be improved by exploring several existing and novel location representations in order to better exploit the available visual information. Appearance-based mapping and place recognition present a number of challenges, including making reliable data-association decisions given repetitive and self-similar scenes (perceptual aliasing), variations in viewpoint and trajectory, appearance changes due to dynamic elements, lighting changes, and noisy measurements. As a result, choices about how to model and compare observations of locations are crucial to achieving practical results. This includes choices about the types of features extracted from imagery, how to define the extent of a location, and how to compare locations. Along with investigating existing location models, several novel methods are developed in this work. These are developed by incorporating information about the underlying structure of the scene through the use of covisibility graphs, which capture approximate geometric relationships between local landmarks in the scene by noting which ones are observed together. Previously, the extent of a location generally varied between discrete poses and loosely defined sequences of poses, facing problems related to perceptual aliasing and trajectory invariance respectively. By working with covisibility graphs, in contrast, scenes are dynamically retrieved as clusters from the graph in a way which adapts to the environmental structure and the given query. The probability of a query observation coming from a previously seen location is then obtained by applying a generative model such that the uniqueness of an observation is accounted for.
Behaviour with respect to observation errors, mapping errors, perceptual aliasing, and parameter sensitivity is examined, motivating the use of a novel normalization scheme and novel observation-likelihood representations. The normalization method presented in this work is robust to redundant locations in the map (from missed loop-closures, for example), and results in place recognition with sub-linear complexity in the number of locations in the map. Beginning with bag-of-words representations of locations, location models are extended to include more discriminative structural information from the covisibility map. This results in various representations, ranging from unstructured sets of features to full graphs of features, providing a trade-off between complexity and recognition performance.
APA, Harvard, Vancouver, ISO, and other styles
2

Vysotska, Olga [Verfasser]. "Visual Place Recognition in Changing Environments / Olga Vysotska." Bonn: Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/119900538X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Vysotska, Olga [Verfasser]. "Visual Place Recognition in Changing Environments / Olga Vysotska." Bonn: Universitäts- und Landesbibliothek Bonn, 2020. http://d-nb.info/1217404473/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Qiao, Yongliang. "Place recognition based visual localization in changing environments." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCA004/document.

Full text
Abstract:
In many applications, it is crucial that a robot or vehicle localizes itself within the world, especially for autonomous navigation and driving. The goal of this thesis is to improve place recognition performance for visual localization in changing environments. The approach is as follows: in the off-line phase, geo-referenced images of each location are acquired, and features are extracted and saved; in the on-line phase, the vehicle localizes itself by identifying a previously visited location through image or sequence retrieval. Visual localization remains challenging, however, due to drastic appearance and illumination changes caused by weather conditions or seasonal change. This thesis addresses that challenge by strengthening the ability to describe and recognize places. Several approaches are proposed:
1) Multi-feature combination of CSLBP (extracted from the gray-scale image and the disparity map) and HOG features is used for visual localization. By taking advantage of depth, texture, and shape information, recognition performance can be improved. In addition, locality-sensitive hashing (LSH) is used to speed up place recognition;
2) Visual localization across seasons is proposed based on sequence matching and a feature combination of GIST and CSLBP. Matching places by considering sequences and combined features yields high robustness to extreme perceptual changes;
3) All-environment visual localization is proposed based on automatically learned Convolutional Network (ConvNet) features and localized sequence matching. To improve computational efficiency, LSH is used to achieve real-time visual localization with minimal accuracy degradation.
APA, Harvard, Vancouver, ISO, and other styles
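The locality-sensitive hashing step that this thesis uses to speed up retrieval can be illustrated with random-hyperplane LSH: descriptors are reduced to short binary codes, so candidate places are found by cheap Hamming-distance comparison instead of full descriptor distances. Everything below (descriptor size, code length, synthetic data) is an illustrative assumption, not the thesis's actual features:

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_hash(descs, planes):
    """Random-hyperplane LSH: one bit per hyperplane, set when the
    descriptor lies on the hyperplane's positive side."""
    return (descs @ planes.T > 0).astype(np.uint8)

dim, bits = 64, 16                     # assumed descriptor size / code length
planes = rng.standard_normal((bits, dim))

db = rng.standard_normal((100, dim))   # synthetic database descriptors
codes = lsh_hash(db, planes)

# A slightly noisy re-observation of place 42 should hash to a nearby code
query = db[42] + 0.01 * rng.standard_normal(dim)
qcode = lsh_hash(query[None, :], planes)[0]

hamming = (codes ^ qcode).sum(axis=1)  # cheap comparison against all codes
candidate = int(np.argmin(hamming))
```

In practice the Hamming search only shortlists candidates, which are then verified with the full CSLBP/HOG or ConvNet descriptors.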
5

Lowry, Stephanie Margaret. "Visual place recognition for persistent robot navigation in changing environments." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/79404/1/Stephanie_Lowry_Thesis.pdf.

Full text
Abstract:
This thesis demonstrates that robots can learn about how the world changes, and can use this information to recognise where they are, even when the appearance of the environment has changed a great deal. The ability to localise in highly dynamic environments using vision only is a key tool for achieving long-term, autonomous navigation in unstructured outdoor environments. The proposed learning algorithms are designed to be unsupervised, and can be generated by the robot online in response to its observations of the world, without requiring information from a human operator or other external source.
APA, Harvard, Vancouver, ISO, and other styles
6

Neubert, Peer. "Superpixels and their Application for Visual Place Recognition in Changing Environments." Doctoral thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-190241.

Full text
Abstract:
Superpixels are the results of an image oversegmentation. They are an established intermediate level image representation and used for various applications including object detection, 3d reconstruction and semantic segmentation. While there are various approaches to create such segmentations, there is a lack of knowledge about their properties. In particular, there are contradicting results published in the literature. This thesis identifies segmentation quality, stability, compactness and runtime to be important properties of superpixel segmentation algorithms. While for some of these properties there are established evaluation methodologies available, this is not the case for segmentation stability and compactness. Therefore, this thesis presents two novel metrics for their evaluation based on ground truth optical flow. These two metrics are used together with other novel and existing measures to create a standardized benchmark for superpixel algorithms. This benchmark is used for extensive comparison of available algorithms. The evaluation results motivate two novel segmentation algorithms that better balance trade-offs of existing algorithms: The proposed Preemptive SLIC algorithm incorporates a local preemption criterion in the established SLIC algorithm and saves about 80 % of the runtime. The proposed Compact Watershed algorithm combines Seeded Watershed segmentation with compactness constraints to create regularly shaped, compact superpixels at the even higher speed of the plain watershed transformation. Operating autonomous systems over the course of days, weeks or months, based on visual navigation, requires repeated recognition of places despite severe appearance changes as they are for example induced by illumination changes, day-night cycles, changing weather or seasons - a severe problem for existing methods. 
Therefore, the second part of this thesis presents two novel approaches that incorporate superpixel segmentations in place recognition in changing environments. The first novel approach is the learning of systematic appearance changes. Instead of matching images between, for example, summer and winter directly, an additional prediction step is proposed. Based on superpixel vocabularies, a predicted image is generated that shows, how the summer scene could look like in winter or vice versa. The presented results show that, if certain assumptions on the appearance changes and the available training data are met, existing holistic place recognition approaches can benefit from this additional prediction step. Holistic approaches to place recognition are known to fail in presence of viewpoint changes. Therefore, this thesis presents a new place recognition system based on local landmarks and Star-Hough. Star-Hough is a novel approach to incorporate the spatial arrangement of local image features in the computation of image similarities. It is based on star graph models and Hough voting and particularly suited for local features with low spatial precision and high outlier rates as they are expected in the presence of appearance changes. The novel landmarks are a combination of local region detectors and descriptors based on convolutional neural networks. This thesis presents and evaluates several new approaches to incorporate superpixel segmentations in local region detection. While the proposed system can be used with different types of local regions, in particular the combination with regions obtained from the novel multiscale superpixel grid shows to perform superior to the state of the art methods - a promising basis for practical applications.
APA, Harvard, Vancouver, ISO, and other styles
7

Pepperell, Edward. "Visual sequence-based place recognition for changing conditions and varied viewpoints." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/93741/1/Edward_Pepperell_Thesis.pdf.

Full text
Abstract:
Correctly identifying previously-visited locations is essential for robotic place recognition and localisation. This thesis presents training-free solutions to vision-based place recognition under changing environmental conditions and camera viewpoints. Using vision as a primary sensor, the proposed approaches combine image segmentation and rescaling techniques over sequences of visual imagery to enable successful place recognition over a range of challenging environments where prior techniques have failed.
APA, Harvard, Vancouver, ISO, and other styles
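Sequence-based matching of the kind this thesis builds on (and SeqSLAM-style systems generally) scores an aligned run of frames rather than a single image, which suppresses the one-off mismatches caused by condition change. A minimal sketch under a constant-velocity assumption, using a synthetic image-similarity matrix with the true alignment planted at database index 30:

```python
import numpy as np

def best_sequence_match(sim, seq_len):
    """Score every database start index by summing image similarities
    along a contiguous diagonal of the similarity matrix (constant
    velocity), and return the best-scoring start of the sequence."""
    n_db, n_q = sim.shape
    assert seq_len <= min(n_db, n_q)
    scores = [sum(sim[i + k, k] for k in range(seq_len))
              for i in range(n_db - seq_len + 1)]
    start = int(np.argmax(scores))
    return start, scores[start]

# Synthetic similarity matrix: 50 mapped frames vs. a 5-frame query
rng = np.random.default_rng(1)
sim = 0.2 * rng.random((50, 5))        # background noise similarities
for k in range(5):
    sim[30 + k, k] = 1.0               # planted true alignment

start, score = best_sequence_match(sim, 5)   # start == 30, score == 5.0
```

Real systems additionally search over a range of velocities (diagonal slopes) and contrast-normalise the similarity matrix; those refinements are omitted here.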
8

Garg, Sourav. "Robust visual place recognition under simultaneous variations in viewpoint and appearance." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/134410/1/Sourav%20Garg%20Thesis.pdf.

Full text
Abstract:
This thesis explores the problem of visual place recognition and localization for a mobile robot, particularly dealing with the challenges of simultaneous variations in scene appearance and camera viewpoint. The proposed methods draw inspiration from humans and make use of semantic cues to represent places. This approach enables effective place recognition from similar or opposing viewpoints, despite variations in scene appearance caused by different times of day or seasons. The research contributions presented in the thesis advance visual place recognition techniques, making them more useful for deployment in a wide range of robotic and autonomous vehicle scenarios.
APA, Harvard, Vancouver, ISO, and other styles
9

Stone, Thomas Jonathan. "Mechanisms of place recognition and path integration based on the insect visual system." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28909.

Full text
Abstract:
Animals are often able to solve complex navigational tasks in very challenging terrain, despite using low resolution sensors and minimal computational power, providing inspiration for robots. In particular, many species of insect are known to solve complex navigation problems, often combining an array of different behaviours (Wehner et al., 1996; Collett, 1996). Their nervous system is also comparatively simple, relative to that of mammals and other vertebrates. In the first part of this thesis, the visual input of a navigating desert ant, Cataglyphis velox, was mimicked by capturing images in ultraviolet (UV) at similar wavelengths to the ant’s compound eye. The natural segmentation of ground and sky lead to the hypothesis that skyline contours could be used by ants as features for navigation. As proof of concept, sky-segmented binary images were used as input for an established localisation algorithm SeqSLAM (Milford and Wyeth, 2012), validating the plausibility of this claim (Stone et al., 2014). A follow-up investigation sought to determine whether using the sky as a feature would help overcome image matching problems that the ant often faced, such as variance in tilt and yaw rotation. A robotic localisation study showed that using spherical harmonics (SH), a representation in the frequency domain, combined with extracted sky can greatly help robots localise on uneven terrain. Results showed improved performance to state of the art point feature localisation methods on fast bumpy tracks (Stone et al., 2016a). In the second part, an approach to understand how insects perform a navigational task called path integration was attempted by modelling part of the brain of the sweat bee Megalopta genalis. A recent discovery that two populations of cells act as a celestial compass and visual odometer, respectively, led to the hypothesis that circuitry at their point of convergence in the central complex (CX) could give rise to path integration. 
A firing rate-based model was developed with connectivity derived from the overlap of observed neural arborisations of individual cells and successfully used to build up a home vector and steer an agent back to the nest (Stone et al., 2016b). This approach has the appeal that neural circuitry is highly conserved across insects, so findings here could have wide implications for insect navigation in general. The developed model is the first functioning path integrator that is based on individual cellular connections.
APA, Harvard, Vancouver, ISO, and other styles
10

Hausler, Stephen D. "Appearance and viewpoint invariant visual place recognition using multi-scale and multi-modality systems." Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/226953/1/Stephen_Hausler_Thesis.pdf.

Full text
Abstract:
This thesis provides solutions to the problem of visual place recognition, that is, the ability for an autonomous system to recognise where it is in the world using images. The thesis shows how novel ensemble and multi-scale methods can be combined with modern artificial intelligence techniques to provide robust and reliable place recognition capabilities. It provides autonomous systems with enhanced positioning capabilities that can facilitate navigation even in challenging environmental conditions, such as at night or during adverse weather.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Visual place recognition"

1

Solebo, Ameenat Lola. Identification of visual impairments. Edited by Alan Emond. Oxford University Press, 2019. http://dx.doi.org/10.1093/med/9780198788850.003.0021.

Full text
Abstract:
The development of optimal visual function is important for the future quality of life. Early recognition of morphological abnormalities, such as cataracts, allow for early intervention and a reduction in long-term impairment. There is a period of sensitivity, during which it is important that a clear image is presented to the retina. If treatment is not undertaken in a timely fashion, it can lead to permanent amblyopia. Apart from the newborn and 6–8-week examinations, the only recommended routine examination of the eyes should take place at 4–5 years of age. This should only be undertaken by properly trained individuals with appropriate equipment.
APA, Harvard, Vancouver, ISO, and other styles
2

Bouzas, Antia Mato, and Lorenzo Casini, eds. Migration in the Making of the Gulf Space: Social, Political, and Cultural Dimensions. Berghahn Books, 2022. http://dx.doi.org/10.3167/9781800733503.

Full text
Abstract:
Combining visual and literary analyses and original ethnographic studies as part of a more general political reflection, Migration in the Making of Gulf Space examines the role of migrants and non-citizens in the processes of settling in the Arab States of the Gulf region. The contributions underscore the aspirational character of the Gulf as a place where migrant recognition can be attained while also reflecting on practices of exclusion. The book is the result of an interdisciplinary dialogue among scholars and includes an original contribution by the acclaimed author of the novel Temporary People, Deepak Unnikrishnan.
APA, Harvard, Vancouver, ISO, and other styles
3

Kleege, Georgina. More than Meets the Eye. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190604356.001.0001.

Full text
Abstract:
More Than Meets the Eye: What Blindness Brings to Art explores the ways blindness and visual art are linked in many facets of the culture. The author writes from her position as the blind daughter of two visual artists. Due to this background, she claims to know something about art, but recognizes that this claim challenges cultural notions that conflate seeing with knowing. The book examines the ways blindness has been represented in philosophy, visual culture, and cognitive science, showing how these traditional understandings of blindness rely on an over-determined, one-to-one correspondence between touch in the blind and sight in the sighted, as if the other senses and other forms of cognition play no role in perception. Unfortunately, this reductive image of blindness often influences the design of museum access programs for the blind, including touch tours and verbal description of art. The book places these representations in conversation with autobiographical accounts by blind people, especially blind and visually impaired artists. It also gives a first-hand account of access programs at art institutions around the world, and speculates on how acceptance of the idea of blind artists and blind art lovers can change future museum practices and aesthetic values. The book is more of an extended, speculative essay than a scholarly treatment or how-to manual that seeks to show that what blindness brings to art is the recognition that there is more to it than meets the eye.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Visual place recognition"

1

Qi, Junkun, Rui Wang, Chuan Wang, and Xiaochun Cao. "Coarse-to-Fine Visual Place Recognition." In Neural Information Processing, 28–39. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92273-3_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tsintotas, Konstantinos A., Loukas Bampis, and Antonios Gasteratos. "Dynamic Places’ Definition for Sequence-Based Visual Place Recognition." In Online Appearance-Based Place Recognition and Mapping, 55–69. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-09396-8_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Panphattarasap, Pilailuck, and Andrew Calway. "Visual Place Recognition Using Landmark Distribution Descriptors." In Computer Vision – ACCV 2016, 487–502. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54190-7_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

dos Santos, Filipe Neves, Paulo Cerqueira Costa, and António Paulo Moreira. "Visual Signature for Place Recognition in Indoor Scenarios." In Lecture Notes in Electrical Engineering, 647–56. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-10380-8_62.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gong, Mingying, Lifeng Sun, Shiqiang Yang, and Yun Yang. "Robust Place Recognition by Avoiding Confusing Features and Fast Geometric Re-ranking." In Computational Visual Media, 210–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34263-9_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Xu, Zhenyu, Qieshi Zhang, Fusheng Hao, Ziliang Ren, Yuhang Kang and Jun Cheng. "VGG-CAE: Unsupervised Visual Place Recognition Using VGG16-Based Convolutional Autoencoder". In Pattern Recognition and Computer Vision, 91–102. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88007-1_8.

7

Ali, Abbas M., and Tarik A. Rashid. "Kernel Visual Keyword Description for Object and Place Recognition". In Advances in Intelligent Systems and Computing, 27–38. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-28658-7_3.

8

Jafar, Fairul Azni, Nurul Azma Zakaria, Ahamad Zaki Mohamed Noor and Kazutaka Yokota. "Environmental Visual Features Based Place Recognition in Manufacturing Environment". In Lecture Notes in Mechanical Engineering, 47–59. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-8954-3_6.

9

Du, Dapeng, Na Liu, Xiangyang Xu and Gangshan Wu. "Don't Be Confused: Region Mapping Based Visual Place Recognition". In Advances in Multimedia Information Processing – PCM 2017, 467–76. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77383-4_46.

10

Mihankhah, Ehsan, and Danwei Wang. "Avoiding to Face the Challenges of Visual Place Recognition". In Advances in Intelligent Systems and Computing, 738–49. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01054-6_52.


Conference proceedings on the topic "Visual place recognition"

1

Garg, Sourav, Tobias Fischer and Michael Milford. "Where Is Your Place, Visual Place Recognition?" In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/603.

Abstract:
Visual Place Recognition (VPR) is often characterized as being able to recognize the same place despite significant changes in appearance and viewpoint. VPR is a key component of Spatial Artificial Intelligence, enabling robotic platforms and intelligent augmentation platforms such as augmented reality devices to perceive and understand the physical world. In this paper, we observe that there are three "drivers" that impose requirements on spatially intelligent agents and thus VPR systems: 1) the particular agent including its sensors and computational resources, 2) the operating environment of this agent, and 3) the specific task that the artificial agent carries out. In this paper, we characterize and survey key works in the VPR area considering those drivers, including their place representation and place matching choices. We also provide a new definition of VPR based on the visual overlap - akin to spatial view cells in the brain - that enables us to find similarities and differences to other research areas in the robotics and computer vision fields. We identify several open challenges and suggest areas that require more in-depth attention in future works.
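At its core, the retrieval view of VPR described in this abstract reduces to matching a query image's descriptor against a database of place descriptors. The sketch below is not from the cited paper; it is a generic, minimal illustration (the `match_place` function, the cosine-similarity choice, and the threshold value are all illustrative assumptions, with descriptor extraction assumed to happen upstream):

```python
import numpy as np

def match_place(query_desc, db_descs, threshold=0.8):
    """Return (best_index, similarity) for the database place most similar
    to the query descriptor, or (None, similarity) below the threshold."""
    # L2-normalize so the dot product equals cosine similarity
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q
    best = int(np.argmax(sims))
    if sims[best] >= threshold:
        return best, float(sims[best])
    return None, float(sims[best])

# Toy example: three database "places" with 2-D descriptors;
# the query is closest in direction to place 1
db = np.array([[1.0, 0.0], [0.6, 0.8], [0.0, 1.0]])
query = np.array([0.5, 0.9])
idx, score = match_place(query, db)
```

Real systems replace the toy vectors with learned global descriptors and add the sequence matching, geometric re-ranking, or probabilistic voting that several of the entries below study.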
2

Alijani, Farid, Jukka Peltomaki, Jussi Puura, Heikki Huttunen, Joni-Kristian Kamarainen and Esa Rahtu. "Long-term Visual Place Recognition". In 2022 26th International Conference on Pattern Recognition (ICPR). IEEE, 2022. http://dx.doi.org/10.1109/icpr56361.2022.9956392.

3

Torii, Akihiko, Josef Sivic, Toma Pajdla and Masatoshi Okutomi. "Visual Place Recognition with Repetitive Structures". In 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2013. http://dx.doi.org/10.1109/cvpr.2013.119.

4

Stumm, Elena, Christopher Mei, Simon Lacroix and Margarita Chli. "Location graphs for visual place recognition". In 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015. http://dx.doi.org/10.1109/icra.2015.7139964.

5

Gehrig, Mathias, Elena Stumm, Timo Hinzmann and Roland Siegwart. "Visual place recognition with probabilistic voting". In 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017. http://dx.doi.org/10.1109/icra.2017.7989362.

6

Kim, Yong Nyeon, Dong Wook Ko and Il Hong Suh. "Visual navigation using place recognition with visual line words". In 2014 11th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI). IEEE, 2014. http://dx.doi.org/10.1109/urai.2014.7057494.

7

Hansen, Peter, and Brett Browning. "Visual place recognition using HMM sequence matching". In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014). IEEE, 2014. http://dx.doi.org/10.1109/iros.2014.6943207.

8

Stumm, Elena, Christopher Mei, Simon Lacroix, Juan Nieto, Marco Hutter and Roland Siegwart. "Robust Visual Place Recognition with Graph Kernels". In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016. http://dx.doi.org/10.1109/cvpr.2016.491.

9

Camara, Luis G., and Libor Preucil. "Spatio-Semantic ConvNet-Based Visual Place Recognition". In 2019 European Conference on Mobile Robots (ECMR). IEEE, 2019. http://dx.doi.org/10.1109/ecmr.2019.8870948.

10

Hafez, A. H. Abdul, Saed Alqaraleh and Ammar Tello. "Encoded Deep Features for Visual Place Recognition". In 2020 28th Signal Processing and Communications Applications Conference (SIU). IEEE, 2020. http://dx.doi.org/10.1109/siu49456.2020.9302266.
