
Journal articles on the topic 'Visual place recognition'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Visual place recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Lowry, Stephanie, Niko Sunderhauf, Paul Newman, et al. "Visual Place Recognition: A Survey." IEEE Transactions on Robotics 32, no. 1 (2016): 1–19. http://dx.doi.org/10.1109/tro.2015.2496823.

2

Torii, Akihiko, Josef Sivic, Masatoshi Okutomi, and Tomas Pajdla. "Visual Place Recognition with Repetitive Structures." IEEE Transactions on Pattern Analysis and Machine Intelligence 37, no. 11 (2015): 2346–59. http://dx.doi.org/10.1109/tpami.2015.2409868.

3

Grill-Spector, Kalanit, and Nancy Kanwisher. "Visual Recognition." Psychological Science 16, no. 2 (2005): 152–60. http://dx.doi.org/10.1111/j.0956-7976.2005.00796.x.

Abstract:
What is the sequence of processing steps involved in visual object recognition? We varied the exposure duration of natural images and measured subjects' performance on three different tasks, each designed to tap a different candidate component process of object recognition. For each exposure duration, accuracy was lower and reaction time longer on a within-category identification task (e.g., distinguishing pigeons from other birds) than on a perceptual categorization task (e.g., birds vs. cars). However, strikingly, at each exposure duration, subjects performed just as quickly and accurately o…
4

Zeng, Zhiqiang, Jian Zhang, Xiaodong Wang, Yuming Chen, and Chaoyang Zhu. "Place Recognition: An Overview of Vision Perspective." Applied Sciences 8, no. 11 (2018): 2257. http://dx.doi.org/10.3390/app8112257.

Abstract:
Place recognition is one of the most fundamental topics in the computer-vision and robotics communities, where the task is to accurately and efficiently recognize the location of a given query image. Despite years of knowledge accumulated in this field, place recognition still remains an open problem due to the various ways in which the appearance of real-world places may differ. This paper presents an overview of the place-recognition literature. Since condition-invariant and viewpoint-invariant features are essential factors to long-term robust visual place-recognition systems, we start with…
5

Masone, Carlo, and Barbara Caputo. "A Survey on Deep Visual Place Recognition." IEEE Access 9 (2021): 19516–47. http://dx.doi.org/10.1109/access.2021.3054937.

6

Stumm, Elena S., Christopher Mei, and Simon Lacroix. "Building Location Models for Visual Place Recognition." International Journal of Robotics Research 35, no. 4 (2015): 334–56. http://dx.doi.org/10.1177/0278364915570140.

7

Wang, Bo, Xin-sheng Wu, An Chen, Chun-yu Chen, and Hai-ming Liu. "The Research Status of Visual Place Recognition." Journal of Physics: Conference Series 1518 (April 2020): 012039. http://dx.doi.org/10.1088/1742-6596/1518/1/012039.

8

Horst, Michael, and Ralf Möller. "Visual Place Recognition for Autonomous Mobile Robots." Robotics 6, no. 2 (2017): 9. http://dx.doi.org/10.3390/robotics6020009.

9

Oertel, Amadeus, Titus Cieslewski, and Davide Scaramuzza. "Augmenting Visual Place Recognition With Structural Cues." IEEE Robotics and Automation Letters 5, no. 4 (2020): 5534–41. http://dx.doi.org/10.1109/lra.2020.3009077.

10

Chen, Baifan, Xiaoting Song, Hongyu Shen, and Tao Lu. "Hierarchical Visual Place Recognition Based on Semantic-Aggregation." Applied Sciences 11, no. 20 (2021): 9540. http://dx.doi.org/10.3390/app11209540.

Abstract:
A major challenge in place recognition is to be robust against viewpoint changes and appearance changes caused by self and environmental variations. Humans achieve this by recognizing objects and their relationships in the scene under different conditions. Inspired by this, we propose a hierarchical visual place recognition pipeline based on semantic-aggregation and scene understanding for the images. The pipeline contains coarse matching and fine matching. Semantic-aggregation happens in residual aggregation of visual information and semantic information in coarse matching, and semantic assoc…
11

Arshad, Saba, and Gon-Woo Kim. "Semantic Visual Place Recognition in Dynamic Urban Environment." Journal of Korea Robotics Society 17, no. 3 (2022): 334–38. http://dx.doi.org/10.7746/jkros.2022.17.3.334.

12

Waheed, Maria, Michael Milford, Klaus McDonald-Maier, and Shoaib Ehsan. "Improving Visual Place Recognition Performance by Maximising Complementarity." IEEE Robotics and Automation Letters 6, no. 3 (2021): 5976–83. http://dx.doi.org/10.1109/lra.2021.3088779.

13

Ozdemir, Anil, Mark Scerri, Andrew B. Barron, et al. "EchoVPR: Echo State Networks for Visual Place Recognition." IEEE Robotics and Automation Letters 7, no. 2 (2022): 4520–27. http://dx.doi.org/10.1109/lra.2022.3150505.

14

Imbriaco, Raffaele, Egor Bondarev, and Peter H. N. de With. "Multiscale Convolutional Descriptor Aggregation for Visual Place Recognition." Electronic Imaging 2020, no. 10 (2020): 313–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.10.ipas-312.

Abstract:
Visual place recognition using query and database images from different sources remains a challenging task in computer vision. Our method exploits global descriptors for efficient image matching and local descriptors for geometric verification. We present a novel, multi-scale aggregation method for local convolutional descriptors, using memory vector construction for efficient aggregation. The method enables to find preliminary set of image candidate matches and remove visually similar but erroneous candidates. We deploy the multi-scale aggregation for visual place recognition on 3 large-scale…
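As a rough illustration of the two-stage scheme this abstract describes (global descriptors to shortlist candidates, then a separate verification step to reject visually similar but erroneous matches), here is a minimal Python sketch. The random descriptors and the placeholder verification score are assumptions for illustration only; this is not the authors' implementation.

```python
# Minimal sketch of two-stage visual place recognition retrieval:
# stage 1 shortlists database images by global-descriptor similarity,
# stage 2 re-ranks the shortlist with a verification score (a stand-in
# here for local-descriptor matching / geometric verification).
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Placeholder database of 1000 places and one query, 256-D global descriptors.
db_global = l2_normalize(rng.normal(size=(1000, 256)))
query_global = l2_normalize(rng.normal(size=256))

# Stage 1: cosine-similarity shortlist of the top-10 candidates.
similarity = db_global @ query_global
shortlist = np.argsort(-similarity)[:10]

# Stage 2: re-rank candidates with a placeholder verification score
# (e.g. an inlier count after geometric verification in a real system).
def verification_score(db_index):
    return rng.random()

best_match = max(shortlist, key=verification_score)
print("shortlist:", shortlist.tolist())
print("best match after verification:", int(best_match))
```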
15

Vysotska, Olga, and Cyrill Stachniss. "Effective Visual Place Recognition Using Multi-Sequence Maps." IEEE Robotics and Automation Letters 4, no. 2 (2019): 1730–36. http://dx.doi.org/10.1109/lra.2019.2897160.

16

Dai, Deyun, Zonghai Chen, Jikai Wang, Peng Bao, and Hao Zhao. "Robust Visual Place Recognition Based on Context Information." IFAC-PapersOnLine 52, no. 22 (2019): 49–54. http://dx.doi.org/10.1016/j.ifacol.2019.11.046.

17

Pronobis, A., B. Caputo, P. Jensfelt, and H. I. Christensen. "A realistic benchmark for visual indoor place recognition." Robotics and Autonomous Systems 58, no. 1 (2010): 81–96. http://dx.doi.org/10.1016/j.robot.2009.07.025.

18

Rostami, V., Abd Rahman Ramli, Khairulmizam Samsudin, and M. Iqbal Saripan. "Place recognition using semantic concepts of visual words." Scientific Research and Essays 6, no. 17 (2011): 3751–59. http://dx.doi.org/10.5897/sre11.861.

19

Cadena, Cesar, John McDonald, John J. Leonard, and Jose Neira. "Place Recognition using Near and Far Visual Information." IFAC Proceedings Volumes 44, no. 1 (2011): 6822–28. http://dx.doi.org/10.3182/20110828-6-it-1002.03029.

20

Ibelaiden, Farah, and Slimane Larabi. "Visual place representation and recognition from depth images." International Journal of Computational Vision and Robotics 1, no. 1 (2022): 1. http://dx.doi.org/10.1504/ijcvr.2022.10052055.

21

Leyva-Vallina, Maria, Nicola Strisciuglio, Manuel Lopez Antequera, Radim Tylecek, Michael Blaich, and Nicolai Petkov. "TB-Places: A Data Set for Visual Place Recognition in Garden Environments." IEEE Access 7 (2019): 52277–87. http://dx.doi.org/10.1109/access.2019.2910150.

22

Lu, Feng, Baifan Chen, Xiang-Dong Zhou, and Dezhen Song. "STA-VPR: Spatio-Temporal Alignment for Visual Place Recognition." IEEE Robotics and Automation Letters 6, no. 3 (2021): 4297–304. http://dx.doi.org/10.1109/lra.2021.3067623.

23

Chen, Yutian, Wenyan Gan, Shanshan Jiao, Youwei Xu, and Yuntian Feng. "Salient Feature Selection for CNN-Based Visual Place Recognition." IEICE Transactions on Information and Systems E101.D, no. 12 (2018): 3102–7. http://dx.doi.org/10.1587/transinf.2018edp7175.

24

Cao, Juan. "Image Comparison for Place Recognition in Visual Robotic Navigation." Journal of Convergence Information Technology 8, no. 7 (2013): 1123–30. http://dx.doi.org/10.4156/jcit.vol8.issue7.138.

25

Xu, Ming, Niko Sunderhauf, and Michael Milford. "Corrections to “Probabilistic Visual Place Recognition for Hierarchical Localization”." IEEE Robotics and Automation Letters 6, no. 3 (2021): 6139. http://dx.doi.org/10.1109/lra.2021.3090115.

26

Huang, Guanyu, An Chen, Hongxia Gao, and Puguang Yang. "SMCN: Simplified mini-column network for visual place recognition." Journal of Physics: Conference Series 2024, no. 1 (2021): 012032. http://dx.doi.org/10.1088/1742-6596/2024/1/012032.

27

Wang, Min-Liang, and Huei-Yung Lin. "An extended-HCT semantic description for visual place recognition." International Journal of Robotics Research 30, no. 11 (2011): 1403–20. http://dx.doi.org/10.1177/0278364911406760.

28

Mao, Jun, Xiaoping Hu, Xiaofeng He, Lilian Zhang, Liao Wu, and Michael J. Milford. "Learning to Fuse Multiscale Features for Visual Place Recognition." IEEE Access 7 (2019): 5723–35. http://dx.doi.org/10.1109/access.2018.2889030.

29

Lowry, Stephanie, and Henrik Andreasson. "Lightweight, Viewpoint-Invariant Visual Place Recognition in Changing Environments." IEEE Robotics and Automation Letters 3, no. 2 (2018): 957–64. http://dx.doi.org/10.1109/lra.2018.2793308.

30

Chancan, Marvin, Luis Hernandez-Nunez, Ajay Narendra, Andrew B. Barron, and Michael Milford. "A Hybrid Compact Neural Architecture for Visual Place Recognition." IEEE Robotics and Automation Letters 5, no. 2 (2020): 993–1000. http://dx.doi.org/10.1109/lra.2020.2967324.

31

Fan, Chen, Zetao Chen, Adam Jacobson, Xiaoping Hu, and Michael Milford. "Biologically-inspired visual place recognition with adaptive multiple scales." Robotics and Autonomous Systems 96 (October 2017): 224–37. http://dx.doi.org/10.1016/j.robot.2017.07.015.

32

Razaq, Ujala, Muhammad Muneeb Ullah, and Muhammad Usman. "Local and Deep Features for Robust Visual Indoor Place Recognition." Open Journal of Science and Technology 3, no. 2 (2020): 140–47. http://dx.doi.org/10.31580/ojst.v3i2.1475.

Abstract:
This study focuses on the area of visual indoor place recognition (e.g., in an office setting, automatically recognizing different places, such as offices, corridor, wash room, etc.). The potential applications include robot navigation, augmented reality, and image retrieval. However, the task is extremely demanding because of the variations in appearance in such dynamic setups (e.g., view-point, occlusion, illumination, scale, etc.). Recently, Convolutional Neural Network (CNN) has emerged as a powerful learning mechanism, able to learn deep higher-level features when provided with a comparat…
33

Li, Zhen, Lin Zhou, and Zeqin Lin. "Robust Visual Place Recognition Method for Robot Facing Drastic Illumination Changes." Journal of Physics: Conference Series 2209, no. 1 (2022): 012001. http://dx.doi.org/10.1088/1742-6596/2209/1/012001.

Abstract:
The robustness of visual place recognition determines the accuracy of the SLAM to construct the environmental map. However, when the robot moves in the outdoor environment for a long time, it must face the challenge of drastic illumination changes (time shift, season or rain and fog weather factors), which leads to the robot’s ability to identify places is greatly restricted. This paper proposes a method for visual place recognition that is more robust to severe illumination changes. First, a generative adversarial network is introduced into visual SLAM to enhance the quality of candi…
34

Tsintotas, Konstantinos A., Loukas Bampis, and Antonios Gasteratos. "Tracking‐DOSeqSLAM: A dynamic sequence‐based visual place recognition paradigm." IET Computer Vision 15, no. 4 (2021): 258–73. http://dx.doi.org/10.1049/cvi2.12041.

35

Lee, Keundong, Seungjae Lee, Won Jo Jung, and Kee Tae Kim. "Fast and Accurate Visual Place Recognition Using Street-View Images." ETRI Journal 39, no. 1 (2017): 97–107. http://dx.doi.org/10.4218/etrij.17.0116.0034.

36

Wu, Lin, Teng Wang, and Changyin Sun. "Multi-Modal Visual Place Recognition in Dynamics-Invariant Perception Space." IEEE Signal Processing Letters 28 (2021): 2197–201. http://dx.doi.org/10.1109/lsp.2021.3123907.

37

Tang, Li, Yue Wang, Qimeng Tan, and Rong Xiong. "Explicit feature disentanglement for visual place recognition across appearance changes." International Journal of Advanced Robotic Systems 18, no. 6 (2021): 172988142110374. http://dx.doi.org/10.1177/17298814211037497.

Abstract:
In the long-term deployment of mobile robots, changing appearance brings challenges for localization. When a robot travels to the same place or restarts from an existing map, global localization is needed, where place recognition provides coarse position information. For visual sensors, changing appearances such as the transition from day to night and seasonal variation can reduce the performance of a visual place recognition system. To address this problem, we propose to learn domain-unrelated features across extreme changing appearance, where a domain denotes a specific appearance condition,…
38

Zhang, Guoshan, Peichong Zhang, and Xinbo Wang. "Visual place recognition based on multi-level feature difference map." Infrared and Laser Engineering 47, no. 2 (2018): 203004. http://dx.doi.org/10.3788/irla201847.0203004.

39

Ferrarini, Bruno, Maria Waheed, Sania Waheed, Shoaib Ehsan, Michael J. Milford, and Klaus D. McDonald-Maier. "Exploring Performance Bounds of Visual Place Recognition Using Extended Precision." IEEE Robotics and Automation Letters 5, no. 2 (2020): 1688–95. http://dx.doi.org/10.1109/lra.2020.2969197.

40

Fischer, Tobias, and Michael Milford. "Event-Based Visual Place Recognition With Ensembles of Temporal Windows." IEEE Robotics and Automation Letters 5, no. 4 (2020): 6924–31. http://dx.doi.org/10.1109/lra.2020.3025505.

41

Oh, J. H., and B. H. Lee. "Dynamic programming approach to visual place recognition in changing environments." Electronics Letters 53, no. 6 (2017): 391–93. http://dx.doi.org/10.1049/el.2017.0037.

42

Cieslewski, Titus, and Davide Scaramuzza. "Efficient Decentralized Visual Place Recognition Using a Distributed Inverted Index." IEEE Robotics and Automation Letters 2, no. 2 (2017): 640–47. http://dx.doi.org/10.1109/lra.2017.2650153.

43

Kenshimov, Chingiz, Loukas Bampis, Beibut Amirgaliyev, Marat Arslanov, and Antonios Gasteratos. "Deep learning features exception for cross-season visual place recognition." Pattern Recognition Letters 100 (December 2017): 124–30. http://dx.doi.org/10.1016/j.patrec.2017.10.028.

44

Gronát, Petr, Josef Sivic, Guillaume Obozinski, and Tomas Pajdla. "Learning and Calibrating Per-Location Classifiers for Visual Place Recognition." International Journal of Computer Vision 118, no. 3 (2016): 319–36. http://dx.doi.org/10.1007/s11263-015-0878-x.

45

Rebai, Karima, Ouahiba Azouaoui, and Nouara Achour. "Fuzzy ART-based place recognition for visual loop closure detection." Biological Cybernetics 107, no. 2 (2012): 247–59. http://dx.doi.org/10.1007/s00422-012-0539-x.

46

Kamiya, Kazuhisa, Tomoya Iwazaki, Yudai Morishita, Tomoe Hiroki, and Kanji Tanaka. "Cross-Domain Visual Place Recognition Using Landscape Image Big Data." Proceedings of JSME Annual Conference on Robotics and Mechatronics (Robomec) 2022 (2022): 2P1-H09. http://dx.doi.org/10.1299/jsmermd.2022.2p1-h09.

47

Hettiarachchi, Dulmini, Ye Tian, Han Yu, and Shunsuke Kamijo. "Text Spotting towards Perceptually Aliased Urban Place Recognition." Multimodal Technologies and Interaction 6, no. 11 (2022): 102. http://dx.doi.org/10.3390/mti6110102.

Abstract:
Recognizing places of interest (POIs) can be challenging for humans, especially in foreign environments. In this study, we leverage smartphone sensors (i.e., camera, GPS) and deep learning algorithms to propose an intelligent solution to recognize POIs in an urban environment. Recent studies have approached landmark recognition as an image retrieval problem. However, visual similarity alone is not robust against challenging conditions such as extreme appearance variance and perceptual aliasing in urban environments. To this end, we propose to fuse visual, textual, and positioning information.
48

Keetha, Nikhil Varma, Michael Milford, and Sourav Garg. "A Hierarchical Dual Model of Environment- and Place-Specific Utility for Visual Place Recognition." IEEE Robotics and Automation Letters 6, no. 4 (2021): 6969–76. http://dx.doi.org/10.1109/lra.2021.3096751.

49

Qiao, Yongliang, and Zhao Zhang. "Visual Localization by Place Recognition Based on Multifeature (D-λLBP++HOG)." Journal of Sensors 2017 (2017): 1–18. http://dx.doi.org/10.1155/2017/2157243.

Abstract:
Visual localization is widely used in the autonomous navigation system and Advanced Driver Assistance Systems (ADAS). This paper presents a visual localization method based on multifeature fusion and disparity information using stereo images. We integrate disparity information into complete center-symmetric local binary patterns (CSLBP) to obtain a robust global image description (D-CSLBP). In order to represent the scene in depth, multifeature fusion of D-CSLBP and HOG features provides valuable information and permits decreasing the effect of some typical problems in place recognition such a…
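To make the fusion idea above concrete, the sketch below shows the common pattern of normalizing two per-image descriptors and concatenating them into a single fused vector. The random histograms standing in for D-CSLBP and HOG, and their sizes, are assumptions for illustration; this is not the paper's pipeline.

```python
# Minimal sketch of multifeature fusion by normalized concatenation.
import numpy as np

def fuse_descriptors(desc_a, desc_b):
    a = desc_a / (np.linalg.norm(desc_a) + 1e-12)
    b = desc_b / (np.linalg.norm(desc_b) + 1e-12)
    return np.concatenate([a, b])

d_cslbp = np.random.rand(256)   # placeholder for a D-CSLBP histogram
hog = np.random.rand(1764)      # placeholder for a HOG descriptor
fused = fuse_descriptors(d_cslbp, hog)
print(fused.shape)  # (2020,) - compared with a single distance at query time
```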
50

Yang, Bo, Xiaosu Xu, Jun Li, and Hong Zhang. "Landmark Generation in Visual Place Recognition Using Multi-Scale Sliding Window for Robotics." Applied Sciences 9, no. 15 (2019): 3146. http://dx.doi.org/10.3390/app9153146.

Abstract:
Landmark generation is an essential component in landmark-based visual place recognition. In this paper, we present a simple yet effective method, called multi-scale sliding window (MSW), for landmark generation in order to improve the performance of place recognition. In our method, we generate landmarks that form a uniform distribution in multiple landmark scales (sizes) within an appropriate range by a process that samples an image with a sliding window. This is in contrast to conventional methods of landmark generation that typically depend on detecting objects whose size distributions are…
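A minimal sketch of the multi-scale sliding-window idea outlined above: candidate landmark boxes are generated exhaustively at several window sizes with a fixed stride, rather than by an object detector. The specific scales and stride are assumptions for illustration, not the values used in the paper.

```python
# Generate landmark candidate boxes with sliding windows at multiple scales.
def sliding_window_landmarks(img_w, img_h, scales=(64, 96, 128), stride=32):
    boxes = []
    for s in scales:
        for y in range(0, img_h - s + 1, stride):
            for x in range(0, img_w - s + 1, stride):
                boxes.append((x, y, s, s))  # (left, top, width, height)
    return boxes

boxes = sliding_window_landmarks(640, 480)
print(len(boxes), "candidate landmarks, e.g.", boxes[:3])
```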