Journal articles on the topic 'Visual object recognition'

Consult the top 50 journal articles for your research on the topic 'Visual object recognition.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Logothetis, N. K., and D. L. Sheinberg. "Visual Object Recognition." Annual Review of Neuroscience 19, no. 1 (March 1996): 577–621. http://dx.doi.org/10.1146/annurev.ne.19.030196.003045.

2

Grauman, Kristen, and Bastian Leibe. "Visual Object Recognition." Synthesis Lectures on Artificial Intelligence and Machine Learning 5, no. 2 (April 19, 2011): 1–181. http://dx.doi.org/10.2200/s00332ed1v01y201103aim011.

3

Grill-Spector, Kalanit, and Nancy Kanwisher. "Visual Recognition." Psychological Science 16, no. 2 (February 2005): 152–60. http://dx.doi.org/10.1111/j.0956-7976.2005.00796.x.

Abstract:
What is the sequence of processing steps involved in visual object recognition? We varied the exposure duration of natural images and measured subjects' performance on three different tasks, each designed to tap a different candidate component process of object recognition. For each exposure duration, accuracy was lower and reaction time longer on a within-category identification task (e.g., distinguishing pigeons from other birds) than on a perceptual categorization task (e.g., birds vs. cars). However, strikingly, at each exposure duration, subjects performed just as quickly and accurately on the categorization task as they did on a task requiring only object detection: By the time subjects knew an image contained an object at all, they already knew its category. These findings place powerful constraints on theories of object recognition.
4

Vasylenko, Mykola, and Maksym Haida. "Visual Object Recognition System." Electronics and Control Systems 3, no. 73 (November 24, 2022): 9–19. http://dx.doi.org/10.18372/1990-5548.73.17007.

Abstract:
This article introduces the problem of object detection and recognition and presents a solution that is potentially mobile, easy to install and configure, and free of expensive, resource-intensive image collection and processing systems. Candidate approaches are demonstrated, along with the advantages and disadvantages of each. The system's algorithm, developed within the framework of object recognition techniques, extracts contours with a filter based on the Prewitt operator and a detector of characteristic feature points. Interim and final demonstrations of the algorithm illustrate its advantages over traditional video surveillance systems, as well as some of its disadvantages. Reproducing the demonstration requires a webcam with a frame rate of 25 frames per second, a mobile phone, and a PC with MATLAB R2020 installed (chosen for its convenience and built-in image processing functions).
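The Prewitt-based contour extraction this abstract mentions can be illustrated with a minimal NumPy sketch. This is a generic illustration, not the authors' MATLAB implementation; the `threshold` parameter and the naive convolution loop are assumptions for clarity.

```python
import numpy as np

# Prewitt kernels for horizontal and vertical intensity gradients
KX = np.array([[1, 0, -1],
               [1, 0, -1],
               [1, 0, -1]], dtype=float)
KY = KX.T

def convolve2d(img, kernel):
    """Naive 'valid' 2-D correlation, adequate for small kernels."""
    h, w = kernel.shape
    H, W = img.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * kernel)
    return out

def prewitt_edges(img, threshold=0.5):
    """Binary edge map: gradient magnitude above a fraction of its maximum."""
    gx = convolve2d(img, KX)
    gy = convolve2d(img, KY)
    mag = np.hypot(gx, gy)
    return mag > threshold * mag.max()
```

A production system would use a library convolution (e.g. `scipy.ndimage.prewitt`), but the kernel structure and thresholding step are the same.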
5

Jiao, Chenlei, Binbin Lian, Zhe Wang, Yimin Song, and Tao Sun. "Visual–tactile object recognition of a soft gripper based on faster Region-based Convolutional Neural Network and machining learning algorithm." International Journal of Advanced Robotic Systems 17, no. 5 (September 1, 2020): 172988142094872. http://dx.doi.org/10.1177/1729881420948727.

Abstract:
Object recognition is a prerequisite for a soft gripper to successfully grasp an unknown object. Visual and tactile recognition are two commonly used methods in a grasping system. Visual recognition is limited when properties such as the size and weight of objects are involved, whereas the efficiency of tactile recognition is a problem. In this article, a visual–tactile recognition method is proposed to overcome the disadvantages of both. The design and fabrication of the soft gripper take the visual and tactile sensors into account: a Kinect v2 is adopted for visual information, and bending and pressure sensors are embedded in the soft fingers for tactile information. The proposed method comprises three steps: initial recognition by vision, detailed recognition by touch, and data-fusion decision making. Experiments show that visual–tactile recognition gives the best results, and the average recognition accuracy on daily objects is highest with the proposed method, verifying its feasibility.
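The final data-fusion step of such a pipeline can be sketched as a simple weighted late fusion of the two classifiers' per-class probabilities. This is a generic sketch under stated assumptions, not the authors' method; the `w_visual` weight is a hypothetical parameter.

```python
import numpy as np

def fuse_predictions(p_visual, p_tactile, w_visual=0.5):
    """Weighted late fusion of per-class probability vectors from a
    visual and a tactile classifier; returns (fused distribution, class)."""
    p_v = np.asarray(p_visual, dtype=float)
    p_t = np.asarray(p_tactile, dtype=float)
    fused = w_visual * p_v + (1.0 - w_visual) * p_t
    fused /= fused.sum()              # renormalise to a distribution
    return fused, int(np.argmax(fused))
```

For example, a visual vote of `[0.7, 0.2, 0.1]` and a tactile vote of `[0.1, 0.8, 0.1]` fuse (with equal weights) to favour the second class, which the visual channel alone would have rejected.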
6

Tannenbaum, Allen, Anthony Yezzi, and Alex Goldstein. "Visual Tracking and Object Recognition." IFAC Proceedings Volumes 34, no. 6 (July 2001): 1539–42. http://dx.doi.org/10.1016/s1474-6670(17)35408-3.

7

Guo, Fei, Yuan Yang, and Yong Gao. "Optimization of Visual Information Presentation for Visual Prosthesis." International Journal of Biomedical Imaging 2018 (2018): 1–12. http://dx.doi.org/10.1155/2018/3198342.

Abstract:
Visual prostheses that apply electrical stimulation to restore visual function for the blind have promising prospects. However, due to the low resolution, limited visual field, and low dynamic range of the elicited visual perception, a huge loss of information occurs when presenting daily scenes, and the ability of prosthetic users to recognize objects in real-life scenarios is severely restricted. To overcome these limitations, optimizing the visual information in simulated prosthetic vision has been a focus of research. This paper proposes two image processing strategies based on a salient object detection technique; both enable prosthetic implants to focus on the object of interest and suppress background clutter. Psychophysical experiments show that foreground zooming with background clutter removal and foreground edge detection with background reduction both have positive impacts on object recognition in simulated prosthetic vision, significantly improving recognition accuracy. We conclude that a visual prosthesis using the proposed strategies can help the blind improve their ability to recognize objects, providing effective solutions for the further development of visual prostheses.
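The foreground-zooming-with-background-removal idea can be sketched minimally: given a binary saliency mask (the paper's salient-object detection step, not shown here), zero out the background and crop to the object's bounding box. The function name and interface are illustrative assumptions.

```python
import numpy as np

def zoom_to_object(img, mask):
    """Suppress background clutter and crop (zoom) to the salient
    object's bounding box, given a binary mask of the same shape."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:                      # no salient object found
        return img
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    cleaned = np.where(mask, img, 0)      # zero out the background
    return cleaned[y0:y1, x0:x1]          # keep only the object region
```

The cropped result would then be downsampled to the prosthesis's phosphene resolution, so that the limited pixels are spent on the object rather than on clutter.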
8

Rennig, Johannes, Sonja Cornelsen, Helmut Wilhelm, Marc Himmelbach, and Hans-Otto Karnath. "Preserved Expert Object Recognition in a Case of Visual Hemiagnosia." Journal of Cognitive Neuroscience 30, no. 2 (February 2018): 131–43. http://dx.doi.org/10.1162/jocn_a_01193.

Abstract:
We examined a stroke patient (HWS) with a unilateral lesion of the right medial ventral visual stream, involving the right fusiform and parahippocampal gyri. In a number of object recognition tests with lateralized presentations of target stimuli, HWS showed significant symptoms of hemiagnosia, with contralesional recognition deficits for everyday objects. We further explored capacities of visual expertise that the patient had acquired before the current perceptual impairment, confronting him with objects for which he had been an expert already before stroke onset and comparing this performance with the recognition of familiar everyday objects. HWS was able to identify significantly more of the specific ("expert") objects than of the everyday objects on the affected contralesional side. This observation of better expert object recognition in visual hemiagnosia allows for several interpretations. The results may be caused by enhanced information processing for expert objects in the ventral system of the affected or the intact hemisphere. Expert knowledge could trigger top-down mechanisms supporting object recognition despite impaired basic functions of object processing. More importantly, the current work demonstrates that top-down mechanisms of visual expertise influence object recognition at an early stage, probably before visual object information propagates to modules of higher object recognition. Because HWS showed a lesion to the fusiform gyrus and spared capacities of expert object recognition, the current study emphasizes possible contributions of areas outside the ventral stream to visual expertise.
9

Woods, Andrew T., Allison Moore, and Fiona N. Newell. "Canonical Views in Haptic Object Perception." Perception 37, no. 12 (January 1, 2008): 1867–78. http://dx.doi.org/10.1068/p6038.

Abstract:
Previous investigations of visual object recognition have found that some views of both familiar and unfamiliar objects promote more efficient recognition performance than other views. These views are considered canonical and are often the views that present the most information about an object's 3-D structure and features in the image. Although objects can also be efficiently recognised with touch alone, little is known about whether some views promote more efficient recognition than others. This may seem unlikely, given that the object's structure and features are readily available to the hand during exploration. We conducted two experiments to investigate whether canonical views exist in haptic object recognition. In the first, participants were required to position each object in the way that would present the best view for learning the object with touch alone. We found a large degree of consistency of viewpoint position across participants for both familiar and unfamiliar objects. In a second experiment, we found that these consistent, or canonical, views promoted better haptic recognition performance than other, random views of the objects. Interestingly, these haptic canonical views were not necessarily the same as the canonical views normally found in visual perception. Nevertheless, our findings provide support for the idea that the visual and tactile systems are functionally equivalent in terms of how objects are represented in memory and subsequently recognised.
10

Zelinsky, Gregory J., and Gregory L. Murphy. "Synchronizing Visual and Language Processing: An Effect of Object Name Length on Eye Movements." Psychological Science 11, no. 2 (March 2000): 125–31. http://dx.doi.org/10.1111/1467-9280.00227.

Abstract:
Are visual and verbal processing systems functionally independent? Two experiments (one using line drawings of common objects, the other using faces) explored the relationship between the number of syllables in an object's name (one or three) and the visual inspection of that object. The tasks were short-term recognition and visual search. Results indicated more fixations and longer gaze durations on objects having three-syllable names when the task encouraged a verbal encoding of the objects (i.e., recognition). No effects of syllable length on eye movements were found when implicit naming demands were minimal (i.e., visual search). These findings suggest that implicitly naming a pictorial object constrains the oculomotor inspection of that object, and that the visual and verbal encoding of an object are synchronized so that the faster process must wait for the slower to be completed before gaze shifts to another object. Both findings imply a tight coupling between visual and linguistic processing, and highlight the utility of an oculomotor methodology to understand this coupling.
11

Emrich, Stephen M., Hana Burianová, and Susanne Ferber. "Transient Perceptual Neglect: Visual Working Memory Load Affects Conscious Object Processing." Journal of Cognitive Neuroscience 23, no. 10 (October 2011): 2968–82. http://dx.doi.org/10.1162/jocn_a_00028.

Abstract:
Visual working memory (VWM) is a capacity-limited cognitive resource that plays an important role in complex cognitive behaviors. Recent studies indicate that regions subserving VWM may play a role in the perception and recognition of visual objects, suggesting that conscious object perception may depend on the same cognitive and neural architecture that supports the maintenance of visual object information. In the present study, we examined this question by testing object processing under a concurrent VWM load. Under a high VWM load, recognition was impaired for objects presented in the left visual field, in particular when two objects were presented simultaneously. Multivariate fMRI revealed that two independent but partially overlapping networks of brain regions contribute to object recognition. The first network consisted of regions involved in VWM encoding and maintenance. Importantly, these regions were also sensitive to object load. The second network comprised regions of the ventral temporal lobes traditionally associated with object recognition. Importantly, activation in both networks predicted object recognition performance. These results indicate that information processing in regions that mediate VWM may be critical to conscious visual perception. Moreover, the observation of a hemifield asymmetry in object recognition performance has important theoretical and clinical significance for the study of visual neglect.
12

Chow, Jason, Thomas Palmeri, and Isabel Gauthier. "Tactile object recognition performance on graspable objects, but not texture-like objects, relates to visual object recognition ability." Journal of Vision 20, no. 11 (October 20, 2020): 188. http://dx.doi.org/10.1167/jov.20.11.188.

13

Edelman, Shimon, and Sharon Duvdevani-Bar. "A model of visual recognition and categorization." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 352, no. 1358 (August 29, 1997): 1191–202. http://dx.doi.org/10.1098/rstb.1997.0102.

Abstract:
To recognize a previously seen object, the visual system must overcome the variability in the object's appearance caused by factors such as illumination and pose. Developments in computer vision suggest that it may be possible to counter the influence of these factors, by learning to interpolate between stored views of the target object, taken under representative combinations of viewing conditions. Daily life situations, however, typically require categorization, rather than recognition, of objects. Due to the open–ended character of both natural and artificial categories, categorization cannot rely on interpolation between stored examples. Nonetheless, knowledge of several representative members, or prototypes, of each of the categories of interest can still provide the necessary computational substrate for the categorization of new instances. The resulting representational scheme based on similarities to prototypes appears to be computationally viable, and is readily mapped onto the mechanisms of biological vision revealed by recent psychophysical and physiological studies.
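The similarity-to-prototypes scheme this abstract describes can be sketched in a few lines: a new instance is assigned to the category whose stored prototype it most resembles. The Gaussian similarity function and the prototype vectors below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def categorize(x, prototypes):
    """Assign x to the category whose prototype is most similar,
    using a Gaussian function of squared Euclidean distance."""
    x = np.asarray(x, dtype=float)
    sims = {label: np.exp(-np.sum((x - np.asarray(p, dtype=float)) ** 2))
            for label, p in prototypes.items()}
    return max(sims, key=sims.get)
```

Because only similarity to a few representative members is needed, the scheme extends to open-ended categories where interpolation between all stored views would be impossible.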
14

Wang, Yingxu. "On Visual Semantic Algebra (VSA)." International Journal of Software Science and Computational Intelligence 1, no. 4 (October 2009): 1–16. http://dx.doi.org/10.4018/jssci.2009062501.

Abstract:
A new form of denotational mathematics known as Visual Semantic Algebra (VSA) is presented for abstract visual object and architecture manipulations. A set of cognitive theories for pattern recognition is explored, such as cognitive principles of visual perception and basic mechanisms of object and pattern recognition. The cognitive process of pattern recognition is rigorously modeled using VSA and Real-Time Process Algebra (RTPA), which reveals the fundamental mechanisms of natural pattern recognition by the brain. Case studies of VSA in pattern recognition are presented to demonstrate VSA's expressive power for algebraic manipulations of visual objects. VSA can be applied not only in machinable visual and spatial reasoning, but also in computational intelligence as a powerful man-machine language for representing and manipulating visual objects and patterns. On the basis of VSA, computationally intelligent systems such as robots and cognitive computers may process and reason about visual and image objects rigorously and efficiently.
15

Liu, Huaping, Yupei Wu, Fuchun Sun, and Di Guo. "Recent progress on tactile object recognition." International Journal of Advanced Robotic Systems 14, no. 4 (July 1, 2017): 172988141771705. http://dx.doi.org/10.1177/1729881417717056.

Abstract:
Conventional visual perception technology is subject to many restrictions, such as illumination, background clutter, and occlusion. Many intrinsic properties of objects, like stiffness, hardness, and internal state, cannot be effectively perceived by visual sensors. For robots, tactile perception is a key approach to obtaining environmental and object information. Unlike vision sensors, tactile sensors can directly measure some physical properties of objects and the environment. At the same time, humans also use touch sensory receptors as an important means to perceive and interact with the environment. In this article, we present a detailed discussion of the tactile object recognition problem. We divide the current studies on tactile object recognition into three subcategories and put forward a detailed analysis of each. In addition, we also discuss some advanced topics such as visual–tactile fusion, exploratory procedures, and data sets.
16

Afraz, Arash, Daniel L. K. Yamins, and James J. DiCarlo. "Neural Mechanisms Underlying Visual Object Recognition." Cold Spring Harbor Symposia on Quantitative Biology 79 (2014): 99–107. http://dx.doi.org/10.1101/sqb.2014.79.024729.

17

Gerlach, Christian, Claes T. Aaside, Glyn W. Humphreys, Anders Gade, Olaf B. Paulson, and Ian Law. "Integrative processes in visual object recognition." NeuroImage 13, no. 6 (June 2001): 885. http://dx.doi.org/10.1016/s1053-8119(01)92227-x.

18

Laatu, Sari, Antti Revonsuo, Päivi Hämäläinen, Ville Ojanen, and Juhani Ruutiainen. "Visual object recognition in multiple sclerosis." Journal of the Neurological Sciences 185, no. 2 (April 2001): 77–88. http://dx.doi.org/10.1016/s0022-510x(01)00461-0.

19

Leksut, Jatuporn Toy, Jiaping Zhao, and Laurent Itti. "Learning visual variation for object recognition." Image and Vision Computing 98 (June 2020): 103912. http://dx.doi.org/10.1016/j.imavis.2020.103912.

20

Heisele, B. "Visual object recognition with supervised learning." IEEE Intelligent Systems 18, no. 3 (May 2003): 38–42. http://dx.doi.org/10.1109/mis.2003.1200726.

21

Bunke, Horst. "Graph matching for visual object recognition." Spatial Vision 13, no. 2-3 (2000): 335–40. http://dx.doi.org/10.1163/156856800741153.

22

Liu, Huaping, Yuanlong Yu, Fuchun Sun, and Jason Gu. "Visual–Tactile Fusion for Object Recognition." IEEE Transactions on Automation Science and Engineering 14, no. 2 (April 2017): 996–1008. http://dx.doi.org/10.1109/tase.2016.2549552.

23

Gerlach, Christian. "Category-specificity in visual object recognition." Cognition 111, no. 3 (June 2009): 281–301. http://dx.doi.org/10.1016/j.cognition.2009.02.005.

24

Reynolds, Greg D. "Infant visual attention and object recognition." Behavioural Brain Research 285 (May 2015): 34–43. http://dx.doi.org/10.1016/j.bbr.2015.01.015.

25

Wood, Justin N., and Samantha M. W. Wood. "The development of newborn object recognition in fast and slow visual worlds." Proceedings of the Royal Society B: Biological Sciences 283, no. 1829 (April 27, 2016): 20160166. http://dx.doi.org/10.1098/rspb.2016.0166.

Abstract:
Object recognition is central to perception and cognition. Yet relatively little is known about the environmental factors that cause invariant object recognition to emerge in the newborn brain. Is this ability a hardwired property of vision? Or does the development of invariant object recognition require experience with a particular kind of visual environment? Here, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) require visual experience with slowly changing objects to develop invariant object recognition abilities. When newborn chicks were raised with a slowly rotating virtual object, the chicks built invariant object representations that generalized across novel viewpoints and rotation speeds. In contrast, when newborn chicks were raised with a virtual object that rotated more quickly, the chicks built viewpoint-specific object representations that failed to generalize to novel viewpoints and rotation speeds. Moreover, there was a direct relationship between the speed of the object and the amount of invariance in the chick's object representation. Thus, visual experience with slowly changing objects plays a critical role in the development of invariant object recognition. These results indicate that invariant object recognition is not a hardwired property of vision, but is learned rapidly when newborns encounter a slowly changing visual world.
26

Newell, F. N. "Searching for Objects in the Visual Periphery: Effects of Orientation." Perception 25, no. 1_suppl (August 1996): 110. http://dx.doi.org/10.1068/v96l1111.

Abstract:
Previous studies have found that the recognition of familiar objects is dependent on the orientation of the object in the picture plane. Here the time taken to locate rotated objects in the periphery was examined. Eye movements were also recorded. In all experiments, familiar objects were arranged in a clock face display. In experiment 1, subjects were instructed to locate a match to a central, upright object from amongst a set of randomly rotated objects. The target object was rotated in the frontoparallel plane. Search performance was dependent on rotation, yielding the classic ‘M’ function found in recognition tasks. When matching a single object in periphery, match times were dependent on the angular deviations between the central and target objects and showed no advantage for the upright (experiment 2). In experiment 3 the central object was shown in either the upright rotation or rotated by 120° from the upright. The target object was similarly rotated given four different match conditions. Distractor objects were aligned with the target object. Search times were faster when the centre and target object were aligned and also when the centre object was rotated and the target was upright. Search times were slower when matching a central upright object to a rotated target object. These results suggest that in simple tasks matching is based on image characteristics. However, in complex search tasks a contribution from the object's representation is made which gives an advantage to the canonical, upright view in peripheral vision.
27

Lawson, Rebecca, Glyn W. Humphreys, and Derrick G. Watson. "Object Recognition under Sequential Viewing Conditions: Evidence for Viewpoint-Specific Recognition Procedures." Perception 23, no. 5 (May 1994): 595–613. http://dx.doi.org/10.1068/p230595.

Abstract:
In many computational approaches to vision it has been emphasised that object recognition involves the encoding of view-independent descriptions prior to matching to a stored object model, thus enabling objects to be identified across different retinal projections. In contrast, neurophysiological studies suggest that image descriptions are matched to less abstract, view-specific representations, resulting in more efficient access to stored object knowledge for objects presented from a view similar to a stored viewpoint. Evidence favouring a primary role for view-specific object descriptions in object recognition is reported. In a series of experiments employing line drawings of familiar objects, the effects of depth rotation upon the efficiency of object recognition were investigated. Subjects were required to identify an object from a sequence of very briefly presented pictures. The results suggested that object recognition is based upon the matching of image descriptions to view-specific stored representations, and that priming effects under sequential viewing conditions are strongly influenced by the visual similarity of different views of objects.
28

Snow, Jacqueline C., Lars Strother, and Glyn W. Humphreys. "Haptic Shape Processing in Visual Cortex." Journal of Cognitive Neuroscience 26, no. 5 (May 2014): 1154–67. http://dx.doi.org/10.1162/jocn_a_00548.

Abstract:
Humans typically rely upon vision to identify object shape, but we can also recognize shape via touch (haptics). Our haptic shape recognition ability raises an intriguing question: To what extent do visual cortical shape recognition mechanisms support haptic object recognition? We addressed this question using a haptic fMRI repetition design, which allowed us to identify neuronal populations sensitive to the shape of objects that were touched but not seen. In addition to the expected shape-selective fMRI responses in dorsal frontoparietal areas, we observed widespread shape-selective responses in the ventral visual cortical pathway, including primary visual cortex. Our results indicate that shape processing via touch engages many of the same neural mechanisms as visual object recognition. The shape-specific repetition effects we observed in primary visual cortex show that visual sensory areas are engaged during the haptic exploration of object shape, even in the absence of concurrent shape-related visual input. Our results complement related findings in visually deprived individuals and highlight the fundamental role of the visual system in the processing of object shape.
29

Liao, W., C. Yang, M. Ying Yang, and B. Rosenhahn. "SECURITY EVENT RECOGNITION FOR VISUAL SURVEILLANCE." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-1/W1 (May 30, 2017): 19–26. http://dx.doi.org/10.5194/isprs-annals-iv-1-w1-19-2017.

Abstract:
With the rapidly increasing deployment of surveillance cameras, reliable methods for automatically analyzing surveillance video and recognizing special events are demanded by different practical applications. This paper proposes a novel, effective framework for security event analysis in surveillance videos. First, a convolutional neural network (CNN) framework is used to detect objects of interest in the given videos. Second, the owners of the objects are recognized and monitored in real time as well. If anyone moves an object, the system verifies whether this person is its owner; if not, the event is further analyzed and classified into one of two different scenes: moving the object away or stealing it. To validate the proposed approach, a new video dataset consisting of various scenarios is constructed for these more complex tasks. For comparison purposes, experiments are also carried out on benchmark databases for the abandoned luggage detection task. The experimental results show that the proposed approach outperforms state-of-the-art methods and is effective in recognizing complex security events.
30

DiPietro, Norma T., Edward A. Wasserman, and Michael E. Young. "Effects of Occlusion on Pigeons' Visual Object Recognition." Perception 31, no. 11 (November 2002): 1299–312. http://dx.doi.org/10.1068/p3441.

Abstract:
Casual observation suggests that pigeons and other animals can recognize occluded objects; yet laboratory research has thus far failed to show that pigeons can do so. In a series of experiments, we investigated pigeons' ability to ‘name’ shaded, textured stimuli by associating each with a different response. After first learning to recognize four unoccluded objects, pigeons had to recognize the objects when they were partially occluded by another surface or when they were placed on top of another surface; in each case, recognition was weak. Following training with the unoccluded stimuli and with the stimuli placed on top of the occluder, pigeons' recognition of occluded objects dramatically improved. Pigeons' improved recognition of occluded objects was not limited to the trained objects but transferred to novel objects as well. Evidently, the recognition of occluded objects requires pigeons to learn to discriminate the object from the occluder; once this discrimination is mastered, occluded objects can be better recognized.
31

Lloyd-Jones, Toby J., and David Vernon. "Semantic interference from visual object recognition on visual imagery." Journal of Experimental Psychology: Learning, Memory, and Cognition 29, no. 4 (2003): 563–80. http://dx.doi.org/10.1037/0278-7393.29.4.563.

32

Pentland, Alex. "Part Segmentation for Object Recognition." Neural Computation 1, no. 1 (March 1989): 82–91. http://dx.doi.org/10.1162/neco.1989.1.1.82.

Abstract:
Visual object recognition is a difficult problem that has been solved by biological visual systems. An approach to object recognition is described in which the image is segmented into parts using two simple, biologically-plausible mechanisms: a filtering operation to produce a large set of potential object “parts,” followed by a new type of network that searches among these part hypotheses to produce the simplest, most likely description of the image's part structure.
33

Wood, Justin N. "Spontaneous Preference for Slowly Moving Objects in Visually Naïve Animals." Open Mind 1, no. 2 (September 2017): 111–22. http://dx.doi.org/10.1162/opmi_a_00012.

Abstract:
To perceive the world successfully, newborns need certain types of visual experiences. The development of object recognition, for example, requires visual experience with slowly moving objects. To date, however, it is unknown whether newborns actively seek out the best visual experiences for developing object recognition. To address this question, I used an automated controlled-rearing method to examine whether visually naïve animals (newborn chicks) seek out slowly moving objects. Despite receiving equal exposure to slowly and to quickly rotating objects, the majority of the chicks developed a preference for slowly rotating objects. This preference was robust, producing large effect sizes across objects, experiments, and successive test days. These results indicate that newborn brains rapidly develop mechanisms for orienting young animals toward optimal visual experiences, thus facilitating the development of object recognition. This study also demonstrates that automation can be a valuable tool for studying the origins and development of visual preferences.
APA, Harvard, Vancouver, ISO, and other styles
34

Luo, Ruotian, Ning Zhang, Bohyung Han, and Linjie Yang. "Context-Aware Zero-Shot Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11709–16. http://dx.doi.org/10.1609/aaai.v34i07.6841.

Full text
Abstract:
We present a novel problem setting in zero-shot learning: zero-shot object recognition and detection in context. Contrary to traditional zero-shot learning methods, which simply infer unseen categories by transferring knowledge from objects belonging to semantically similar seen categories, we aim to understand the identity of novel objects in an image surrounded by known objects, using the inter-object relation prior. Specifically, we leverage the visual context and the geometric relationships between all pairs of objects in a single image, and capture the information useful for inferring unseen categories. We integrate our context-aware zero-shot learning framework seamlessly into traditional zero-shot learning techniques using a Conditional Random Field (CRF). The proposed algorithm is evaluated on both zero-shot region classification and zero-shot detection tasks. Results on the Visual Genome (VG) dataset show that our model significantly boosts performance with the additional visual context compared to traditional methods.
APA, Harvard, Vancouver, ISO, and other styles
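The core idea in the abstract above, that a co-occurrence prior over object pairs can override per-region attribute scores, can be illustrated with a toy sketch. This is my own construction, not the paper's CRF: a brute-force search over joint labelings stands in for real CRF inference, and all scores and class names are hypothetical.

```python
# Toy sketch of context-aware zero-shot inference: unary scores come from
# per-region attribute similarity to unseen-class prototypes, and a pairwise
# co-occurrence prior rescores joint labelings. Brute force replaces CRF
# inference; all numbers are made up.
from itertools import product

classes = ["zebra", "lion"]          # unseen categories
unary = {                            # attribute-similarity score per region
    "region1": {"zebra": 0.6, "lion": 0.5},
    "region2": {"zebra": 0.4, "lion": 0.5},
}
# Co-occurrence prior: zebras and lions rarely appear side by side.
pairwise = {("zebra", "zebra"): 0.9, ("lion", "lion"): 0.9,
            ("zebra", "lion"): 0.1, ("lion", "zebra"): 0.1}

def best_joint_labeling(regions):
    """Maximize the product of unary and pairwise scores over all labelings."""
    best, best_score = None, -1.0
    for labels in product(classes, repeat=len(regions)):
        score = 1.0
        for region, label in zip(regions, labels):
            score *= unary[region][label]
        for l1, l2 in zip(labels, labels[1:]):
            score *= pairwise[(l1, l2)]
        if score > best_score:
            best, best_score = labels, score
    return best

labels = best_joint_labeling(["region1", "region2"])
```

With unary scores alone the best labeling would mix zebra and lion, but the co-occurrence prior penalizes that pair, so the joint optimum labels both regions "lion", which is the kind of context-driven correction the paper describes.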
35

Petitjean, Sylvain. "A Computational Geometric Approach to Visual Hulls." International Journal of Computational Geometry & Applications 08, no. 04 (August 1998): 407–36. http://dx.doi.org/10.1142/s0218195998000229.

Full text
Abstract:
Recognizing 3D objects from their 2D silhouettes is a popular topic in computer vision. Object reconstruction can be performed using the volume intersection approach. The visual hull of an object is the best approximation of the object that can be obtained by volume intersection; from the point of view of recognition from silhouettes, the visual hull cannot be distinguished from the original object. In this paper, we present efficient algorithms for computing visual hulls. We start with the case of planar figures (polygons and curved objects) and base our approach on an efficient algorithm for computing the visibility graph of planar figures. We present and tackle many topics related to querying visual hulls and to recognizing objects equal to their visual hulls. We then move on to the 3-dimensional case and give a flavor of how it may be approached.
APA, Harvard, Vancouver, ISO, and other styles
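The volume intersection idea behind visual hulls can be shown with a minimal 2D sketch of my own, not the paper's algorithm: a grid cell survives carving exactly when its projection lies inside every silhouette, so the hull always contains, and may strictly exceed, the true object.

```python
# Minimal 2D "voxel carving" sketch: the hull is the intersection of the
# back-projections of two axis-aligned orthographic silhouettes. It is a
# superset of the object, as the L-shape below demonstrates.

def visual_hull_2d(object_grid):
    """Carve a 2D visual hull from the object's two axis-aligned silhouettes."""
    h, w = len(object_grid), len(object_grid[0])
    # Project the object onto each axis to obtain binary silhouettes.
    row_sil = [any(object_grid[y][x] for x in range(w)) for y in range(h)]
    col_sil = [any(object_grid[y][x] for y in range(h)) for x in range(w)]
    # A cell survives carving iff it lies inside every silhouette.
    return [[row_sil[y] and col_sil[x] for x in range(w)] for y in range(h)]

# An L-shaped object: both its silhouettes are full, so its hull is the
# entire 3x3 block, strictly larger than the 5-cell object itself.
obj = [
    [1, 0, 0],
    [1, 0, 0],
    [1, 1, 1],
]
hull = visual_hull_2d(obj)
```

This also illustrates why recognition from silhouettes cannot distinguish an object from its visual hull: both produce the same silhouettes from every viewpoint used.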
36

Zhang, Haopeng, Tarek El-Gaaly, Ahmed Elgammal, and Zhiguo Jiang. "Joint Object and Pose Recognition Using Homeomorphic Manifold Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 1012–19. http://dx.doi.org/10.1609/aaai.v27i1.8634.

Full text
Abstract:
Object recognition is a key precursory challenge in the fields of object manipulation and robotic/AI visual reasoning in general. Recognizing object categories, particular instances of objects, and viewpoints/poses of objects are three critical sub-problems robots must solve in order to accurately grasp/manipulate objects and reason about their environments. Multi-view images of the same object lie on intrinsic low-dimensional manifolds in descriptor spaces (e.g. visual/depth descriptor spaces). These object manifolds share the same topology despite being geometrically different, so each object manifold can be represented as a deformed version of a unified manifold and parametrized by its homeomorphic mapping/reconstruction from the unified manifold. In this work, we construct a manifold descriptor from this mapping between homeomorphic manifolds and use it to jointly solve the three challenging recognition sub-problems. We experiment extensively on a challenging multi-modal (i.e. RGBD) dataset and other object pose datasets, and achieve state-of-the-art results.
APA, Harvard, Vancouver, ISO, and other styles
37

Westphal, Günter, and Rolf P. Würtz. "Combining Feature- and Correspondence-Based Methods for Visual Object Recognition." Neural Computation 21, no. 7 (July 2009): 1952–89. http://dx.doi.org/10.1162/neco.2009.12-07-675.

Full text
Abstract:
We present an object recognition system built on a combination of feature- and correspondence-based pattern recognizers. The feature-based part, called preselection network, is a single-layer feedforward network weighted with the amount of information contributed by each feature to the decision at hand. For processing arbitrary objects, we employ small, regular graphs whose nodes are attributed with Gabor amplitudes, termed parquet graphs. The preselection network can quickly rule out most irrelevant matches and leaves only the ambiguous cases, so-called model candidates, to be verified by a rudimentary version of elastic graph matching, a standard correspondence-based technique for face and object recognition. According to the model, graphs are constructed that describe the object in the input image well. We report the results of experiments on standard databases for object recognition. The method achieved high recognition rates on identity and pose. Unlike many other models, it can also cope with varying background, multiple objects, and partial occlusion.
APA, Harvard, Vancouver, ISO, and other styles
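The two-stage design in the abstract above, a cheap feature-based preselection network that rules out most models, followed by an expensive correspondence-based verification of the surviving candidates, can be sketched in miniature. This is a hypothetical stand-in, not the paper's system: set overlap replaces the weighted feedforward scoring, and exact feature-set match replaces elastic graph matching.

```python
# Two-stage recognition sketch: a fast scorer keeps only a few model
# candidates; an expensive matcher then verifies them. Model names and
# feature sets are invented for illustration.

def preselect(query, models, keep=2):
    """Cheap stage: rank models by feature overlap and keep the best few."""
    scored = sorted(models, key=lambda m: -len(query & m["features"]))
    return scored[:keep]

def verify(query, candidates):
    """Expensive stage: exact match stands in for elastic graph matching."""
    for m in candidates:
        if m["features"] == query:
            return m["name"]
    return None

models = [
    {"name": "cup",  "features": {"handle", "rim", "curve"}},
    {"name": "bowl", "features": {"rim", "curve"}},
    {"name": "car",  "features": {"wheel", "window"}},
]
query = {"rim", "curve"}
# "car" is ruled out cheaply; only "cup" and "bowl" reach verification.
answer = verify(query, preselect(query, models))
```

The design point mirrors the abstract: most of the database never reaches the costly matcher, so overall recognition stays fast while ambiguous cases still get a careful comparison.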
38

KIM, SUNGHO, GIJEONG JANG, WANG-HEON LEE, and IN SO KWEON. "COMBINED MODEL-BASED 3D OBJECT RECOGNITION." International Journal of Pattern Recognition and Artificial Intelligence 19, no. 07 (November 2005): 839–52. http://dx.doi.org/10.1142/s0218001405004368.

Full text
Abstract:
This paper presents a combined model-based 3D object recognition method motivated by the robust properties of human vision. The human visual system (HVS) is very efficient and robust in identifying and grabbing objects, in part because of its properties of visual attention, contrast mechanism, feature binding, multiresolution and part-based representation. In addition, the HVS combines bottom-up and top-down information effectively using combined model representation. We propose a method for integrating these aspects under a Monte Carlo method. In this scheme, object recognition is regarded as a parameter optimization problem. The bottom-up process initializes parameters, and the top-down process optimizes them. Experimental results show that the proposed recognition model is feasible for 3D object identification and pose estimation.
APA, Harvard, Vancouver, ISO, and other styles
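The abstract above frames recognition as parameter optimization, with a bottom-up process initializing parameters and a top-down process refining them. A minimal sketch of that control flow, assuming a made-up one-parameter pose problem and a toy fit score rather than the paper's Monte Carlo formulation:

```python
# Bottom-up/top-down sketch: a coarse bottom-up guess seeds a pose
# parameter; top-down local search then refines it against a model-fit
# score. The score function and target angle are invented for illustration.
import random

TRUE_ANGLE = 40.0

def fit_score(angle):
    """Model-image agreement; peaks when the hypothesized pose is correct."""
    return -abs(angle - TRUE_ANGLE)

def recognize_pose(bottom_up_guess, steps=200, step_size=1.0, seed=0):
    rng = random.Random(seed)
    angle = bottom_up_guess                  # bottom-up initialization
    for _ in range(steps):                   # top-down refinement
        proposal = angle + rng.uniform(-step_size, step_size)
        if fit_score(proposal) > fit_score(angle):
            angle = proposal                 # keep only improving proposals
    return angle

pose = recognize_pose(bottom_up_guess=25.0)
```

Because only improving proposals are accepted, the refined pose is never worse than the bottom-up guess, which is the essential guarantee of the combined scheme.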
39

Akbarpour, Mohammadesmaeil, Nasser Mehrshad, and Seyyed-Mohammad Razavi. "Object Recognition Inspiring HVS." Indonesian Journal of Electrical Engineering and Computer Science 12, no. 2 (November 1, 2018): 783. http://dx.doi.org/10.11591/ijeecs.v12.i2.pp783-793.

Full text
Abstract:
Humans recognize objects in complex natural images very fast, within a fraction of a second, and many computational object recognition models are inspired by this powerful ability. The Human Visual System (HVS) recognizes objects through several processing layers, known as a hierarchical model. Due to the amazing complexity of the HVS and the connections in the visual pathway, computational modeling of the HVS directly from its physiology is not possible, so it is treated as a set of blocks and each block is modeled separately. One model inspired by the HVS is HMAX, whose main problem is that patches are selected at random. As HMAX is a hierarchical model, it can be enhanced by enhancing each layer separately. In this paper, instead of random patch extraction, Desirable Patches for HMAX (DPHMAX) are extracted. When extracting patches, the HVS first selects patches with more information; to simulate this block, patches with higher variance are selected. The HVS then chooses patches with more similarity within a class; a further algorithm simulates this block. For evaluation, the Caltech 5 and Caltech101 datasets are used. Results show that the proposed method (DPHMAX) provides a significant performance gain over HMAX and other models with the same framework.
APA, Harvard, Vancouver, ISO, and other styles
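The first selection step described above, preferring high-variance (more informative) patches over randomly sampled ones, is easy to sketch. This is an illustrative toy of my own, with made-up 2x2 patches, not the paper's DPHMAX pipeline:

```python
# Variance-based patch selection sketch: among candidate patches, keep the
# highest-variance ones instead of sampling at random (the HMAX default).

def variance(patch):
    """Population variance of all pixel values in a patch."""
    vals = [v for row in patch for v in row]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def select_patches(patches, keep):
    """Keep the `keep` most informative (highest-variance) patches."""
    return sorted(patches, key=variance, reverse=True)[:keep]

flat   = [[5, 5], [5, 5]]   # variance 0: uniform, carries no structure
edge   = [[0, 9], [0, 9]]   # high variance: likely an edge
noise  = [[4, 6], [5, 5]]   # slight variation
chosen = select_patches([flat, edge, noise], keep=2)
```

The uniform patch is discarded first, matching the intuition that flat regions contribute little to a recognition dictionary.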
40

Kjellström, Hedvig, Javier Romero, and Danica Kragić. "Visual object-action recognition: Inferring object affordances from human demonstration." Computer Vision and Image Understanding 115, no. 1 (January 2011): 81–90. http://dx.doi.org/10.1016/j.cviu.2010.08.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

A. El Sayad, Ismail. "A New Visual Vocabulary for Object Recognition." International Journal of New Computer Architectures and their Applications 8, no. 3 (2018): 125–28. http://dx.doi.org/10.17781/p002449.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Schurgin, Mark W., and Jonathan I. Flombaum. "Exploiting core knowledge for visual object recognition." Journal of Experimental Psychology: General 146, no. 3 (March 2017): 362–75. http://dx.doi.org/10.1037/xge0000270.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Vinken, Kasper, and Gabriel Kreiman. "Adaptation in models of visual object recognition." Journal of Vision 19, no. 10 (September 6, 2019): 210a. http://dx.doi.org/10.1167/19.10.210a.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Singer, J. M., and G. Kreiman. "Short temporal asynchrony disrupts visual object recognition." Journal of Vision 14, no. 5 (May 12, 2014): 7. http://dx.doi.org/10.1167/14.5.7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Boucart, M., M. Fabre-Thorpe, S. Thorpe, C. Arndt, and J. C. Hache. "Covert object recognition at large visual eccentricity." Journal of Vision 1, no. 3 (March 14, 2010): 471. http://dx.doi.org/10.1167/1.3.471.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Kravitz, Dwight J., Latrice D. Vinson, and Chris I. Baker. "How position dependent is visual object recognition?" Trends in Cognitive Sciences 12, no. 3 (March 2008): 114–22. http://dx.doi.org/10.1016/j.tics.2007.12.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Huang, Yongzhen, Zifeng Wu, Liang Wang, and Chunfeng Song. "Multiple spatial pooling for visual object recognition." Neurocomputing 129 (April 2014): 225–31. http://dx.doi.org/10.1016/j.neucom.2013.09.037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Helbig, Hannah Barbara, Jasmin Steinwender, Markus Graf, and Markus Kiefer. "Action observation can prime visual object recognition." Experimental Brain Research 200, no. 3-4 (August 8, 2009): 251–58. http://dx.doi.org/10.1007/s00221-009-1953-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Pelli, Denis G., and Patrick Cavanagh. "Object Recognition: Visual Crowding from a Distance." Current Biology 23, no. 11 (June 2013): R478–R479. http://dx.doi.org/10.1016/j.cub.2013.04.022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Torii, Akihiko, Michal Havlena, and Tomáš Pajdla. "Omnidirectional Image Stabilization for Visual Object Recognition." International Journal of Computer Vision 91, no. 2 (May 21, 2010): 157–74. http://dx.doi.org/10.1007/s11263-010-0350-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles