Academic literature on the topic 'Perception of Occlusion'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, book chapters, conference papers, reports, and other scholarly sources on the topic 'Perception of Occlusion.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Perception of Occlusion"

1

Nakashima, Ryoichi, and Takatsune Kumada. "Peripersonal versus extrapersonal visual scene information for egocentric direction and position perception." Quarterly Journal of Experimental Psychology 71, no. 5 (January 1, 2018): 1090–99. http://dx.doi.org/10.1080/17470218.2017.1310267.

Abstract:
When perceiving the visual environment, people simultaneously perceive their own direction and position in the environment (i.e., egocentric spatial perception). This study investigated what visual information in a scene is necessary for egocentric spatial perceptions. In two perception tasks (the egocentric direction and position perception tasks), observers viewed two static road images presented sequentially. In Experiment 1, the critical manipulation involved an occluded region in the road image, an extrapersonal region (far-occlusion) and a peripersonal region (near-occlusion). Egocentric direction perception was worse in the far-occlusion condition than in the no-occlusion condition, and egocentric position perceptions were worse in the far- and near-occlusion conditions than in the no-occlusion condition. In Experiment 2, we conducted the same tasks manipulating the observers’ gaze location in a scene—an extrapersonal region (far-gaze), a peripersonal region (near-gaze) and the intermediate region between the former two (middle-gaze). Egocentric direction perception performance was the best in the far-gaze condition, and egocentric position perception performances were not different among gaze location conditions. These results suggest that egocentric direction perception is based on fine visual information about the extrapersonal region in a road landscape, and egocentric position perception is based on information about the entire visual scene.
2

Xu, Jie, Hanyuan Wang, Mingzhu Xu, Fan Yang, Yifei Zhou, and Xiaolong Yang. "Feature-Enhanced Occlusion Perception Object Detection for Smart Cities." Wireless Communications and Mobile Computing 2021 (March 29, 2021): 1–14. http://dx.doi.org/10.1155/2021/5544194.

Abstract:
Object detection is widely used in smart cities, including safety monitoring, traffic control, and autonomous driving. However, in smart city scenarios many objects are partially occluded, and most popular object detectors are sensitive to such real-world occlusions. This paper proposes a feature-enhanced occlusion perception object detector that simultaneously detects occluded objects and fully utilizes spatial information. To generate hard examples with occlusions, a mask generator localizes and masks discriminative regions using weakly supervised methods. To obtain enriched feature representations, we design a multiscale representation fusion module that combines hierarchical feature maps. The method also exploits contextual information by stacking representations from different regions of the feature maps. The model is trained end-to-end by minimizing a multitask loss. Our model outperforms previous object detectors, achieving 77.4% mAP and 74.3% mAP on PASCAL VOC 2007 and PASCAL VOC 2012, respectively, and 24.6% mAP on MS COCO. Experiments demonstrate that the proposed method improves the effectiveness of object detection, making it well suited to smart city applications that need to discover key objects under occlusion.
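For illustration, the "hide the most discriminative region" idea used to generate occlusion-style hard examples can be pictured with a minimal sketch. It assumes a precomputed class-activation or saliency heatmap is available; the function name, patch size, and fill value are hypothetical and are not the paper's implementation.

```python
import numpy as np

def mask_discriminative_region(image, heatmap, patch=32, fill=0.0):
    """Hide the image patch centred on the heatmap peak.

    image:   H x W x C float array
    heatmap: H x W saliency / class-activation map (higher = more discriminative)
    Returns a copy of the image with its most discriminative patch blanked out,
    usable as an occlusion-style hard training example.
    """
    h, w = heatmap.shape
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)  # peak location
    y0, y1 = max(0, y - patch // 2), min(h, y + patch // 2)
    x0, x1 = max(0, x - patch // 2), min(w, x + patch // 2)
    masked = image.copy()
    masked[y0:y1, x0:x1, :] = fill                               # occlude the patch
    return masked
```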
3

Lee, Abigail, Robert Allison, and Laurie Wilcox. "Depth perception from successive occlusion." Journal of Vision 21, no. 9 (September 27, 2021): 1963. http://dx.doi.org/10.1167/jov.21.9.1963.

4

Palmer, E. M., and P. J. Kellman. "(Mis)Perception of motion and form after occlusion: Anorthoscopic perception revisited." Journal of Vision 3, no. 9 (March 16, 2010): 251. http://dx.doi.org/10.1167/3.9.251.

5

Ramachandran, V. S., V. Inada, and G. Kiama. "Perception of illusory occlusion in apparent motion." Vision Research 26, no. 10 (January 1986): 1741–49. http://dx.doi.org/10.1016/0042-6989(86)90061-1.

6

Vallortigara, Giorgio, and Paola Bressan. "Occlusion and the perception of coherent motion." Vision Research 31, no. 11 (January 1991): 1967–78. http://dx.doi.org/10.1016/0042-6989(91)90191-7.

7

Häkkinen, Jukka, and Göte Nyman. "Occlusion Constraints and Stereoscopic Slant." Perception 26, no. 1 (January 1997): 29–38. http://dx.doi.org/10.1068/p260029.

Abstract:
In binocular vision horizontal magnification of one retinal image leads to a percept of three-dimensional slant around a vertical axis. It is demonstrated that the perception of slant is diminished when an occlusion interpretation is possible. A frontoparallel plane located in the immediate vicinity of a slanted surface in a location which allows a perception of occlusion reduces the magnitude of perceived slant significantly. When the same plane is placed on the other side, the slant perception is normal because there is no alternative occlusion interpretation. The results indicate that a common border between the occluder and a slanted surface is not a necessary condition for the reduction effect. If the edges are displaced and the edge of the slanted surface is placed in a location in which it could be occluded, the effect still appears.
8

Ono, Hiroshi, Brian J. Rogers, Masao Ohmi, and Mika E. Ono. "Dynamic Occlusion and Motion Parallax in Depth Perception." Perception 17, no. 2 (April 1988): 255–66. http://dx.doi.org/10.1068/p170255.

Abstract:
Random-dot techniques were used to examine the interactions between the depth cues of dynamic occlusion and motion parallax in the perception of three-dimensional (3-D) structures, in two different situations: (a) when an observer moved laterally with respect to a rigid 3-D structure, and (b) when surfaces at different distances moved with respect to a stationary observer. In condition (a), the extent of accretion/deletion (dynamic occlusion) and the amount of relative motion (motion parallax) were both linked to the motion of the observer. When the two cues specified opposite, and therefore contradictory, depth orders, the perceived order in depth of the simulated surfaces was dependent on the magnitude of the depth separation. For small depth separations, motion parallax determined the perceived order, whereas for large separations it was determined by dynamic occlusion. In condition (b), where the motion parallax cues for depth order were inherently ambiguous, depth order was determined principally by the unambiguous occlusion information.
9

Andersen, George J., and James M. Cortese. "2-D contour perception resulting from kinetic occlusion." Perception & Psychophysics 46, no. 1 (January 1989): 49–55. http://dx.doi.org/10.3758/bf03208073.

10

Gillam, B. "Shape and meaning in the perception of occlusion." Journal of Vision 7, no. 9 (March 23, 2010): 608. http://dx.doi.org/10.1167/7.9.608.


Dissertations / Theses on the topic "Perception of Occlusion"

1

Daniels, Victoria. "Studies of occlusion and associated illusions." Thesis, University of Exeter, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241130.

2

Duncan, Robert O. "Occlusion and the interpretation of visual motion: perceptual, oculomotor, and neuronal effects of context." Diss., University of California, San Diego, 1999. http://wwwlib.umi.com/cr/ucsd/fullcit?p9956445.

3

Kelso, Carl Ryan. "Direct occlusion handling for high level image processing algorithms." Online version of thesis, 2009. http://hdl.handle.net/1850/9497.

4

Min, Rui. "Reconnaissance de visage robuste aux occultations." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0020/document.

Abstract:
Face recognition is an important technology in computer vision, which often acts as an essential component in biometric systems, HCI systems, access control systems, multimedia indexing applications, etc. Partial occlusion, which significantly changes the appearance of part of a face, can not only cause a large performance deterioration in face recognition but can also raise severe security issues. In this thesis, we focus on the occlusion problem in automatic face recognition in non-controlled environments. Toward this goal, we propose a framework that applies explicit occlusion analysis and processing to improve face recognition under different occlusion conditions. We demonstrate that the proposed framework is more efficient than the methods from the literature based on non-explicit occlusion treatment. We identify two new types of facial occlusion, namely sparse occlusion and dynamic occlusion, and present solutions to handle these newly identified occlusion problems in a more advanced surveillance context. Recently, the emerging Kinect sensor has been successfully applied in many computer vision fields; we introduce this new sensor in the context of face recognition, particularly in the presence of occlusions, and demonstrate its efficiency compared with traditional 2D cameras. Finally, we propose two approaches, based on 2D and 3D, to improve the baseline face recognition techniques; improving the baseline methods also has a positive impact on recognition results when partial occlusion occurs.
5

Lindsey, David H. "Orthodontists' and Parents' Perspective of Occlusion in Varying Anterior-Posterior Positions: A Comparative Study." VCU Scholars Compass, 2017. http://scholarscompass.vcu.edu/etd/4758.

Abstract:
Objective: The purpose was to compare orthodontists’ and parents’ perception of orthodontic treatment outcomes in the anterior-posterior (AP) dimension. Assessment of treatment time and compliance were also investigated. Material and Methods: Parallel surveys for orthodontists (n=1000) and parents (n=750) displayed occlusions from 3 mm Class III (Cl III:3) to 3 mm Class II. Participants rated occlusal relationships on a 100 mm VAS from least to most acceptable (0-100). Results: 233 orthodontists (23%) and 243 parents (32%) responded. Orthodontists (mean=93.9, 25.9) and parents (mean=80.7, 40.9) rated Class I (Cl I) occlusion most and Cl III:3 least acceptable. No significant difference was found between outcomes at 18 months versus 24 months. For all cases, parents were willing to extend treatment duration longer than orthodontists. Conclusions: Orthodontists and parents viewed treatment outcomes in the AP dimension differently, rating Cl I as most acceptable. Parents were willing to extend treatment longer than orthodontists.
6

Barnes, Timothy. "Visual depth perception from texture accretion and deletion: a neural model of figure-ground segregation and occlusion." Thesis, Boston University, 2012. https://hdl.handle.net/2144/31504.

Abstract:
Freezing is an effective defense strategy for some prey, because their predators rely on visual motion to distinguish objects from their surroundings. An object moving over a background progressively covers (deletes) and uncovers (accretes) background texture while simultaneously producing discontinuities in the optic flow field. These events unambiguously specify kinetic occlusion and can produce a crisp edge, depth perception, and figure-ground segregation between identically textured surfaces -- percepts which all disappear without motion. Given two abutting regions of uniform random texture with different motion velocities, one region will appear to be situated farther away and behind the other (i.e., the ground), if its texture is accreted or deleted at the boundary between the regions, irrespective of region and boundary velocities. Consequently, a region with moving texture appears farther away than a stationary region if the boundary is stationary, but it appears closer (i.e. the figure) if the boundary is moving coherently with the moving texture. The perception of kinetic occlusion requires the detection of an unexpected onset or offset of otherwise predictably moving or stationary contrast patches. A computational model of directional selectivity in visual cells is here extended to also detect motion onsets and offsets. The connectivity of these model cells not only affords the detection of local texture accretion and deletion events but also explains results showing that human reaction times differ for motion onsets versus offsets. These theorized cells are placed into a larger computational model of visual areas V1 and V2 to show how interactions between orientation- and direction-selective cells first create a motion-defined boundary and then signal texture accretion or deletion at that boundary. A weak speed-depth bias brings faster-moving texture regions forward in depth. This is consistent with percepts: the faster of two surfaces appears closer when moving parallel to the resulting emergent boundary between them (shearing motion). Activation of model occlusion detectors tuned to a particular velocity results in the model assigning the adjacent surface with a matching velocity to the far depth. These processes together reproduce human psychophysical reports of depth ordering for a representative set of all kinetic occlusion displays.
7

Chambers, Destinee L. "Understanding Occlusion Inhibition: A Study of the Visual Processing of Superimposed Figures." Amherst, Mass. : University of Massachusetts Amherst, 2009. http://scholarworks.umass.edu/open_access_dissertations/6/.

8

Filiz, Anil Yigit. "A New Approach For Better Load Balancing Of Visibility Detection And Target Acquisition Calculations." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612255/index.pdf.

Abstract:
Calculating the visual perception of entities in simulations requires complex intersection tests between the line of sight and the virtual world. In this study, we focus on outdoor environments consisting of a terrain and various objects located on it. Using hardware capabilities of graphics cards, such as occlusion queries, provides a fast method for implementing these tests. In this thesis, we introduce an approach for better load balancing of visibility detection and target acquisition calculations through the use of occlusion queries. Our results show that the proposed approach is 1.5 to 2 times more efficient than existing algorithms on average.
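For context, the intersection tests that such GPU occlusion queries accelerate can be pictured with a minimal CPU-side sketch of a line-of-sight check against a heightmap terrain. This is an assumed simplification for illustration only, not the thesis's method; all names and parameters are hypothetical.

```python
import numpy as np

def line_of_sight(height_map, cell_size, eye, target, samples=200):
    """Return True if `target` is visible from `eye` over a heightmap terrain.

    height_map: 2D array of terrain elevations (rows = y, cols = x)
    cell_size:  width of one grid cell in world units
    eye, target: (x, y, z) points in world coordinates
    The ray is sampled at regular intervals; visibility fails as soon as the
    terrain rises above the ray, i.e. the line of sight is occluded.
    """
    eye, target = np.asarray(eye, float), np.asarray(target, float)
    for t in np.linspace(0.0, 1.0, samples)[1:-1]:
        p = eye + t * (target - eye)                 # sample point along the ray
        i = int(round(p[1] / cell_size))             # grid row (y)
        j = int(round(p[0] / cell_size))             # grid column (x)
        if 0 <= i < height_map.shape[0] and 0 <= j < height_map.shape[1]:
            if height_map[i, j] > p[2]:              # terrain blocks the ray
                return False
    return True
```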
9

Patel, Nimisha Bhanuprasad. "Investigations into the neurophysiological basis of respiratory perception in humans using transient inspiratory occlusions." Thesis, Keele University, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.491697.

Abstract:
In humans, breathing is an essential behaviour for life. It is recognized that humans and animals can perceive or sense their breathing, although the actual cortical and subcortical structures by which this occurs remain unknown. The processing of these sensations presumably arises from afferent information originating from mechanoreceptors within the muscles of the upper and lower airways, lungs and chest wall. This information is integrated by the central nervous system, which leads to a perception of respiratory sensations at the cortex, although the specific contributions of these sources remain unknown. In addition, distressing respiratory sensations such as breathlessness (dyspnoea) and hyperinflation occur in individuals exhibiting pulmonary disease, such as asthma and chronic obstructive pulmonary disease (COPD). In response to these sensations, individuals can also voluntarily or behaviourally adjust their breathing. Hence, the aim of this thesis was to investigate: (i) the modulation of respiratory-related sensory activity measured from the cortex in humans, using electroencephalography, in response to applications of transient inspiratory occlusions (TIOs) during hyperinflation, during voluntary breathing, and in tracheostomy patients who lack an upper airway; and (ii) the cortical and subcortical structures mediating the response to the TIO, using functional magnetic resonance imaging (fMRI). The results of these studies show that (i) voluntary breathing modulates respiratory perception, whereas perception is unaffected in tracheostomy patients and in hyperinflated states in response to TIOs; and (ii) TIOs can also generate cortical and subcortical activity, specifically activating sensory-motor structures including the primary motor cortex, primary somatosensory cortex, supplementary motor area, inferior parietal areas, thalamus and cerebellum. In conclusion, respiratory perception (i) is altered by voluntary breathing; (ii) is unaffected in hyperinflated and tracheostomized states; and (iii) can be investigated using fMRI through the application of TIOs.
10

Djezzar, Linda. "Contribution à l'étude acoustico-perceptive des occlusives du français." Nancy 1, 1995. http://www.theses.fr/1995NAN10009.

Abstract:
Various studies have shown that the identification of stop consonants relies on numerous, redundant cues provided mainly by the release burst and the formant transitions. Nevertheless, in existing speech recognition systems, the recognition rates for palato-velar stops in front-vowel contexts and for dental stops in labialized contexts are still unsatisfactory. This thesis is an acoustic-perceptual study aimed at a better understanding of the discriminative power of the release burst for recognizing the consonantal place of articulation. The study comprises three parts. First, perception experiments on release bursts extracted from French voiceless stops, whose goal was to test listeners' ability to identify the consonantal place of articulation independently of the following vowel; particular attention was paid to the spectral characteristics of the burst and to the effect of explicit knowledge of the vowel on listeners' performance. Second, an acoustic description of the main acoustic cues provided by the release burst, intended to test the robustness of each cue against contextual and speaker variation; to this end, a multi-speaker acoustic-phonetic database covering all vowel contexts was developed. Third, the implementation of a hybrid acoustic-phonetic decoder for stop consonants integrating the results obtained in the two preceding parts.

Books on the topic "Perception of Occlusion"

1

Oluikpe, George C. Visual guidance of locomotion: Occlusion of background by intervening object as information about the imminence of collision with the object. 1987.

2

Anderson, Barton L. A Layered Experience of Lightness and Color. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780199794607.003.0037.

Abstract:
One of the fundamental debates about our experience of lightness and color involves their representational format. Some theories assert that the visual system decomposes the input into a layered representation of separated causes, whereas other theories do not. This chapter presents a variety of phenomena that directly demonstrate that layered image decompositions can play a causal role in our experience of lightness and color, and it discusses the theoretical implications and unresolved issues raised by these effects. The relationship between transparency and occlusion is discussed, as is the relevance of transparency phenomena to the broader, still unresolved problem of lightness and color perception.

Book chapters on the topic "Perception of Occlusion"

1

Song, Yang, Luis Goncalves, and Pietro Perona. "Monocular Perception of Biological Motion - Clutter and Partial Occlusion." In Lecture Notes in Computer Science, 719–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-45053-x_46.

2

Hund, Marcus, and Bärbel Mertsching. "Occlusion as a Monocular Depth Cue Derived from Illusory Contour Perception." In KI 2009: Advances in Artificial Intelligence, 97–105. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04617-9_13.

3

Amin, Akhter Al, Saad Hassan, and Matt Huenerfauth. "Effect of Occlusion on Deaf and Hard of Hearing Users’ Perception of Captioned Video Quality." In Lecture Notes in Computer Science, 202–20. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78095-1_16.

4

"The Conditions for Perceiving Dynamic Occlusion of a Line." In Indirect Perception. The MIT Press, 1997. http://dx.doi.org/10.7551/mitpress/3727.003.0028.

5

Rogers, Brian. "5. Perception of a 3-D world." In Perception: A Very Short Introduction, 59–83. Oxford University Press, 2017. http://dx.doi.org/10.1093/actrade/9780198791003.003.0005.

Abstract:
The ability to perceive the 3-D world has often been regarded as a task that poses particular problems for the visual system. However, ‘Perception of a 3-D world’ argues that we are particularly fortunate because there are multiple sources of information to tell us about the different aspects of the 3-D structure of objects. It discusses three of these sources of information—perspective, occlusion, and shading—and then explains motion parallax, optic flow, binocular stereopsis, eye vergence and depth constancy, vertical disparities and differential perspective, and primary and secondary depth cues. The effectiveness of these different sources of 3-D information is considered along with how they are all brought together.
6

Craton, Lincoln G., and Albert Yonas. "Chapter 2 The Role of Motion in Infants' Perception of Occlusion." In The Development of Attention: Research and Theory, 21–46. Elsevier, 1990. http://dx.doi.org/10.1016/s0166-4115(08)60449-5.

7

Fein, Elizabeth. "The Pathogen and the Package." In Living on the Spectrum, 133–65. NYU Press, 2020. http://dx.doi.org/10.18574/nyu/9781479864355.003.0006.

Abstract:
This chapter charts a “neurodevelopmental turn” in psychiatric diagnosis away from an understanding of mental illnesses as discrete disease categories and toward the assessment and remediation of capacities, assumed to be both neural and universal and arrayed along a spectrum, for perception, memory, attention, learning, and sociality. In this time of paradigm shift, multiple diagnostic entities are produced that coexist under the same name. Different traditions in contemporary biomedicine—one focused on detecting and eliminating discrete disease entities and the other seeking to map and modulate comprehensive “connectomic” systems—produce different but overlapping “autisms”: a “pathogen model” of autism as separable, exclusively negative, and damaging to the self, on one hand, and a “package model” of autism that is an inseparable and constitutive element of personhood with both valued and troubling aspects, on the other. Through a process referred to here as “divided medicalization,” the former is misrepresented as the latter: complex, multivalent neurodevelopmental conditions are produced and then reduced to fit within a preexisting, disease-oriented clinical paradigm. Through divided medicalization, the former comes to stand in for the latter, allowing for the occlusion and potentially the suppression of autism’s multivalent, aesthetic, and identitarian dimensions.
8

Vega, Julio, Eduardo Perdices, and José María Cañas. "Attentive Visual Memory for Robot Localization." In Robotic Vision, 406–36. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2672-0.ch021.

Abstract:
Cameras are among the most relevant sensors in autonomous robots. Two challenges with them are extracting useful information from captured images and managing the small field of view of regular cameras. This chapter proposes a visual perceptive system for a robot with an on-board mobile camera that copes with these two issues. The system is composed of a dynamic visual memory that stores the information gathered from images, an attention system that continuously chooses where to look, and a visual evolutionary localization algorithm that uses the visual memory as input. The visual memory is a collection of relevant task-oriented objects and 3D segments. Its scope and persistence are wider than the camera's field of view, so it provides more information about the robot's surroundings and more robustness to occlusions than the current image alone. The control software takes its contents into account when making behavior or navigation decisions. The attention system weighs the need to reobserve objects already stored, to explore new areas, and to test hypotheses about objects in the robot's surroundings. A robust evolutionary localization algorithm has been developed that can use either the current instantaneous images or the visual memory. The system has been implemented, and several experiments with simulated and real robots (a wheeled Pioneer and a Nao humanoid) validate it.
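The attention policy summarized above (re-observe stored objects, explore unseen areas, test object hypotheses) can be pictured with a toy gaze scheduler. The weights, field names, and scoring rule below are assumptions for illustration only, not the chapter's algorithm.

```python
import time
from dataclasses import dataclass

@dataclass
class GazeCandidate:
    pan_tilt: tuple        # camera angles needed to look at this region
    last_seen: float       # timestamp of last observation (0.0 = never observed)
    is_hypothesis: bool    # an object hypothesis that still needs confirmation

def choose_next_gaze(candidates, now=None, w_stale=1.0, w_explore=20.0, w_test=10.0):
    """Pick the region whose observation is currently most valuable."""
    now = time.time() if now is None else now
    def urgency(c):
        staleness = min(now - c.last_seen, 30.0) if c.last_seen > 0 else 0.0
        explore = w_explore if c.last_seen == 0 else 0.0   # never-seen area
        test = w_test if c.is_hypothesis else 0.0          # unconfirmed object
        return w_stale * staleness + explore + test
    return max(candidates, key=urgency)
```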

Conference papers on the topic "Perception of Occlusion"

1

Cuiral-Zueco, Ignacio, and Gonzalo Lopez-Nicolas. "Dynamic Occlusion Handling for Real Time Object Perception." In 2020 5th International Conference on Robotics and Automation Engineering (ICRAE). IEEE, 2020. http://dx.doi.org/10.1109/icrae50850.2020.9310850.

2

Lee, Szu-Han, Ya-Ting Chou, and Chih-Wei Tang. "Human perception inspired occlusion detection for stereo vision." In 2015 Picture Coding Symposium (PCS). IEEE, 2015. http://dx.doi.org/10.1109/pcs.2015.7170040.

3

Lesniak, Kevin, and Conrad S. Tucker. "Real-Time Occlusion Between Real and Digital Objects in Augmented Reality." In ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/detc2018-86346.

Abstract:
The method presented in this work reduces the frequency of virtual objects incorrectly occluding real-world objects in Augmented Reality (AR) applications. Current AR rendering methods cannot properly represent occlusion between real and virtual objects because the objects are not represented in a common coordinate system. These occlusion errors can lead users to have an incorrect perception of the environment around them when using an AR application, namely not knowing a real-world object is present due to a virtual object incorrectly occluding it and incorrect perception of depth or distance by the user due to incorrect occlusions. The authors of this paper present a method that brings both real-world and virtual objects into a common coordinate system so that distant virtual objects do not obscure nearby real-world objects in an AR application. This method captures and processes RGB-D data in real-time, allowing the method to be used in a variety of environments and scenarios. A case study shows the effectiveness and usability of the proposed method to correctly occlude real-world and virtual objects and provide a more realistic representation of the combined real and virtual environments in an AR application. The results of the case study show that the proposed method can detect at least 20 real-world objects with potential to be incorrectly occluded while processing and fixing occlusion errors at least 5 times per second.
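The per-pixel depth comparison that such a common coordinate system makes possible can be sketched as follows, assuming an aligned real-world depth map from the RGB-D sensor and a rendered depth map for the virtual content. The array names and compositing rule are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def composite_with_occlusion(camera_rgb, virtual_rgb, real_depth, virtual_depth):
    """Overlay rendered virtual content on a camera frame with correct occlusion.

    camera_rgb, virtual_rgb: H x W x 3 colour images (pixels with no virtual
        content can carry an infinite virtual_depth)
    real_depth, virtual_depth: H x W depth maps in metres, aligned to the frame
    A virtual pixel is drawn only where the virtual surface is closer to the
    camera than the real surface, so nearby real objects hide distant virtual ones.
    """
    show_virtual = virtual_depth < real_depth        # per-pixel visibility test
    out = camera_rgb.copy()
    out[show_virtual] = virtual_rgb[show_virtual]    # keep real pixels elsewhere
    return out
```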
4

Franz, Arthur, and Jochen Triesch. "Modeling the development of causality and occlusion perception in infants." In 2008 7th IEEE International Conference on Development and Learning (ICDL 2008). IEEE, 2008. http://dx.doi.org/10.1109/devlrn.2008.4640825.

5

Eidenberger, Robert, Raoul Zoellner, and Josef Scharinger. "Probabilistic occlusion estimation in cluttered environments for active perception planning." In 2009 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). IEEE, 2009. http://dx.doi.org/10.1109/aim.2009.5229779.

6

Zhang, Zixu, and Jaime Fisac. "Safe Occlusion-Aware Autonomous Driving via Game-Theoretic Active Perception." In Robotics: Science and Systems 2021. Robotics: Science and Systems Foundation, 2021. http://dx.doi.org/10.15607/rss.2021.xvii.066.

7

Idesawa, Masanori, and Qi Zhang. "Occlusion cues and sustaining cues in 3D illusory object perception with binocular viewing." In AeroSense '97, edited by Steven K. Rogers. SPIE, 1997. http://dx.doi.org/10.1117/12.271541.

8

Fukiage, Taiki, Takeshi Oishi, and Katsushi Ikeuchi. "Reduction of contradictory partial occlusion in mixed reality by using characteristics of transparency perception." In 2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2012. http://dx.doi.org/10.1109/ismar.2012.6402549.

9

Jain, Pranav, and Conrad Tucker. "Mobile Based Real-Time Occlusion Between Real and Digital Objects in Augmented Reality." In ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/detc2019-98440.

Abstract:
In this paper, a mobile-based augmented reality (AR) method is presented that is capable of accurate occlusion between digital and real-world objects in real-time. AR occlusion is the process of hiding or showing virtual objects behind physical ones. Existing approaches that address occlusion in AR applications typically require the use of markers or depth sensors, coupled with compute machines (e.g., laptop or desktop). Furthermore, real-world environments are cluttered and contain motion artifacts that result in occlusion errors and improperly rendered virtual objects, relative to the real world environment. These occlusion errors can lead users to have an incorrect perception of the environment around them while using an AR application, namely not knowing a real-world object is present. Moving the technology to mobile-based AR environments is necessary to reduce the cost and complexity of these technologies. This paper presents a mobile-based AR method that brings real and virtual objects into a similar coordinate system so that virtual objects do not obscure nearby real-world objects in an AR environment. This method captures and processes visual data in real-time, allowing the method to be used in a variety of non-static environments and scenarios. The results of the case study show that the method has the potential to reduce compute complexity, maintain high frame rates to run in real-time, and maintain occlusion efficacy.
10

Fiorentino, Michele, Antonio E. Uva, and Giuseppe Monno. "Product Manufacturing Information Management in Interactive Augmented Technical Drawings." In ASME 2011 World Conference on Innovative Virtual Reality. ASMEDC, 2011. http://dx.doi.org/10.1115/winvr2011-5516.

Abstract:
This work presents a novel Augmented Reality (AR) application that superimposes interactive Product Manufacturing Information (PMI) onto paper technical drawings. We augment drawings with contextual data and use a novel tangible interface to access the data in a natural way. We present an optimized PMI data visualization algorithm for CAD models that avoids model and annotation clutter. Our algorithm ranks the model faces carrying technical annotations according to angle, distance, occlusion, and area. The number of annotations visualized on the 3D model is chosen following cognitive perception theory to avoid information overload. We also extend the navigation metaphor with the concept of tangible model navigation and flipping using the duplex drawing. As case studies we used annotated models from ASME standards. By using PC hardware and common paper drawings, this approach can be integrated at low cost into existing industrial processes.
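The face-ranking step described in the abstract can be pictured with a small scoring sketch. The field names, weights, and cut-off below are hypothetical and only illustrate ranking annotated faces by angle, distance, occlusion, and area; they are not the authors' actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class AnnotatedFace:
    view_angle: float   # angle between face normal and view direction, degrees
    distance: float     # distance from the viewer, normalised to [0, 1]
    occlusion: float    # fraction of the face hidden by other geometry, [0, 1]
    area: float         # projected screen area, normalised to [0, 1]

def rank_annotated_faces(faces, max_labels=5):
    """Return the faces best suited to showing PMI labels, best first."""
    def penalty(f):
        # Prefer frontal, close, unoccluded, large faces (assumed equal weights).
        return f.view_angle / 90.0 + f.distance + f.occlusion + (1.0 - f.area)
    return sorted(faces, key=penalty)[:max_labels]
```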

Reports on the topic "Perception of Occlusion"

1

Finkel, Leif H. Visual Perception of Depth-from-Occlusion: A Neural Network Model. Fort Belvoir, VA: Defense Technical Information Center, July 1992. http://dx.doi.org/10.21236/ada253343.

2

Finkel, Leif H. Visual Perception of Depth-from-Occlusion: A Neural Network Model. Fort Belvoir, VA: Defense Technical Information Center, December 1990. http://dx.doi.org/10.21236/ada249771.

3

Finkel, Leif H. Visual Perception of Depth from Occlusion: A Neural Network Model. Fort Belvoir, VA: Defense Technical Information Center, January 1992. http://dx.doi.org/10.21236/ada249035.
