Academic literature on the topic 'Prosopagnosia, Self recognition, Eye movements'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Prosopagnosia, Self recognition, Eye movements.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Prosopagnosia, Self recognition, Eye movements"

1. Adachi, Tomomi, Midori Tokita, and Akira Ishiguchi. "Eye movements during self-face recognition." Proceedings of the Annual Convention of the Japanese Psychological Association 75 (September 15, 2011): 3PM073. http://dx.doi.org/10.4992/pacjpa.75.0_3pm073.

2. Avidan, Galia, and Marlene Behrmann. "Spatial Integration in Normal Face Processing and Its Breakdown in Congenital Prosopagnosia." Annual Review of Vision Science 7, no. 1 (2021): 301–21. http://dx.doi.org/10.1146/annurev-vision-113020-012740.

Abstract:
Congenital prosopagnosia (CP), a life-long impairment in face processing that occurs in the absence of any apparent brain damage, provides a unique model in which to explore the psychological and neural bases of normal face processing. The goal of this review is to offer a theoretical and conceptual framework that may account for the underlying cognitive and neural deficits in CP. This framework may also provide a novel perspective in which to reconcile some conflicting results that permits the expansion of the research in this field in new directions. The crux of this framework lies in linkin
3. Li, Muhua, and James J. Clark. "A Temporal Stability Approach to Position and Attention-Shift-Invariant Recognition." Neural Computation 16, no. 11 (2004): 2293–321. http://dx.doi.org/10.1162/0899766041941907.

Abstract:
Incorporation of visual-related self-action signals can help neural networks learn invariance. We describe a method that can produce a network with invariance to changes in visual input caused by eye movements and covert attention shifts. Training of the network is controlled by signals associated with eye movements and covert attention shifting. A temporal perceptual stability constraint is used to drive the output of the network toward remaining constant across temporal sequences of saccadic motions and covert attention shifts. We use a four-layer neural network model to perform the position-
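The temporal perceptual stability constraint described in this abstract (driving the network's output to stay constant across a saccade or covert attention shift) can be illustrated with a toy penalty term. The sketch below is a minimal NumPy illustration under assumed array shapes and an assumed weighting constant lambda_stability; it is not the authors' four-layer model.

    import numpy as np

    # Illustrative temporal-stability penalty: the squared frame-to-frame change
    # in a network's output over a saccade or attention-shift sequence.
    # outputs: (T, D) array of outputs over T consecutive time steps (assumed shape).
    def temporal_stability_penalty(outputs: np.ndarray) -> float:
        diffs = np.diff(outputs, axis=0)                  # change between consecutive frames
        return float(np.mean(np.sum(diffs ** 2, axis=1)))

    def total_loss(task_loss: float, outputs: np.ndarray, lambda_stability: float = 0.1) -> float:
        # Combined objective: ordinary task loss plus the stability term.
        return task_loss + lambda_stability * temporal_stability_penalty(outputs)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        stable = np.ones((5, 8)) + 0.01 * rng.standard_normal((5, 8))    # nearly constant output
        unstable = rng.standard_normal((5, 8))                           # output that jumps around
        print(temporal_stability_penalty(stable))    # small penalty
        print(temporal_stability_penalty(unstable))  # large penalty

Minimizing such a term alongside the ordinary task loss pushes the representation toward invariance to eye movements and attention shifts, which is the gist of the approach summarized above.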
4. Sun, Xiangyang, and Zihan Cai. "Research on an Eye Control Method Based on the Fusion of Facial Expression and Gaze Intention Recognition." Applied Sciences 14, no. 22 (2024): 10520. http://dx.doi.org/10.3390/app142210520.

Abstract:
With the deep integration of psychology and artificial intelligence technology and other related technologies, eye control technology has achieved certain results at the practical application level. However, it is found that the accuracy of the current single-modal eye control technology is still not high, which is mainly caused by the inaccurate eye movement detection caused by the high randomness of eye movements in the process of human–computer interaction. Therefore, this study will propose an intent recognition method that fuses facial expressions and eye movement information and expects
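The abstract is cut off before it details the fusion mechanism, so the snippet below is only a generic weighted late-fusion sketch, not the authors' method; the intent labels and the weight w_gaze are assumptions made for illustration.

    import numpy as np

    # Generic late fusion of two modality classifiers (illustrative assumption, not the
    # method proposed in the paper): each classifier outputs a probability distribution
    # over the same intent classes, and the fused score is a convex combination of the two.
    INTENT_CLASSES = ["select", "scroll", "back", "no_action"]   # hypothetical labels

    def fuse_intent(p_expression: np.ndarray, p_gaze: np.ndarray, w_gaze: float = 0.6) -> str:
        fused = w_gaze * p_gaze + (1.0 - w_gaze) * p_expression
        return INTENT_CLASSES[int(np.argmax(fused))]

    if __name__ == "__main__":
        p_expr = np.array([0.10, 0.20, 0.30, 0.40])   # facial-expression classifier output
        p_gaze = np.array([0.70, 0.10, 0.10, 0.10])   # gaze-intention classifier output
        print(fuse_intent(p_expr, p_gaze))            # "select"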
5. Wang, Fuwang, Xiaolei Zhang, and Rongrong Fu. "Research on Home-Auxiliary Robot System Based on Characteristics of Human Physiological and Motion Signals." Complexity 2020 (February 11, 2020): 1–13. http://dx.doi.org/10.1155/2020/8195893.

Abstract:
A home-auxiliary robot system based on characteristics of the electrooculogram (EOG) and tongue signal is developed in the current study, which can provide daily life assistance for people with physical mobility disabilities. It relies on five simple actions (blinking twice in a row, tongue extension, upward tongue rolling, and left and right eye movements) of the human head itself to complete the motions (moving up/down/left/right and double-click) of a mouse in the system screen. In this paper, the brain network and BP neural network algorithms are used to identify these five types of action
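As a rough illustration of the action-to-cursor pipeline sketched in this abstract, the snippet below trains a small backpropagation network (scikit-learn's MLPClassifier standing in for the paper's BP network) on synthetic feature vectors and maps the predicted action to a cursor command. The pairing of individual actions with individual commands is an assumption (the abstract lists both sets but not the exact mapping), and the brain-network feature extraction is omitted.

    import numpy as np
    from sklearn.neural_network import MLPClassifier   # generic backprop (BP) network stand-in

    # Five head/eye actions from the abstract mapped to cursor commands.
    # NOTE: the specific action-to-command pairing below is an assumption for illustration.
    ACTION_TO_COMMAND = {
        "double_blink": "double_click",
        "tongue_extension": "move_down",
        "tongue_roll_up": "move_up",
        "eye_left": "move_left",
        "eye_right": "move_right",
    }
    ACTIONS = list(ACTION_TO_COMMAND)

    if __name__ == "__main__":
        # Synthetic stand-ins for EOG/tongue-signal feature vectors, one cluster per action.
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(loc=i, scale=0.2, size=(40, 6)) for i in range(len(ACTIONS))])
        y = np.repeat(np.arange(len(ACTIONS)), 40)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
        predicted = int(clf.predict(rng.normal(loc=3.0, scale=0.2, size=(1, 6)))[0])
        print(ACTION_TO_COMMAND[ACTIONS[predicted]])    # most likely "move_left"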
6. Gold, Daniel. "Nystagmus and Saccadic Intrusions." CONTINUUM: Lifelong Learning in Neurology 31, no. 2 (2025): 503–26. https://doi.org/10.1212/con.0000000000001561.

Abstract:
OBJECTIVE: This article describes the diagnosis and differentiation of the many possible localizations and causes of nystagmus. LATEST DEVELOPMENTS: The eyes move to keep the fovea on the object of visual regard. To account for the movement of targets, the environment, or the self, different classes of eye movement are necessary to achieve visual stability. These movements involve the vergence, smooth pursuit, saccadic, vestibular, and optokinetic systems, as well as the ability to suppress the vestibuloocular reflex and other movements for steady fixation. When the equipoise of one or
7. Golparvar, Ata Jedari, and Murat Kaya Yapici. "Toward graphene textiles in wearable eye tracking systems for human–machine interaction." Beilstein Journal of Nanotechnology 12 (February 11, 2021): 180–89. http://dx.doi.org/10.3762/bjnano.12.14.

Abstract:
The study of eye movements and the measurement of the resulting biopotential, referred to as electrooculography (EOG), may find increasing use in applications within the domain of activity recognition, context awareness, mobile human–computer and human–machine interaction (HCI/HMI), and personal medical devices; provided that, seamless sensing of eye activity and processing thereof is achieved by a truly wearable, low-cost, and accessible technology. The present study demonstrates an alternative to the bulky and expensive camera-based eye tracking systems and reports the development of a graph
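For readers unfamiliar with EOG, a horizontal eye movement shows up as a deflection in the measured biopotential, so a crude detector can be written as a threshold test. The sketch below is a generic illustration, not the paper's graphene-textile signal chain; the threshold value, the polarity convention (positive deflection taken as rightward movement), and the sample values are assumptions.

    import numpy as np

    # Crude threshold detector for left/right eye movements in a horizontal EOG channel.
    # Threshold and polarity (positive deflection = rightward movement) are assumed.
    def detect_horizontal_movements(eog_uv: np.ndarray, threshold_uv: float = 100.0):
        events = []
        for i, sample in enumerate(eog_uv):
            if sample > threshold_uv:
                events.append((i, "right"))
            elif sample < -threshold_uv:
                events.append((i, "left"))
        return events

    if __name__ == "__main__":
        signal = np.array([5.0, 12.0, 150.0, 160.0, 8.0, -140.0, -3.0])  # synthetic samples (microvolts)
        print(detect_horizontal_movements(signal))
        # [(2, 'right'), (3, 'right'), (5, 'left')]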
8. Palmisano, Stephen, Juno Kim, Robert Allison, and Frederick Bonato. "Simulated Viewpoint Jitter Shakes Sensory Conflict Accounts of Vection." Seeing and Perceiving 24, no. 2 (2011): 173–200. http://dx.doi.org/10.1163/187847511x570817.

Abstract:
Sensory conflict has been used to explain the way we perceive and control our self-motion, as well as the aetiology of motion sickness. However, recent research on simulated viewpoint jitter provides a strong challenge to one core prediction of these theories — that increasing sensory conflict should always impair visually induced illusions of self-motion (known as vection). These studies show that jittering self-motion displays (thought to generate significant and sustained visual–vestibular conflict) actually induce superior vection to comparable non-jittering displays (thought to ge
9. Gaussier, P., C. Joulain, A. Revel, and J. P. Cocquerez. "How Acting Allows to Segregate Objects in a Visual Scene." Perception 25, no. 1_suppl (1996): 54. http://dx.doi.org/10.1068/v96l1110.

Abstract:
Our purpose is to allow an autonomous robot to find and to categorise objects in a visual scene according to the actions it performs. The robot information comes from a CCD gray-level camera. The edges are extracted and a simple DOG filter is used to find ‘corner’-like forms in the image. These positions are used as possible focus points. The robot eye performs saccadic movements on the whole visual scene. A log-polar transform of the image is performed in the neighbourhood of the focus points to mimic the projection of the retina on the primary cortical areas. It simplifies object recognition
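Two of the processing steps named in this abstract, a difference-of-Gaussians (DOG) filter to find corner-like focus points and a log-polar transform of the neighbourhood of a focus point, can be sketched in a few lines. This is a rough NumPy/SciPy illustration with invented parameters, not the robot system described by the authors.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Difference-of-Gaussians response; its strong extrema serve as candidate focus points.
    def dog(image: np.ndarray, sigma_small: float = 1.0, sigma_large: float = 3.0) -> np.ndarray:
        return gaussian_filter(image, sigma_small) - gaussian_filter(image, sigma_large)

    # Log-polar sampling around a focus point, mimicking the retino-cortical projection.
    def log_polar_patch(image: np.ndarray, center, n_rho: int = 32, n_theta: int = 32) -> np.ndarray:
        cy, cx = center
        max_r = max(1, min(cy, cx, image.shape[0] - cy - 1, image.shape[1] - cx - 1))
        rhos = np.exp(np.linspace(0.0, np.log(max_r), n_rho))            # logarithmically spaced radii
        thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
        ys = np.clip((cy + rhos[:, None] * np.sin(thetas[None, :])).astype(int), 0, image.shape[0] - 1)
        xs = np.clip((cx + rhos[:, None] * np.cos(thetas[None, :])).astype(int), 0, image.shape[1] - 1)
        return image[ys, xs]                                             # (n_rho, n_theta) patch

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        img = rng.random((128, 128))
        response = dog(img)
        cy, cx = np.unravel_index(int(np.argmax(np.abs(response))), response.shape)
        print(log_polar_patch(img, (cy, cx)).shape)    # (32, 32)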
10. Wegman, Joost, and Gabriele Janzen. "Neural Encoding of Objects Relevant for Navigation and Resting State Correlations with Navigational Ability." Journal of Cognitive Neuroscience 23, no. 12 (2011): 3841–54. http://dx.doi.org/10.1162/jocn_a_00081.

Abstract:
Objects along a route can help us to successfully navigate through our surroundings. Previous neuroimaging research has shown that the parahippocampal gyrus (PHG) distinguishes between objects that were previously encountered at navigationally relevant locations (decision points) and irrelevant locations (nondecision points) during simple object recognition. This study aimed at unraveling how this neural marking of objects relevant for navigation is established during learning and postlearning rest. Twenty-four participants were scanned using fMRI while they were viewing a route through a virt

Dissertations / Theses on the topic "Prosopagnosia, Self recognition, Eye movements"

1. Malaspina, Manuela. "Investigating face-specificity through congenital prosopagnosia: studies on perceptual phenomena and eye movement patterns." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2017. http://hdl.handle.net/10281/141393.

Abstract:
Congenital prosopagnosia consists of the failure to develop normal face recognition ability despite intact low-level perceptual and intellectual functioning, and in the context of normal exposure to faces throughout the individual’s life. Typically, these individuals are able to perceive facial stimuli as faces but fail to identify a face as familiar or unfamiliar and to identify it. Despite the large amount of studies that have investigated face recognition in individuals with typical development and in congenital prosopagnosics over the last twenty years, we are still far from a complete und

Conference papers on the topic "Prosopagnosia, Self recognition, Eye movements"

1. de Jesus Rubio, Jose, Carlos Aviles, Raymundo Coello, Jose Francisco Cruz, and Hector Rivero. "Pattern recognition of eye movements." In 2009 IEEE Workshop on Evolving and Self-Developing Intelligent Systems. IEEE, 2009. http://dx.doi.org/10.1109/esdis.2009.4938997.
