Academic literature on the topic 'Prosopagnosia, Self recognition, Eye movements'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Prosopagnosia, Self recognition, Eye movements.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Prosopagnosia, Self recognition, Eye movements"

1

Adachi, Tomomi, Midori Tokita, and Akira Ishiguchi. "Eye movements during self-face recognition." Proceedings of the Annual Convention of the Japanese Psychological Association 75 (September 15, 2011): 3PM073. http://dx.doi.org/10.4992/pacjpa.75.0_3pm073.

2

Avidan, Galia, and Marlene Behrmann. "Spatial Integration in Normal Face Processing and Its Breakdown in Congenital Prosopagnosia." Annual Review of Vision Science 7, no. 1 (2021): 301–21. http://dx.doi.org/10.1146/annurev-vision-113020-012740.

Abstract:
Congenital prosopagnosia (CP), a life-long impairment in face processing that occurs in the absence of any apparent brain damage, provides a unique model in which to explore the psychological and neural bases of normal face processing. The goal of this review is to offer a theoretical and conceptual framework that may account for the underlying cognitive and neural deficits in CP. This framework may also provide a novel perspective from which to reconcile some conflicting results, permitting research in this field to expand in new directions. The crux of this framework lies in linking the known behavioral and neural underpinnings of face processing and their impairments in CP to a model incorporating grid cell–like activity in the entorhinal cortex. Moreover, it stresses the involvement of active, spatial scanning of the environment with eye movements and implicates their critical role in face encoding and recognition. To begin with, we describe the main behavioral and neural characteristics of CP, and then lay down the building blocks of our proposed model, referring to the existing literature supporting this new framework. We then propose testable predictions and conclude with open questions for future research stemming from this model.
3

Li, Muhua, and James J. Clark. "A Temporal Stability Approach to Position and Attention-Shift-Invariant Recognition." Neural Computation 16, no. 11 (2004): 2293–321. http://dx.doi.org/10.1162/0899766041941907.

Abstract:
Incorporation of visual-related self-action signals can help neural networks learn invariance. We describe a method that can produce a network with invariance to changes in visual input caused by eye movements and covert attention shifts. Training of the network is controlled by signals associated with eye movements and covert attention shifting. A temporal perceptual stability constraint is used to drive the output of the network toward remaining constant across temporal sequences of saccadic motions and covert attention shifts. We use a four-layer neural network model to perform the position-invariant extraction of local features and temporal integration of invariant presentations of local features in a bottom-up structure. We present results on both simulated data and real images to demonstrate that our network can acquire both position and attention shift invariance.
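
The temporal stability constraint at the heart of this method lends itself to a compact illustration: drive the network's output to stay constant across a sequence of inputs related by simulated eye movements. Below is a minimal sketch assuming a toy two-layer network and random stand-in data, rather than the authors' four-layer architecture and image sequences:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy two-layer network: flattened input patch -> hidden layer -> features.
    W1 = rng.normal(scale=0.1, size=(64, 256))
    W2 = rng.normal(scale=0.1, size=(16, 64))

    def features(x):
        """Map a flattened input patch to an output feature vector."""
        return np.tanh(W2 @ np.tanh(W1 @ x))

    def stability_loss(sequence):
        """Penalize output changes across a saccade/attention-shift sequence:
        the output should remain constant while the input shifts."""
        outs = [features(x) for x in sequence]
        return sum(np.sum((a - b) ** 2) for a, b in zip(outs, outs[1:]))

    # Inputs related by simulated eye movements (random stand-ins here).
    seq = [rng.normal(size=256) for _ in range(5)]
    print(stability_loss(seq))  # training would minimize this quantity

Minimizing this loss over many such sequences pushes the output representation toward invariance to the input shifts, which is the intuition the abstract describes.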
4

Sun, Xiangyang, and Zihan Cai. "Research on an Eye Control Method Based on the Fusion of Facial Expression and Gaze Intention Recognition." Applied Sciences 14, no. 22 (2024): 10520. http://dx.doi.org/10.3390/app142210520.

Abstract:
With the deep integration of psychology, artificial intelligence, and related technologies, eye control technology has achieved practical results. However, the accuracy of current single-modal eye control technology remains low, mainly because the high randomness of eye movements during human–computer interaction makes eye movement detection unreliable. This study therefore proposes an intent recognition method that fuses facial expressions with eye movement information, built on a multimodal intent recognition dataset of facial expressions and eye movement information constructed for this study. Fused features are computed with a self-attention fusion strategy and classified with a multi-layer perceptron, allowing the different features to attend to one another and improving intent recognition accuracy by selectively increasing the weight of effective features. To address inaccurate eye movement detection, an improved YOLOv5 model is proposed, with detection accuracy improved by two additions: a small-target layer and a CA attention mechanism. A corresponding eye movement behavior discrimination algorithm is then applied to each eye movement action to output eye behavior instructions. Experimental verification of the eye–computer interaction scheme combining the intention recognition model and the eye movement detection model showed that the eye-controlled manipulator could perform various tasks with accuracy above 95%.
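
The fusion step described here, self-attention over the two modalities followed by a multi-layer perceptron, can be wired up roughly as in the sketch below. This is an illustrative PyTorch assumption, not the paper's implementation; the feature dimensions, head count, and number of intent classes are placeholders:

    import torch
    import torch.nn as nn

    class FusionClassifier(nn.Module):
        """Fuse expression and eye-movement features via self-attention,
        then classify the fused result with an MLP."""
        def __init__(self, dim=128, n_intents=6):
            super().__init__()
            self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4,
                                              batch_first=True)
            self.mlp = nn.Sequential(nn.Linear(2 * dim, 256), nn.ReLU(),
                                     nn.Linear(256, n_intents))

        def forward(self, expr_feat, gaze_feat):
            # Stack the modalities as a 2-token sequence so each can attend
            # to, and re-weight, the other.
            tokens = torch.stack([expr_feat, gaze_feat], dim=1)  # (B, 2, dim)
            fused, _ = self.attn(tokens, tokens, tokens)
            return self.mlp(fused.flatten(1))  # (B, n_intents)

    model = FusionClassifier()
    logits = model(torch.randn(8, 128), torch.randn(8, 128))
    print(logits.shape)  # torch.Size([8, 6])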
5

Wang, Fuwang, Xiaolei Zhang, and Rongrong Fu. "Research on Home-Auxiliary Robot System Based on Characteristics of Human Physiological and Motion Signals." Complexity 2020 (February 11, 2020): 1–13. http://dx.doi.org/10.1155/2020/8195893.

Abstract:
A home-auxiliary robot system based on characteristics of the electrooculogram (EOG) and tongue signal is developed in the current study, which can provide daily-life assistance to people with physical mobility disabilities. It relies on five simple actions of the head itself (blinking twice in a row, tongue extension, upward tongue rolling, and left and right eye movements) to complete the motions of a mouse on the system screen (moving up/down/left/right and double-clicking). In this paper, brain network and BP neural network algorithms are used to identify these five types of actions. The results show that, for all subjects, the average recognition rates of eye blinks and tongue movements (tongue extension and upward tongue rolling) were 90.17%, 88.00%, and 89.83%, respectively, and that after training the subjects could complete the five types of movements in sequence within 12 seconds. This means that people with physical disabilities can use the system quickly and accurately for daily self-help, which brings great convenience to their lives.
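
As a rough illustration of the classification stage, a backpropagation (BP) neural network mapping per-trial signal features to the five actions could look like the sketch below. The synthetic features, their dimension, and the network size are placeholder assumptions; the paper's feature extraction and brain network analysis are not reproduced:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    ACTIONS = ["double_blink", "tongue_out", "tongue_roll_up",
               "eyes_left", "eyes_right"]

    # Stand-ins for per-trial feature vectors from the EOG/tongue signals.
    X = rng.normal(size=(200, 32))
    y = rng.integers(0, len(ACTIONS), size=200)

    # An MLP trained with backpropagation, i.e., a "BP neural network".
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                        random_state=0).fit(X, y)
    print(ACTIONS[clf.predict(X[:1])[0]])  # map the prediction to a command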
6

Gold, Daniel. "Nystagmus and Saccadic Intrusions." CONTINUUM: Lifelong Learning in Neurology 31, no. 2 (2025): 503–26. https://doi.org/10.1212/con.0000000000001561.

Abstract:
OBJECTIVE: This article describes the diagnosis and differentiation of the many possible localizations and causes of nystagmus. LATEST DEVELOPMENTS: The eyes move to keep the fovea on the object of visual regard. To account for the movement of targets, the environment, or the self, different classes of eye movement are necessary to achieve visual stability. These movements involve the vergence, smooth pursuit, saccadic, vestibular, and optokinetic systems, as well as the ability to suppress the vestibuloocular reflex and other movements for steady fixation. When the equipoise of one or more of these systems is disrupted, nystagmus or saccadic intrusions may result. The astute clinician can distinguish between benign (eg, infantile or peripheral vestibular nystagmus) and dangerous (eg, stroke, Wernicke encephalopathy) etiologies with a high degree of confidence at the bedside, making expensive eye movement recording equipment unnecessary in the majority of cases. ESSENTIAL POINTS: The recognition and interpretation of nystagmus and saccadic intrusions in the context of the history and a comprehensive ocular motor and neurologic examination is an essential skill in neurologic practice.
7

Golparvar, Ata Jedari, and Murat Kaya Yapici. "Toward graphene textiles in wearable eye tracking systems for human–machine interaction." Beilstein Journal of Nanotechnology 12 (February 11, 2021): 180–89. http://dx.doi.org/10.3762/bjnano.12.14.

Abstract:
The study of eye movements and the measurement of the resulting biopotential, referred to as electrooculography (EOG), may find increasing use in applications within the domain of activity recognition, context awareness, mobile human–computer and human–machine interaction (HCI/HMI), and personal medical devices, provided that seamless sensing of eye activity and processing thereof is achieved by a truly wearable, low-cost, and accessible technology. The present study demonstrates an alternative to the bulky and expensive camera-based eye tracking systems and reports the development of a graphene textile-based personal assistive device for the first time. This self-contained wearable prototype comprises a headband with soft graphene textile electrodes that overcome the limitations of conventional “wet” electrodes, along with miniaturized, portable readout electronics with real-time signal processing capability that can stream data to a remote device over Bluetooth. The potential of graphene textiles in wearable eye tracking and eye-operated remote object interaction is demonstrated by controlling a mouse cursor on screen for typing with a virtual keyboard and enabling navigation of a four-wheeled robot in a maze, all utilizing five different eye motions initiated with a single-channel EOG acquisition. Typing speeds of up to six characters per minute without prediction algorithms and guidance of the robot in a maze with four 180° turns were successfully achieved, with pattern detection accuracies of 100% and 98%, respectively.
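
To give a flavor of how eye motions might be discriminated from one channel, the sketch below applies simple amplitude rules to an EOG window. The thresholds, sampling rate, and motion labels are illustrative assumptions only and do not reproduce the authors' signal processing pipeline:

    import numpy as np

    FS = 250  # assumed sampling rate in Hz

    def classify_window(eog, hi=200e-6, lo=-200e-6):
        """Label one window of EOG samples (in volts) with a coarse motion
        class. Signal polarity depends on electrode placement, so the
        direction labels here are nominal."""
        peak, trough = eog.max(), eog.min()
        if peak > hi and trough < lo:
            return "blink"       # treated here as a biphasic deflection
        if peak > hi:
            return "gaze_right"
        if trough < lo:
            return "gaze_left"
        return "fixation"

    t = np.linspace(0.0, 0.5, FS // 2)
    window = 300e-6 * np.sin(2 * np.pi * 2 * t)  # synthetic biphasic swing
    print(classify_window(window))  # -> "blink" for this toy trace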
8

Palmisano, Stephen, Juno Kim, Robert Allison, and Frederick Bonato. "Simulated Viewpoint Jitter Shakes Sensory Conflict Accounts of Vection." Seeing and Perceiving 24, no. 2 (2011): 173–200. http://dx.doi.org/10.1163/187847511x570817.

Abstract:
Sensory conflict has been used to explain the way we perceive and control our self-motion, as well as the aetiology of motion sickness. However, recent research on simulated viewpoint jitter provides a strong challenge to one core prediction of these theories — that increasing sensory conflict should always impair visually induced illusions of self-motion (known as vection). These studies show that jittering self-motion displays (thought to generate significant and sustained visual–vestibular conflict) actually induce superior vection to comparable non-jittering displays (thought to generate only minimal/transient sensory conflict). Here we review viewpoint jitter effects on vection, postural sway, eye movements and motion sickness, and relate them to recent behavioural and neurophysiological findings. It is shown that jitter research provides important insights into the role that sensory interaction plays in self-motion perception.
9

Gaussier, P., C. Joulain, A. Revel, and J. P. Cocquerez. "How Acting Allows to Segregate Objects in a Visual Scene." Perception 25, no. 1_suppl (1996): 54. http://dx.doi.org/10.1068/v96l1110.

Abstract:
Our purpose is to allow an autonomous robot to find and to categorise objects in a visual scene according to the actions it performs. The robot's information comes from a gray-level CCD camera. The edges are extracted and a simple DOG filter is used to find ‘corner’-like forms in the image. These positions are used as possible focus points. The robot eye performs saccadic movements on the whole visual scene. A log-polar transform of the image is performed in the neighbourhood of the focus points to mimic the projection of the retina on the primary cortical areas. It simplifies object recognition by allowing size and rotation invariance. Those local views are learned on a self-organised topological map according to a vigilance level. At the same time, the robot tries to associate them with a particular action. For instance, we want the robot to learn to turn left when it sees a ‘turn-left’ arrow in the image. The problem is that the robot cannot see only a single object in the visual scene. There are many distractors such as doors, holes, and other objects not significant for the robot behaviour. At the beginning, a probabilistic conditioning rule allows the robot to associate all the seen objects with the performed movement. The robot then repeatedly removes or creates synaptic links so as to retain only the salient associations. As a result, object categorisation is not performed at the visual level (pure recognition of visual shape), but at the motor level (the action the robot has to perform). Our experiments show that learning and recognition of an object can be greatly simplified if we take into account the sensory-motor loop of the robot in its environment.
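
The log-polar transform around a focus point, which the abstract credits with size and rotation invariance, is easy to sketch: scaling becomes a shift along the radial axis and rotation a shift along the angular axis. In the minimal version below, the output resolution and sampling radius are arbitrary choices for illustration:

    import numpy as np

    def log_polar(img, center, n_rho=32, n_theta=64, r_max=None):
        """Sample img around center on a log-polar grid
        (nearest-neighbour interpolation)."""
        cy, cx = center
        if r_max is None:
            r_max = min(cy, cx, img.shape[0] - cy, img.shape[1] - cx)
        rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))  # log-spaced radii
        theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
        ys = (cy + rho[:, None] * np.sin(theta)).astype(int)
        xs = (cx + rho[:, None] * np.cos(theta)).astype(int)
        return img[ys.clip(0, img.shape[0] - 1), xs.clip(0, img.shape[1] - 1)]

    frame = np.random.rand(120, 160)           # stand-in for a camera image
    patch = log_polar(frame, center=(60, 80))  # focus point from a DOG filter
    print(patch.shape)                         # (32, 64)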
10

Wegman, Joost, and Gabriele Janzen. "Neural Encoding of Objects Relevant for Navigation and Resting State Correlations with Navigational Ability." Journal of Cognitive Neuroscience 23, no. 12 (2011): 3841–54. http://dx.doi.org/10.1162/jocn_a_00081.

Abstract:
Objects along a route can help us to successfully navigate through our surroundings. Previous neuroimaging research has shown that the parahippocampal gyrus (PHG) distinguishes between objects that were previously encountered at navigationally relevant locations (decision points) and irrelevant locations (nondecision points) during simple object recognition. This study aimed at unraveling how this neural marking of objects relevant for navigation is established during learning and postlearning rest. Twenty-four participants were scanned using fMRI while they were viewing a route through a virtual environment. Eye movements were measured, and brain responses were time-locked to viewing each object. The PHG showed increased responses to decision point objects compared with nondecision point objects during route learning. We compared functional connectivity between the PHG and the rest of the brain in a resting state scan postlearning with such a scan prelearning. Results show that functional connectivity between the PHG and the hippocampus is positively related to participants' self-reported navigational ability. On the other hand, connectivity with the caudate nucleus correlated negatively with navigational ability. These results are in line with a distinction between egocentric and allocentric spatial representations in the caudate nucleus and the hippocampus, respectively. Our results thus suggest a relation between navigational ability and a neural preference for a specific type of spatial representation. Together, these results show that the PHG is immediately involved in the encoding of navigationally relevant object information. Furthermore, they provide insight into the neural correlates of individual differences in spatial ability.

Dissertations / Theses on the topic "Prosopagnosia, Self recognition, Eye movements"

1

Malaspina, Manuela. "Investigating face-specificity through congenital prosopagnosia: studies on perceptual phenomena and eye movement patterns." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2017. http://hdl.handle.net/10281/141393.

Abstract:
Congenital prosopagnosia consists of the failure to develop normal face recognition ability despite intact low-level perceptual and intellectual functioning, and in the context of normal exposure to faces throughout the individual’s life. Typically, these individuals are able to perceive facial stimuli as faces but fail to judge a face as familiar or unfamiliar and to identify it. Despite the large number of studies that have investigated face recognition in individuals with typical development and in congenital prosopagnosics over the last twenty years, we are still far from a complete understanding of the mechanisms underlying typical and atypical face recognition, and some research questions are still open. For this reason, the present dissertation investigates some perceptual effects in individuals with a selective deficit in face recognition processing in order to reach a better understanding of what happens during a successful and unsuccessful face recognition process. In particular, by using a combination of behavioural and eye-tracking methods, I investigated whether the left perceptual bias and the self-face advantage are shown by individuals with congenital prosopagnosia and whether they are truly face-specific. My results demonstrate that, whereas the left perceptual bias seems to characterize the recognition of unfamiliar faces in good recognizers, individuals with congenital prosopagnosia seem to show an opposite bias (i.e., a right perceptual bias) during the recognition of the self-face. Moreover, despite their face recognition impairment, congenital prosopagnosics consistently show high accuracy in recognizing their own face (i.e., a self-face advantage). Furthermore, some of the studies I conducted on the visual scanning strategies of this population demonstrated that the self-face advantage phenomenon is not associated with a different exploration of the face stimuli, suggesting that it could reflect a more general self-advantage and not be face-specific. Finally, the evidence presented in this dissertation also highlights that individuals with face impairment from birth show some difficulties in recognizing stimuli with a high degree of similarity (such as objects belonging to the same class), and that these difficulties are associated with a different pattern of visual exploration. Overall, the evidence illustrated in the present thesis helps to shed light on the mechanisms characterizing face recognition and to expand our knowledge of the impairment affecting individuals with congenital prosopagnosia.

Conference papers on the topic "Prosopagnosia, Self recognition, Eye movements"

1

de Jesus Rubio, Jose, Carlos Aviles, Raymundo Coello, Jose Francisco Cruz, and Hector Rivero. "Pattern recognition of eye movements." In 2009 IEEE Workshop on Evolving and Self-Developing Intelligent Systems. IEEE, 2009. http://dx.doi.org/10.1109/esdis.2009.4938997.
