
Journal articles on the topic 'Human eye fixations; textured model'


Consult the top 45 journal articles for your research on the topic 'Human eye fixations; textured model.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Guo, Xiaoying, Liang Li, Akira Asano, and Chie Muraki Asano. "Influences of Global and Local Features on Eye-Movement Patterns in Visual-Similarity Perception of Synthesized Texture Images." Applied Sciences 10, no. 16 (2020): 5552. http://dx.doi.org/10.3390/app10165552.

Abstract:
Global and local features are essential for visual-similarity texture perception. Therefore, understanding how people allocate their visual attention when viewing textures with global or local similarity is important. In this work, we investigate the influences of global and local features of a texture on eye-movement patterns and analyze the relationship between eye-movement patterns and visual-similarity selection. First, we synthesized textures by separately controlling global and local textural features through the primitive, grain, and point configuration (PGPC) texture model, a mathemati
2

Zhang, Ya, Chunyi Chen, Xiaojuan Hu, Ling Li, and Hailan Li. "Saliency detection of textured 3D models based on multi-view information and texel descriptor." PeerJ Computer Science 9 (October 25, 2023): e1584. http://dx.doi.org/10.7717/peerj-cs.1584.

Abstract:
Saliency-driven mesh simplification methods have shown promising results in maintaining visual detail, but effective simplification requires accurate 3D saliency maps. The conventional mesh saliency detection method may not capture salient regions in 3D models with texture. To address this issue, we propose a novel saliency detection method that fuses saliency maps from multi-view projections of textured models. Specifically, we introduce a texel descriptor that combines local convexity and chromatic aberration to capture texel saliency at multiple scales. Furthermore, we created a novel datas
3

Appadurai, Jothi Prabha, and Bhargavi R. "Eye Movement Feature Set and Predictive Model for Dyslexia." International Journal of Cognitive Informatics and Natural Intelligence 15, no. 4 (2021): 1–22. http://dx.doi.org/10.4018/ijcini.20211001.oa28.

Abstract:
Dyslexia is a learning disorder that can cause difficulties in reading or writing. Dyslexia is not a visual problem, but many dyslexics have an impaired magnocellular system, which causes poor eye control. Eye-trackers are used to track eye movements. This research work proposes a set of significant eye movement features that are used to build a predictive model for dyslexia. Fixation and saccade eye events are detected using the dispersion-threshold and velocity-threshold algorithms. Various machine learning models are experimented with. Validation is done on 185 subjects using 10-fold cross-validation
4

Romaniuk, Vladimir R. "Method for predicting eye movement activity based on intelligent data analysis from a mobile portable electroencephalograph." Analysis and data processing systems, no. 3 (December 26, 2024): 77–89. https://doi.org/10.17212/2782-2001-2024-3-77-89.

Abstract:
Eye movement processes, such as fixations and saccades, play a crucial role in human cognitive activity as they are closely associated with functions such as perception, attention, and decision-making. These processes are actively applied in various fields, including human-computer interaction systems and neurophysiological research. Modern eye-tracking methods based on optical systems provide high accuracy but have several significant limitations. In this regard, the use of electroencephalographic (EEG) data for analyzing eye movement activity is becoming a promising direction, as EEG provide
5

Cornia, Marcella, Lorenzo Baraldi, Giuseppe Serra, and Rita Cucchiara. "Predicting Human Eye Fixations via an LSTM-Based Saliency Attentive Model." IEEE Transactions on Image Processing 27, no. 10 (2018): 5142–54. http://dx.doi.org/10.1109/tip.2018.2851672.

6

Deng, Shuwen, David R. Reich, Paul Prasse, Patrick Haller, Tobias Scheffer, and Lena A. Jäger. "Eyettention: An Attention-based Dual-Sequence Model for Predicting Human Scanpaths during Reading." Proceedings of the ACM on Human-Computer Interaction 7, ETRA (2023): 1–24. http://dx.doi.org/10.1145/3591131.

Abstract:
Eye movements during reading offer insights into both the reader's cognitive processes and the characteristics of the text that is being read. Hence, the analysis of scanpaths in reading has attracted increasing attention across fields, ranging from cognitive science over linguistics to computer science. In particular, eye-tracking-while-reading data has been argued to bear the potential to make machine-learning-based language models exhibit more human-like linguistic behavior. However, one of the main challenges in modeling human scanpaths in reading is their dual-sequence nature: the word
7

Zhang, Kaiwei, Dandan Zhu, Xiongkuo Min, and Guangtao Zhai. "Textured Mesh Saliency: Bridging Geometry and Texture for Human Perception in 3D Graphics." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 9 (2025): 9977–84. https://doi.org/10.1609/aaai.v39i9.33082.

Abstract:
Textured meshes significantly enhance the realism and detail of objects by mapping intricate texture details onto the geometric structure of 3D models. This advancement is valuable across various applications, including entertainment, education, and industry. While traditional mesh saliency studies focus on non-textured meshes, our work explores the complexities introduced by detailed texture patterns. We present a new dataset for textured mesh saliency, created through an innovative eye-tracking experiment in a six degrees of freedom (6-DOF) VR environment. This dataset addresses the limitati
8

Wang, Yongxiang, William Clifford, Charles Markham, and Catherine Deegan. "Examination of Driver Visual and Cognitive Responses to Billboard Elicited Passive Distraction Using Eye-Fixation Related Potential." Sensors 21, no. 4 (2021): 1471. http://dx.doi.org/10.3390/s21041471.

Abstract:
Distractions external to a vehicle contribute to visual attention diversion that may cause traffic accidents. As a low-cost and efficient advertising solution, billboards are widely installed on the side of the road, especially the motorway. However, the effect of billboards on driver distraction, eye gaze, and cognition has not been fully investigated. This study utilises a customised driving simulator and a synchronised electroencephalography (EEG) and eye tracking system to investigate the cognitive processes relating to the processing of driver visual information. A distinction is made between e
9

Hsieh, Chihcheng, André Luís, José Neves, et al. "EyeXNet: Enhancing Abnormality Detection and Diagnosis via Eye-Tracking and X-ray Fusion." Machine Learning and Knowledge Extraction 6, no. 2 (2024): 1055–71. http://dx.doi.org/10.3390/make6020048.

Abstract:
Integrating eye gaze data with chest X-ray images in deep learning (DL) has led to contradictory conclusions in the literature. Some authors assert that eye gaze data can enhance prediction accuracy, while others consider eye tracking irrelevant for predictive tasks. We argue that this disagreement lies in how researchers process eye-tracking data as most remain agnostic to the human component and apply the data directly to DL models without proper preprocessing. We present EyeXNet, a multimodal DL architecture that combines images and radiologists’ fixation masks to predict abnormality locati
10

Schoonveld, W., and M. P. Eckstein. "A likelihood based metric to compare human and model eye movement fixations during visual search." Journal of Vision 8, no. 6 (2010): 379. http://dx.doi.org/10.1167/8.6.379.

11

Kümmerer, Matthias, Thomas S. A. Wallis, and Matthias Bethge. "Information-theoretic model comparison unifies saliency metrics." Proceedings of the National Academy of Sciences 112, no. 52 (2015): 16054–59. http://dx.doi.org/10.1073/pnas.1510393112.

Abstract:
Learning the properties of an image associated with human gaze placement is important both for understanding how biological systems explore the environment and for computer vision applications. There is a large literature on quantitative eye movement models that seeks to predict fixations from images (sometimes termed “saliency” prediction). A major problem known to the field is that existing model comparison metrics give inconsistent results, causing confusion. We argue that the primary reason for these inconsistencies is because different metrics and models use different definitions of what
12

Morrison, Edward, and Marianne Lanigan. "Shape of You: Eye-Tracking and Social Perceptions of the Human Body." Behavioral Sciences 15, no. 6 (2025): 817. https://doi.org/10.3390/bs15060817.

Abstract:
Much research has considered how physical appearance affects the way people are judged, such as how body size affects judgements of attractiveness and health. Less research, however, has looked at visual attention during such judgements. We used eye-tracking to measure the gaze behaviour of 32 participants (29 female) on male and female computer-generated bodies of different body mass index (BMI). Independent variables were sex and BMI of the model, area of interest of the body, and the judgement made (attractiveness, healthiness, and youthfulness). Dependent variables were the number and dura
13

de Haas, Mirjam, Paul Vogt, and Emiel Krahmer. "When Preschoolers Interact with an Educational Robot, Does Robot Feedback Influence Engagement?" Multimodal Technologies and Interaction 5, no. 12 (2021): 77. http://dx.doi.org/10.3390/mti5120077.

Abstract:
In this paper, we examine to what degree children of 3–4 years old engage with a task and with a social robot during a second-language tutoring lesson. We specifically investigated whether children’s task engagement and robot engagement were influenced by three different feedback types by the robot: adult-like feedback, peer-like feedback and no feedback. Additionally, we investigated the relation between children’s eye gaze fixations and their task engagement and robot engagement. Fifty-eight Dutch children participated in an English counting task with a social robot and physical blocks. We f
14

Blohm, Gunnar, and Philippe Lefèvre. "Visuomotor Velocity Transformations for Smooth Pursuit Eye Movements." Journal of Neurophysiology 104, no. 4 (2010): 2103–15. http://dx.doi.org/10.1152/jn.00728.2009.

Abstract:
Smooth pursuit eye movements are driven by retinal motion signals. These retinal motion signals are converted into motor commands that obey Listing's law (i.e., no accumulation of ocular torsion). The fact that smooth pursuit follows Listing's law is often taken as evidence that no explicit reference frame transformation between the retinal velocity input and the head-centered motor command is required. Such eye-position-dependent reference frame transformations between eye- and head-centered coordinates have been well-described for saccades to static targets. Here we suggest that such an eye
15

Lee, Ja Young, Joonbum Lee, and John D. Lee. "A Visual Search Model for In-Vehicle Interface Design." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 60, no. 1 (2016): 1874–78. http://dx.doi.org/10.1177/1541931213601427.

Abstract:
As in-vehicle infotainment systems gain new functionality, their potential to distract drivers increases. Searching for an item on an interface is a critical concern because a poorly designed interface that draws drivers’ attention to less important items can extend drivers’ search for items of interest and pull attention away from roadway events. This potential can be assessed in simulator-based experiments, but computational models of driver behavior might enable designers to assess this potential and revise their designs more quickly than if they have to wait weeks to compile human subjects da
16

Hatfield, Nathan, Yusuke Yamani, Dakota B. Palmer, Nicole D. Karpinsky, William J. Horrey, and Siby Samuel. "Comparison of Visual Sampling Patterns Under Simulated L2 and L0 Systems." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (2018): 1826. http://dx.doi.org/10.1177/1541931218621414.

Abstract:
Automated driving systems (ADS) partially or fully perform or assist with primary driving functions. According to SAE J3016 (SAE, 2016), ADS can subsume driving tasks traditionally reserved for humans, ranging from L0 (no automation) to L5 (full automation), creating varying degrees of driver interaction and responsibility. However, the literature on human-automation interaction indicates that human operators may perform at a suboptimal level when interacting with automated support systems (Parasuraman & Riley, 1997), reducing the net benefit that automation can bring while also simultaneo
17

Mather, George. "The Use of Image Blur as a Depth Cue." Perception 26, no. 9 (1997): 1147–58. http://dx.doi.org/10.1068/p261147.

Abstract:
Images of three-dimensional scenes inevitably contain regions that are spatially blurred by differing amounts, owing to depth-of-focus limitations in the imaging apparatus. Recent perceptual data indicate that this blur variation acts as an effective cue to depth: if one image region contains sharply focused texture, and another contains blurred texture, then the two regions may be perceived at different depths, even in the absence of other depth cues. Calculations based on the optical properties of the human eye have shown that variation in blur as a function of depth follows the same course
18

Bekler, Meryem, Murat Yilmaz, and Hüseyin Emre Ilgın. "Assessing Feature Importance in Eye-Tracking Data within Virtual Reality Using Explainable Artificial Intelligence Techniques." Applied Sciences 14, no. 14 (2024): 6042. http://dx.doi.org/10.3390/app14146042.

Abstract:
Our research systematically investigates the cognitive and emotional processes revealed through eye movements within the context of virtual reality (VR) environments. We assess the utility of eye-tracking data for predicting emotional states in VR, employing explainable artificial intelligence (XAI) to advance the interpretability and transparency of our findings. Utilizing the VR Eyes: Emotions dataset (VREED) alongside an extra trees classifier enhanced by SHapley Additive ExPlanations (SHAP) and local interpretable model agnostic explanations (LIME), we rigorously evaluate the importance of
19

Barnes, Jordan, Mark R. Blair, R. Calen Walshe, and Paul F. Tupper. "LAG-1: A dynamic, integrative model of learning, attention, and gaze." PLOS ONE 17, no. 3 (2022): e0259511. http://dx.doi.org/10.1371/journal.pone.0259511.

Abstract:
It is clear that learning and attention interact, but it is an ongoing challenge to integrate their psychological and neurophysiological descriptions. Here we introduce LAG-1, a dynamic neural field model of learning, attention and gaze, that we fit to human learning and eye-movement data from two category learning experiments. LAG-1 comprises three control systems: one for visuospatial attention, one for saccadic timing and control, and one for category learning. The model is able to extract a kind of information gain from pairwise differences in simple associations between visual features an
20

Dolgunsöz, Emrah, and Arif Sarıçoban. "Word Skipping in Reading English as a Foreign Language: Evidence from Eye Tracking." East European Journal of Psycholinguistics 3, no. 2 (2016): 22–31. http://dx.doi.org/10.29038/eejpl.2016.3.2.dol.

Abstract:
During reading, readers never fixate on all words in the text; shorter words sometimes gain zero fixations and are skipped by the reader. Relying on the E-Z Reader Model, this research hypothesized that a similar skipping effect also exists for a second language. The current study examined word skipping rates in EFL (English as a Foreign Language) with 75 EFL learners by using eye tracking methodology. The results showed that word skipping was significantly affected by EFL reading proficiency, and articles (a, an, the) were skipped more than content words. Furthermore, more skilled learners were observe
21

AG Pradnya Sidhawara, Sunu Wibirama, and Dwi Joko Suroso. "Eye-Tracking Study on the Gender Effect Towards Cognitive Processes During Multimedia Learning." Jurnal Nasional Teknik Elektro dan Teknologi Informasi 12, no. 2 (2023): 137–43. http://dx.doi.org/10.22146/jnteti.v12i2.5145.

Abstract:
Multimedia learning is defined as the process of forming a knowledge mental model from words and pictures. It is important to measure cognitive process during multimedia learning. Differences in learners’ capabilities can be investigated through cognitive processes to improve the learning process. However, conventional methods such as interviews or behavioural assessment do not provide an objective measurement of cognitive processes during multimedia learning. Some advance methods to measure cognitive processes takes into account learner’s eye movement during learning process. In such a case,
22

Zhong, Wenqi, Linzhi Yu, Chen Xia, Junwei Han, and Dingwen Zhang. "SpFormer: Spatio-Temporal Modeling for Scanpaths with Transformer." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (2024): 7605–13. http://dx.doi.org/10.1609/aaai.v38i7.28593.

Abstract:
Saccadic scanpath, a data representation of human visual behavior, has received broad interest in multiple domains. Scanpath is a complex eye-tracking data modality that includes the sequences of fixation positions and fixation duration, coupled with image information. However, previous methods usually face the spatial misalignment problem of fixation features and loss of critical temporal data (including temporal correlation and fixation duration). In this study, we propose a Transformer-based scanpath model, SpFormer, to alleviate these problems. First, we propose a fixation-centric paradigm
23

Rouhafzay, Ghazal, and Ana-Maria Cretu. "A Visuo-Haptic Framework for Object Recognition Inspired by Human Tactile Perception." Proceedings 4, no. 1 (2019): 47. http://dx.doi.org/10.3390/ecsa-5-05754.

Abstract:
This paper addresses the issue of robotic haptic exploration of 3D objects using an enhanced model of visual attention, where the latter is applied to obtain a sequence of eye fixations on the surface of objects guiding the haptic exploratory procedure. According to psychological studies, somatosensory data resulting as a response to surface changes sensed by human skin are used in combination with kinesthetic cues from muscles and tendons to recognize objects. Drawing inspiration from these findings, a series of five sequential tactile images are obtained by adaptively changing the size of th
24

Roth, Nicolas, Martin Rolfs, Olaf Hellwich, and Klaus Obermayer. "Objects guide human gaze behavior in dynamic real-world scenes." PLOS Computational Biology 19, no. 10 (2023): e1011512. http://dx.doi.org/10.1371/journal.pcbi.1011512.

Abstract:
The complexity of natural scenes makes it challenging to experimentally study the mechanisms behind human gaze behavior when viewing dynamic environments. Historically, eye movements were believed to be driven primarily by space-based attention towards locations with salient features. Increasing evidence suggests, however, that visual attention does not select locations with high saliency but operates on attentional units given by the objects in the scene. We present a new computational framework to investigate the importance of objects for attentional guidance. This framework is designed to s
25

Crowe, David A., Bruno B. Averbeck, Matthew V. Chafee, John H. Anderson, and Apostolos P. Georgopoulos. "Mental Maze Solving." Journal of Cognitive Neuroscience 12, no. 5 (2000): 813–27. http://dx.doi.org/10.1162/089892900562426.

Abstract:
We sought to determine how a visual maze is mentally solved. Human subjects (N = 13) viewed mazes with orthogonal, unbranched paths; each subject solved 200-600 mazes in any specific experiment below. There were four to six openings at the perimeter of the maze, of which four were labeled: one was the entry point and the remainder were potential exits marked by Arabic numerals. Starting at the entry point, in some mazes the path exited, whereas in others it terminated within the maze. Subjects were required to type the number corresponding to the true exit (if the path exited) or type zero (if
26

Zhang. "Human eye fixations on some textured model (partly)." July 10, 2023. https://doi.org/10.5281/zenodo.8131602.

27

Zhou, Yunhui, and Yuguo Yu. "Human visual search follows a suboptimal Bayesian strategy revealed by a spatiotemporal computational model and experiment." Communications Biology 4, no. 1 (2021). http://dx.doi.org/10.1038/s42003-020-01485-0.

Abstract:
There is conflicting evidence regarding whether humans can make spatially optimal eye movements during visual search. Some studies have shown that humans can optimally integrate information across fixations and determine the next fixation location, however, these models have generally ignored the control of fixation duration and memory limitation, and the model results do not agree well with the details of human eye movement metrics. Here, we measured the temporal course of the human visibility map and performed a visual search experiment. We further built a continuous-time eye movemen
28

"Eye Movement Feature Set and Predictive Model for Dyslexia." International Journal of Cognitive Informatics and Natural Intelligence 15, no. 4 (2021): 0. http://dx.doi.org/10.4018/ijcini.20211001oa15.

Abstract:
Dyslexia is a learning disorder that can cause difficulties in reading or writing. Dyslexia is not a visual problem, but many dyslexics have an impaired magnocellular system, which causes poor eye control. Eye-trackers are used to track eye movements. This research work proposes a set of significant eye movement features that are used to build a predictive model for dyslexia. Fixation and saccade eye events are detected using the dispersion-threshold and velocity-threshold algorithms. Various machine learning models are experimented with. Validation is done on 185 subjects using 10-fold cross-validation
29

Yang, Jinbiao, Antal van den Bosch, and Stefan L. Frank. "Unsupervised Text Segmentation Predicts Eye Fixations During Reading." Frontiers in Artificial Intelligence 5 (February 23, 2022). http://dx.doi.org/10.3389/frai.2022.731615.

Abstract:
Words typically form the basis of psycholinguistic and computational linguistic studies about sentence processing. However, recent evidence shows the basic units during reading, i.e., the items in the mental lexicon, are not always words, but could also be sub-word and supra-word units. To recognize these units, human readers require a cognitive mechanism to learn and detect them. In this paper, we assume eye fixations during reading reveal the locations of the cognitive units, and that the cognitive units are analogous with the text units discovered by unsupervised segmentation models. We pre
30

Lüken, Malte, Šimon Kucharský, and Ingmar Visser. "Characterising eye movement events with an unsupervised hidden markov model." Journal of Eye Movement Research 15, no. 1 (2022). http://dx.doi.org/10.16910/jemr.15.1.4.

Abstract:
Eye-tracking allows researchers to infer cognitive processes from eye movements that are classified into distinct events. Parsing the events is typically done by algorithms. Here we aim at developing an unsupervised, generative model that can be fitted to eye-movement data using maximum likelihood estimation. This approach allows hypothesis testing about fitted models, next to being a method for classification. We developed gazeHMM, an algorithm that uses a hidden Markov model as a generative model, has few critical parameters to be set by users, and does not require human coded data as input.
31

Strauch, Christoph, Alex J. Hoogerbrugge, Gregor Baer, et al. "Saliency models perform best for women’s and young adults' fixations." Communications Psychology 1, no. 1 (2023). http://dx.doi.org/10.1038/s44271-023-00035-8.

Abstract:
Saliency models seek to predict fixation locations in (human) gaze behaviour. These are typically created to generalize across a wide range of visual scenes but validated using only a few participants. Generalizations across individuals are generally implied. We tested this implied generalization across people, not images, with gaze data of 1600 participants. Using a single, feature-rich image, we found shortcomings in the prediction of fixations across this diverse sample. Models performed optimally for women and participants aged 18-29. Furthermore, model predictions differed in perf
32

Polyakova, Zlata, Masao Iwase, Ryota Hashimoto, and Masatoshi Yoshida. "The effect of ketamine on eye movement characteristics during free-viewing of natural images in common marmosets." Frontiers in Neuroscience 16 (September 20, 2022). http://dx.doi.org/10.3389/fnins.2022.1012300.

Abstract:
Various eye movement abnormalities and impairments in visual information processing have been reported in patients with schizophrenia. Therefore, dysfunction of saccadic eye movements is a potential biological marker for schizophrenia. In the present study, we used a pharmacological model of schizophrenia symptoms in marmosets and compared the eye movement characteristics of marmosets during free-viewing, using an image set identical to those used for human studies. It contains natural and complex images that were randomly presented for 8 s. As a pharmacological model of schizophrenia symptoms
33

Russek, Evan M., Frederick Callaway, and Thomas L. Griffiths. "Inverting Cognitive Models With Neural Networks to Infer Preferences From Fixations." Cognitive Science 48, no. 11 (2024). http://dx.doi.org/10.1111/cogs.70015.

Abstract:
Inferring an individual's preferences from their observable behavior is a key step in the development of assistive decision‐making technology. Although machine learning models such as neural networks could in principle be deployed toward this inference, a large amount of data is required to train such models. Here, we present an approach in which a cognitive model generates simulated data to augment limited human data. Using these data, we train a neural network to invert the model, making it possible to infer preferences from behavior. We show how this approach can be used to infer th
34

Rolff, Tim, Frank Steinicke, and Simone Frintrop. "Gaze Mapping for Immersive Virtual Environments Based on Image Retrieval." Frontiers in Virtual Reality 3 (May 3, 2022). http://dx.doi.org/10.3389/frvir.2022.802318.

Abstract:
In this paper, we introduce a novel gaze mapping approach for free viewing conditions in dynamic immersive virtual environments (VEs), which projects recorded eye fixation data of users, who viewed the VE from different perspectives, to the current view. This generates eye fixation maps, which can serve as ground truth for training machine learning (ML) models to predict saliency and the user’s gaze in immersive virtual reality (VR) environments. We use a flexible image retrieval approach based on SIFT features, which can also map the gaze under strong viewpoint changes and dynamic changes. A
35

Schwetlick, Lisa, Sebastian Reich, and Ralf Engbert. "Bayesian Dynamical Modeling of Fixational Eye Movements." Biological Cybernetics 119, no. 2-3 (2025). https://doi.org/10.1007/s00422-025-01010-8.

Abstract:
Humans constantly move their eyes, even during visual fixations, where miniature (or fixational) eye movements occur involuntarily. Fixational eye movements comprise slow components (physiological drift and tremor) and fast components (microsaccades). The complex dynamics of physiological drift can be modeled qualitatively as a statistically self-avoiding random walk (SAW model, Engbert et al., 2011). In this study, we implement a data assimilation approach for the SAW model to explain statistics of fixational eye movements and microsaccades in experimental data obtained from high-res
36

Turski, Jacek. "A Geometric Theory Integrating Human Binocular Vision With Eye Movement." Frontiers in Neuroscience 14 (December 7, 2020). http://dx.doi.org/10.3389/fnins.2020.555965.

Abstract:
A theory of the binocular system with asymmetric eyes (AEs) is developed in the framework of bicentric perspective projections. The AE accounts for the eyeball's global asymmetry produced by the foveal displacement from the posterior pole, the main source of the eye's optical aberrations, and the crystalline lens' tilt countering some of these aberrations. In this theory, the horopter curves, which specify retinal correspondence of binocular single vision, are conic sections resembling empirical horopters. This advances the classic model of empirical horopters as conic sections introduced in a
37

Mohamed Selim, Abdulrahman, Michael Barz, Omair Shahzad Bhatti, Hasan Md Tusfiqur Alam, and Daniel Sonntag. "A review of machine learning in scanpath analysis for passive gaze-based interaction." Frontiers in Artificial Intelligence 7 (June 5, 2024). http://dx.doi.org/10.3389/frai.2024.1391745.

Full text
Abstract:
The scanpath is an important concept in eye tracking. It refers to a person's eye movements over a period of time, commonly represented as a series of alternating fixations and saccades. Machine learning has been increasingly used for the automatic interpretation of scanpaths over the past few years, particularly in research on passive gaze-based interaction, i.e., interfaces that implicitly observe and interpret human eye movements, with the goal of improving the interaction. This literature review investigates research on machine learning applications in scanpath analysis for passive gaze-ba
APA, Harvard, Vancouver, ISO, and other styles
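The review above treats a scanpath as a series of alternating fixations and saccades recovered from raw gaze samples. A common way to perform that segmentation is a dispersion-threshold algorithm (I-DT); the sketch below is a generic illustration, not an algorithm from the review, and its thresholds are illustrative assumptions.

```python
def _dispersion(window):
    """Spatial spread of a window of (x, y) gaze samples."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=1.0, min_samples=4):
    """Sketch of I-DT fixation detection: a window of gaze samples is a
    fixation while its x/y dispersion stays below `max_dispersion`;
    samples between fixations are treated as saccadic. Returns
    (centroid_x, centroid_y, duration_in_samples) per fixation."""
    fixations = []
    i = 0
    while i + min_samples <= len(samples):
        if _dispersion(samples[i:i + min_samples]) <= max_dispersion:
            # grow the window while dispersion stays under threshold
            j = i + min_samples
            while j < len(samples) and _dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            window = samples[i:j]
            cx = sum(p[0] for p in window) / len(window)
            cy = sum(p[1] for p in window) / len(window)
            fixations.append((cx, cy, len(window)))
            i = j
        else:
            i += 1
    return fixations
```

The resulting fixation sequence (with the intervening saccades) is the scanpath representation that the machine learning methods surveyed in the review take as input.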
39

Chacón Quesada, Rodrigo, Fernando Estévez Casado, and Yiannis Demiris. "An Integrated 3D Eye-Gaze Tracking Framework for Assessing Trust in Human-Robot Interaction." ACM Transactions on Human-Robot Interaction, March 28, 2025. https://doi.org/10.1145/3725861.

Full text
Abstract:
We introduce a comprehensive approach to examining the complexities of trust during human-robot interactions through an innovative three-dimensional (3D) eye-gaze tracking framework. Trust is a fundamental psychological factor in Human-Robot Interaction studies, influencing how humans perceive and interact with robots. Although researchers have previously highlighted eye-tracking as a promising tool for capturing behavioural manifestations of trust continuously and non-intrusively, traditional approaches have been limited to two-dimensional (2D) setups, leaving their applicability to real-worl
APA, Harvard, Vancouver, ISO, and other styles
40

Zhang, Xinyong. "Evaluating Target Expansion for Eye Pointing Tasks." Interacting with Computers, February 27, 2024. http://dx.doi.org/10.1093/iwc/iwae004.

Full text
Abstract:
The idea of target expansion was proposed two decades ago for manual target acquisition, but it is not feasible to implement this idea in traditional user interfaces as the interactive system cannot know exactly which target is the desired one and should be expanded among several candidates. With the increasing maturity of eye tracking technology, gaze input has moved from an academically promising technique to an input method with built-in support in Windows 10; and target expansion has already become very feasible in the context of gaze input, as the user’s eye gaze is inherently an
APA, Harvard, Vancouver, ISO, and other styles
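The core idea in the abstract above, enlarging a target's effective activation area for gaze pointing, can be sketched as a simple hit test. This is not the paper's evaluated design; the circular targets, the uniform expansion factor, and the nearest-target tie-breaking rule are all illustrative assumptions.

```python
def pick_target(gaze, targets, expansion=1.5):
    """Sketch of gaze-based target selection with expanded targets:
    each circular target's hit radius is scaled by `expansion`, and the
    gaze point selects the nearest target whose expanded radius it falls
    inside. `targets` is a list of (x, y, radius) tuples; returns the
    index of the selected target, or None if no target is hit."""
    best, best_d = None, float("inf")
    gx, gy = gaze
    for i, (x, y, r) in enumerate(targets):
        d = ((gx - x) ** 2 + (gy - y) ** 2) ** 0.5
        if d <= r * expansion and d < best_d:  # inside expanded area, closest wins
            best, best_d = i, d
    return best
```

Because the system knows where the gaze is, expansion can be applied to the target under (or near) the gaze point without the ambiguity that blocks expansion in manual pointing.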
41

Choi, Minkyu, Yizhen Zhang, Kuan Han, Xiaokai Wang, and Zhongming Liu. "Human Eyes–Inspired Recurrent Neural Networks Are More Robust Against Adversarial Noises." Neural Computation, July 18, 2024, 1–31. http://dx.doi.org/10.1162/neco_a_01688.

Full text
Abstract:
Humans actively observe the visual surroundings by focusing on salient objects and ignoring trivial details. However, computer vision models based on convolutional neural networks (CNN) often analyze visual input all at once through a single feedforward pass. In this study, we designed a dual-stream vision model inspired by the human brain. This model features retina-like input layers and includes two streams: one determining the next point of focus (the fixation), while the other interprets the visuals surrounding the fixation. Trained on image recognition, this model examines an ima
APA, Harvard, Vancouver, ISO, and other styles
42

Ahmadi, Nima, Farzan Sasangohar, Jing Yang, et al. "Quantifying Workload and Stress in Intensive Care Unit Nurses: Preliminary Evaluation Using Continuous Eye-Tracking." Human Factors: The Journal of the Human Factors and Ergonomics Society, May 5, 2022, 001872082210853. http://dx.doi.org/10.1177/00187208221085335.

Full text
Abstract:
Objective: (1) To assess mental workloads of intensive care unit (ICU) nurses in 12-hour working shifts (days and nights) using eye movement data; (2) to explore the impact of stress on the ocular metrics of nurses performing patient care in the ICU. Background: Prior studies have employed workload scoring systems or accelerometer data to assess ICU nurses’ workload. This is the first naturalistic attempt to explore nurses’ mental workload using eye movement data. Methods: Tobii Pro Glasses 2 eye-tracking and Empatica E4 devices were used to collect eye movement and physiological data from 15 nur
APA, Harvard, Vancouver, ISO, and other styles
43

Chen, Yupei, Zhibo Yang, Seoyoung Ahn, Dimitris Samaras, Minh Hoai, and Gregory Zelinsky. "COCO-Search18 fixation dataset for predicting goal-directed attention control." Scientific Reports 11, no. 1 (2021). http://dx.doi.org/10.1038/s41598-021-87715-9.

Full text
Abstract:
Attention control is a basic behavioral process that has been studied for decades. The currently best models of attention control are deep networks trained on free-viewing behavior to predict bottom-up attention control – saliency. We introduce COCO-Search18, the first dataset of laboratory-quality goal-directed behavior large enough to train deep-network models. We collected eye-movement behavior from 10 people searching for each of 18 target-object categories in 6202 natural-scene images, yielding ∼300,000 search fixations. We thoroughly characterize COCO-Search18, and benc
APA, Harvard, Vancouver, ISO, and other styles
44

Jónsdóttir, Auður Anna, Ziho Kang, Tianchen Sun, Saptarshi Mandal, and Ji-Eun Kim. "The Effects of Language Barriers and Time Constraints on Online Learning Performance: An Eye-Tracking Study." Human Factors: The Journal of the Human Factors and Ergonomics Society, May 4, 2021, 001872082110109. http://dx.doi.org/10.1177/00187208211010949.

Full text
Abstract:
Objective: The goal of this study is to model the effect of language use and time pressure on English as a first language (EFL) and English as a second language (ESL) students by measuring their eye movements in an on-screen, self-directed learning environment. Background: Online learning is becoming integrated into learners’ daily lives due to the flexibility in scheduling and location that it offers. However, in many cases, the online learners often have no interaction with one another or their instructors, making it difficult to determine how the learners are reading the materials and whether
APA, Harvard, Vancouver, ISO, and other styles
45

Gardezi, Maham, King Hei Fung, Usman Mirza Baig, et al. "What Makes an Image Interesting and How Can We Explain It." Frontiers in Psychology 12 (September 1, 2021). http://dx.doi.org/10.3389/fpsyg.2021.668651.

Full text
Abstract:
Here, we explore the question: What makes a photograph interesting? Answering this question deepens our understanding of human visual cognition and knowledge gained can be leveraged to reliably and widely disseminate information. Observers viewed images belonging to different categories, which covered a wide, representative spectrum of real-world scenes, in a self-paced manner and, at trial’s end, rated each image’s interestingness. Our studies revealed the following: landscapes were the most interesting of all categories tested, followed by scenes with people and cityscapes, followed still by
APA, Harvard, Vancouver, ISO, and other styles