Journal articles on the topic 'Gaze data'

Consult the top 50 journal articles for your research on the topic 'Gaze data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Yoo, Sangbong, Seongmin Jeong, and Yun Jang. "Gaze Behavior Effect on Gaze Data Visualization at Different Abstraction Levels." Sensors 21, no. 14 (2021): 4686. http://dx.doi.org/10.3390/s21144686.

Abstract:
Many gaze data visualization techniques intuitively show eye movement together with visual stimuli. The eye tracker records a large number of eye movements within a short period. Therefore, visualizing raw gaze data over the visual stimulus appears complicated and obscured, making it difficult to gain insight through visualization. To avoid this complication, we often employ fixation identification algorithms for more abstract visualizations. In the past, many scientists have focused on gaze data abstraction with the attention map and analyzed detailed gaze movement patterns with scanpath visualization. Abstract eye movement patterns change dramatically depending on the fixation identification algorithm used in preprocessing. However, it is difficult to find out how fixation identification algorithms affect gaze movement pattern visualizations. Additionally, scientists often spend much time manually adjusting the parameters of fixation identification algorithms. In this paper, we propose a gaze behavior-based data processing method for abstract gaze data visualization. The proposed method classifies raw gaze data using machine learning models for image classification, such as CNN, AlexNet, and LeNet. Additionally, we compare velocity-based identification (I-VT), dispersion-based identification (I-DT), density-based fixation identification, velocity- and dispersion-based identification (I-VDT), and the machine-learning-based and behavior-based models on various visualizations at each abstraction level, such as the attention map, scanpath, and abstract gaze movement visualization.
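As a concrete illustration of the velocity-based identification (I-VT) discussed above, a minimal sketch follows; the NumPy array inputs, degree units, and the 30 deg/s threshold are illustrative assumptions, not the authors' settings:

    import numpy as np

    def ivt_fixations(x, y, t, velocity_threshold=30.0):
        # Label samples below a velocity threshold (deg/s) as fixation
        # samples, then merge consecutive fixation samples into fixations.
        x, y, t = map(np.asarray, (x, y, t))
        speed = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
        is_fix = np.concatenate([[False], speed < velocity_threshold])
        fixations, start = [], None
        for i, f in enumerate(is_fix):
            if f and start is None:
                start = i
            elif not f and start is not None:
                fixations.append((t[start], t[i - 1],
                                  x[start:i].mean(), y[start:i].mean()))
                start = None
        if start is not None:
            fixations.append((t[start], t[-1], x[start:].mean(), y[start:].mean()))
        return fixations  # (onset, offset, centroid_x, centroid_y)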
2

Geller, Jason, Matthew B. Winn, Tristian Mahr, and Daniel Mirman. "GazeR: A Package for Processing Gaze Position and Pupil Size Data." Behavior Research Methods 52, no. 5 (2020): 2232–55. http://dx.doi.org/10.3758/s13428-020-01374-8.

3

Hà, Tiên-Dung, and Peter A. Chow-White. "The cancer multiple: Producing and translating genomic big data into oncology care." Big Data & Society 8, no. 1 (2021): 205395172097899. http://dx.doi.org/10.1177/2053951720978991.

Abstract:
This article provides an ethnographic account of how Big Data biology is produced, interpreted, debated, and translated in a Big Data-driven cancer clinical trial, entitled “Personalized OncoGenomics,” in Vancouver, Canada. We delve into epistemological differences between clinical judgment, pathological assessment, and bioinformatic analysis of cancer. To unpack these epistemological differences, we analyze a set of gazes required to produce Big Data biology in cancer care: clinical gaze, molecular gaze, and informational gaze. We are concerned with the interactions of these bodily gazes and their interdependence on each other to produce Big Data biology and translate it into clinical knowledge. To that end, our central research questions ask: How do medical practitioners and data scientists interact, contest, and collaborate to produce and translate Big Data into clinical knowledge? What counts as actionable and reliable data in cancer decision-making? How does the explicability or translatability of genomic Big Data come to redefine or contradict medical practice? The article contributes to current debates on whether Big Data engenders new questions and approaches to biology, or Big Data biology is merely an extension of early modern natural history and biology. This ethnographic account will highlight how genomic Big Data, which underpins the mechanism of personalized medicine, allows oncologists to understand and diagnose cancer in a different light, but it does not revolutionize or disrupt medical oncology on an institutional level. Rather, personalized medicine is interdependent on different styles of (medical) thought, gaze, and practice to be produced and made intelligible.
4

SUZUKI, Daisuke, Fumitoshi KIKUCHI, Takaharu KOIKE, and Hiroyuki KATANO. "Gaze data feedback on railway driving simulator." Japanese Journal of Ergonomics 56, Supplement (2020): 2F1–01–2F1–01. http://dx.doi.org/10.5100/jje.56.2f1-01.

5

Kerasovitis, Konstantinos. "The data gaze: capitalism, power and perception." Information, Communication & Society 22, no. 13 (2019): 2033–36. http://dx.doi.org/10.1080/1369118x.2019.1609544.

6

Chen, Zenghai, Hong Fu, Wai-Lun Lo, and Zheru Chi. "Strabismus Recognition Using Eye-Tracking Data and Convolutional Neural Networks." Journal of Healthcare Engineering 2018 (2018): 1–9. http://dx.doi.org/10.1155/2018/7692198.

Abstract:
Strabismus is one of the most common vision diseases and can cause amblyopia and even permanent vision loss. Timely diagnosis is crucial for treating strabismus effectively. In contrast to manual diagnosis, automatic recognition can significantly reduce labor cost and increase diagnosis efficiency. In this paper, we propose to recognize strabismus using eye-tracking data and convolutional neural networks. In particular, an eye tracker is first exploited to record a subject’s eye movements. A gaze deviation (GaDe) image is then proposed to characterize the subject’s eye-tracking data according to the accuracies of gaze points. The GaDe image is fed to a convolutional neural network (CNN) that has been trained on a large image database called ImageNet. The outputs of the CNN’s fully connected layers are used as the GaDe image’s features for strabismus recognition. A dataset containing eye-tracking data of both strabismic subjects and normal subjects is established for experiments. Experimental results demonstrate that the natural image features transfer well to the representation of eye-tracking data, and strabismus can be effectively recognized by our proposed method.
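A sketch of the transfer-learning step described above, using torchvision's ImageNet-pretrained AlexNet as the fixed feature extractor; the GaDe image construction is specific to the paper, so the hypothetical helper below simply assumes a GaDe image saved as an RGB file:

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # ImageNet-pretrained CNN; features are read from the fully connected
    # layers, as the abstract describes (final 1000-way layer dropped).
    net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
    extractor = torch.nn.Sequential(
        net.features, net.avgpool, torch.nn.Flatten(),
        *list(net.classifier.children())[:-1])

    preprocess = T.Compose([
        T.Resize((224, 224)), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])

    def gade_features(path):  # hypothetical helper name
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            return extractor(img).squeeze(0).numpy()  # 4096-D feature vector

These features would then be fed to a separate classifier trained to separate strabismic from normal recordings.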
7

Le, Thao, Ronal Singh, and Tim Miller. "Goal Recognition for Deceptive Human Agents through Planning and Gaze." Journal of Artificial Intelligence Research 71 (August 3, 2021): 697–732. http://dx.doi.org/10.1613/jair.1.12518.

Abstract:
Eye gaze has the potential to provide insight into the minds of individuals, and this idea has been used in prior research to improve human goal recognition by combining humans' actions and gaze. However, most existing research assumes that people are rational and honest. In adversarial scenarios, people may deliberately alter their actions and gaze, which presents a challenge to goal recognition systems. In this paper, we present new models for goal recognition under deception using a combination of gaze behaviour and observed movements of the agent. These models aim to detect when a person is being deceptive by analysing their gaze patterns, and they use this information to adjust the goal recognition. We evaluated our models in two human-subject studies: (1) using data collected from 30 individuals playing a navigation game inspired by an existing deception study and (2) using data collected from 40 individuals playing a competitive game (Ticket to Ride). We found that one of our models (Modulated Deception Gaze+Ontic) offers promising results compared to the previous state-of-the-art model in both studies. Our work complements existing adversarial goal recognition systems by equipping them with the ability to tackle ambiguous gaze behaviours.
8

Krassanakis, Vassilios. "Aggregated Gaze Data Visualization Using Contiguous Irregular Cartograms." Digital 1, no. 3 (2021): 130–44. http://dx.doi.org/10.3390/digital1030010.

Abstract:
Gaze data visualization constitutes one of the most critical processes during eye-tracking analysis. Considering that modern devices are able to collect gaze data in extremely high frequencies, the visualization of the collected aggregated gaze data is quite challenging. In the present study, contiguous irregular cartograms are used as a method to visualize eye-tracking data captured by several observers during the observation of a visual stimulus. The followed approach utilizes a statistical grayscale heatmap as the main input and, hence, it is independent of the total number of the recorded raw gaze data. Indicative examples, based on different parameters/conditions and heatmap grid sizes, are provided in order to highlight their influence on the final image of the produced visualization. Moreover, two analysis metrics, referred to as center displacement (CD) and area change (AC), are proposed and implemented in order to quantify the geometric changes (in both position and area) that accompany the topological transformation of the initial heatmap grids, as well as to deliver specific guidelines for the execution of the used algorithm. The provided visualizations are generated using open-source software in a geographic information system.
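The proposed CD and AC metrics reduce to simple geometry once each heatmap grid cell is represented by its centroid and area before and after the cartogram transformation; a minimal sketch under that assumed cell representation:

    import numpy as np

    def center_displacement(centroids_before, centroids_after):
        # CD: Euclidean distance each cell centroid moved, shape (n_cells,)
        diff = np.asarray(centroids_after, float) - np.asarray(centroids_before, float)
        return np.linalg.norm(diff, axis=1)

    def area_change(areas_before, areas_after):
        # AC: relative change in each cell's area
        a0 = np.asarray(areas_before, float)
        return (np.asarray(areas_after, float) - a0) / a0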
9

Sesma-Sanchez, L., A. Villanueva, and R. Cabeza. "Gaze Estimation Interpolation Methods Based on Binocular Data." IEEE Transactions on Biomedical Engineering 59, no. 8 (2012): 2235–43. http://dx.doi.org/10.1109/tbme.2012.2201716.

10

Naqvi, Rizwan Ali, Muhammad Arsalan, Abdul Rehman, Ateeq Ur Rehman, Woong-Kee Loh, and Anand Paul. "Deep Learning-Based Drivers Emotion Classification System in Time Series Data for Remote Applications." Remote Sensing 12, no. 3 (2020): 587. http://dx.doi.org/10.3390/rs12030587.

Abstract:
Aggressive driving is indeed one of the major causes of traffic accidents throughout the world. Real-time classification of abnormal and normal driving from time series data is a keystone to avoiding road accidents. Existing work on driving behaviors in time series data has some limitations and causes discomfort for users, which need to be addressed. We propose a multimodal method to remotely detect driver aggressiveness in order to deal with these issues. The proposed method is based on changes in the gaze and facial emotions of drivers while driving, using near-infrared (NIR) camera sensors and an illuminator installed in the vehicle. Drivers’ aggressive and normal time series data are collected while playing car racing and truck driving computer games, respectively, on a driving game simulator. The Dlib library is used to process the driver’s image data and extract face, left-eye, and right-eye images for detecting change in gaze with a convolutional neural network (CNN). Similarly, CNN-based facial emotions are obtained from lip, left-eye, and right-eye images extracted with Dlib. Finally, score-level fusion is applied to the scores obtained from change in gaze and from facial emotions to classify aggressive and normal driving. The proposed method’s accuracy is measured through experiments using a self-constructed large-scale testing database; the results show that the classification accuracy for aggressive and normal driving is high, and the performance is superior to that of previous methods.
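The final score-level fusion step can be illustrated with a weighted sum; the equal weights and the 0.5 decision threshold below are assumptions for illustration, not the paper's values:

    def fuse_scores(gaze_score, emotion_score, w_gaze=0.5, threshold=0.5):
        # Both inputs are classifier scores in [0, 1] for "aggressive".
        fused = w_gaze * gaze_score + (1.0 - w_gaze) * emotion_score
        return ("aggressive" if fused >= threshold else "normal"), fused

    print(fuse_scores(0.9, 0.4))  # ('aggressive', 0.65)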
11

B. N., Pavan, Adithya B., Chethana B., Ashok Patil, and Young Chai. "Gaze-Controlled Virtual Retrofitting of UAV-Scanned Point Cloud Data." Symmetry 10, no. 12 (2018): 674. http://dx.doi.org/10.3390/sym10120674.

Abstract:
This study proposed a gaze-controlled method for visualization, navigation, and retrofitting of large point cloud data (PCD) produced by unmanned aerial vehicles (UAVs) mounted with laser range-scanners. For this purpose, the estimated human gaze point was used to interact with a head-mounted display (HMD) to visualize the PCD and the computer-aided design (CAD) models. Virtual water treatment plant pipeline models were considered for retrofitting against the PCD of the actual pipelines. In this application, the objective was to use the gaze data to interact with the HMD so that the virtual retrofitting process was performed by navigating with the eye gaze. It was inferred that integrating eye gaze tracking for visualization and interaction with the HMD could improve both speed and functionality of human–computer interaction. A usability study was conducted to investigate the speed of the proposed method against mouse-based retrofitting. In addition, immersion, interface quality, and accuracy were analyzed using an appropriate questionnaire, and user learning was tested by conducting experiments in iterations with participants. Finally, a survey experiment verified whether any negative psychological factors arose, such as cybersickness, general discomfort, fatigue, headache, eye strain, and difficulty concentrating.
12

MacInnes, Jeff, Shariq Iqbal, John Pearson, and Elizabeth Johnson. "Mobile Gaze Mapping: A Python package for mapping mobile gaze data to a fixed target stimulus." Journal of Open Source Software 3, no. 31 (2018): 984. http://dx.doi.org/10.21105/joss.00984.

13

Khan, Muhammad Qasim, and Sukhan Lee. "Gaze and Eye Tracking: Techniques and Applications in ADAS." Sensors 19, no. 24 (2019): 5540. http://dx.doi.org/10.3390/s19245540.

Abstract:
Tracking drivers’ eyes and gazes is a topic of great interest in research on advanced driving assistance systems (ADAS). It is especially a matter of serious discussion within the road safety research community, as visual distraction is considered among the major causes of road accidents. In this paper, techniques for eye and gaze tracking are first comprehensively reviewed while discussing their major categories. The advantages and limitations of each category are explained with respect to their requirements and practical uses. In another section of the paper, the applications of eye and gaze tracking systems in ADAS are discussed. The process of acquiring a driver’s eye and gaze data and the algorithms used to process these data are explained. It is explained how data related to a driver’s eyes and gaze can be used in ADAS to reduce the losses associated with road accidents caused by the driver’s visual distraction. A discussion on the required features of current and future eye and gaze trackers is also presented.
14

Lamti, Hachem A., Mohamed Moncef Ben Khelifa, and Vincent Hugel. "Cerebral and gaze data fusion for wheelchair navigation enhancement: case of distracted users." Robotica 37, no. 2 (2018): 246–63. http://dx.doi.org/10.1017/s0263574718000991.

Abstract:
The goal of this paper is to present a new hybrid system based on the fusion of gaze data and Steady State Visual Evoked Potentials (SSVEP), not only to command a powered wheelchair but also to account for users’ distraction levels (concentrated or distracted). For this purpose, a multi-layer perceptron neural network was set up to combine relevant gazing and blinking features from the gaze sequence with brainwave features from the occipital and parietal brain regions. The motivation behind this work is the shortcomings of using gaze-based and SSVEP-based wheelchair command techniques individually. The proposed framework is based on three main modules: a gaze module to select a command and activate the flashing stimuli; an SSVEP module to validate the selected command; and, in parallel, a distraction level module that estimates the intention of the user by means of behavioral entropy and validates or inhibits the command accordingly. An experimental protocol was set up and the prototype was tested on five paraplegic subjects and compared with standard SSVEP-based and gaze-based systems. The results showed that the new framework performed better than the conventional gaze-based and SSVEP-based systems. Navigation performance was assessed based on navigation time and obstacle collisions.
15

Zonca, Joshua, Giorgio Coricelli, and Luca Polonio. "Gaze data reveal individual differences in relational representation processes." Journal of Experimental Psychology: Learning, Memory, and Cognition 46, no. 2 (2020): 257–79. http://dx.doi.org/10.1037/xlm0000723.

16

Kurzhals, Kuno, Marcel Hlawatsch, Florian Heimerl, Michael Burch, Thomas Ertl, and Daniel Weiskopf. "Gaze Stripes: Image-Based Visualization of Eye Tracking Data." IEEE Transactions on Visualization and Computer Graphics 22, no. 1 (2016): 1005–14. http://dx.doi.org/10.1109/tvcg.2015.2468091.

17

Marchal, Florian, Sylvain Castagnos, and Anne Boyer. "First Attempt to Predict User Memory from Gaze Data." International Journal on Artificial Intelligence Tools 27, no. 06 (2018): 1850029. http://dx.doi.org/10.1142/s021821301850029x.

Abstract:
Many recommenders compute predictions by inferring users’ preferences. However, in some cases, such as in e-education, the recommendation of pedagogical resources should rather be based on users’ memory. In order to estimate in real time, and with low user involvement, what has been recalled by users, we designed a user study to highlight the link between gaze features and visual memory. Our protocol consisted of asking different subjects to remember a large set of images. During this memory test, we collected about 19,000 fixation points. Among other results, we show in this paper a strong correlation between the relative path angles and the memorized items. We then applied various classifiers and showed that it is possible to predict users’ memory status by analyzing their gaze data. This is a first step toward providing recommendations that fit users’ learning curves.
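The relative path angle reported above is the turning angle between consecutive saccade vectors in a scanpath; a minimal sketch, assuming fixation centroids in chronological order and no zero-length saccades:

    import numpy as np

    def relative_path_angles(fix_x, fix_y):
        pts = np.column_stack([fix_x, fix_y]).astype(float)
        v = np.diff(pts, axis=0)                       # saccade vectors
        dots = (v[:-1] * v[1:]).sum(axis=1)
        norms = np.linalg.norm(v[:-1], axis=1) * np.linalg.norm(v[1:], axis=1)
        return np.degrees(np.arccos(np.clip(dots / norms, -1.0, 1.0)))

    print(relative_path_angles([0, 1, 1], [0, 0, 1]))  # [90.]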
18

Simpson, James. "Three-Dimensional Gaze Projection Heat-Mapping of Outdoor Mobile Eye-Tracking Data." Interdisciplinary Journal of Signage and Wayfinding 5, no. 1 (2021): 62–82. http://dx.doi.org/10.15763/issn.2470-9670.2021.v5.i1.a75.

Abstract:
The mobilization of eye-tracking for use outside of the laboratory provides new opportunities for the assessment of pedestrian visual engagement with their surroundings. However, the development of data representation techniques that visualize the dynamics of pedestrian gaze distribution upon the environment remains limited. The current study addresses this by highlighting how mobile eye-tracking data, which capture where pedestrian gaze is focused upon buildings along urban street edges, can be mapped as three-dimensional gaze projection heat-maps. This data processing and visualization technique is assessed in the current study, and future opportunities and associated challenges are discussed.
19

bin Suhaimi, Muhammad Syaiful Amri, Kojiro Matsushita, Minoru Sasaki, and Waweru Njeri. "24-Gaze-Point Calibration Method for Improving the Precision of AC-EOG Gaze Estimation." Sensors 19, no. 17 (2019): 3650. http://dx.doi.org/10.3390/s19173650.

Abstract:
This paper sought to improve the precision of the Alternating Current Electro-Oculography (AC-EOG) gaze estimation method. The method consists of two core techniques: estimating eyeball movement from EOG signals and converting the eyeball movement into a gaze position. In conventional research, the estimations are computed with two EOG signals corresponding to vertical and horizontal movements. The conversion is based on an affine transformation whose parameters are computed from 24-point gazing data at calibration. However, the transformation is not applied to all 24-point gazing data at once, but to four spatially separated subsets (the quadrant method), and each result has different characteristics. Thus, we proposed a conversion method that handles the 24-point gazing data at the same time: assume an imaginary center (i.e., a 25th point) on the gaze coordinates and apply a single affine transformation to the 24-point gazing data. We then conducted a comparative investigation between the conventional method and the proposed method. From the results, the average eye angle error for the cross-shaped electrode attachment is x = 2.27° ± 0.46° and y = 1.83° ± 0.34°. In contrast, for the plus-shaped electrode attachment, the average eye angle error is x = 0.94° ± 0.19° and y = 1.48° ± 0.27°. We concluded that the proposed method offers simpler and more precise EOG gaze estimation than the conventional method.
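The proposed single affine transformation over all 24 calibration points can be fitted in closed form by least squares; a sketch that assumes the data are arranged as (24, 2) arrays:

    import numpy as np

    def fit_affine(eog_xy, target_xy):
        # Solve [x y 1] @ P ≈ [gx gy] for a (3, 2) affine parameter matrix
        # using all 24 calibration points at once.
        eog = np.asarray(eog_xy, float)
        A = np.column_stack([eog, np.ones(len(eog))])
        P, *_ = np.linalg.lstsq(A, np.asarray(target_xy, float), rcond=None)
        return P

    def apply_affine(P, eog_xy):
        eog = np.asarray(eog_xy, float)
        return np.column_stack([eog, np.ones(len(eog))]) @ P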
20

Sarey Khanie, M., J. Stoll, W. Einhäuser, J. Wienold, and M. Andersen. "Gaze and discomfort glare, Part 1: Development of a gaze-driven photometry." Lighting Research & Technology 49, no. 7 (2016): 845–65. http://dx.doi.org/10.1177/1477153516649016.

Abstract:
Discomfort glare is a major challenge for the design of workplaces. The existing metrics for discomfort glare prediction share the limitation that they do not take gaze direction into account. To overcome this limitation, we developed a ‘gaze-driven’ method for discomfort glare assessment. We conducted a series of experiments under simulated office conditions and recorded the participants’ gaze using mobile eye tracking and the luminance distributions using high dynamic range imaging methods. The two methods were then integrated to derive ‘gaze-centred’ luminance measurements in the field of view. The existing ‘fixed-gaze’ and the newly developed ‘gaze-driven’ measurement methods are compared. Our results show that there is a significant difference between the two methods. In this paper, the procedure for integrating the recorded luminance images with the recorded gaze dynamics for obtaining gaze-centred luminance data is described. This gaze-centred luminance data will be compared to the subjective assessment of glare in Part 2 of this study.
21

SUZUKI, Daisuke, Satoru MATSUURA, Takaharu KOIKE, and Kuniyuki MATSUU. "1B1-5 Gaze data feedback system on railway driving simulator." Japanese Journal of Ergonomics 55, Supplement (2019): 1B1–5–1B1–5. http://dx.doi.org/10.5100/jje.55.1b1-5.

22

Trapp, Andrew C., Wen Liu, and Soussan Djamasbi. "Identifying Fixations in Gaze Data via Inner Density and Optimization." INFORMS Journal on Computing 31, no. 3 (2019): 459–76. http://dx.doi.org/10.1287/ijoc.2018.0859.

23

Breeden, Katherine, and Pat Hanrahan. "Gaze Data for the Analysis of Attention in Feature Films." ACM Transactions on Applied Perception 14, no. 4 (2017): 1–14. http://dx.doi.org/10.1145/3127588.

24

Denniss, Jonathan, and Andrew T. Astle. "Spatial Interpolation Enables Normative Data Comparison in Gaze-Contingent Microperimetry." Investigative Ophthalmology & Visual Science 57, no. 13 (2016): 5449. http://dx.doi.org/10.1167/iovs.16-20222.

25

Thirunarayanan, Ishwarya, Khimya Khetarpal, Sanjeev Koppal, Olivier Le Meur, John Shea, and Eakta Jain. "Creating Segments and Effects on Comics by Clustering Gaze Data." ACM Transactions on Multimedia Computing, Communications, and Applications 13, no. 3 (2017): 1–23. http://dx.doi.org/10.1145/3078836.

26

SHIBATA, Chikara, and Hiroyuki YAGUCHI. "A Study of Gaze Data Display Method for Text Readings." Japanese Journal of Ergonomics 57, Supplement (2021): 2D3–2–2D3–2. http://dx.doi.org/10.5100/jje.57.2d3-2.

27

Niehorster, Diederick C., Raimondas Zemblys, Tanya Beelders, and Kenneth Holmqvist. "Characterizing gaze position signals and synthesizing noise during fixations in eye-tracking data." Behavior Research Methods 52, no. 6 (2020): 2515–34. http://dx.doi.org/10.3758/s13428-020-01400-9.

Abstract:
The magnitude of variation in the gaze position signals recorded by an eye tracker, also known as its precision, is an important aspect of an eye tracker’s data quality. However, data quality of eye-tracking signals is still poorly understood. In this paper, we therefore investigate the following: (1) How do the various available measures characterizing eye-tracking data during fixation relate to each other? (2) How are they influenced by signal type? (3) What type of noise should be used to augment eye-tracking data when evaluating eye-movement analysis methods? To support our analysis, this paper presents new measures to characterize signal type and signal magnitude based on RMS-S2S and STD, two established measures of precision. Simulations are performed to investigate how each of these measures depends on the number of gaze position samples over which they are calculated, and to reveal how RMS-S2S and STD relate to each other and to measures characterizing the temporal spectrum composition of the recorded gaze position signal. Further empirical investigations were performed using gaze position data recorded with five eye trackers from human and artificial eyes. We found that although the examined eye trackers produce gaze position signals with different characteristics, the relations between precision measures derived from simulations are borne out by the data. We furthermore conclude that data with a range of signal type values should be used to assess the robustness of eye-movement analysis methods. We present a method for generating artificial eye-tracker noise of any signal type and magnitude.
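The two established precision measures that the paper builds on have compact common formulations, sketched below; the exact definitions should be checked against the paper:

    import numpy as np

    def rms_s2s(x, y):
        # Root mean square of sample-to-sample displacements
        return np.sqrt(np.mean(np.diff(x) ** 2 + np.diff(y) ** 2))

    def std_precision(x, y):
        # Dispersion of the samples around their centroid
        return np.sqrt(np.var(x) + np.var(y))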
28

Takahashi, Ryo, Hiromasa Suzuki, Jouh Yeong Chew, Yutaka Ohtake, Yukie Nagai, and Koichi Ohtomi. "A system for three-dimensional gaze fixation analysis using eye tracking glasses." Journal of Computational Design and Engineering 5, no. 4 (2017): 449–57. http://dx.doi.org/10.1016/j.jcde.2017.12.007.

Abstract:
Eye tracking is a technology that has quickly become a commonplace tool for evaluating package and webpage design. In such design processes, static two-dimensional images are shown on a computer screen while a subject's gaze (where he or she looks) is measured via an eye tracking device. The collected gaze fixation data are then visualized and analyzed via gaze plots and heat maps. Such evaluations using two-dimensional images are often too limited to analyze gaze on three-dimensional physical objects such as products, because users look at them not from a single point of view but rather from various angles. Therefore, in this study, we propose methods for collecting gaze fixation data for a three-dimensional model of a given product and visualizing the corresponding gaze plots and heat maps, also in three dimensions. To achieve our goals, we used a wearable eye-tracking device, i.e., eye-tracking glasses. Further, we implemented a prototype system to demonstrate its advantages in comparison with two-dimensional gaze fixation methods. Highlights: a method for collecting gaze fixation data for a three-dimensional model of a given product; two visualization methods for three-dimensional gaze data, gaze plots and heat maps; application of the proposed system to two practical examples, a hair dryer and a car interior.
29

Brock, Brian. "Seeing through the Data Shadow: Communing with the Saints in a Surveillance Society." Surveillance & Society 16, no. 4 (2018): 533–45. http://dx.doi.org/10.24908/ss.v16i4.8085.

Abstract:
The political theologian Amy Laura Hall has recently suggested that the proliferation of security cameras can be read as an index displaying the quality of a given community’s social fabric. The aim of the paper is to show why this is a plausible reading of the Christian tradition that also helpfully illuminates the various cultural phenomena in western societies that are collectively indicated by the label “surveillance.” The Swedish theologian Ola Sigurdson’s account of modern regimes of perception substantiates this latter claim. An alternative political proposal is then developed around an account of the divine gaze that differs from the panoptic gaze of modernity. This theological positioning of the trusting gaze as ontologically fundamental for human community is paired with an acceptance of the limits of human sight and the multivalence of human knowing. The paper concludes by highlighting the importance of the gaze of the saints in training Christian vision to see beyond the characteristic ways of seeing and participating in the social organism characteristic of modern liberal surveillance societies. This conclusion implies, further, that one of the most important ways that the most denuding aspects of the surveillance society can be resisted is by drawing the gatekeepers who do the watching out into public converse.
30

Kar, Anuradha. "MLGaze: Machine Learning-Based Analysis of Gaze Error Patterns in Consumer Eye Tracking Systems." Vision 4, no. 2 (2020): 25. http://dx.doi.org/10.3390/vision4020025.

Abstract:
Analyzing the gaze accuracy characteristics of an eye tracker is a critical task, as its gaze data are frequently affected by non-ideal operating conditions in various consumer eye tracking applications. In previous research on pattern analysis of gaze data, efforts were made to model human visual behaviors and cognitive processes. What remains relatively unexplored are questions related to identifying gaze error sources as well as quantifying and modeling their impacts on the data quality of eye trackers. In this study, gaze error patterns produced by a commercial eye tracking device were studied with the help of machine learning algorithms, such as classifiers and regression models. Gaze data were collected from a group of participants under multiple conditions that commonly affect eye trackers operating on desktop and handheld platforms. These conditions (referred to here as error sources) include user distance, head pose, and eye-tracker pose variations, and the collected gaze data were used to train the classifier and regression models. It was seen that while the impacts of the different error sources on gaze data characteristics were nearly impossible to distinguish by visual inspection or from data statistics, machine learning models were successful in identifying the impact of the different error sources and predicting the variability in gaze error levels due to these conditions. The objective of this study was to investigate the efficacy of machine learning methods towards the detection and prediction of gaze error patterns, which would enable an in-depth understanding of the data quality and reliability of eye trackers under unconstrained operating conditions. Coding resources for all the machine learning methods adopted in this study were included in an open repository named MLGaze to allow researchers to replicate the principles presented here using data from their own eye trackers.
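A generic sketch of the classification idea, identifying the operating condition behind an error pattern; the random features and labels are placeholders for the study's actual gaze-error statistics:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 4))     # e.g., per-window gaze-error statistics
    y = rng.integers(0, 3, size=600)  # 0: user distance, 1: head pose, 2: tracker pose

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))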
31

Busey, Thomas A., Nicholas Heise, R. Austin Hicklin, Bradford T. Ulery, and JoAnn Buscaglia. "Characterizing missed identifications and errors in latent fingerprint comparisons using eye-tracking data." PLOS ONE 16, no. 5 (2021): e0251674. http://dx.doi.org/10.1371/journal.pone.0251674.

Abstract:
Latent fingerprint examiners sometimes come to different conclusions when comparing fingerprints, and eye-gaze behavior may help explain these outcomes. Missed identifications (missed IDs) are inconclusive, exclusion, or No Value determinations reached when the consensus of other examiners is an identification. To determine the relation between examiner behavior and missed IDs, we collected eye-gaze data from 121 latent print examiners as they completed a total of 1,444 difficult (latent-exemplar) comparisons. We extracted metrics from the gaze data that serve as proxies for underlying perceptual and cognitive capacities. We used these metrics to characterize potential mechanisms of missed IDs: Cursory Comparison and Mislocalization. We find that missed IDs are associated with shorter comparison times, fewer regions visited, and fewer attempted correspondences between the compared images. Latent print comparisons resulting in erroneous exclusions (a subset of missed IDs) are also more likely to have fixations in different regions and less accurate correspondence attempts than those comparisons resulting in identifications. We also use our derived metrics to describe one atypical examiner who made six erroneous identifications, four of which were on comparisons intended to be straightforward exclusions. The present work helps identify the degree to which missed IDs can be explained using eye-gaze behavior, and the extent to which missed IDs depend on cognitive and decision-making factors outside the domain of eye-tracking methodologies.
32

Ananpiriyakul, Thanawut, Joshua Anghel, Kristi Potter, and Alark Joshi. "A Gaze-Contingent System for Foveated Multiresolution Visualization of Vector and Volumetric Data." Electronic Imaging 2020, no. 1 (2020): 374–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.1.vda-374.

Abstract:
Computational complexity is a limiting factor for visualizing large-scale scientific data. Most approaches to render large datasets are focused on novel algorithms that leverage cutting-edge graphics hardware to provide users with an interactive experience. In this paper, we alternatively demonstrate foveated imaging, which allows interactive exploration using low-cost hardware by tracking the gaze of a participant to drive the rendering quality of an image. Foveated imaging exploits the fact that the spatial resolution of the human visual system decreases dramatically away from the central point of gaze, allowing computational resources to be reserved for areas of importance. We demonstrate this approach using face tracking to identify the gaze point of the participant for both vector and volumetric datasets and evaluate our results by comparing against traditional techniques. In our evaluation, we found a significant increase in computational performance using our foveated imaging approach while maintaining high image quality in regions of visual attention.
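At the core of such a system is a quality factor that decays with distance from the tracked gaze point; a minimal sketch with an assumed linear falloff (the paper's actual falloff model may differ):

    import numpy as np

    def quality_factor(px, py, gaze_x, gaze_y, full_res_radius=100.0, falloff=400.0):
        # 1.0 inside the foveal radius, linearly decaying to 0.0 beyond it
        # (all distances in pixels); drives per-region rendering resolution.
        r = np.hypot(px - gaze_x, py - gaze_y)
        return np.clip(1.0 - (r - full_res_radius) / falloff, 0.0, 1.0)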
33

Trentman, Emma, and Wenhao Diao. "The American gaze east." Study Abroad to, from, and within Asia 2, no. 2 (2017): 175–205. http://dx.doi.org/10.1075/sar.16001.tre.

Abstract:
The 21st century has seen an emphasis in US media and policy documents on increasing the number of US students studying abroad and also the number of US students studying 'critical' languages. This paper examines the intersection of these discourses, or the experiences of critical language learners abroad. We analyze this intersection by using critical discourse analysis to examine US media and policy documents and data from students studying Arabic in Egypt and Mandarin in China. This analysis reveals considerable discrepancies between rhetoric and experience in terms of language and intercultural learning. We argue that a critical examination of current discourses of study abroad (SA) reveals that they in fact recreate the colonial map, mask global inequalities, and create a new global elite. We conclude that language and intercultural learning abroad will remain a source of tension until SA students and programs critically engage with these discourses.
34

Kuravsky, L. S., P. A. Marmalyuk, G. A. Yuryev, O. B. Belyaeva, and O. Yu Prokopieva. "Flight crew diagnostic using aviation simulator training data." Experimental Psychology (Russia) 9, no. 3 (2016): 118–37. http://dx.doi.org/10.17759/exppsy.2016090310.

Abstract:
This paper describes a new concept of flight crew assessment based on flight simulator training results. It is based on the representation of pilot gaze movement with the aid of continuous-time Markov processes with discrete states. Considered are both the procedure for identifying model parameters, supported by goodness-of-fit tests, and the classifier-building technique, which makes it possible to estimate the degree of correspondence between the observed gaze motion distribution under study and reference distributions identified for different diagnosed groups. The final assessment criterion is formed on the basis of integrated diagnostic parameters, which are determined by the parameters of the identified models. The article provides a description of the experiment, illustrations, and results of studies aimed at assessing the reliability of the developed models and criteria, as well as conclusions about the applicability of the approach and its advantages and disadvantages.
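The modelling idea can be illustrated by estimating a transition matrix from a discrete gaze-state sequence; the paper's continuous-time chain additionally uses dwell times, which this sketch omits:

    import numpy as np

    def transition_matrix(states, n_states):
        # Count state-to-state transitions, then normalise rows.
        counts = np.zeros((n_states, n_states))
        for a, b in zip(states[:-1], states[1:]):
            counts[a, b] += 1
        rows = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

    # e.g., states encode which instrument the pilot fixates at each sample
    print(transition_matrix([0, 0, 1, 2, 1, 0, 2, 2], 3))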
35

Smith, Stephanie M., and Ian Krajbich. "Gaze Amplifies Value in Decision Making." Psychological Science 30, no. 1 (2018): 116–28. http://dx.doi.org/10.1177/0956797618810521.

Abstract:
When making decisions, people tend to choose the option they have looked at more. An unanswered question is how attention influences the choice process: whether it amplifies the subjective value of the looked-at option or instead adds a constant, value-independent bias. To address this, we examined choice data from six eye-tracking studies (Ns = 39, 44, 44, 36, 20, and 45, respectively) to characterize the interaction between value and gaze in the choice process. We found that the summed values of the options influenced response times in every data set and the gaze-choice correlation in most data sets, in line with an amplifying role of attention in the choice process. Our results suggest that this amplifying effect is more pronounced in tasks using large sets of familiar stimuli, compared with tasks using small sets of learned stimuli.
36

Caruana, Nathan, Genevieve McArthur, Alexandra Woolgar, and Jon Brock. "Detecting communicative intent in a computerised test of joint attention." PeerJ 5 (January 17, 2017): e2899. http://dx.doi.org/10.7717/peerj.2899.

Abstract:
The successful navigation of social interactions depends on a range of cognitive faculties—including the ability to achieve joint attention with others to share information and experiences. We investigated the influence that intention monitoring processes have on gaze-following response times during joint attention. We employed a virtual reality task in which 16 healthy adults engaged in a collaborative game with a virtual partner to locate a target in a visual array. In the Search task, the virtual partner was programmed to engage in non-communicative gaze shifts in search of the target, establish eye contact, and then display a communicative gaze shift to guide the participant to the target. In the NoSearch task, the virtual partner simply established eye contact and then made a single communicative gaze shift towards the target (i.e., there were no non-communicative gaze shifts in search of the target). Thus, only the Search task required participants to monitor their partner’s communicative intent before responding to joint attention bids. We found that gaze following was significantly slower in the Search task than the NoSearch task. However, the same effect on response times was not observed when participants completed non-social control versions of the Search and NoSearch tasks, in which the avatar’s gaze was replaced by arrow cues. These data demonstrate that the intention monitoring processes involved in differentiating communicative and non-communicative gaze shifts during the Search task had a measurable influence on subsequent joint attention behaviour. The empirical and methodological implications of these findings for the fields of autism and social neuroscience will be discussed.
37

ZHAN, Yinwei, Yaodong LI, Zhuo YANG, Yao ZHAO, and Huaiyu WU. "Saccade Information Based Directional Heat Map Generation for Gaze Data Visualization." IEICE Transactions on Information and Systems E102.D, no. 8 (2019): 1602–5. http://dx.doi.org/10.1587/transinf.2019edl8009.

38

Sæther, Line, Werner Van Belle, Bruno Laeng, Tim Brennen, and Morten Øvervoll. "Anchoring gaze when categorizing faces’ sex: Evidence from eye-tracking data." Vision Research 49, no. 23 (2009): 2870–80. http://dx.doi.org/10.1016/j.visres.2009.09.001.

39

Jain, Eakta, Yaser Sheikh, and Jessica Hodgins. "Predicting Moves-on-Stills for Comic Art Using Viewer Gaze Data." IEEE Computer Graphics and Applications 36, no. 4 (2016): 34–45. http://dx.doi.org/10.1109/mcg.2016.74.

40

Malakhova, Katerina, and Evgenii Shelepin. "Including temporal information into prediction of gaze direction by webcam data." Journal of Vision 18, no. 10 (2018): 1204. http://dx.doi.org/10.1167/18.10.1204.

41

Backer, G., B. Mertsching, and M. Bollmann. "Data- and model-driven gaze control for an active-vision system." IEEE Transactions on Pattern Analysis and Machine Intelligence 23, no. 12 (2001): 1415–29. http://dx.doi.org/10.1109/34.977565.

42

Sivarajah, Yathunanthan, Eun-Jung Holden, Roberto Togneri, Mike Dentith, and Jeffrey Shragge. "Analysing Variability in Geophysical Data Interpretation by Monitoring Eye Gaze Movement." ASEG Extended Abstracts 2012, no. 1 (2012): 1–4. http://dx.doi.org/10.1071/aseg2012ab182.

43

Zhang, Yun, Xiaoyu Zhao, Hong Fu, et al. "A Time Delay Neural Network model for simulating eye gaze data." Journal of Experimental & Theoretical Artificial Intelligence 23, no. 1 (2011): 111–26. http://dx.doi.org/10.1080/0952813x.2010.506298.

44

Sadria, Mehrshad, Soroush Karimi, and Anita T. Layton. "Network centrality analysis of eye-gaze data in autism spectrum disorder." Computers in Biology and Medicine 111 (August 2019): 103332. http://dx.doi.org/10.1016/j.compbiomed.2019.103332.

45

Canham, Hugo, and Rejane Williams. "Being black, middle class and the object of two gazes." Ethnicities 17, no. 1 (2016): 23–46. http://dx.doi.org/10.1177/1468796816664752.

Abstract:
The growth of the black middle class in ‘post-apartheid’ South Africa has become the subject of scholarly and public interest. Applying elements of discourse analysis to interview and group discussion based data, this article provides a qualitative thematic exploration of two pressures that confront a group of black middle-class professionals residing in Johannesburg, South Africa. The first pressure is the experience of being black under the hegemonic white gaze, and the second is the experience of the marshalling black gaze. The complexities of occupying the positions of being black and middle class, and of living with the scrutiny of two gazes concurrently, are explored. The findings suggest that the white gaze persists in seeking to negatively mark and destabilise black professionals, profiting off covert and paradoxical mobilisations of race discourses as a means of bolstering whiteness. On the other hand, the black gaze serves to police the boundaries of what acceptable blackness is. Under this gaze, the professional black middle class is perceived as having sold out to whiteness and abandoned given conceptions of blackness. The tensions arising out of navigating these dialectical disciplining gazes suggest that this group holds the tenuous position of being corralled from the ‘outside’ and ‘inside.’ The research, however, reveals the complex ways in which racialisation continues to shape black lives alongside the less rigid identity possibilities for blackness that move beyond essentialised identity performances.
46

Niehorster, Diederick C., Thiago Santini, Roy S. Hessels, Ignace T. C. Hooge, Enkelejda Kasneci, and Marcus Nyström. "The impact of slippage on the data quality of head-worn eye trackers." Behavior Research Methods 52, no. 3 (2020): 1140–60. http://dx.doi.org/10.3758/s13428-019-01307-0.

Abstract:
Mobile head-worn eye trackers allow researchers to record eye-movement data as participants freely move around and interact with their surroundings. However, participant behavior may cause the eye tracker to slip on the participant’s head, potentially strongly affecting data quality. To investigate how this eye-tracker slippage affects data quality, we designed experiments in which participants mimic behaviors that can cause a mobile eye tracker to move. Specifically, we investigated data quality when participants speak, make facial expressions, and move the eye tracker. Four head-worn eye-tracking setups were used: (i) Tobii Pro Glasses 2 in 50 Hz mode, (ii) SMI Eye Tracking Glasses 2.0 60 Hz, (iii) Pupil-Labs’ Pupil in 3D mode, and (iv) Pupil-Labs’ Pupil with the Grip gaze estimation algorithm as implemented in the EyeRecToo software. Our results show that whereas gaze estimates of the Tobii and Grip remained stable when the eye tracker moved, the other systems exhibited significant errors (0.8–3.1° increase in gaze deviation over baseline) even for the small amounts of glasses movement that occurred during the speech and facial expressions tasks. We conclude that some of the tested eye-tracking setups may not be suitable for investigating gaze behavior when high accuracy is required, such as during face-to-face interaction scenarios. We recommend that users of mobile head-worn eye trackers perform similar tests with their setups to become aware of their characteristics. This will enable researchers to design experiments that are robust to the limitations of their particular eye-tracking setup.
47

Källström, Lisa. "Midsommar i sagolandet: Bildretorik och rörelse." Rhetorica Scandinavica 22, no. 78 (2018): 110–20. http://dx.doi.org/10.52610/duvr9519.

Abstract:
When watching the world around us we are not neutral, but co-creating. Seeing a photograph in an everyday context is not just a matter of taking in sensory data, but of making meaning based on what we think we know about the world. The insight that the gaze is not neutral has political implications, because it implies that the meaning of what is watched is not given. In the power game about the gaze, the question of whose gaze gets to be primary becomes central. The gaze is embodied and dependent on memory to become rhetorically effective. These dynamics and their subversive implications are illustrated in this article by a reading of a fashion photograph from a German women’s magazine. In the context of a special issue on holidaying in Sweden, a small party of people is posing in presumably Swedish scenery, dressed in traditional German clothes. By highlighting the polysemy of this immediately inconspicuous photo, attention is drawn to factors that tend to be overlooked in a rational and argumentative approach to visual rhetoric.
48

Scherf, K. Suzanne, Jason W. Griffin, Brian Judy, et al. "Improving sensitivity to eye gaze cues in autism using serious game technology: study protocol for a phase I randomised controlled trial." BMJ Open 8, no. 9 (2018): e023682. http://dx.doi.org/10.1136/bmjopen-2018-023682.

Abstract:
Introduction: Autism spectrum disorder (ASD) is characterised by impairments in social communication. Core symptoms are deficits in social looking behaviours, including limited visual attention to faces and sensitivity to eye gaze cues. We designed an intervention game using serious game mechanics for adolescents with ASD. It is designed to train individuals with ASD to discover that the eyes, and shifts in gaze specifically, provide information about the external world. We predict that the game will increase understanding of gaze cues and attention to faces. Methods and analysis: The Social Games for Adolescents with Autism (SAGA) trial is a preliminary, randomised controlled trial comparing the intervention game with a waitlist control condition. 34 adolescents (10–18 years) with ASD, with a Full-Scale IQ between 70 and 130 and a minimum second grade reading level, and their parents, will be randomly assigned (equally to intervention or the control condition) following baseline assessments. Intervention participants will be instructed to play the game at home on a computer for ~30 min, three times a week. All families are tested in the lab at baseline and approximately 2 months following randomisation on all measures. Primary outcomes are assessed with eye tracking to measure sensitivity to eye gaze cues and social visual attention to faces; secondary outcomes are assessed with questionnaires to measure social skills and autism-like behaviours. The analyses will focus on evaluating the feasibility, safety, and preliminary effectiveness of the intervention. Ethics and dissemination: SAGA is approved by the Institutional Review Board at Pennsylvania State University (00005097). Findings will be disseminated via scientific conferences and peer-reviewed journals and to participants via newsletter. The intervention game will be available to families in the control condition after the full data are collected and if analyses indicate that it is effective. Trial registration number: NCT02968225.
49

King, Andrew J., Gregory F. Cooper, Gilles Clermont, et al. "Leveraging Eye Tracking to Prioritize Relevant Medical Record Data: Comparative Machine Learning Study." Journal of Medical Internet Research 22, no. 4 (2020): e15876. http://dx.doi.org/10.2196/15876.

Abstract:
Background: Electronic medical record (EMR) systems capture large amounts of data per patient and present that data to physicians with little prioritization. Without prioritization, physicians must mentally identify and collate relevant data, an activity that can lead to cognitive overload. To mitigate cognitive overload, a Learning EMR (LEMR) system prioritizes the display of relevant medical record data. Relevant data are those that are pertinent to a context—defined as the combination of the user, clinical task, and patient case. To determine which data are relevant in a specific context, a LEMR system uses supervised machine learning models of physician information-seeking behavior. Since obtaining information-seeking behavior data via manual annotation is slow and expensive, automatic methods for capturing such data are needed. Objective: The goal of the research was to propose and evaluate eye tracking as a high-throughput method to automatically acquire physician information-seeking behavior useful for training models for a LEMR system. Methods: Critical care medicine physicians reviewed intensive care unit patient cases in an EMR interface developed for the study. Participants manually identified patient data that were relevant in the context of a clinical task: preparing a patient summary to present at morning rounds. We used eye tracking to capture each physician’s gaze dwell time on each data item (e.g., blood glucose measurements). Manual annotations and gaze dwell times were used to define target variables for developing supervised machine learning models of physician information-seeking behavior. We compared the performance of manual selection and gaze-derived models on an independent set of patient cases. Results: A total of 68 pairs of manual selection and gaze-derived machine learning models were developed from training data and evaluated on an independent evaluation data set. A paired Wilcoxon signed-rank test showed similar performance of manual selection and gaze-derived models on area under the receiver operating characteristic curve (P=.40). Conclusions: We used eye tracking to automatically capture physician information-seeking behavior and used it to train models for a LEMR system. The models that were trained using eye tracking performed like models that were trained using manual annotations. These results support further development of eye tracking as a high-throughput method for training clinical decision support systems that prioritize the display of relevant medical record data.
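The gaze-derived training target described above is dwell time per record item; a minimal sketch, assuming gaze samples already hit-tested against screen regions (the sample format is an assumption):

    from collections import defaultdict

    def dwell_time_per_item(samples):
        # samples: list of (timestamp_seconds, item_id or None), ordered in time
        totals = defaultdict(float)
        for (t0, item), (t1, _) in zip(samples, samples[1:]):
            if item is not None:
                totals[item] += t1 - t0
        return dict(totals)

    samples = [(0.00, "blood_glucose"), (0.25, "blood_glucose"),
               (0.50, None), (0.75, "heart_rate"), (1.00, "heart_rate")]
    print(dwell_time_per_item(samples))  # blood_glucose: 0.5 s, heart_rate: 0.25 s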
50

Zhang, Ruohan, Calen Walshe, Zhuode Liu, et al. "Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 6811–20. http://dx.doi.org/10.1609/aaai.v34i04.6161.

Abstract:
Large-scale public datasets have been shown to benefit research in multiple areas of modern artificial intelligence. For decision-making research that requires human data, high-quality datasets serve as important benchmarks to facilitate the development of new methods by providing a common reproducible standard. Many human decision-making tasks require visual attention to obtain high levels of performance. Therefore, measuring eye movements can provide a rich source of information about the strategies that humans use to solve decision-making tasks. Here, we provide a large-scale, high-quality dataset of human actions with simultaneously recorded eye movements while humans play Atari video games. The dataset consists of 117 hours of gameplay data from a diverse set of 20 games, with 8 million action demonstrations and 328 million gaze samples. We introduce a novel form of gameplay, in which the human plays in a semi-frame-by-frame manner. This leads to near-optimal game decisions and game scores that are comparable to or better than known human records. We demonstrate the usefulness of the dataset through two simple applications: predicting human gaze and imitating human demonstrated actions. The quality of the data leads to promising results in both tasks. Moreover, using a learned human gaze model to inform imitation learning leads to a 115% increase in game performance. We interpret these results as highlighting the importance of incorporating human visual attention in models of decision making and demonstrating the value of the current dataset to the research community. We hope that the scale and quality of this dataset can provide more opportunities to researchers in the areas of visual attention, imitation learning, and reinforcement learning.