
Journal articles on the topic 'Blink recognition'


Consult the top 50 journal articles for your research on the topic 'Blink recognition.'


1

Rogalska, Anna, Filip Rynkiewicz, Marcin Daszuta, Krzysztof Guzek, and Piotr Napieralski. "Blinking Extraction in Eye gaze System for Stereoscopy Movies." Open Physics 17, no. 1 (September 21, 2019): 512–18. http://dx.doi.org/10.1515/phys-2019-0053.

Abstract:
The aim of this paper is to present methods for human eye blink recognition. The main function of blinking is to spread tears across the eye and remove irritants from the surface of the cornea and conjunctiva. Blinking can be associated with internal memory processing, fatigue, or activation of the central nervous system. There are currently many methods for automatic blink detection. The most reliable rely on EOG or EEG signals; these methods, however, reduce the comfort of the examined person. This paper presents a method to detect blinks with an eye-tracker device, for which many blink detection methods already exist. Two popular eye-trackers were tested in this paper, and a method for improving detection efficiency was proposed.
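The abstract does not detail the eye-tracker method. As a rough, hypothetical sketch, blinks in eye-tracker output are often found as short runs of lost pupil samples; the function name, thresholds, and synthetic trace below are invented for illustration:

```python
import numpy as np

def detect_blinks(pupil, fs, min_dur=0.05, max_dur=0.5):
    """Detect blinks as runs of lost pupil samples (NaN or <= 0).

    pupil : 1-D array of pupil-diameter samples from the eye tracker
    fs    : sampling rate in Hz
    Returns a list of (start_index, end_index) pairs, end exclusive.
    """
    lost = ~np.isfinite(pupil) | (np.nan_to_num(pupil) <= 0)
    edges = np.diff(lost.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if lost[0]:
        starts = np.r_[0, starts]
    if lost[-1]:
        ends = np.r_[ends, lost.size]
    blinks = []
    for s, e in zip(starts, ends):
        dur = (e - s) / fs
        if min_dur <= dur <= max_dur:   # reject jitter and long tracking losses
            blinks.append((int(s), int(e)))
    return blinks

# Synthetic 1-second trace at 100 Hz with one 120 ms blink.
fs = 100
trace = np.full(fs, 4.0)
trace[40:52] = np.nan
print(detect_blinks(trace, fs))         # [(40, 52)]
```

Real recordings would additionally need interpolation around the lost samples and merging of runs separated by single spurious frames.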
2

Stankevich, Lev A., Sabina S. Amanbaeva, and Aleksandr V. Samochadin. "User Authentication by Electroencephalographic Signals when Blinking." Computer tools in education, no. 3 (September 30, 2019): 52–69. http://dx.doi.org/10.32603/2071-2340-2019-3-52-69.

Abstract:
The article presents the results of a study on applying electroencephalography (EEG) to human authentication. An algorithm for EEG authentication based on blinks has been developed and described. Authentication is carried out from a single blink, which takes 2–5 seconds. The data are collected using a Muse electroencephalograph. Data preprocessing includes a wavelet transform and blink detection. Geometric characteristics of the EEG signals are used as features, and recognition is performed by a Random Forest classifier. According to the test results, the rate of correct authentication was 95%. Background authentication is also possible. The implemented system may be used to authenticate students in distance education.
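The paper's exact feature set is not given here; the following sketch only illustrates the general shape of such a pipeline (per-blink geometric features fed to a Random Forest), using scikit-learn, simulated data, and invented numbers:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def blink_features(amplitude, rise, fall):
    """Toy 'geometric' blink descriptors: peak amplitude, rise and fall
    times, and their ratio (hypothetical choices, not the paper's)."""
    return [amplitude, rise, fall, rise / fall]

def simulate(n, amp, rise, fall):
    """Simulate n blinks for one user whose blinks have a characteristic shape."""
    return [blink_features(amp + rng.normal(0, .05),
                           rise + rng.normal(0, .01),
                           fall + rng.normal(0, .01)) for _ in range(n)]

# Two users whose blink shapes differ slightly (invented parameters).
X = np.array(simulate(100, 1.0, .10, .20) + simulate(100, 1.3, .14, .18))
y = np.array([0] * 100 + [1] * 100)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[::2], y[::2])
print(clf.score(X[1::2], y[1::2]))   # held-out accuracy (close to 1.0 on this easy toy data)
```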
3

Ren, Peng, Xiaole Ma, Wenjia Lai, Min Zhang, Shengnan Liu, Ying Wang, Min Li, et al. "Comparison of the Use of Blink Rate and Blink Rate Variability for Mental State Recognition." IEEE Transactions on Neural Systems and Rehabilitation Engineering 27, no. 5 (May 2019): 867–75. http://dx.doi.org/10.1109/tnsre.2019.2906371.

4

Jackson, M. C., and J. E. Raymond. "Familiarity effects on face recognition in the attentional blink." Journal of Vision 3, no. 9 (March 18, 2010): 817. http://dx.doi.org/10.1167/3.9.817.

5

Wu, Junwen, and Mohan M. Trivedi. "An eye localization, tracking and blink pattern recognition system." ACM Transactions on Multimedia Computing, Communications, and Applications 6, no. 2 (March 2010): 1–23. http://dx.doi.org/10.1145/1671962.1671964.

6

Borza, Diana, Razvan Itu, and Radu Danescu. "In the Eye of the Deceiver: Analyzing Eye Movements as a Cue to Deception." Journal of Imaging 4, no. 10 (October 16, 2018): 120. http://dx.doi.org/10.3390/jimaging4100120.

Abstract:
Deceit occurs in daily life and, even from an early age, children can successfully deceive their parents. Therefore, numerous books and psychological studies have been published to help people decipher the facial cues to deceit. In this study, we tackle the problem of deceit detection by analyzing eye movements: blinks, saccades and gaze direction. Recent psychological studies have shown that the non-visual saccadic eye movement rate is higher when people lie. We propose a fast and accurate framework for eye tracking and eye movement recognition and analysis. The proposed system tracks the position of the iris, as well as the eye corners (the outer shape of the eye). Next, in an offline analysis stage, the trajectory of these eye features is analyzed in order to recognize and measure various cues which can be used as indicators of deception: the blink rate, the gaze direction and the saccadic eye movement rate. On the task of iris center localization, the method achieves localization within the pupil in 91.47% of the cases. For blink localization, we obtained an accuracy of 99.3% on the difficult EyeBlink8 dataset. In addition, we propose a novel metric, the normalized blink rate deviation, to spot deceitful behavior based on the blink rate. Using this metric and a simple decision stump, the deceitful answers from the Silesian Face database were recognized with an accuracy of 96.15%.
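The paper's exact definition of the normalized blink rate deviation is not reproduced in the abstract; one plausible form, with an invented threshold for the decision stump, would be:

```python
def normalized_blink_rate_deviation(answer_rate, baseline_rate):
    """Deviation of the blink rate during an answer from the subject's
    baseline rate, normalized by the baseline (a hypothetical form; the
    paper's exact definition may differ)."""
    return (answer_rate - baseline_rate) / baseline_rate

def decision_stump(nbrd, threshold=-0.3):
    """Single-threshold classifier: flag an answer as deceitful when the
    blink rate drops well below baseline (the threshold is illustrative)."""
    return nbrd < threshold

baseline = 20.0  # blinks/minute while answering truthfully (made-up figure)
print(decision_stump(normalized_blink_rate_deviation(10.0, baseline)))  # True
print(decision_stump(normalized_blink_rate_deviation(21.0, baseline)))  # False
```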
7

Usakli, Ali Bulent, Ana Susac, and Serkan Gurkan. "Fast face recognition: Eye blink as a reliable behavioral response." Neuroscience Letters 504, no. 1 (October 2011): 49–52. http://dx.doi.org/10.1016/j.neulet.2011.08.055.

8

Bach, Dominik R., Martin Schmidt-Daffy, and Raymond J. Dolan. "Facial expression influences face identity recognition during the attentional blink." Emotion 14, no. 6 (December 2014): 1007–13. http://dx.doi.org/10.1037/a0037945.

9

Wang, Mei, Lin Guo, and Wen-Yuan Chen. "Blink detection using Adaboost and contour circle for fatigue recognition." Computers & Electrical Engineering 58 (February 2017): 502–12. http://dx.doi.org/10.1016/j.compeleceng.2016.09.008.

10

John, Sofia Jennifer, and Sree T. Sharmila. "Real time blink recognition from various head pose using single eye." Multimedia Tools and Applications 77, no. 23 (June 5, 2018): 31331–45. http://dx.doi.org/10.1007/s11042-018-6113-3.

11

Niedeggen, Michael, Ivan Toni, Gereon Fink, Jon Shah, Petra Stoerig, and Karl Zilles. "Covert word recognition following the attentional blink: an ERP-fMRI study." NeuroImage 11, no. 5 (May 2000): S30. http://dx.doi.org/10.1016/s1053-8119(00)90964-9.

12

Korda, Alexandra I., Giorgos Giannakakis, Errikos Ventouras, Pantelis A. Asvestas, Nikolaos Smyrnis, Kostas Marias, and George K. Matsopoulos. "Recognition of Blinks Activity Patterns during Stress Conditions Using CNN and Markovian Analysis." Signals 2, no. 1 (January 23, 2021): 55–71. http://dx.doi.org/10.3390/signals2010006.

Abstract:
This paper investigates eye behaviour through blink activity during stress conditions. Although eye blinking is a semi-voluntary action, it is considered to be affected by one's emotional state, such as arousal or stress. The blinking rate provides information in this direction; however, analysis of the entire eye-aperture timeseries and the corresponding blinking patterns provides richer information on eye behaviour during stress conditions. Thus, two experimental protocols were established to induce affective states (neutral, relaxed and stressed) systematically through a variety of external and internal stressors. The study populations included 24 and 58 participants, respectively, performing 12 experimental affective trials. After the preprocessing phase, the eye-aperture timeseries and the corresponding features were extracted. The behaviour of inter-blink intervals (IBI) was investigated using Markovian analysis to quantify incidence dynamics in sequences of blinks. Moreover, Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models were employed to discriminate stressed versus neutral tasks per cognitive process using the sequence of IBIs. The classification accuracy reached 81.3%, which is very promising considering the unimodal analysis and the noninvasive modality used.
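As an illustration of Markovian analysis over inter-blink intervals (IBI): discretize the intervals into states and estimate a first-order transition matrix. The state definition (quantile bins) and the toy blink times are assumptions, not the paper's:

```python
import numpy as np

def ibi_transition_matrix(blink_times, n_states=3):
    """First-order Markov model of inter-blink intervals (IBI).

    Intervals are discretized into n_states quantile bins ('short',
    'medium', 'long'); entry [i, j] estimates P(next interval in bin j |
    current interval in bin i). Illustrative only.
    """
    ibi = np.diff(np.asarray(blink_times, dtype=float))
    edges = np.quantile(ibi, np.linspace(0, 1, n_states + 1)[1:-1])
    states = np.digitize(ibi, edges)
    T = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):   # count observed transitions
        T[a, b] += 1
    row = T.sum(axis=1, keepdims=True)
    return np.divide(T, row, out=np.zeros_like(T), where=row > 0)

# Toy blink timestamps in seconds.
times = [0.0, 2.1, 4.0, 4.5, 9.0, 9.4, 11.5, 13.0, 13.3, 18.0]
T = ibi_transition_matrix(times)
print(T.round(2))   # each nonempty row sums to 1
```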
13

Haroush, Keren, and Shaul Hochstein. "Stop, blink and listen." Multisensory Research 26, no. 1-2 (2013): 82. http://dx.doi.org/10.1163/22134808-000s0056.

14

Griffiths, G., A. Herwig, and W. X. Schneider. "Stimulus localization interferes with stimulus recognition: Evidence from an attentional blink paradigm." Journal of Vision 13, no. 7 (June 11, 2013): 7. http://dx.doi.org/10.1167/13.7.7.

15

Lamba, Puneet Singh, Deepali Virmani, and Oscar Castillo. "Multimodal human eye blink recognition method using feature level fusion for exigency detection." Soft Computing 24, no. 22 (May 7, 2020): 16829–45. http://dx.doi.org/10.1007/s00500-020-04979-5.

16

Bakunah, Raed Awadh, and Saeed Mohammed Baneamoon. "A hybrid technique for intelligent bank security system based on blink gesture recognition." Journal of Physics: Conference Series 1962, no. 1 (July 1, 2021): 012001. http://dx.doi.org/10.1088/1742-6596/1962/1/012001.

17

Korsun, Oleg, and Vladimir Yurko. "Convolutional neural networks emotion recognition and blink characteristics analysis for operator state estimation." Procedia Computer Science 186 (2021): 293–98. http://dx.doi.org/10.1016/j.procs.2021.04.148.

18

Gopalakrishna, K., and S. A. Hariprasad. "Real-Time Fatigue Analysis of Driver through Iris Recognition." International Journal of Electrical and Computer Engineering (IJECE) 7, no. 6 (December 1, 2017): 3306. http://dx.doi.org/10.11591/ijece.v7i6.pp3306-3312.

Abstract:
Driver error currently accounts for about 77.5% of the road accidents that happen every day. There are several methods for driver fatigue detection, based on eye-ball movement using an eye-blink sensor, heartbeat measurement using an electrocardiogram (ECG), mental-state analysis using an electroencephalogram (EEG), muscle-cramp detection, etc. However, these methods are complicated, create inconvenience for the driver, and are less accurate. In this work, an accurate method is adopted to detect driver fatigue from the status of the eyes using iris recognition, and the results show that the proposed method is more accurate (about 80%) than existing methods such as the eye-blink sensor method.
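The abstract does not specify how eye status maps to a fatigue score. A common related measure (used here as a hypothetical stand-in, not the paper's method) is PERCLOS, the fraction of frames in which the eye is closed:

```python
import numpy as np

def perclos(open_flags):
    """PERCLOS-style fatigue score: fraction of frames with the eye closed.
    open_flags is a per-frame boolean sequence (True = eye open), e.g. the
    output of an iris-visibility check."""
    open_flags = np.asarray(open_flags, dtype=bool)
    return 1.0 - open_flags.mean()

# Eye closed in 10 of 100 frames.
frames = [True] * 90 + [False] * 10
score = perclos(frames)
print(round(score, 2))                          # 0.1
print("fatigued" if score > 0.15 else "alert")  # alert (threshold is illustrative)
```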
19

Miranda, Michael Gabriel, Renato Alberto Salinas, Ulrich Raff, and Oscar Magna. "Wavelet Design for Automatic Real-Time Eye Blink Detection and Recognition in EEG Signals." International Journal of Computers Communications & Control 14, no. 3 (May 31, 2019): 375–87. http://dx.doi.org/10.15837/ijccc.2019.3.3516.

Abstract:
The blinking of an eye can be detected in electroencephalographic (EEG) recordings and can be understood as a useful control signal in some information processing tasks. The detection of a specific pattern associated with the blinking of an eye in real time using EEG signals of a single channel has been analyzed. This study considers both theoretical and practical principles enabling the design and implementation of a system capable of precise real-time detection of eye blinks within the EEG signal. This signal or pattern is subject to considerable scale changes and multiple incidences. In our proposed approach, a new wavelet was designed to improve the detection and localization of the eye blinking signal. The detection of multiple occurrences of the blinking perturbation in the recordings performed in real-time operation is achieved with a window giving a time-limited projection of an ongoing analysis of the sampled EEG signal.
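The shape of the purpose-designed wavelet is not given in the abstract. As a toy stand-in, a zero-mean blink-shaped kernel correlated against a synthetic single-channel EEG trace illustrates the matched-filter idea behind wavelet-based detection (all signal parameters below are invented):

```python
import numpy as np

fs = 256                                    # sampling rate in Hz

def blink_kernel(width=0.4):
    """Zero-mean, unit-norm blink-shaped kernel (a Hann bump); a stand-in
    for the purpose-designed wavelet in the paper."""
    k = np.hanning(int(width * fs))
    k -= k.mean()
    return k / np.linalg.norm(k)

# 4 s of background EEG (uV) with one 80 uV blink deflection at t = 2 s.
rng = np.random.default_rng(1)
eeg = rng.normal(0, 5, 4 * fs)
center = 2 * fs
bump = 80 * np.hanning(int(0.4 * fs))
eeg[center - bump.size // 2 : center - bump.size // 2 + bump.size] += bump

# Correlation peaks where the kernel aligns with the blink.
score = np.correlate(eeg, blink_kernel(), mode="same")
print(abs(int(np.argmax(score)) - center) < fs // 10)   # True: peak lands near the blink
```

A real detector would threshold the correlation at several scales to handle the "considerable scale changes" the abstract mentions.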
20

Roy, Raphaëlle N., Sylvie Charbonnier, and Stéphane Bonnet. "Eye blink characterization from frontal EEG electrodes using source separation and pattern recognition algorithms." Biomedical Signal Processing and Control 14 (November 2014): 256–64. http://dx.doi.org/10.1016/j.bspc.2014.08.007.

21

Raymond, Jane E., and Jennifer L. O'Brien. "Selective Visual Attention and Motivation." Psychological Science 20, no. 8 (August 2009): 981–88. http://dx.doi.org/10.1111/j.1467-9280.2009.02391.x.

Abstract:
Learning to associate the probability and value of behavioral outcomes with specific stimuli (value learning) is essential for rational decision making. However, in demanding cognitive conditions, access to learned values might be constrained by limited attentional capacity. We measured recognition of briefly presented faces seen previously in a value-learning task involving monetary wins and losses; the recognition task was performed both with and without constraints on available attention. Regardless of available attention, recognition was substantially enhanced for motivationally salient stimuli (i.e., stimuli highly predictive of outcomes), compared with equally familiar stimuli that had weak or no motivational salience, and this effect was found regardless of valence (win or loss). However, when attention was constrained (because stimuli were presented during an attentional blink, AB), valence determined recognition; win-associated faces showed no AB, but all other faces showed large ABs. Motivational salience acts independently of attention to modulate simple perceptual decisions, but when attention is limited, visual processing is biased in favor of reward-associated stimuli.
22

Hendler-Neumark, Adi, and Gili Bisker. "Fluorescent Single-Walled Carbon Nanotubes for Protein Detection." Sensors 19, no. 24 (December 7, 2019): 5403. http://dx.doi.org/10.3390/s19245403.

Abstract:
Nanosensors have a central role in recent approaches to molecular recognition in applications like imaging, drug delivery systems, and phototherapy. Fluorescent nanoparticles are particularly attractive for such tasks owing to their emission signal that can serve as optical reporter for location or environmental properties. Single-walled carbon nanotubes (SWCNTs) fluoresce in the near-infrared part of the spectrum, where biological samples are relatively transparent, and they do not photobleach or blink. These unique optical properties and their biocompatibility make SWCNTs attractive for a variety of biomedical applications. Here, we review recent advancements in protein recognition using SWCNTs functionalized with either natural recognition moieties or synthetic heteropolymers. We emphasize the benefits of the versatile applicability of the SWCNT sensors in different systems ranging from single-molecule level to in-vivo sensing in whole animal models. Finally, we discuss challenges, opportunities, and future perspectives.
23

Dux, P. E., and I. M. Harris. "Object orientation and the attentional blink: Tests of a two-stage model of object recognition." Journal of Vision 4, no. 8 (August 1, 2004): 505. http://dx.doi.org/10.1167/4.8.505.

24

Jang, Seok-Woo, and Byeongtae Ahn. "Implementation of Detection System for Drowsy Driving Prevention Using Image Recognition and IoT." Sustainability 12, no. 7 (April 10, 2020): 3037. http://dx.doi.org/10.3390/su12073037.

Abstract:
In recent years, the casualties of traffic accidents caused by driving cars have been gradually increasing. In particular, there are more serious injuries and deaths than minor injuries, and the damage due to major accidents is increasing. In particular, heavy cargo trucks and high-speed bus accidents that occur during driving in the middle of the night have emerged as serious social problems. Therefore, in this study, a drowsiness prevention system was developed to prevent large-scale disasters caused by traffic accidents. In this study, machine learning was applied to predict drowsiness and improve drowsiness prediction using facial recognition technology and eye-blink recognition technology. Additionally, a CO2 sensor chip was used to detect additional drowsiness. Speech recognition technology can also be used to apply Speech to Text (STT), allowing a driver to request their desired music or make a call to avoid drowsiness while driving.
25

Tan, Huachun, and Yu-Jin Zhang. "Detecting eye blink states by tracking iris and eyelids." Pattern Recognition Letters 27, no. 6 (April 2006): 667–75. http://dx.doi.org/10.1016/j.patrec.2005.10.005.

26

Tamura, Hiroki, Mingmin Yan, Keiko Sakurai, and Koichi Tanno. "EOG-sEMG Human Interface for Communication." Computational Intelligence and Neuroscience 2016 (2016): 1–10. http://dx.doi.org/10.1155/2016/7354082.

Abstract:
The aim of this study is to present electrooculogram (EOG) and surface electromyogram (sEMG) signals that can be used as a human-computer interface. Establishing an efficient alternative channel for communication without overt speech and hand movements is important for increasing the quality of life of patients suffering from amyotrophic lateral sclerosis, muscular dystrophy, or other illnesses. In this paper, we propose an EOG-sEMG human-computer interface system for communication using both cross-channels and parallel-line channels on the face with the same electrodes. This system records EOG and sEMG signals simultaneously as a "dual modality" for pattern recognition. Although as many as four patterns could be recognized, in view of the patients' condition we chose only two classes of EOG (left and right motion) and two classes of sEMG (left blink and right blink), which are easy to realize for the simulation and monitoring task. From the simulation results, our system achieved four-pattern classification with an accuracy of 95.1%.
27

Kilburn, Kaye H., and Raphael H. Warshaw. "Effects on Neurobehavioral Performance of Chronic Exposure to Chemically Contaminated Well Water." Toxicology and Industrial Health 9, no. 3 (May 1993): 391–404. http://dx.doi.org/10.1177/074823379300900301.

Abstract:
Occupational exposure to trichloroethylene (TCE) and other solvents impairs neurobehavioral performance. Use of well water contaminated with TCE and solvents has been associated with excessive symptoms, cancers, birth defects and impaired blink reflex. We extended these observations by measuring the neurophysiological (NPH) and neuropsychological (NPS) status of subjects who used water contaminated with 6 to 500 ppb of TCE for 1 to 25 years. The 170 well-water-exposed subjects, who resided in southwest Tucson, Arizona, overlying the Santa Cruz River aquifer, were compared to 68 referent subjects on NPH and NPS tests. Also, 113 histology technicians (HT) were referents for blink reflex latency only. Affective status was assayed by the Profile of Mood States (POMS). Exposed subjects were statistically significantly impaired compared to referents on NPH tests. These impairments included sway speed with eyes open and closed, blink reflex latency (R-1), eye closure speed, and two-choice visual reaction time. NPS status was statistically significantly impaired for Culture Fair (intelligence) scores, recall of stories, visual recall, digit span, block design, recognition of fingertip numbers, grooved pegboard, and Trail Making A and B. POMS scores were elevated. Prolonged residential exposure to well water containing TCE at lower levels than occupational exposures, but without time away from exposure for metabolism and excretion of toxins, was associated with neurobehavioral impairment.
28

Marx, S., O. Hansen-Goos, M. Thrun, and W. Einhauser. "Rapid serial processing of natural scenes: Color modulates detection but neither recognition nor the attentional blink." Journal of Vision 14, no. 14 (December 16, 2014): 4. http://dx.doi.org/10.1167/14.14.4.

29

Wang, Shu, Qing Wang, and Hong Chen. "Research and Application of Eye Movement Interaction based on Eye Movement Recognition." MATEC Web of Conferences 246 (2018): 03038. http://dx.doi.org/10.1051/matecconf/201824603038.

Abstract:
Generally, human-computer interaction is interaction between users and machine hardware: the user submits instructions to the machine, and the machine outputs the processed data and results back to the user. Mouse, keyboard, etc. are common input channels. With the maturity of eye-tracking technology and the miniaturization of equipment, turning eye movements into a human-computer interaction input channel has become a hot spot in the field. Therefore, this paper analysed the physiological characteristics of eye movement, proposed design principles and a framework for eye-movement interaction, and designed three eye-movement recognition algorithms: fixation, saccade, and blink. On this basis, using the Unity 3D cross-platform engine as a development tool, a children's attention-training game based on eye-movement interaction was designed. The game combines eye-movement interaction with an attention-training mode, simplifies the control of the game, provides immediate attention feedback, achieves a better training effect, and improves the efficiency of human-computer interaction.
30

Tyson-Parry, Maree M., Jessica Sailah, Mark E. Boyes, and Nicholas A. Badcock. "The attentional blink is related to phonemic decoding, but not sight-word recognition, in typically reading adults." Vision Research 115 (October 2015): 8–16. http://dx.doi.org/10.1016/j.visres.2015.08.001.

31

Luo, Wenping, Jianting Cao, Kousuke Ishikawa, and Dongying Ju. "A Human-Computer Control System Based on Intelligent Recognition of Eye Movements and Its Application in Wheelchair Driving." Multimodal Technologies and Interaction 5, no. 9 (August 28, 2021): 50. http://dx.doi.org/10.3390/mti5090050.

Abstract:
This paper presents a practical human-computer interaction system for wheelchair motion through eye tracking and eye blink detection. In this system, the pupil in the eye image has been extracted after binarization, and the center of the pupil was localized to capture the trajectory of eye movement and determine the direction of eye gaze. Meanwhile, convolutional neural networks for feature extraction and classification of open-eye and closed-eye images have been built, and machine learning was performed by extracting features from multiple individual images of open-eye and closed-eye states for input to the system. As an application of this human-computer interaction control system, experimental validation was carried out on a modified wheelchair and the proposed method proved to be effective and reliable based on the experimental results.
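A minimal sketch of the pupil-localization step the abstract describes (binarize, then take the centroid of the dark region); the threshold and the synthetic eye image below are invented for illustration:

```python
import numpy as np

def pupil_center(gray, thresh=50):
    """Locate the pupil as the centroid of dark pixels after binarization
    (the pupil is the darkest region of the eye image).

    gray : 2-D uint8 grayscale image
    Returns (x, y) in pixel coordinates, or None if nothing is dark enough.
    """
    mask = gray < thresh
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic 40x60 eye image: bright sclera with a dark disc at (30, 20).
img = np.full((40, 60), 200, dtype=np.uint8)
yy, xx = np.ogrid[:40, :60]
img[(xx - 30) ** 2 + (yy - 20) ** 2 <= 7 ** 2] = 10

print(pupil_center(img))   # (30.0, 20.0)
```

Tracking the centroid over frames gives the gaze trajectory; the open/closed-eye CNN classifier mentioned in the abstract would run alongside this step.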
32

OGOSHI, Yasuhiro, Yoshinori MITSUHASHI, Sakiko OGOSHI, Akio NAKAI, Shinya MATSUURA, and Chikahiro ARAKI. "Recognition of Facial Expression Based on Analysis of Resultant Sequential Retinal Outlines over The Course of A Blink." Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 23, no. 2 (2011): 218–27. http://dx.doi.org/10.3156/jsoft.23.218.

33

Xie, Wentao, Qian Zhang, and Jin Zhang. "Acoustic-based Upper Facial Action Recognition for Smart Eyewear." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 2 (June 23, 2021): 1–28. http://dx.doi.org/10.1145/3448105.

Abstract:
Smart eyewear (e.g., AR glasses) is considered to be the next big breakthrough for wearable devices. The interaction of state-of-the-art smart eyewear mostly relies on the touchpad which is obtrusive and not user-friendly. In this work, we propose a novel acoustic-based upper facial action (UFA) recognition system that serves as a hands-free interaction mechanism for smart eyewear. The proposed system is a glass-mounted acoustic sensing system with several pairs of commercial speakers and microphones to sense UFAs. There are two main challenges in designing the system. The first challenge is that the system is in a severe multipath environment and the received signal could have large attenuation due to the frequency-selective fading which will degrade the system's performance. To overcome this challenge, we design an Orthogonal Frequency Division Multiplexing (OFDM)-based channel state information (CSI) estimation scheme that is able to measure the phase changes caused by a facial action while mitigating the frequency-selective fading. The second challenge is that because the skin deformation caused by a facial action is tiny, the received signal has very small variations. Thus, it is hard to derive useful information directly from the received signal. To resolve this challenge, we apply a time-frequency analysis to derive the time-frequency domain signal from the CSI. We show that the derived time-frequency domain signal contains distinct patterns for different UFAs. Furthermore, we design a Convolutional Neural Network (CNN) to extract high-level features from the time-frequency patterns and classify the features into six UFAs, namely, cheek-raiser, brow-raiser, brow-lower, wink, blink and neutral. We evaluate the performance of our system through experiments on data collected from 26 subjects. The experimental result shows that our system can recognize the six UFAs with an average F1-score of 0.92.
34

Koleva-Georgieva, Dessislava Nikolaeva. "Optical Coherence Tomography – Segmentation Performance and Retinal Thickness Measurement Errors." European Ophthalmic Review 06, no. 02 (2012): 78. http://dx.doi.org/10.17925/eor.2012.06.02.78.

Abstract:
Optical coherence tomography (OCT) has become an indispensable tool in the assessment of macular pathology in clinical settings and an integral part of many clinical trials. However, as with any imaging technology, some limitations exist. In this review, the author describes and discusses the various causes that might compromise automated retinal thickness measurements. The segmentation software might perform less accurately in the presence of scan artefacts (e.g. ‘out-of-range’, mirror, blink and motion artefacts), a low signal:noise ratio, dense media opacities and specific retinal pathological features (e.g. pigment epithelial detachment, subretinal fluid, fibrotic tissue, hard exudates and full-thickness macular holes). The awareness of the clinician and the particular search for, and recognition of, measurement errors would improve the accuracy of OCT interpretation and should be an integral part of OCT scan analysis.
35

Chang, Won-Du, and Chang-Hwan Im. "Enhanced Template Matching Using Dynamic Positional Warping for Identification of Specific Patterns in Electroencephalogram." Journal of Applied Mathematics 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/528071.

Abstract:
Template matching is an approach for signal pattern recognition, often used for biomedical signals including electroencephalogram (EEG). Since EEG is often severely contaminated by various physiological or pathological artifacts, identification and rejection of these artifacts with improved template matching algorithms would enhance the overall quality of EEG signals. In this paper, we propose a novel approach to improve the accuracy of conventional template matching methods by adopting the dynamic positional warping (DPW) technique, developed recently for handwriting pattern analysis. To validate the feasibility and superiority of the proposed method, eye-blink artifacts in the EEG signals were detected, and the results were then compared to those from conventional methods. DPW was found to outperform the conventional methods in terms of artifact detection accuracy, demonstrating the power of DPW in identifying specific one-dimensional data patterns.
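Dynamic positional warping itself is not reproduced here; classic dynamic time warping (DTW), shown below as a simpler stand-in, illustrates the same elastic template-matching idea on a toy blink-artifact template:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between 1-D sequences;
    a simpler relative of the dynamic positional warping in the paper."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.hanning(20)   # idealized blink-artifact shape
blink = np.hanning(26)      # the same shape, stretched in time
flat = np.zeros(26)         # an artifact-free segment

# The warped match to a time-stretched blink beats the flat segment.
print(dtw_distance(template, blink) < dtw_distance(template, flat))   # True
```

Sliding such a distance along the EEG and thresholding it yields a simple artifact detector; the paper's DPW additionally penalizes positional (vertical) shifts.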
36

Rau, Pei-Luen Patrick, Jian Zheng, Lijun Wang, Jingyu Zhao, and Dangxiao Wang. "Haptic and Auditory–Haptic Attentional Blink in Spatial and Object-Based Tasks." Multisensory Research 33, no. 3 (July 1, 2020): 295–312. http://dx.doi.org/10.1163/22134808-20191483.

Abstract:
Abstract Dual-task performance depends on both modalities (e.g., vision, audition, haptics) and task types (spatial or object-based), and the order by which different task types are organized. Previous studies on haptic and especially auditory–haptic attentional blink (AB) are scarce, and the effect of task types and their order have not been fully explored. In this study, 96 participants, divided into four groups of task type combinations, identified auditory or haptic Target 1 (T1) and haptic Target 2 (T2) in rapid series of sounds and forces. We observed a haptic AB (i.e., the accuracy of identifying T2 increased with increasing stimulus onset asynchrony between T1 and T2) in spatial, object-based, and object–spatial tasks, but not in spatial–object task. Changing the modality of an object-based T1 from haptics to audition eliminated the AB, but similar haptic-to-auditory change of the modality of a spatial T1 had no effect on the AB (if it exists). Our findings fill a gap in the literature regarding the auditory–haptic AB, and substantiate the importance of modalities, task types and their order, and the interaction between them. These findings were explained by how the cerebral cortex is organized for processing spatial and object-based information in different modalities.
37

Patel, Rajesh, K. Gireesan, and S. Sengottuvel. "Decoding non-linearity for effective extraction of the eye-blink artifact pattern from EEG recordings." Pattern Recognition Letters 139 (November 2020): 42–49. http://dx.doi.org/10.1016/j.patrec.2018.01.022.

38

ODAKA, Yoshiyuki, Ryuichi YOKOGAWA, and Hiroshi SHIBATA. "510 The measurement system of posture of face by recognition of feature point and blink detection system by extraction of iris." Proceedings of Conference of Kansai Branch 2006.81 (2006): 5–10. http://dx.doi.org/10.1299/jsmekansai.2006.81._5-10_.

39

Kim, Kyung H., and Ji S. Lee. "CQ: Creativity quotient for climates, attitudes, and thinking skills with eye-tracking." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 233, no. 2 (August 12, 2018): 465–75. http://dx.doi.org/10.1177/0954406218780541.

Full text
Abstract:
This article examined a new creativity test designed for engineers, CQ: Creativity Quotient for Climates, Attitudes, and Thinking skills with Eye-Tracking. The creativity quotient expanded and enhanced both the figural and verbal Torrance Tests of Creative Thinking skills. Creativity quotient added new, more comprehensive measures of creative climates, attitudes, and thinking skills that comprise Kim’s creative climates, attitudes, and thinking skills model of creativity. Additionally, its patented online eye-tracking technology assesses test-takers’ creative-attitude and thinking-skill tendencies by tracking the changes in test-takers’ pupil diameters, eye-blink frequency, micro-saccade rates, fixation durations or curves, and smooth-pursuit movements. Finally, the creativity quotient assesses creative thinking skills using pattern-recognition technology to instantly and objectively analyze and score test-takers’ drawings, which previously required trained human scorers. Upon completion of the creativity quotient, test-takers receive a detailed, comprehensive, itemized report about the strengths and weaknesses of their climates, attitudes, and thinking skills along with individualized advice on how to enhance their creativity to achieve an innovation.
APA, Harvard, Vancouver, ISO, and other styles
40

Dent, Kevin, and Geoff G. Cole. "Gatecrashing the visual cocktail party: How visual and semantic similarity modulate the own name benefit in the attentional blink." Quarterly Journal of Experimental Psychology 72, no. 5 (June 5, 2018): 1102–11. http://dx.doi.org/10.1177/1747021818778694.

Full text
Abstract:
The “visual cocktail party effect” refers to superior report of a participant’s own name, under conditions of inattention. An early selection account suggests this advantage stems from enhanced visual processing. A late selection account suggests the advantage occurs when semantic information allowing identification as one’s own name is retrieved. In the context of inattentional blindness (IB), Mack and Rock showed that the advantage does not generalise to a minor modification of a participant’s own name, despite extensive visual similarity, supporting the late selection account. This study applied the name modification manipulation in the context of the attentional blink (AB). Participants were presented with rapid streams of names and identified a white target name, while also reporting the presence of one of two possible probes. The probe names appeared either close (the third item following the target: Lag 3) or far in time from the target (the eighth item following the target: Lag 8). The results revealed a robust AB; reports of the probe were reduced at Lag 3 relative to Lag 8. The AB was also greatly reduced for the own name compared to another name—a visual cocktail party effect. In contrast to the findings of Mack and Rock for IB, the reduced AB extended to the modified own name. The results suggest different loci for the visual cocktail party effect in the AB (word recognition) compared to IB (semantic processing).
APA, Harvard, Vancouver, ISO, and other styles
41

Li, Ting, Jinhua Zhang, Tao Xue, and Baozeng Wang. "Development of a Novel Motor Imagery Control Technique and Application in a Gaming Environment." Computational Intelligence and Neuroscience 2017 (2017): 1–16. http://dx.doi.org/10.1155/2017/5863512.

Full text
Abstract:
We present a methodology for a hybrid brain-computer interface (BCI) system, with the recognition of motor imagery (MI) based on EEG and blink EOG signals. We tested the BCI system in a 3D Tetris and an analogous 2D game playing environment. To enhance player’s BCI control ability, the study focused on feature extraction from EEG and control strategy supporting Game-BCI system operation. We compared the numerical differences between spatial features extracted with common spatial pattern (CSP) and the proposed multifeature extraction. To demonstrate the effectiveness of 3D game environment at enhancing player’s event-related desynchronization (ERD) and event-related synchronization (ERS) production ability, we set the 2D Screen Game as the comparison experiment. According to a series of statistical results, the group performing MI in the 3D Tetris environment showed more significant improvements in generating MI-associated ERD/ERS. Analysis results of game-score indicated that the players’ scores presented an obvious uptrend in 3D Tetris environment but did not show an obvious downward trend in 2D Screen Game. It suggested that the immersive and rich-control environment for MI would improve the associated mental imagery and enhance MI-based BCI skills.
APA, Harvard, Vancouver, ISO, and other styles
42

Hatture, Sanjeeva Kumar M., and Shweta Policepatil. "Masquerade Attack Analysis for Secured Face Biometric System." International Journal of Recent Technology and Engineering (IJRTE) 10, no. 2 (July 30, 2021): 225–32. http://dx.doi.org/10.35940/ijrte.b6309.0710221.

Full text
Abstract:
Biometric systems are mostly used to establish an automated way of validating or recognising a person's identity based on physiological and behavioural features. Nowadays, biometric systems have become a trend in personal identification for security purposes in various fields such as online banking, e-payment, organizations, and institutions. Face is the second most widely used biometric trait for unique identification, fingerprint being the first. However, face recognition systems are susceptible to spoof attacks made with non-real faces, mainly known as masquerade attacks. A masquerade attack is performed using an authorized user's artefact biometric data, which may be an artificial facial mask, a face or iris photo, or a latex finger. This type of attack has become a central problem for liveness detection. To prevent such spoofing attacks, we propose liveness detection of the face based on countermeasures and texture analysis, together with a hybrid approach that combines both passive and active liveness detection. Our proposed approach achieves an accuracy of 99.33 percent for face anti-spoofing detection. We also performed active liveness detection by having the user perform several tasks (turn face left, turn face right, blink eyes, etc.) in front of a live camera.
APA, Harvard, Vancouver, ISO, and other styles
43

Crameri, Fabio. "Geodynamic diagnostics, scientific visualisation and StagLab 3.0." Geoscientific Model Development 11, no. 6 (June 29, 2018): 2541–62. http://dx.doi.org/10.5194/gmd-11-2541-2018.

Full text
Abstract:
Abstract. Today's geodynamic models can, often do and sometimes have to become very complex. Their underlying, increasingly elaborate numerical codes produce a growing amount of raw data. Post-processing such data is therefore becoming more and more important, but also more challenging and time-consuming. In addition, visualising processed data and results has, in times of coloured figures and a wealth of half-scientific software, become one of the weakest pillars of science, widely mistreated and ignored. Efficient and automated geodynamic diagnostics and sensible scientific visualisation preventing common pitfalls is thus more important than ever. Here, a collection of numerous diagnostics for plate tectonics and mantle dynamics is provided and a case for truly scientific visualisation is made. Amongst other diagnostics are a most accurate and robust plate-boundary identification, slab-polarity recognition, plate-bending derivation, surface-topography component splitting and mantle-plume detection. Thanks to powerful image processing tools and other elaborate algorithms, these and many other insightful diagnostics are conveniently derived from only a subset of the most basic parameter fields. A brand new set of scientific quality, perceptually uniform colour maps including devon, davos, oslo and broc is introduced and made freely available (http://www.fabiocrameri.ch/colourmaps, last access: 25 June 2018). These novel colour maps bring a significant advantage over misleading, non-scientific colour maps like rainbow, which is shown to introduce a visual error to the underlying data of up to 7.5 %. Finally, StagLab (http://www.fabiocrameri.ch/StagLab, last access: 25 June 2018) is introduced, a software package that incorporates the whole suite of automated geodynamic diagnostics and, on top of that, applies state-of-the-art scientific visualisation to produce publication-ready figures and movies, all in the blink of an eye and all fully reproducible. StagLab, a simple, flexible, efficient and reliable tool made freely available to everyone, is written in MATLAB and adjustable for use with geodynamic mantle convection codes.
APA, Harvard, Vancouver, ISO, and other styles
44

Converso, L., and S. Hocek. "Optical Character Recognition." Journal of Visual Impairment & Blindness 84, no. 10 (December 1990): 507–9. http://dx.doi.org/10.1177/0145482x9008401004.

Full text
Abstract:
Computer-based optical character recognition (OCR) systems allow blind persons access to a wide variety of printed material. This article describes these systems and how they work and discusses the features that should be considered before one purchases them.
APA, Harvard, Vancouver, ISO, and other styles
45

LU, Peizhong. "Blind recognition of punctured convolutional codes." Science in China Series F 48, no. 4 (2005): 484. http://dx.doi.org/10.1360/03yf0480.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Liu, Ran R., Raika Pancaroglu, Charlotte S. Hills, Brad Duchaine, and Jason J. S. Barton. "Voice Recognition in Face-Blind Patients." Cerebral Cortex 26, no. 4 (October 27, 2014): 1473–87. http://dx.doi.org/10.1093/cercor/bhu240.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Moosavi, Reza, and Erik G. Larsson. "Fast Blind Recognition of Channel Codes." IEEE Transactions on Communications 62, no. 5 (May 2014): 1393–405. http://dx.doi.org/10.1109/tcomm.2014.050614.130297.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Rathore, Yuvraj Singh, Charvi Mittal, and Avik Basu. "Alphabet Recognition for Deaf-Blind People." IOSR Journal of Computer Engineering 16, no. 5 (2014): 15–20. http://dx.doi.org/10.9790/0661-16561520.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Heller, Morton A. "Tactile Memory in Sighted and Blind Observers: The Influence of Orientation and Rate of Presentation." Perception 18, no. 1 (February 1989): 121–33. http://dx.doi.org/10.1068/p180121.

Full text
Abstract:
Sighted, early blind, and late blind subjects attempted to identify numerals or number sequences printed on their palms. The numerals were either upright, inverted, or rotated perpendicular to the arm axis. Stimulus rotation degraded recognition in the early blind subjects, suggesting the influence of experience with visual frames of reference. Slower rates of presentation with upright number sequences improved recall in both sighted and blind observers. An experiment on tactual-visual braille recognition in the sighted observers showed that tilt degraded pattern identification, but visual guidance of the fingertip and ballpoint minimized this loss. A further experiment was performed to distinguish between visual-imagery and visual-frame-of-reference explanations of the visual guidance effect on recognition of rotated braille. Subjects explored upright or tilted braille characters while viewing only a light-emitting diode on the exploratory fingertip. Sight of scanning movements did not aid pattern recognition with tilt. The results indicate that the benefits of visual guidance on recognition of tilted patterns were probably due to frame-of-reference information. It is concluded that spatial reference information may aid tactile memory in the sighted and late blind, since the early blind performed at a lower level in the retention task. It is proposed that visual imagery may only explain the superiority of the sighted and late blind when familiar stimuli are studied.
APA, Harvard, Vancouver, ISO, and other styles
50

Chaudhari, V. J. "Currency Recognition App." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 10, 2021): 435–37. http://dx.doi.org/10.22214/ijraset.2021.34982.

Full text
Abstract:
Visually impaired people are those who have vision impairment or vision loss. They face a great number of problems in performing daily activities, and, like foreign visitors, they experience many difficulties in monetary transactions: they are unable to recognize paper currencies due to the similarity of paper texture and size between different denominations. This money detector app helps visually impaired users recognize and detect money. Using this application, a blind person can give a spoken command to open the smartphone camera; the camera then takes a picture of the note and tells the user, by speech, the value of the note. This Android project uses speech-to-text conversion to interpret the command given by the blind user. Speech recognition is a technology that allows users to provide spoken input to systems. The application also uses text-to-speech to read the value of the note aloud to the user. For currency detection, the application uses the Azure Custom Vision API with a machine-learning classification technique to detect currency from images of notes captured by the mobile camera.
APA, Harvard, Vancouver, ISO, and other styles