
Journal articles on the topic 'Face recognition. eng'


Consult the top 50 journal articles for your research on the topic 'Face recognition. eng.'


1

Ashok Kumar, M., and Sivaram Rajeyyagari. "Erratum to “A novel mechanism for dynamic multifarious and disturbed human face recognition using Advanced Stance Coalition (ASC)” [Comput Electr Eng 84 (June 2020) 106642]." Computers & Electrical Engineering 86 (September 2020): 106819. http://dx.doi.org/10.1016/j.compeleceng.2020.106819.

2

Zhao, Jian, Yu Cheng, Yi Cheng, Yang Yang, Fang Zhao, Jianshu Li, Hengzhu Liu, Shuicheng Yan, and Jiashi Feng. "Look across Elapse: Disentangled Representation Learning and Photorealistic Cross-Age Face Synthesis for Age-Invariant Face Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9251–58. http://dx.doi.org/10.1609/aaai.v33i01.33019251.

Abstract:
Despite the remarkable progress in face recognition related technologies, reliably recognizing faces across ages still remains a big challenge. The appearance of a human face changes substantially over time, resulting in significant intra-class variations. As opposed to current techniques for age-invariant face recognition, which either directly extract age-invariant features for recognition, or first synthesize a face that matches the target age before feature extraction, we argue that it is more desirable to perform both tasks jointly so that they can leverage each other. To this end, we propose a deep Age-Invariant Model (AIM) for face recognition in the wild with three distinct novelties. First, AIM presents a novel unified deep architecture jointly performing cross-age face synthesis and recognition in a mutually boosting way. Second, AIM achieves continuous face rejuvenation/aging with remarkable photorealistic and identity-preserving properties, avoiding the requirement of paired data and the true age of testing samples. Third, we develop effective and novel training strategies for end-to-end learning of the whole deep architecture, which generates powerful age-invariant face representations explicitly disentangled from the age variation. Extensive experiments on several cross-age datasets (MORPH, CACD and FG-NET) demonstrate the superiority of the proposed AIM model over the state of the art. Benchmarking our model on one of the most popular unconstrained face recognition datasets, IJB-C, additionally verifies the promising generalizability of AIM in recognizing faces in the wild.
3

Ellis, Andrew W., Andrew W. Young, Brenda M. Flude, and Dennis C. Hay. "Repetition priming of face recognition." Quarterly Journal of Experimental Psychology Section A 39, no. 2 (May 1987): 193–210. http://dx.doi.org/10.1080/14640748708401784.

Abstract:
Three experiments investigating the priming of the recognition of familiar faces are reported. In Experiment 1, recognizing the face of a celebrity in an “Is this face familiar?” task was primed by exposure several minutes earlier to a different photograph of the same person, but not by exposure to the person's written name (a partial replication of Bruce and Valentine, 1985). In Experiment 2, recognizing the face of a personal acquaintance was again primed by recognizing a different photograph of their face, but not by recognizing the acquaintance from that person's body shape, clothes etc. Experiment 3 showed that maximum repetition priming is obtained from prior exposure to an identical photograph of a famous face, less from a similar photograph, and least (but still significant) from a dissimilar photograph. We argue that repetition priming is a function of the degree of physical similarity between two stimuli and that lack of priming between different stimulus types (e.g., written names and faces, or bodies and faces) may be attributable to lack of physical similarity between prime and test stimuli. Repetition priming effects may be best explained by some form of “instance-based” model such as that proposed by McClelland and Rumelhart (1985).
4

Swain, Frank. "Egg donors paired by face recognition." New Scientist 239, no. 3189 (August 2018): 10. http://dx.doi.org/10.1016/s0262-4079(18)31375-7.

5

Zimmermann, Friederike G. S., Xiaoqian Yan, and Bruno Rossion. "An objective, sensitive and ecologically valid neural measure of rapid human individual face recognition." Royal Society Open Science 6, no. 6 (June 2019): 181904. http://dx.doi.org/10.1098/rsos.181904.

Abstract:
Humans may be the only species able to rapidly and automatically recognize a familiar face identity in a crowd of unfamiliar faces, an important social skill. Here, by combining electroencephalography (EEG) and fast periodic visual stimulation (FPVS), we introduce an ecologically valid, objective and sensitive neural measure of this human individual face recognition function. Natural images of various unfamiliar faces are presented at a fast rate of 6 Hz, allowing one fixation per face, with variable natural images of a highly familiar face identity, a celebrity, appearing every seven images (0.86 Hz). Following a few minutes of stimulation, a high signal-to-noise ratio neural response reflecting the generalized discrimination of the familiar face identity from unfamiliar faces is observed over the occipito-temporal cortex at 0.86 Hz and harmonics. When face images are presented upside-down, the individual familiar face recognition response is negligible, being reduced by a factor of 5 over occipito-temporal regions. Differences in the magnitude of the individual face recognition response across different familiar face identities suggest that factors such as exposure, within-person variability and distinctiveness mediate this response. Our findings of a biological marker for fast and automatic recognition of individual familiar faces with ecological stimuli open an avenue for understanding this function, its development and neural basis in neurotypical individual brains along with its pathology. This should also have implications for the use of facial recognition measures in forensic science.
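In FPVS analyses of this kind, the oddball response is typically quantified as a signal-to-noise ratio: the amplitude at the familiar-face frequency (about 0.86 Hz, i.e. 6 Hz / 7) and its harmonics divided by the mean amplitude of neighbouring frequency bins. The sketch below is a minimal NumPy illustration of that computation on a single preprocessed EEG channel; the parameters and the toy test signal are assumptions, not the authors' pipeline.

```python
import numpy as np

def fpvs_snr(eeg, sfreq, target_hz=6.0 / 7.0, n_harmonics=4, n_neighbors=20, skip=1):
    """Signal-to-noise ratio at an oddball frequency and its harmonics.

    eeg       : 1-D array, one channel of a preprocessed EEG segment
    sfreq     : sampling rate in Hz
    target_hz : oddball (familiar-face) frequency, ~0.857 Hz here
    SNR at a bin = amplitude / mean amplitude of neighbouring bins,
    excluding the bins immediately adjacent to the target bin.
    """
    n = len(eeg)
    amp = np.abs(np.fft.rfft(eeg)) / n            # amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    snrs = []
    for h in range(1, n_harmonics + 1):
        bin_idx = int(np.argmin(np.abs(freqs - h * target_hz)))
        lo, hi = bin_idx - n_neighbors, bin_idx + n_neighbors + 1
        neighbours = np.r_[amp[lo:bin_idx - skip], amp[bin_idx + skip + 1:hi]]
        snrs.append(amp[bin_idx] / neighbours.mean())
    return np.array(snrs)

# toy usage: 60 s of fake data at 512 Hz containing a ~0.857 Hz component
sfreq = 512
t = np.arange(0, 60, 1 / sfreq)
fake = np.sin(2 * np.pi * (6.0 / 7.0) * t) + np.random.randn(t.size)
print(fpvs_snr(fake, sfreq))
```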
6

Zheng, Siming, Rahmita Wirza OK Rahmat, Fatimah Khalid, and Nurul Amelina Nasharuddin. "3D texture-based face recognition system using fine-tuned deep residual networks." PeerJ Computer Science 5 (December 2, 2019): e236. http://dx.doi.org/10.7717/peerj-cs.236.

Abstract:
As 3D photography technology has developed rapidly in recent years, an enormous number of 3D images has been produced, and face recognition is one of the resulting research directions. Maintaining accuracy as the amount of data grows is crucial in 3D face recognition problems. Traditional machine learning methods can be used to recognize 3D faces, but their recognition rate declines rapidly as the number of 3D images increases; as a result, classifying large amounts of 3D image data with them is time-consuming, expensive, and inefficient. Deep learning methods have therefore become the focus of attention in 3D face recognition research. In our experiment, an end-to-end face recognition system based on 3D face texture is proposed, combining geometric invariants, histograms of oriented gradients and fine-tuned residual neural networks. The research shows that, when performance is evaluated on the FRGC-v2 dataset, the best Top-1 accuracy reaches 98.26% and the Top-2 accuracy 99.40% as the number of fine-tuned ResNet layers is increased. The proposed framework requires fewer iterations than traditional methods. The analysis suggests that applying the proposed recognition framework to large amounts of 3D face data could significantly improve recognition decisions in realistic 3D face scenarios.
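The fine-tuning step this kind of system relies on can be illustrated with a short PyTorch/torchvision sketch: an ImageNet-pretrained ResNet has its classification head replaced and retrained on face-texture crops. The folder layout, hyperparameters and the choice to freeze the backbone are assumptions for illustration; the paper's full pipeline additionally uses geometric invariants and HOG features.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# hypothetical folder layout: face_textures/<subject_id>/*.png
data = datasets.ImageFolder(
    "face_textures",
    transform=transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ]),
)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# torchvision >= 0.13; older versions use pretrained=True instead
model = models.resnet50(weights="IMAGENET1K_V1")
for p in model.parameters():                 # freeze the pretrained backbone,
    p.requires_grad = False                  # fine-tune only the new head
model.fc = nn.Linear(model.fc.in_features, len(data.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```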
7

Zarei, Shima. "Face recognition methods analysis." International Journal Artificial Intelligent and Informatics 1, no. 1 (July 10, 2018): 01. http://dx.doi.org/10.33292/ijarlit.v1i1.13.

Abstract:
Face recognition is one of the most important problems in image processing. It matters because it is used for various real-world purposes, such as criminal detection or detecting passport and visa fraud at airport checks. Facebook provides a familiar example of a face recognition application: it sends notifications to a user's friends when they are recognized in images that the user has uploaded. To solve the face recognition problem, different methods have been introduced, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), Support Vector Machines (SVM), Linear Discriminant Analysis (LDA), and Hidden Markov Models (HMM), which are explained and analyzed here. In addition, algorithms such as Eigenface, Fisherface and Local Binary Pattern Histogram (LBPH), which are among the simplest and most accurate methods, are implemented in this project on the AT&T dataset to recognize the face most similar to the other faces in the dataset. To this end, these algorithms are explained and the advantages and disadvantages of each are analyzed as well. Consequently, the best method is selected by comparing the face reconstruction results of the Eigenface, Fisherface and Local Binary Pattern Histogram methods; in this project the Eigenface method gives the best result. It should be noted that color map methods are used when implementing the face recognition algorithms to distinguish facial features more precisely. In this work a rainbow color map is used in the Eigenface algorithm and an HSV color map in the Fisherface algorithm, and the results show that the HSV color map is more accurate than the rainbow color map.
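The Eigenface, Fisherface and LBPH recognizers named above are all available in OpenCV's contrib face module; the sketch below shows a minimal comparison of the three on the standard AT&T/ORL directory layout. It is an illustrative reconstruction under assumed paths and the opencv-contrib-python package, not the project's actual code.

```python
import glob
import os
import cv2
import numpy as np

# assumed AT&T/ORL layout: att_faces/s1 ... s40, each with 1.pgm ... 10.pgm
images, labels = [], []
for path in sorted(glob.glob(os.path.join("att_faces", "s*", "*.pgm"))):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    images.append(cv2.resize(img, (92, 112)))     # ORL images are 92x112
    labels.append(int(os.path.basename(os.path.dirname(path))[1:]))
labels = np.array(labels)

# simple split: one image per subject held out for testing
train_idx = [i for i in range(len(images)) if (i % 10) != 9]
test_idx = [i for i in range(len(images)) if (i % 10) == 9]

recognizers = {
    "Eigenfaces": cv2.face.EigenFaceRecognizer_create(),
    "Fisherfaces": cv2.face.FisherFaceRecognizer_create(),
    "LBPH": cv2.face.LBPHFaceRecognizer_create(),
}
for name, rec in recognizers.items():
    rec.train([images[i] for i in train_idx], labels[train_idx])
    correct = sum(rec.predict(images[i])[0] == labels[i] for i in test_idx)
    print(f"{name}: {correct / len(test_idx):.2%} correct")
```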
8

Zhong, Yuanyi, Jiansheng Chen, and Bo Huang. "Toward End-to-End Face Recognition Through Alignment Learning." IEEE Signal Processing Letters 24, no. 8 (August 2017): 1213–17. http://dx.doi.org/10.1109/lsp.2017.2715076.

9

Zhang, Hongxin, and Liying Chi. "End-to-End Spatial Transform Face Detection and Recognition." Virtual Reality & Intelligent Hardware 2, no. 2 (April 2020): 119–31. http://dx.doi.org/10.1016/j.vrih.2020.04.002.

10

Matsuda, Yoshi-Taka, Masako Myowa-Yamakoshi, and Satoshi Hirata. "Familiar face + novel face = familiar face? Representational bias in the perception of morphed faces in chimpanzees." PeerJ 4 (August 4, 2016): e2304. http://dx.doi.org/10.7717/peerj.2304.

Abstract:
Highly social animals possess a well-developed ability to distinguish the faces of familiar from novel conspecifics and to produce the distinct behaviors needed to maintain their societies. However, how animals behave when they encounter ambiguous faces of familiar yet novel conspecifics, e.g., strangers whose faces resemble known individuals, has not been well characterised. Using a morphing technique and a preferential-looking paradigm, we address this question via the chimpanzee's face-recognition abilities. We presented eight subjects with three types of stimuli: (1) familiar faces, (2) novel faces and (3) intermediate morphed faces that were 50% familiar and 50% novel faces of conspecifics. We found that chimpanzees spent more time looking at novel faces and scanned novel faces more extensively than familiar or intermediate faces. Interestingly, chimpanzees looked at intermediate faces in a manner similar to familiar faces with regard to fixation duration, fixation count, and saccade length for facial scanning, even though they were encountering the intermediate faces for the first time. We excluded the possibility that subjects merely detected and avoided traces of morphing in the intermediate faces. These findings suggest a feeling-of-familiarity bias: chimpanzees perceive an intermediate face as familiar by detecting traces of a known individual, with a 50% alteration being sufficient to produce perceived familiarity.
11

Asiedu, Louis, Bernard O. Essah, Samuel Iddi, K. Doku-Amponsah, and Felix O. Mettle. "Evaluation of the DWT-PCA/SVD Recognition Algorithm on Reconstructed Frontal Face Images." Journal of Applied Mathematics 2021 (April 7, 2021): 1–8. http://dx.doi.org/10.1155/2021/5541522.

Abstract:
The face is the second most important biometric part of the human body, next to the fingerprint. Recognizing a face image with partial occlusion (a half image) is an intractable exercise, as occlusions affect the performance of the recognition module. To this end, occluded images are sometimes reconstructed or completed with some imputation mechanism before recognition. This study assessed the performance of the principal component analysis and singular value decomposition algorithm with discrete wavelet transform preprocessing (DWT-PCA/SVD) on a reconstructed face image database. The reconstruction of the half-face images was done by leveraging the bilateral symmetry of frontal faces. Numerical assessment of the adopted recognition algorithm gave average recognition rates of 95% and 75% when left and right reconstructed face images were used for recognition, respectively. It was evident from the statistical assessment that the DWT-PCA/SVD algorithm gives a relatively lower average recognition distance for the left reconstructed face images. DWT-PCA/SVD is therefore recommended as a suitable algorithm for recognizing face images under partial occlusion (half-face images). The algorithm performs relatively better on left reconstructed face images.
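Two ideas in this abstract lend themselves to a compact sketch: completing a half face by mirroring it about the vertical midline (bilateral symmetry), and extracting DWT approximation coefficients that are then projected onto a PCA basis obtained via SVD. The code below is a simplified illustration with assumed image sizes and parameters, not the authors' implementation.

```python
import numpy as np
import pywt

def reconstruct_from_half(half, side="left"):
    """Complete a frontal face from one visible half by mirroring it
    about the vertical midline (exploits bilateral facial symmetry)."""
    mirrored = np.fliplr(half)
    return np.hstack([half, mirrored]) if side == "left" else np.hstack([mirrored, half])

def dwt_feature(face, wavelet="haar"):
    """Approximation sub-band of a single-level 2-D DWT, flattened."""
    cA, _ = pywt.dwt2(face.astype(float), wavelet)
    return cA.ravel()

def pca_svd_fit(train_feats, n_components=50):
    """PCA basis via SVD of the mean-centred training matrix."""
    mean = train_feats.mean(axis=0)
    _, _, vt = np.linalg.svd(train_feats - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(feats, mean, basis):
    return (feats - mean) @ basis.T

# usage sketch (gallery: full frontal faces of equal size; probe: a left half face)
# train = np.stack([dwt_feature(f) for f in gallery])
# mean, basis = pca_svd_fit(train)
# gallery_proj = project(train, mean, basis)
# probe_proj = project(dwt_feature(reconstruct_from_half(left_half)), mean, basis)
# match = np.argmin(np.linalg.norm(gallery_proj - probe_proj, axis=1))
```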
12

O'Toole, Alice J., and Carlos D. Castillo. "Face Recognition by Humans and Machines: Three Fundamental Advances from Deep Learning." Annual Review of Vision Science 7, no. 1 (September 15, 2021): 543–70. http://dx.doi.org/10.1146/annurev-vision-093019-111701.

Abstract:
Deep learning models currently achieve human levels of performance on real-world face recognition tasks. We review scientific progress in understanding human face processing using computational approaches based on deep learning. This review is organized around three fundamental advances. First, deep networks trained for face identification generate a representation that retains structured information about the face (e.g., identity, demographics, appearance, social traits, expression) and the input image (e.g., viewpoint, illumination). This forces us to rethink the universe of possible solutions to the problem of inverse optics in vision. Second, deep learning models indicate that high-level visual representations of faces cannot be understood in terms of interpretable features. This has implications for understanding neural tuning and population coding in the high-level visual cortex. Third, learning in deep networks is a multistep process that forces theoretical consideration of diverse categories of learning that can overlap, accumulate over time, and interact. Diverse learning types are needed to model the development of human face processing skills, cross-race effects, and familiarity with individual faces.
13

DeGutis, Joseph M., Shlomo Bentin, Lynn C. Robertson, and Mark D'Esposito. "Functional Plasticity in Ventral Temporal Cortex following Cognitive Rehabilitation of a Congenital Prosopagnosic." Journal of Cognitive Neuroscience 19, no. 11 (November 2007): 1790–802. http://dx.doi.org/10.1162/jocn.2007.19.11.1790.

Abstract:
We used functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) to measure neural changes associated with training configural processing in congenital prosopagnosia, a condition in which face identification abilities are not properly developed in the absence of brain injury or visual problems. We designed a task that required discriminating faces by their spatial configuration and, after extensive training, prosopagnosic MZ significantly improved at face identification. Event-related potential results revealed that although the N170 was not selective for faces before training, its selectivity after training was normal. fMRI demonstrated increased functional connectivity between ventral occipital temporal face-selective regions (right occipital face area and right fusiform face area) that accompanied improvement in face recognition. Several other regions showed fMRI activity changes with training; the majority of these regions increased connectivity with face-selective regions. Together, the neural mechanisms associated with face recognition improvements involved strengthening early face-selective mechanisms and increased coordination between face-selective and nonselective regions, particularly in the right hemisphere.
14

Joyce, Carrie A., and Marta Kutas. "Event-Related Potential Correlates of Long-Term Memory for Briefly Presented Faces." Journal of Cognitive Neuroscience 17, no. 5 (May 2005): 757–67. http://dx.doi.org/10.1162/0898929053747603.

Abstract:
Electrophysiological studies have investigated the nature of face recognition in a variety of paradigms; some have contrasted famous and novel faces in explicit memory paradigms, others have repeated faces to examine implicit memory/priming. If the general finding that implicit memory can last for up to several months also holds for novel faces, a reliable measure of it could have practical application for eyewitness testimony, given that explicit measures of eyewitness memory have at times proven fallible. The current study aimed to determine whether indirect behavioral and electrophysiological measures might yield reliable estimates of face memory over longer intervals than have typically been obtained with priming manipulations. Participants were shown 192 faces and then tested for recognition at four test delays ranging from immediately up to 1 week later. Three event-related brain potential components (N250r, N400f, and LPC) varied with memory measures although only the N250r varied regardless of explicit recognition, that is, with both repetition and recognition.
15

Bate, Sarah, Rachel Bennetts, Ebony Murray, and Emma Portch. "Enhanced Matching of Children’s Faces in “Super-Recognisers” But Not High-Contact Controls." i-Perception 11, no. 4 (July 2020): 204166952094442. http://dx.doi.org/10.1177/2041669520944420.

Abstract:
Face matching is notoriously error-prone, and some work suggests additional difficulty when matching the faces of children. It is possible that individuals with natural proficiencies in adult face matching (“super-recognisers” [SRs]) will also excel at the matching of children’s faces, although other work implicates facilitations in typical perceivers who have high levels of contact with young children (e.g., nursery teachers). This study compared the performance of both of these groups on adult and child face matching to a group of low-contact controls. High- and low-contact control groups performed at a remarkably similar level in both tasks, whereas facilitations for adult and child face matching were observed in some (but not all) SRs. As a group, the SRs performed better in the adult compared with the child task, demonstrating an extended own-age bias compared with controls. These findings suggest that additional exposure to children’s faces does not assist performance in a face matching task, and that the mechanisms underpinning superior recognition of adult faces can also facilitate child face recognition. Real-world security organisations should therefore seek individuals with general facilitations in face matching for both adult and child face matching tasks.
16

Vanagaitė, Kristina, Gintautas Valickas, and Laura Soloveičikienė. "MOTERŲ IR VYRŲ PAKEISTŲ VEIDO ELEMENTŲ ATPAŽINIMO TIKSLUMAS." Psichologija 31 (January 1, 2005): 54–74. http://dx.doi.org/10.15388/psichol.2005..4338.

Abstract:
The article analyses how male and female participants recognize elements of men's and women's faces. During the study it was recorded whether the participants (30 men and 30 women) noticed that faces displayed on a computer screen differed from a memorized target face. The results showed that both men and women recognized modified elements of male target faces significantly more accurately and quickly than those of female target faces. Modified hair and eyes were recognized most accurately and quickly. The unequal accuracy and speed of recognizing face elements were related to the recognition strategy the participants applied and to their subjective ratings of how easy the face elements were to recognize and how similar the faces were to one another. In addition, women tended to recognize modified elements of the target faces more accurately and quickly than men. THE ACCURACY OF RECOGNITION OF MODIFIED FACE ELEMENTS OF WOMEN AND MEN. Kristina Vanagaitė, Gintautas Valickas, Laura Soloveičikienė. Summary: The article examines how participants of different sexes recognize face elements. The goals of the research were: (1) to establish the accuracy and speed of recognition of modified face elements and compare the results with estimates of face typicality/distinctiveness; and (2) to establish whether the sexes differ in identifying modified face elements. With the help of a computer-based photofit program, target faces of two men and two women were created, and the research participants were asked to memorize them. New faces were then created by replacing particular elements of a target face (hair, eyes, lips and ears) with other ones. The research experts selected five stimulus faces for each target face. In the course of the research it was recorded whether the subjects (30 men and 30 women) noticed that the displayed faces differed from the target ones. While the created faces were being demonstrated, the participants' answers were registered with the help of the implemented computer program. At the end of the research, using 5- and 7-point scales, the participants rated how easy the face elements were to recognize and how similar the faces were to each other, and they indicated the strategy they used to identify modified face elements. The results showed that men as well as women were more accurate and faster, to a statistically significant degree, in recognizing modified elements of male faces than of female faces. The more accurate recognition of modified face elements may have been determined by the greater distinctiveness of the male target faces. Hair and eyes were recognized most accurately and quickly, followed by lips and noses, while ears proved the most difficult elements to identify. The research also showed that the differing speed and accuracy of recognition of modified face elements are connected with: (1) the strategy the participants applied (correct answers were mostly given when a simultaneous recognition strategy was applied); (2) the subjective evaluation of how easy the face elements were to recognize (the participants identified most accurately those elements that they rated easiest to recognize); and (3) the assessment of face similarity (a higher level of perceived similarity between faces impedes the recognition of modified elements of a target face). Moreover, it was established that women, in comparison with men, are quicker and more accurate in recognizing modified elements of a target face.
17

Hou, Chunna, and Zhijun Liu. "The Survival Processing Advantage of Face: The Memorization of the (Un)Trustworthy Face Contributes More to Survival Adaptation." Evolutionary Psychology 17, no. 2 (April 1, 2019): 147470491983972. http://dx.doi.org/10.1177/1474704919839726.

Abstract:
Researchers have found that, compared with other encoding conditions (e.g., pleasantness), information processed for its relevance to survival is retrieved at a higher rate; this effect is known as the survival processing advantage (SPA). Previous experiments have shown that this memory advantage extends to several types of visual pictorial material, such as pictures and short video clips, but it has been debated whether face stimuli constitute a boundary condition of the SPA. The current work explores whether faces of differing trustworthiness confer a mnemonic advantage relevant to human adaptation. In two experiments, we manipulated facial trustworthiness (untrustworthy, neutral, and trustworthy), which is believed to provide information relevant to survival decisions. Participants were asked to predict their avoidance or approach tendency when encountering strangers (represented by faces from the three trustworthiness categories) in a survival scenario and in a control scenario. The final surprise memory tests revealed that both trustworthy and untrustworthy faces were recognized better when the task was related to survival. Experiment 1 demonstrated the existence of an SPA at both poles of facial untrustworthiness and trustworthiness. In Experiment 2, we replicated the SPA for trustworthy and untrustworthy face recognition using a matched design, and found this memory benefit only in recognition tasks, not in source memory tasks. These results extend the generality of SPAs to the face domain.
18

Menon, Nadia, Richard I. Kemp, and David White. "More than a sum of parts: robust face recognition by integrating variation." Royal Society Open Science 5, no. 5 (May 2018): 172381. http://dx.doi.org/10.1098/rsos.172381.

Abstract:
Familiarity incrementally improves our ability to identify faces. It has been hypothesized that this improvement reflects the refinement of memory representations which incorporate variation in appearance across encounters. Although it is established that exposure to variation improves face identification accuracy, it is not clear how variation is assimilated into internal face representations. To address this, we used a novel approach to isolate the effect of integrating separate exposures into a single-identity representation. Participants (n = 113) were exposed to either a single video clip or a pair of video clips of target identities. Pairs of video clips were presented as either a single identity (associated with a single name, e.g. Betty-Sue) or dual identities (associated with two names, e.g. Betty and Sue). Results show that participants exposed to pairs of video clips showed better matching performance compared with participants trained with a single clip. More importantly, identification accuracy was higher for faces presented as single identities compared to faces presented as dual identities. This provides the first direct evidence that the integration of information across separate exposures benefits face matching, thereby establishing a mechanism that may explain people's impressive ability to recognize familiar faces.
19

Fysh, Matthew C., Lisa Stacchi, and Meike Ramon. "Differences between and within individuals, and subprocesses of face cognition: implications for theory, research and personnel selection." Royal Society Open Science 7, no. 9 (September 2020): 200233. http://dx.doi.org/10.1098/rsos.200233.

Abstract:
Recent investigations of individual differences have demonstrated striking variability in performance both within the same subprocess in face cognition (e.g. face perception) and between two different subprocesses (i.e. face perception versus face recognition) that are assessed using different tasks (face matching versus face memory). Such differences between and within individuals, and between and within laboratory tests, raise practical challenges. This applies in particular to the development of screening tests for the selection of personnel in real-world settings where faces are routinely processed, such as at passport control. The aim of this study, therefore, was to examine the performance profiles of individuals within and across two different subprocesses of face cognition: face perception and face recognition. To this end, 146 individuals completed four different tests of face matching—one novel tool for assessing proficiency in face perception, as well as three established measures—and two benchmark tests of face memory probing face recognition. In addition to correlational analyses, we further scrutinized individual performance profiles of the highest and lowest performing observers identified per test, as well as across all tests. Overall, a number of correlations emerged between tests. However, there was limited evidence at the individual level to suggest that high proficiency in one test generalized to other tests measuring the same subprocess, as well as those that measured a different subprocess. Beyond emphasizing the need to honour inter-individual differences through careful multivariate assessment in the laboratory, our findings have real-world implications: combinations of tests that most accurately map the task(s) and processes of interest are required for personnel selection.
20

Chapman, Angus F., Hannah Hawkins-Elder, and Tirta Susilo. "How robust is familiar face recognition? A repeat detection study of more than 1000 faces." Royal Society Open Science 5, no. 5 (May 2018): 170634. http://dx.doi.org/10.1098/rsos.170634.

Abstract:
Recent theories suggest that familiar faces have a robust representation in memory because they have been encountered over a wide variety of contexts and image changes (e.g. lighting, viewpoint and expression). By contrast, unfamiliar faces are encountered only once, and so they do not benefit from such richness of experience and are represented based on image-specific details. In this registered report, we used a repeat detection task to test whether familiar faces are recognized better than unfamiliar faces across image changes. Participants viewed a stream of more than 1000 celebrity face images for 0.5 s each, any of which might be repeated at a later point and had to be detected. Some participants saw the same image at repeats, while others saw a different image of the same face. A post-experimental familiarity check allowed us to determine which celebrities were and were not familiar to each participant. We had three predictions: (i) detection would be better for familiar than unfamiliar faces, (ii) detection would be better across same rather than different images, and (iii) detection of familiar faces would be comparable across same and different images, but detection of unfamiliar faces would be poorer across different images. We obtained support for the first two predictions but not the last. Instead, we found that repeat detection of faces, regardless of familiarity, was poorer across different images. Our study suggests that the robustness of familiar face recognition may have limits, and that under some conditions, familiar face recognition can be just as influenced by image changes as unfamiliar face recognition.
21

Yang, Jing, Adrian Bulat, and Georgios Tzimiropoulos. "FAN-Face: a Simple Orthogonal Improvement to Deep Face Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12621–28. http://dx.doi.org/10.1609/aaai.v34i07.6953.

Abstract:
It is known that facial landmarks provide pose, expression and shape information. In addition, when matching, for example, a profile and/or expressive face to a frontal one, knowledge of these landmarks is useful for establishing correspondence which can help improve recognition. However, in prior work on face recognition, facial landmarks are only used for face cropping in order to remove scale, rotation and translation variations. This paper proposes a simple approach to face recognition which gradually integrates features from different layers of a facial landmark localization network into different layers of the recognition network. To this end, we propose an appropriate feature integration layer which makes the features compatible before integration. We show that such a simple approach systematically improves recognition on the most difficult face recognition datasets, setting a new state-of-the-art on IJB-B, IJB-C and MegaFace datasets.
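The core idea, injecting landmark-network features into the recognition network through a layer that first makes them compatible, can be illustrated as follows. This is a hypothetical integration layer (1x1 convolution plus batch normalization, then element-wise addition), shown only as a sketch of the general mechanism, not the authors' exact design.

```python
import torch
import torch.nn as nn

class FeatureIntegration(nn.Module):
    """Illustrative layer that makes landmark-network features compatible
    with recognition-network features and fuses them by addition."""
    def __init__(self, landmark_channels, recog_channels):
        super().__init__()
        self.adapt = nn.Sequential(
            nn.Conv2d(landmark_channels, recog_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(recog_channels),
        )

    def forward(self, recog_feat, landmark_feat):
        # resize the landmark feature map if the spatial sizes differ
        if landmark_feat.shape[-2:] != recog_feat.shape[-2:]:
            landmark_feat = nn.functional.interpolate(
                landmark_feat, size=recog_feat.shape[-2:],
                mode="bilinear", align_corners=False)
        return recog_feat + self.adapt(landmark_feat)

# usage: fuse 68 landmark heatmap channels into a 256-channel recognition block
fuse = FeatureIntegration(landmark_channels=68, recog_channels=256)
out = fuse(torch.randn(2, 256, 28, 28), torch.randn(2, 68, 56, 56))
print(out.shape)  # torch.Size([2, 256, 28, 28])
```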
22

Wu, Yanfang, Xiaobo Lu, and Chen Qin. "Towards End-to-End Face Recognition Method through Spatial Transformer." Journal of Physics: Conference Series 1575 (June 2020): 012090. http://dx.doi.org/10.1088/1742-6596/1575/1/012090.

23

Tang, Fenggao, Xuedong Wu, Zhiyu Zhu, Zhengang Wan, Yanchao Chang, Zhaoping Du, and Lili Gu. "An end-to-end face recognition method with alignment learning." Optik 205 (March 2020): 164238. http://dx.doi.org/10.1016/j.ijleo.2020.164238.

24

Deffenbacher, Kenneth A., Cheryl Hendrickson, Alice J. O'Toole, David P. Huff, and Hervé Abdi. "Manipulating Face Gender." Journal of Biological Systems 06, no. 03 (September 1998): 219–39. http://dx.doi.org/10.1142/s0218339098000169.

Abstract:
Previous research has shown that faces coded as pixel-based images may be constructed from an appropriately weighted combination of statistical "features" (eigenvectors) which are useful for discriminating members of a learned set of images. We have shown previously that two of the most heavily weighted features are important in predicting face gender. Using a simple computational model, we adjusted weightings of these features in more masculine and more feminine directions for both male and female adult Caucasian faces. In Experiment 1, cross-gender face image alterations (e.g., feminizing male faces) reduced both gender classification speed and accuracy for young adult Caucasian observers, whereas same-gender alterations (e.g., masculinizing male faces) had no effect as compared to unaltered controls. Effects on femininity-masculinity ratings mirrored those obtained on gender classification speed and accuracy. We controlled statistically for possible effects of image distortion incurred by our gender manipulations. In Experiment 2 we replicated the same pattern of accuracy data. Combined, these data indicate the psychological relevance of the features derived from the computational model. Despite having different effects on the ease of gender classification, neither sort of gender alteration negatively impacted face recognition (Experiment 3), yielding evidence for a model of face recognition wherein gender and familiarity processing proceed in parallel.
25

Tu, Huan, Gesang Duoji, Qijun Zhao, and Shuang Wu. "Improved Single Sample Per Person Face Recognition via Enriching Intra-Variation and Invariant Features." Applied Sciences 10, no. 2 (January 14, 2020): 601. http://dx.doi.org/10.3390/app10020601.

Abstract:
Face recognition using a single sample per person is a challenging problem in computer vision. In this scenario, due to the lack of training samples, it is difficult to distinguish between inter-class variations caused by identity and intra-class variations caused by external factors such as illumination, pose, etc. To address this problem, we propose a scheme to improve the recognition rate by both generating additional samples to enrich the intra-variation and eliminating external factors to extract invariant features. Firstly, a 3D face modeling module is proposed to recover the intrinsic properties of the input image, i.e., 3D face shape and albedo. To obtain the complete albedo, we come up with an end-to-end network to estimate the full albedo UV map from incomplete textures. The obtained albedo UV map not only eliminates the influence of the illumination, pose, and expression, but also retains the identity information. With the help of the recovered intrinsic properties, we then generate images under various illuminations, expressions, and poses. Finally, the albedo and the generated images are used to assist single sample per person face recognition. The experimental results on Face Recognition Technology (FERET), Labeled Faces in the Wild (LFW), Celebrities in Frontal-Profile (CFP) and other face databases demonstrate the effectiveness of the proposed method.
26

Scanlan, Lesley C., and Robert A. Johnston. "I Recognize Your Face, But I Can't Remember Your Name: A Grown up Explanation." Quarterly Journal of Experimental Psychology Section A 50, no. 1 (February 1997): 183–98. http://dx.doi.org/10.1080/027249897392288.

Abstract:
Contemporary models of face recognition explain everyday difficulties in name retrieval by proposing that name information can only be accessed after semantic information (e.g. Bruce & Young, 1986) or by proposing an architecture which puts name retrieval at a disadvantage (e.g. Burton & Bruce, 1992). Experiments reported here examined the time required to access name and semantic details by adult and child subjects. In Experiment 1 adult subjects took more time to match familiar faces to names than to other semantic details (e.g. occupation), a finding consistent with all the previous literature on name retrieval. Experiment 2, however, showed that the youngest subjects were significantly faster in matching familiar faces to names than to semantic details. Experiment 3 also showed that children were faster at accessing names than occupations when giving vocal responses to presentations of familiar faces. These findings are not predicted by rigidly sequential models of face recognition and are discussed with specific reference to the ontogenesis of models based on a more flexible connectionist architecture.
27

McNeill, Allan, and A. Mike Burton. "The locus of semantic priming effects in person recognition." Quarterly Journal of Experimental Psychology Section A 55, no. 4 (October 2002): 1141–56. http://dx.doi.org/10.1080/02724980244000189.

Abstract:
Semantic priming in person recognition has been studied extensively. In a typical experiment, participants are asked to make a familiarity decision to target items that have been immediately preceded by related or unrelated primes. Facilitation is usually observed from related primes, and this priming is equivalent across stimulus domains (i.e., faces and names prime one another equally). Structural models of face recognition (e.g., IAC: Burton, Bruce, & Johnston, 1990) accommodate these effects by proposing a level of person identity nodes (PINs) at which recognition routes converge, and which allow access to a common pool of semantics. We present three experiments that examine semantic priming for different decisions. Priming for a semantic decision (e.g., British/American?) shows exactly the same pattern that is normally observed for a familiarity decision. The pattern is equivalent for name and face recognition. However, no semantic priming is observed when participants are asked to make a sex decision. These results constrain future models of face processing and are discussed with reference to current theories of semantic priming.
28

Rodrigues, João, Roberto Lam, and Hans du Buf. "Cortical 3D Face and Object Recognition Using 2D Projections." International Journal of Creative Interfaces and Computer Graphics 3, no. 1 (January 2012): 45–62. http://dx.doi.org/10.4018/jcicg.2012010104.

Abstract:
Empirical studies concerning face recognition suggest that faces may be stored in memory by a few canonical representations. Cortical area V1 contains double-opponent colour blobs, as well as simple, complex and end-stopped cells, which provide input for a multiscale line/edge representation, keypoints for dynamic feature routing, and saliency maps for Focus-of-Attention. Combined, these allow faces to be segregated. Events of different facial views are stored in memory and combined to identify the view and recognise a face, including its expression. In this paper, the authors show that with five 2D views and their cortical representations it is possible to determine the left-right and frontal-lateral-profile views, achieving a view-invariant recognition rate of 91%. The authors also show that the same principle with eight views can be applied to 3D object recognition when objects are mainly rotated about the vertical axis.
29

Rani, Sheela, Vuyyuru Tejaswi, Bonthu Rohitha, and Bhimavarapu Akhil. "Pre filtering techniques for face recognition based on edge detection algorithm." International Journal of Engineering & Technology 7, no. 1.1 (December 21, 2017): 213. http://dx.doi.org/10.14419/ijet.v7i1.1.9469.

Abstract:
Face recognition has turned out to be one of the most important and interesting areas of research. A face recognition framework is a computer application capable of identifying or verifying a human face in a digital picture or in video frames. One way to do this is by matching selected facial features against the pictures in a database. It is commonly utilized in security systems and can be implemented alongside other biometrics, for example fingerprint or iris recognition systems. A picture is a combination of edges: the curved line portions where the brightness of the image changes sharply are known as edges. We utilize the same idea in the field of face detection, where the intensity of facial colours is used as a consistent value. Face recognition involves comparing a picture with a database of stored faces in order to identify the individual in the given input picture. The entire procedure covers three phases (face detection, feature extraction and recognition), and different strategies are required depending on the specified requirements.
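As a rough illustration of edge-based pre-filtering, the sketch below computes a Canny edge map, keeps only image blocks with enough edge content, and passes those candidate regions to an off-the-shelf Haar-cascade detector. The block size, thresholds and test image path are assumptions for illustration, not the paper's algorithm.

```python
import cv2
import numpy as np

def edge_prefilter(gray, block=96, min_edge_ratio=0.04):
    """Return bounding boxes of blocks with enough edge content to be
    worth passing to a full face detector (a cheap pre-filter)."""
    edges = cv2.Canny(gray, 50, 150)
    h, w = gray.shape
    candidates = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = edges[y:y + block, x:x + block]
            if np.count_nonzero(patch) / patch.size >= min_edge_ratio:
                candidates.append((x, y, block, block))
    return candidates

gray = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical test image
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = []
for (x, y, w, h) in edge_prefilter(gray):
    roi = gray[y:y + h, x:x + w]
    for (fx, fy, fw, fh) in detector.detectMultiScale(roi, 1.1, 4):
        faces.append((x + fx, y + fy, fw, fh))
print(faces)
```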
30

Dixon, Mike J., Daniel N. Bub, and Martin Arguin. "Semantic and Visual Determinants of Face Recognition in a Prosopagnosic Patient." Journal of Cognitive Neuroscience 10, no. 3 (May 1998): 362–76. http://dx.doi.org/10.1162/089892998562799.

Abstract:
Prosopagnosia is the neuropathological inability to recognize familiar people by their faces. It can occur in isolation or can coincide with recognition deficits for other nonface objects. Often, patients whose prosopagnosia is accompanied by object recognition difficulties have more trouble identifying certain categories of objects relative to others. In previous research, we demonstrated that objects that shared multiple visual features and were semantically close posed severe recognition difficulties for a patient with temporal lobe damage. We now demonstrate that this patient's face recognition is constrained by these same parameters. The prosopagnosic patient ELM had difficulties pairing faces to names when the faces shared visual features and the names were semantically related (e.g., Tonya Harding, Nancy Kerrigan, and Josée Chouinard— three ice skaters). He made tenfold fewer errors when the exact same faces were associated with semantically unrelated people (e.g., singer Celine Dion, actress Betty Grable, and First Lady Hillary Clinton). We conclude that prosopagnosia and co-occurring category-specific recognition problems both stem from difficulties disambiguating the stored representations of objects that share multiple visual features and refer to semantically close identities or concepts.
31

Zhang, Jing, and You Li. "A Novel Approach of Face Detection Based on Skin Color Segmentation and SVM." Advanced Materials Research 225-226 (April 2011): 437–41. http://dx.doi.org/10.4028/www.scientific.net/amr.225-226.437.

Abstract:
Nowadays, face detection and recognition have gained importance in security and information access. In this paper, an efficient method of face detection based on skin color segmentation and a Support Vector Machine (SVM) is proposed. Firstly, the image is segmented using a color model to filter candidate faces roughly; then eye-analogue segments at a given scale are discovered by finding regions that are darker than their neighborhoods, to filter candidate faces further; finally, an SVM classifier, which performs well in classification tasks, is used to detect the face in the test image. Our tests in this paper are based on the MIT face database. The experimental results demonstrate that the proposed method achieves an encouraging detection rate.
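The two stages described, skin-colour segmentation followed by SVM verification, can be sketched as below: a YCrCb threshold produces candidate regions, and a scikit-learn SVM (trained elsewhere on labelled face/non-face patches) accepts or rejects each candidate. The colour thresholds and patch size are common illustrative values, not necessarily those used in the paper.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# stage 1: rough skin-colour segmentation in YCrCb space
def skin_candidates(bgr, min_area=400):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # common skin range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# stage 2: SVM verification of each candidate patch
def patch_vector(gray_patch, size=(24, 24)):
    return cv2.resize(gray_patch, size).ravel() / 255.0

# X_train / y_train would hold vectors from labelled face / non-face patches:
# svm = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)

def detect_faces(bgr, svm):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    faces = []
    for (x, y, w, h) in skin_candidates(bgr):
        vec = patch_vector(gray[y:y + h, x:x + w]).reshape(1, -1)
        if svm.predict(vec)[0] == 1:          # label 1 = face
            faces.append((x, y, w, h))
    return faces
```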
32

Guzzi, Francesco, Luca De Bortoli, Romina Soledad Molina, Stefano Marsi, Sergio Carrato, and Giovanni Ramponi. "Distillation of an End-to-End Oracle for Face Verification and Recognition Sensors." Sensors 20, no. 5 (March 2, 2020): 1369. http://dx.doi.org/10.3390/s20051369.

Abstract:
Face recognition functions are today exploited through biometric sensors in many applications, from extended security systems to inclusion devices, and deep neural network methods are reaching stunning performance in this field. The main limitation of the deep learning approach is an inconvenient relation between the accuracy of the results and the computing power needed. When a personal device is employed, in particular, many algorithms require a cloud computing approach to achieve the expected performance; other algorithms adopt models that are simple by design. A third viable option consists of model (oracle) distillation. This is the most intriguing of the compression techniques, since it permits devising the minimal structure that will enforce the same I/O relation as the original model. In this paper, a distillation technique is applied to a complex model, enabling the introduction of fast state-of-the-art recognition capabilities on a low-end hardware face recognition sensor module. Two distilled models are presented in this contribution: the former can be used directly in place of the original oracle, while the latter better embodies the end-to-end approach, removing the need for a separate alignment procedure. The presented biometric systems are examined on the two problems of face verification and open-set face recognition, using well-agreed training/testing methodologies and datasets.
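Oracle distillation of an embedding model can be sketched as training a small student network to reproduce the frozen teacher's face embeddings. The toy student architecture and the L2-plus-cosine objective below are assumptions for illustration, not the models or loss used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyStudent(nn.Module):
    """Placeholder low-cost embedding network for a sensor module."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def distill_step(student, teacher, images, optimizer, alpha=0.5):
    """One optimisation step: match the frozen teacher's (oracle's) embeddings."""
    with torch.no_grad():
        target = F.normalize(teacher(images), dim=1)
    pred = F.normalize(student(images), dim=1)
    loss = alpha * F.mse_loss(pred, target) \
        + (1 - alpha) * (1 - F.cosine_similarity(pred, target).mean())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# usage sketch: teacher = some pretrained face embedding model (frozen)
# student = TinyStudent(); opt = torch.optim.Adam(student.parameters(), 1e-3)
# for images, _ in loader: distill_step(student, teacher, images, opt)
```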
33

Micheletta, Jérôme, Jamie Whitehouse, Lisa A. Parr, Paul Marshman, Antje Engelhardt, and Bridget M. Waller. "Familiar and unfamiliar face recognition in crested macaques (Macaca nigra)." Royal Society Open Science 2, no. 5 (May 2015): 150109. http://dx.doi.org/10.1098/rsos.150109.

Abstract:
Many species use facial features to identify conspecifics, which is necessary to navigate a complex social environment. The fundamental mechanisms underlying face processing are starting to be well understood in a variety of primate species. However, most studies focus on a limited subset of species tested with unfamiliar faces. As well as limiting our understanding of how widely distributed across species these skills are, this also limits our understanding of how primates process faces of individuals they know, and whether social factors (e.g. dominance and social bonds) influence how readily they recognize others. In this study, socially housed crested macaques voluntarily participated in a series of computerized matching-to-sample tasks investigating their ability to discriminate (i) unfamiliar individuals and (ii) members of their own social group. The macaques performed above chance on all tasks. Familiar faces were not easier to discriminate than unfamiliar faces. However, the subjects were better at discriminating higher ranking familiar individuals, but not unfamiliar ones. This suggests that our subjects applied their knowledge of their dominance hierarchies to the pictorial representation of their group mates. Faces of high-ranking individuals garner more social attention, and therefore might be more deeply encoded than other individuals. Our results extend the study of face recognition to a novel species, and consequently provide valuable data for future comparative studies.
34

More, Prof C. S. "Face Recognition Based Attendance Record System." International Journal for Research in Applied Science and Engineering Technology 9, no. VII (July 20, 2021): 1610–12. http://dx.doi.org/10.22214/ijraset.2021.36658.

Abstract:
In today's world, face recognition is a type of biometric used in almost every field. The technology serves security purposes and can be used in many verification and security systems. Though it is less efficient than iris or fingerprint recognition, it remains in use because it is contactless and non-intrusive. Face recognition can also be utilized for attendance checking in schools, colleges, offices, etc. This work aims to build a class attendance system based on face recognition, since the present manual attendance process is slow, hard to maintain, and open to proxy attendance. The method involves four stages: database creation, face detection, face recognition, and attendance updating. The database is built by taking snapshots of the students in class. Face detection and recognition are performed using Python and OpenCV, and the attendance record is exported at the end of the semester.
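A minimal version of such a pipeline, face detection with a Haar cascade, identification with a pre-trained LBPH recognizer, and appending recognized names to a CSV attendance log, might look as follows. The model file, roster and distance threshold are hypothetical placeholders, and opencv-contrib-python is assumed.

```python
import csv
import datetime
import cv2

# assumed artefacts: a trained LBPH model and a label -> student-name map
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("lbph_students.yml")                  # hypothetical model file
names = {1: "Alice", 2: "Bob"}                        # hypothetical roster
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def mark_attendance(frame, csv_path="attendance.csv", max_distance=70.0):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    present = set()
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.2, 5):
        label, distance = recognizer.predict(gray[y:y + h, x:x + w])
        if distance < max_distance and label in names:   # lower = more confident
            present.add(names[label])
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        for name in sorted(present):
            writer.writerow([name, datetime.datetime.now().isoformat()])
    return present
```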
35

Zou, Fangyuan, Jing Li, and Weidong Min. "Distributed Face Recognition Based on Load Balancing and Dynamic Prediction." Applied Sciences 9, no. 4 (February 24, 2019): 794. http://dx.doi.org/10.3390/app9040794.

Abstract:
With the dramatic expansion of large-scale videos, traditional centralized face recognition methods cannot meet the demands of time efficiency and scalability, and thus distributed face recognition models were proposed. However, the number of tasks at the agent side is always dynamic, and unbalanced allocation will lead to time delay and a sharp increase of CPU utilization. To this end, a new distributed face recognition framework based on load balancing and dynamic prediction is proposed in this paper. The framework consists of a server and multiple agents. When performing face recognition, the server is used to recognize faces, and other operations are performed by the agents. Since changes in the total number of videos and the number of pedestrians affect the task amount, we perform load balancing with an improved genetic algorithm. To ensure the accuracy of task allocation, we use an extreme learning machine to predict changes in the tasks. The server then performs task allocation based on the predicted results sent by the agents. The experimental results show that the proposed method can effectively solve the problem of unbalanced task allocation at the agent side, and meanwhile alleviate time delay and the sharp increase of CPU utilization.
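The allocation step, distributing predicted task loads across agents so that no agent is overloaded, can be illustrated with a much simpler stand-in than the paper's genetic algorithm: a greedy longest-processing-time assignment. The sketch below assumes per-task cost predictions are already available (in the paper these come from an extreme learning machine).

```python
import heapq

def greedy_balance(predicted_loads, n_agents):
    """Assign predicted task loads to agents so the busiest agent stays light.

    predicted_loads : list of (task_id, predicted_cost) pairs
    returns         : dict agent_id -> list of task_ids
    """
    heap = [(0.0, agent) for agent in range(n_agents)]   # (current load, agent)
    heapq.heapify(heap)
    assignment = {agent: [] for agent in range(n_agents)}
    # heaviest tasks first, each to the currently least-loaded agent
    for task_id, cost in sorted(predicted_loads, key=lambda t: -t[1]):
        load, agent = heapq.heappop(heap)
        assignment[agent].append(task_id)
        heapq.heappush(heap, (load + cost, agent))
    return assignment

# e.g. predicted per-stream costs as (task_id, cost) pairs, split over 2 agents
print(greedy_balance([("cam1", 9), ("cam2", 4), ("cam3", 7), ("cam4", 3)], 2))
```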
36

Naji, Maitham Ali, Ghalib Ahmed Salman, and Muthna Jasim Fadhil. "Face recognition using selected topographical features." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 5 (October 1, 2020): 4695. http://dx.doi.org/10.11591/ijece.v10i5.pp4695-4700.

Abstract:
This paper presents a new feature selection method that improves an existing feature type. Topographical (TGH) features provide a large set of features by assigning each image pixel to a related feature depending on the image gradient and Hessian matrix. This type of feature is handled by the proposed selection method: a face recognition feature selector (FRFS) is presented to inspect TGH features. The main concept of FRFS relies on the linear discriminant analysis (LDA) technique, which is used to evaluate feature efficiency. FRFS studies feature behavior over a dataset of images to determine each feature's level of performance; in the end, every feature is assigned its level of performance, which varies across the image. Based on a chosen threshold, the highest-scoring set of features is selected and classified by an SVM classifier.
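The LDA-style ranking of individual features can be approximated by a per-feature Fisher score (between-class variance over within-class variance), keeping the top-scoring features for an SVM. The sketch below is a simplified stand-in for FRFS under that assumption, not the paper's exact procedure.

```python
import numpy as np
from sklearn.svm import SVC

def fisher_scores(X, y):
    """Per-feature ratio of between-class to within-class variance."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def select_and_train(X, y, keep_ratio=0.2):
    scores = fisher_scores(X, y)
    k = max(1, int(keep_ratio * X.shape[1]))
    selected = np.argsort(scores)[-k:]            # indices of best features
    clf = SVC(kernel="rbf").fit(X[:, selected], y)
    return selected, clf

# usage: X holds topographical feature vectors (one row per face image)
# selected, clf = select_and_train(X, y)
# predictions = clf.predict(X_test[:, selected])
```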
37

Sui, Jie, and Shihui Han. "Self-Construal Priming Modulates Neural Substrates of Self-Awareness." Psychological Science 18, no. 10 (October 2007): 861–66. http://dx.doi.org/10.1111/j.1467-9280.2007.01992.x.

Abstract:
We used functional magnetic resonance imaging to assess whether self-construal priming can change adults' self-awareness induced during face perception. After reading essays containing independent or interdependent pronouns (e.g., I or we), participants were scanned while judging the head orientation of images showing their own and familiar faces. Neural activity in the right middle frontal cortex was greater when participants viewed their own rather than familiar faces, and this difference was larger after independent than after interdependent self-construal priming. The increased right frontal activity for participants' own faces relative to familiar faces was associated with faster responses. Our findings suggest that the neural correlates of self-awareness associated with recognition of one's own face can be modulated by self-construal priming in human adults.
38

Hanson, Stephen José, and Yaroslav O. Halchenko. "Brain Reading Using Full Brain Support Vector Machines for Object Recognition: There Is No “Face” Identification Area." Neural Computation 20, no. 2 (February 2008): 486–503. http://dx.doi.org/10.1162/neco.2007.09-06-340.

Abstract:
Over the past decade, object recognition work has confounded voxel response detection with potential voxel class identification. Consequently, the claim that there are areas of the brain that are necessary and sufficient for object identification cannot be resolved with existing associative methods (e.g., the general linear model) that are dominant in brain imaging methods. In order to explore this controversy we trained full brain (40,000 voxels) single TR (repetition time) classifiers on data from 10 subjects in two different recognition tasks on the most controversial classes of stimuli (house and face) and show 97.4% median out-of-sample (unseen TRs) generalization. This performance allowed us to reliably and uniquely assay the classifier's voxel diagnosticity in all individual subjects' brains. In this two-class case, there may be specific areas diagnostic for house stimuli (e.g., LO) or for face stimuli (e.g., STS); however, in contrast to the detection results common in this literature, neither the fusiform face area nor parahippocampal place area is shown to be uniquely diagnostic for faces or places, respectively.
39

Bruce, Vicki. "Stability from Variation: The Case of Face Recognition the M.D. Vernon Memorial Lecture." Quarterly Journal of Experimental Psychology Section A 47, no. 1 (February 1994): 5–28. http://dx.doi.org/10.1080/14640749408401141.

Abstract:
A theme running through M.D. Vernon's discussions of visual perception was the key question of how we perceive a stable world despite continuous variation. The central problem in face identification is how we build stable representations from exemplars that vary, both rigidly and non-rigidly, from instant to instant and from encounter to encounter. Experiments reveal that people are rather poor at generalizing from one exemplar of a face to another (e.g. from one photograph to another showing a different view or expression) yet highly accurate at encoding precise details of faces within the range shown by several slightly different exemplars. Moreover, provided instructions do not encourage subjects explicitly to attend to the way that different exemplars vary, faces are retained in a way that enhances familiarity of the prototype of the set, even if this was not presented for study. It is suggested that our usual encounters with continuous variations of facial expressions, angles, and lightings provide the conditions necessary to establish stable representations of individuals within an overall category (the face) where all members share the same overall structure. These observations about face recognition would probably not have come as any great surprise to Maggie Vernon, many of whose more general observations about visual perception anticipated such conclusions.
APA, Harvard, Vancouver, ISO, and other styles
40

Del-Ben, Cristina M., Cesar AQ Ferreira, Tiago A. Sanchez, Wolme C. Alves-Neto, Vinicius G. Guapo, Draulio B. de Araujo, and Frederico G. Graeff. "Effects of diazepam on BOLD activation during the processing of aversive faces." Journal of Psychopharmacology 26, no. 4 (November 24, 2010): 443–51. http://dx.doi.org/10.1177/0269881110389092.

Full text
Abstract:
This study aimed to measure, using fMRI, the effect of diazepam on the haemodynamic response to emotional faces. Twelve healthy male volunteers (mean age = 24.83 ± 3.16 years) were evaluated in a randomized, balanced-order, double-blind, placebo-controlled crossover design. Diazepam (10 mg) or placebo was given 1 h before the neuroimaging acquisition. In a blocked-design covert emotional face task, subjects were presented with neutral (A) and aversive (B) (angry or fearful) faces. Participants were also submitted to an explicit emotional face recognition task, and subjective anxiety was evaluated throughout the procedures. Diazepam attenuated the activation of right amygdala and right orbitofrontal cortex and enhanced the activation of right anterior cingulate cortex (ACC) to fearful faces. In contrast, diazepam enhanced the activation of posterior left insula and attenuated the activation of bilateral ACC to angry faces. In the behavioural task, diazepam impaired the recognition of fear in female faces. Under the action of diazepam, volunteers were less anxious at the end of the experimental session. These results suggest that benzodiazepines can differentially modulate brain activation to aversive stimuli, depending on the stimulus features, and indicate a role of amygdala and insula in the anxiolytic action of benzodiazepines.
APA, Harvard, Vancouver, ISO, and other styles
41

Wang, Xiaobo, Shifeng Zhang, Shuo Wang, Tianyu Fu, Hailin Shi, and Tao Mei. "Mis-Classified Vector Guided Softmax Loss for Face Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12241–48. http://dx.doi.org/10.1609/aaai.v34i07.6906.

Full text
Abstract:
Face recognition has witnessed significant progress due to advances in deep convolutional neural networks (CNNs), for which the central task is to improve feature discrimination. To this end, several margin-based (e.g., angular, additive, and additive angular margin) softmax loss functions have been proposed to increase the feature margin between different classes. However, despite the achievements they have brought, these losses suffer from three main issues: 1) they ignore the importance of mining informative features for discriminative learning; 2) they enforce the feature margin only with respect to the ground-truth class, without exploiting the discriminability offered by the non-ground-truth classes; 3) the feature margin between different classes is the same and fixed, which may not adapt well to all situations. To address these issues, this paper develops a novel loss function that adaptively emphasizes mis-classified feature vectors to guide discriminative feature learning, thereby addressing all of the above issues and producing more discriminative face features. To the best of our knowledge, this is the first attempt to combine the advantages of feature margins and feature mining in a unified loss function. Experimental results on several benchmarks demonstrate the effectiveness of our method over state-of-the-art alternatives. Our code is available at http://www.cbsr.ia.ac.cn/users/xiaobowang/.
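As a rough illustration of the core idea (up-weighting mis-classified, i.e., hard, non-target classes inside a margin-based softmax), the following PyTorch sketch follows the spirit of the loss described above; it is a simplified variant rather than the authors' exact formulation, and the scale, margin, and re-weighting parameters are assumptions.

```python
# Simplified sketch of a mis-classified-vector-guided softmax loss
# (spirit of the approach only; not the exact published formulation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MisclassifiedGuidedSoftmax(nn.Module):
    def __init__(self, feat_dim, num_classes, scale=32.0, margin=0.35, t=0.2):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m, self.t = scale, margin, t

    def forward(self, features, labels):
        # Cosine similarity between L2-normalized features and class weights.
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        target_cos = cos.gather(1, labels.view(-1, 1))        # cos(theta_y)
        target_margin = target_cos - self.m                   # additive margin

        # A non-target class is "mis-classified" (hard) if its cosine exceeds
        # the margined target cosine; up-weight those logits by (1 + t).
        hard = (cos > target_margin).float()
        hard.scatter_(1, labels.view(-1, 1), 0.0)             # ignore target column
        logits = cos * (1.0 + self.t * hard)
        logits.scatter_(1, labels.view(-1, 1), target_margin) # restore margined target
        return F.cross_entropy(self.s * logits, labels)

# Usage with dummy embeddings (assumed shapes):
loss_fn = MisclassifiedGuidedSoftmax(feat_dim=512, num_classes=1000)
emb = torch.randn(8, 512)
ids = torch.randint(0, 1000, (8,))
print(loss_fn(emb, ids).item())
```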
APA, Harvard, Vancouver, ISO, and other styles
42

Li, Shen, Fang Liu, Jiayue Liang, Zhenhua Cai, and Zhiyao Liang. "Optimization of Face Recognition System Based on Azure IoT Edge." Computers, Materials & Continua 61, no. 3 (2019): 1377–89. http://dx.doi.org/10.32604/cmc.2019.06402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Kakigi, Ryusuke. "WS3 Face Recognition-Related Potentials: EEG, MEG and NIRS Studies." Clinical Neurophysiology 120 (April 2009): S17. http://dx.doi.org/10.1016/s1388-2457(09)60045-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

BURGESS, A. P., and J. H. GRUZELIER. "Localization of word and face recognition memory using topographical EEG." Psychophysiology 34, no. 1 (January 1997): 7–16. http://dx.doi.org/10.1111/j.1469-8986.1997.tb02410.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Jain, Ayush. "Secure Authentication for Banking Using Face Recognition." Journal of Informatics Electrical and Electronics Engineering (JIEEE) 2, no. 2 (June 2, 2021): 1–8. http://dx.doi.org/10.54060/jieee/002.02.001.

Full text
Abstract:
With the increasing demand for online banking, a lack of security has been felt due to a tremendous increase in fraudulent activities. Facial recognition is one of the many ways that banks can increase security and accessibility. This paper examines the use of facial recognition for login and for banking transactions. The strength of the proposed system is that it requires username and password verification, face recognition, and a PIN for a successful transaction. This multilevel security reduces the risk of cyber-crime and maintains the safety of the internet banking system. The end result is a strengthened authentication system that increases customers' confidence in the banking sector.
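A schematic sketch of the kind of multilevel check described above (password, then face match, then transaction PIN) is given below; all class names, helper functions, and the similarity threshold are hypothetical placeholders rather than anything specified in the paper.

```python
# Hypothetical sketch of a multilevel authentication flow for online banking:
# password check -> face-embedding match -> transaction PIN. All names,
# thresholds, and the toy similarity measure are illustrative assumptions.
import hashlib
import math
from dataclasses import dataclass
from typing import List

def sha256(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

@dataclass
class User:
    username: str
    password_hash: str
    face_template: List[float]   # embedding captured at enrolment
    pin_hash: str

def authorize_transaction(user: User, password: str,
                          live_embedding: List[float], pin: str,
                          face_threshold: float = 0.6) -> bool:
    # Every factor must pass; failing any one of them blocks the transaction.
    return (sha256(password) == user.password_hash
            and cosine_similarity(user.face_template, live_embedding) >= face_threshold
            and sha256(pin) == user.pin_hash)

# Usage with toy data:
alice = User("alice", sha256("s3cret"), [0.1, 0.9, 0.3], sha256("4321"))
print(authorize_transaction(alice, "s3cret", [0.12, 0.88, 0.31], "4321"))
```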
APA, Harvard, Vancouver, ISO, and other styles
46

Barnett-Cowan, Michael, Jacqueline C. Snow, and Jody C. Culham. "Contribution of Bodily and Gravitational Orientation Cues to Face and Letter Recognition." Multisensory Research 28, no. 5-6 (2015): 427–42. http://dx.doi.org/10.1163/22134808-00002481.

Full text
Abstract:
Sensory information provided by the vestibular system is crucial in cognitive processes such as the ability to recognize objects. The orientation at which objects are most easily recognized — the perceptual upright (PU) — is influenced by body orientation with respect to gravity as detected from the somatosensory and vestibular systems. To date, the influence of these sensory cues on the PU has been measured using a letter recognition task. Here we assessed whether gravitational influences on letter recognition also extend to human face recognition. 13 right-handed observers were positioned in four body orientations (upright, left-side-down, right-side-down, supine) and visually discriminated ambiguous characters (‘p’-from-‘d’; ‘i’-from-‘!’) and ambiguous faces used in popular visual illusions (‘young woman’-from-‘old woman’; ‘grinning man’-from-‘frowning man’) in a forced-choice paradigm. The two transition points (e.g., ‘p-to-d’ and ‘d-to-p’; ‘young woman-to-old woman’ and ‘old woman-to-young woman’) were fit with a sigmoidal psychometric function and the average of these transitions was taken as the PU for each stimulus category. The results show that both faces and letters are more influenced by body orientation than gravity. However, faces are more optimally recognized when closer in alignment with body orientation than letters — which are more influenced by gravity. Our results indicate that the brain does not utilize a common representation of upright that governs recognition of all object categories. Distinct areas of ventro-temporal cortex that represent faces and letters may weight bodily and gravitational cues differently — possibly to facilitate the specific demands of face and letter recognition.
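To illustrate the analysis step mentioned in the abstract (fitting sigmoidal psychometric functions to the two perceptual transitions and averaging their midpoints), here is a small SciPy sketch; the orientations and response proportions are made-up numbers, not data from the study.

```python
# Sketch: fit a logistic psychometric function to each of the two perceptual
# transitions and average their 50% points to estimate the perceptual
# upright (PU). All data below are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    # Probability of reporting the first interpretation (e.g., 'p' rather than 'd').
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

oris = np.array([-90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0])    # orientation (deg)
resp_up = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.95, 0.98])   # first transition
resp_down = np.array([0.98, 0.90, 0.70, 0.45, 0.20, 0.08, 0.03]) # second transition

popt_up, _ = curve_fit(logistic, oris, resp_up, p0=[0.0, 0.1])
popt_down, _ = curve_fit(logistic, oris, resp_down, p0=[0.0, -0.1])

# Average of the two transition (50%) points serves as the PU estimate.
pu = (popt_up[0] + popt_down[0]) / 2.0
print(f"estimated perceptual upright: {pu:.1f} deg")
```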
APA, Harvard, Vancouver, ISO, and other styles
47

Fiedler, Marc-André, Philipp Werner, Aly Khalifa, and Ayoub Al-Hamadi. "SFPD: Simultaneous Face and Person Detection in Real-Time for Human–Robot Interaction." Sensors 21, no. 17 (September 2, 2021): 5918. http://dx.doi.org/10.3390/s21175918.

Full text
Abstract:
Face and person detection are important tasks in computer vision, as they represent the first component in many recognition systems, such as face recognition, facial expression analysis, body pose estimation, face attribute detection, or human action recognition. Their detection rate and runtime are therefore crucial for the performance of the overall system. In this paper, we combine face and person detection in one framework with the goal of reaching a detection performance that is competitive with the state of the art of lightweight object-specific networks while maintaining real-time processing speed for both detection tasks together. To combine face and person detection in one network, we applied multi-task learning. The difficulty lies in the fact that no datasets are available that contain both face and person annotations. Since manual annotation is very time-consuming and automatic generation of ground truth yields annotations of poor quality, we solve this issue algorithmically by applying a special training procedure and network architecture without the need to create new labels. Our newly developed method, called Simultaneous Face and Person Detection (SFPD), is able to detect persons and faces at 40 frames per second. Because of this good trade-off between detection performance and inference time, SFPD represents a useful and valuable real-time framework for a multitude of real-world applications such as human–robot interaction.
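The following is a minimal schematic of the general multi-task idea (one shared backbone feeding separate face and person detection heads); it illustrates the concept only and is not the SFPD architecture, and all layer sizes are arbitrary assumptions.

```python
# Schematic of the general multi-task idea behind joint face/person detection:
# one shared backbone with two task-specific heads. Concept illustration only;
# not the SFPD architecture, and layer sizes are arbitrary.
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.features(x)

class DetectionHead(nn.Module):
    """Per-cell objectness score + box regression for one object class."""
    def __init__(self, in_ch, num_anchors=3):
        super().__init__()
        self.pred = nn.Conv2d(in_ch, num_anchors * 5, 1)  # (score, x, y, w, h) per anchor

    def forward(self, feat):
        return self.pred(feat)

class JointFacePersonDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = TinyBackbone()
        self.face_head = DetectionHead(64)
        self.person_head = DetectionHead(64)

    def forward(self, images):
        feat = self.backbone(images)
        return {"faces": self.face_head(feat), "persons": self.person_head(feat)}

model = JointFacePersonDetector()
out = model(torch.randn(1, 3, 256, 256))
print(out["faces"].shape, out["persons"].shape)
```

During training, one would typically back-propagate only the loss of the head for which the current mini-batch carries labels, which echoes the abstract's point that no single dataset provides both face and person annotations.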
APA, Harvard, Vancouver, ISO, and other styles
48

Sreenivasan, Kartik K., and Amishi P. Jha. "Selective Attention Supports Working Memory Maintenance by Modulating Perceptual Processing of Distractors." Journal of Cognitive Neuroscience 19, no. 1 (January 2007): 32–41. http://dx.doi.org/10.1162/jocn.2007.19.1.32.

Full text
Abstract:
Selective attention has been shown to bias sensory processing in favor of relevant stimuli and against irrelevant or distracting stimuli in perceptual tasks. Increasing evidence suggests that selective attention plays an important role during working memory maintenance, possibly by biasing sensory processing in favor of to-be-remembered items. In the current study, we investigated whether selective attention may also support working memory by biasing processing against irrelevant and potentially distracting information. Event-related potentials (ERPs) were recorded while subjects (n = 22) performed a delayed-recognition task for faces and shoes. The delay period was filled with face or shoe distractors. Behavioral performance was impaired when distractors were congruent with the working memory domain (e.g., face distractor during working memory for faces) relative to when distractors were incongruent with the working memory domain (e.g., face distractor during shoe working memory). If attentional biasing against distractor processing is indeed functionally relevant in supporting working memory maintenance, perceptual processing of distractors is predicted to be attenuated when distractors are more behaviorally intrusive relative to when they are nonintrusive. As such, we predicted that perceptual processing of distracting faces, as measured by the face-sensitive N170 ERP component, would be reduced in the context of congruent (face) working memory relative to incongruent (shoe) working memory. The N170 elicited by distracting faces demonstrated reduced amplitude during congruent versus incongruent working memory. These results suggest that perceptual processing of distracting faces may be attenuated due to attentional biasing against sensory processing of distractors that are most behaviorally intrusive during working memory maintenance.
APA, Harvard, Vancouver, ISO, and other styles
49

Adjabi, Insaf, Abdeldjalil Ouahabi, Amir Benzaoui, and Abdelmalik Taleb-Ahmed. "Past, Present, and Future of Face Recognition: A Review." Electronics 9, no. 8 (July 23, 2020): 1188. http://dx.doi.org/10.3390/electronics9081188.

Full text
Abstract:
Face recognition is one of the most active research fields of computer vision and pattern recognition, with many practical and commercial applications including identification, access control, forensics, and human-computer interaction. However, identifying a face in a crowd raises serious questions about individual freedoms and poses ethical issues. Significant methods, algorithms, approaches, and databases have been proposed over recent years to study constrained and unconstrained face recognition. 2D approaches have reached a degree of maturity and report very high recognition rates. This performance is achieved in controlled environments where acquisition parameters such as lighting, viewing angle, and camera-to-subject distance are fixed. However, if the ambient conditions (e.g., lighting) or the facial appearance (e.g., pose or facial expression) change, this performance degrades dramatically. 3D approaches were proposed as an alternative solution to the problems mentioned above. The advantage of 3D data lies in its invariance to pose and lighting conditions, which has enhanced the efficiency of recognition systems; 3D data, however, is somewhat sensitive to changes in facial expression. This review presents the history of face recognition technology, the current state-of-the-art methodologies, and future directions. We concentrate specifically on the most recent databases and on 2D and 3D face recognition methods, and we pay particular attention to deep learning approaches, as they represent the current state of the art in this field. Open issues are examined and potential research directions in facial recognition are proposed in order to provide the reader with a point of reference for topics that deserve consideration.
APA, Harvard, Vancouver, ISO, and other styles
50

Minaee, Shervin, Mehdi Minaei, and Amirali Abdolrashidi. "Deep-Emotion: Facial Expression Recognition Using Attentional Convolutional Network." Sensors 21, no. 9 (April 27, 2021): 3046. http://dx.doi.org/10.3390/s21093046.

Full text
Abstract:
Facial expression recognition has been an active area of research over the past few decades, and it is still challenging due to high intra-class variation. Traditional approaches to this problem rely on hand-crafted features such as SIFT, HOG, and LBP, followed by a classifier trained on a database of images or videos. Most of these works perform reasonably well on datasets of images captured under controlled conditions but fail to perform as well on more challenging datasets with greater image variation and partial faces. In recent years, several works have proposed end-to-end frameworks for facial expression recognition using deep learning models. Despite the better performance of these works, there is still much room for improvement. In this work, we propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face and achieves significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE. We also use a visualization technique that is able to find important facial regions for detecting different emotions based on the classifier's output. Through experimental results, we show that different emotions are sensitive to different parts of the face.
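As a rough illustration of how an attention mechanism can let a CNN focus on informative facial regions, here is a minimal sketch of a spatial-attention block wrapped in a tiny expression classifier; it is not the authors' network, and the layer sizes and seven-class output are illustrative assumptions.

```python
# Minimal sketch of a spatial-attention block for facial expression
# recognition: a small mask branch re-weights CNN feature maps so that
# informative facial regions dominate. Not the architecture from the paper.
import torch
import torch.nn as nn

class SpatialAttentionBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, 1), nn.Sigmoid(),
        )

    def forward(self, feat):
        return feat * self.mask(feat)          # per-location re-weighting of (B, C, H, W)

class TinyExpressionNet(nn.Module):
    def __init__(self, num_emotions=7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        self.attend = SpatialAttentionBlock(64)
        self.classify = nn.Linear(64, num_emotions)

    def forward(self, x):                       # x: (B, 1, 48, 48) grayscale faces
        feat = self.attend(self.conv(x))
        pooled = feat.mean(dim=(2, 3))          # global average pooling
        return self.classify(pooled)

logits = TinyExpressionNet()(torch.randn(4, 1, 48, 48))
print(logits.shape)                             # torch.Size([4, 7])
```

The attention mask can also be visualized directly (e.g., upsampled and overlaid on the input face) to inspect which facial regions drive each predicted emotion.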
APA, Harvard, Vancouver, ISO, and other styles