
Journal articles on the topic 'Face Image'


Consult the top 50 journal articles for your research on the topic 'Face Image.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Awhad, Rahul, Saurabh Jayswal, Adesh More, and Jyoti Kundale. "Fraudulent Face Image Detection." ITM Web of Conferences 32 (2020): 03005. http://dx.doi.org/10.1051/itmconf/20203203005.

Abstract:
Owing to advances in image-editing software, an altered image can now look so realistic that it is difficult for a person to tell whether it is fake or real. Such software can also be used to alter the image of a person's face, making it hard to judge whether a face image is genuine. Our proposed system identifies whether a face image is fake or real using machine learning: a convolutional neural network and a support vector classifier, both trained on real as well as fake images. Each trained model takes an image as input and determines whether the image is fake or real.
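The pipeline the abstract describes (train classifiers on real and fake images, then score a new image) can be sketched in miniature. This is not the paper's CNN or support vector classifier: the block-mean features and the perceptron-trained linear classifier below are stand-ins, and the "real"/"fake" images are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def image_features(img):
    # Crude stand-in for learned CNN features: 4x4 block means.
    h, w = img.shape
    return img.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3)).ravel()

# Synthetic data: "real" images are smooth gradients, "fake" ones pure noise.
grad = np.outer(np.linspace(0, 1, 16), np.linspace(0, 1, 16))
real = [grad + 0.02 * rng.standard_normal((16, 16)) for _ in range(20)]
fake = [rng.random((16, 16)) for _ in range(20)]

X = np.array([image_features(im) for im in real + fake])
y = np.array([1] * 20 + [-1] * 20)       # +1 = real, -1 = fake

# Linear classifier trained with the perceptron rule (stand-in for the SVC).
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(100):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:       # misclassified -> update
            w += yi * xi
            b += yi

pred = np.sign(X @ w + b)
print("training accuracy:", (pred == y).mean())
```

Because the two synthetic classes are linearly separable in block-mean space, the perceptron converges; a real system would learn features rather than hand-craft them.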
2

Hu, Chang Jie, and Hong Li Xu. "Face Image Segmentation Technology Research." Advanced Materials Research 846-847 (November 2013): 1339–42. http://dx.doi.org/10.4028/www.scientific.net/amr.846-847.1339.

Abstract:
The face carries very rich information and is a typical biometric feature, with wide application prospects in personal identification, intelligent video surveillance, and human-computer interaction. Face detection determines the number, location, size, and other attributes of all faces in an input colour image. First, a skin-colour model is established and used to convert the colour image to a grey-level image; the grey image is then denoised; and finally the Fisher criterion yields a dynamic threshold for segmenting the face image, laying a good foundation for locating the face region. Experiments show that the dynamic threshold achieves better colour segmentation across different test images.
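The dynamic-threshold step can be illustrated with Otsu's method, which picks the grey level maximizing the between-class variance (the two-class Fisher criterion). The skin-likelihood function below is a toy stand-in, not the paper's skin-colour model; the chromaticity band it uses is a made-up assumption.

```python
import numpy as np

def otsu_threshold(gray):
    # Dynamic threshold maximizing between-class variance (Fisher criterion).
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability at each cut
    mu = np.cumsum(p * np.arange(256))    # cumulative mean
    mu_t = mu[-1]
    denom = omega * (1 - omega)
    denom[denom == 0] = np.inf            # guard against empty classes
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom
    return int(np.argmax(sigma_b2))

def skin_likelihood(rgb):
    # Toy skin model: likelihood from normalized red chromaticity
    # (a hypothetical stand-in for the paper's skin-colour model).
    rgb = rgb.astype(float)
    r = rgb[..., 0] / (rgb.sum(axis=-1) + 1e-9)
    return np.clip((r - 0.30) / 0.25, 0, 1)

# Synthetic image: a skin-toned square on a blue background.
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[...] = (40, 60, 180)             # background
img[16:48, 16:48] = (200, 140, 110)  # "skin" patch

gray = (skin_likelihood(img) * 255).astype(np.uint8)
t = otsu_threshold(gray)
mask = gray > t
print("threshold:", t, "skin pixels:", int(mask.sum()))
```

On this two-tone toy image the histogram has only two spikes, so any threshold between them works; Otsu's criterion selects one automatically, which is the point of a "dynamic" threshold.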
3

Du, Cheng, and Biao Leng. "Tunnel Face Image Segmentation Optimization." Applied Mechanics and Materials 397-400 (September 2013): 2148–51. http://dx.doi.org/10.4028/www.scientific.net/amm.397-400.2148.

Abstract:
With the growth of highway and railway construction, geological exploration of tunnels accounts for a large proportion of road-building work. This paper presents the design of image-processing software for the automatic analysis and processing of geological-engineering images. Most current image-processing algorithms are tailored to the specific content of specific images, while tunnel-face images are very complicated: face images from different regions, or even from different areas of the same construction section, may differ greatly. Few digital image-processing algorithms exist for the tunnel excavation face, so work must start from scratch. This paper describes segmenting geological-engineering images of the rock face using digital image processing; by comparing the advantages and disadvantages of the Sobel and Laplacian-of-Gaussian edge-detection operators, an image-processing algorithm that takes binarized images as its processing object is developed. Such analysis of geological-engineering images plays a very important role in predicting tunnel construction schedules.
4

Saha, Rajib, Debotosh Bhattacharjee, and Sayan Barman. "Comparison of Different Face Recognition Method Based On PCA." International Journal of Management & Information Technology 10, no. 4 (November 4, 2014): 2016–22. http://dx.doi.org/10.24297/ijmit.v10i4.626.

Abstract:
This paper is about human face recognition in image files. Face recognition involves matching a given image against a database of images and identifying the image it resembles most. Here, face recognition is done using (a) eigenfaces and (b) Principal Component Analysis (PCA) applied to the image. The aim is to demonstrate human face recognition using PCA and to compare the Manhattan, Euclidean, and Chebyshev distances for face matching.
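A minimal sketch of the eigenface pipeline and the three distance measures the paper compares, using random vectors as stand-ins for flattened face images:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy gallery: 5 "identities", each a flattened 64-pixel face; the probe is
# a lightly perturbed copy of identity 2.
gallery = rng.random((5, 64))
probe = gallery[2] + 0.01 * rng.standard_normal(64)

# Eigenfaces: PCA via SVD of the mean-centred gallery.
mean = gallery.mean(axis=0)
_, _, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
k = 4                                   # principal components kept

def project(x):
    return (x - mean) @ Vt[:k].T

G = project(gallery)                    # gallery in eigenface space (5 x k)
p = project(probe)

# The three distance measures compared in the paper.
manhattan = np.abs(G - p).sum(axis=1)
euclidean = np.sqrt(((G - p) ** 2).sum(axis=1))
chebyshev = np.abs(G - p).max(axis=1)

for name, d in [("Manhattan", manhattan), ("Euclidean", euclidean),
                ("Chebyshev", chebyshev)]:
    print(f"{name:9s} -> identity {np.argmin(d)}")
```

All three metrics pick the nearest gallery identity; the paper's comparison is about which metric degrades least on real face data, which this toy example does not capture.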
5

Xin, Jingwei, Nannan Wang, Xinrui Jiang, Jie Li, Xinbo Gao, and Zhifeng Li. "Facial Attribute Capsules for Noise Face Super Resolution." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12476–83. http://dx.doi.org/10.1609/aaai.v34i07.6935.

Abstract:
Existing face super-resolution (SR) methods mainly assume the input image to be noise-free, and their performance degrades drastically in real-world scenarios where the input is contaminated by noise. In this paper, we propose a Facial Attribute Capsules Network (FACN) to deal with high-scale super-resolution of noisy face images. A capsule is a group of neurons whose activity vector models different properties of the same entity. Inspired by this concept, we propose an integrated representation model of facial information, named the Facial Attribute Capsule (FAC). In SR processing, we first generate a group of FACs from the input LR face and then reconstruct the HR face from this group of FACs. To effectively improve the robustness of FACs to noise, we generate them in semantic, probabilistic, and facial-attribute manners by means of an integrated learning strategy. Each FAC can be divided into two sub-capsules, a Semantic Capsule (SC) and a Probabilistic Capsule (PC), which describe an explicit facial attribute in detail from the two aspects of semantic representation and probability distribution. The group of FACs models an image as a combination of facial-attribute information in the semantic and probabilistic spaces in an attribute-disentangling way, and the diverse FACs can better combine face prior information to generate face images with fine-grained semantic attributes. Extensive benchmark experiments show that our method achieves superior hallucination results and outperforms state-of-the-art methods for very low-resolution (LR) noisy face image super-resolution.
6

Bebis, George, Satishkumar Uthiram, and Michael Georgiopoulos. "Face Detection and Verification Using Genetic Search." International Journal on Artificial Intelligence Tools 09, no. 02 (June 2000): 225–46. http://dx.doi.org/10.1142/s0218213000000161.

Abstract:
We consider the problem of searching for the face of a particular individual in a two-dimensional intensity image. This problem has many potential applications, such as locating a person in a crowd using images obtained by surveillance cameras. There are two steps in solving it: first, face regions must be extracted from the image(s) (face detection), and second, candidate faces must be compared against a face of interest (face verification). Without any a priori knowledge about the location and size of a face in an image, every possible image location and face size must be considered, leading to a very large search space. In this paper, we propose using Genetic Algorithms (GAs) to search the image efficiently. Specifically, we use GAs to find image sub-windows that contain faces and, in particular, the face of interest. Each sub-window is evaluated using a fitness function containing two terms: the first favors sub-windows containing faces, while the second favors sub-windows containing faces similar to the face of interest. Both terms have been derived using the theory of eigenspaces. A set of increasingly complex scenes demonstrates the performance of the proposed genetic-search approach.
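The genetic search over sub-windows can be sketched as follows. The fitness function here is simply the mean intensity inside the window on a synthetic scene, a stand-in for the paper's two eigenspace-derived terms; the encoding (x, y, size) and the GA parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic scene: one bright "face" region at rows 30-41, cols 40-51.
scene = np.zeros((64, 64))
scene[30:42, 40:52] = 1.0

def fitness(ind):
    # Stand-in fitness: mean intensity inside the sub-window. The paper's
    # fitness instead combines two eigenspace-derived terms.
    x, y, s = ind
    return scene[y:y + s, x:x + s].mean()

def random_ind():
    s = int(rng.integers(8, 25))
    return np.array([rng.integers(0, 64 - s), rng.integers(0, 64 - s), s])

pop = [random_ind() for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                        # selection: keep the 10 fittest
    children = []
    for _ in range(30):
        a, b = rng.choice(10, size=2, replace=False)
        child = np.array([elite[a][0], elite[b][1], elite[a][2]])  # crossover
        child += rng.integers(-2, 3, size=3)                       # mutation
        child[2] = np.clip(child[2], 8, 24)
        child[:2] = np.clip(child[:2], 0, 64 - child[2])  # keep window in image
        children.append(child)
    pop = elite + children

best = max(pop, key=fitness)
print("best window (x, y, size):", best)
```

The point of the GA here is exactly the one the abstract makes: it samples the (location, size) search space instead of exhaustively sliding every window over the image.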
7

Chapman, Angus F., Hannah Hawkins-Elder, and Tirta Susilo. "How robust is familiar face recognition? A repeat detection study of more than 1000 faces." Royal Society Open Science 5, no. 5 (May 2018): 170634. http://dx.doi.org/10.1098/rsos.170634.

Abstract:
Recent theories suggest that familiar faces have a robust representation in memory because they have been encountered over a wide variety of contexts and image changes (e.g. lighting, viewpoint and expression). By contrast, unfamiliar faces are encountered only once, and so they do not benefit from such richness of experience and are represented based on image-specific details. In this registered report, we used a repeat detection task to test whether familiar faces are recognized better than unfamiliar faces across image changes. Participants viewed a stream of more than 1000 celebrity face images for 0.5 s each, any of which might be repeated at a later point and had to be detected. Some participants saw the same image at repeats, while others saw a different image of the same face. A post-experimental familiarity check allowed us to determine which celebrities were and were not familiar to each participant. We had three predictions: (i) detection would be better for familiar than unfamiliar faces, (ii) detection would be better across same rather than different images, and (iii) detection of familiar faces would be comparable across same and different images, but detection of unfamiliar faces would be poorer across different images. We obtained support for the first two predictions but not the last. Instead, we found that repeat detection of faces, regardless of familiarity, was poorer across different images. Our study suggests that the robustness of familiar face recognition may have limits, and that under some conditions, familiar face recognition can be just as influenced by image changes as unfamiliar face recognition.
8

Kim, Sanghyuk, Yuseok Ban, Changhyun Park, and Sangyoun Lee. "3D Face Modeling using Face Image." Journal of International Society for Simulation Surgery 2, no. 1 (June 10, 2015): 10–12. http://dx.doi.org/10.18204/jissis.2015.2.1.010.

9

Chen, Qi, Li Yang, Dongping Zhang, Ye Shen, and Shuying Huang. "Face Deduplication in Video Surveillance." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 03 (November 22, 2017): 1856001. http://dx.doi.org/10.1142/s0218001418560013.

Abstract:
Video surveillance systems based on face analysis play an increasingly important role in the security industry. Compared with identification based on other physical characteristics, face verification is readily accepted by people. In a video surveillance scene, it is common to capture multiple faces belonging to the same person, and face recognition results suffer if all images are used without considering image quality. To solve this problem, we propose a face deduplication system that combines face detection with face quality evaluation to obtain the highest-quality face image of each person. The experimental results in this paper show that our method effectively detects faces and selects high-quality face images, improving the accuracy of face recognition.
10

Liu, Jing, and Muhammad Aqeel Ashraf. "Face recognition method based on GA-BP neural network algorithm." Open Physics 16, no. 1 (December 31, 2018): 1056–65. http://dx.doi.org/10.1515/phys-2018-0126.

Abstract:
To recognize faces, face recognition methods need to be studied. When faces are identified with current methods, the image-denoising effect is poor, recognition results contain errors, recognition times are long, and the signal-to-noise ratio and recognition efficiency are low. Based on the GA-BP neural network algorithm, a face recognition method is proposed. A mixed denoising model for face images is constructed by combining dictionary-based sparse representation with non-local similarity. Principal component analysis is used to extract features from the denoised face image and obtain its eigenvectors. The GA-BP neural network algorithm optimizes the initial weights and thresholds to reach optimal values, and the feature vectors of the face images are fed into the genetic neural network to complete face recognition. Experimental results show that the proposed method achieves a high signal-to-noise ratio, accuracy, and recognition efficiency.
11

Wei, Tongxin, Qingbao Li, Jinjin Liu, Ping Zhang, and Zhifeng Chen. "3D Face Image Inpainting with Generative Adversarial Nets." Mathematical Problems in Engineering 2020 (December 14, 2020): 1–11. http://dx.doi.org/10.1155/2020/8882995.

Abstract:
In face recognition, acquired face data is often seriously distorted: many collected face images are blurred or even have missing regions. Traditional image inpainting was structure-based, while currently popular methods are based on deep convolutional neural networks and generative adversarial nets. In this paper, we propose a 3D face image inpainting method based on generative adversarial nets. We identify two parallel vectors to locate planar positions. Compared with previous methods, the edge information of the missing region is detected, and the fuzzy edge inpainting achieves a better visual match, dramatically boosting face recognition performance.
12

Choi, Sang-Il, Yonggeol Lee, and Minsik Lee. "Face Recognition in SSPP Problem Using Face Relighting Based on Coupled Bilinear Model." Sensors 19, no. 1 (December 22, 2018): 43. http://dx.doi.org/10.3390/s19010043.

Abstract:
There have been decades of research on face recognition, and the performance of many state-of-the-art face recognition algorithms under well-conditioned environments has become saturated. Accordingly, recent research efforts have focused on difficult but practical challenges. One such issue is the single sample per person (SSPP) problem, i.e., the case where only one training image of each person is available. While this problem is challenging because it is difficult to establish the within-class variation, working toward its solution is very practical because often only a few images of a person are available. To address the SSPP problem, we propose an efficient coupled bilinear model that generates virtual images under various illuminations from a single input image. The proposed model is inspired by the knowledge that the illuminance of an image is not sensitive to the poor quality of a subspace-based model and has a strong correlation to the image itself. Accordingly, a coupled bilinear model was constructed that retrieves the illuminance information from an input image. This information is then combined with the input image to estimate the texture information, from which we can generate virtual illumination conditions. The proposed method can instantly generate numerous virtual images of good quality, and these images can then be utilized to train the feature space for resolving SSPP problems. Experimental results show that the proposed method outperforms the existing algorithms.
13

Yang, Zhao Nan, and Shu Zhang. "Research on Face Recognition Technology of Real-Time Video High Performance Similarity." Applied Mechanics and Materials 713-715 (January 2015): 2160–64. http://dx.doi.org/10.4028/www.scientific.net/amm.713-715.2160.

Abstract:
A new similarity measurement standard, background similarity matching, is proposed. A kernel-based learning algorithm is used for feature extraction and classification of face images. Meanwhile, a real-time video face recognition method is proposed, an image binarization algorithm is introduced into the similarity calculation, and a video face recognition system is designed and implemented [1-2]. The system uses a camera to obtain face images, and recognition is realized through image preprocessing, face detection and positioning, feature extraction, feature learning, and matching. The major technologies of face recognition systems, including system design, image preprocessing, feature positioning and extraction, and face recognition, are introduced in detail. The top-down lookup mode is improved, raising lookup accuracy and speed [3-4]. The experimental results show that the method has a high recognition rate, which holds even for face images with limited changes and slight pose variation under slightly uneven illumination. Meanwhile, the method's training and recognition speeds are very fast, fully meeting the real-time requirements of a face recognition system [5]. The system can recognize frontal faces well.
14

Hajiarbabi, Mohammadreza, and Arvin Agah. "Techniques for Skin, Face, Eye and Lip Detection using Skin Segmentation in Color Images." International Journal of Computer Vision and Image Processing 5, no. 2 (July 2015): 35–57. http://dx.doi.org/10.4018/ijcvip.2015070103.

Abstract:
Face detection is a challenging and important problem in Computer Vision. In most of the face recognition systems, face detection is used in order to locate the faces in the images. There are different methods for detecting faces in images. One of these methods is to try to find faces in the part of the image that contains human skin. This can be done by using the information of human skin color. Skin detection can be challenging due to factors such as the differences in illumination, different cameras, ranges of skin colors due to different ethnicities, and other variations. Neural networks have been used for detecting human skin. Different methods have been applied to neural networks in order to increase the detection rate of the human skin. The resulting image is then used in the detection phase. The resulting image consists of several components and in the face detection phase, the faces are found by just searching those components. If the components consist of just faces, then the faces can be detected using correlation. Eye and lip detections have also been investigated using different methods, using information from different color spaces. The speed of face detection methods using color images is compared with other face detection methods.
15

Sajid, Muhammad, Naeem Iqbal Ratyal, Nouman Ali, Bushra Zafar, Saadat Hanif Dar, Muhammad Tariq Mahmood, and Young Bok Joo. "The Impact of Asymmetric Left and Asymmetric Right Face Images on Accurate Age Estimation." Mathematical Problems in Engineering 2019 (February 25, 2019): 1–10. http://dx.doi.org/10.1155/2019/8041413.

Abstract:
Aging affects the left and right halves of the face differently, owing to factors such as sleeping habits, exposure to sunlight, and weaker facial muscles on one side of the face. In computer vision, the age of a face image is estimated using features correlated with age, such as moles, scars, and wrinkles. In this study we report the asymmetric aging of the left and right sides of face images and its impact on accurate age estimation. Left-symmetric faces were perceived as younger, while right-symmetric faces were perceived as older, when presented to a state-of-the-art age estimator. These findings show that facial aging is an asymmetric process that plays a role in accurate facial age estimation. Experimental results on two large datasets verify that using the asymmetric right face image estimates the age of a query face image more accurately than the corresponding original or left asymmetric face image.
16

Edelman, Betty, Dominique Valentin, and Hervé Abdi. "Sex Classification of Face Areas." Journal of Biological Systems 06, no. 03 (September 1998): 241–63. http://dx.doi.org/10.1142/s0218339098000170.

Abstract:
Human subjects and an artificial neural network, composed of an autoassociative memory and a perceptron, classified the sex of the same 160 frontal face images (80 male and 80 female). All 160 face images were presented under three conditions: (1) the full face image with the hair cropped, (2) only the top portion of the Condition 1 image, and (3) only the bottom portion of the Condition 1 image. Predictions from simulations using Condition 1 stimuli for training, and novel stimuli from Conditions 1, 2, and 3 for testing, were compared with human performance. Although the network showed a fair ability to generalize to new stimuli under the three conditions, classifying 66 to 78% of novel faces correctly, and predicted the main effects, a more detailed comparison with the human data was less promising. As expected, human accuracy declined with decreased image area but showed a surprising interaction between the sex of the face and the partial-image conditions. The network failed to predict this interaction, or the likelihood of correct human classification for a particular face. This item-level analysis raises concern about the psychological relevance of the model.
17

Zheng, Ying, Stan Z. Li, Jianglong Chang, and Zengfu Wang. "3D Modeling of Faces from Near Infrared Images Using Statistical Learning." International Journal of Pattern Recognition and Artificial Intelligence 24, no. 01 (February 2010): 55–71. http://dx.doi.org/10.1142/s0218001410007804.

Abstract:
This paper proposes a statistical-learning-based method for 3D modeling of faces directly from Near Infrared (NIR) images. We use a specially designed camera system with active NIR illumination to capture the NIR images of faces. NIR images captured in this way are invariant to environmental lighting changes, a property that provides more reliable data for statistical learning. Using the NIR images and the depth images of some known faces, we can observe a mapping relation between the two image modalities, which can then be used to recover the depth data of an unknown face from its NIR image. To perform the learning, the images of different modalities taken from different persons are carefully aligned to make pixel-to-pixel correspondences between images. Based on these aligned images, two face spaces corresponding to NIR and depth face images are constructed, respectively. We then use a PCA-based or kernel-based scheme to perform the learning between spaces of large dimensions. Several regression algorithms with linear and nonlinear kernels are employed and evaluated to find the mapping that best describes the relation between the two face spaces. The experimental results show that the method presented in this paper is effective: it can reconstruct a 3D face model directly from the NIR image of a face with high accuracy and low computational cost.
18

Asiedu, Louis, Bernard O. Essah, Samuel Iddi, K. Doku-Amponsah, and Felix O. Mettle. "Evaluation of the DWT-PCA/SVD Recognition Algorithm on Reconstructed Frontal Face Images." Journal of Applied Mathematics 2021 (April 7, 2021): 1–8. http://dx.doi.org/10.1155/2021/5541522.

Abstract:
The face is the second most important biometric trait of the human body, after the fingerprint. Recognizing a face image with partial occlusion (a half image) is an intractable exercise, as occlusions degrade the performance of the recognition module. To this end, occluded images are sometimes reconstructed or completed with an imputation mechanism before recognition. This study assessed the performance of a principal component analysis and singular value decomposition algorithm with discrete wavelet transform preprocessing (DWT-PCA/SVD) on a reconstructed face image database. The half-face images were reconstructed by leveraging the bilateral symmetry of frontal faces. Numerical assessment of the adopted recognition algorithm gave average recognition rates of 95% and 75% when left and right reconstructed face images, respectively, were used for recognition. Statistical assessment showed that the DWT-PCA/SVD algorithm gives a relatively lower average recognition distance for left reconstructed face images. DWT-PCA/SVD is therefore recommended as a suitable algorithm for recognizing face images under partial occlusion (half-face images); it performs relatively better on left reconstructed face images.
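The reconstruction step, completing a half face by exploiting the bilateral symmetry of frontal faces, can be sketched directly (the paper's DWT-PCA/SVD recognition stage is not shown):

```python
import numpy as np

def reconstruct_from_half(face, side="left"):
    # Complete a half-occluded frontal face by mirroring the visible half,
    # exploiting the bilateral symmetry of frontal faces.
    h, w = face.shape
    half = w // 2
    if side == "left":               # left half visible -> mirror it rightwards
        visible = face[:, :half]
        return np.hstack([visible, visible[:, ::-1]])
    visible = face[:, w - half:]     # right half visible -> mirror it leftwards
    return np.hstack([visible[:, ::-1], visible])

# A perfectly symmetric toy "face": reconstruction from either half
# recovers it exactly; real faces are only approximately symmetric,
# which is why the paper's left/right recognition rates differ.
x = np.arange(8)
face = np.minimum(x, x[::-1])[None, :] * np.ones((8, 1))

from_left = reconstruct_from_half(face, "left")
from_right = reconstruct_from_half(face, "right")
print(np.allclose(from_left, face), np.allclose(from_right, face))
```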
19

Su, Ya, Zhe Liu, and Xiaojuan Ban. "Symmetric Face Normalization." Symmetry 11, no. 1 (January 16, 2019): 96. http://dx.doi.org/10.3390/sym11010096.

Abstract:
Image registration is an important process in image processing which is used to improve the performance of computer vision related tasks. In this paper, a novel self-registration method, namely symmetric face normalization (SFN) algorithm, is proposed. There are three contributions in this paper. Firstly, a self-normalization algorithm for face images is proposed, which normalizes a face image to be reflection symmetric horizontally. It has the advantage that no face model needs to be built, which is always severely time-consuming. Moreover, it can be considered as a pre-processing procedure which greatly decreases the parameters needed to be adjusted. Secondly, an iterative algorithm is designed to solve the self-normalization algorithm. Finally, SFN is applied to the between-image alignment problem, which results in the symmetric face alignment (SFA) algorithm. Experiments performed on face databases show that the accuracy of SFN is higher than 0.95 when the translation on the x-axis is lower than 15 pixels, or the rotation angle is lower than 18°. Moreover, the proposed SFA outperforms the state-of-the-art between-image alignment algorithm in efficiency (about four times) without loss of accuracy.
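A one-dimensional sketch of the self-normalization idea described above: among candidate horizontal shifts, choose the one that makes the image most reflection-symmetric. The full SFN algorithm also estimates rotation and solves the problem iteratively; the exhaustive shift search below is a simplifying assumption.

```python
import numpy as np

def asymmetry(img):
    # Reflection-asymmetry energy: distance between the image and its mirror.
    return float(((img - img[:, ::-1]) ** 2).sum())

def normalize_shift(img, max_shift=15):
    # Pick the horizontal shift that makes the image most reflection-symmetric.
    shifts = range(-max_shift, max_shift + 1)
    best = min(shifts, key=lambda s: asymmetry(np.roll(img, s, axis=1)))
    return best, np.roll(img, best, axis=1)

# A symmetric stripe translated 5 pixels off-centre; the search undoes it.
base = np.zeros((16, 32))
base[:, 12:20] = 1.0                 # centred, reflection-symmetric stripe
shifted = np.roll(base, 5, axis=1)   # off-centre input

s, recovered = normalize_shift(shifted)
print("estimated shift:", s)
```

The 15-pixel search range mirrors the translation range within which the paper reports accuracy above 0.95.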
20

Gurumurthy, Sasikumar. "Age Estimation and Gender Classification based on Face detection and feature extraction." International Journal of Management & Information Technology 4, no. 1 (June 30, 2013): 134–40. http://dx.doi.org/10.24297/ijmit.v4i1.809.

Abstract:
Computer systems have created various types of automated applications for personal identification, such as biometrics and face recognition techniques. Face verification has become an area of dynamic research, with important applications in law enforcement because it can be performed without involving the subject. Still, the influence of age on face verification makes it challenging to judge the similarity of image pairs of individual faces, given very limited database availability. We focus on developing the image-processing and face-detection stages of a face verification system by improving image quality. The main objective of the system is to compare an image with the reference images stored as templates in the database and to determine the age and gender.
21

Jahangir Alam, Mohammad, Tanjia Chowdhury, and Md Shahzahan Ali. "A smart login system using face detection and recognition by ORB algorithm." Indonesian Journal of Electrical Engineering and Computer Science 20, no. 2 (November 1, 2020): 1078. http://dx.doi.org/10.11591/ijeecs.v20.i2.pp1078-1087.

Abstract:
Identifying human faces using a web camera is known as face detection, a very effective technique in computer technology. Different types of attendance systems are in use, such as password login, punch cards, and fingerprints. In this research, we introduce a facial-recognition biometric system that can identify a specific face by analyzing and comparing patterns in a digital image; it is a login system based on face detection. First, the device captures face images and stores them in a specific path on the computer, relating the information to a database. When anybody tries to enter a room or premises through this login system, the system captures an image of that person and matches it against the stored images. If the image matches a stored image, the system allows the person to enter; otherwise, it denies entry. This face recognition login system is very effective, reliable, and secure. The research uses the Viola-Jones algorithm for face detection and ORB for image matching in face recognition; Java, MySql, OpenCV, and iReport are used for implementation.
22

Borovikov, Eugene, Szilard Vajda, and Michael Gill. "Face Match for Family Reunification." International Journal of Computer Vision and Image Processing 7, no. 2 (April 2017): 19–35. http://dx.doi.org/10.4018/ijcvip.2017040102.

Abstract:
Despite the many advances in face recognition technology, practical face detection and matching for unconstrained images remain challenging. A real-world Face Image Retrieval (FIR) system is described in this paper. It is based on optimally weighted image descriptor ensemble utilized in single-image-per-person (SIPP) approach that works with large unconstrained digital photo collections. The described visual search can be deployed in many applications, e.g. person location in post-disaster scenarios, helping families reunite quicker. It provides efficient means for face detection, matching and annotation, working with images of variable quality, requiring no time-consuming training, yet showing commercial performance levels.
23

Balas, Benjamin, Jacob Gable, and Hannah Pearson. "The Effects of Blur and Inversion on the Recognition of Ambient Face Images." Perception 48, no. 1 (December 8, 2018): 58–71. http://dx.doi.org/10.1177/0301006618812581.

Abstract:
When viewing unfamiliar faces that vary in expressions, angles, and image quality, observers make many recognition errors. Specifically, in unconstrained identity-sorting tasks, observers struggle to cope with variation across different images of the same person while succeeding at telling different people apart. The use of ambient face images in this simple card-sorting task reveals the magnitude of these face recognition errors and suggests a useful platform to reexamine the nature of face processing using naturalistic stimuli. In the present study, we chose to investigate the impact of two basic stimulus manipulations (image blur and face inversion) on identity sorting with ambient images. Although these manipulations are both known to affect face processing when well-controlled, frontally viewed face images are used, examining how they affect performance for ambient images is an important step toward linking the large body of research using controlled face images to more ecologically valid viewing conditions. Briefly, we observed a high cost of image blur regardless of blur magnitude, and a strong inversion effect that affected observers’ sensitivity to extrapersonal variability but did not affect the number of unique identities they estimated were present in the set of images presented to them.
APA, Harvard, Vancouver, ISO, and other styles
24

Barabanschikov, V. A., and M. M. Marinova. "Deepfake in Face Perception Research." Experimental Psychology (Russia) 14, no. 1 (2021): 4–19. http://dx.doi.org/10.17759/exppsy.2021000001.

Full text
Abstract:
This article presents the state-of-the-art Deepfake face-replacement image collage method, an artificial intelligence (AI) product that can be used to create high-quality, realistic videos with a fake or replaced face and no obvious signs of manipulation. Based on the DeepFaceLab (DFL) application, the process of creating video images of an “impossible face” is described step by step. The results of experiments studying the patterns of perception of a moving “impossible face”, and their differences between statics and dynamics, are presented. The stimuli were two DFL-generated models of virtual sitters with impossible faces: a video image of a chimerical face, in which the right and left sides belong to different people, and a Thatcherized face with the eye and mouth areas rotated by 180°. It was shown that the phenomena of perception of the “impossible face” registered earlier under static conditions (integrity of perception of the split image, distraction and the inversion effect) are preserved and acquire new content when dynamic models are presented. In contrast to the collaged images, the original faces in statics and in motion, regardless of egocentric orientation, are evaluated positively at the level of high values. Under all tested conditions the gender of the virtual sitter is determined adequately, while the perceived age is overestimated. Estimates of the virtual sitter’s emotions from his video images are differentiated into basic (stable) and additional (changing) states, the ratio of which depends on the content of a particular episode. Deepfake image synthesis technology significantly expands the possibilities of psychological research on interpersonal perception. The use of digital technologies simplifies the creation of “impossible face” stimulus models necessary for in-depth study of representations of the human inner world, and creates a need for new experimental-psychological procedures corresponding to a higher level of ecological and social validity.
APA, Harvard, Vancouver, ISO, and other styles
25

Deffenbacher, Kenneth A., Cheryl Hendrickson, Alice J. O'Toole, David P. Huff, and Hervé Abdi. "Manipulating Face Gender." Journal of Biological Systems 06, no. 03 (September 1998): 219–39. http://dx.doi.org/10.1142/s0218339098000169.

Full text
Abstract:
Previous research has shown that faces coded as pixel-based images may be constructed from an appropriately weighted combination of statistical "features" (eigenvectors) which are useful for discriminating members of a learned set of images. We have shown previously that two of the most heavily weighted features are important in predicting face gender. Using a simple computational model, we adjusted weightings of these features in more masculine and more feminine directions for both male and female adult Caucasian faces. In Experiment 1, cross-gender face image alterations (e.g., feminizing male faces) reduced both gender classification speed and accuracy for young adult Caucasian observers, whereas same-gender alterations (e.g., masculinizing male faces) had no effect as compared to unaltered controls. Effects on femininity-masculinity ratings mirrored those obtained on gender classification speed and accuracy. We controlled statistically for possible effects of image distortion incurred by our gender manipulations. In Experiment 2 we replicated the same pattern of accuracy data. Combined, these data indicate the psychological relevance of the features derived from the computational model. Despite having different effects on the ease of gender classification, neither sort of gender alteration negatively impacted face recognition (Experiment 3), yielding evidence for a model of face recognition wherein gender and familiarity processing proceed in parallel.
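The weighting manipulation described above can be sketched with a toy eigen-decomposition. The synthetic data, the component indices and the scaling factor below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

# Faces coded as pixel vectors are decomposed onto eigenvectors; two
# heavily weighted components (here assumed to be 0 and 1) are scaled
# before reconstruction, e.g. to shift a face in a "masculine" direction.
rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))          # 20 tiny synthetic "face" vectors
mean = faces.mean(axis=0)
U, S, Vt = np.linalg.svd(faces - mean, full_matrices=False)
weights = (faces - mean) @ Vt.T            # per-face eigenvector weights

def shift_gender(w, components=(0, 1), factor=1.5):
    """Scale the chosen component weights (illustrative indices)."""
    w = w.copy()
    for c in components:
        w[c] *= factor
    return w

altered = shift_gender(weights[0])
reconstructed = mean + altered @ Vt        # back to pixel space
```

With all components kept and unscaled, the reconstruction recovers the original face vector exactly, which is a useful sanity check for this kind of model.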
APA, Harvard, Vancouver, ISO, and other styles
26

Bouhou, Lhoussaine, Rachid El Ayachi, Mohamed Baslam, and Mohamed Oukessou. "Face Detection in a Mixed-Subject Document." International Journal of Electrical and Computer Engineering (IJECE) 6, no. 6 (December 1, 2016): 2828. http://dx.doi.org/10.11591/ijece.v6i6.12725.

Full text
Abstract:
<p>Before recognizing a person, it is essential to identify the characteristics that vary from one person to another; among these are the characteristics of the face. The detection of skin regions in an image has become an important research topic for locating a face within an image. Unlike previous studies on this topic, which have focused on face-image input data, this study addresses face detection in mixed-subject documents (text + images). The face detection system developed is based on a hybrid method that separates the objects of a mixed document into two categories: the first contains text and figures with no skin-coloured regions, the second any figure whose colour matches skin. In a second phase, the system applies Template Matching to identify, among the figures of the second category, only those containing faces. To validate the study, the developed system is tested on various documents that include both text and images.</p>
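The skin-colour stage of such a pipeline is often built from simple per-pixel rules. The sketch below uses a classic RGB daylight rule from the literature as a stand-in, not the authors' exact thresholds:

```python
def is_skin_rgb(r, g, b):
    # Classic RGB skin rule (thresholds are the commonly cited ones,
    # assumed here for illustration, not taken from the paper).
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def skin_ratio(pixels):
    """Fraction of skin-coloured pixels; figures with a high ratio would
    fall into the 'possible face' category, the rest into text/graphics."""
    hits = sum(is_skin_rgb(*p) for p in pixels)
    return hits / len(pixels)
```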
APA, Harvard, Vancouver, ISO, and other styles
27

Bouhou, Lhoussaine, Rachid El Ayachi, Mohamed Baslam, and Mohamed Oukessou. "Face Detection in a Mixed-Subject Document." International Journal of Electrical and Computer Engineering (IJECE) 6, no. 6 (December 1, 2016): 2828. http://dx.doi.org/10.11591/ijece.v6i6.pp2828-2835.

Full text
Abstract:
<p>Before recognizing a person, it is essential to identify the characteristics that vary from one person to another; among these are the characteristics of the face. The detection of skin regions in an image has become an important research topic for locating a face within an image. Unlike previous studies on this topic, which have focused on face-image input data, this study addresses face detection in mixed-subject documents (text + images). The face detection system developed is based on a hybrid method that separates the objects of a mixed document into two categories: the first contains text and figures with no skin-coloured regions, the second any figure whose colour matches skin. In a second phase, the system applies Template Matching to identify, among the figures of the second category, only those containing faces. To validate the study, the developed system is tested on various documents that include both text and images.</p>
APA, Harvard, Vancouver, ISO, and other styles
28

Nam, Amir Nobahar Sadeghi. "Face Detection." Volume 5 - 2020, Issue 9 - September 5, no. 9 (September 29, 2020): 688–92. http://dx.doi.org/10.38124/ijisrt20sep391.

Full text
Abstract:
Face detection, a main part of automatic face recognition, is one of the challenging problems in image processing. Employing colour and image segmentation procedures, a simple and effective algorithm is presented to detect human faces in an input image. To evaluate performance, the results of the proposed methodology are compared with the Viola-Jones face detection method.
APA, Harvard, Vancouver, ISO, and other styles
29

Мельник, Р. А., Р. І. Квіт, and Т. М. Сало. "Face image profiles features extraction for recognition systems." Scientific Bulletin of UNFU 31, no. 1 (February 4, 2021): 117–21. http://dx.doi.org/10.36930/40310120.

Full text
Abstract:
The object of this research is a piecewise linear approximation algorithm applied to the extraction of facial features and the compression of face images. One problem area is obtaining an optimal trade-off between the degree of compression and the accuracy of image reproduction, as well as the accuracy of the extracted facial features, which can be used to search for people in databases. The main characteristics of a face image are the coordinates and sizes of the eyes, mouth, nose and other objects of attention; their dimensions, the distances between them, and their ratios also form a set of characteristics. A piecewise linear approximation algorithm is used to identify and measure these features. First, it approximates the face image to obtain a silhouette graph from right to left; second, it approximates face fragments to obtain silhouettes from top to bottom. The next stage implements multilevel segmentation of the approximated images to cover them with rectangles of different intensity, which, because of their shape, are called barcodes. After these three stages of the algorithm, a face is represented by two barcode images, vertical and horizontal, from which the facial features are calculated. The mean-intensity function over a row or column is used both to form the approximation object and as a tool for measuring the characteristics of the face image. Additionally, the widths of the barcodes and the distances between them are computed. Experimental results with faces from known databases are presented. The piecewise linear approximation is also used to compress face images, and experiments show how the accuracy of the approximation changes with the degree of compression. The method has linear complexity in the number of pixels in the image, which allows testing on large data.
Finding the coordinates of a synchronized object, such as the eyes, allows all the distances between the objects of attention on the face to be calculated in relative form. The developed software has control parameters for conducting research.
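A generic version of the piecewise linear approximation step can be sketched as greedy splitting of a 1-D mean-intensity profile; the tolerance and the splitting strategy below are assumptions for illustration, not the authors' algorithm:

```python
def piecewise_linear(profile, tol=5.0):
    """Approximate a 1-D profile (e.g. mean intensity per image row)
    with line segments, splitting at the point of largest deviation
    until every segment fits within `tol`. Returns breakpoint indices."""
    def fit(i, j):  # max deviation of points i..j from the chord (i, j)
        worst, where = 0.0, None
        for k in range(i + 1, j):
            t = (k - i) / (j - i)
            interp = profile[i] + t * (profile[j] - profile[i])
            d = abs(profile[k] - interp)
            if d > worst:
                worst, where = d, k
        return worst, where

    def split(i, j):
        worst, where = fit(i, j)
        if where is None or worst <= tol:
            return [i]
        return split(i, where) + split(where, j)

    return split(0, len(profile) - 1) + [len(profile) - 1]
```

A perfectly linear profile collapses to its two endpoints, while a profile with a corner gains a breakpoint there.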
APA, Harvard, Vancouver, ISO, and other styles
30

Amjed, Noor, Fatimah Khalid, Rahmita Wirza O. K. Rahmat, and Hizmawati Bint Madzin. "A Robust Geometric Skin Colour Face Detection Method under Unconstrained Environment of Smartphone Database." Applied Mechanics and Materials 892 (June 2019): 31–37. http://dx.doi.org/10.4028/www.scientific.net/amm.892.31.

Full text
Abstract:
Face detection is the primary task in building a vision-based human-computer interaction system and in special applications such as face recognition, face tracking, face identification, expression recognition and content-based image retrieval. A potent face detection system must be able to detect faces irrespective of illumination, shadows, cluttered backgrounds, orientation and facial expressions. Many approaches to face detection have been proposed in previous literature; however, face detection in outdoor images with uncontrolled illumination, and in images with complex backgrounds, remains a serious problem. Hence, in this paper we propose a Geometric Skin Colour (GSC) method for accurately detecting faces in real-world images, captured both indoors and outdoors, under a variety of illuminations and in cluttered backgrounds. The method was evaluated on two different face video smartphone databases, and the results obtained show that it outperforms existing methods under the unconstrained environment of these databases.
APA, Harvard, Vancouver, ISO, and other styles
31

Yaqob, Olga. "The Face of God in Suffering." Theology Today 62, no. 1 (April 2005): 9–17. http://dx.doi.org/10.1177/004057360506200102.

Full text
Abstract:
Media coverage of Iraq generally has overlooked the daily lives of ordinary Iraqis. In all the wars Iraq has endured since 1980, we have lost sight of human faces. Every nation is its people, not merely its geographic territory, and these people are all made in the image of God. The illustrations accompanying this article include both images of Iraq's geography (the land) and an image, in the shape of Iraq, formed out of the faces of many different ordinary Iraqi people, from all different religious and geographical areas of the country. In the center of this image is the face of Jesus on the cross. In the suffering of the Iraqi people, I have seen the face of God.
APA, Harvard, Vancouver, ISO, and other styles
32

Wen, Lilong, and Dan Xu. "Face Image Manipulation Detection." IOP Conference Series: Materials Science and Engineering 533 (May 30, 2019): 012054. http://dx.doi.org/10.1088/1757-899x/533/1/012054.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Yu, Xiangchun, Zhezhou Yu, Wei Pang, Minghao Li, and Lei Wu. "An Improved EMD-Based Dissimilarity Metric for Unsupervised Linear Subspace Learning." Complexity 2018 (2018): 1–24. http://dx.doi.org/10.1155/2018/8917393.

Full text
Abstract:
We investigate a novel way of robust face image feature extraction by adopting methods based on Unsupervised Linear Subspace Learning to extract a small number of good features. Firstly, the face image is divided into blocks of a specified size, and we propose and extract a pooled Histogram of Oriented Gradients (pHOG) over each block. Secondly, an improved Earth Mover's Distance (EMD) metric is adopted to measure the dissimilarity between the blocks of one face image and the corresponding blocks of the remaining face images. Thirdly, considering the limitations of the original Locality Preserving Projections (LPP), we propose Block Structure LPP (BSLPP), which effectively preserves the structural information of face images. Finally, an adjacency graph is constructed and a small number of good features of a face image are obtained by methods based on Unsupervised Linear Subspace Learning. A series of experiments has been conducted on several well-known face databases to evaluate the effectiveness of the proposed algorithm. In addition, we construct noisy, geometrically distorted, slightly translated, and slightly rotated versions of the AR and Extended Yale B face databases, and we verify the robustness of the proposed algorithm when faced with a certain degree of these disturbances.
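For 1-D histograms, the EMD building block underlying such a block-dissimilarity metric reduces to an L1 distance between cumulative sums. The sketch below shows this standard form, not the paper's improved variant:

```python
def emd_1d(h1, h2):
    """Earth Mover's Distance between two equally normalised 1-D
    histograms: for 1-D distributions it equals the L1 distance
    between their cumulative sums."""
    assert len(h1) == len(h2)
    c1 = c2 = 0.0
    total = 0.0
    for a, b in zip(h1, h2):
        c1 += a
        c2 += b
        total += abs(c1 - c2)  # mass that must still be moved past this bin
    return total
```

Moving all mass from the first bin to the last of a 3-bin histogram costs 2 (one unit of mass moved across two bins).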
APA, Harvard, Vancouver, ISO, and other styles
34

Held, Tobias. "Face to Face." Journal für Medienlinguistik 2, no. 2 (August 27, 2020): 157–94. http://dx.doi.org/10.21248/jfml.2019.16.

Full text
Abstract:
The present article reports an experimental study of elements of video telephony in relation to the experience of connectedness and intimacy within private interpersonal communication. Of particular interest are questions about possible relationships between image detail, viewing angle or perspective, image format, and the self- and other-perception of the communicators. Central is the question of whether the practices and interactions of users in dealing with communication technology allow conclusions about negotiation processes or even adaptation services. The results obtained are presented on the basis of an introductory theoretical discussion, followed by a summary and analysis as well as an outlook on the further use and significance of the results.
APA, Harvard, Vancouver, ISO, and other styles
35

Held, Tobias. "Face to Face." Journal für Medienlinguistik 2, no. 2 (August 27, 2020): 157–94. http://dx.doi.org/10.21248/jfml.2019.16.

Full text
Abstract:
The present article shows an experimental subject investigation on elements of video telephony in relation to experiencing and feeling connectedness and intimacy within private interpersonal communication. Particular interests are questions about possible relationships between image detail, angle of view or perspective as well as image format or the foreign and personal perception of the communicators. Central to this is the question of whether the practices and interactions of users in dealing with communication technology can be used to derive possible conclusions on negotiation measures or even adaptation services. The obtained results are presented on the basis of an introductory theoretical discussion. It is followed by a summary and analysis as well as an outlook on the further use and significance of the results.
APA, Harvard, Vancouver, ISO, and other styles
36

Li, Fei, and Yuan Yuan Wang. "The Safety Research for Face Recognition System Based on Image Feature and Digital Watermarking." Applied Mechanics and Materials 224 (November 2012): 485–88. http://dx.doi.org/10.4028/www.scientific.net/amm.224.485.

Full text
Abstract:
In order to solve the problem that face images in face recognition software are easily copied, an algorithm combining image features with digital watermarking is presented in this paper. Image features of adjacent blocks are embedded into the face image as watermark information, and the original face images are not needed when recovering the watermark. Face image integrity can thus be well confirmed, and the algorithm can detect whether a face image is the original one and identify whether it has been attacked with malicious intent, such as tampering, replacement or illegal additions. Experimental results show that the algorithm, with good invisibility and excellent robustness, does not interfere with the face recognition rate, and it can locate the specific tampered region of a face image.
APA, Harvard, Vancouver, ISO, and other styles
37

Moon, Hae-Min, Min-Gu Kim, Ju-Hyun Shin, and Sung Bum Pan. "Multiresolution Face Recognition through Virtual Faces Generation Using a Single Image for One Person." Wireless Communications and Mobile Computing 2018 (November 11, 2018): 1–8. http://dx.doi.org/10.1155/2018/7584942.

Full text
Abstract:
In recent years, various studies have been conducted to provide real-time services based on face recognition in Internet of Things environments, such as smart homes. In particular, face recognition in a network-based surveillance camera environment can significantly change the performance or utility of face recognition technology, because the size of the image information to be transmitted varies depending on communication capabilities. In this paper, we propose a multiresolution face recognition method that uses virtual face images generated for each distance as training data, to solve the problem of low recognition rates caused by changes in communication, camera and distance. Face images for each virtual distance are generated through clarity adjustment and image degradation at each resolution, using a single high-resolution face image. The proposed method, employing LDA and the Euclidean distance, achieved a performance 5.9% more accurate than methods using MPCA and SVM on a DB configured using faces acquired from the real environments of five different streets.
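The idea of generating virtual lower-resolution faces from one high-resolution image can be sketched with a simple nearest-neighbour downsample/upsample; the paper's actual clarity and degradation model is not specified here, so this is only an illustration:

```python
def degrade(image, factor):
    """Simulate a face captured at a greater distance: nearest-neighbour
    downsample by `factor`, then upsample back to the original size.
    `image` is a 2-D list of pixel intensities."""
    h, w = len(image), len(image[0])
    small = [[image[y * factor][x * factor] for x in range(w // factor)]
             for y in range(h // factor)]
    return [[small[min(y // factor, len(small) - 1)]
             [min(x // factor, len(small[0]) - 1)]
             for x in range(w)]
            for y in range(h)]
```

Applying this at several factors to one high-resolution face yields a stack of virtual training images, one per simulated distance.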
APA, Harvard, Vancouver, ISO, and other styles
38

Xu, Yong, Xuelong Li, Jian Yang, and David Zhang. "Integrate the original face image and its mirror image for face recognition." Neurocomputing 131 (May 2014): 191–99. http://dx.doi.org/10.1016/j.neucom.2013.10.025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Tian, Chunwei, Qi Zhang, Jian Zhang, Guanglu Sun, and Yuan Sun. "2D-PCA Representation and Sparse Representation for Image Recognition." Journal of Computational and Theoretical Nanoscience 14, no. 1 (January 1, 2017): 829–34. http://dx.doi.org/10.1166/jctn.2017.6281.

Full text
Abstract:
The two-dimensional principal component analysis (2D-PCA) method has been widely applied in image classification, computer vision, signal processing and pattern recognition, and performs satisfactorily in both theoretical research and real-world applications. It not only retains the main information of the original face images, but also reduces their dimension. In this paper, we integrate 2D-PCA and sparse representation classification (SRC) to distinguish face images, with strong performance in face recognition. The novel representation of the original face image obtained using 2D-PCA is complementary to the original image, so fusing them can markedly improve the accuracy of face recognition. This is also attributable to the fact that the features obtained using 2D-PCA are usually more robust than the original face image matrices. Face recognition experiments demonstrate that combining original face images with their new representations is more effective than using the original images alone; in particular, the simultaneous use of 2D-PCA and sparse representation can greatly improve accuracy in image classification. The adaptive weighted fusion scheme automatically obtains optimal weights and requires no parameters. The proposed method is simple and easy to implement, yet obtains high accuracy in face recognition.
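A minimal 2D-PCA sketch on synthetic matrices (not the paper's fusion scheme): the image covariance matrix is built from column deviations of the image matrices, and each image is projected onto its top eigenvectors.

```python
import numpy as np

# 2D-PCA keeps images as matrices instead of flattening them to vectors.
rng = np.random.default_rng(1)
images = rng.normal(size=(10, 8, 6))        # 10 tiny 8x6 "face" matrices
mean = images.mean(axis=0)

# Image covariance matrix: average of (A - mean)^T (A - mean), shape 6x6.
G = sum((A - mean).T @ (A - mean) for A in images) / len(images)

vals, vecs = np.linalg.eigh(G)              # eigenvalues in ascending order
X = vecs[:, -2:][:, ::-1]                   # top-2 projection axes, 6x2
features = images @ X                       # reduced features, shape (10, 8, 2)
```

Each image is thus reduced from 8x6 to 8x2 while its row structure is preserved, which is what makes the resulting features matrix-shaped rather than vector-shaped.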
APA, Harvard, Vancouver, ISO, and other styles
40

Abayomi-Alli, Olusola Oluwakemi, Robertas Damaševičius, Rytis Maskeliūnas, and Sanjay Misra. "Few-Shot Learning with a Novel Voronoi Tessellation-Based Image Augmentation Method for Facial Palsy Detection." Electronics 10, no. 8 (April 19, 2021): 978. http://dx.doi.org/10.3390/electronics10080978.

Full text
Abstract:
Facial palsy has adverse effects on the appearance of a person and negative social and functional consequences for the patient. Deep learning methods can improve facial palsy detection rates, but their efficiency is limited by insufficient data, class imbalance, and high misclassification rates. To alleviate the lack of data and improve the performance of deep learning models for palsy face detection, data augmentation methods can be used. In this paper, we propose a novel Voronoi decomposition-based random region erasing (VDRRE) image augmentation method that partitions images into randomly defined Voronoi cells, as an alternative to the rectangle-based random erasing method. The proposed method augments the image dataset with new images, which are used to train the deep neural network. We achieved an accuracy of 99.34% using two-shot learning with VDRRE augmentation on palsy faces from the YouTube Face Palsy (YFP) dataset, with normal faces taken from the Caltech Face Database. Our model improves over state-of-the-art methods in detecting facial palsy from a small dataset of face images.
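The Voronoi-cell erasing idea can be sketched as follows; the cell count, fill value and cell-selection rule are illustrative assumptions, not the authors' VDRRE implementation:

```python
import numpy as np

def voronoi_erase(image, n_cells=8, fill=0, rng=None):
    """Voronoi-style random region erasing: scatter random seed points,
    assign each pixel to its nearest seed, and erase one Voronoi cell
    (the cell containing a randomly picked pixel, so it is non-empty)."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    seeds = rng.uniform([0, 0], [h, w], size=(n_cells, 2))
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (ys[..., None] - seeds[:, 0]) ** 2 + (xs[..., None] - seeds[:, 1]) ** 2
    cells = d2.argmin(axis=-1)               # Voronoi label per pixel
    out = image.copy()
    target = cells[rng.integers(h), rng.integers(w)]
    out[cells == target] = fill              # erase the chosen cell
    return out
```

Unlike rectangle-based random erasing, the erased region has an irregular polygonal shape, which is the point of the Voronoi decomposition.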
APA, Harvard, Vancouver, ISO, and other styles
41

Boranbayev, S. N., and M. S. Amirtayev. "DEVELOPMENT A SYSTEM FOR CLASSIFYING AND RECOGNIZING PERSON’S FACE." EurasianUnionScientists 4, no. 4(73) (May 12, 2020): 15–24. http://dx.doi.org/10.31618/esu.2413-9335.2020.4.73.677.

Full text
Abstract:
The purpose of this article is to summarize the knowledge gained in the development and implementation of a neural network for facial recognition. Neural networks are used to solve complex tasks that require analytical calculations similar to those of the human brain, and machine learning algorithms are the foundation of a neural network. As input, the algorithm receives an image containing people's faces, then searches for faces in this image using HOG (Histogram of Oriented Gradients), producing images with explicit face structures. To determine unique facial features, the face landmark algorithm is used, which finds 68 special points on the face; these points can be used to center the eyes and mouth for more accurate encoding. To obtain an accurate 128-dimensional "face map", image encoding is applied. Using the obtained data, the convolutional neural network can identify people's faces using the SVM linear classifier algorithm.
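The final matching step on 128-dimensional encodings is typically a simple distance test; the 0.6 threshold below is the conventional default for dlib-style encodings and is an assumption here, not a value taken from the article:

```python
import math

def same_person(enc_a, enc_b, threshold=0.6):
    """Compare two 128-dimensional face encodings by Euclidean distance;
    a distance at or below the threshold is treated as a match."""
    return math.dist(enc_a, enc_b) <= threshold
```

In a pipeline like the one described, each detected and landmark-aligned face is encoded once, and new faces are matched against the stored encodings with this kind of test (or a classifier trained on the encodings).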
APA, Harvard, Vancouver, ISO, and other styles
42

Kouzani, A. Z., and S. H. Ong. "Lighting-Effects Classification in Facial Images Using Wavelet Packets Transform." International Journal of Wavelets, Multiresolution and Information Processing 01, no. 02 (June 2003): 199–215. http://dx.doi.org/10.1142/s021969130300013x.

Full text
Abstract:
Faces often produce inconsistent features under different lighting conditions. Classifying lighting effects within a face image is therefore the first crucial step of building a lighting invariant face recognition system. This paper presents a hybrid system to classify face images based on the lighting effects present in the image. The theories of multivariate discriminant analysis and wavelet packets transform are utilised to develop the proposed system. An extensive set of face images of different poses, illuminated from different angles, are used to train the system. The performance of the proposed system is evaluated by conducting experiments on different test sets and by comparing its results against those of some existing counterparts.
APA, Harvard, Vancouver, ISO, and other styles
43

Omar, Herman Kh, and Nada E. Tawfiq. "Face Recognition Based on Histogram Equalization and LBP Algorithm." Academic Journal of Nawroz University 8, no. 3 (August 25, 2019): 33. http://dx.doi.org/10.25007/ajnu.v8n3a394.

Full text
Abstract:
In recent times, biometrics has occupied a wide field in image processing. Face recognition is essentially the task of recognizing a person based on a facial image. It has become very popular in the last two decades, mainly because of newly developed methods and the high quality of current visual instruments. There are different types of face recognition algorithms, and each method takes a different approach to extracting image features and matching them with the input image. In this paper the Local Binary Patterns (LBP) operator was used, which is a particular case of the Texture Spectrum model and a powerful feature for texture classification. The face recognition system recognizes faces acquired from a given database via two phases: the most useful and unique features of the face image are extracted in the feature extraction phase, and in the classification phase the face image is compared with images from the database. The proposed algorithm adopts LBP features, which encode local texture information, with default values: histogram equalization is applied, the image is resized to 80x60 and divided into five blocks, and every LBP feature is saved as a vector table. Matlab R2019a was used to build the face recognition system. The results obtained are accurate, reaching 98.8% overall (500 face images).
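The basic 3x3 LBP operator at the heart of this pipeline can be sketched as follows (a textbook formulation; the bit ordering is an assumption, not necessarily the paper's):

```python
def lbp_code(block, y, x):
    """Basic 3x3 LBP: threshold the 8 neighbours of pixel (y, x) against
    the centre value and pack the resulting bits into one byte."""
    c = block[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if block[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code
```

A histogram of these codes over each of the five image blocks, concatenated, gives the kind of feature vector the paper matches against the database.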
APA, Harvard, Vancouver, ISO, and other styles
44

Bindemann, Markus, Rob Jenkins, and A. Mike Burton. "A Bottleneck in Face Identification." Experimental Psychology 54, no. 3 (January 2007): 192–201. http://dx.doi.org/10.1027/1618-3169.54.3.192.

Full text
Abstract:
Abstract. There is evidence that face processing is capacity-limited in distractor interference tasks and in tasks requiring overt recognition memory. We examined whether capacity limits for faces can be observed with a more sensitive measure of visual processing, by measuring repetition priming of flanker faces that were presented alongside a face or a nonface target. In Experiment 1, we found identity priming for face flankers, by measuring repetition priming across a change in image, during task-relevant nonface processing, but not during the processing of a concurrently-presented face target. Experiment 2 showed perceptual priming of the flanker faces, across identical images at prime and test, when they were presented alongside a face target. In a third Experiment, all of these effects were replicated by measuring identity priming and perceptual priming within the same task. Overall, these results imply that face processing is capacity limited, such that only a single face can be identified at one time. Merely attending to a target face appears sufficient to trigger these capacity limits, thereby extinguishing identification of a second face in the display, although our results demonstrate that the additional face remains at least subject to superficial image processing.
APA, Harvard, Vancouver, ISO, and other styles
45

QING, LAIYUN, SHIGUANG SHAN, WEN GAO, and BO DU. "FACE RECOGNITION UNDER GENERIC ILLUMINATION BASED ON HARMONIC RELIGHTING." International Journal of Pattern Recognition and Artificial Intelligence 19, no. 04 (June 2005): 513–31. http://dx.doi.org/10.1142/s0218001405004186.

Full text
Abstract:
The performance of current face recognition systems suffers heavily from variations in lighting. To deal with this problem, this paper presents an illumination normalization approach that relights face images to a canonical illumination based on the harmonic images model. Benefiting from the observations that human faces share a similar shape and that the albedos of face surfaces are quasi-constant, we first estimate the nine low-frequency components of the illumination from the input facial image. The facial image is then normalized to the canonical illumination by re-rendering it using the illumination ratio image technique. For the purpose of face recognition, two kinds of canonical illumination are considered, uniform illumination and a frontal flash with ambient lights: the former encodes merely the texture information, while the latter encodes both texture and shading information. Our experiments on the CMU-PIE face database and the Yale B face database have shown that the proposed relighting normalization can significantly improve the performance of a face recognition system when the probes are collected under varying lighting conditions.
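The nine low-frequency components mentioned above correspond to the real spherical harmonics up to order two evaluated at a surface normal. The sketch below lists them up to constant normalization factors (an assumption, since the paper's exact scaling is not reproduced here):

```python
def sh9(nx, ny, nz):
    """The nine low-frequency spherical-harmonic basis functions at a
    unit surface normal (nx, ny, nz), up to constant factors, as used
    in harmonic-image models of Lambertian surfaces."""
    return [
        1.0,                    # order 0
        ny, nz, nx,             # order 1
        nx * ny, ny * nz,       # order 2
        3 * nz * nz - 1,
        nx * nz,
        nx * nx - ny * ny,
    ]
```

Fitting nine coefficients against these basis values over the face pixels gives the low-frequency illumination estimate used for relighting.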
APA, Harvard, Vancouver, ISO, and other styles
46

Srichumroenrattana, Natchamol, Rajalida Lipikorn, and Chidchanok Lursinsap. "Stereoscopic Face Reconstruction from a Single 2-Dimensional Face Image Using Orthogonality of Normal Surface and Y-Ratio." International Journal of Pattern Recognition and Artificial Intelligence 30, no. 02 (February 2016): 1655006. http://dx.doi.org/10.1142/s0218001416550065.

Full text
Abstract:
This paper modifies a method of three-dimensional (3D) face reconstruction from a single two-dimensional (2D) image based on the Lambertian model, consisting of height estimation, normal surface estimation, albedo calculation and image normalization, normal surface calculation, actual height calculation, and error correction. In height estimation, the facial height of each input image is estimated from the average facial height of face images in the training data set. The estimated height is used to estimate the normal surface by applying our proposed pattern morphing method (PMM). To calculate the actual normal surface, the albedo of the input face image is first calculated to normalize the image; the actual normal surface is then derived using our proposed Y-ratio calculation with improved computational time. Finally, the actual height of the input face image is computed to construct the 3D face. Two face databases totalling 110 human face images, containing 10 real and 100 synthetic images, were tested by our proposed method under uniform reflectance and a high level of heterogeneous reflectance. The experimental results were compared with the results obtained from other existing methods, such as the traditional minimization, shape propagation, local, and linear approaches. Our method can accurately reconstruct a 3D face from a single 2D face image with an error of less than 6%.
APA, Harvard, Vancouver, ISO, and other styles
47

Wu, Siwei, Shan Xiao, Yihua Di, and Cheng Di. "3D Film Animation Image Acquisition and Feature Processing Based on the Latest Virtual Reconstruction Technology." Complexity 2021 (June 14, 2021): 1–11. http://dx.doi.org/10.1155/2021/2331306.

Full text
Abstract:
In this paper, the latest virtual reconstruction technology is used to conduct in-depth research on 3D film animation image acquisition and feature processing. The paper first proposes a time-division multiplexing method based on subpixel multiplexing to improve the resolution of images reconstructed by integral imaging. By studying the degradation effects in the reconstruction process of a 3D integral imaging system, it proposes to improve display resolution by increasing the pixel information of fixed display array units. Based on subpixel multiplexing, an algorithm that reuses the pixel information of 3D scene elemental images produces an elemental image array with new information; this array is then output rapidly on a high-frame-rate light-emitting diode (LED) screen, exploiting the persistence of human vision so that the group of elemental image arrays is shown through a flat display. In this way, the information capacity of the finite display array is increased and the display resolution of the reconstructed image is improved. For face reconstruction, a classification algorithm first determines the gender and expression attributes of the face in the input image and filters the corresponding subset of 3D face data in the database according to those attributes; sparse representation theory is then used to select prototype faces similar to the target face within the data subset; the selected prototype face samples are used to construct a sparse deformation model; and finally the feature points of the target face are used for model matching to reconstruct the target 3D face. Experimental results show that the algorithm reconstructs faces, including expressive faces, with high realism and accuracy.
APA, Harvard, Vancouver, ISO, and other styles
48

G L, Kavitha. "Effective Refinement of Distinctive Analysis of the Facial Matrices for Automatic Face Annotation." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 20, 2021): 1259–67. http://dx.doi.org/10.22214/ijraset.2021.34869.

Full text
Abstract:
We deal with real-world images that contain numerous faces captioned with corresponding names, some of which may be wrongly annotated. The face-naming technique we propose is self-regulated, exploits the weakly labeled image dataset, and aims at labeling each face in an image accurately. This is a challenging task because of the very large appearance variation in the images, as well as the potential mismatch between images and their captions. This paper introduces a method called Refined Low-Rank Regularization (RLRR), which productively employs the weakly named image information to determine a low-rank matrix obtained by examining the subspace structures of the reconstructed data; from this reconstruction a discriminative matrix is deduced. In addition, the Large Margin Nearest Neighbor (LMNN) method is used to label an image, yielding another kernel matrix based on the Mahalanobis distances of the data; the two consistent facial matrices are fused to enhance each other, and a new iterative method infers the name of each facial image. Experimental results on synthetic and real-world data sets validate the effectiveness of the proposed method.
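The Mahalanobis-distance kernel and the fusion of two facial matrices mentioned in this abstract can be sketched minimally as follows. This assumes the metric matrix `M` has already been learned (e.g., by LMNN); the function names and the simple averaging fusion are illustrative assumptions, not the paper's RLRR procedure.

```python
import numpy as np

def mahalanobis_kernel(X, M, gamma=1.0):
    """Kernel matrix from pairwise Mahalanobis distances
    d(i,j)^2 = (x_i - x_j)^T M (x_i - x_j), mapped through a Gaussian."""
    diff = X[:, None, :] - X[None, :, :]           # (n, n, d) pairwise differences
    d2 = np.einsum('ijd,de,ije->ij', diff, M, diff)
    return np.exp(-gamma * d2)

def fuse(K1, K2):
    """Fuse two consistent facial kernel matrices (here: a simple average)."""
    return 0.5 * (K1 + K2)

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))                    # 5 face feature vectors, 3-dim
M = np.eye(3)                                      # stand-in for an LMNN-learned metric
K = fuse(mahalanobis_kernel(X, M), mahalanobis_kernel(X, M, gamma=0.5))
```

The fused kernel `K` stays symmetric with unit diagonal, so it can feed any kernel-based labeling step downstream.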
APA, Harvard, Vancouver, ISO, and other styles
49

White, David, P. Jonathon Phillips, Carina A. Hahn, Matthew Hill, and Alice J. O'Toole. "Perceptual expertise in forensic facial image comparison." Proceedings of the Royal Society B: Biological Sciences 282, no. 1814 (September 7, 2015): 20151292. http://dx.doi.org/10.1098/rspb.2015.1292.

Full text
Abstract:
Forensic facial identification examiners are required to match the identity of faces in images that vary substantially, owing to changes in viewing conditions and in a person's appearance. These identifications affect the course and outcome of criminal investigations and convictions. Despite calls for research on sources of human error in forensic examination, existing scientific knowledge of face matching accuracy is based, almost exclusively, on people without formal training. Here, we administered three challenging face matching tests to a group of forensic examiners with many years' experience of comparing face images for law enforcement and government agencies. Examiners outperformed untrained participants and computer algorithms, thereby providing the first evidence that these examiners are experts at this task. Notably, computationally fusing responses of multiple experts produced near-perfect performance. Results also revealed qualitative differences between expert and non-expert performance. First, examiners' superiority was greatest at longer exposure durations, suggestive of more entailed comparison in forensic examiners. Second, experts were less impaired by image inversion than non-expert students, contrasting with face memory studies that show larger face inversion effects in high performers. We conclude that expertise in matching identity across unfamiliar face images is supported by processes that differ qualitatively from those supporting memory for individual faces.
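The "computationally fusing responses of multiple experts" result can be illustrated with a toy sketch: average the examiners' same/different ratings per image pair and threshold at zero. The rating scale and the simple mean rule are assumptions for illustration; the study does not publish its fusion code.

```python
import numpy as np

def fuse_examiner_ratings(ratings):
    """Fuse identity-comparison ratings from several examiners by averaging.
    ratings: (n_examiners, n_trials) array of scores, e.g. -3..+3, where
    positive means 'same person'. Returns fused scores and same/different calls."""
    fused = np.mean(np.asarray(ratings, dtype=float), axis=0)
    return fused, fused > 0

# Hypothetical ratings from three examiners on four image pairs.
ratings = [[ 2,  1, -3,  1],
           [ 3, -1, -2,  2],
           [ 1,  2, -3, -1]]
fused, same = fuse_examiner_ratings(ratings)
# fused → [2.0, 0.667, -2.667, 0.667]; same → [True, True, False, True]
```

Averaging cancels independent per-examiner errors, which is one plausible reading of why fused expert responses approached ceiling performance.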
APA, Harvard, Vancouver, ISO, and other styles
50

Bernstein, Michal, Jonathan Oron, Boaz Sadeh, and Galit Yovel. "An Integrated Face–Body Representation in the Fusiform Gyrus but Not the Lateral Occipital Cortex." Journal of Cognitive Neuroscience 26, no. 11 (November 2014): 2469–78. http://dx.doi.org/10.1162/jocn_a_00639.

Full text
Abstract:
Faces and bodies are processed by distinct category-selective brain areas. Neuroimaging studies have so far presented isolated faces and headless bodies, so little is known about whether and where faces and headless bodies are grouped into one object, as they appear in the real world. The current study examined whether a face presented above a body is represented as two separate images or as an integrated face–body representation in face- and body-selective brain areas, employing an fMRI competition paradigm. This paradigm has been shown to reveal a higher fMRI response to sequential than to simultaneous presentation of multiple stimuli (i.e., the competition effect), indicating competitive interactions among simultaneously presented multiple stimuli. We therefore hypothesized that if a face above a body is integrated into an image of a person whereas a body above a face is represented as two separate objects, the competition effect will be larger for the latter than for the former. Consistent with our hypothesis, our findings reveal a competition effect when a body is presented above a face, but not when a face is presented above a body, suggesting that a body above a face is represented as two separate objects whereas a face above a body is represented as an integrated image of a person. Interestingly, this integration of a face and a body into an image of a person was found in the fusiform, but not the lateral-occipital, face and body areas. We conclude that faces and bodies are processed separately at early stages and are integrated into a unified image of a person at mid-level stages of object processing.
APA, Harvard, Vancouver, ISO, and other styles