Academic literature on the topic 'Binary facial expression recognition'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Binary facial expression recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Binary facial expression recognition"

1

Tong, Ying, Kun Wang, and Liang Bao Jiao. "Facial Expression Recognition Using Directional Local Binary Pattern." Applied Mechanics and Materials 701-702 (December 2014): 395–99. http://dx.doi.org/10.4028/www.scientific.net/amm.701-702.395.

Full text
Abstract:
The local binary pattern (LBP) descriptor cannot efficiently describe gray-level changes along different directions in the characteristic regions of facial expressions. To address this, the directional local binary pattern (DLBP) is proposed to represent facial geometric characteristics. DLBP encodes the directional information of facial textures in three directions (horizontal, vertical and diagonal), which effectively describes facial muscles, wrinkles and other local deformations. Experimental results on the JAFFE database demonstrate the algorithm's effectiveness, with a recognition-rate improvement of nearly 5 percent over traditional LBP. Additional experiments verify the robustness and reliability of the proposed DLBP operator under Gaussian white noise and salt-and-pepper noise.
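For readers unfamiliar with the LBP operator that DLBP extends, the sketch below shows the standard 8-neighbour LBP code in plain NumPy. The directional variant described in the abstract would compare pixels only along the horizontal, vertical and diagonal axes; its exact encoding is defined in the paper itself, so the closing comment is an assumption rather than the authors' formula.

```python
import numpy as np

def lbp_code(patch):
    """Standard 8-neighbour LBP code for the centre pixel of a 3x3 patch."""
    center = patch[1, 1]
    # clockwise neighbour order starting at the top-left pixel
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [(1 if n >= center else 0) for n in neighbours]
    return sum(b << i for i, b in enumerate(bits))

def lbp_image(img):
    """Apply the LBP operator to every interior pixel of a grayscale image."""
    img = np.asarray(img, dtype=np.int32)
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2), dtype=np.uint8)
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = lbp_code(img[r:r + 3, c:c + 3])
    return out

# A DLBP-style variant (illustrative assumption, not the paper's exact definition)
# would compare only the neighbour pairs lying on the horizontal, vertical and
# diagonal axes, producing one short code per direction instead of one 8-bit code.
```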
2

Xia, Xiao Xiao, Zi Lu Ying, and Wen Jin Chu. "Facial Expression Recognition Based on Monogenic Binary Coding." Applied Mechanics and Materials 511-512 (February 2014): 437–40. http://dx.doi.org/10.4028/www.scientific.net/amm.511-512.437.

Full text
Abstract:
A new method based on Monogenic Binary Coding (MBC) is proposed for facial expression feature extraction and representation. Firstly, monogenic signal analysis is used to extract multi-scale magnitude, orientation and phase components. Secondly, Monogenic Binary Coding is used to encode the monogenic local variation and intensity in local regions of each extracted component at each scale, and local histograms are built. Then Blocked Fisher Linear Discrimination (BFLD) is used to reduce the dimensionality of the histogram features and to enhance discrimination. Finally, the three complementary components are fused for more effective facial expression recognition (FER). Experimental results on the Japanese Female Facial Expression (JAFFE) database show that the performance of the fusion method is even better than state-of-the-art local-feature-based FER methods such as Local Binary Pattern (LBP) + Sparse Representation (SRC), Local Phase Quantization (LPQ) + SRC, etc.
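As a rough illustration of the monogenic signal analysis step mentioned in the abstract, the sketch below computes single-scale magnitude, orientation and phase components via a Riesz transform in the frequency domain. The multi-scale band-pass filtering, the binary coding of local variation, the block histograms and the BFLD stage from the paper are not shown, so this is only an assumed simplification of the first stage.

```python
import numpy as np

def monogenic_components(img):
    """Single-scale monogenic decomposition of a grayscale image via the
    Riesz transform (no band-pass filtering, unlike the full MBC pipeline)."""
    img = np.asarray(img, dtype=float)
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    U, V = np.meshgrid(u, v)                    # frequency grids, shape (rows, cols)
    radius = np.sqrt(U ** 2 + V ** 2)
    radius[0, 0] = 1.0                          # avoid division by zero at DC
    H1, H2 = 1j * U / radius, 1j * V / radius   # Riesz transfer functions
    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(F * H1))          # first Riesz component
    r2 = np.real(np.fft.ifft2(F * H2))          # second Riesz component
    magnitude = np.sqrt(img ** 2 + r1 ** 2 + r2 ** 2)
    phase = np.arctan2(np.sqrt(r1 ** 2 + r2 ** 2), img)
    orientation = np.arctan2(r2, r1)
    return magnitude, orientation, phase
```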
3

Feng, X., M. Pietikäinen, and A. Hadid. "Facial expression recognition based on local binary patterns." Pattern Recognition and Image Analysis 17, no. 4 (2007): 592–98. http://dx.doi.org/10.1134/s1054661807040190.

Full text
4

Owusu, Ebenezer, Jacqueline Asor Kumi, and Justice Kwame Appati. "On Facial Expression Recognition Benchmarks." Applied Computational Intelligence and Soft Computing 2021 (September 17, 2021): 1–20. http://dx.doi.org/10.1155/2021/9917246.

Full text
Abstract:
Facial expression is an important form of nonverbal communication; it has been noted that 55% of what humans communicate is expressed through facial expressions. There are several applications of facial expressions in diverse fields including medicine, security, gaming, and even business enterprises. Automatic facial expression recognition is therefore currently a very active research area that attracts substantial funding, and there is a need to understand its trends well. This study reviews selected published works in the domain and analyzes them to determine the most common and useful algorithms employed. We selected published works from 2010 to 2021 and extracted, analyzed, and summarized the findings based on the most used techniques in feature extraction, feature selection, validation, databases, and classification. The results strongly indicate that local binary pattern (LBP), principal component analysis (PCA), support vector machine (SVM), CK+, and 10-fold cross-validation are the most widely used feature extraction method, feature selection method, classifier, database, and validation method, respectively. In line with these findings, the study provides recommendations, particularly for new researchers with little or no background, on which methods they can employ and strive to improve.
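A minimal sketch of the pipeline the review identifies as most common (LBP features, PCA, an SVM classifier and 10-fold cross-validation) is given below, using scikit-image and scikit-learn. The face array and labels are assumed inputs, e.g. aligned grayscale crops from CK+, and a single global histogram stands in for the per-region histograms usually concatenated in practice.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def lbp_histogram(gray, P=8, R=1):
    """Uniform LBP histogram of a grayscale face image (normalized to sum 1)."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def evaluate(faces, labels):
    """`faces` is an assumed (n_samples, H, W) array of aligned grayscale faces,
    `labels` the corresponding expression labels (e.g. loaded from CK+)."""
    X = np.array([lbp_histogram(f) for f in faces])
    clf = make_pipeline(StandardScaler(),
                        PCA(n_components=0.95),     # keep 95% of the variance
                        SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, labels, cv=10)  # 10-fold cross-validation
    return scores.mean()
```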
5

Davison, Adrian, Walied Merghani, and Moi Yap. "Objective Classes for Micro-Facial Expression Recognition." Journal of Imaging 4, no. 10 (2018): 119. http://dx.doi.org/10.3390/jimaging4100119.

Full text
Abstract:
Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different to normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset (Chinese Academy of Sciences Micro-expression II) are based on Action Units and self-reports, creating conflicts during machine learning training. We will show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP (Local Binary Patterns from Three Orthogonal Planes), HOOF (Histograms of Oriented Optical Flow) and HOG 3D (3D Histogram of Oriented Gradient) feature descriptors. The experiments are evaluated on two benchmark FACS (Facial Action Coding System) coded datasets: CASME II and SAMM (A Spontaneous Micro-Facial Movement). The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the result of the state-of-the-art 5-class emotional-based classification in CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.
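To make the LBP-TOP idea mentioned above concrete, the sketch below builds a simplified descriptor from the three orthogonal planes (XY, XT, YT) of an image-sequence volume. The real LBP-TOP accumulates histograms over every slice of each plane rather than the single middle slices used here, so this is an assumed simplification.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_top(volume, P=8, R=1):
    """Simplified LBP-TOP descriptor: uniform-LBP histograms from three
    orthogonal planes of a (T, H, W) grayscale sequence, concatenated."""
    T, H, W = volume.shape
    planes = [
        volume[T // 2, :, :],   # XY plane (a middle frame)
        volume[:, H // 2, :],   # XT plane (one row over time)
        volume[:, :, W // 2],   # YT plane (one column over time)
    ]
    feats = []
    for plane in planes:
        codes = local_binary_pattern(plane, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)
```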
6

Hu, De Kun, An Sheng Ye, Li Li, and Li Zhang. "Recognition of Facial Expression via Kernel PCA Network." Applied Mechanics and Materials 631-632 (September 2014): 498–501. http://dx.doi.org/10.4028/www.scientific.net/amm.631-632.498.

Full text
Abstract:
In this work, a kernel principal component analysis network (KPCANet) is proposed for classification of facial expressions in unconstrained images, comprising only very basic data-processing components: cascaded kernel principal component analysis (KPCA), binary hashing, and block-wise histograms. In the proposed model, KPCA is employed to learn multistage filter banks, followed by simple binary hashing and block histograms for indexing and pooling. For comparison and better understanding, we tested these basic networks extensively on several benchmark visual datasets (such as the JAFFE database, the CMU AMP face expression database, and a part of the Extended Cohn-Kanade (CK+) database). The results demonstrate the potential of KPCANet as a simple but highly competitive baseline for facial expression recognition.
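The binary hashing and block-wise histogram stages described above are generic to PCANet-style models, so a sketch under assumptions is possible even without the learned KPCA filters: threshold each filter response at zero, pack the bits into integer codes and pool them into block histograms. The block size and the shape of `responses` are illustrative assumptions.

```python
import numpy as np

def binary_hash_and_pool(responses, block=8):
    """Binary hashing and block-wise histogram pooling (PCANet/KPCANet style).

    `responses` is an assumed (L, H, W) stack of filter responses for one image,
    e.g. convolutions with L learned KPCA filters."""
    L, H, W = responses.shape
    # binary hashing: threshold each response map at 0 and pack bits into one code map
    codes = np.zeros((H, W), dtype=np.int64)
    for i in range(L):
        codes += (responses[i] > 0).astype(np.int64) << i
    # block-wise histograms of the integer codes
    hists = []
    for r in range(0, H - block + 1, block):
        for c in range(0, W - block + 1, block):
            patch = codes[r:r + block, c:c + block]
            hist, _ = np.histogram(patch, bins=2 ** L, range=(0, 2 ** L))
            hists.append(hist)
    return np.concatenate(hists)
```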
7

Cao, Nhan Thi, An Hoa Ton-That, and Hyung Il Choi. "Facial Expression Recognition Based on Local Binary Pattern Features and Support Vector Machine." International Journal of Pattern Recognition and Artificial Intelligence 28, no. 06 (2014): 1456012. http://dx.doi.org/10.1142/s0218001414560126.

Full text
Abstract:
Facial expression recognition has been researched extensively in recent years because of its applications in intelligent communication systems. Many methods have been developed that extract Local Binary Pattern (LBP) features and combine them with different classification techniques to improve facial expression recognition. In this work, we propose a novel method for recognizing facial expressions based on Local Binary Pattern features and a Support Vector Machine, with two effective improvements: the first is the preprocessing step, and the second is the method of dividing face images into non-overlapping square regions for extracting LBP features. The method was tested on three databases of different sizes: small (213 images), medium (2040 images) and large (5130 images). Experimental results show the effectiveness of our method, which obtains a markedly better recognition rate than other methods.
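The region-based feature described above, per-region LBP histograms from a non-overlapping grid concatenated into one vector, can be sketched as follows; the grid size is an illustrative assumption rather than the paper's setting, and the resulting vector would then be fed to an SVM.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def regional_lbp_features(gray, grid=(8, 8), P=8, R=1):
    """Concatenated uniform-LBP histograms from a grid of non-overlapping regions."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    rows, cols = codes.shape
    rh, cw = rows // grid[0], cols // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            region = codes[i * rh:(i + 1) * rh, j * cw:(j + 1) * cw]
            hist, _ = np.histogram(region, bins=P + 2, range=(0, P + 2),
                                   density=True)
            feats.append(hist)
    return np.concatenate(feats)   # feature vector for an SVM classifier
```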
8

Wu, Zhaoqi, Reziwanguli Xiamixiding, Atul Sajjanhar, Juan Chen, and Quan Wen. "Image Appearance-Based Facial Expression Recognition." International Journal of Image and Graphics 18, no. 02 (2018): 1850012. http://dx.doi.org/10.1142/s0219467818500122.

Full text
Abstract:
We investigate facial expression recognition (FER) based on image appearance. FER is performed using state-of-the-art classification approaches. Different approaches to preprocess face images are investigated. First, region-of-interest (ROI) images are obtained by extracting the facial ROI from raw images. FER of ROI images is used as the benchmark and compared with the FER of difference images. Difference images are obtained by computing the difference between the ROI images of neutral and peak facial expressions. FER is also evaluated for images which are obtained by applying the Local binary pattern (LBP) operator to ROI images. Further, we investigate different contrast enhancement operators to preprocess images, namely, the histogram equalization (HE) approach and a brightness preserving approach for histogram equalization. The classification experiments are performed for a convolutional neural network (CNN) and a pre-trained deep learning model. All experiments are performed on three public face databases, namely, Cohn–Kanade (CK+), JAFFE and FACES.
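A sketch of two of the preprocessing variants described above is given below using OpenCV: a difference image computed from neutral and peak-expression ROIs, and plain global histogram equalization (the brightness-preserving variant compared in the paper is not shown). The fixed crop size is an assumption.

```python
import cv2

def preprocess_pair(neutral_roi, peak_roi, size=(96, 96)):
    """Build a difference image from neutral/peak face ROIs and equalize contrast.

    Both inputs are assumed to be grayscale uint8 face crops (ROIs)."""
    neutral = cv2.resize(neutral_roi, size)
    peak = cv2.resize(peak_roi, size)
    # difference image: change between the peak expression and the neutral face
    diff = cv2.absdiff(peak, neutral)
    # global histogram equalization as a simple contrast-enhancement step
    equalized = cv2.equalizeHist(peak)
    return diff, equalized
```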
9

Jonnalagedda, Megha V., and Dharmpal D. Doye. "Radially Defined Local Binary Patterns for Facial Expression Recognition." International Journal of Computer Applications 119, no. 21 (2015): 17–22. http://dx.doi.org/10.5120/21360-4369.

Full text
10

Huang, Xiaohua, Guoying Zhao, Wenming Zheng, and Matti Pietikainen. "Spatiotemporal Local Monogenic Binary Patterns for Facial Expression Recognition." IEEE Signal Processing Letters 19, no. 5 (2012): 243–46. http://dx.doi.org/10.1109/lsp.2012.2188890.

Full text

Dissertations / Theses on the topic "Binary facial expression recognition"

1

Nordén, Frans, and Filip von Reis Marlevi. "A Comparative Analysis of Machine Learning Algorithms in Binary Facial Expression Recognition." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254259.

Full text
Abstract:
In this paper an analysis is conducted of whether a higher classification accuracy of facial expressions is possible when the seven basic emotional states are combined into a binary classification problem. Five different machine learning algorithms are implemented: support vector machines, extreme learning machines and three different convolutional neural networks (CNNs). The CNNs used were one conventional network, one based on VGG16 and transfer learning, and one based on residual learning, known as ResNet50. The experiment was conducted on two datasets: a small, uncontaminated one (JAFFE) and a large, contaminated one (FER2013). The highest accuracy was achieved with the CNNs, with ResNet50 obtaining the highest classification accuracy. When comparing the classification accuracy with the state-of-the-art accuracy, an improvement of around 0.09 was achieved on the FER2013 dataset. This dataset does, however, include some ambiguities regarding which facial expression is shown. It would therefore be of interest to conduct an experiment in which humans classify the facial expressions in the dataset in order to establish a benchmark.
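One straightforward way to frame the binary problem described in the thesis, collapsing the seven basic emotional states into two classes, is a one-vs-rest relabelling such as the sketch below; the actual grouping used in the thesis may differ, and the FER2013-style label order is an assumption.

```python
import numpy as np

# FER2013-style label order for the seven basic emotional states (assumed)
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def binarize_labels(y, positive_class="happy"):
    """Collapse 7-class expression labels into a binary target
    (positive_class vs. everything else). One illustrative way of framing the
    binary problem; the thesis may group the classes differently."""
    pos = EMOTIONS.index(positive_class)
    y = np.asarray(y)
    return (y == pos).astype(int)

# usage: y_binary = binarize_labels(y_7class, positive_class="happy")
```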
2

Deaney, Mogammat Waleed. "A Comparison of Machine Learning Techniques for Facial Expression Recognition." University of the Western Cape, 2018. http://hdl.handle.net/11394/6412.

Full text
Abstract:
Magister Scientiae - MSc (Computer Science). A machine translation system that can convert South African Sign Language (SASL) video to audio or text and vice versa would be beneficial to people who use SASL to communicate. Five fundamental parameters are associated with sign language gestures: hand location, hand orientation, hand shape, hand movement and facial expressions. The aim of this research is to recognise facial expressions and to compare both feature descriptors and machine learning techniques. This research used the Design Science Research (DSR) methodology. A DSR artefact was built which consisted of two phases. The first phase compared local binary patterns (LBP), compound local binary patterns (CLBP) and histograms of oriented gradients (HOG) using support vector machines (SVM). The second phase compared the SVM to artificial neural networks (ANN) and random forests (RF) using the most promising feature descriptor from the first phase, HOG. The performance was evaluated in terms of accuracy, robustness to classes, robustness to subjects and ability to generalise on both the Binghamton University 3D facial expression (BU-3DFE) and Cohn-Kanade (CK) datasets. The first phase of the evaluation showed HOG to be the best feature descriptor, followed by CLBP and LBP. The second showed ANN to be the best choice of machine learning technique, closely followed by the SVM and RF.
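The second phase of the comparison (HOG features evaluated with an SVM, an artificial neural network and a random forest) can be sketched roughly as below with scikit-image and scikit-learn; the hyperparameters, the 5-fold evaluation and the input arrays are assumptions standing in for the thesis's exact protocol.

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def hog_features(faces):
    """HOG descriptor per aligned grayscale face (assumed (n, H, W) array)."""
    return np.array([hog(f, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for f in faces])

def compare_classifiers(faces, labels):
    """Mean cross-validated accuracy of SVM, MLP (as the ANN) and random forest."""
    X = hog_features(faces)
    models = {
        "SVM": SVC(kernel="rbf"),
        "ANN": MLPClassifier(hidden_layer_sizes=(128,), max_iter=500),
        "RF": RandomForestClassifier(n_estimators=200),
    }
    return {name: cross_val_score(model, X, labels, cv=5).mean()
            for name, model in models.items()}
```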
3

Huang, X. (Xiaohua). "Methods for facial expression recognition with applications in challenging situations." Doctoral thesis, Oulun yliopisto, 2014. http://urn.fi/urn:isbn:9789526206561.

Full text
Abstract:
In recent years, facial expression recognition has become a useful scheme for computers to affectively understand the emotional state of human beings. Facial representation and facial expression recognition under unconstrained environments have been two critical issues for facial expression recognition systems. This thesis contributes to the research and development of facial expression recognition systems from two aspects: first, feature extraction for facial expression recognition, and second, applications to challenging conditions. Spatial and temporal feature extraction methods are introduced to provide effective and discriminative features for facial expression recognition. The thesis begins with a spatial feature extraction method. This descriptor exploits magnitude while it improves the local quantized pattern using improved vector quantization. It also makes the statistical patterns domain-adaptive and compact. Then, the thesis discusses two spatiotemporal feature extraction methods. The first method uses monogenic signal analysis as a preprocessing stage and extracts spatiotemporal features using the local binary pattern. The second method extracts sparse spatiotemporal features using sparse cuboids and the spatiotemporal local binary pattern. Both methods increase the discriminative capability of the local binary pattern in the temporal domain. Based on these feature extraction methods, three practical conditions, including illumination variations, facial occlusion and pose changes, are studied for the applications of facial expression recognition. First, with a near-infrared imaging technique, a discriminative component-based single feature descriptor is proposed to achieve a high degree of robustness and stability to illumination variations. Second, occlusion detection is proposed to dynamically detect occluded face regions, and a novel system is designed to handle facial occlusion effectively. Lastly, multi-view discriminative neighbor preserving embedding is developed to deal with pose changes, formulating multi-view facial expression recognition as a generalized eigenvalue problem. Experimental results on publicly available databases show the effectiveness of the proposed approaches for facial expression recognition.
4

Mushfieldt, Diego. "Robust facial expression recognition in the presence of rotation and partial occlusion." Thesis, University of the Western Cape, 2014. http://hdl.handle.net/11394/3367.

Full text
Abstract:
Magister Scientiae - MSc. This research proposes an approach to recognizing facial expressions in the presence of rotations and partial occlusions of the face. The research is in the context of automatic machine translation of South African Sign Language (SASL) to English. The proposed method is able to accurately recognize frontal facial images at an average accuracy of 75%. It also achieves a high recognition accuracy of 70% for faces rotated to 60°. It was also shown that the method is able to continue to recognize facial expressions even in the presence of full occlusions of the eyes, mouth and left/right sides of the face. The accuracy was as high as 70% for occlusion of some areas. An additional finding was that both the left and the right sides of the face are required for recognition. As an addition, the foundation was laid for a fully automatic facial expression recognition system that can accurately segment frontal or rotated faces in a video sequence.
5

Fraser, Matthew Paul. "Repetition priming of facial expression recognition." Thesis, University of York, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.431255.

Full text
6

Hsu, Shen-Mou. "Adaptation effects in facial expression recognition." Thesis, University of York, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.403968.

Full text
7

de la Cruz, Nathan. "Autonomous facial expression recognition using the facial action coding system." University of the Western Cape, 2016. http://hdl.handle.net/11394/5121.

Full text
Abstract:
Magister Scientiae - MSc. The South African Sign Language research group at the University of the Western Cape is in the process of creating a fully-fledged machine translation system to automatically translate between South African Sign Language and English. A major component of the system is the ability to accurately recognise facial expressions, which are used to convey emphasis, tone and mood within South African Sign Language sentences. Traditionally, facial expression recognition research has taken one of two paths: either recognising whole facial expressions, of which there are six (anger, disgust, fear, happiness, sadness and surprise, as well as the neutral expression), or recognising the fundamental components of facial expressions as defined by the Facial Action Coding System in the form of Action Units. Action Units are directly related to the motion of specific muscles in the face, combinations of which are used to form any facial expression. This research investigates enhanced recognition of whole facial expressions by means of a hybrid approach that combines traditional whole facial expression recognition with Action Unit recognition to achieve an enhanced classification approach.
8

Schulze, Martin Michael. "Facial expression recognition with support vector machines." [S.l. : s.n.], 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10952963.

Full text
9

Fan, Xijian. "Spatio-temporal framework on facial expression recognition." Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/88732/.

Full text
Abstract:
This thesis presents an investigation into two topics that are important in facial expression recognition: how to employ the dynamic information in facial expression image sequences, and how to efficiently extract context and other relevant information from different facial regions. This involves the development of spatio-temporal frameworks for recognising facial expressions. The thesis proposes three novel frameworks. The first framework uses sparse representation to extract features from patches of a face to improve recognition performance, applying part-based methods that are robust to image alignment errors. In addition, the use of sparse representation reduces the dimensionality of the features, improves their semantic meaning and represents a face image more efficiently. Since a facial expression involves a dynamic process, and that process contains information that describes the expression more effectively, it is important to capture such dynamic information so as to recognise facial expressions over the entire video sequence. Thus, the second framework uses two types of dynamic information to enhance recognition: a novel spatio-temporal descriptor based on PHOG (pyramid histogram of oriented gradients) to represent changes in facial shape, and dense optical flow to estimate the movement (displacement) of facial landmarks. The framework views an image sequence as a spatio-temporal volume and uses temporal information to represent the dynamic movement of facial landmarks associated with a facial expression. Specifically, a spatial descriptor representing local shape is extended to the spatio-temporal domain to capture changes in the local shape of facial sub-regions along the temporal dimension, giving 3D facial component sub-regions of the forehead, mouth, eyebrows and nose. An optical-flow descriptor is also employed to extract temporal information. The fusion of these two descriptors enhances the dynamic information and achieves better performance than the individual descriptors. The third framework also focuses on analysing the dynamics of facial expression sequences to represent spatio-temporal dynamic information (i.e., velocity). Two types of features are generated: a spatio-temporal shape representation that enhances the local spatial and dynamic information, and a dynamic appearance representation. In addition, an entropy-based method is introduced to capture the spatial relationships between different parts of a face by computing the entropy of different facial sub-regions.
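Of the components listed above, the dense optical flow used to estimate landmark displacement is easy to illustrate with OpenCV's Farnebäck implementation; the parameter values and the landmark array are assumptions, and the PHOG-based spatio-temporal shape descriptor is not shown.

```python
import cv2
import numpy as np

def landmark_displacements(prev_gray, next_gray, landmarks):
    """Dense Farneback optical flow between two consecutive grayscale frames,
    sampled at facial landmark positions (landmarks: assumed (N, 2) array of x, y)."""
    # positional arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    xs = landmarks[:, 0].astype(int)
    ys = landmarks[:, 1].astype(int)
    return flow[ys, xs]   # (N, 2) per-landmark displacement (dx, dy) in pixels
```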
10

Zhou, Yun. "Embedded Face Detection and Facial Expression Recognition." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/583.

Full text
Abstract:
Face detection has been applied in many fields such as surveillance, human-machine interaction, entertainment and health care. Two main reasons for the extensive attention on this research domain are: 1) there is a strong need for face recognition systems due to widespread security applications, and 2) face recognition is user friendly and fast, since it requires almost nothing of the user. The system is based on an ARM Cortex-A8 development board and includes porting a Linux operating system, developing drivers, and detecting faces using Haar-like features and the Viola-Jones algorithm. The face detection system uses the AdaBoost algorithm to detect human faces in the frames captured by the camera, and the thesis compares the pros and cons of several popular image processing algorithms. The facial expression recognition system involves face detection and emotion feature interpretation, and consists of an offline training part and an online testing part. An active shape model (ASM) is applied for facial feature point detection, optical flow for face tracking, and a support vector machine (SVM) for classification.
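The Viola-Jones/Haar-cascade detection step described above is a standard OpenCV routine; a minimal sketch is shown below, using the cascade file bundled with opencv-python (the detection parameters are assumptions, and the ASM, optical-flow and SVM expression-recognition stages are not shown).

```python
import cv2

def detect_faces(frame):
    """Viola-Jones face detection with OpenCV's bundled Haar cascade."""
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # returns a list of (x, y, w, h) bounding boxes, one per detected face
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     minSize=(30, 30))
```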

Books on the topic "Binary facial expression recognition"

1

Young, A. W. Facial Expression Recognition. Psychology Press, 2016. http://dx.doi.org/10.4324/9781315715933.

Full text
2

Bai, Xiang, Yi Fang, Yangqing Jia, et al., eds. Video Analytics. Face and Facial Expression Recognition. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-12177-8.

Full text
3

Ji, Qiang, Thomas B. Moeslund, Gang Hua, and Kamal Nasrollahi, eds. Face and Facial Expression Recognition from Real World Videos. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-13737-7.

Full text
4

Nasrollahi, Kamal, Cosimo Distante, Gang Hua, et al., eds. Video Analytics. Face and Facial Expression Recognition and Audience Measurement. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56687-0.

Full text
5

Wallbott, Harald G. Recognition of emotion from facial expression via imitation?: Some indirect evidence for an old theory. British Psychological Society, 1991.

Find full text
6

Our biometric future: Facial recognition technology and the culture of surveillance. New York University Press, 2011.

Find full text
7

Tsihrintzis, George A., ed. Visual affect recognition. IOS Press, 2010.

Find full text
8

The Oxford handbook of face perception. Oxford University Press, 2011.

Find full text
9

Fang, Yi, Chunhua Shen, Shuicheng Yan, et al. Video Analytics. Face and Facial Expression Recognition. Springer, 2019.

Find full text
10

Calder, Andrew J. Does Facial Identity and Facial Expression Recognition Involve Separate Visual Routes? Oxford University Press, 2011. http://dx.doi.org/10.1093/oxfordhb/9780199559053.013.0022.

Full text

Book chapters on the topic "Binary facial expression recognition"

1

Biswas, Suparna, and Jaya Sil. "Facial Expression Recognition Using Modified Local Binary Pattern." In Computational Intelligence in Data Mining - Volume 2. Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-2208-8_54.

Full text
2

Srinivasa Reddy, K., E. Sunil Reddy, and N. Baswanth. "Facial Expression Recognition by Considering Nonuniform Local Binary Patterns." In Emerging Research in Computing, Information, Communication and Applications. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-6001-5_55.

Full text
3

Yang, Chunjian, Min Hu, Yaqin Zheng, Xiaohua Wang, Yong Gao, and Hao Wu. "Facial Expression Recognition Based on Local Double Binary Mapped Pattern." In Advances in Multimedia Information Processing – PCM 2018. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00767-6_45.

Full text
4

Ying, Zilu, Linbo Cai, Junying Gan, and Sibin He. "Facial Expression Recognition with Local Binary Pattern and Laplacian Eigenmaps." In Emerging Intelligent Computing Technology and Applications. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04070-2_26.

Full text
5

Majumder, Anima, Laxmidhar Behera, and Venkatesh K. Subramanian. "Facial Expression Recognition with Regional Features Using Local Binary Patterns." In Computer Analysis of Images and Patterns. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-40261-6_67.

Full text
6

Jain, Sarika, Sunny Bagga, Ramchand Hablani, Narendra Chaudhari, and Sanjay Tanwani. "Facial Expression Recognition Using Local Binary Patterns with Different Distance Measures." In Intelligent Computing, Networking, and Informatics. Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1665-0_86.

Full text
7

Nigam, Swati, and Ashish Khare. "Multiscale Local Binary Patterns for Facial Expression-Based Human Emotion Recognition." In Computational Vision and Robotics. Springer India, 2015. http://dx.doi.org/10.1007/978-81-322-2196-8_9.

Full text
8

Nigam, Swati, Rajiv Singh, and A. K. Misra. "Local Binary Patterns Based Facial Expression Recognition for Efficient Smart Applications." In Security in Smart Cities: Models, Applications, and Challenges. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01560-2_13.

Full text
9

Sun, Yuechuan, and Jun Yu. "Facial Expression Recognition by Fusing Gabor and Local Binary Pattern Features." In MultiMedia Modeling. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-51814-5_18.

Full text
10

Saurav, Sumeet, Sanjay Singh, Madhulika Yadav, and Ravi Saini. "Image-Based Facial Expression Recognition Using Local Neighborhood Difference Binary Pattern." In Proceedings of 3rd International Conference on Computer Vision and Image Processing. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9088-4_38.

Full text

Conference papers on the topic "Binary facial expression recognition"

1

Wencheng Wang, Faliang Chang, Jianguo Zhao, and Zhenxue Chen. "Automatic facial expression recognition using local binary pattern." In 2010 8th World Congress on Intelligent Control and Automation (WCICA 2010). IEEE, 2010. http://dx.doi.org/10.1109/wcica.2010.5554337.

Full text
2

Caifeng Shan, Shaogang Gong, and P. W. McOwan. "Robust facial expression recognition using local binary patterns." In International Conference on Image Processing. IEEE, 2005. http://dx.doi.org/10.1109/icip.2005.1530069.

Full text
3

Song Guo and Qiuqi Ruan. "Facial expression recognition using local binary covariance matrices." In 4th IET International Conference on Wireless, Mobile & Multimedia Networks (ICWMMN 2011). IET, 2011. http://dx.doi.org/10.1049/cp.2011.0997.

Full text
4

Ekweariri, Augustine Nnamdi, and Kamil Yurtkan. "Facial expression recognition using enhanced local binary patterns." In 2017 9th International Conference on Computational Intelligence and Communication Networks (CICN). IEEE, 2017. http://dx.doi.org/10.1109/cicn.2017.8319353.

Full text
5

Verma, Rohit, and Mohamed-Yahia Dabbagh. "Fast facial expression recognition based on local binary patterns." In 2013 26th IEEE Canadian Conference on Electrical and Computer Engineering (CCECE). IEEE, 2013. http://dx.doi.org/10.1109/ccece.2013.6567728.

Full text
6

Wang, Yiding, and Meng Meng. "3D Facial Expression Recognition on Curvature Local Binary Patterns." In 2013 5th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC). IEEE, 2013. http://dx.doi.org/10.1109/ihmsc.2013.176.

Full text
7

Saha, Ashirbani, and Q. M. Jonathan Wu. "Facial expression recognition using curvelet based local binary patterns." In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2010. http://dx.doi.org/10.1109/icassp.2010.5494892.

Full text
8

Alsubari, Akram, D. N. Satange, and R. J. Ramteke. "Facial expression recognition using wavelet transform and local binary pattern." In 2017 2nd International Conference for Convergence in Technology (I2CT). IEEE, 2017. http://dx.doi.org/10.1109/i2ct.2017.8226147.

Full text
9

Ahmed, Faisal, Emam Hossain, A. S. M. Hossain Bari, and ASM Shihavuddin. "Compound local binary pattern (CLBP) for robust facial expression recognition." In 2011 IEEE 12th International Symposium on Computational Intelligence and Informatics (CINTI). IEEE, 2011. http://dx.doi.org/10.1109/cinti.2011.6108536.

Full text
10

Mohammad Shoyaib, M. Abdullah-Al-Wadud, Jo Moo Youl, Muhammad Mahbub Alam, and Oksam Chae. "Facial expression recognition based on a weighted Local Binary Pattern." In 2010 13th International Conference on Computer and Information Technology (ICCIT). IEEE, 2010. http://dx.doi.org/10.1109/iccitechn.2010.5723877.

Full text