To see the other types of publications on this topic, follow the link: Binary facial expression recognition.

Dissertations / Theses on the topic 'Binary facial expression recognition'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Binary facial expression recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Nordén, Frans, and Filip von Reis Marlevi. "A Comparative Analysis of Machine Learning Algorithms in Binary Facial Expression Recognition." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254259.

Abstract:
In this paper an analysis is conducted regarding whether a higher classification accuracy of facial expressions is possible. The approach used is to combine the seven basic emotional states into a binary classification problem. Five different machine learning algorithms are implemented: support vector machines (SVM), extreme learning machines (ELM) and three different convolutional neural networks (CNNs). The CNNs used were one conventional network, one based on VGG16 and transfer learning, and one based on residual learning, known as ResNet50. The experiment was conducted on two datasets: a small one containing no contamination, called JAFFE, and a large one containing contamination, called FER2013. The highest accuracy was achieved with the CNNs, among which ResNet50 had the highest classification accuracy. When comparing the classification accuracy with the state-of-the-art accuracy, an improvement of around 0.09 was achieved on the FER2013 dataset. This dataset does, however, include some ambiguities regarding which facial expression is shown. It would therefore be of interest to conduct an experiment in which humans classify the facial expressions in the dataset in order to establish a benchmark.
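The binary reformulation described in this abstract can be sketched in a few lines. The grouping below (positive vs. negative valence) is purely illustrative, since the abstract does not specify how the seven states are combined into two classes.

```python
# Illustrative sketch: collapse the seven basic emotional states into a
# binary problem. The positive/negative grouping here is hypothetical.
EMOTIONS = {"anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"}
POSITIVE = {"happiness", "surprise", "neutral"}  # assumed grouping

def to_binary(label: str) -> int:
    """Map a seven-class emotion label to a binary label (1 = positive)."""
    if label not in EMOTIONS:
        raise ValueError(f"unknown emotion: {label}")
    return 1 if label in POSITIVE else 0

binary = [to_binary(l) for l in ["happiness", "fear", "anger", "surprise"]]
```

Any of the five classifiers compared in the thesis (SVM, ELM or the CNNs) could then be trained against such binary targets.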
2

Deaney, Mogammat Waleed. "A Comparison of Machine Learning Techniques for Facial Expression Recognition." University of the Western Cape, 2018. http://hdl.handle.net/11394/6412.

Abstract:
Magister Scientiae - MSc (Computer Science). A machine translation system that can convert South African Sign Language (SASL) video to audio or text and vice versa would be beneficial to people who use SASL to communicate. Five fundamental parameters are associated with sign language gestures: hand location, hand orientation, hand shape, hand movement and facial expressions. The aim of this research is to recognise facial expressions and to compare both feature descriptors and machine learning techniques. This research used the Design Science Research (DSR) methodology. A DSR artefact was built which consisted of two phases. The first phase compared local binary patterns (LBP), compound local binary patterns (CLBP) and histograms of oriented gradients (HOG) using support vector machines (SVM). The second phase compared the SVM to artificial neural networks (ANN) and random forests (RF) using the most promising feature descriptor from the first phase, HOG. The performance was evaluated in terms of accuracy, robustness to classes, robustness to subjects and ability to generalise on both the Binghamton University 3D facial expression (BU-3DFE) and Cohn-Kanade (CK) datasets. The first-phase evaluation showed HOG to be the best feature descriptor, followed by CLBP and LBP. The second phase showed ANN to be the best choice of machine learning technique, closely followed by SVM and RF.
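The basic LBP operator compared in the first phase can be sketched in plain NumPy. This is a minimal 3x3 version; the thesis additionally evaluates CLBP and HOG, which are not shown.

```python
import numpy as np

def lbp_code(patch: np.ndarray) -> int:
    """Basic 3x3 local binary pattern: threshold the 8 neighbours
    against the centre pixel and pack the resulting bits."""
    c = patch[1, 1]
    # neighbour coordinates, clockwise from the top-left corner
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[r, s] >= c else 0 for r, s in idx]
    return sum(b << i for i, b in enumerate(bits))

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 8, 3]])
code = lbp_code(patch)  # one 8-bit texture code per pixel neighbourhood
```

In a full pipeline, the codes of all pixels in a face region are accumulated into a histogram, and those histograms form the feature vector passed to the SVM.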
3

Huang, X. (Xiaohua). "Methods for facial expression recognition with applications in challenging situations." Doctoral thesis, Oulun yliopisto, 2014. http://urn.fi/urn:isbn:9789526206561.

Abstract:
In recent years, facial expression recognition has become a useful scheme for computers to affectively understand the emotional state of human beings. Facial representation and facial expression recognition under unconstrained environments have been two critical issues for facial expression recognition systems. This thesis contributes to the research and development of facial expression recognition systems from two aspects: first, feature extraction for facial expression recognition, and second, applications to challenging conditions. Spatial and temporal feature extraction methods are introduced to provide effective and discriminative features for facial expression recognition. The thesis begins with a spatial feature extraction method. This descriptor exploits magnitude while improving the local quantized pattern using improved vector quantization. It also makes the statistical patterns domain-adaptive and compact. The thesis then discusses two spatiotemporal feature extraction methods. The first uses monogenic signal analysis as a preprocessing stage and extracts spatiotemporal features using the local binary pattern. The second extracts sparse spatiotemporal features using sparse cuboids and the spatiotemporal local binary pattern. Both methods increase the discriminative capability of the local binary pattern in the temporal domain. Building on these feature extraction methods, three practical conditions, namely illumination variations, facial occlusion and pose changes, are studied for applications of facial expression recognition. First, using near-infrared imaging, a discriminative component-based single feature descriptor is proposed to achieve a high degree of robustness and stability to illumination variations. Second, occlusion detection is proposed to dynamically detect occluded face regions, and a novel system is further designed to handle facial occlusion effectively. Lastly, multi-view discriminative neighbor preserving embedding is developed to deal with pose changes, formulating multi-view facial expression recognition as a generalized eigenvalue problem. Experimental results on publicly available databases show the effectiveness of the proposed approaches for facial expression recognition applications.
4

Mushfieldt, Diego. "Robust facial expression recognition in the presence of rotation and partial occlusion." Thesis, University of Western Cape, 2014. http://hdl.handle.net/11394/3367.

Abstract:
Magister Scientiae - MSc. This research proposes an approach to recognizing facial expressions in the presence of rotations and partial occlusions of the face. The research is in the context of automatic machine translation of South African Sign Language (SASL) to English. The proposed method is able to accurately recognize frontal facial images at an average accuracy of 75%. It also achieves a high recognition accuracy of 70% for faces rotated to 60°. It was also shown that the method is able to continue to recognize facial expressions even in the presence of full occlusions of the eyes, mouth and left/right sides of the face. The accuracy was as high as 70% for occlusion of some areas. An additional finding was that both the left and the right sides of the face are required for recognition. In addition, the foundation was laid for a fully automatic facial expression recognition system that can accurately segment frontal or rotated faces in a video sequence.
5

Fraser, Matthew Paul. "Repetition priming of facial expression recognition." Thesis, University of York, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.431255.

6

Hsu, Shen-Mou. "Adaptation effects in facial expression recognition." Thesis, University of York, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.403968.

7

de la Cruz, Nathan. "Autonomous facial expression recognition using the facial action coding system." University of the Western Cape, 2016. http://hdl.handle.net/11394/5121.

Abstract:
Magister Scientiae - MSc. The South African Sign Language research group at the University of the Western Cape is in the process of creating a fully-fledged machine translation system to automatically translate between South African Sign Language and English. A major component of the system is the ability to accurately recognise facial expressions, which are used to convey emphasis, tone and mood within South African Sign Language sentences. Traditionally, facial expression recognition research has taken one of two paths: either recognising whole facial expressions, of which there are six (anger, disgust, fear, happiness, sadness and surprise, as well as the neutral expression), or recognising the fundamental components of facial expressions as defined by the Facial Action Coding System in the form of Action Units. Action Units are directly related to the motion of specific muscles in the face, combinations of which are used to form any facial expression. This research investigates enhanced recognition of whole facial expressions by means of a hybrid approach that combines traditional whole facial expression recognition with Action Unit recognition to achieve an enhanced classification approach.
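The FACS-based side of such a hybrid approach can be illustrated with a toy lookup from detected Action Units to whole expressions. The AU sets below follow common FACS descriptions but are simplified, and the Jaccard matcher is a stand-in for the thesis's actual classifier.

```python
# Hypothetical AU-to-expression table (simplified FACS combinations).
EXPRESSION_AUS = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip depressor
}

def match_expression(detected_aus: set) -> str:
    """Pick the expression whose AU set best overlaps the detected AUs
    (Jaccard similarity; a toy stand-in for a trained classifier)."""
    def jaccard(aus):
        return len(aus & detected_aus) / len(aus | detected_aus)
    return max(EXPRESSION_AUS, key=lambda e: jaccard(EXPRESSION_AUS[e]))
```

A real hybrid system would combine such AU evidence with the output of a whole-expression classifier rather than rely on a fixed table.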
8

Schulze, Martin Michael. "Facial expression recognition with support vector machines." [S.l. : s.n.], 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10952963.

9

Fan, Xijian. "Spatio-temporal framework on facial expression recognition." Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/88732/.

Abstract:
This thesis presents an investigation into two topics that are important in facial expression recognition: how to employ the dynamic information from facial expression image sequences and how to efficiently extract context and other relevant information from different facial regions. This involves the development of spatio-temporal frameworks for recognising facial expression. The thesis proposes three novel frameworks. The first framework uses sparse representation to extract features from patches of a face to improve recognition performance, applying part-based methods that are robust to image misalignment. In addition, the use of sparse representation reduces the dimensionality of the features, improves their semantic meaning and represents a face image more efficiently. Since a facial expression involves a dynamic process, and that process contains information that describes the expression more effectively, it is important to capture such dynamic information so as to recognise facial expressions over the entire video sequence. The second framework therefore uses two types of dynamic information to enhance recognition: a novel spatio-temporal descriptor based on PHOG (pyramid histogram of oriented gradients) to represent changes in facial shape, and dense optical flow to estimate the movement (displacement) of facial landmarks. The framework views an image sequence as a spatio-temporal volume and uses temporal information to represent the dynamic movement of the facial landmarks associated with a facial expression. Specifically, a spatial descriptor representing local shape is extended to the spatio-temporal domain to capture changes in the local shape of facial sub-regions in the temporal dimension, giving 3D facial component sub-regions of the forehead, mouth, eyebrows and nose. An optical-flow descriptor is also employed to extract temporal information. The fusion of these two descriptors enhances the dynamic information and achieves better performance than either descriptor alone. The third framework also focuses on analysing the dynamics of facial expression sequences to represent spatio-temporal dynamic information (i.e., velocity). Two types of features are generated: a spatio-temporal shape representation to enhance the local spatial and dynamic information, and a dynamic appearance representation. In addition, an entropy-based method is introduced to capture the spatial relationship between different parts of the face by computing the entropy values of its sub-regions.
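The building block of a PHOG-style shape descriptor, a magnitude-weighted histogram of gradient orientations, can be sketched in plain NumPy. The pyramid levels, the spatio-temporal extension and the optical-flow fusion described above are omitted.

```python
import numpy as np

def orientation_histogram(img: np.ndarray, bins: int = 8) -> np.ndarray:
    """Magnitude-weighted histogram of unsigned gradient orientations,
    the per-cell building block of (P)HOG descriptors."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi          # unsigned orientation in [0, pi)
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hist = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)
    return hist / (hist.sum() + 1e-12)        # L1-normalise

img = np.arange(16, dtype=float).reshape(4, 4)  # toy "image" with a constant gradient
hist = orientation_histogram(img)
```

A pyramid version computes such histograms over successively finer grid cells and concatenates them; the spatio-temporal variant in the thesis extends the cells into the time dimension.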
10

Zhou, Yun. "Embedded Face Detection and Facial Expression Recognition." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/583.

Abstract:
Face detection has been applied in many fields such as surveillance, human-machine interaction, entertainment and health care. Two main reasons for the extensive attention on this research domain are: 1) a strong need for face recognition systems is evident due to the widespread demands of security, and 2) face recognition is user-friendly and fast, since it requires almost nothing of the user. The system is based on an ARM Cortex-A8 development board and includes porting of the Linux operating system, development of drivers, and face detection using Haar-like features and the Viola-Jones algorithm. The face detection system uses the AdaBoost algorithm to detect human faces in the frames captured by the camera. The thesis reviews the pros and cons of several popular image-processing algorithms. The facial expression recognition system involves face detection and emotion-feature interpretation, and consists of an offline training part and an online test part. Active shape models (ASM) for facial feature point detection, optical flow for face tracking, and support vector machines (SVM) for classification are applied in this research.
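The AdaBoost idea behind the Viola-Jones detector can be illustrated with a minimal stump-boosting sketch on toy 1-D data. Real Haar-feature extraction and the cascade structure are omitted, so this is only the boosting core, not the detector itself.

```python
import numpy as np

def train_adaboost(x, y, rounds=3):
    """AdaBoost with decision stumps: greedily pick the weighted-error-
    minimising threshold, then reweight the samples."""
    n = len(x)
    w = np.ones(n) / n
    stumps = []
    for _ in range(rounds):
        best = None
        for thr in x:
            for sign in (1, -1):
                pred = sign * np.where(x >= thr, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, sign, pred)
        err, thr, sign, pred = best
        err = max(err, 1e-10)                      # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)      # stump weight
        w = w * np.exp(-alpha * y * pred)          # upweight mistakes
        w /= w.sum()
        stumps.append((alpha, thr, sign))
    return stumps

def predict(stumps, x):
    score = sum(a * s * np.where(x >= t, 1, -1) for a, t, s in stumps)
    return np.where(score >= 0, 1, -1)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # toy "feature" values
y = np.array([-1, -1, -1, 1, 1, 1])            # toy labels
model = train_adaboost(x, y)
```

In Viola-Jones, each weak learner is such a stump over one Haar-feature response, and the cascade chains boosted classifiers so that easy negatives are rejected early.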
11

Tang, Wing Hei Iris. "Facial expression recognition for a sociable robot." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/46467.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 53-54). In order to develop a sociable robot that can operate in the social environment of humans, we need to develop a robot system that can recognize the emotions of the people it interacts with and can respond to them accordingly. In this thesis, I present a facial expression system that recognizes the facial features of human subjects in an unsupervised manner and interprets the facial expressions of the individuals. The facial expression system is integrated with an existing emotional model for the expressive humanoid robot, Mertz. By Wing Hei Iris Tang. M.Eng.
12

Ainsworth, Kirsty. "Facial expression recognition and the autism spectrum." Thesis, University of Glasgow, 2016. http://theses.gla.ac.uk/8287/.

Abstract:
An atypical recognition of facial expressions of emotion is thought to be part of the characteristics associated with an autism spectrum disorder diagnosis (DSM-5, 2013). However, despite over three decades of experimental research into facial expression recognition (FER) in autism spectrum disorder (ASD), conflicting results are still reported (Harms, Martin, and Wallace, 2010). The thesis presented here aims to explore FER in ASD using novel techniques, as well as assessing the contribution of a co-occurring emotion-blindness condition (alexithymia) and autism-like personality traits. Chapter 1 provides a review of the current literature surrounding emotion perception in ASD, focussing specifically on evidence for, and against, atypical recognition of facial expressions of emotion in ASD. The experimental chapters presented in this thesis (Chapters 2, 3 and 4) explore FER in adults with ASD, children with ASD and in the wider, typical population. In Chapter 2, a novel psychophysics method is presented along with its use in assessing FER in individuals with ASD. Chapter 2 also presents a research experiment in adults with ASD, indicating that FER is similar compared to typically developed (TD) adults in terms of the facial muscle components (action units; AUs), the intensity levels and the timing components utilised from the stimuli. In addition to this, individual differences within groups are shown, indicating that better FER ability is associated with lower levels of ASD symptoms in adults with ASD (measured using the ADOS; Lord et al. (2000)) and lower levels of autism-like personality traits in TD adults (measured using the Autism-Spectrum Quotient; (S. Baron-Cohen, Wheelwright, Skinner, Martin, and Clubley, 2001)). Similarly, Chapter 3 indicates that children with ASD are not significantly different from TD children in their perception of facial expressions of emotion as assessed using AU, intensity and timing components. 
Chapter 4 assesses the contribution of alexithymia and autism-like personality traits (AQ) to FER ability in a sample of individuals from the typical population. This chapter provides evidence against the idea that alexithymia levels predict FER ability over and above AQ levels. The importance of the aforementioned results are discussed in Chapter 5 in the context of previous research in the field, and in relation to established theoretical approaches to FER in ASD. In particular, arguments are made that FER cannot be conceptualised under an ‘all-or-nothing’ framework, which has been implied for a number of years (Harms et al., 2010). Instead it is proposed that FER is a multifaceted skill in individuals with ASD, which varies according to an individual’s skillset. Lastly, limitations of the research presented in this thesis are discussed in addition to suggestions for future research.
13

Wang, Shihai. "Boosting learning applied to facial expression recognition." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.511940.

14

Besel, Lana Diane Shyla. "Empathy : the role of facial expression recognition." Thesis, University of British Columbia, 2006. http://hdl.handle.net/2429/30730.

Abstract:
This research examined whether people with higher dispositional empathy are better at recognizing facial expressions of emotion at faster presentation speeds. Facial expressions of emotion, taken from Pictures of Facial Affect (Ekman & Friesen, 1976), were presented at two different durations: 47 ms and 2008 ms. Participants were 135 undergraduate students. They identified the emotion displayed in the expression from a list of the basic emotions. The first part of this research explored connections between expression recognition and the common cognitive empathy/emotional empathy distinction. Two factors from the Interpersonal Reactivity Scale (IRS; Davis, 1983) measured self-reported tendencies to experience cognitive empathy and emotional empathy: Perspective Taking (IRS-PT) and Empathic Concern (IRS-EC), respectively. Results showed that emotional empathy significantly positively predicted performance at 47 ms, but not at 2008 ms, and cognitive empathy did not significantly predict performance at either duration. The second part examined empathy deficits. The kinds of emotional empathy deficits that comprise psychopathy were measured by the Self-Report Psychopathy Scale (SRP-III; Paulhus, Hemphill, & Hare, in press). Cognitive empathy deficits were explored using the Empathy Quotient (EQ; Shaw et al., 2004). Results showed that the callous affect factor of the SRP (SRP-CA) was the only significant predictor at 47 ms, with higher callous affect scores associated with lower performance. SRP-CA is a deficit in emotional empathy, and thus these results match the first part's results. At 2008 ms, the social skills factor of the EQ was significantly positively predictive, indicating that people with less social competence had more trouble recognizing facial expressions at longer presentation durations. Neither the total scores for the SRP nor the EQ were significant predictors of identification accuracy at 47 ms and 2008 ms. Together, the results suggest that a disposition to react emotionally to the emotions of others, and to remain other-focussed, provides a specific advantage for accurately recognizing briefly presented facial expressions, compared to people with lower dispositional emotional empathy.
15

Varanka, T. (Tuomas). "A comparative study of facial micro-expression recognition." Bachelor's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201909282947.

Abstract:
Facial micro-expressions are involuntary and rapid facial movements that reveal hidden emotions. Spotting and recognition of micro-expressions is a hard task even for humans due to their low magnitude and short duration compared to macro-expressions. In this thesis we look at why micro-expressions are important, at datasets that contain micro-expressions for training automatic systems, and at how modern computational methods can be used to automatically recognize micro-expressions. Furthermore, we experiment with several representative methods from the literature and compare their performance.
16

Aly, Sherin Fathy Mohammed Gaber. "Techniques for Facial Expression Recognition Using the Kinect." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/89220.

Abstract:
Facial expressions convey non-verbal cues. Humans use facial expressions to show emotions, which play an important role in interpersonal relations and can be of use in many applications involving psychology, human-computer interaction, health care, e-commerce, and many others. Although humans recognize facial expressions in a scene with little or no effort, reliable expression recognition by machine is still a challenging problem. Automatic facial expression recognition (FER) has several related problems: face detection, face representation, extraction of the facial expression information, and classification of expressions, particularly under conditions of input data variability such as illumination and pose variation. A system that performs these operations accurately and in real time would be a major step forward in achieving human-like interaction between man and machine. This document introduces novel approaches for the automatic recognition of the basic facial expressions, namely happiness, surprise, sadness, fear, disgust, anger, and neutral, using a relatively low-resolution, noisy sensor such as the Microsoft Kinect. Such sensors are capable of fast data collection, but the low-resolution, noisy data present unique challenges when identifying subtle changes in appearance. This dissertation presents the work that has been done to address these challenges and the corresponding results. The lack of Kinect-based FER datasets motivated this work to build two Kinect-based RGBD+time FER datasets that include facial expressions of adults and children. To the best of our knowledge, they are the first FER-oriented datasets that include children. Availability of children's data is important for research focused on children (e.g., psychology studies on facial expressions of children with autism), and also allows researchers to conduct deeper studies on automatic FER by analyzing possible differences between data coming from adults and children.
The key contributions of this dissertation are both empirical and theoretical. The empirical contributions include the design and successful testing of three FER systems that outperform existing FER systems either when tested on public datasets or in real time. One proposed approach automatically tunes itself to the given 3D data by identifying the best distance metric that maximizes the system accuracy. Compared to traditional approaches where a fixed distance metric is employed for all classes, the presented adaptive approach had better recognition accuracy, especially in non-frontal poses. Another proposed system combines high-dimensional feature vectors extracted from 2D and 3D modalities via a novel fusion technique. This system achieved 80% accuracy, which outperforms the state of the art on the public VT-KFER dataset by more than 13%. The third proposed system has been designed and successfully tested to recognize the six basic expressions plus neutral in real time using only 3D data captured by the Kinect. When tested on a public FER dataset, it achieved 67% (7% higher than other 3D-based FER systems) in multi-class mode and 89% (i.e., 9% higher than the state of the art) in binary mode. When the system was tested in real time on 20 children, it achieved over 73% on a reduced set of expressions. To the best of our knowledge, this is the first known system that has been tested on a relatively large dataset of children in real time. The theoretical contributions include 1) the development of a novel feature selection approach that ranks features based on their class separability, and 2) the development of the Dual Kernel Discriminant Analysis (DKDA) feature fusion algorithm. This latter approach addresses the problem of fusing high-dimensional noisy data that are highly nonlinearly distributed.
17

Kokin, Jessica. "Facial Expression Recognition and Interpretation in Shy Children." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32079.

Abstract:
Two studies were conducted in which we examined the relation between shyness and facial expression processing in children. In Study 1, facial expression recognition was examined by asking 97 children ages 12 to 14 years to identify six different expressions displayed at 50% and 100% intensity, as well as a neutral expression. In Study 2, the focus shifted from the recognition of emotions to the interpretation of emotions. In this study, 123 children aged 12 to 14 years were asked a series of questions regarding how they would perceive different facial expressions. Findings from Study 1 showed that, in the case of shy boys, higher levels of shyness were related to lower recognition accuracy for sad faces displayed at 50% intensity. However, in most cases, shyness was not related to facial expression recognition. The results from Study 2 suggested broader implications for shy children. The findings of Study 2 demonstrated that shyness is predictive of biased facial expression interpretation and that rejection sensitivity mediates this relation. Overall the results of these two studies add to the research on facial expression processing in shy children and suggest that cognitive biases in the way facial expressions are interpreted may be related to shy children’s discomfort in social situations.
18

Kreklewetz, Kimberly. "Facial affect recognition in psychopathic offenders /." Burnaby B.C. : Simon Fraser University, 2005. http://ir.lib.sfu.ca/handle/1892/2166.

19

Sierra, Brandon Luis. "COMPARING AND IMPROVING FACIAL RECOGNITION METHOD." CSUSB ScholarWorks, 2017. https://scholarworks.lib.csusb.edu/etd/575.

Abstract:
Facial recognition is the process in which a sample face can be correctly identified by a machine amongst a group of different faces. With the never-ending need for improvement in the fields of security, surveillance, and identification, facial recognition is becoming increasingly important. Considering this importance, it is imperative that the correct faces are recognized and the error rate is as minimal as possible. Despite the wide variety of current methods for facial recognition, there is no clear cut best method. This project reviews and examines three different methods for facial recognition: Eigenfaces, Fisherfaces, and Local Binary Patterns to determine which method has the highest accuracy of prediction rate. The three methods are reviewed and then compared via experiments. OpenCV, CMake, and Visual Studios were used as tools to conduct experiments. Analysis were conducted to identify which method has the highest accuracy of prediction rate with various experimental factors. By feeding a number of sample images of different people which serve as experimental subjects. The machine is first trained to generate features for each person among the testing subjects. Then, a new image was tested against the “learned” data and be labeled as one of the subjects. With experimental data analysis, the Eigenfaces method was determined to have the highest prediction rate of the three algorithms tested. The Local Binary Pattern Histogram (LBP) was found to have the lowest prediction rate. Finally, LBP was selected for the algorithm improvement. In this project, LBP was improved by identifying the most significant regions of the histograms for each person in training time. The weights of each region are assigned depending on the gray scale contrast. At recognition time, given a new face, different weights are assigned to different regions to increase prediction rate and also speed up the real time recognition. 
The experimental results confirmed the performance improvement.
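The region-weighted LBP improvement described in this abstract can be sketched in a few lines of NumPy. This is a hedged illustration only: the 4x4 grid, the 256-bin histograms, and the weighting interface are assumptions, not the thesis's exact configuration.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP code for each interior pixel of a grayscale image."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    # offsets of the 8 neighbours, clockwise from top-left
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = g[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        codes |= (neigh >= c).astype(np.int32) << bit
    return codes

def weighted_lbp_histogram(gray, grid=(4, 4), weights=None):
    """Concatenate per-region 256-bin LBP histograms, scaled by region weights."""
    codes = lbp_image(gray)
    rows = np.array_split(np.arange(codes.shape[0]), grid[0])
    cols = np.array_split(np.arange(codes.shape[1]), grid[1])
    if weights is None:
        weights = np.ones(grid)  # unit weights; the thesis assigns them by contrast
    feats = []
    for i, r in enumerate(rows):
        for j, cseg in enumerate(cols):
            region = codes[np.ix_(r, cseg)]
            h, _ = np.histogram(region, bins=256, range=(0, 256))
            feats.append(weights[i][j] * h / max(region.size, 1))
    return np.concatenate(feats)
```

Down-weighting low-contrast regions in `weights` then directly emphasizes the more discriminative parts of the face during histogram matching.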
APA, Harvard, Vancouver, ISO, and other styles
20

Cheung, Ching-ying Crystal. "Facial emotion recognition after subcortical cerebrovascular diseases /." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk:8888/cgi-bin/hkuto%5Ftoc%5Fpdf?B23425027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Mistry, Kamlesh. "Intelligent facial expression recognition with unsupervised facial point detection and evolutionary feature optimization." Thesis, Northumbria University, 2016. http://nrl.northumbria.ac.uk/36011/.

Full text
Abstract:
Facial expression is an effective channel for conveying emotions and feelings. Many shape-based, appearance-based or hybrid methods for automatic facial expression recognition have been proposed. However, it is still a challenging task to identify emotions from facial images with scaling differences, pose variations, and occlusions. In addition, it is also difficult to identify the significant discriminating facial features that could represent the characteristics of each expression, because of the subtlety and variability of facial expressions. In order to deal with the above challenges, this research proposes two novel approaches: unsupervised facial point detection and texture-based facial expression recognition with feature optimisation. First, unsupervised automatic facial point detection, integrated with regression-based intensity estimation for facial Action Units (AUs) and emotion clustering, is proposed to deal with challenges such as scaling differences, pose variations, and occlusions. The proposed facial point detector can detect 54 facial points in images of faces with occlusions, pose variations and scaling differences. We conduct AU intensity estimation for 18 selected AUs using support vector regression and neural networks respectively. FCM is subsequently employed to recognise the seven basic emotions as well as neutral expressions, and also shows great potential for detecting compound and newly arriving novel emotion classes. The second proposed system focuses on a texture-based approach for facial expression recognition, proposing a novel variant of the local binary pattern for discriminative feature extraction and Particle Swarm Optimization (PSO)-based feature optimisation. Multiple classifiers are applied for recognising the seven facial expressions. Finally, evaluations are conducted to show the efficiency of the above two proposed systems.
Evaluated using well-known facial databases (Helen, Labelled Faces in the Wild, PUT, and CK+), the proposed unsupervised facial point detector outperforms other supervised landmark detection models dramatically and shows excellent robustness and capability in dealing with rotations, occlusions and illumination changes. Moreover, a comprehensive evaluation is also conducted for the proposed texture-based facial expression recognition with mGA-embedded PSO feature optimisation. Evaluated using the CK+ and MMI benchmark databases, the experimental results indicate that it outperforms other state-of-the-art metaheuristic search methods and facial emotion recognition research reported in the literature by a significant margin.
APA, Harvard, Vancouver, ISO, and other styles
22

張晶凝 and Ching-ying Crystal Cheung. "Facial emotion recognition after subcortical cerebrovascular diseases." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31224155.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Zhan, Ce. "Facial expression recognition for multi-player on-line games." School of Computer Science and Software Engineering, 2008. http://ro.uow.edu.au/theses/100.

Full text
Abstract:
Multi-player on-line games (MOGs) have become increasingly popular because of the opportunity they provide for collaboration, communication and interaction. However, compared with ordinary human communication, MOGs still have several limitations, especially in communication using facial expressions. Although detailed facial animation has already been achieved in a number of MOGs, players have to use text commands to control the expressions of avatars. This thesis proposes an automatic expression recognition system that can be integrated into a MOG to control the facial expressions of avatars. To meet the specific requirements of such a system, a number of algorithms are studied, tailored and extended. In particular, the Viola-Jones face detection method is modified in several aspects to detect small-scale key facial components with wide shape variations. In addition, a new coarse-to-fine method is proposed for extracting 20 facial landmarks from image sequences. The proposed system has been evaluated on a number of databases different from the training database and achieved an 83% recognition rate for 4 emotional state expressions. During the real-time test, the system achieved an average frame rate of 13 fps for 320 x 240 images on a PC with a 2.80 GHz Intel Pentium. Testing results have shown that the system has a practical range of working distances (from user to camera), and is robust against variations in lighting and backgrounds.
APA, Harvard, Vancouver, ISO, and other styles
24

Hall, Jessica. "Psychological mechanisms underlying sex differences in facial expression recognition." Thesis, University of Sussex, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.506818.

Full text
Abstract:
A female advantage is observed in the accurate recognition of mental and emotional states from the face (Hall, 1978, 1984). The psychological mechanisms that may underlie this advantage have not been addressed systematically by researchers. The present thesis discusses the potential mechanisms underlying the female advantage by considering the Extreme Male Brain (EMB) hypothesis of autism (Baron-Cohen, 2002). Several possible directions for research are presented, including sex differences in i) automaticity of processing facial expressions; ii) attention to the eyes; iii) configural versus featural processing of faces; and iv) stimulation of emotion. The first three of these directions are addressed in experimental chapters. A priming task and emotional face-word Stroop task were used to investigate sex differences in the automaticity of processing facial expressions. Sex differences in attention to the eyes were investigated in two eye tracking studies, and in two studies manipulating the eye region of emotional faces. Finally, a study with spatial frequency filtered emotional faces examined sex differences in the use of fine and coarse detail facial information. Overall, the investigations provide some evidence for greater female attention to the eye region in faces, and the possibility that this may explain an observed female advantage in facial expression recognition. Results are discussed in relation to the EMB hypothesis and sex differences in social cognition more generally. Potential directions for further research are outlined.
APA, Harvard, Vancouver, ISO, and other styles
25

Jan, Asim. "Deep learning based facial expression recognition and its applications." Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/15944.

Full text
Abstract:
Facial expression recognition (FER) is a research area that consists of classifying human emotions through the expressions on their face. It can be used in applications such as biometric security, intelligent human-computer interaction, robotics, and clinical medicine for autism, depression, pain and mental health problems. This dissertation investigates advanced technologies for facial expression analysis and develops artificial intelligent systems for practical applications. The first part of this work applies geometric and texture domain feature extractors along with various machine learning techniques to improve FER. Advanced 2D and 3D facial processing techniques such as Edge Oriented Histograms (EOH) and Facial Mesh Distances (FMD) are then fused together using a framework designed to investigate their individual and combined domain performances. Following these tests, the face is broken down into facial parts using advanced facial alignment and localising techniques. Deep learning in the form of Convolutional Neural Networks (CNNs) is also explored for FER. A novel approach is used for the deep network architecture design, to learn the facial parts jointly, showing an improvement over using the whole face. Joint Bayesian is also adapted in the form of metric learning, to work with deep feature representations of the facial parts; this provides a further improvement over using the deep network alone. Dynamic emotion content is explored as a way to provide richer information than still images. The motion occurring across the content is initially captured using the Motion History Histogram descriptor (MHH) and is critically evaluated. Based on this observation, several improvements are proposed through extensions such as the Average Spatial Pooling Multi-scale Motion History Histogram (ASMMHH).
This extension adds two modifications: the first is to view the content in different spatial dimensions through spatial pooling, influenced by the structure of CNNs; the other is to capture motion at different speeds. Combined, they provide better performance than MHH and other popular techniques such as Local Binary Patterns - Three Orthogonal Planes (LBP-TOP). Finally, the dynamic emotion content is observed in the feature space, with sequences of images represented as sequences of extracted features. A novel technique called Facial Dynamic History Histogram (FDHH) is developed to capture patterns of variation within the sequence of features; an approach not seen before. FDHH is applied in an end-to-end framework for applications in Depression analysis and in evaluating the emotions induced by a large set of video clips from various movies. With the combination of deep learning techniques and FDHH, state-of-the-art results are achieved for Depression analysis.
APA, Harvard, Vancouver, ISO, and other styles
26

Ortega, Margarita Marie. "Schizophrenia: Treating deficits in facial emotion expression and recognition." Scholarly Commons, 2005. https://scholarlycommons.pacific.edu/uop_etds/2703.

Full text
Abstract:
There is growing research suggesting that individuals diagnosed with schizophrenia are impaired in their ability to recognize and express facial emotions. However, research examining the effects of treatment on facial emotion expression and recognition deficits is extremely limited. This study examined the effects of a brief training program on the ability to express and recognize facial emotions among individuals diagnosed with schizophrenia (N = 6). Assessment procedures included identification (photo and in vivo models), imitation, and simulation. The training program consisted of 8 sessions, each lasting approximately 20–30 min. The first training session consisted of a discussion about the six basic emotions (happy, sad, surprised, fearful, angry, disgusted). The next seven training sessions included identification (photo and in vivo models), imitation, and simulation of each of the six basic emotions. Verbal reinforcement and feedback were used to increase performance. The results indicated that performance improved for all tasks from baseline to treatment, and was maintained during a 3-week follow-up period.
APA, Harvard, Vancouver, ISO, and other styles
27

Wang, Xuejian. "Improving multi-view facial expression recognition in unconstrained environments." Thesis, University of Kent, 2017. https://kar.kent.ac.uk/59934/.

Full text
Abstract:
Facial expression and emotion-related research has been a longstanding activity in psychology, while computerized/automatic facial expression recognition of emotion is a relatively recent, still emerging but active research area. Although many automatic computer systems have been proposed to address facial expression recognition problems, the majority of them fail to cope with the requirements of many practical application scenarios, arising from either environmental factors or unexpected behavioural bias introduced by the users, such as illumination conditions and large head pose variation relative to the camera. In this thesis, two of the most influential and common issues raised when applying an automatic facial expression recognition system in practical scenarios are comprehensively explored and investigated. Through a series of experiments carried out under a proposed texture-based system framework for multi-view facial expression recognition, several novel texture feature representations are introduced for implementing multi-view facial expression recognition systems in practical environments, with which state-of-the-art performance is achieved. In addition, a variety of novel categorization schemes for the configurations of an automatic multi-view facial expression recognition system are presented to address the impractical discrete categorization of facial expressions of emotion in real-world scenarios. A significant improvement is observed when using the proposed categorizations in the proposed system framework with a novel implementation of the block-based local ternary pattern approach.
APA, Harvard, Vancouver, ISO, and other styles
28

Lopes, André Teixeira. "Facial expression recognition using deep learning - convolutional neural network." Universidade Federal do Espírito Santo, 2016. http://repositorio.ufes.br/handle/10/4301.

Full text
Abstract:
Facial expression recognition has been an active research area in the past ten years, with growing application areas such as avatar animation, neuromarketing and sociable robots. The recognition of facial expressions is not an easy problem for machine learning methods, since people can vary significantly in the way that they show their expressions. Even images of the same person in one expression can vary in brightness, background and position. Hence, facial expression recognition is still a challenging problem. To address these problems, in this work we propose a facial expression recognition system that uses Convolutional Neural Networks. Data augmentation and different preprocessing steps were studied together with various Convolutional Neural Network architectures. The data augmentation and preprocessing steps were used to help the network with feature selection. Experiments were carried out with three widely used databases (Cohn-Kanade, JAFFE, and BU3DFE), and cross-database validations (i.e., training on one database and testing on another) were also performed. The proposed approach has shown to be very effective, improving the state-of-the-art results in the literature and allowing real-time facial expression recognition with standard PC computers.
APA, Harvard, Vancouver, ISO, and other styles
29

Warwick, Ross. "Recognition of emotion from facial expression in multiple sclerosis." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/29413.

Full text
Abstract:
The present study aimed to further explore the relationship between MS and emotion recognition from facial expression, and to ascertain whether impaired recognition of emotion from facial expression was associated with reports of everyday social functioning. Thirty people with MS were assessed using the Facial Expression of Emotion: Stimuli and Tests (FEEST; Young, Perrett, et al., 2002), comprising the Ekman 60 Faces and the Emotion Hexagon. Their performance was compared to the published normative data of the FEEST collected from neurologically healthy controls (n = 227 and n = 125 respectively). Each MS participant was asked to complete a questionnaire about everyday functional behaviour, the Brock Adaptive Functioning Questionnaire (BAFQ; e.g. Dywan and Segalowitz, 1996). A parallel version was completed for each MS participant by a significant other. FEEST: The MS group were significantly worse at overall recognition of emotion (p < .001; p < .05). Using published cut-off scores, 36.67% of the MS group were classified as impaired on the Ekman 60 Faces and 23.33% on the Emotion Hexagon, significantly greater than the 5% expected from the normative data (p < 0.001). There were also significant between-group differences in recognition of individual emotions. BAFQ: Informant reports of aggression were significantly correlated with recognition of disgust on both FEEST tests (p = .001). Although several other correlations approached significance, no other significant correlations (i.e. p < .01) were found. Scores on the BAFQ were generally low, suggesting few social behaviour impairments in the current sample. It was confirmed that people with MS have difficulty recognising emotion from facial expressions, but insufficient evidence was found to show that this was related to reported social behaviour. The implications for further research are discussed, along with a critique of the methodology.
APA, Harvard, Vancouver, ISO, and other styles
30

Miao, Yu. "A Real Time Facial Expression Recognition System Using Deep Learning." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38488.

Full text
Abstract:
This thesis presents an image-based real-time facial expression recognition system that is capable of recognizing the basic facial expressions of several subjects simultaneously from a webcam. Our proposed methodology combines a supervised transfer learning strategy and a joint supervision method with a new supervision signal that is crucial for facial tasks. A convolutional neural network (CNN) model, MobileNet, that offers both accuracy and speed, is deployed in both offline and real-time frameworks to enable fast and accurate real-time output. Evaluations of both offline and real-time experiments are provided in our work. The offline evaluation is carried out by first evaluating two publicly available datasets, JAFFE and CK+, and then presenting the results of the cross-dataset evaluation between these two datasets to verify the generalization ability of the proposed method. A comprehensive evaluation configuration for the CK+ dataset is given in this work, providing a baseline for a fair comparison. The method reaches an accuracy of 95.24% on the JAFFE dataset, and an accuracy of 96.92% on the 6-class CK+ dataset, which only contains the last frames of image sequences. The average run-time cost for recognition in the real-time implementation is reported, approximately 3.57 ms/frame on an NVIDIA Quadro K4200 GPU. The results demonstrate that our proposed CNN-based framework for facial expression recognition, which does not require a massive preprocessing module, can not only achieve state-of-the-art accuracy on these two datasets but also perform the classification task much faster than a conventional machine learning methodology, as a result of the lightweight structure of MobileNet.
APA, Harvard, Vancouver, ISO, and other styles
31

Vadapalli, Hima Bindu. "Recognition of facial action units from video streams with recurrent neural networks : a new paradigm for facial expression recognition." University of the Western Cape, 2011. http://hdl.handle.net/11394/5415.

Full text
Abstract:
Philosophiae Doctor - PhD. This research investigated the application of recurrent neural networks (RNNs) for recognition of facial expressions based on the facial action coding system (FACS). Support vector machines (SVMs) were used to validate the results obtained by RNNs. In this approach, instead of recognizing whole facial expressions, the focus was on the recognition of action units (AUs) that are defined in FACS. Recurrent neural networks are capable of gaining knowledge from temporal data while SVMs, which are time invariant, are known to be very good classifiers. Thus, the research consists of four important components: comparison of the use of image sequences against single static images, benchmarking feature selection and network optimization approaches, study of inter-AU correlations by implementing multiple output RNNs, and study of difference images as an approach for performance improvement. In the comparative studies, image sequences were classified using a combination of Gabor filters and RNNs, while single static images were classified using Gabor filters and SVMs. Sets of 11 FACS AUs were classified by both approaches, where a single RNN/SVM classifier was used for classifying each AU. Results indicated that classifying FACS AUs using image sequences yielded better results than using static images. The average recognition rate (RR) and false alarm rate (FAR) using image sequences were 82.75% and 7.61%, respectively, while the classification using single static images yielded a RR and FAR of 79.47% and 9.22%, respectively. The better performance with image sequences can be attributed to RNNs' ability, as stated above, to extract knowledge from time-series data. Subsequent research then investigated benchmarking dimensionality reduction, feature selection and network optimization techniques, in order to improve the performance provided by the use of image sequences.
Results showed that an optimized network, using weight decay, gave the best RR and FAR of 85.38% and 6.24%, respectively. The next study was of the inter-AU correlations existing in the Cohn-Kanade database and their effect on classification models. To accomplish this, a model was developed for the classification of a set of AUs by a single multiple-output RNN. Results indicated that high inter-AU correlations do in fact aid classification models to gain more knowledge and, thus, perform better. However, this was limited to AUs that start and reach apex at almost the same time, which suggests the need for a larger database of AUs providing both individual AUs and AU combinations for further investigation. The final part of this research investigated the use of difference images to track the motion of image pixels. Difference images provide both noise and feature reduction, an aspect that was studied. Results showed that the use of difference image sequences provided the best results, with RR and FAR of 87.95% and 3.45%, respectively, which is shown to be significant when compared to the use of normal image sequences classified using RNNs. In conclusion, the research demonstrates that the use of RNNs for classification of image sequences is a new and improved paradigm for facial expression recognition.
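The difference-image step described in this abstract is straightforward to sketch. A minimal NumPy version is shown below; the noise threshold value is an assumption, not the thesis's setting.

```python
import numpy as np

def difference_sequence(frames, threshold=10):
    """Absolute frame-to-frame differences, thresholded to suppress sensor noise.

    frames: array of shape (T, H, W), grayscale. Returns (T-1, H, W) binary maps
    marking pixels that changed between consecutive frames.
    """
    frames = frames.astype(np.int16)          # avoid uint8 wraparound
    diffs = np.abs(np.diff(frames, axis=0))   # |frame_t - frame_{t-1}|
    return (diffs > threshold).astype(np.uint8)
```

Pixels that stay constant are zeroed out, which is the noise-and-feature reduction effect the study examines: only moving facial regions survive into the feature extraction stage.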
APA, Harvard, Vancouver, ISO, and other styles
32

Schofield, Casey Anne. "Recognition of facial expressions of emotion in social anxiety." Diss., Online access via UMI:, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
33

Anderson, Keith William John. "A real-time facial expression recognition system for affective computing." Thesis, Queen Mary, University of London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.405823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Bourel, Fabrice. "Models of spatially-localised facial dynamics for robust expression recognition." Thesis, Staffordshire University, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.394143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Moore, Stephen. "The effects of features and pose on facial expression recognition." Thesis, University of Surrey, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.540969.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Nelson, Nicole L. "A Facial Expression of Pax: Revisiting Preschoolers' "Recognition" of Expressions." Thesis, Boston College, 2011. http://hdl.handle.net/2345/2458.

Full text
Abstract:
Thesis advisor: James A. Russell. Prior research showing that children recognize emotional expressions has used a choice-from-array style task; for example, children are asked to find the fear face in an array of several expressions. However, these choice-from-array tasks allow for the use of a process-of-elimination strategy in which children could select an expression they are unfamiliar with when presented with a label that does not apply to other expressions in the array. Across six studies (N = 144), 80% of 2- to 4-year-olds selected a novel expression when presented with a target label, and performed similarly whether the label was novel (such as pax) or familiar (such as fear). In addition, 46% of children went on to freely label the expression with the target label in a subsequent task. These data are the first to show that children extend the process-of-elimination strategy to facial expressions, and also call into question the findings of prior choice-from-array studies. Thesis (PhD), Boston College, 2011. Submitted to: Boston College, Graduate School of Arts and Sciences. Discipline: Psychology.
APA, Harvard, Vancouver, ISO, and other styles
37

Ho, Hsing-Han, and 何星翰. "Local Binary Patterns based Hierarchical Method for Facial Expression Recognition." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/92797046499255942918.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Computer Science and Information Engineering. Automatic facial expression recognition has so far been a challenging problem in computer vision, with strong impact on important applications in areas such as human-computer interaction and data-driven animation. In this thesis, I use the Local Binary Patterns method, commonly used in face recognition, to recognize facial expressions, and propose a novel idea to improve the result, called "Hierarchical Facial Expression Recognition". In traditional facial expression recognition, researchers use training data to build a classifier and classify the test data directly into one of several basic expressions. The hierarchical method instead separates the recognition process into several stages, each using methods tailored to specific expressions, recognizing the target image step by step. The overall recognition rates of my system on two expression databases are about 80% and 86% with the pure Local Binary Patterns algorithm, and reach 87% and 89% after applying the hierarchical method. This work shows that the Local Binary Patterns method is practical for facial expression recognition, and that the idea of hierarchical recognition can further improve the result.
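The two-stage idea can be illustrated with a minimal sketch using chi-square nearest-neighbour matching on feature histograms. The valence-based grouping and the template matcher are assumptions for illustration; the thesis's stage-specific methods are not specified here.

```python
import numpy as np

def chi2(a, b):
    """Chi-square distance between two non-negative feature histograms."""
    return float(np.sum((a - b) ** 2 / (a + b + 1e-10)))

def nearest_label(x, templates):
    """Return the template label with the smallest chi-square distance to x."""
    return min(templates, key=lambda lab: chi2(x, templates[lab]))

def hierarchical_classify(x, coarse_templates, fine_templates):
    """Stage 1 picks a coarse expression group (e.g. positive/negative valence);
    stage 2 discriminates only among the expressions belonging to that group."""
    group = nearest_label(x, coarse_templates)
    return group, nearest_label(x, fine_templates[group])
```

Because each later stage only has to separate a smaller, more homogeneous set of expressions, its method (and features) can be specialized, which is the source of the reported accuracy gain.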
APA, Harvard, Vancouver, ISO, and other styles
38

Chuang, Shun-Hsu, and 莊順旭. "Facial Expression Recognition based on Fusing Weighted Local Directional Pattern and Local Binary Pattern." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/13073845934835992163.

Full text
Abstract:
Master's thesis, Chung Hua University, Department of Computer Science and Information Engineering. A method combining the Weighted Local Directional Pattern (WLDP) and Local Binary Pattern (LBP) for facial expression recognition is proposed. First, WLDP and LBP are applied to extract facial features. Second, principal component analysis (PCA) is used to reduce their feature dimensions respectively. Third, both reduced feature sets are merged to form the final feature vector. Fourth, a support vector machine (SVM) is used to recognize facial expressions. In experiments on the well-known Cohn-Kanade expression database, an accuracy of up to 91.1% for recognizing seven expressions is achieved with a person-independent 10-fold cross-validation scheme.
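The reduce-then-merge fusion step described in this abstract can be sketched with a small SVD-based PCA. This is a hedged illustration, not the authors' implementation; the dimensions and input matrices are placeholders.

```python
import numpy as np

def pca_fit(X, k):
    """Return (mean, top-k principal components) via SVD of the centered data."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def pca_transform(X, mu, comps):
    """Project data onto the fitted components."""
    return (X - mu) @ comps.T

def fuse_features(F1, F2, k1, k2):
    """Reduce each descriptor's feature matrix separately, then concatenate,
    mirroring the WLDP/LBP reduce-then-merge fusion described above."""
    m1, c1 = pca_fit(F1, k1)
    m2, c2 = pca_fit(F2, k2)
    return np.hstack([pca_transform(F1, m1, c1), pca_transform(F2, m2, c2)])
```

Reducing each descriptor separately before concatenation keeps the final vector compact and prevents one high-dimensional descriptor from dominating the fused representation fed to the SVM.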
APA, Harvard, Vancouver, ISO, and other styles
39

Jain, Gaurav. "Emotion Recognition from Eye Region Signals using Local Binary Patterns." Thesis, 2011. http://hdl.handle.net/1807/30639.

Full text
Abstract:
Automated facial expression analysis for Emotion Recognition (ER) is an active research area towards creating socially intelligent systems. The eye region, often considered integral for ER by psychologists and neuroscientists, has received very little attention in engineering and computer science. Using the eye region as an input signal presents several benefits for low-cost, non-intrusive ER applications. This work proposes two frameworks for ER from eye region images. The first framework uses Local Binary Patterns (LBP) as the feature extractor on grayscale eye region images. The results validate the eye region as a significant contributor towards communicating the emotion in the face, achieving high person-dependent accuracy. The system is also able to generalize well across different environmental conditions. In the second proposed framework, a color-based approach to ER from the eye region is explored using Local Color Vector Binary Patterns (LCVBP). LCVBP extend the traditional LBP by incorporating color information, extracting a rich and highly discriminative feature set and thereby providing promising results.
APA, Harvard, Vancouver, ISO, and other styles
40

Ren, Yuan. "Facial Expression Recognition System." Thesis, 2008. http://hdl.handle.net/10012/3516.

Full text
Abstract:
A key requirement for developing any innovative system in a computing environment is to integrate a sufficiently friendly interface for the average end user. Accurate design of such a user-centered interface, however, means more than just the ergonomics of the panels and displays. It also requires that designers precisely define what information to use and how, where, and when to use it. Facial expression, as a natural, non-intrusive, and efficient way of communication, has been considered one of the potential inputs of such interfaces. The work of this thesis aims at designing a robust Facial Expression Recognition (FER) system by combining various techniques from computer vision and pattern recognition. Expression recognition is closely related to face recognition, where a great deal of research has been done and a vast array of algorithms has been introduced. FER can also be considered a special case of a pattern recognition problem, for which many techniques are available. In designing an FER system, we can take advantage of these resources and use existing algorithms as building blocks of our system, so a major part of this work is to determine the optimal combination of algorithms. To do this, we first divide the system into three modules, i.e. Preprocessing, Feature Extraction, and Classification; then, for each of them, some candidate methods are implemented, and eventually the optimal configuration is found by comparing the performance of different combinations. Another issue of great interest to designers of facial expression recognition systems is the classifier, which is the core of the system. Conventional classification algorithms assume the image is a single-variable function of an underlying class label. However, this is not true in face recognition, where the appearance of a face is influenced by multiple factors: identity, expression, illumination, and so on.
To solve this problem, in this thesis we propose two new algorithms, namely Higher Order Canonical Correlation Analysis and Simple Multifactor Analysis, which model the image as a multivariable function. The addressed issues are challenging problems and are substantial to the development of a facial expression recognition system.
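The module-by-module comparison the abstract describes — interchangeable candidates for each stage, scored to find the best configuration — can be sketched with scikit-learn pipelines. Everything below (the candidate algorithms, the synthetic "face" vectors, the scores) is an illustrative stand-in, not the thesis's actual setup:

```python
# Sketch: evaluate candidate Preprocessing / Feature Extraction / Classification
# combinations as interchangeable pipeline stages and keep the best scorer.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 64))     # stand-in for flattened face images
y = np.arange(120) % 6             # six basic-expression labels

candidates = {
    "pca+svm": Pipeline([("pre", StandardScaler()),
                         ("feat", PCA(n_components=10)),
                         ("clf", SVC())]),
    "pca+knn": Pipeline([("pre", StandardScaler()),
                         ("feat", PCA(n_components=10)),
                         ("clf", KNeighborsClassifier())]),
}
scores = {name: cross_val_score(p, X, y, cv=3).mean()
          for name, p in candidates.items()}
best = max(scores, key=scores.get)  # the "optimal configuration"
```

On real data each dictionary entry would swap in a different preprocessing, feature-extraction, or classification candidate, and the cross-validated score would select among them.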
APA, Harvard, Vancouver, ISO, and other styles
41

Hsu, Wei-Cheng, and 徐瑋呈. "Facial Expression Recognition Based on Facial Features." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/50258463357861831524.

Full text
Abstract:
Master's thesis, National Tsing Hua University, Department of Computer Science, 101. We propose an expression recognition method based on facial features from a psychological perspective. Following the American psychologist Paul Ekman's work on action units, we divide the face into facial feature regions and recognize expressions via the movements of individual facial muscles during slight, instantaneous changes in expression. This thesis starts by introducing Paul Ekman's work, the six basic emotions, and existing methods based on feature extraction or facial models. Our system has two main parts: preprocessing and the recognition method. Differences between training and test environments, such as illumination, or the face size and skin color of different test subjects, are usually the major factors affecting recognition accuracy. We therefore propose a preprocessing step as the first part of our system: we first perform face detection and facial feature detection to locate the facial features, then perform rotation calibration based on the horizontal line connecting the two eyes. The complete face region is extracted using facial models. Lastly, the face region is illumination-calibrated and resized to the same resolution so that all feature vectors have the same dimensionality. This preprocessing reduces the differences among images. The second part of the proposed system is the recognition method. Here we use Gabor filter banks with ROI capture to obtain the feature vector, apply principal component analysis (PCA) and linear discriminant analysis (LDA) for dimensionality reduction to cut computation time, and finally adopt a support vector machine (SVM) as the classifier. Experimental results show that the proposed method achieves 86.1%, 96.9%, and 89.0% accuracy on the three existing datasets JAFFE, TFEID, and CK+ respectively (based on leave-one-person-out evaluation). We also tested performance on the 101SC dataset, which we collected and prepared ourselves.
This dataset is relatively difficult to recognize but closer to real-world scenarios; the proposed method achieves 62.1% accuracy on it. We also entered this method in the 8th UTMVP (Utechzone Machine Vision Prize) competition and were ranked second out of 10 teams.
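The Gabor-bank / PCA / LDA / SVM chain described in this abstract can be sketched as follows. The filter-bank parameters, ROI scheme, image sizes, and data are illustrative stand-ins (the abstract does not specify them), not the thesis's configuration:

```python
# Sketch of a Gabor -> PCA -> LDA -> SVM expression-recognition chain
# on synthetic face-region images.
import numpy as np
from scipy.ndimage import convolve
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def gabor_kernel(theta, freq=0.25, sigma=2.0, size=9):
    """Real part of a Gabor filter at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def gabor_stats(img, bank):
    """Mean and std of the absolute filter responses, one pair per kernel."""
    responses = [np.abs(convolve(img, k)) for k in bank]
    return [r.mean() for r in responses] + [r.std() for r in responses]

rng = np.random.default_rng(1)
faces = rng.normal(size=(90, 16, 16))       # stand-in for cropped face ROIs
labels = np.arange(90) % 3                  # stand-in expression labels

bank = [gabor_kernel(t) for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
feats = np.array([gabor_stats(img, bank) for img in faces])   # (90, 8)

low = PCA(n_components=5).fit_transform(feats)                # reduce dimension
disc = LinearDiscriminantAnalysis(n_components=2).fit_transform(low, labels)
clf = SVC().fit(disc, labels)               # final expression classifier
```

A real system would use a denser bank (multiple scales and orientations) and keep per-region response statistics rather than whole-image ones.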
APA, Harvard, Vancouver, ISO, and other styles
42

Hsueh, Ming-Kai, and 薛名凱. "Facial Expression Recognition with WebCam." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/24120668987886256360.

Full text
Abstract:
Master's thesis, National Taipei University of Technology, Graduate Institute of Automation Technology, 92. It is very easy for human beings to recognize emotion through facial expression, but it is not so simple for computers. In this research, we use a common video device to build a system that distinguishes people's emotions automatically; the system works well with a neural network. In this work we rely on Ekman's Action Units (AUs) to capture the characteristics of faces. First, the system captures images of people's faces with a CMOS webcam, then detects the movement paths of the facial features using image processing techniques. These characteristics are fed through a neural network to recognize the person's emotion. Our system can recognize a changing emotion within a short time, and the recognition accuracy exceeds eighty percent, which supports the practicality of the system.
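The core idea — feed tracked facial-feature movements into a neural network that outputs an emotion — can be sketched minimally. The displacement features, class count, and network size below are hypothetical stand-ins, not the thesis's tracked webcam data:

```python
# Minimal sketch: a small neural network mapping facial feature-point
# displacements (AU-style motion measurements) to emotion labels.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
# e.g. dx/dy of 5 tracked facial landmarks -> a 10-D motion feature vector
motions = rng.normal(size=(200, 10))
emotions = np.arange(200) % 4              # 4 hypothetical emotion classes

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(motions, emotions)
pred = net.predict(motions[:5])            # emotion labels for new motions
```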
APA, Harvard, Vancouver, ISO, and other styles
43

Hsu, Chen-wei, and 徐晨暐. "DSP-Based Facial Expression Recognition System." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/91621154215297203654.

Full text
Abstract:
Master's thesis, National Sun Yat-sen University, Department of Electrical Engineering, 93. This thesis develops a DSP-based facial expression recognition system. Most facial expression recognition systems assume that the human face has already been found, that the background is simple, or that the facial feature points are extracted manually; only a few are automatic and complete. This thesis presents a complete facial expression system: images are captured by a CCD camera, and the DSP locates the human face, extracts the facial feature points, and recognizes the facial expression automatically. The system is divided into four subsystems: image capture, genetic-algorithm face location, facial feature point extraction, and fuzzy-logic expression recognition. The image capture subsystem uses a CCD camera to capture the facial expression image to be recognized against any background, and transmits the image data to the SRAM on the DSP through the DSP's PPI interface. The face location subsystem uses a genetic algorithm to find the position of the face in the image from skin color and ellipse information, regardless of face size or background complexity. The feature point extraction subsystem finds 16 facial feature points in the located face using a variety of image processing techniques. The expression recognition subsystem analyzes facial action units from the 16 feature points, fuzzifies them, and judges among four expressions — happiness, anger, surprise, and neutral — using fuzzy rule bases. According to the experimental results, the system performs well in both recognition rate and recognition speed.
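The final fuzzy-rule stage can be sketched as follows. The two measurements, the triangular membership functions, and the three toy rules are hypothetical illustrations; the thesis's actual 16-point rule base covering four expressions is not reproduced here:

```python
# Toy sketch of fuzzy-rule expression judgment: fuzzify two hypothetical
# AU measurements (both normalized to [0, 1]) and return the expression
# whose rule fires most strongly.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(mouth_open, brow_raise):
    low = lambda v: tri(v, -0.5, 0.0, 0.5)     # membership in "low"
    high = lambda v: tri(v, 0.5, 1.0, 1.5)     # membership in "high"
    rules = {
        "surprise": min(high(mouth_open), high(brow_raise)),
        "happiness": min(high(mouth_open), low(brow_raise)),
        "neutral": min(low(mouth_open), low(brow_raise)),
    }
    return max(rules, key=rules.get)           # strongest-firing rule wins
```

For example, a wide-open mouth with raised brows fires the "surprise" rule hardest, while near-zero measurements fall through to "neutral".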
APA, Harvard, Vancouver, ISO, and other styles
44

Zhang, Yi-Cheng, and 張益誠. "Facial expression recognition based on SVM." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/762ycm.

Full text
Abstract:
Master's thesis, National Formosa University, Graduate Institute of Information Management, 97. A novel facial expression recognition technique is proposed in this thesis; it can further be used in distance learning. First, it employs a face detection method to locate the face region in images, integrating Haar-like features with a self-quotient image filter to improve the detection rate under insufficient lighting and shadow. Second, face-block normalization is carried out. Subsequently, the angular radial transform, the discrete cosine transform, and Gabor filters are used in the facial expression feature extraction procedure, and a trained support vector machine predicts the expression for a query face block. Finally, experimental results show that the proposed technique outperforms other methods.
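The discrete-cosine-transform branch of the feature extraction can be sketched like this: keep the low-frequency 2-D DCT coefficients of each normalized face block and feed them to an SVM. The block size, coefficient count, and data are illustrative choices, not the thesis's:

```python
# Sketch: low-frequency 2-D DCT coefficients of normalized face blocks
# as expression features for an SVM, on synthetic data.
import numpy as np
from scipy.fft import dctn
from sklearn.svm import SVC

def dct_features(block, k=6):
    """Top-left k x k corner of the 2-D DCT: the low-frequency content."""
    coeffs = dctn(block, norm="ortho")
    return coeffs[:k, :k].ravel()

rng = np.random.default_rng(3)
blocks = rng.normal(size=(80, 32, 32))     # stand-in normalized face blocks
labels = np.arange(80) % 6                 # stand-in expression labels

X = np.array([dct_features(b) for b in blocks])    # (80, 36) feature matrix
clf = SVC().fit(X, labels)
```

The other two feature branches (angular radial transform, Gabor filters) would produce their own vectors, concatenated or compared in the same way.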
APA, Harvard, Vancouver, ISO, and other styles
45

Chiang, Bo-Cheng, and 江柏城. "Real Time Facial Expression Recognition System." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/at3ezb.

Full text
Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Electrical Engineering, 105. With new technological advances, interaction between humans and machines has diversified. In this thesis, an interaction-oriented real-time facial expression recognition system is developed. Through image processing, the face detection and expression recognition system detects a human face in the image and recognizes the expression on it; we then analyze the recognition results and output speech corresponding to the expression in order to interact with the user. The proposed PC-based real-time system builds expression recognition on top of face detection and can ultimately interact with the user. It consists of three parts: (1) face candidate extraction, (2) face verification and expression recognition, and (3) machine reaction. In the first part, we extract face candidates from the image using pre-processing methods. In the second part, the candidate image is classified by extracting Local Binary Pattern features in two stages: first, the candidate is classified as a face or non-face image; then the face image is classified into one of six expression types. In the last part, output speech is played according to the results accumulated over a time period. The system is implemented in C on a PC. In the experiments, cross validation on a facial expression database yields a recognition rate of 84.78%. Using a webcam as the serial image input, we also compare the recognition rate and run time with and without deleting blocks that contain little expression information.
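The Local Binary Pattern feature at the heart of the classification stage can be sketched as follows. This is the basic 3x3 LBP (each pixel coded by thresholding its 8 neighbours against it, summarized as a 256-bin code histogram); the thesis may use a different neighbourhood, uniform patterns, or per-block histograms:

```python
# Minimal Local Binary Pattern sketch: 8-neighbour codes over a grayscale
# image, returned as a normalized 256-bin histogram.
import numpy as np

def lbp_histogram(img):
    c = img[1:-1, 1:-1]                    # interior pixels (the centers)
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # set bit if the neighbour is at least as bright as the center
        code |= (n >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()               # normalized histogram feature
```

The face/non-face stage and the six-way expression stage would each train a classifier on such histograms.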
APA, Harvard, Vancouver, ISO, and other styles
46

Lee, Cheng-Yen, and 李承諺. "Facial Expression Recognition in Curriculum Teaching." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/01485894037823734821.

Full text
Abstract:
Master's thesis, National Taipei University of Education, Department of Computer Science, 102. In this thesis, we propose a learning management system with face recognition and facial expression recognition on Android, divided into two parts. For face recognition, PCA is our primary method: users first add their face images to the face database, all data in the database are used for training to obtain the weight vectors, and these vectors are compared with the input image. The identity with the smallest distance tells us to whom the input face belongs, and the recognition rate reaches 85%. For facial expression recognition, we first detect the face, locate the eyes and mouth according to face proportions, and then find 12 facial feature points with our algorithm. From the relationships between these feature points we determine the facial expression, with a recognition rate of about 80.2%.
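The eigenface-style recognition loop described here — project faces into a PCA subspace and pick the identity with the smallest weight-vector distance — can be sketched as follows, with synthetic vectors standing in for the enrolled face images:

```python
# Sketch: PCA (eigenface) enrollment plus nearest-weight-vector identification.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
gallery = rng.normal(size=(30, 100))       # 30 enrolled face vectors
identities = np.repeat(np.arange(10), 3)   # 10 people, 3 images each

pca = PCA(n_components=8).fit(gallery)     # "training all data in the database"
weights = pca.transform(gallery)           # per-image weight vectors

def identify(face):
    w = pca.transform(face.reshape(1, -1))
    dists = np.linalg.norm(weights - w, axis=1)
    return identities[np.argmin(dists)]    # smallest-distance match
```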
APA, Harvard, Vancouver, ISO, and other styles
47

Goren, Deborah. "Quantifying facial expression recognition across viewing conditions /." 2004. http://wwwlib.umi.com/cr/yorku/fullcit?pMQ99314.

Full text
Abstract:
Thesis (M.Sc.)--York University, 2004. Graduate Programme in Biology. Typescript. Includes bibliographical references (leaves 59-66). Also available on the Internet. Mode of access: via web browser by entering the following URL: http://wwwlib.umi.com/cr/yorku/fullcit?pMQ99314
APA, Harvard, Vancouver, ISO, and other styles
48

Huang, Chun-Hao, and 黃俊豪. "Facial Expression Recognition with Discriminative Common Vector." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/69337163094463480953.

Full text
Abstract:
Master's thesis, Fu Jen Catholic University, Department of Electronic Engineering, 95. Facial expression recognition has recently become an important issue in both human-computer interaction (HCI) and human-robot interaction (HRI), and extracting features from face images for expression recognition is a key problem. In this work, we apply a face feature extraction approach, namely discriminative common vectors, to the recognition of the six basic expressions: happiness, sadness, anger, disgust, fear, and surprise. By applying discriminative common vectors, we reduce the dimensionality of the images and classify them in a lower-dimensional space, which benefits the later recognition procedure. We then use an HMM as our classifier to capture the temporal information of the feature vectors projected by the common vectors.
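The common-vector idea can be sketched as follows: within each class, remove the span of the within-class difference vectors, leaving a "common" component shared by the class samples. This is a simplified per-class sketch on synthetic data; the discriminative step across classes and the HMM stage used in the thesis are omitted:

```python
# Rough sketch of the common-vector computation for one class: project out
# the within-class difference subspace.
import numpy as np

def common_vector(samples):
    """samples: (n, d) array of one class; returns the class common vector."""
    diffs = samples[1:] - samples[0]               # within-class differences
    # orthonormal basis of the difference subspace via SVD
    u, s, vt = np.linalg.svd(diffs, full_matrices=False)
    basis = vt[s > 1e-10]                          # rows span the differences
    x = samples[0]
    return x - basis.T @ (basis @ x)               # remove the in-subspace part

rng = np.random.default_rng(5)
base = rng.normal(size=16)
cls = base + rng.normal(scale=0.3, size=(4, 16))   # 4 samples of one class
cvec = common_vector(cls)
```

A key property is that the result does not depend on which sample is used as the reference: every sample of the class yields the same common vector.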
APA, Harvard, Vancouver, ISO, and other styles
49

Shiu, Ch-Ting, and 徐啟庭. "Hough Forest-based Facial Expression Recognition Technology." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/63966293586327537847.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Gu, Shao-Huan, and 辜紹桓. "Facial Expression Recognition on Partially-Occluded Faces." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/12887632407413329519.

Full text
Abstract:
Master's thesis, National Dong Hwa University, Department of Computer Science and Information Engineering, 99. This work proposes a person-dependent approach to recognizing partially occluded and damaged facial expression images, comprising an iterative face recovery method and a recognition method called recognition-by-input-approximation (RBIA). We use PCA to build a person-specific eigenspace for each identity and an expression-specific eigenspace for each person's expression. The iterative face recovery method restores face textures in occluded or damaged areas very well; the recovered faces preserve personal characteristics and the original input illumination. The RBIA method recognizes facial expressions in two stages. Unlike common template matching methods, RBIA matches the recovered face against the original input face texture and is therefore not restricted to a fixed template or model. Our experiments on the public Cohn-Kanade face database show that the recognition rate on faces with partial occlusions and varying illumination is better than that of other methods.
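The iterative recovery idea can be sketched as follows: repeatedly project a face with a masked (occluded) region onto a PCA eigenspace and copy the reconstruction back into the masked pixels only, so the known pixels are preserved. Synthetic low-rank "faces" stand in for a real person-specific eigenspace, and the mask position is arbitrary:

```python
# Sketch: iterative PCA-based recovery of an occluded face region.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
span = rng.normal(size=(5, 64))
train = rng.normal(size=(100, 5)) @ span            # rank-5 "face" vectors
pca = PCA(n_components=5).fit(train)                # person-specific eigenspace

original = (rng.normal(size=(1, 5)) @ span).ravel()
mask = np.zeros(64, dtype=bool)
mask[10:20] = True                                  # occluded pixel region
face = original.copy()
face[mask] = 0.0                                    # damage the region
damaged = face.copy()

for _ in range(50):                                 # iterative recovery
    recon = pca.inverse_transform(pca.transform(face.reshape(1, -1))).ravel()
    face[mask] = recon[mask]                        # update occluded pixels only
```

Because the unmasked pixels are never overwritten, the recovered face keeps the original input values (and hence its illumination) everywhere outside the occlusion.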
APA, Harvard, Vancouver, ISO, and other styles