
Dissertations / Theses on the topic 'Expresión facial'


Consult the top 50 dissertations / theses for your research on the topic 'Expresión facial.'


1

Gálvez, Rojas Rodrigo Eduardo. "Cuantificación del grado de parálisis facial mediante algoritmos basados en tracking facial." Tesis, Universidad de Chile, 2017. http://repositorio.uchile.cl/handle/2250/147271.

Full text
Abstract:
Master's thesis in medical informatics. Facial paralysis affects between 15 and 40 per 100,000 people worldwide. Of these cases, 50% correspond to Bell's palsy, a paralysis of unknown (idiopathic) origin with the possibility of rehabilitation, and the most frequent cause of partial facial paralysis. Current clinical methods for grading the level of facial paralysis have the disadvantage of being subjective, ambiguous, and limited to a discrete grading. Computer-assisted (non-clinical) grading methods have been developed that are more objective and precise, but they require visual markers or long preparation times, or are difficult to use, which limits their application in clinical settings. This thesis proposes, and computationally validates, a new method for grading partial facial paralysis based on an automatic facial-tracking algorithm that needs neither markers nor prior preparation. The proposed method detects asymmetric facial expressions using a three-dimensional parametric model, modified from an existing markerless tracking algorithm whose model encodes faces and symmetric facial expressions. The modified facial tracker shows tracking quality similar to the original algorithm's and, compared with a kinematic tracking system based on passive optical markers, an average difference of 1.03 mm (s.d. 0.91 mm) between a passive marker and the corresponding model point.
The method is validated using videos of people graded on the House-Brackmann scale of facial paralysis (one person per grade; the scale has six grades), and an equivalence between the facial-tracking result and the House-Brackmann scale is proposed. Finally, the method is evaluated in a preliminary way on three Chilean patients with partial facial paralysis of unknown grade, yielding paralysis grades consistent with the literature. Although this last evaluation is not statistically conclusive, it opens the way to future software development and to clinical validation of the proposed algorithm.
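The marker-based validation above reduces to a mean point-to-marker Euclidean distance. As a rough illustration of that metric (a sketch on invented toy data, not the thesis's implementation):

```python
import math

def tracking_error(model_points, marker_points):
    """Mean and (population) standard deviation of the Euclidean
    distance between corresponding tracked model points and passive
    optical markers. Each argument is a list of (x, y, z) tuples
    in millimetres, with matching order."""
    dists = [math.dist(p, q) for p, q in zip(model_points, marker_points)]
    mean = sum(dists) / len(dists)
    sd = math.sqrt(sum((d - mean) ** 2 for d in dists) / len(dists))
    return mean, sd

# Hypothetical toy data: four corresponding point pairs, NOT the thesis data.
model = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0), (5.0, 5.0, 1.0)]
markers = [(1.0, 0.0, 0.0), (10.0, 1.0, 0.0), (0.0, 10.0, 2.0), (5.0, 5.0, 0.0)]
mean_mm, sd_mm = tracking_error(model, markers)
```

Reporting the mean together with the standard deviation, as the thesis does (1.03 mm, s.d. 0.91 mm), conveys both the typical error and its spread across points.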
APA, Harvard, Vancouver, ISO, and other styles
2

Reyes, Aguilera Milton César. "Análisis de la percepción estética facial." Tesis, Universidad de Chile, 2010. http://repositorio.uchile.cl/handle/2250/134376.

Full text
Abstract:
Research project submitted as a requirement for the degree of Dental Surgeon. The author has not authorized full-text access to the document. Introduction: Interest in beauty, and facial beauty in particular, has been and will remain a major open question. Humanity's continued effort to understand it stems largely from the fact that beauty is regarded as an important attribute in society. This is one of the reasons, among others, that patients visit the dentist seeking a harmonious facial appearance. It is therefore necessary to know how individuals who seek aesthetic dental treatment perceive a beautiful face, and the importance and influence of the smile as an expressive feature, so that the clinician can understand the patient's needs, requirements, and expectations. Methodology: A qualitative methodology with an interpretivist approach was used; 35 semi-structured interviews were conducted with Chilean men and women, laypersons in facial aesthetics and residents of the Metropolitan Region (20 young people and 15 adults). Each interview was transcribed and coded to build an explanatory model using grounded theory, which aims to generate theory inductively and deductively through content analysis. Results: Both groups of interviewees considered facial beauty to be related mainly to the concepts of symmetry and proportion. The smile and the gaze were considered the most important expressive features, while other features such as the nose and the chin, associated with personality traits, were considered less so. Other features were classified as traits that differentiate people and did not, in general, contribute to greater facial beauty.
Conclusions: The perception of facial aesthetics is influenced by several factors: a historical evolution of what is considered beautiful, related mainly to cultural mechanisms; each person's own mindset, related to the emotions evoked by seeing a beautiful face; and intrinsic individual characteristics derived from the social environment in which people develop.
3

Fernández, Vivas Sandra Paola. "Análisis de la sonrisa y patrón facial en estudiantes de la Universidad Nacional Mayor de San Marcos." Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2008. https://hdl.handle.net/20.500.12672/2154.

Full text
Abstract:
The smile plays a primary role in social, cultural, and psychological interactions, and its characteristics are among the main reasons for dental consultation. The human body is a set of proportions in which each part relates to the others to create harmony, so it is important to know the smile characteristics of each facial pattern, since these factors must be considered when restoring the anterior segment. The purpose of this study was to describe the smile characteristics of each facial pattern in a group of students of both sexes between 15 and 30 years of age. A sample of 216 people (95 women and 121 men) was classified into five facial groups: hypereuryprosopic, euryprosopic, mesoprosopic, leptoprosopic, and hyperleptoprosopic. Facial height and width were measured clinically with a vernier caliper, and photographs of the smiles were taken and evaluated by visual perception. In the hypereuryprosopic pattern, the predominant findings were a medium smile line, a parallel smile arc, exposure of ten teeth when smiling, and the presence of buccal corridors. In the euryprosopic pattern, the predominant findings were a high smile line, a parallel smile arc, exposure of ten teeth when smiling, and the presence of buccal corridors. In the mesoprosopic type, the predominant findings were a medium smile line, a parallel smile arc, exposure of ten teeth when smiling, and the absence of buccal corridors.
4

Castro, Olivares Fidel. "Análisis de la sonrisa según el patrón facial en pacientes del Centro Médico Naval Cirujano Mayor Santiago Távara." Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2014. https://hdl.handle.net/20.500.12672/3627.

Full text
Abstract:
The key to the success of orthodontic treatment is the diagnosis. A good diagnosis requires a thorough clinical examination and auxiliary tests such as panoramic, cephalometric, carpal, and cervical-vertebrae radiographs, as well as study models and extraoral and intraoral photographs. Facial analysis is a very important element of orthodontic treatment, and within it, smile analysis is often overlooked. The problem begins with the patients' chief complaint when undergoing orthodontic treatment: most of them want to improve their facial aesthetics; that is, they prefer a pleasant smile over a correct midline or a Class I canine or molar relationship. A pleasant smile gives people greater self-confidence, improves their self-esteem, and helps them achieve greater professional success. For a smile to be pleasant, its constituent parts must be in balance and harmony, which is achieved not only by aligning the teeth but also through the relationship among the skeletal component, the musculature, and the mouth. Unfortunately, the orthodontic literature contains more studies on skeletal structure than on soft-tissue structure, so the smile still receives relatively little attention. In addition, several studies report averages of the most frequent smile characteristics in a given population, but no studies are known that corroborate those results in our own population and can therefore be applied to our setting; even so, those parameters are generalized to all populations, sometimes without taking their different physical traits into account.
Many smile parameters have been described, but the most widely used and best classified are Roy Sabri's eight components of the smile: lip line, smile arc, upper-lip curvature, smile symmetry, frontal occlusal plane, negative spaces, dental component, and gingival component.
5

Monago, Jurado Carlos. "Proporciones verticales del perfil facial en niños con hipertrofia adenoidea." Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2006. https://hdl.handle.net/20.500.12672/2378.

Full text
Abstract:
The purpose of this descriptive, correlational study was to determine the vertical proportions of the facial profile in children with adenoid hypertrophy seen at the pediatric otorhinolaryngology service of the Hospital Nacional Daniel Alcides Carrión in Callao from September 2005 to February 2006. The facial proportions were obtained from lateral photographs of 70 children (47 boys and 23 girls) aged 3 to 10 years, divided into three grades of adenoid hypertrophy (normal, moderate, and severe). In each photograph, 5 anthropometric landmarks, 4 linear facial measurements, and 4 vertical proportions were identified. The Kruskal-Wallis test was used to detect differences in the proportions among the adenoid-hypertrophy groups, and the Mann-Whitney U test was used for pairwise differences and for sexual dimorphism within each grade. The results were as follows: the facial proportions studied vary, and the variations in NaSn/NM and SnM/NM are directly proportional to the degree of adenoid hypertrophy; in grades I, II, and III, the proportions NaSn/NM and SnM/NM fall within the averages established by Flores and Gregoret; in severe (grade III) hypertrophy, NaSn/NM and SnM/NM show statistically significant variations, more pronounced in boys; severe (grade III) hypertrophy also presents an increased vertical growth pattern of the lower facial third, again more pronounced in boys; the proportions SnSts/SnM and StiM/SnM remain stable; the facial proportions show no sexual dimorphism across the grades of adenoid hypertrophy, except for NaSn/NM and SnM/NM in grade III; and no significant variations in the facial proportions were found in girls.
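The statistical procedure summarized above (an omnibus Kruskal-Wallis test across the three hypertrophy groups, followed by Mann-Whitney U comparisons between pairs of groups) can be sketched with SciPy; the proportion values below are invented for illustration and are not the study's data:

```python
from scipy.stats import kruskal, mannwhitneyu

# Invented NaSn/NM proportion values for the three adenoid-hypertrophy groups.
normal   = [0.44, 0.45, 0.43, 0.46, 0.44]
moderate = [0.46, 0.47, 0.45, 0.48, 0.47]
severe   = [0.49, 0.50, 0.48, 0.51, 0.50]

# Omnibus test: do the three groups differ in the proportion?
h_stat, p_kw = kruskal(normal, moderate, severe)

# Pairwise follow-up between two groups (e.g. normal vs. severe).
u_stat, p_mw = mannwhitneyu(normal, severe, alternative="two-sided")
```

Both tests are rank-based, so they fit proportion data that may not be normally distributed, which is presumably why the study chose them over ANOVA and t tests.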
6

Palma, Pinto Carolina Paz. "Análisis de la percepción estética de la sonrisa." Tesis, Universidad de Chile, 2010. http://repositorio.uchile.cl/handle/2250/134339.

Full text
Abstract:
Research project submitted as a requirement for the degree of Dental Surgeon. The author has not authorized full-text access to the document. Introduction: Beauty is a constant human pursuit, as it is considered an attribute that can open doors in every sphere of society; this is one reason, among others, that patients visit the dentist seeking a harmonious facial appearance. It is therefore essential to know the importance individuals attach to the smile, both as an expression and with respect to its constituent elements, so that the clinician can understand the patient's needs and requirements. Methodology: A qualitative methodology was used; 35 semi-structured interviews were conducted with Chilean men and women, laypersons in dental aesthetics and residents of the Metropolitan Region (17 young people and 18 adults). Dentistry students and dentists were excluded. Each interview was transcribed and coded to build an explanatory model using grounded theory. Results: Both interviewed groups value the smile as an expression of positive feelings and as an important facial feature, above all for adults. However, the groups differ in their preferences and requirements for a beautiful smile. Conclusions: The smile as a facial expression is an important feature for young people and especially for adults, influencing the perceived attractiveness and beauty of the face. Women are associated with this facial expression, whereas men are associated with more serious expressions. Even so, both groups agree that Chileans do not smile frequently.
Among the constituent elements of a beautiful smile, for both young people and adults, are the teeth; both groups agree on the importance of having all one's teeth but differ on what matters most in this expression: for young people, tooth alignment is fundamental, whereas for adults, tooth color is more important.
7

Mascaró, Oliver Miquel. "Expresión de emociones de alegría para personajes virtuales mediante la risa y la sonrisa." Doctoral thesis, Universitat de les Illes Balears, 2014. http://hdl.handle.net/10803/145970.

Full text
Abstract:
Facial animation is one of the still-unsolved topics in both human-computer interaction and computer graphics. Expressions of joy associated with laughter and smiling are, because of their meaning and importance, a fundamental part of these fields. This thesis approaches the representation of the different types of laughter in facial animation and presents a new method capable of reproducing all of these types. The method is validated by recreating movie sequences and by using databases of generic facial expressions and of smile-specific expressions. In addition, a purpose-built database is created that compiles the different types of laughter classified and generated in this work. From this database, the most representative expressions of each of the laughs and smiles considered in the study are generated.
8

Alarcón, Haro Jefferson Santos. "Perfil facial de pobladores peruanos de la comunidad de los UROS mediante el Análisis de Powell." Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2003. https://hdl.handle.net/20.500.12672/2798.

Full text
Abstract:
Peru is a country of great ethnic diversity with distinctive facial characteristics, many of which have not yet been studied. This study therefore sought to determine the facial-profile characteristics of the inhabitants of the Uros community using Powell's analysis. The facial profiles of 32 individuals aged 18 to 25 were evaluated through photographic analysis, obtaining measurements of the nasofrontal, nasofacial, nasomental, and mentocervical angles. The mean values obtained were: nasofrontal 128.03, nasofacial 33.65, nasomental 125.96, and mentocervical 94.28. These results support proposing normal values different from those originally proposed by Powell, mainly because of the ethnic and anatomical differences between the two populations. Finally, the results of this descriptive study take a first step toward broadening knowledge in this area of dentistry and will serve as a basis for future research.
9

Anguas-Wong, Ana María, and David Matsumoto. "Acknowledgement of emotional facial expression in Mexican college students." Pontificia Universidad Católica del Perú, 2012. http://repositorio.pucp.edu.pe/index/handle/123456789/99730.

Full text
Abstract:
The aim of this study is to explore the patterns of emotion recognition in Mexican bilinguals using the JACFEE (Matsumoto & Ekman, 1988). Previous cross-cultural research has documented high agreement in judgments of facial expressions of emotion; however, none of the previous studies has included data from Mexican culture. Participants were 229 Mexican college students (mean age 21.79). Results indicate that each of the seven universal emotions (anger, contempt, disgust, fear, happiness, sadness, and surprise) was recognized by the participants above chance levels (p < .001), regardless of the gender or ethnicity of the posers. These findings replicate reported data on the high cross-cultural agreement in emotion recognition (Ekman, 1994) and contribute to the increasing body of evidence regarding the universality of emotions.
10

Giner, Bartolomé Cristina. "Emociones y trastornos de la conducta alimentaria: Correlatos clínicos y abordajes terapéuticos basados en nuevas tecnologías." Doctoral thesis, Universitat de Barcelona, 2016. http://hdl.handle.net/10803/401710.

Full text
Abstract:
This thesis focuses on the expression and regulation of emotions in patients with eating disorders (EDs), and on the therapeutic approach to these issues through an additional intervention based on a therapeutic video game ("Playmancer"). All of the studies were carried out in a clinical context, with patients who attended the Eating Disorders Unit of the Hospital Universitari de Bellvitge for assessment and treatment of their eating problems. The most notable conclusions are the following. First, compared with healthy individuals, patients diagnosed with a bulimic-spectrum disorder show reduced facial expressiveness of both positive emotions (specifically joy) and negative emotions (anger). In these patients, the level of expressed joy is associated with greater sociability and a greater need for approval from others (that is, high reward dependence), and with a greater ability to control one's behavior and direct it toward specific goals (high self-directedness). Conversely, the level of facial expression of anger is related to greater emotional reactivity, a tendency to act on momentary impulses, and greater difficulty in carrying out goal-directed behavior (low self-directedness).
Given the importance of facial expressions in social interaction and the role of interpersonal difficulties as a maintaining factor in EDs, these findings support the notion that effective social-communication strategies, including the facial expression of emotions, are among the fundamental aspects to address in therapeutic interventions for ED patients. There is also ample scientific evidence defining non-suicidal self-injury as a maladaptive strategy for alleviating or reducing negative emotions, and such behavior is widespread among some ED patients, in whom anxiety is one of the most frequent negative emotions. Consistent with this, another of the results obtained was that a high general predisposition to experience anxiety (trait anxiety) represents a vulnerability to developing self-injurious behavior in patients with eating problems. Finally, given the marked relevance of difficulties in emotional expression and regulation to both the onset and the prognosis of EDs, interventions that treat these aspects as a primary therapeutic target are fully justified and would contribute to more comprehensive and effective approaches that improve overall outcomes for people with eating problems.
In this regard, the therapeutic video game "Playmancer" has shown promising results: increased self-control (reduced impulsivity), improved regulation of certain emotions (lower anxiety and anger externalization), better decision-making, reduced severity of eating symptoms (mainly binge-eating frequency) and of general psychopathology indices, and greater treatment adherence. The thesis is presented as a compendium of four research articles with the following goals: 1) to analyze the levels of facial expression of both positive and negative emotions and their association with certain personality traits in ED patients; 2) to study the relationship between non-suicidal self-injurious behavior (understood as an inadequate emotion-regulation strategy) and levels of state and trait anxiety in ED patients; and 3) to evaluate the effectiveness of the video game "Playmancer" as an addition to cognitive-behavioral therapy for improving emotion-regulation abilities and impulse control in ED patients.
APA, Harvard, Vancouver, ISO, and other styles
11

Castro, Ramos Andrea Verónica. "Exposición de los incisivos mandibulares durante la sonrisa y el patrón facial de los estudiantes de pregrado de la Facultad de Odontología de la Universidad Nacional Mayor de San Marcos." Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2021. https://hdl.handle.net/20.500.12672/16170.

Full text
Abstract:
Tooth display during smiling is variable, and it is necessary to identify the factors that influence that variability. Determines the display of the mandibular incisors during smiling and its relationship with the facial pattern. Methods: a correlational, cross-sectional, prospective study of 103 undergraduate students (men and women) of the Faculty of Dentistry of the Universidad Nacional Mayor de San Marcos. A frontal photograph at rest was taken, along with a video recording in which subjects pronounced "chapulín hizo un chiste sobre el chipote chillón" and smiled at the end; the facial pattern was determined according to the Total Facial Index. The frame of greatest exposure was selected for the smile and for the pronunciation of the syllables "chis", "chi" and "chi". Lengths were measured with the ImageJ program. Results: mandibular incisor display during smiling was 1.71 mm in women and 2.46 mm in men, whereas during speech it was 3.68 mm in women and 3.25 mm in men. Concludes that the sex difference in tooth display during smiling was reversed during speech, and that the individuals' facial pattern was not related to tooth display.
APA, Harvard, Vancouver, ISO, and other styles
12

Liñán, Santoyo Rhonald Miguel. "Análisis de las características estéticas de la sonrisa según el género en los estudiantes de odontología de la Universidad Nacional Mayor de San Marcos." Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2016. https://hdl.handle.net/20.500.12672/5478.

Full text
Abstract:
Determines whether the aesthetic characteristics of the smile differ according to sex, using photographs. The sample comprised 77 students of the Faculty of Dentistry of the Universidad Nacional Mayor de San Marcos (44 women and 33 men) aged 18 to 28. AutoCAD 2010 was used to evaluate the aesthetic characteristics of the smile: reference points and lines were traced on each photograph to obtain measurements and angulations, which were recorded on a registration sheet for statistical analysis in SPSS 22.0, producing frequency tables and charts, measures of dispersion, arithmetic means, and interval values. The Kolmogorov-Smirnov test was used to assess normality of distribution and Levene's test to assess homogeneity of variance; the Mann-Whitney U test was used to compare ordinal categorical measures between sexes, the chi-square (X2) test for nominal categorical measures, and Student's t-test to compare sex against the non-categorical, ratio-scale characteristics of the smile. Representative values for the lip line by sex were: female group, medium 50% (n=22) and high 50% (n=22); male group, medium 72.7% (n=24) (p=0.017). For the smile arc: female group, convex without contact 45.5% (n=20) and convex in contact 34.1% (n=15); male group, convex without contact 72.4% (n=24) and convex in contact 24.2% (n=8) (p=0.008). For the curvature of the upper lip: female group, straight 52.3% (n=23); male group, low 66.7% (n=22) (p=0.004). For the presence of negative space: female group, bilateral 81.8% (n=36); male group, bilateral 90.9% (n=30) (p=0.528). For the size of the negative space: female group, 1.790 mm (standard deviation 1.200704); male group, 2.521 mm (standard deviation 1.507090) (p=0.020). For the gingival contour: female group, sinuous 68.2% (n=30); male group, straight 60.6% (n=20) (p=0.012).<br>Tesis
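The comparison scheme described in this abstract (normality check, then Mann-Whitney U for ordinal measures, chi-square for nominal ones, Student's t for ratio-scale ones) can be illustrated with a minimal sketch. The code below is a generic two-sided Mann-Whitney U test with a normal approximation, not the thesis's SPSS analysis, and the sample data are invented.

```python
from statistics import NormalDist

def average_ranks(values):
    """1-based ranks, with tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of values tied with the one at position i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U via the normal approximation
    (no tie or continuity correction -- an illustration only)."""
    n1, n2 = len(x), len(y)
    r = average_ranks(list(x) + list(y))
    u1 = sum(r[:n1]) - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)          # smaller of the two U statistics
    mu = n1 * n2 / 2
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (u - mu) / sigma               # z <= 0 because u <= mu
    return u, min(2 * NormalDist().cdf(z), 1.0)

# invented example: clearly separated groups give a small p-value
u_stat, p_value = mann_whitney_u([1, 2, 3, 4, 5], [10, 11, 12, 13, 14])
```

For real analyses one would use `scipy.stats.mannwhitneyu`, which also handles tie and continuity corrections.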
APA, Harvard, Vancouver, ISO, and other styles
13

Dávalos, Riva Juan José. "Los patrones de sonrisa y su relación con el grosor de los labios en estudiantes de pregrado de la Facultad de Odontología de la Universidad Nacional Mayor de San Marcos." Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2021. https://hdl.handle.net/20.500.12672/16147.

Full text
Abstract:
Smile analysis should be performed before beginning any treatment in dentistry. The evaluation of the soft tissues, long set aside, is part of the diagnostic and planning process for adequate treatment. This research seeks to relate smile patterns to lip thickness through a descriptive, cross-sectional study. The sample comprised 103 students (52 women and 51 men) of the Faculty of Dentistry of the Universidad Nacional Mayor de San Marcos, aged 18 to 35. Two photographs were taken, one frontal at rest and one smiling. Lip thickness was measured on the rest photograph with the ImageJ program. Each smile pattern was then identified on the smile photograph and classified by smile style, type, and stage. A statistically significant relationship was found between the thickness of the lower vermilion and the smile type (p<0.001). The most frequent smile patterns were commissural (69.9%), type I (48.5%) and stage III (97%). Concludes that a significant relationship exists between lower vermilion thickness and smile type.
APA, Harvard, Vancouver, ISO, and other styles
14

Paredes, Cruz Leslie Romina. "Percepción estética de los componentes de la sonrisa en personas sin conocimiento odontológico." Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2017. https://hdl.handle.net/20.500.12672/7022.

Full text
Abstract:
The digital document does not name an advisor<br>Determines the aesthetic perception of the eight components of the smile according to Sabri among people without dental training attending the Dentistry Service of the Hospital Nacional Arzobispo Loayza in July and August 2017. A descriptive, observational, cross-sectional study was carried out with 369 participants, who rated photographs of smiles that had been modified with Adobe Photoshop CS6 into more aesthetic, moderately aesthetic and less aesthetic versions. The results showed that people without dental training consider the following smiles more aesthetic: low and medium smiles, a consonant smile, a smile with a high curvature of the upper lip, a smile with medium negative spaces, a symmetric smile, a smile with a straight anterior occlusal plane, a smile with an undeviated midline, and a smile with the margins of the lateral incisors at the same height as the centrals. Statistically significant differences in aesthetic perception were found according to age, sex and educational level. We conclude that there are differences between the parameters established by Sabri and the preferences of people without dental training with respect to the curvature of the upper lip and the gingival component.<br>Tesis
APA, Harvard, Vancouver, ISO, and other styles
15

Fuentes, Sánchez Nieves. "Correlatos Psicológicos y Psicofisiológicos de Inducción Emocional a través de la Música." Doctoral thesis, Universitat Jaume I, 2021. http://dx.doi.org/10.6035/14109.2021.486721.

Full text
Abstract:
Music is a type of stimulation with a great influence on the induction and regulation of emotions; nevertheless, it has been relatively little studied within the field of emotion research. The present doctoral thesis therefore aims to explore the psychological and psychophysiological mechanisms underlying the induction of emotions through music in laboratory contexts. For this purpose, three experimental studies were carried out. The results demonstrate that the film-music stimulus set used in this research is a valid tool for studying the psychophysiological and subjective/experiential correlates of emotional processing. They also demonstrate the importance of including individual variables such as gender and musical reward in the study of emotional processing through music.<br>Programa de Doctorat en Psicologia
APA, Harvard, Vancouver, ISO, and other styles
16

Correia, Ana Sofia Guimarães. "A competência no reconhecimento da expressão facial da emoção: estudo empírico com crianças e jovens com Perturbação do Espetro do Autismo." Doctoral thesis, [s.n.], 2014. http://hdl.handle.net/10284/4467.

Full text
Abstract:
Thesis presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Doctor in Development and Language Disorders, speciality in Psycho- and Neurolinguistic Development.<br>Recognizing emotion from facial expressions is an important social skill that enables the individual to respond appropriately to the environment, serving as a communication device in the relationship with the world and with others. Individuals with Autism Spectrum Disorder (ASD) show deficits at this level, evidencing difficulties in social cognition with negative implications for interpersonal and social functioning. In recent decades, the most common objective instrument for evaluating the recognition of facial expressions of emotion has been the photograph. However, emergent technology has allowed images created in laboratory settings to be used in research on, and promotion of, this competence. The present study was designed to explore the potential advantages of three-dimensional imaging in the evaluation and development of programs that promote facial expression recognition skills in the ASD population. Its main objective is to compare the efficacy of emotion recognition for laboratory-generated versus real facial expressions. Thirty-eight children and young people with Asperger Syndrome (AS) or High-Functioning Autism (HFA) participated. They completed three tasks with images and videos provided by the Facial Emotion Expression Lab (FEELab), to evaluate emotional recognition, and the Assessment Tests of Language and Aphasia in Portuguese (PALPA-P), to evaluate language skills. The results suggest that participants are more effective at recognizing three-dimensional and dynamic stimuli and female facial expressions. Anger and surprise were the most easily identified emotions. Emotional recognition correlated positively with participants' age and language ability, and variables such as clinical diagnosis and schooling also affected emotion recognition. These data may be useful in preparing therapeutic methods appropriate for the ASD population: intervention programs built on computers and new technologies may be an asset in promoting facial emotion recognition skills in these individuals, by providing tasks closer to real experience.
APA, Harvard, Vancouver, ISO, and other styles
17

Neth, Donald C. "Facial configuration and the perception of facial expression." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1189090729.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Baltrušaitis, Tadas. "Automatic facial expression analysis." Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/245253.

Full text
Abstract:
Humans spend a large amount of their time interacting with computers of one type or another. However, computers are emotionally blind and indifferent to the affective states of their users. Human-computer interaction which does not consider emotions, ignores a whole channel of available information. Faces contain a large portion of our emotionally expressive behaviour. We use facial expressions to display our emotional states and to manage our interactions. Furthermore, we express and read emotions in faces effortlessly. However, automatic understanding of facial expressions is a very difficult task computationally, especially in the presence of highly variable pose, expression and illumination. My work furthers the field of automatic facial expression tracking by tackling these issues, bringing emotionally aware computing closer to reality. Firstly, I present an in-depth analysis of the Constrained Local Model (CLM) for facial expression and head pose tracking. I propose a number of extensions that make location of facial features more accurate. Secondly, I introduce a 3D Constrained Local Model (CLM-Z) which takes full advantage of depth information available from various range scanners. CLM-Z is robust to changes in illumination and shows better facial tracking performance. Thirdly, I present the Constrained Local Neural Field (CLNF), a novel instance of CLM that deals with the issues of facial tracking in complex scenes. It achieves this through the use of a novel landmark detector and a novel CLM fitting algorithm. CLNF outperforms state-of-the-art models for facial tracking in presence of difficult illumination and varying pose. Lastly, I demonstrate how tracked facial expressions can be used for emotion inference from videos. I also show how the tools developed for facial tracking can be applied to emotion inference in music.
APA, Harvard, Vancouver, ISO, and other styles
19

Mikheeva, Olga. "Perceptual facial expression representation." Thesis, KTH, Robotik, perception och lärande, RPL, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217307.

Full text
Abstract:
Facial expressions play an important role in such areas as human communication and medical state evaluation. For machine learning tasks in those areas, it would be beneficial to have a representation of facial expressions which corresponds to human similarity perception. In this work, a data-driven approach to representation learning of facial expressions is taken. The methodology is built upon Variational Autoencoders and eliminates appearance-related features from the latent space by using neutral facial expressions as additional inputs. In order to improve the quality of the learned representation, we modify the prior distribution of the latent variable to impose a structure on the latent space that is consistent with human perception of facial expressions. We conduct experiments on two datasets and additionally collected similarity data, show that the human-like topology in the latent representation helps to improve performance on the stereotypical emotion classification task, and demonstrate the benefits of using a probabilistic generative model for exploring the roles of latent dimensions through the generative process.
APA, Harvard, Vancouver, ISO, and other styles
20

Li, Jingting. "Facial Micro-Expression Analysis." Thesis, CentraleSupélec, 2019. http://www.theses.fr/2019CSUP0007.

Full text
Abstract:
Micro-expressions (MEs) are very important nonverbal communication cues. However, due to their local and brief nature, spotting them is challenging. In this thesis, we address this problem by using a dedicated local temporal pattern (LTP) of facial movement. This pattern has a specific shape (S-pattern) when an ME is displayed. Thus, using a classical classification algorithm (SVM), MEs are distinguished from other facial movements. We also propose a final global fusion analysis on the whole face to improve the distinction between ME (local) and head (global) movements. However, the learning of S-patterns is limited by the small number of ME databases and the low volume of ME samples. Hammerstein models (HMs) are known to be a good approximation of muscle movements. By approximating each S-pattern with an HM, we can both filter outliers and generate new, similar S-patterns. In this way, we perform data augmentation of the S-pattern training set and improve the ability to differentiate MEs from other facial movements. In the first ME spotting challenge, MEGC2019, we took part in building the new result-evaluation method; in addition, we applied our method to spotting MEs in long videos and provided the baseline result for the challenge. The spotting results on CASME I, CASME II, SAMM and CAS(ME)2 show that our proposed LTP outperforms the most popular spotting method in terms of F1-score. Adding the fusion process and data augmentation improves the spotting performance even further.
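The idea of approximating a motion pattern with a Hammerstein model (a static nonlinearity followed by a linear dynamic block) and then jittering its parameters to synthesize similar patterns can be sketched in a few lines of NumPy. The polynomial coefficients, FIR kernel, and step input below are illustrative choices, not the fitted parameters from the thesis.

```python
import numpy as np

def hammerstein(u, poly_coefs, fir):
    """Hammerstein model: static polynomial nonlinearity, then a linear FIR block."""
    v = np.polyval(poly_coefs, u)          # static nonlinearity
    return np.convolve(v, fir)[: len(u)]   # linear dynamics (truncated full convolution)

t = np.linspace(0.0, 1.0, 50)
u = np.ones_like(t)                        # step input as a stand-in for muscle activation
kernel = 0.1 * np.exp(-5.0 * t)            # decaying impulse response
base = hammerstein(u, [0.5, 1.0, 0.0], kernel)  # saturating, S-like rise

# Augmentation in the spirit of the abstract: jitter the model parameters
# to synthesize new, similar patterns for training.
rng = np.random.default_rng(0)
variants = [
    hammerstein(u, [0.5, 1.0 + rng.normal(0, 0.05), 0.0],
                0.1 * np.exp(-(5.0 + rng.normal(0, 0.5)) * t))
    for _ in range(5)
]
```

A fitted model that fails to approximate a candidate pattern well would, conversely, flag that pattern as an outlier, which is the filtering role the abstract describes.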
APA, Harvard, Vancouver, ISO, and other styles
21

Munasinghe, Kankanamge Sarasi Madushika. "Facial analysis models for face and facial expression recognition." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/118197/1/Sarasi%20Madushika_Munasinghe%20Kankanamge_Thesis.pdf.

Full text
Abstract:
This thesis examines the research and development of new approaches for face and facial expression recognition within the fields of computer vision and biometrics. Expression variation is a challenging issue in current face recognition systems and current approaches are not capable of recognizing facial variations effectively within human-computer interfaces, security and access control applications. This thesis presents new contributions for performing face and expression recognition simultaneously; face recognition in the wild; and facial expression recognition in challenging environments. The research findings include the development of new factor analysis and deep learning approaches which can better handle different facial variations.
APA, Harvard, Vancouver, ISO, and other styles
22

Andréasson, Per. "Emotional Empathy, Facial Reactions, and Facial Feedback." Doctoral thesis, Uppsala universitet, Institutionen för psykologi, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-126825.

Full text
Abstract:
The human face has a fascinating capability to express emotions. The facial feedback hypothesis suggests that the human face not only expresses emotions but is also able to send feedback to the brain and modulate the ongoing emotional experience. It has furthermore been suggested that this feedback from the facial muscles could be involved in empathic reactions. This thesis explores the concept of emotional empathy and relates it to two aspects of activity in the facial muscles. First, do people high versus low in emotional empathy differ in the degree to which they spontaneously mimic emotional facial expressions? Second, is there any difference between people with high as compared to low emotional empathy with respect to how sensitive they are to feedback from their own facial muscles? Regarding the first question, people with high emotional empathy were found to spontaneously mimic pictures of emotional facial expressions while people with low emotional empathy were lacking this mimicking reaction. The answer to the second question is a bit more complicated. People with low emotional empathy were found to rate humorous films as funnier in a manipulated sulky facial expression than in a manipulated happy facial expression, whereas people with high emotional empathy did not react significantly. On the other hand, when the facial manipulations were a smile and a frown, people with low as well as high emotional empathy reacted in line with the facial feedback hypothesis. In conclusion, the experiments in the present thesis indicate that mimicking and feedback from the facial muscles may be involved in emotional contagion and thereby influence emotional empathic reactions. Thus, differences in emotional empathy may in part be accounted for by different degrees of mimicking reactions and different emotional effects of feedback from the facial muscles.
APA, Harvard, Vancouver, ISO, and other styles
23

de, la Cruz Nathan. "Autonomous facial expression recognition using the facial action coding system." University of the Western Cape, 2016. http://hdl.handle.net/11394/5121.

Full text
Abstract:
Magister Scientiae - MSc<br>The South African Sign Language research group at the University of the Western Cape is in the process of creating a fully-fledged machine translation system to automatically translate between South African Sign Language and English. A major component of the system is the ability to accurately recognise facial expressions, which are used to convey emphasis, tone and mood within South African Sign Language sentences. Traditionally, facial expression recognition research has taken one of two paths: either recognising whole facial expressions, of which there are six, i.e. anger, disgust, fear, happiness, sadness, surprise, as well as the neutral expression; or recognising the fundamental components of facial expressions as defined by the Facial Action Coding System in the form of Action Units. Action Units are directly related to the motion of specific muscles in the face, combinations of which are used to form any facial expression. This research investigates enhanced recognition of whole facial expressions by means of a hybrid approach that combines traditional whole facial expression recognition with Action Unit recognition to achieve an enhanced classification approach.
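The link between Action Units and whole expressions can be sketched with the commonly cited EMFACS-style AU prototypes for the six basic emotions. This is an illustrative rule-based stand-in only: the thesis's hybrid classifier is learned, and the prototype sets and threshold below are textbook approximations, not taken from the thesis.

```python
# EMFACS-style AU prototypes for the six basic emotions (approximate,
# commonly cited combinations; a real hybrid system would learn this mapping).
PROTOTYPES = {
    "happiness": {6, 12},
    "sadness":   {1, 4, 15},
    "surprise":  {1, 2, 5, 26},
    "fear":      {1, 2, 4, 5, 7, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15, 16},
}

def classify(detected_aus, threshold=0.5):
    """Score each emotion by the fraction of its prototype AUs detected;
    fall back to 'neutral' when no prototype is sufficiently covered."""
    detected = set(detected_aus)
    scores = {emo: len(detected & proto) / len(proto)
              for emo, proto in PROTOTYPES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "neutral"
```

For example, detecting AU6 (cheek raiser) together with AU12 (lip corner puller) covers the full happiness prototype, so `classify({6, 12})` yields `"happiness"`.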
APA, Harvard, Vancouver, ISO, and other styles
24

Wild-Wall, Nele. "Is there an interaction between facial expression and facial familiarity?" Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2004. http://dx.doi.org/10.18452/15042.

Full text
Abstract:
Entgegen traditioneller Gesichtererkennungsmodelle konnte in einigen Studien gezeigt werden, dass die Erkennung des Emotionsausdrucks und der Bekanntheit interagieren. In dieser Dissertation wurde mit Hilfe von ereigniskorrelierten Potentialen untersucht, welche funktionalen Prozesse bei einer Interaktion moduliert werden. Teil I untersuchte, ob die Bekanntheit eines Gesichtes die Emotionsdiskrimination erleichtert. In mehreren Experimenten diskriminierten Versuchspersonen zwei Emotionen, die von bekannten und unbekannten Gesichtern praesentiert wurden . Dabei war die Entscheidung fuer persoenlich bekannte Gesichter mit froehlichem Ausdruck schneller und fehlerfreier. Dies zeigt sich in einer kuerzeren Latenz der P300 Komponente (Trend), welche die Dauer der Reizklassifikation auswies, sowie in einem verkuerzten Intervall zwischen Stimulus und Beginn des Lateralisierten Bereitschaftspotentials (S-LRP), welches die handspezifische Reaktionsauswahl anzeigt. Diese Befunde sprechen fuer eine Erleichterung der Emotionsdiskrimination auf spaeten perzeptuellen Verarbeitungsstufen bei persoenlich bekannten Gesichtern. In weiteren Experimenten mit oeffentlich bekannten, gelernten und unbekannten Gesichtern zeigte sich keine Erleichterung der Emotionsdiskrimination für bekannte Gesichter. Teil II untersuchte, ob es einen Einfluss des Emotionsausdrucks auf die Bekanntheitsentscheidung gibt. Eine Erleichterung zeigte sich fuer neutrale oder froehliche Emotionen nur bei persoenlich bekannten Gesichtern, nicht aber bei gelernten oder unbekannten Gesichtern. Sie spiegelt sich in einer Verkuerzung des S-LRP fuer persoenlich bekannte Gesichter wider, was eine Erleichterung der Reaktionsauswahl nahelegt. Zusammenfassend konnte gezeigt werden, dass eine Interaktion der Bekanntheit mit der Emotionserkennung unter bestimmten Bedingungen auftritt. 
In einer abschließenden Diskussion werden die experimentellen Ergebnisse in Beziehung gesetzt und im Hinblick auf bisherige Befunde diskutiert.<br>In contrast to traditional face recognition models, previous research has revealed that the recognition of facial expressions and familiarity may not be independent. This dissertation attempts to localize this interaction within the information processing system by means of performance data and event-related potentials. Part I addressed the question of whether there is an interaction between facial familiarity and the discrimination of facial expression. Participants had to discriminate two expressions which were displayed on familiar and unfamiliar faces. The discrimination was faster and less error prone for personally familiar faces displaying happiness. Results revealed a shorter peak latency for the P300 component (trend), reflecting stimulus categorization time, and for the onset of the lateralized readiness potential (S-LRP), reflecting the duration of pre-motor processes. A facilitation of perceptual stimulus categorization for personally familiar faces displaying happiness is suggested. The discrimination of expressions was not facilitated in further experiments using famous or experimentally familiarized, and unfamiliar faces. Part II raises the question of whether there is an interaction between facial expression and the discrimination of facial familiarity. In this task a facilitation was only observable for personally familiar faces displaying a neutral or happy expression, but not for experimentally familiarized, or unfamiliar faces. Event-related potentials reveal a shorter S-LRP interval for personally familiar faces, hence suggesting a facilitated response selection stage. In summary, the results suggest that an interaction of facial familiarity and facial expression might be possible under some circumstances.
Finally, the results are discussed in the context of possible interpretations, previous results, and face recognition models.
APA, Harvard, Vancouver, ISO, and other styles
25

Carter, Jeffrey R. "Facial expression analysis in schizophrenia." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ58398.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Yasuda, Maiko. "Color and facial expressions." abstract (free order & download UNR users only), 2007. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1447610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Bannerman, Rachel L. "Orienting to emotion : a psychophysical approach." Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources, 2009. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=59429.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Testa, Rafael Luiz. "Síntese de expressões faciais em fotografias para representação de emoções." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-31012019-165605/.

Full text
Abstract:
O processamento e a identificação de emoções faciais constituem ações essenciais para estabelecer interação entre pessoas. Alguns transtornos psiquiátricos podem limitar a capacidade de um indivíduo em reconhecer emoções em expressões faciais. De modo a contribuir com a solução deste problema, técnicas computacionais podem ser utilizadas para compor ferramentas destinadas ao diagnóstico, avaliação e treinamento no reconhecimento de tais expressões. Com esta motivação, o objetivo deste trabalho é definir, implementar e avaliar um método para sintetizar expressões faciais que representam emoções em imagens de pessoas reais. Nos trabalhos encontrados na literatura a principal ideia é que a expressão facial da imagem de uma pessoa pode ser reconstituída na imagem de outra pessoa. Este estudo difere-se das abordagens apresentadas na literatura ao propor uma técnica que considera a similaridade entre imagens faciais para escolher aquela que será empregada como origem para a reconstituição. Desta maneira, pretende-se aumentar o realismo das imagens sintetizadas. A abordagem sugerida para resolver o problema, além de buscar as faces mais similares em banco de imagens, faz a deformação dos componentes faciais e o mapeamento das diferenças de iluminação na imagem destino. O realismo das imagens geradas foi mensurado de forma objetiva e subjetiva usando imagens disponíveis em bancos de imagens públicos. Uma análise visual mostrou que as imagens sintetizadas com base em faces similares apresentaram um grau de realismo adequado, principalmente quando comparadas com imagens sintetizadas a partir de faces aleatórias. 
Além de constituir uma contribuição para a geração de imagens a serem aplicadas em ferramentas de auxílio ao diagnóstico e terapia de distúrbios psiquiátricos, oferece uma contribuição para a área de Ciência da Computação, por meio da proposição de novas técnicas de síntese de expressões faciais<br>The ability to process and identify facial emotions is an essential factor for an individual's social interaction. Some psychiatric disorders can limit an individual's ability to recognize emotions in facial expressions. This problem could be confronted by using computational techniques to develop learning environments for diagnosis, evaluation, and training in identifying facial emotions. With this motivation, the objective of this work is to define, implement and evaluate a method to synthesize realistic facial expressions that represent emotions in images of real people. The main idea of the studies found in the literature is that the facial expression in one person's image can be reenacted in another person's image. This study differs from the approaches presented in the literature by proposing a technique that considers the similarity between facial images to choose the one that will be used as the origin for reenactment. As a result, we intend to increase the realism of the synthesized images. Our approach to solving the problem, besides searching for the most similar facial components in the image dataset, also deforms the facial elements and maps the differences of illumination in the target image. A visual analysis showed that the images synthesized on the basis of similar faces presented an adequate degree of realism, especially when compared with images synthesized from random faces. The study contributes to the generation of images applied in tools for the diagnosis and therapy of psychiatric disorders, and also contributes to the computational field through the proposition of new techniques for facial expression synthesis.
APA, Harvard, Vancouver, ISO, and other styles
29

Stefani, Fabiane Miron. "Estudo eletromiográfico do padrão de contração muscular da face de adultos." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/5/5160/tde-19112008-162050/.

Full text
Abstract:
A motricidade orofacial é a especialidade da Fonoaudiologia, que tem como objetivo a prevenção, diagnóstico e tratamento das alterações miofuncionais do sistema estomatognático. Atualmente, muitos pesquisadores desta área, nacional e internacionalmente, têm buscado metodologias mais objetivas de avaliação e conduta. Dentre tais aparatos está a eletromiografia de superfície (EMG). A EMG é a medida da atividade elétrica de um músculo. Os objetivos deste trabalho foram o de identificar, por meio da EMG, a atividade elétrica dos músculos faciais de adultos saudáveis durante movimentos faciais normalmente utilizados terapeuticamente na clínica fonoaudiológica, para identificar o papel de cada músculo durante os movimentos e para diferenciar a atividade elétrica destes músculos nestes mesmos movimentos, bem como avaliar a validade da EMG na clínica fonoaudiológica. Foram avaliadas 31 pessoas (18 mulheres) com média de idade de 29,48 anos e sem queixas fonoaudiológicas ou odontológicas. Os eletrodos de superfície bipolares foram aderidos aos músculos masseteres, bucinadores e supra-hióides bilateralmente e aos músculos orbicular da boca superior e inferior. Os eletrodos foram conectados a um eletromiógrafo EMG 1000 da Lynx Tecnologia Eletrônica de oito canais, e foi pedido que cada participante realizasse os seguintes movimentos: Protrusão Labial (PL), Protrusão Lingual (L), Inflar Bochechas (IB), Sorriso Aberto (SA), Sorriso Fechado (SF), Lateralização Labial Direita (LD) e Esquerda (LE) e Pressão de um lábio contra o outro (AL). 
Os dados eletromiográficos foram registrados em microvolts (RMS) e foi considerada a média dos movimentos para a realização da análise dos dados, que foram normalizados utilizando como base o registro da EMG no repouso. Os resultados demonstram que os músculos orbiculares da boca inferior e superior apresentam maior atividade elétrica que os outros músculos na maior parte dos movimentos, com exceção dos movimentos de L e SF. Nos movimentos de LD e LE, os orbiculares da boca também estavam mais ativos, mas os músculos bucinadores demonstraram participação importante, especialmente o bucinador direito em LD. A Protrusão Lingual não demonstrou diferenças significativas entre os músculos estudados. O SA teve maior participação do orbicular da boca inferior que o superior, e demonstrou ser o movimento que mais movimenta os músculos da face como um todo; o músculo com maior atividade durante o SF foi o bucinador. Concluímos que o aparato da EMG é eficiente não só para a avaliação dos músculos mastigatórios, mas também dos da mímica, a não ser no movimento de Protrusão Lingual, onde o EMG de superfície não foi eficiente. Os músculos orbiculares foram mais ativos durante os movimentos testados, portanto, são também os mais exercitados durante os exercícios de motricidade oral. O movimento que envolve a maior atividade dos músculos da face como um todo foi o Sorriso Aberto<br>Speech therapy has been considered subjective for many years due to its manual and visual methods. Many researchers have been searching for more objective evaluation methodologies based on electronic devices. One of them is surface electromyography (EMG), the measurement of the electrical activity of a muscle. The literature presents many studies in the TMJ and orthodontics areas, with special attention to the chewing muscles (temporal and masseter), as they are larger muscles and present more evident results in EMG. Less attention has been paid to the mimic muscles.
The objective of our work was to identify, by means of EMG, the electrical activity of the facial muscles of healthy adults during facial movements normally used in the speech therapy clinic, to identify the role of each muscle during the movements, and to differentiate the electrical activity of these muscles during these movements. Thirty-one volunteers were evaluated (18 women), with a mean age of 29.48 years and no speech therapy or odontological complaints. Bipolar surface electrodes were adhered to the masseter, buccinator and suprahyoid muscles bilaterally and to the superior and inferior orbicularis oris muscles. The electrodes were connected to an eight-channel EMG 1000 from Lynx Tecnologia Eletrônica, and each participant was asked to carry out the following movements: Labial Protrusion (PL), Lingual Protrusion (L), Cheek Inflating (CI), Opened Smile (OS), Closed Smile (CS), Labial Lateralization (LL) and pressure of one lip against the other (LP). EMG data were registered in microvolts (RMS) and the mean of each movement was considered for data analysis; the data were normalized using the resting EMG as a baseline. The results show that the orbicularis oris muscles displayed greater electrical activity than the other muscles in PL, CI, OS, LL and LP. In the LL movements, the orbicularis oris muscles were also more active, but the buccinator muscles showed effective participation in the movement, especially the right buccinator in right LL. L did not show any significant differences between the evaluated muscles. The buccinator was the most active muscle during CS. We concluded that the orbicularis oris muscles were the most active during the tasks, except for L and CS: in L no muscle was significantly more active, and in CS the buccinators were the most active. The Opened Smile is the movement in which the facial muscles as a whole are most activated. These results show that EMG is of great use for the evaluation of the mimic muscles, but should be used carefully in specific tongue assessment
APA, Harvard, Vancouver, ISO, and other styles
30

Amaro, Maria Teresa Valentim. "As expressões faciais no estudo de emoções específicas: Uma análise de importância do contexto situacional no reconhecimento de algumas emoções." Master's thesis, Instituto Superior de Psicologia Aplicada, 2000. http://hdl.handle.net/10400.12/303.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Lee, Douglas Spencer. "Facial action determinants of pain judgment." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25812.

Full text
Abstract:
Nonverbal indices of pain are some of the least researched sources of data for assessing pain. The extensive literature on the communicative functions of nonverbal facial expressions suggests that there is potentially much information to be gained in studying facial expressions associated with pain. Results from two studies support the position that facial expressions related to pain may indeed be a source of information for pain assessment. A review of the literature found several studies indicating that judges could make discriminations amongst levels of discomfort from viewing a person's facial expressions. Other studies found that the occurrence of a small set of facial movements could be used to discriminate amongst several levels of self-reported discomfort. However, there was no research directly addressing the question of whether judges' ratings would vary in response to different patterns of the identified facial movements. Issues regarding the facial cues used by naive judges in making ratings of another person's discomfort were investigated. Four hypotheses were developed. From prior research using the Facial Action Coding System (FACS) (Ekman & Friesen, 1978) a small set of facial muscle movements, termed Action Units (AUs), were found to be the best facial movements for discriminating amongst different levels of pain. The first hypothesis was that increasing the number of AUs per expression would lead to increased ratings of discomfort. The second hypothesis was that video segments with the AUs portrayed simultaneously would be rated higher than segments with the same AUs portrayed in a sequential configuration. Four encoders portrayed all configurations. The configurations were randomly edited onto videotape and presented to the judges. The judges used the scale of affective discomfort developed by Gracely, McGrath, and Dubner (1978). Twenty-five male and 25 female university students volunteered as judges. The results supported both hypotheses.
Increasing the number of AUs per expression led to a sharp rise in judges' ratings. Video segments of overlapping AU configurations were rated higher than segments with non-overlapping configurations. Female judges always rated higher than male judges. The second study was methodologically similar to the first study. The major hypothesis was that expressions with only upper face AUs would be rated as more often indicating attempts to hide an expression than lower face expressions. This study contained a subset of expressions that were identical to ones used in the first study. This allowed for testing of the fourth hypothesis which stated that the ratings of this subset of expressions would differ between the studies due to the differences in the judgment conditions. Both hypotheses were again supported. Upper face expressions were more often judged as portraying attempts by the encoders to hide their expressions. Analysis of the fourth hypothesis revealed that the expressions were rated higher in study 2 than study 1. A sex of judge X judgment condition interaction indicated that females rated higher in study 1 but males rated higher in study 2. The results from these studies indicated that the nonverbal communication of facial expressions of pain was defined by a number of parameters which led judges to alter their ratings depending on the parameters of the facial expressions being viewed. While studies of the micro-behavioral aspects of facial expressions are new, the present studies suggest that such research is integral to understanding the complex communication functions of nonverbal facial expressions.<br>Arts, Faculty of<br>Psychology, Department of<br>Graduate
APA, Harvard, Vancouver, ISO, and other styles
32

Lin, Alice J. "THREE DIMENSIONAL MODELING AND ANIMATION OF FACIAL EXPRESSIONS." UKnowledge, 2011. http://uknowledge.uky.edu/gradschool_diss/841.

Full text
Abstract:
Facial expression and animation are important aspects of the 3D environment featuring human characters. These animations are frequently used in many kinds of applications and there have been many efforts to increase the realism. Three aspects are still stimulating active research: the detailed subtle facial expressions, the process of rigging a face, and the transfer of an expression from one person to another. This dissertation focuses on the above three aspects. A system for freely designing and creating detailed, dynamic, and animated facial expressions is developed. The presented pattern functions produce detailed and animated facial expressions. The system produces realistic results with fast performance, and allows users to directly manipulate it and see immediate results. Two unique methods for generating real-time, vivid, and animated tears have been developed and implemented. One method is for generating a teardrop that continually changes its shape as the tear drips down the face. The other is for generating a shedding tear, which is a kind of tear that seamlessly connects with the skin as it flows along the surface of the face, but remains an individual object. Both methods broaden the scope of computer graphics and increase the realism of facial expressions. A new method to automatically set the bones on facial/head models to speed up the rigging process of a human face is also developed. To accomplish this, vertices that describe the face/head as well as relationships between each part of the face/head are grouped. The average distance between pairs of vertices is used to place the head bones. To set the bones in the face with multi-density, the mean value of the vertices in a group is measured. The time saved with this method is significant. A novel method to produce realistic expressions and animations by transferring an existing expression to a new facial model is developed.
The approach is to transform the source model into the target model, which then has the same topology as the source model. The displacement vectors are calculated. Each vertex in the source model is mapped to the target model. The spatial relationships of each mapped vertex are constrained.
APA, Harvard, Vancouver, ISO, and other styles
33

Meyer, Eric C. "A visual scanpath study of facial affect recognition in schizotypy and social anxiety." Diss., Online access via UMI, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
34

Shang, Lifeng, and 尚利峰. "Facial expression analysis with graphical models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B47849484.

Full text
Abstract:
Facial expression recognition has become an active research topic in recent years due to its applications in human computer interfaces and data-driven animation. In this thesis, we focus on the problem of how to effectively use domain, temporal and categorical information of facial expressions to help computers understand human emotions. Over the past decades, many techniques (such as neural networks, Gaussian processes, support vector machines, etc.) have been applied to facial expression analysis. Recently graphical models have emerged as a general framework for applying probabilistic models. They provide a natural framework for describing the generative process of facial expressions. However, these models often suffer from too many latent variables or too complex model structures, which makes learning and inference difficult. In this thesis, we will try to analyze the deformation of facial expression by introducing some recently developed graphical models (e.g. latent topic model) or improving the recognition ability of some already widely used models (e.g. HMM). In this thesis, we develop three different graphical models with different representational assumptions: categories being represented by prototypes, sets of exemplars and topics in between. Our first model incorporates exemplar-based representation into graphical models. To further improve the computational efficiency of the proposed model, we build it in a local linear subspace constructed by principal component analysis. The second model is an extension of the recently developed topic model by introducing temporal and categorical information into the Latent Dirichlet Allocation model. In our discriminative temporal topic model (DTTM), temporal information is integrated by placing an asymmetric Dirichlet prior over document-topic distributions. The discriminative ability is improved by a supervised term weighting scheme.
We describe the resulting DTTM in detail and show how it can be applied to facial expression recognition. Our third model is a nonparametric discriminative variation of HMM. HMM can be viewed as a prototype model, and transition parameters act as the prototype for one category. To increase the discrimination ability of HMM at both the class level and the state level, we introduce linear interpolation with maximum entropy (LIME) and membership coefficients to HMM. Furthermore, we present a general formula for output probability estimation, which provides a way to develop new HMMs. Experimental results show that the performance of some existing HMMs can be improved by integrating the proposed nonparametric kernel method and parameter adaptation formula. In conclusion, this thesis develops three different graphical models by (i) combining exemplar-based models with graphical models, (ii) introducing temporal and categorical information into the Latent Dirichlet Allocation (LDA) topic model, and (iii) increasing the discrimination ability of HMM at both the hidden state level and the class level.<br>published_or_final_version<br>Computer Science<br>Doctoral<br>Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
35

Fraser, Matthew Paul. "Repetition priming of facial expression recognition." Thesis, University of York, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.431255.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Hsu, Shen-Mou. "Adaptation effects in facial expression recognition." Thesis, University of York, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.403968.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Zalewski, Lukasz. "Statistical modelling for facial expression dynamics." Thesis, Queen Mary, University of London, 2012. http://qmro.qmul.ac.uk/xmlui/handle/123456789/2518.

Full text
Abstract:
One of the most powerful and fastest means of relaying emotions between humans are facial expressions. The ability to capture, understand and mimic those emotions and their underlying dynamics in the synthetic counterpart is a challenging task because of the complexity of human emotions, different ways of conveying them, non-linearities caused by facial feature and head motion, and the ever critical eye of the viewer. This thesis sets out to address some of the limitations of existing techniques by investigating three components of expression modelling and parameterisation framework: (1) Feature and expression manifold representation, (2) Pose estimation, and (3) Expression dynamics modelling and their parameterisation for the purpose of driving a synthetic head avatar. First, we introduce a hierarchical representation based on the Point Distribution Model (PDM). Holistic representations imply that non-linearities caused by the motion of facial features, and intrafeature correlations are implicitly embedded and hence have to be accounted for in the resulting expression space. Also such representations require large training datasets to account for all possible variations. To address those shortcomings, and to provide a basis for learning more subtle, localised variations, our representation consists of tree-like structure where a holistic root component is decomposed into leaves containing the jaw outline, each of the eye and eyebrows and the mouth. Each of the hierarchical components is modelled according to its intrinsic functionality, rather than the final, holistic expression label. Secondly, we introduce a statistical approach for capturing an underlying low-dimension expression manifold by utilising components of the previously defined hierarchical representation. 
As Principal Component Analysis (PCA) based approaches cannot reliably capture variations caused by large facial feature changes because of their linear nature, the underlying dynamics manifold for each of the hierarchical components is modelled using a Hierarchical Latent Variable Model (HLVM) approach. Whilst retaining PCA properties, such a model introduces a probability density model which can deal with missing or incomplete data and allows discovery of internal within-cluster structures. All of the model parameters and the underlying density model are automatically estimated during the training stage. We investigate the usefulness of such a model on larger and unseen datasets. Thirdly, we extend the concept of the HLVM model to pose estimation to address the non-linear shape deformations and the definition of the plausible pose space caused by large head motion. Since our head rarely stays still, and its movements are intrinsically connected with the way we perceive and understand the expressions, pose information is an integral part of their dynamics. The proposed approach integrates into our existing hierarchical representation model. It is learned using a sparse and discretely sampled training dataset, and generalises to a larger and continuous view-sphere. Finally, we introduce a framework that models and extracts expression dynamics. In existing frameworks, explicit definition of expression intensity and pose information is often overlooked, although usually implicitly embedded in the underlying representation. We investigate modelling of the expression dynamics based on use of static information only, and focus on its sufficiency for the task at hand. We compare a rule-based method that utilises the existing latent structure and provides a fusion of different components with holistic and Bayesian Network (BN) approaches. An Active Appearance Model (AAM) based tracker is used to extract relevant information from input sequences.
Such information is subsequently used to define the parametric structure of the underlying expression dynamics. We demonstrate that such information can be utilised to animate a synthetic head avatar.
APA, Harvard, Vancouver, ISO, and other styles
38

Harris, Richard J. "The neural representation of facial expression." Thesis, University of York, 2012. http://etheses.whiterose.ac.uk/3990/.

Full text
Abstract:
Faces provide information critical for effective social interactions. A face can be used to determine who someone is, where they are looking and how they are feeling. How these different aspects of a face are processed has proved a popular topic of research over the last 25 years. However, much of this research has focused on the perception of facial identity and as a result less is known about how facial expression is represented in the brain. For this reason, the primary aim of this thesis was to explore the neural representation of facial expression. First, this thesis investigated which regions of the brain are sensitive to expression and how these regions represent facial expression. Two regions of the brain, the posterior superior temporal sulcus (pSTS) and the amygdala, were more sensitive to changes in facial expression than identity. There was, however, a dissociation between how these regions represented information about facial expression. The pSTS was sensitive to any change in facial expression, consistent with a continuous representation of expression. In comparison, the amygdala was only sensitive to changes in expression that resulted in a change in the emotion category. This reflects a more categorical response in which expressions are assigned into discrete categories of emotion. Next, the representation of expression was further explored by asking what information from a face is used in the perception of expression. Photographic negation was used to disrupt the surface-based facial cues (i.e. pattern of light and dark across the face) while preserving the shape-based information carried by the features of the face. This manipulation had a minimal effect on judgements of expression, highlighting the important role of the shape-based information in judgements of expression. Furthermore, combining the photo negation technique with fMRI demonstrated that the representation of faces in the pSTS was predominantly based on feature shape information.
Finally, the influence of facial identity on the neural representation of facial expression was measured. The pSTS, but not the amygdala, was most responsive to changes in facial expression when the identity of the face remained the same. It was found that this sensitivity to facial identity in the pSTS was a result of interactions with regions thought to be involved in the processing of facial identity. In this way identity information can be used to process expression in a socially meaningful way.
APA, Harvard, Vancouver, ISO, and other styles
39

Sloan, Robin J. S. "Emotional avatars : choreographing emotional facial expression animation." Thesis, Abertay University, 2011. https://rke.abertay.ac.uk/en/studentTheses/2363eb4a-2eba-4f94-979f-77b0d6586e94.

Full text
Abstract:
As a universal element of human nature, the experience, expression, and perception of emotions permeate our daily lives. Many emotions are thought to be basic and common to all humanity, irrespective of social or cultural background. Of these emotions, the corresponding facial expressions of a select few are known to be truly universal, in that they can be identified by most observers without the need for training. Facial expressions of emotion are subsequently used as a method of communication, whether through close face-to-face contact, or the use of emoticons online and in mobile texting. Facial expressions are fundamental to acting for stage and screen, and to animation for film and computer games. Expressions of emotion have been the subject of intense experimentation in psychology and computer science research, both in terms of their naturalistic appearance and the virtual replication of facial movements. From this work much is known about expression universality, anatomy, psychology, and synthesis. Beyond the realm of scientific research, animation practitioners have scrutinised facial expressions and developed an artistic understanding of movement and performance. However, despite the ubiquitous quality of facial expressions in life and research, our understanding of how to produce synthetic, dynamic imitations of emotional expressions which are perceptually valid remains somewhat limited. The research covered in this thesis sought to unite an artistic understanding of expression animation with scientific approaches to facial expression assessment. Acting as both an animation practitioner and as a scientific researcher, the author set out to investigate emotional facial expression dynamics, with the particular aim of identifying spatio-temporal configurations of animated expressions that not only satisfied artistic judgement, but which also stood up to empirical assessment. These configurations became known as emotional expression choreographies. 
The final work presented in this thesis covers the performative, practice-led research into emotional expression choreography, the results of empirical experimentation (where choreographed animations were assessed by observers), and the findings of qualitative studies (which painted a more detailed picture of the potential context of choreographed expressions). The holistic evaluation of expression animation from these three epistemological perspectives indicated that emotional expressions can indeed be choreographed in order to create refined performances which have empirically measurable effects on observers, and which may be contextualised by the phenomenological interpretations of both student animators and general audiences.
APA, Harvard, Vancouver, ISO, and other styles
40

Kaufmann, Jurgen Michael. "Interactions between the processing of facial identity, emotional expression and facial speech." Thesis, University of Glasgow, 2002. http://theses.gla.ac.uk/3110/.

Full text
Abstract:
The experiments investigate the functional relationship between the processing of facial identity, emotional expression and facial speech. They were designed in order to further explore a widely accepted model of parallel, independent face perception components (Bruce and Young, 1986), which has been challenged recently (e.g. Walker et al., 1995; Yakel et al., 2000; Schweinberger et al., 1998; Schweinberger et al., 1999). In addition to applying a selective attention paradigm (Garner, 1974; 1976), dependencies between face-related processes are explored by morphing, a digital graphic editing technique which allows for the selective manipulation of facial dimensions, and by studying the influence of face familiarity on the processing of emotional expression and speechreading. The role of dynamic information for speechreading (lipreading) is acknowledged by investigating the influence of natural facial speech movements on the integration of identity-specific talker information and facial speech cues. As for the relationship between the processing of facial identity and emotional expression, overall the results are in line with the notion of independent parallel routes. Recent findings of an "asymmetric interaction" between the two dimensions in the selective attention paradigm, in the sense that facial identity can be processed independently from expression but not vice versa (Schweinberger et al., 1998; Schweinberger et al., 1999), could not be unequivocally corroborated. Critical factors for the interpretation of results based on the selective attention paradigm when used with complex stimuli such as faces are outlined and tested empirically. However, the experiments do give evidence that stored facial representations might be less abstract than previously thought and might preserve some information about typical expressions. 
The results indicate that classifications of unfamiliar faces are not influenced by emotional expression, while familiar faces are recognized fastest for certain expressions.
APA, Harvard, Vancouver, ISO, and other styles
41

Bayet, Laurie. "Le développement de la perception des expressions faciales." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAS049/document.

Full text
Abstract:
This thesis addressed the question of how the perception of emotional facial expressions develops, reframing it within the theoretical framework of face perception: the separation of variant (expression, gaze) and invariant (gender, race) streams, the role of experience, and social attention. More specifically, we investigated how, in infants and children, the perception of angry, smiling, or fearful facial expressions interacts with gender perception (Studies 1-2), gaze perception (Study 3), and face detection (Study 4). In a first study, we found that adults and 5-12 year-old children tend to categorize angry faces as male (Study 1). Comparing human performance with that of several automatic classifiers suggested that this bias reflects a strategy of using specific features and second-order relationships in the face to categorize gender. The bias was constant across all ages studied and extended to other-race faces, further suggesting that it does not require extensive experience. 
A second set of studies examined whether, in infants, the perception of smiling depends on experience-sensitive, invariant dimensions of the face such as gender and race (Study 2). Indeed, infants are typically most familiar with own-race female faces. The visual preference of 3.5-month-old infants for open-mouth, own-race smiling (versus neutral) faces was restricted to female faces and reversed for male faces. The effect did not replicate with own- or other-race closed-mouth smiles. We attempted to extend these results to an object-referencing task in 3.5-, 9-, and 12-month-olds (Study 3). Objects previously referenced by smiling faces attracted attention similar to objects previously cued by neutral faces, regardless of age group and face gender, and despite differences in gaze following. Finally, we used univariate (face side preference) and multivariate (face-versus-noise decoding evidence) trial-level measures of face detection, coupled with non-linear mixed modelling of psychometric curves, to reveal a detection advantage for fearful faces (compared to smiling faces) embedded in phase-scrambled noise in 3.5-, 6-, and 12-month-old infants (Study 4). The advantage was as or more evident in the youngest group than in the two older age groups. Taken together, these results provide insights into the early ontogeny and underlying cause of gender-emotion relationships in face perception and of the sensitivity to fear.
APA, Harvard, Vancouver, ISO, and other styles
42

Hattiangadi, Nina Uday. "Facial affect processing across a perceptual timeline : a comparison of two models of facial affect processing /." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p3004278.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ersotelos, Nikolaos. "Highly automated method for facial expression synthesis." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/4524.

Full text
Abstract:
The synthesis of realistic facial expressions has been a largely unexplored area for computer graphics scientists. Over the last three decades, several different construction methods have been formulated in order to obtain natural graphic results. Despite these advancements, though, current techniques still require costly resources, heavy user intervention, and specific training, and the outcomes are still not completely realistic. This thesis, therefore, aims to achieve an automated synthesis that will produce realistic facial expressions at a low cost. It proposes a highly automated approach to realistic facial expression synthesis, which allows for enhanced performance in speed (a maximum of 3 minutes of processing time) and quality with a minimum of user intervention. It also demonstrates a highly technical and automated method of facial feature detection, allowing users to obtain their desired facial expression synthesis with minimal physical input. Moreover, it describes a novel approach to normalizing the illumination settings between source and target images, thereby allowing the algorithm to work accurately even in different lighting conditions. Finally, we present the results obtained from the proposed techniques, together with our conclusions.
APA, Harvard, Vancouver, ISO, and other styles
44

Saeed, Anwar Maresh Qahtan [Verfasser]. "Automatic facial analysis methods : facial point localization, head pose estimation, and facial expression recognition / Anwar Maresh Qahtan Saeed." Magdeburg : Universitätsbibliothek, 2018. http://d-nb.info/1162189878/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Zhao, Hui. "Expressive facial animation transfer for virtual actors /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20ZHAO.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Stoyanova, Raliza. "Contextual influences on perception of facial cues." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608041.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Mistry, Kamlesh. "Intelligent facial expression recognition with unsupervised facial point detection and evolutionary feature optimization." Thesis, Northumbria University, 2016. http://nrl.northumbria.ac.uk/36011/.

Full text
Abstract:
Facial expression is one of the most effective channels for conveying emotions and feelings. Many shape-based, appearance-based or hybrid methods for automatic facial expression recognition have been proposed. However, it is still a challenging task to identify emotions from facial images with scaling differences, pose variations, and occlusions. In addition, it is also difficult to identify significant discriminating facial features that could represent the characteristics of each expression, because of the subtlety and variability of facial expressions. In order to deal with the above challenges, this research proposes two novel approaches: unsupervised facial point detection and texture-based facial expression recognition with feature optimisation. First of all, unsupervised automatic facial point detection integrated with regression-based intensity estimation for facial Action Units (AUs) and emotion clustering is proposed to deal with challenges such as scaling differences, pose variations, and occlusions. The proposed facial point detector can detect 54 facial points in images of faces with occlusions, pose variations and scaling differences. We conduct AU intensity estimation using support vector regression and neural networks, respectively, for 18 selected AUs. Fuzzy C-Means (FCM) clustering is also subsequently employed to recognise seven basic emotions as well as neutral expressions. It also shows great potential for dealing with compound emotions and the detection of newly arrived novel emotion classes. The second proposed system focuses on a texture-based approach for facial expression recognition by proposing a novel variant of the local binary pattern for discriminative feature extraction and Particle Swarm Optimization (PSO)-based feature optimisation. Multiple classifiers are applied for recognising seven facial expressions. Finally, evaluations are conducted to show the efficiency of the above two proposed systems. 
Evaluated on the well-known Helen, Labelled Faces in the Wild, PUT, and CK+ facial databases, the proposed unsupervised facial point detector outperforms other supervised landmark detection models dramatically and shows excellent robustness and capability in dealing with rotations, occlusions and illumination changes. Moreover, a comprehensive evaluation is also conducted for the proposed texture-based facial expression recognition with mGA-embedded PSO feature optimisation. Evaluated on the CK+ and MMI benchmark databases, the experimental results indicate that it outperforms other state-of-the-art metaheuristic search methods and facial emotion recognition research reported in the literature by a significant margin.
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Jing. "Reconstruction and Analysis of 3D Individualized Facial Expressions." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32588.

Full text
Abstract:
This thesis proposes a new way to analyze facial expressions through 3D scanned faces of real-life people. The expression analysis is based on learning the facial motion vectors that are the differences between a neutral face and a face with an expression. There are several expression analyses based on real-life face databases, such as the 2D image-based Cohn-Kanade AU-Coded Facial Expression Database and the Binghamton University 3D Facial Expression Database. To handle large pose variations and increase the general understanding of facial behavior, a 2D image-based expression database is not enough. The Binghamton University 3D Facial Expression Database is mainly used for facial expression recognition, and it is difficult to compare, resolve, and extend the problems related to detailed 3D facial expression analysis. Our work aims to find a new and intuitive way of visualizing the detailed point-by-point movements of a 3D face model for a facial expression. In our work, we have created our own 3D facial expression database at a detailed level, in which each expression model has been processed to have the same structure, so that differences between different people can be compared for a given expression. The first step is to obtain face models with the same structure but individual shapes. All the head models are recreated by deforming a generic model to adapt to a laser-scanned individualized face shape at both a coarse level and a fine level. We repeat this recreation method on different human subjects to establish a database. The second step is expression cloning. The motion vectors are obtained by subtracting two head models with/without an expression. The extracted facial motion vectors are applied onto a different human subject's neutral face. Facial expression cloning proves to be robust and fast as well as easy to use. The last step is analyzing the facial motion vectors obtained from the second step. First we transferred several human subjects' expressions onto a single human neutral face. 
Then the analysis is done to compare different expression pairs in two main regions: whole-face surface analysis and facial muscle analysis. Through our work, in which smiling was chosen for the experiment, we find our scanning-based analysis approach a good way to visualize how differently people move their facial muscles for the same expression. People smile in a similar manner, moving their mouths and cheeks in similar orientations, but each person shows her/his own unique way of moving. The differences between individual smiles lie in the differences in the movements they make.
APA, Harvard, Vancouver, ISO, and other styles
49

Zhao, Xi. "3D face analysis : landmarking, expression recognition and beyond." Phd thesis, Ecole Centrale de Lyon, 2010. http://tel.archives-ouvertes.fr/tel-00599660.

Full text
Abstract:
This Ph.D thesis work is dedicated to automatic facial analysis in 3D, including facial landmarking and facial expression recognition. Indeed, facial expression plays an important role both in verbal and non-verbal communication, and in expressing emotions. Thus, automatic facial expression recognition has various purposes and applications and in particular is at the heart of "intelligent" human-centered human/computer (robot) interfaces. Meanwhile, automatic landmarking provides prior knowledge on the location of face landmarks, which is required by many face analysis methods such as face segmentation and feature extraction used, for instance, for expression recognition. The purpose of this thesis is thus to elaborate 3D landmarking and facial expression recognition approaches, finally proposing an automatic facial activity (facial expression and action unit) recognition solution. In this work, we have proposed a Bayesian Belief Network (BBN) for recognizing facial activities, such as facial expressions and facial action units. A Statistical Facial feAture Model (SFAM) has also been designed to first automatically locate face landmarks, so that a fully automatic facial expression recognition system can be formed by combining the SFAM and the BBN. The key contributions are the following. First, we have proposed to build a morphable partial face model, named SFAM, based on Principal Component Analysis. This model allows learning both the global variations in face landmark configuration and the local ones in terms of texture and local geometry around each landmark. Various partial face instances can be generated from SFAM by varying model parameters. Secondly, we have developed a landmarking algorithm based on the minimization of an objective function describing the correlation between model instances and query faces. Thirdly, we have designed a Bayesian Belief Network with a structure describing the causal relationships among subjects, expressions and facial features. 
Facial expressions or action units are modelled as the states of the expression node and are recognized by identifying the maximum of the beliefs of all states. We have also proposed a novel method for BBN parameter inference using a statistical feature model that can be considered an extension of SFAM. Finally, in order to enrich the information used for 3D face analysis, and particularly 3D facial expression recognition, we have also elaborated a 3D face feature, named SGAND, to characterize the geometry of a point on a 3D face mesh using its surrounding points. The effectiveness of all these methods has been evaluated on the FRGC, BU3DFE and Bosphorus datasets for facial landmarking, as well as the BU3DFE and Bosphorus datasets for facial activity (expression and action unit) recognition.
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Shihai. "Boosting learning applied to facial expression recognition." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.511940.

Full text
APA, Harvard, Vancouver, ISO, and other styles