
The main purpose of the present study is to investigate the capacity of schizotypy and alexithymia traits, in combination with affectivity, to predict facial emotion recognition capability in a sample of nonclinical adults. Consecutive healthy participants (N = 98) were assessed with the Toronto Alexithymia Scale-20 (TAS-20), the Oxford-Liverpool Inventory of Feelings and Experiences-Reduced Version (O-LIFE-R), and the Positive and Negative Affect Schedule (PANAS). A set of validated photographs (static images) and virtual faces (dynamic images) presenting the basic emotions was used to assess emotion recognition. Pearson correlations were computed to examine the relationships between the study variables, and linear regression models were used to estimate the amount of variance in emotion recognition capability predicted by the O-LIFE-R, the TAS-20, and the PANAS. Results showed that alexithymia was strongly associated with schizotypy and negative affect; furthermore, alexithymia and negative affect made a significant contribution to the prediction of emotion recognition capability. The predictive model was fitted for both types of presentation (photographs and virtual reality). The inclusion of virtual faces responds to the need to consider computer-generated characters as new assessment and treatment material for research and therapy in psychology. © 2013, Facultat de Psicologia.
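As a minimal sketch of the analysis pipeline the abstract describes (Pearson correlations followed by a linear regression predicting emotion recognition from TAS-20, O-LIFE-R, and PANAS scores), the following Python fragment uses entirely synthetic, illustrative data; the variable names, score ranges, and effect sizes are assumptions, not the study's actual data or results.

```python
import numpy as np

# Synthetic, illustrative data only -- NOT the study's data.
rng = np.random.default_rng(0)
n = 98  # sample size reported in the abstract

tas20 = rng.normal(45, 10, n)                    # alexithymia (TAS-20 total)
olife_r = 0.5 * tas20 + rng.normal(0, 8, n)      # schizotypy (O-LIFE-R), made correlated
neg_affect = 0.4 * tas20 + rng.normal(0, 7, n)   # PANAS negative affect
emo_recog = 80 - 0.3 * tas20 - 0.2 * neg_affect + rng.normal(0, 5, n)

# Pearson correlation between alexithymia and schizotypy
r = np.corrcoef(tas20, olife_r)[0, 1]

# Ordinary least squares: emo_recog ~ tas20 + olife_r + neg_affect
X = np.column_stack([np.ones(n), tas20, olife_r, neg_affect])
beta, *_ = np.linalg.lstsq(X, emo_recog, rcond=None)

# Proportion of variance in emotion recognition explained by the model (R^2)
pred = X @ beta
r2 = 1 - np.sum((emo_recog - pred) ** 2) / np.sum((emo_recog - emo_recog.mean()) ** 2)
```

In the study itself this kind of model is what supports the claim that alexithymia and negative affect contribute significantly to predicting emotion recognition; here the coefficients merely recover the synthetic relationships built into the fake data.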


Journal article

Anuario de Psicologia

Publication Date

Pages
7 - 21