Facial Emotions Are Accurately Encoded in the Neural Signal of Those With Autism Spectrum Disorder: A Deep Learning Approach

ABSTRACT:

Background

Individuals with autism spectrum disorder (ASD) exhibit frequent behavioral deficits in facial emotion recognition (FER). It remains unknown whether these deficits arise because facial emotion information is not encoded in their neural signal or because it is encoded but fails to translate to FER behavior (deployment). This distinction has functional implications, including constraining when differences in social information processing occur in ASD, and guiding interventions (i.e., developing prosthetic FER vs. reinforcing existing skills).

Methods

We utilized a discriminative and contemporary machine learning approach—deep convolutional neural networks—to classify facial emotions viewed by individuals with and without ASD (N = 88) from concurrently recorded electroencephalography signals.
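For readers unfamiliar with this class of model, the sketch below illustrates the general idea of classifying emotion labels from EEG epochs with a convolutional network. It is a minimal sketch, not the architecture reported here: the channel count (64), sampling rate (500 Hz), 1.5 s epoch length, filter sizes, and four-class label set are all illustrative assumptions.

```python
# Minimal, illustrative CNN for EEG-based emotion classification (PyTorch).
# All shapes and hyperparameters below are assumptions for illustration,
# not the network used in the study.
import torch
import torch.nn as nn

class EEGEmotionCNN(nn.Module):
    def __init__(self, n_channels=64, n_samples=750, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution: learns frequency-like filters per channel.
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            # Spatial convolution: mixes information across all EEG channels.
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
            nn.Dropout(0.5),
        )
        # Infer the flattened feature size with a dummy forward pass.
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):
        # x: (batch, 1, n_channels, n_samples) -> (batch, n_classes) logits
        return self.classifier(self.features(x).flatten(start_dim=1))

model = EEGEmotionCNN()
logits = model(torch.randn(8, 1, 64, 750))  # 8 example 1.5 s epochs
print(logits.shape)  # torch.Size([8, 4])
```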

Results

The convolutional neural network classified facial emotions with high accuracy for both ASD and non-ASD groups, even though individuals with ASD performed more poorly on the concurrent FER task. In fact, convolutional neural network accuracy was greater in the ASD group and was not related to behavioral performance. This pattern of results replicated across three independent participant samples. Moreover, feature importance analyses suggested that a late temporal window of neural activity (1000–1500 ms) may be uniquely important in facial emotion classification for individuals with ASD.
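Temporal feature importance can be probed in several ways; one simple approach, sketched below under the assumption of a model like the one above (1.5 s epochs at 500 Hz), is temporal occlusion: zero out successive time windows of each EEG epoch and measure the resulting drop in classification accuracy. This is an illustrative technique, not necessarily the analysis used in the study.

```python
# Illustrative temporal-occlusion importance analysis (an assumption, not
# necessarily the authors' method): windows whose occlusion causes the
# largest accuracy drop are inferred to carry the most class information.
import torch

def temporal_occlusion_importance(model, x, y, window_len=250, fs=500):
    """Zero out successive time windows and report the accuracy drop per window.

    x: (batch, 1, channels, samples) EEG epochs; y: (batch,) integer labels.
    window_len: occlusion width in samples (250 samples = 500 ms at 500 Hz).
    Returns a list of (window_start_seconds, accuracy_drop) pairs.
    """
    model.eval()
    with torch.no_grad():
        baseline = (model(x).argmax(dim=1) == y).float().mean().item()
        importances = []
        n_samples = x.shape[-1]
        for start in range(0, n_samples, window_len):
            x_occ = x.clone()
            x_occ[..., start:start + window_len] = 0.0
            acc = (model(x_occ).argmax(dim=1) == y).float().mean().item()
            importances.append((start / fs, baseline - acc))
    return importances

# Example with the EEGEmotionCNN sketch above: three 500 ms windows spanning
# 0-500, 500-1000, and 1000-1500 ms of the epoch.
# scores = temporal_occlusion_importance(model, x, y)
```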

Conclusions

Our results reveal for the first time that facial emotion information is encoded in the neural signal of individuals with (and without) ASD. Thus, observed difficulties in behavioral FER associated with ASD likely arise from difficulties in decoding or deployment of facial emotion information within the neural signal. Interventions should focus on capitalizing on this intact encoding rather than promoting compensation or FER prostheses.
