DEEP MULTIMODAL LEARNING FOR EMOTION RECOGNITION IN SPOKEN LANGUAGE

In this paper, we present a novel deep multimodal framework to predict human emotions based on sentence-level spoken language. Our architecture has two distinctive characteristics. First, it extracts the high-level features from both text and audio via a hybrid deep multimodal structure, which consi...
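The abstract describes extracting high-level features from both text and audio and combining them in a hybrid multimodal structure. As a minimal illustrative sketch only (not the authors' actual architecture; the concatenation-based fusion, layer shapes, and class count here are all assumptions), sentence-level multimodal emotion classification can be outlined as:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_and_classify(text_feat, audio_feat, W, b):
    """Fuse per-sentence text and audio feature vectors by
    concatenation, then apply a linear emotion classifier.
    All dimensions are illustrative, not from the paper."""
    fused = np.concatenate([text_feat, audio_feat])  # (d_text + d_audio,)
    return softmax(W @ fused + b)                    # (n_classes,) probabilities

# Hypothetical dimensions: 8-dim text features, 6-dim audio
# features, 4 emotion classes.
rng = np.random.default_rng(0)
d_text, d_audio, n_classes = 8, 6, 4
W = rng.normal(size=(n_classes, d_text + d_audio))
b = np.zeros(n_classes)
probs = fuse_and_classify(rng.normal(size=d_text),
                          rng.normal(size=d_audio), W, b)
```

In practice each modality's feature vector would come from its own deep encoder; this sketch only shows the fusion and classification step.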

Bibliographic Details
Published in: Proc IEEE Int Conf Acoust Speech Signal Process
Main Authors: Gu, Yue, Chen, Shuhong, Marsic, Ivan
Format: Article
Language: English
Published: 2018
Online Access: https://ncbi.nlm.nih.gov/pmc/articles/PMC6261381/
https://ncbi.nlm.nih.gov/pubmed/30505240
http://dx.doi.org/10.1109/ICASSP.2018.8462440