
DEEP MULTIMODAL LEARNING FOR EMOTION RECOGNITION IN SPOKEN LANGUAGE

In this paper, we present a novel deep multimodal framework to predict human emotions based on sentence-level spoken language. Our architecture has two distinctive characteristics. First, it extracts the high-level features from both text and audio via a hybrid deep multimodal structure, which consi...


Bibliographic Details
Published in: Proc IEEE Int Conf Acoust Speech Signal Process
Main Authors: Gu, Yue; Chen, Shuhong; Marsic, Ivan
Format: Article
Language: English
Published: 2018
Online Access: https://ncbi.nlm.nih.gov/pmc/articles/PMC6261381/
https://ncbi.nlm.nih.gov/pubmed/30505240
http://dx.doi.org/10.1109/ICASSP.2018.8462440