Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand?

Abstract: Language models trained on billions of tokens have recently led to unprecedented results on many NLP tasks. This success raises the question of whether, in principle, a system can ever “understand” raw text without access to some form of grounding. We formally investigate the...

Bibliographic Details
Main Authors: William Merrill, Yoav Goldberg, Roy Schwartz, Noah A. Smith
Format: Article
Language: English
Published: The MIT Press 2021-01-01
Collection: Transactions of the Association for Computational Linguistics
Online Access: https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00412/107385/Provable-Limitations-of-Acquiring-Meaning-from