Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand?

Abstract
Language models trained on billions of tokens have recently led to unprecedented results on many NLP tasks. This success raises the question of whether, in principle, a system can ever "understand" raw text without access to some form of grounding. We formally investigate the...

Bibliographic details
Main authors: William Merrill, Yoav Goldberg, Roy Schwartz, Noah A. Smith
Format: Article
Language: English
Published: The MIT Press 2021-01-01
Collection: Transactions of the Association for Computational Linguistics
Online access: https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00412/107385/Provable-Limitations-of-Acquiring-Meaning-from