Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand?

Abstract: Language models trained on billions of tokens have recently led to unprecedented results on many NLP tasks. This success raises the question of whether, in principle, a system can ever "understand" raw text without access to some form of grounding. We formally investigate the...

Description

Bibliographic Details
Main Authors: William Merrill, Yoav Goldberg, Roy Schwartz, Noah A. Smith
Format: Article
Language: English
Published: The MIT Press, 2021-01-01
Series: Transactions of the Association for Computational Linguistics
Online Access: https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00412/107385/Provable-Limitations-of-Acquiring-Meaning-from