
Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations.

While the need for interpretable machine learning has been established, many common approaches are slow, lack fidelity, or are hard to evaluate. Amortized explanation methods reduce the cost of providing interpretations by learning a global selector model that returns feature importances for a single in...

Detailed Description

Bibliographic Details
Published in: Proc Mach Learn Res
Main Authors: Jethani, Neil, Sudarshan, Mukund, Aphinyanaphongs, Yindalon, Ranganath, Rajesh
Format: Article
Language: English
Published: 2021
Online Access: https://ncbi.nlm.nih.gov/pmc/articles/PMC8096519/
https://ncbi.nlm.nih.gov/pubmed/33954293