Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations.
While the need for interpretable machine learning has been established, many common approaches are slow, lack fidelity, or are hard to evaluate. Amortized explanation methods reduce the cost of providing interpretations by learning a global selector model that returns feature importances for a single in...
| Published: | Proc Mach Learn Res |
|---|---|
| Main authors: | |
| Format: | Article |
| Language: | English |
| Publication details: | 2021 |
| Online access: | https://ncbi.nlm.nih.gov/pmc/articles/PMC8096519/ https://ncbi.nlm.nih.gov/pubmed/33954293 |