Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations.

While the need for interpretable machine learning has been established, many common approaches are slow, lack fidelity, or are hard to evaluate. Amortized explanation methods reduce the cost of providing interpretations by learning a global selector model that returns feature importances for a single in...

Bibliographic details
Published in: Proc Mach Learn Res
Main Authors: Jethani, Neil, Sudarshan, Mukund, Aphinyanaphongs, Yindalon, Ranganath, Rajesh
Format: Article
Language: English
Published: 2021
Subjects:
Online access: https://ncbi.nlm.nih.gov/pmc/articles/PMC8096519/
https://ncbi.nlm.nih.gov/pubmed/33954293