Data-Free Adversarial Perturbations for Practical Black-Box Attack
Neural networks are vulnerable to adversarial examples, which are malicious inputs crafted to fool pre-trained models. Adversarial examples often exhibit black-box attacking transferability, meaning that adversarial examples crafted for one model can also fool another model. However, existing black-...
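The transferability the abstract describes rests on gradient-based perturbations. A minimal sketch of the classic fast gradient sign method (FGSM), not the paper's data-free method, on a toy logistic model; the weights, input, and epsilon here are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    # Logistic function: maps a score to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, eps):
    # FGSM: step the input by eps in the direction of the sign of the
    # loss gradient w.r.t. x. For logistic loss, d(loss)/dx = (p - y) * w.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and input (assumed values for illustration).
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([2.0, 0.5])   # clean input, classified as class 1
y = 1.0                    # true label

x_adv = fgsm(x, w, b, y, eps=0.8)
print(sigmoid(w @ x + b) > 0.5)      # clean input: predicted class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # perturbed input: prediction flips
```

The perturbation is small per coordinate but aligned with the loss gradient, which is what makes it effective; transferability arises because different models trained on similar data tend to share gradient directions.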
| Published in: | Advances in Knowledge Discovery and Data Mining |
|---|---|
| Main Authors: | |
| Format: | Article |
| Language: | English |
| Published: | 2020 |
| Subjects: | |
| Online Access: | https://ncbi.nlm.nih.gov/pmc/articles/PMC7206253/ http://dx.doi.org/10.1007/978-3-030-47436-2_10 |