Clean-label Backdoor Attack
1 paper with code • 1 benchmark • 1 dataset
Clean-label backdoor attacks poison a small fraction of the training data without altering any labels, so that a model trained on the poisoned set misclassifies test examples into an attacker-chosen target class whenever they are patched with a backdoor trigger.
Most implemented papers
Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information
With poisoning equal to or less than 0.5% of the target-class data and 0.05% of the training set, we can train a model to classify test examples from arbitrary classes into the target class when the examples are patched with a backdoor trigger.
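The trigger-patching step described above can be sketched generically. The example below is a minimal illustration of pasting a fixed trigger patch into a corner of an image at test time; it is not the Narcissus method, whose trigger is optimized rather than hand-crafted, and the patch size and placement here are arbitrary assumptions.

```python
import numpy as np

def apply_trigger(image: np.ndarray, trigger: np.ndarray) -> np.ndarray:
    """Paste a small trigger patch into the bottom-right corner of an image.

    `image` is an HxWxC array; `trigger` is a smaller hxwxC patch.
    This is a generic sketch of backdoor trigger patching, not the
    learned trigger used by Narcissus.
    """
    patched = image.copy()
    h, w = trigger.shape[:2]
    patched[-h:, -w:, :] = trigger
    return patched

# Hypothetical example: a 32x32 RGB image and a 4x4 white trigger.
image = np.zeros((32, 32, 3), dtype=np.float32)
trigger = np.ones((4, 4, 3), dtype=np.float32)
patched = apply_trigger(image, trigger)
```

At attack time, an adversary would feed `patched` (rather than `image`) to the backdoored model to flip its prediction to the target class.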