BlackboxBench
What is BlackboxBench?
BlackboxBench is a comprehensive benchmark containing mainstream adversarial black-box attack methods. It can be used to evaluate the adversarial robustness of ML models, or as a baseline for developing more advanced attack and defense methods. We mainly provide:
- Easy implementations: we provide the implementations of 15 query-based black-box attack methods, covering both score-based and decision-based attacks (a minimal sketch of the score-based setting appears after this list):
- 7 score-based attacks: NES, ZOSignSGD, Bandit-prior, ECO attack, SimBA, SignHunter, Square attack.
- 8 decision-based attacks: Boundary attack, OPT attack, Sign-OPT, Evolutionary attack, GeoDA, HSJA, Sign Flip, RayS.
- A public leaderboard: we evaluate the above attack methods against several undefended and defended deep models on two widely used datasets (CIFAR-10 and ImageNet).
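To make the query-based setting concrete: a score-based attack may only observe the model's output scores for each query, while a decision-based attack sees only the predicted label. Below is a minimal sketch, in the style of NES gradient estimation, of how a score-based attack works; the query_probs callable, the hyperparameters, and all names are illustrative assumptions and do not correspond to BlackboxBench's actual API.

import numpy as np

def nes_linf_attack(x, y, query_probs, eps=8/255, step=2/255,
                    n_samples=50, sigma=0.001, n_iters=100):
    """Minimal NES-style score-based attack sketch (hypothetical API).

    query_probs(batch) -> (batch, n_classes) softmax scores; this is the
    attacker's only access to the model (no gradients), mimicking the
    score-based black-box setting.
    """
    x_adv = x.copy()
    for _ in range(n_iters):
        # Antithetic Gaussian sampling for gradient estimation.
        noise = np.random.randn(n_samples // 2, *x.shape)
        noise = np.concatenate([noise, -noise], axis=0)
        queries = np.clip(x_adv + sigma * noise, 0.0, 1.0)
        probs = query_probs(queries)                       # model queries
        losses = -np.log(probs[:, y] + 1e-12)              # untargeted CE loss
        # NES gradient estimate: weight each noise direction by its loss.
        grad = (losses.reshape(-1, *([1] * x.ndim)) * noise).mean(axis=0) / sigma
        # Sign-gradient ascent step, projected back into the L-inf eps-ball.
        x_adv = x_adv + step * np.sign(grad)
        x_adv = np.clip(np.clip(x_adv, x - eps, x + eps), 0.0, 1.0)
        if query_probs(x_adv[None])[0].argmax() != y:      # early exit on success
            break
    return x_adv

The other score-based attacks listed above differ mainly in how they propose and evaluate perturbations, while decision-based attacks replace the score-based loss with label-only feedback (e.g., a random walk along the decision boundary, as in the Boundary attack).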
This benchmark will be continuously updated to track the latest advances in black-box attacks, including implementations of more (query-based and transfer-based) black-box attack and defense methods, as well as their evaluations in the leaderboard. You are welcome to contribute your black-box methods to BlackboxBench.
About Us
This benchmark is built by the Secure Computing Lab of Big Data (SCLBD) at The Chinese University of Hong Kong, Shenzhen, directed by Professor Baoyuan Wu. SCLBD focuses on research on trustworthy AI, including backdoor learning, adversarial examples, federated learning, fairness, etc.
Related Work
If you are interested, you can read our recent works on black-box attack and defense methods below; more of our work on trustworthy AI can be found here.
@inproceedings{cgattack-cvpr2022,
title={Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution},
author={Feng, Yan and Wu, Baoyuan and Fan, Yanbo and Liu, Li and Li, Zhifeng and Xia, Shutao},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2022}
}
@article{rnd-blackbox-defense-nips2021,
title={Random Noise Defense Against Query-Based Black-Box Attacks},
author={Qin, Zeyu and Fan, Yanbo and Zha, Hongyuan and Wu, Baoyuan},
journal={Advances in Neural Information Processing Systems},
volume={34},
year={2021}
}
@inproceedings{liang2021parallel,
title={Parallel Rectangle Flip Attack: A Query-Based Black-Box Attack Against Object Detection},
author={Liang, Siyuan and Wu, Baoyuan and Fan, Yanbo and Wei, Xingxing and Cao, Xiaochun},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={7697--7707},
year={2021}
}
@inproceedings{chen2020boosting,
title={Boosting decision-based black-box adversarial attacks with random sign flip},
author={Chen, Weilun and Zhang, Zhaoxiang and Hu, Xiaolin and Wu, Baoyuan},
booktitle={European Conference on Computer Vision},
pages={276--293},
year={2020},
organization={Springer}
}
@inproceedings{evolutionary-blackbox-attack-cvpr2019,
title={Efficient decision-based black-box adversarial attacks on face recognition},
author={Dong, Yinpeng and Su, Hang and Wu, Baoyuan and Li, Zhifeng and Liu, Wei and Zhang, Tong and Zhu, Jun},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={7714--7722},
year={2019}
}
Contact
If you are interested in contributing your black-box methods to BlackboxBench, or have any questions or suggestions, please feel free to contact us at wubaoyuan@cuhk.edu.cn.
Website
https://blackboxbench.github.io/