Reducing Adversarial Vulnerability through Adaptive Training Batch Size

Authors

  • Ken Sasongko, Universitas Indonesia
  • Adila Alfa Krisnadhi, Universitas Indonesia
  • Mohamad Ivan Fanany

DOI:

https://doi.org/10.21609/jiki.v14i1.907

Keywords:

adversarial examples, batch normalization, fixup initialization, batch size variation

Abstract

Neural networks are able to generalize well to the data distribution, to the extent that they can even fit randomly labeled data. However, they are also known to be extremely sensitive to adversarial examples. Batch Normalization (BatchNorm), a common component of deep learning architectures, has been found to increase adversarial vulnerability. Fixup Initialization (Fixup Init) has been shown to be an alternative to BatchNorm that considerably strengthens networks against adversarial examples. This robustness can be improved further by training with a smaller batch size. The latter, however, comes with a significant increase in training time (up to ten times longer when the batch size is reduced from the default 128 to 8 for ResNet-56). In this paper, we propose a workaround to this problem: start training with a small batch size and gradually increase it during training. We empirically show that our approach still improves the adversarial robustness of ResNet-56 with Fixup Init (by up to 5.73%) over training with the default batch size of 128, while keeping the training time considerably shorter (only 4 times longer than the default, instead of 10 times).
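For illustration, the core idea of the abstract (train with a small batch size first, then grow it) can be sketched as a simple schedule that rebuilds the data loader at chosen epochs. This is a minimal sketch assuming a PyTorch training loop; the epoch boundaries, batch sizes, dummy dataset, and model below are illustrative assumptions, not the exact schedule or setup used in the paper.

```python
# Minimal sketch of an adaptive training batch size schedule.
# NOTE: the schedule values, dataset, and model are hypothetical placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy 32x32 image dataset standing in for the real training data.
data = TensorDataset(torch.randn(1024, 3, 32, 32), torch.randint(0, 10, (1024,)))
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()

# Start small and grow toward the default batch size as training progresses.
schedule = {0: 8, 10: 32, 20: 128}  # epoch -> batch size (assumed values)

loader = None
for epoch in range(30):
    if epoch in schedule:
        # Rebuild the DataLoader whenever the schedule changes the batch size.
        loader = DataLoader(data, batch_size=schedule[epoch], shuffle=True)
    for inputs, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
```

Starting with small batches aims to capture their robustness benefit early, while switching to larger batches later recovers most of the training-time advantage of the default setting.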

Author Biographies

Ken Sasongko, Universitas Indonesia

Faculty of Computer Science

Adila Alfa Krisnadhi, Universitas Indonesia

Faculty of Computer Science

Published

2021-02-28

How to Cite

Sasongko, K., Krisnadhi, A. A., & Fanany, M. I. (2021). Reducing Adversarial Vulnerability through Adaptive Training Batch Size. Jurnal Ilmu Komputer Dan Informasi, 14(1), 27–37. https://doi.org/10.21609/jiki.v14i1.907