Regional Saliency Map Attack for Medical Image Segmentation
Published in 2022 IEEE International Conference on Image Processing (ICIP)
State-of-the-art Deep Neural Networks (DNNs) are advancing medical image processing. However, DNNs are susceptible to adversarial attacks, which can significantly degrade model performance and pose a threat to clinical diagnosis. A common defence is adversarial training, which depends heavily on harvesting adversarial examples during the training stage. However, the adversarial medical examples generated by many existing works carry perturbations too perceptible to serve as convincing adversarial examples. To improve imperceptibility, we propose a Regional Saliency Map Attack that generates an adversarial example by perturbing only a small number of pixels. Extensive experiments show that, on average, our method causes the same degradation in model performance with quantitatively less perceptible perturbations. Visualisations further verify that the improvement in imperceptibility within an image is both global and regional.
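The paper itself details the full algorithm; as a rough illustration of the general idea of a saliency-guided sparse attack, here is a minimal PyTorch sketch that perturbs only the most salient pixels of a segmentation input. This is an assumption-based sketch, not the paper's exact method: the function name, the cross-entropy segmentation loss, and the `top_k` / `epsilon` parameters are all illustrative.

```python
# Minimal sketch of a saliency-guided sparse attack (illustrative only; not the
# paper's exact Regional Saliency Map Attack). `top_k` and `epsilon` are
# hypothetical parameters chosen for demonstration.
import torch
import torch.nn.functional as F

def saliency_sparse_attack(model, image, target_mask, top_k=100, epsilon=0.03):
    """Perturb only the top_k pixels with the largest gradient magnitude."""
    image = image.clone().detach().requires_grad_(True)

    # Segmentation loss of the current prediction against the ground-truth mask.
    logits = model(image)                      # (N, C, H, W) class scores
    loss = F.cross_entropy(logits, target_mask)
    loss.backward()

    grad = image.grad.detach()
    # Pixel-wise saliency: gradient magnitude summed over input channels.
    saliency = grad.abs().sum(dim=1, keepdim=True)  # (N, 1, H, W)

    # Binary mask selecting the top_k most salient pixel locations per image.
    flat = saliency.flatten(start_dim=1)
    k = min(top_k, flat.shape[1])
    kth_value = flat.topk(k, dim=1).values[:, -1].view(-1, 1, 1, 1)
    pixel_mask = (saliency >= kth_value).float()

    # One gradient-ascent step restricted to the selected pixels.
    adv = image.detach() + epsilon * grad.sign() * pixel_mask
    return adv.clamp(0.0, 1.0)
```

Restricting the perturbation to a saliency-ranked subset of pixels is what keeps the attack sparse and hence less perceptible; an iterative variant would repeat the gradient and selection steps over several rounds.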
Recommended citation: P. Yang and H. Wang, "Regional Saliency Map Attack for Medical Image Segmentation," 2022 IEEE International Conference on Image Processing (ICIP), 2022, pp. 846-850, doi: 10.1109/ICIP46576.2022.9898066. https://doi.org/10.1109/ICIP46576.2022.9898066