Adversarial Analysis for Source Camera Identification | IEEE Journals & Magazine | IEEE Xplore


Abstract:

Recent studies highlight the vulnerability of convolutional neural networks (CNNs) to adversarial attacks, which also calls into question the reliability of forensic methods. Existing adversarial attacks generate one-to-one noise, meaning these methods have not learned the fingerprint information. We therefore introduce two powerful attacks: a fingerprint copy-move attack and a joint feature-based auto-learning attack. To validate the performance of the attack methods, we go a step further and introduce a stronger possible defense mechanism, relation mismatch, which expands the characterization differences between classifiers in the same classification network. Extensive experiments show that relation mismatch is superior in recognizing adversarial examples and confirm that the proposed fingerprint-based attacks are more powerful. Both proposed attacks show excellent attack transferability to unknown samples. The PyTorch implementations of these methods can be downloaded from the open-source GitHub project https://github.com/Dlut-lab-zmn/Source-attack.
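The abstract contrasts the proposed fingerprint-based attacks with existing attacks that generate "one-to-one" noise, i.e., a perturbation computed per input image rather than learned fingerprint information. A minimal sketch of such a one-to-one, gradient-sign (FGSM-style) perturbation is shown below for a toy logistic classifier; this is a standard baseline for illustration, not the paper's method, and all names here are hypothetical.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps=0.1):
    """One-to-one adversarial noise in the FGSM style (illustrative only).

    For a logistic classifier p = sigmoid(w.x + b) with cross-entropy loss,
    the gradient of the loss w.r.t. the input is (p - y) * w; stepping in
    its sign direction increases the loss for this single input x.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted probability
    grad_x = (p - y) * w                     # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

# Toy check: perturbing against label y=1 lowers the predicted probability.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
x = rng.normal(size=4)
p_before = 1.0 / (1.0 + np.exp(-(x @ w)))
x_adv = fgsm_perturb(x, w, 0.0, y=1.0, eps=0.5)
p_after = 1.0 / (1.0 + np.exp(-(x_adv @ w)))
```

Because the noise depends on the individual input `x`, it captures nothing reusable about the source camera's fingerprint, which is the limitation the proposed attacks are designed to overcome.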
Page(s): 4174 - 4186
Date of Publication: 24 December 2020



