
Social Media Authentication and Combating Deepfakes Using Semi-Fragile Invisible Image Watermarking

Published: 09 December 2024

Abstract

With the significant advances in deep generative models for image and video synthesis, Deepfakes and manipulated media have raised severe societal concerns. Conventional machine learning classifiers for deepfake detection often fail to cope with evolving deepfake generation technology and are susceptible to adversarial attacks. Alternatively, invisible image watermarking is being researched as a proactive defense technique that allows media authentication by verifying an invisible secret message embedded in the image pixels. The handful of invisible image watermarking techniques introduced for media authentication have proven vulnerable to basic image processing operations and watermark removal attacks. In response, we propose a semi-fragile image watermarking technique that embeds an invisible secret message into real images for media authentication. Our proposed watermarking framework is designed to be fragile to facial manipulations or tampering while being robust to benign image-processing operations and watermark removal attacks. This is facilitated through a unique architecture consisting of critic and adversarial networks that enforce high image quality and resiliency to watermark removal efforts, respectively, along with the backbone encoder-decoder and discriminator networks. This allows images shared over the Internet to retain the verifiable watermark as long as facial manipulation or any other Deepfake modification technique is not applied. Thorough experimental investigations on SOTA facial Deepfake datasets demonstrate that our proposed model can embed a \(64\)-bit secret as an imperceptible image watermark that can be recovered with high bit recovery accuracy when benign image processing operations are applied, while being non-recoverable when unseen Deepfake manipulations are applied. In addition, our proposed watermarking technique demonstrates high resilience to several white-box and black-box watermark removal attacks, thus obtaining state-of-the-art performance.

1 Introduction

Media authentication refers to the process of verifying the authenticity and integrity of digital media such as images, videos, audio recordings, or text documents [18]. With the advancement in generative models, combined with the widespread availability of vast datasets, there is a rise in digital manipulation tools and techniques that have enabled the creation of high-quality and convincing AI-generated synthetic media (such as face, audio, and text) known as Deepfakes [42, 59, 63]. Apart from many creative and artistic uses of Deepfakes [11], many harmful uses range from non-consensual pornography to disinformation campaigns intended to sow civil unrest and disrupt democratic elections. Deepfakes have been flagged as a top AI threat to society [30, 42].
In this context, several deepfake generation techniques based on facial manipulation (forgery) have been proposed [42, 63]. These facial manipulations or forgery techniques depict human subjects with altered identities (identity swap), attributes, or malicious actions and expressions (face reenactment) in a given image or video. Specifically, identity or face swapping is the task of transferring a face from the source to the target image [43]. Attribute manipulation [13, 24] is a fine-grained facial manipulation obtained by modifying simple attributes (e.g., hair color, skin tone, and gender). Similarly to identity swap, face reenactment [43] involves a facial expression swap between source and target facial images. These facial manipulation tools are easily abused by malicious users, with little to no technical knowledge, to manipulate facial images of the user, resulting in a threat to privacy, reputation, and security. In fact, several smartphone-based applications have such attribute modifications in the form of filters. For example, FaceApp,1 a popular smartphone application, modifies an uploaded image based on the selected attribute that can be edited using a slider to regulate the magnitude of the change. The entire process of facial modification can be easily accomplished within five minutes using these applications or other pretrained models available in the online repositories.
Consequently, every year the volume of facial Deepfakes on social media has witnessed a significant rise. For instance, in \(2023\) alone, about 500,000 deepfake videos were added to social media, marking a substantial rise from previous years. In \(2021\), there were approximately 14,678 deepfake videos online, which itself was double the number from \(2018\) [65].
With this staggering growth of facial manipulation-based deepfake content on social media, it has become increasingly important to ensure the media’s authenticity against malicious tampering. The classical forensic approach for media authentication against facial manipulation includes running an automated deepfake detector [16, 31, 38]. Common Deepfake detectors include pre-trained machine learning-based binary baselines that aim to distinguish between real and deepfake data based on visual artifacts, blending boundaries, attention modules, and motion analysis [14, 21, 23, 30, 55, 67]. These passive Deepfake detection techniques, an ex-post forensics countermeasure, are still in their early stage [61, 62]: they suffer from poor detection accuracy (DA) [14, 45] and limited cross-dataset generalizability [38, 67], obtain differential performance across demographic attributes such as gender and race [37, 60], and are vulnerable to adversarial attacks [37, 60]. Further, they fail to cope with ever-evolving deepfake generation techniques.
Alternatively, watermarking is being actively researched as a proactive defense technique because it involves embedding invisible markers or signatures into authentic media content such as images or videos. These markers are unique to the creator and can help identify the authenticity of the content [25, 56, 70] by matching the watermark message retrieved from the media against the originally embedded watermark message. Thus, by watermarking content before it gets shared or distributed, creators can take preventive measures against malicious use or alteration of their work (as indicated by alteration of the watermark retrieved from the media). Accordingly, promoting and regulating the responsible and ethical use of AI is an important initiative of government leaders and legislatures [68]. Invisible watermarks are preferred because they preserve image quality and are less likely to be tampered with by a layperson. Traditional image watermarking techniques [6] typically modify the transform-domain coefficients of an image, using transforms such as the discrete cosine transform (DCT) and the discrete Fourier transform, for watermark embedding. Deep-learning-based techniques such as StegaStamp [56] and HiDDeN [70] have emerged as end-to-end alternatives to traditional image watermarking for efficient message embedding [4, 22, 35, 56, 70]. However, these watermarking techniques are either fragile (i.e., the watermark message is altered) under basic image processing operations such as compression and color adjustments [25, 70] or overly robust to malicious transforms, which hinders manipulated-media detection [56]. A study in [40] proposed GAN-based visible watermarking for media authentication; however, visible watermarks are easier for even a layperson to tamper with.
Importantly, for efficient media authentication and detection of manipulations, a semi-fragile invisible watermarking scheme is required that is robust to benign image transformations (such as contrast enhancement) and vulnerable to malicious transformations such as face-swapping-based Deepfakes. In other words, the watermark content/message retrieved from the media remains unaltered under benign transforms and altered under malicious transforms (facial manipulations). Resiliency to benign transforms ensures that the authenticity of the digital media can be validated in the presence of basic image processing operations such as compression, resizing, and color adjustment, which are often applied during image sharing, editing, or storing. At the same time, the vulnerability of the embedded watermark to malicious transformations is required to detect potential forgery or unauthorized modifications. Traditional semi-fragile watermarking techniques [25, 56, 70] that function within the transform domain, such as semi-fragile DCT [25], are vulnerable to high-level semantic transformations of images/media and may struggle to keep up with new or advanced manipulation techniques as technology advances. Additionally, the effectiveness of these techniques largely relies on the specific transform domain selected and the parameters set for embedding the watermark. A recent work [41] proposes a first-of-its-kind deep-learning-based semi-fragile invisible watermarking scheme called FaceSigns, which is based on an encoder-decoder style model and can withstand benign transformations while being vulnerable to malicious deepfake transformations, enabling manipulated media detection. Although invisible watermarks are unlikely to be tampered with by a layperson, abusers are not laypersons; they will make deliberate attempts to remove these watermarks. Therefore, the injected invisible watermark must be robust to evasion (watermark removal) attacks. However, there is a notable gap in this research regarding the model’s susceptibility to watermark removal attacks. Furthermore, there is also limited investigation into the model’s ability to generalize to unseen facial manipulations obtained using different generative techniques.
This paper aims to introduce a novel semi-fragile invisible watermarking scheme for social media authentication that generates high-quality watermarked images, demonstrates resilience (i.e., the retrieved watermark remains intact) to both known and unknown benign transformations, and remains vulnerable to unknown malicious facial transformations. In addition, the watermark embedded using our proposed model is robust against watermark removal attacks, including white-box attacks [3, 10, 20] and black-box attacks [68], thus addressing the limitations of the existing semi-fragile watermarking technique. This is facilitated through a unique architecture of our proposed model consisting of critic and adversarial networks together with their corresponding novel loss functions, along with the backbone encoder-decoder and the discriminator network. Semi-fragile watermarking was chosen for this authentication method because it effectively balances robustness and sensitivity, making it ideal for deepfake detection. In other words, this watermarking scheme withstands benign transformations, such as resizing or compression, applied to the watermarked images without triggering false positives, while remaining sensitive enough to detect malicious alterations such as Deepfakes. This ensures reliable and accurate detection of Deepfakes, which is crucial to preserving the integrity of social media images. Figure 1 illustrates the overview of the proposed approach for embedding semi-fragile invisible encrypted watermarks in facial images that withstand benign transformations and are vulnerable to malicious transformations for social media authentication. The technical contributions of our work are as follows:
Fig. 1.
Fig. 1. Overview of our proposed framework that involves embedding a secret encrypted message into an image using an encoder-decoder style network for the purpose of media authentication. This watermark is imperceptible to the human eye and resistant to typical image alterations and watermark removal attacks, but it is vulnerable to malicious facial transformations, i.e., Deepfakes.
(1)
A novel semi-fragile invisible facial image watermarking technique for social media authentication and for combating Deepfakes that offers superior imperceptibility and resilience to adversarial watermark removal attacks.
(2)
Evaluation of the model’s imperceptibility over other state-of-the-art (SOTA) watermarking models in terms of peak signal-to-noise ratio (PSNR) and structural similarity index metrics (SSIM).
(3)
Robustness analysis of the proposed model against unknown benign and malicious facial manipulation using different generative models.
(4)
Robustness analysis of the proposed model against watermark removal attacks using various white-box based (such as fast gradient sign method (FGSM) [20], Carlini & Wagner (C & W) [10], backward pass differentiable approximation (BPDA), and expectation over transformation (EOT) [3]) and black-box based watermark removal attacks (based on variational autoencoder (VAE) Embedding and Reconstruction [68]).
(5)
Thorough evaluation on the SOTA facial image datasets, namely FaceForensics++ [51], CelebFaces Attributes (CelebA) [34], and IMDB-WIKI [50], which are widely adopted for facial manipulation-based deepfake generation and detection.
(6)
Ablation study to better understand the impact of each module (network) used in our proposed model and threat model for the adversarial attacks against our proposed model.
The pros and cons of our proposed work are as follows: Our work presents a novel semi-fragile invisible watermarking scheme for social media authentication, generating high-quality watermarked images that withstand both known and unknown benign transformations while remaining vulnerable to malicious facial manipulations. This is facilitated through our proposed model’s unique architecture, which combines critic and adversarial networks with novel loss functions, a backbone encoder-decoder, and a discriminator network. This innovative design enables our watermarking scheme to overcome the shortcomings of previous watermarking methods, including susceptibility to watermark removal attacks, white-box attacks, and limited generalizability to unseen facial manipulations. Our work has two primary limitations. Firstly, the complexity of our model necessitates advanced hardware and GPU support, which we plan to address in future iterations by optimizing the model for practical deployment. Secondly, we were unable to simulate all the potential attacks described in the threat model in Section 7.2, but we intend to expand our model to better withstand these threats in future work.
This paper is organized as follows. Section 2 discusses the prior work on facial manipulation generation, passive deepfake detection, and image watermarking for media authentication. Section 3 discusses our proposed methodology of semi-fragile invisible watermarking technique. Section 4 discusses the implementation and experimental details, including the datasets used and the performance evaluation metrics. Section 5 discusses the imperceptibility and capability measures of various watermarking schemes. Section 6 discusses the comparative evaluation of the robustness and fidelity of different watermarking methods by exposing them to unseen benign and malicious transformations and calculating the retrieved bit recovery accuracy (BRA). Section 7 discusses the adversarial attacks and threat analysis of the proposed models against different white-box and black-box adversarial attacks along with the threat model. Section 8 discusses the ablation study to better understand the impact of each module used in our proposed model. Section 9 discusses the conclusion and future research directions.

2 Related Work

2.1 Facial Manipulation Generation and Passive Deepfake Detection

Facial manipulations are categorized primarily into three groups: identity swap [43], attribute manipulation [13, 24], and expression swap [57, 58]. Generative models such as autoencoders [5] (e.g., Faceswap), generative adversarial networks (GANs) [19] (e.g., FSGAN [43] and AttGAN [24]), and diffusion models (e.g., Stable Diffusion) [26] are used to create highly realistic fake content, including non-existent faces or alterations of existing ones [47]. Among these, GANs are the most commonly adopted models for facial manipulation generation because they excel at generating high-quality, realistic images that mimic the distribution of the original dataset.
The majority of methods currently used for Deepfake detection are binary baselines built on convolutional neural networks (CNNs) such as VGG16, ResNet50, ResNet101, ResNet152, and Xception [32, 37, 38, 39, 59]. Other approaches include Long Short-Term Memory networks [12] for analyzing spatio-temporal data, facial and behavioral biometrics [1, 17, 49], examining inconsistencies in mouth movements [21], multi-attentional models [67] that focus on different parts of the image, and \(F^{3}\)-Net [48], which detects subtle manipulative patterns by analyzing frequency aspects of images. Additionally, an ensemble model [45] that combines two ConvNext models trained at varying epochs and a Swin transformer has recently been proposed for enhanced deepfake detection.

2.2 Image Watermarking for Media Authentication

The existing digital watermarking techniques embed three types of watermarks: fragile [8, 36], robust [9, 15, 46, 70], and semi-fragile [25, 33, 41, 54, 69]. Fragile watermarks are particularly sensitive, designed to invalidate the authentication of an image at the slightest modification, ensuring stringent authenticity checks. In contrast, robust watermarks are crafted to endure various forms of manipulation, thus allowing content creators to assert ownership over their media even when it undergoes alterations. Semi-fragile watermarks combine features of both, i.e., they are fragile to manipulations and robust to genuine transformations. Traditional embedding techniques for semi-fragile watermarks have manipulated both the spatial domain (e.g., least significant bits [64]) and the frequency domain (e.g., DCT [25] and DWT [7, 29]) of digital media. However, these conventional approaches can make watermarks perceptible, distort the media, or render them susceptible to image transformations, particularly JPEG compression, making them inefficient for media authentication against tampering and alteration.
Deep-learning-based watermarking techniques offer more efficient watermark encoding with higher imperceptibility than traditional techniques. A robust watermarking technique called HiDDeN is proposed in [71], consisting of an encoder, a decoder, and a discriminator. However, this technique introduces distortions in the media and is not suitable for identifying manipulated media. Similarly, a watermarking technique called StegaStamp [56] encodes hyperlinks into image pixels using a trained neural network, imperceptibly to the human eye. However, this model lacks vulnerability to malicious transformations and is therefore unsuitable for media authentication. A study in [41] introduces FaceSigns, a semi-fragile deep-learning-based invisible watermark embedded into the image pixels, which utilizes a U-Net-based encoder-decoder architecture designed to be robust against benign image-processing operations yet fragile to any facial manipulation for media authentication. However, that model is not resistant to watermark removal-based adversarial attacks, rendering it unsuitable under adversarial settings.

3 Proposed Methodology

Figure 2 illustrates an overview of our proposed proactive defense technique based on a U-Net encoder-decoder architecture for invisible image watermarking. The five primary components of our proposed system are an encoder network \(E_{\alpha}\), a decoder network \(D_{\beta}\), an adversary network \(A_{adv}\), an adversarial discriminator network \(A_{\gamma}\), and a critic network \(C\). Training the encoder \(E_{\alpha}\) and decoder \(D_{\beta}\) networks involves embedding watermarks while encouraging message retrieval from watermarked images that have undergone benign modifications and discouraging retrieval from watermarked images that have undergone malicious changes. The adversary network \(A_{adv}\) mimics an intruder attempting to remove the watermark, thereby training the system to resist watermark removal approaches. The imperceptibility of the watermark is guaranteed by the image reconstruction and adversarial losses from the discriminator \(A_{\gamma}\). The critic network, denoted as \(C\), is responsible for assessing the quality of images by evaluating their degree of authenticity or realism.
Fig. 2.
Fig. 2. Overview of our proposed semi-fragile watermarking technique based on U-Net-based encoder-decoder architecture for media authentication. Training the encoder \(E_{\alpha}\) and decoder \(D_{\beta}\) network involves encouraging message retrieval from watermarked images that have undergone benign modifications and discouraging retrieval from watermarked images that have undergone malicious changes. The critic \(C\) network is in charge of obtaining a critic score based on the quality of the image by estimating how “real” or “authentic” the images appear. The adversary network \(A_{adv}\) mimics the efforts of an intruder to remove the watermark for adversarial purposes. The imperceptibility of the watermark is guaranteed by image reconstruction and adversarial loss from the discriminator \(A_{\gamma}\). The loss functions proposed associated with all networks in our proposed model are also shown in the figure.
In detail, the encoder network \(E_{\alpha}\) takes an input image \(x\) and a bit string \(b\in\left\{0,1\right\}^{L}\) of length \(L\), and outputs a watermarked image \(x_{w}\) where \(x_{w}=E(x,b)\). These watermarked images undergo image transformations, which include benign as well as malicious transformations. In this context, the watermarked images generated from the encoder undergo benign image transformations \((g_{bt}\sim G_{bt})\) to obtain a benign image \(x_{bt}=g_{bt}(x_{w})\). Similarly, watermarked images of the encoder undergo malicious facial manipulation-based transformations \((g_{mt}\sim G_{mt})\) to obtain a malicious image \(x_{mt}=g_{mt}(x_{w})\). These transformed watermarked images are fed to the decoder network to retrieve the embedded watermarked message \(b_{bt}=D(x_{bt})\) and \(b_{mt}=D(x_{mt})\) (note that \(b^{{}^{\prime}}\) is the notation used to denote the bit string retrieved for any image in general in Section 4.3), respectively.
During training, we employ the \(L_{1}\) distortion between the retrieved and ground truth bit strings to optimize secret watermark retrieval. Further, the decoder is encouraged to minimize message distortion \(L_{1}(b,b_{bt})\) in order to make them resilient to benign transformations, and to maximize error \(L_{1}(b,b_{mt})\) to make them vulnerable to malicious manipulations. Therefore the secret retrieval error for an image \(L_{RE}(x)\) is given as follows:
\begin{align}L_{RE}(x)=L_{1}(b,b_{bt})-L_{1}(b,b_{mt}).\end{align}
(1)
Further, we calculate the image reconstruction loss between the original image \(x\) and the watermarked image \(x_{w}\) by optimizing three specific image distortion metrics: (\(L_{1},L_{2},L_{pips}\)). Each of these metrics measures a different aspect of image distortion, helping to ensure that the watermarked image retains visual fidelity to the original while embedding the necessary data. For example, the \(L_{1}\) metric calculates the absolute differences between the corresponding pixel values of the original and watermarked images. Similarly, \(L_{2}\), also known as the mean squared error, calculates the square of the Euclidean distance between the original and watermarked images. Finally, the \(L_{pips}\) metric evaluates the perceptual similarity between two images based on high-level features extracted from pre-trained deep networks. The LPIPS loss is particularly effective in assessing how perceptually similar two images are beyond their direct pixel-wise differences.
These metrics collectively contribute to \(L_{d}(x,x_{w})\), which is used to compute the image reconstruction loss. This optimization ensures that the watermarked image closely resembles the original image in terms of aesthetics. In addition, we incorporate an adversarial loss \(L_{G}(x_{w})=\log(1-A(x_{w}))\), derived from a discriminator that is concurrently trained to distinguish between the watermarked and original images. Consequently, the total image reconstruction loss is computed by combining these individual loss components.
\begin{align}L_{d}(x,x_{w})=L_{1}(x,x_{w})+L_{2}(x,x_{w})+c_{pips}L_{pips}(x,x_{w}),\end{align}
(2)
\begin{align}L_{image}(x,x_{w})=L_{d}(x,x_{w})+c_{g}L_{G}(x_{w}).\end{align}
(3)
Finally, mini-batch gradient descent is used to train the encoder and decoder network parameters \(\alpha,\beta\) to minimize the following loss over the distribution of input messages and images:
\begin{align}\mathbb{E}_{x,b,g_{bt},g_{mt}}[L_{image}(x,x_{w})+c_{RE}L_{RE}(x)].\end{align}
(4)
Likewise, the discriminator parameters \(\gamma\) are trained over original images \(x\) and watermarked images \(x_{w}\) using the following objective:
\begin{align}\mathbb{E}_{x,b}[\log(1-A(x))+\log(A(x_{w}))].\end{align}
(5)
In the above equations, \(c_{pips},c_{g},c_{RE}\) are scalar coefficients for the respective loss terms, determined empirically.
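To make the optimization concrete, the following is a minimal PyTorch sketch of the losses in Equations (1) through (4). It is illustrative rather than the authors' released code; the coefficient values, the use of the `lpips` package for \(L_{pips}\), and the assumption that the discriminator outputs probabilities in \((0,1)\) are our own choices.

```python
# Illustrative sketch of the training losses in Equations (1)-(4).
import torch
import torch.nn.functional as F
import lpips

lpips_fn = lpips.LPIPS(net="alex")  # perceptual metric used here for L_pips

def retrieval_loss(b, b_bt, b_mt):
    # L_RE(x) = L1(b, b_bt) - L1(b, b_mt): low error under benign transforms,
    # high error under malicious (Deepfake) transforms.
    return F.l1_loss(b_bt, b) - F.l1_loss(b_mt, b)

def image_loss(x, x_w, disc, c_pips=1.0, c_g=0.1):
    # L_d combines L1, L2, and LPIPS distortions between original and watermarked images.
    l_d = (F.l1_loss(x_w, x) + F.mse_loss(x_w, x)
           + c_pips * lpips_fn(x_w, x).mean())
    # Adversarial term L_G(x_w) = log(1 - A(x_w)) from the discriminator A.
    l_g = torch.log(1.0 - disc(x_w) + 1e-8).mean()
    return l_d + c_g * l_g

def encoder_decoder_loss(x, x_w, b, b_bt, b_mt, disc, c_re=1.0):
    # Equation (4): minimized over mini-batches w.r.t. the encoder/decoder parameters.
    return image_loss(x, x_w, disc) + c_re * retrieval_loss(b, b_bt, b_mt)
```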
In addition to the encoder \(E_{\alpha}\), decoder \(D_{\beta}\), and discriminator networks \(A_{\gamma}\), we also introduce the critic \(C\) and adversary networks \(A_{adv}\) in the overall model pipeline.

3.1 Critic

The critic network, denoted as \(C\), is responsible for assessing the quality of images by evaluating their degree of authenticity or realism. It motivates the encoder to watermark the images in a way that makes the distortion less obvious and deceives the observer, thus improving the quality of the watermarked images. This module comprises two convolutional blocks, followed by an adaptive spatial pooling layer and a linear classification layer that generates the critic score. The loss associated with the critic network is given as follows:
\begin{align}L_{c}=\mathbb{E}_{x,b}[C(E(x,b))].\end{align}
(6)
We further optimize the critic module \(C\) using the Wasserstein loss, which is employed to distinguish between real and watermarked images and generally provides more stable gradients, enabling smoother and more reliable training:
\begin{align}L_{w}=\mathbb{E}_{x}[C(x)]-\mathbb{E}_{x,b}[C(E(x,b))].\end{align}
(7)
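As an illustration of the critic described above, the following PyTorch sketch shows a two-block critic with adaptive pooling and a linear scoring layer, together with the Wasserstein-style objective of Equation (7); the channel widths and kernel sizes are assumptions, not the paper's exact values.

```python
# Minimal sketch of the critic C and its Wasserstein-style objective (Equations (6)-(7)).
import torch
import torch.nn as nn

class Critic(nn.Module):
    def __init__(self, in_ch=3):
        super().__init__()
        self.features = nn.Sequential(           # two convolutional blocks
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)       # adaptive spatial pooling
        self.score = nn.Linear(64, 1)             # linear layer producing the critic score

    def forward(self, x):
        h = self.pool(self.features(x)).flatten(1)
        return self.score(h)

def critic_wasserstein_loss(critic, x, x_w):
    # L_w = E[C(x)] - E[C(E(x, b))]: separates real images from watermarked images.
    return critic(x).mean() - critic(x_w).mean()
```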

3.2 Adversary

The adversary network mimics an intruder attempting to remove the watermark. More precisely, the adversary network takes watermarked images and tries to strip the watermark, producing an additional set of supposedly unwatermarked images. This module is similar to the encoder module except that it does not take a data tensor. It comprises two convolutional blocks followed by a linear layer that creates the residual mask. Subsequently, we employ a scaled TanH activation function to limit the maximum perturbation of each pixel to \(\pm\)0.01. We then combine the residual mask with the watermarked image to produce the final output. The loss associated with the adversary network is given as follows:
\begin{align}L_{adv}=\mathbb{E}_{x,b}[CrossEntropy(b,D(A_{adv}(E(x,b))))].\end{align}
(8)
We further optimize the adversary module \(A_{adv}\) using the negative cross-entropy loss, which instructs the adversary to remove the embedded watermark:
\begin{align}L_{r}=-\mathbb{E}_{x,b}[CrossEntropy(b,D(A_{adv}(E(x,b))))].\end{align}
(9)
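A minimal PyTorch sketch of the adversary follows, assuming per-bit binary cross-entropy as the concrete form of the cross-entropy in Equations (8)-(9); the channel widths are illustrative, and the residual head is shown as a 1×1 convolution rather than the linear layer described above, purely for brevity.

```python
# Sketch of the adversary A_adv: it predicts a residual mask, clamps it to +/-0.01
# via a scaled tanh, and adds it to the watermarked image to try to scrub the watermark.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adversary(nn.Module):
    def __init__(self, in_ch=3):
        super().__init__()
        self.blocks = nn.Sequential(              # two convolutional blocks
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.residual = nn.Conv2d(32, in_ch, 1)   # produces the residual mask

    def forward(self, x_w):
        mask = 0.01 * torch.tanh(self.residual(self.blocks(x_w)))  # max perturbation 0.01
        return x_w + mask                          # "cleaned" image the adversary hopes is watermark-free

def adversary_removal_loss(b, decoded_logits):
    # L_r: negative cross-entropy over the decoded bits pushes the adversary to destroy
    # the message; the encoder/decoder are trained against it via L_adv.
    return -F.binary_cross_entropy_with_logits(decoded_logits, b.float())
```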
Finally, the overall combined loss associated with the proposed model is given by:
\begin{align}L_{total}=\mathbb{E}_{x,b,g_{bt},g_{mt}}[L_{image}(x,x_{w})+c_{RE}L_{RE}(x)] + \mathbb{E}_{x,b}[\log(1-A(x))+\log(A(x_{w}))]+L_{w}+L_{r}. \end{align}
(10)

3.3 Message Encoding

Watermarking data is used by the encoder network as a bit string \(b\) with length \(L\). This watermarking data may include a secret message that may be used to verify the authenticity of the image or details about the camera that took the image. Using hashing, symmetric, or asymmetric encryption methods,2 we can encrypt a target message to deter adversaries (who have obtained white-box access to the encoder network) from encoding it. In our experiments, we incorporate \(64\)-bit encrypted messages, enabling the network to encode \(2^{64}\) distinct messages. Encryption involves securing data by converting readable information, termed plaintext, into an encoded format called ciphertext. In our study, we employ symmetric encryption, where the same secret key is used for both the encryption and the decryption processes. Specifically, we utilize the data encryption standard (DES), a symmetric key algorithm designed for electronic data encryption. DES functions as a block cipher, encrypting data in blocks, typically \(64\)-bit blocks, using a \(56\)-bit secret key for message encryption or decryption [52].
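As a concrete illustration of the symmetric-key step, the sketch below encrypts an 8-byte (64-bit) message with DES before it is handed to the encoder as the bit string \(b\). It assumes the PyCryptodome package; the key and message values are placeholders, not the ones used in our experiments.

```python
# Illustrative DES encryption of a 64-bit message prior to embedding (PyCryptodome).
from Crypto.Cipher import DES

key = b"8bytekey"                       # 64-bit key (56 effective bits) - demo value only
cipher = DES.new(key, DES.MODE_ECB)     # DES block cipher, 64-bit blocks

plaintext = b"AUTH0001"                 # exactly 8 bytes = 64 bits to embed
ciphertext = cipher.encrypt(plaintext)  # 64-bit ciphertext

# Convert the ciphertext to the bit string b in {0,1}^64 expected by the encoder.
bits = [(byte >> i) & 1 for byte in ciphertext for i in range(7, -1, -1)]
assert len(bits) == 64

# At verification time, the decoder's 64 recovered bits are packed back into 8 bytes
# and decrypted with the same key to check authenticity.
decipher = DES.new(key, DES.MODE_ECB)
assert decipher.decrypt(ciphertext) == plaintext
```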

3.4 Network Architectures

Following existing studies in [27, 41], the foundation of our encoder and decoder networks is the U-Net CNN architecture, which takes images of size \(224\times 224\). Initially, a trainable fully connected layer converts the encrypted message, represented as an L-length bit string, into an \(84\times 84\) tensor \(b_{proj}\). The original RGB image is scaled to \(224\times 224\) using bilinear interpolation, and this tensor is then added as the fourth channel to form the encoder network’s input. There are eight downsampling and eight upsampling layers in the U-Net encoder. As recommended by [41, 44], we improve the original U-Net architecture by substituting convolutions followed by nearest-neighbor upsampling for transposed convolutions in the upsampling layers. The structure of the decoder network substantially resembles that of the encoder network. First, the U-Net decoder creates an intermediate output of size \(224\times 224\). Bilinear downsampling is then used to reduce the intermediate output to \(84\times 84\), creating \(b_{Decoded}\). After that, a fully connected layer projects \(b_{Decoded}\) onto a vector of size \(L\). A sigmoid layer is then used to scale the values between \(0\) and \(1\).
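The assembly of the encoder input can be sketched as follows in PyTorch. The \(84\times 84\) projection and \(224\times 224\) image size follow the text; how the smaller message plane is matched to the image resolution is not specified above, so the sketch upsamples it bilinearly as one plausible choice.

```python
# Sketch of assembling the 4-channel encoder input from the image and the message bits.
import torch
import torch.nn as nn
import torch.nn.functional as F

L = 64
message_proj = nn.Linear(L, 84 * 84)   # trainable projection of the bit string to b_proj

def build_encoder_input(image, bits):
    # image: (N, 3, H, W) in [0, 1]; bits: (N, L) with values in {0, 1}
    x = F.interpolate(image, size=(224, 224), mode="bilinear", align_corners=False)
    b_proj = message_proj(bits.float()).view(-1, 1, 84, 84)
    # Upsample the 84x84 message plane to 224x224 so it can be concatenated as a channel.
    b_plane = F.interpolate(b_proj, size=(224, 224), mode="bilinear", align_corners=False)
    return torch.cat([x, b_plane], dim=1)   # (N, 4, 224, 224) fed to the U-Net encoder
```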
We used the patch discriminator described in [27] for the discriminator network. The discriminator’s job is to identify if each \(N\times N\) image patch is legitimate or fake. To obtain the output of the discriminator, we aggregate the discriminator responses across all patches. Three convolutional blocks with a stride of \(2\) are used for our discriminator network, making it easier to classify patches of size \(28\times 28\).
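A minimal PatchGAN-style sketch of this discriminator is shown below; the three stride-2 blocks and the resulting \(28\times 28\) patch grid follow the text, while the channel widths, kernel sizes, and activation choices are assumptions.

```python
# Sketch of a patch discriminator with three stride-2 convolutional blocks; per-patch
# scores are averaged to obtain the image-level output.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 1, 3, padding=1),    # one logit per spatial patch
        )

    def forward(self, x):
        patch_logits = self.net(x)                               # (N, 1, 28, 28) for 224x224 inputs
        return torch.sigmoid(patch_logits).mean(dim=[1, 2, 3])   # aggregate over all patches
```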
Transformation Functions. In our work, we used benign and malicious transformation functions to establish the robustness and fragility of the embedded watermark using our proposed model.
Benign Transforms. During training, we apply a diverse set of differentiable benign image transformations, denoted \(G_{bt}\), to our watermarked images in order to imitate common image processing operations.
(1)
JPEG Compression: Recall that during training, we apply JPEG compression with quality factors of 25%, 50%, and 75%. We use the differentiable JPEG function introduced in [53] to approximate JPEG compression.
(2)
Gaussian Blur: We use a Gaussian kernel \(k\) to convolve the original image. The expression for this transform is \(t(x)=k*x\), where \(*\) denotes the convolution operator. We employ kernel sizes between \(k=5\) and \(k=10\).
(3)
Saturation Settings: We randomly linearly interpolate between the original image and its grayscale version to allow for different color modifications from social media filters.
(4)
Contrast Settings: Using a contrast factor \(\sim u[0.8,1.8]\), we linearly rescale the histogram of the image.
(5)
Downsizing and Upsizing: The image is first downscaled by a factor scale \(\sim u[3,8]\) and then upsampled back to its original size by the same factor using bilinear interpolation.
(6)
Translation and Rotation: We shift the image horizontally and vertically by \(n_{h}\) and \(n_{v}\) pixels, where \(n_{h},n_{v}\sim u[-8,8]\), and rotate it by \(\theta\) degrees, where \(\theta\sim u[-8,8]\).
In general, compression attacks, such as those using JPEG algorithms, are deliberate attempts to degrade image quality or probe system vulnerabilities, often introducing artifacts that obscure details. In contrast, social media platforms apply compression to optimize storage and improve loading times, aiming for efficient data use while maintaining acceptable visual quality. While both processes involve lossy compression, the former is typically used for exploitation or testing, whereas the latter is routine practice for user experience.
During training, we selected one transformation function from the aforementioned list, together with an Identity transform, for every mini-batch iteration, and we applied it to every image in the batch.
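The per-batch sampling of a benign transform can be sketched as follows, assuming torchvision; the differentiable JPEG approximation of [53] is abstracted as a placeholder function, and odd blur kernel sizes are used here because torchvision requires them.

```python
# Sketch of sampling one benign transformation (or identity) per mini-batch.
import random
import torch
import torchvision.transforms.functional as TF

def jpeg_approx(batch, quality):
    # Placeholder for the differentiable JPEG approximation used during training.
    return batch

def random_benign_transform(batch):
    # batch: (N, 3, 224, 224) watermarked images in [0, 1]
    choice = random.choice(["identity", "jpeg", "blur", "saturation",
                            "contrast", "resize", "affine"])
    if choice == "jpeg":
        return jpeg_approx(batch, quality=random.choice([25, 50, 75]))
    if choice == "blur":
        return TF.gaussian_blur(batch, kernel_size=random.choice([5, 7, 9]))
    if choice == "saturation":
        # Interpolate between the original image and its grayscale version.
        return TF.adjust_saturation(batch, random.uniform(0.0, 1.0))
    if choice == "contrast":
        return TF.adjust_contrast(batch, random.uniform(0.8, 1.8))
    if choice == "resize":
        s = random.uniform(3.0, 8.0)
        small = TF.resize(batch, [int(224 / s), int(224 / s)])
        return TF.resize(small, [224, 224])
    if choice == "affine":
        return TF.affine(batch, angle=random.uniform(-8, 8),
                         translate=[random.randint(-8, 8), random.randint(-8, 8)],
                         scale=1.0, shear=[0.0])
    return batch   # identity transform
```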
Malicious Transforms. The watermarks embedded using our proposed model should be vulnerable to all malicious attacks or generative techniques. To facilitate this, we assume that all Deepfake approaches operate by modifying facial features to mimic the appearance of the target identity. Consequently, we represent malicious manipulation as a transformation function (\(g_{mt}\)) that specifically involves changing the watermark within certain facial regions. To this end, the facial landmarks are detected using MTCNN [66]. These points are then used as vertices to create polygons; for example, the landmarks identifying the outline of the lips are connected to form a lip polygon. We used the Dlib library [28], which provides functions to draw shapes based on specified points, to draw these polygons on the image by connecting the landmark points. Then, for every image, we create a mask \(M_{h\times w\times c}\) made up of all ones. Next, we locate the polygons that represent the lips, nose, and eyes on the face, and we set the pixel values inside these polygons to a preset watermark retention percentage, \(w_{r}\in[0,1]\). In other words, \(M[i,j,:]=w_{r}\) for all pixels \((i,j)\) inside the face feature polygons. Ultimately, the maliciously altered image, \(g_{mt}(x_{w})\), is computed as follows:
\begin{align}g_{mt}(x_{w})=M\cdot x_{w}+(1-M)\cdot x.\end{align}
(11)
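The masking step of Equation (11) can be sketched as follows with NumPy and OpenCV; the landmark-to-polygon step (MTCNN/Dlib) is assumed to have been run already, so `face_polygons` is a hypothetical list of vertex arrays rather than part of this snippet.

```python
# Sketch of the malicious transform: landmark-derived polygons (lips, nose, eyes)
# define where the watermark is attenuated to the retention percentage w_r.
import numpy as np
import cv2

def malicious_transform(x_w, x, face_polygons, w_r=0.0):
    # x_w: watermarked image (H, W, 3); x: original image (H, W, 3)
    # face_polygons: list of (K, 2) integer arrays of polygon vertices
    h, w, c = x_w.shape
    mask = np.ones((h, w), dtype=np.float32)              # M starts as all ones
    for poly in face_polygons:
        cv2.fillPoly(mask, [poly.astype(np.int32)], w_r)  # set facial regions to w_r
    mask = mask[..., None]                                # broadcast over channels
    # g_mt(x_w) = M * x_w + (1 - M) * x  (elementwise)
    return mask * x_w + (1.0 - mask) * x
```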
Thus, based on the aforementioned configuration, we have implemented three different versions of this model, as shown in Table 1, where U-Net denotes the encoder-decoder model with the adversarial discriminator network, \(C\) is the critic network, and \(A_{adv}\) is the adversary network.
Table 1.
Model | Transformations used during training
Our-U-Net+C+\(A_{adv}\) (Baseline) | No transformations
Our-U-Net+C+\(A_{adv}\) (\(g_{bt}\)) | Only benign transformations
Our-U-Net+C+\(A_{adv}\) (\(g_{bt}\), \(g_{mt}\)) | Both benign and malicious transformations
Table 1. Implementation of Different Configurations of Our Proposed Model

4 Experimental Validations

In our experiments, we used three datasets, namely FaceForensics++, CelebA, and IMDB-WIKI, for both intra-dataset and cross-dataset evaluations.

4.1 Datasets

FaceForensics++: FaceForensics\(++\) (FF\(++\)) [51] is an automated benchmark for facial manipulation detection. It consists of several manipulated videos created using two different generation techniques: Identity Swapping (FaceSwap, FaceSwap-Kowalski, FaceShifter, Deepfakes) and Expression Swapping (Face2Face and NeuralTextures). We used the FF\(++\) dataset’s \(c23\) version for both training and testing (\(80\%\) videos for training, \(20\%\) videos for testing, with 60 frames per video). We used the real images from this dataset for embedding watermarks using our model.
CelebA: The large-scale CelebA dataset [34] is a publicly available face dataset with more than \(200K\) celebrity images. It covers large pose variations and background clutter, with \(10k\) identities, 202,599 face images, \(5\) landmark locations, and \(40\) binary attribute annotations per image. This dataset is used to train our model (\(70\%\) for training and \(30\%\) for testing) to generate watermarked facial images.
IMDB-WIKI: IMDB-WIKI [50] is a highly curated dataset of popular celebrities that is created from both the IMDB website and Wikipedia. The dataset has rich annotations like DOB, year of photo taken, gender, name of the celebrity, and celebrity ID along with other essential information in the metadata file. Altogether, the dataset has 460,723 facial images representing 20,284 celebrities sourced from IMDb, and 62,328 images from Wikipedia, resulting in a combined total of 523,051 images.

4.2 Training Procedure

The training process involves 70,000 mini-batch iterations of images with a batch size of \(64\), employing an Adam optimizer with a fixed learning rate of \(0.0001\). All the implementation was performed using Python, and all the models were trained using images of size \(224\times 224\), obtained after cropping and resizing the images from the datasets. In our experiments, we consider a message length of \(L=64\) bits, meaning that each message consists of \(64\) binary digits and that the total number of possible distinct messages is \(2^{64}\). This message length is significant because it offers a large combination of bits, enabling strong encryption standards and the capacity to handle complex data structures or identifiers in computational and cryptographic applications. The parameters are computed once for the model during the offline training stage, and not for each input image. Figure 3 shows the pictorial representation of the watermarked output \(x_{w}\) when the original image \(x\) is input to our proposed model for watermarking.
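A minimal training-loop sketch matching the stated hyperparameters (Adam, learning rate \(10^{-4}\), batch size 64, 70,000 mini-batch iterations) follows, assuming PyTorch; the dataset object, the model constructors, and the `total_loss` function are placeholders tied to the loss sketches earlier, not part of our released code.

```python
# Sketch of the training loop with the hyperparameters stated in Section 4.2.
import torch
from torch.utils.data import DataLoader

encoder, decoder, discriminator, critic, adversary = build_models()   # assumed constructors
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)

loader = DataLoader(face_dataset, batch_size=64, shuffle=True, drop_last=True)  # assumed dataset
L = 64
step = 0
while step < 70_000:
    for images in loader:
        bits = torch.randint(0, 2, (images.size(0), L)).float()  # random 64-bit messages
        x_w = encoder(images, bits)                               # watermarked images
        loss = total_loss(images, x_w, bits, decoder,             # assumed combined loss (Eq. 10)
                          discriminator, critic, adversary)
        opt.zero_grad()
        loss.backward()
        opt.step()
        step += 1
        if step >= 70_000:
            break
```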
We primarily evaluate image watermark embedding techniques based on the following criteria.
Fig. 3.
Fig. 3. Pictorial representation of watermarked output \(x_{w}\) when original image \(x\) is given to our proposed model for watermarking.

4.3 Evaluation Criteria

Imperceptibility and Capacity: For imperceptibility, we compare the original and watermarked images using two metrics: PSNR and SSIM. Both PSNR and SSIM are widely used to assess the quality and perceptibility of watermarked images. A higher PSNR signifies less distortion between the original and watermarked images, while a higher SSIM value indicates closer resemblance. Thus, higher values for both metrics are preferable, suggesting a more imperceptible watermark.
Capacity refers to the quantity of information that can be successfully embedded within an image. This metric is crucial in contexts like digital watermarking, where data must be hidden within visual content without affecting the perceptibility or integrity of the original image. Capacity is measured in bits per pixel (BPP), i.e., the number of bits of the encrypted message embedded per pixel of the image, calculated as the ratio of the message length (\(L\)) to the total number of pixel values in the image (\(H\times W\times C\)).
The challenge is to embed enough data without impacting the imperceptibility of the image.
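A short sketch of these two evaluation quantities is given below, assuming scikit-image and NumPy; the inputs are uint8 RGB arrays of identical shape.

```python
# Sketch of the imperceptibility (PSNR, SSIM) and capacity (BPP) metrics.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def capacity_bpp(L, H, W, C=3):
    # Bits per pixel: message length over the total number of pixel values H*W*C.
    return L / (H * W * C)

def imperceptibility(x, x_w):
    # Higher PSNR and SSIM between the original x and watermarked x_w indicate
    # a less perceptible watermark.
    psnr = peak_signal_noise_ratio(x, x_w)
    ssim = structural_similarity(x, x_w, channel_axis=-1)
    return psnr, ssim
```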
Robustness and Fragility: We quantify the BRA of the embedded watermarked images under the unknown benign and malicious image transformations to evaluate the robustness and fragility of the watermarking technique. A high BRA against unseen benign modifications would indicate that the watermarked or embedded data should remain detectable and recoverable even after the image has undergone common image processing operations. On the other hand, a low BRA is preferred for fragility against malicious transformations such as Deepfakes.
In this context, to calculate the BRA, we directly compare the original input bit string, denoted \(b\), with the decoded output, \(b^{{}^{\prime}}\!\), from the decoder. Recall that in our experiments, we use the same \(56\)-bit secret key for both encryption and decryption of the \(64\)-bit message bit string using DES [52] (please refer to Section 3.3). This \(56\)-bit secret key is used to decrypt the extracted \(64\)-bit string (\(b^{{}^{\prime}}\)) from the watermarked image.
Let \(n\) represent the total number of bits in \(b\) (and in \(b^{{}^{\prime}}\), assuming they have the same length), and let \(m\) denote the number of bits that match between \(b\) and \(b^{{}^{\prime}}\), i.e., \(n\) minus the Hamming distance between them. The BRA is then calculated using the following equation:
\begin{align}BRA=\frac{m}{n}\times 100\%.\end{align}
(12)
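The BRA of Equation (12) reduces to a Hamming-distance computation, sketched below with NumPy; the example values are purely illustrative.

```python
# Sketch of the BRA computation: matching bits over total bits, in percent.
import numpy as np

def bit_recovery_accuracy(b, b_prime):
    b = np.asarray(b, dtype=np.uint8)
    b_prime = np.asarray(b_prime, dtype=np.uint8)
    n = b.size                                   # total number of bits
    m = n - np.count_nonzero(b != b_prime)       # matching bits = n minus the Hamming distance
    return 100.0 * m / n

# Example: a single flipped bit in a 64-bit message gives BRA = 98.4375%.
b = np.zeros(64, dtype=np.uint8)
b_prime = b.copy(); b_prime[0] = 1
print(bit_recovery_accuracy(b, b_prime))         # 98.4375
```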

5 Imperceptibility and Capacity

We evaluate the imperceptibility and capacity of our proposed watermarking framework against four existing deep-learning-based image watermarking techniques: HiDDeN [70], StegaStamp [56], Semi-fragile-DCT [25], and FaceSigns [41] trained on benign and malicious transformations.
In Table 2, we present the image imperceptibility and capacity metrics of different watermarking techniques. Our observations reveal that our model achieves superior imperceptibility in encoding messages compared to StegaStamp, HiDDeN, and Semi-fragile-DCT, as indicated by higher PSNR and SSIM values. The imperceptibility of our proposed model is on par with the FaceSigns model (trained on both benign and malicious transformations), which is due to the similar backbone architecture, i.e., U-Net. We also observed that, while it is commonly believed that there is a trade-off between capacity and imperceptibility, this is not always the case. For instance, as shown in Table 2, the HiDDeN watermarking scheme with just a \(30\)-bit length did not yield better imperceptibility in terms of PSNR and SSIM. This suggests that imperceptibility depends significantly on how effectively the bit string is embedded rather than solely on the capacity of the data embedded.
Table 2.
Method | H, W | L | BPP | PSNR | SSIM
SemiFragile-DCT | 128 | 256 | 5.2e-3 | 20.29 | 0.846
Hidden | 128 | 30 | 6.1e-4 | 24.96 | 0.928
StegaStamp | 400 | 100 | 2.0e-4 | 28.64 | 0.922
FaceSigns (Semi-Fragile) | 256 | 128 | 6.5e-4 | 36.08 | 0.975
Our-U-Net (Baseline) | 224 | 64 | 3.71e-4 | 34.24 | 0.959
Our-U-Net+C | 224 | 64 | 3.71e-4 | 34.82 | 0.965
Our-U-Net+C+\(A_{adv}\) | 224 | 64 | 3.71e-4 | 35.57 | 0.970
Table 2. Capability and Imperceptibility Measures of Various Invisible Image Watermarking Schemes
The input image’s width and height are denoted by \(H\) and \(W\); \(L\) is the message length in bits and BPP is the capacity in bits per pixel.
The improved performance of our model is largely due to differences in network architecture. In addition, we utilized an intermediate message reconstruction loss, which encourages the network to preserve important features and details throughout its depth that might otherwise be lost during downsampling in the contracting path. Furthermore, our model (Our-U-Net+C+\(A_{adv}\)) employs nearest-neighbor upsampling instead of transposed convolutions. This choice helps minimize upsampling artifacts, further improving the imperceptibility of the image with an embedded watermark, as also noted in [41]. We also experimented with a \(128\)-bit string using our proposed model and obtained imperceptibility similar to that of a \(64\)-bit string. Therefore, we used a \(64\)-bit string for all the remaining experiments as a more computationally friendly alternative.

6 Robustness and Fidelity

In order to examine the resilience and susceptibility of different Deep Neural Network-based watermarking methods, we expose the watermarked images to unseen benign and malicious transformations and evaluate the retrieved BRA. The results highlighted in bold are the top performances.

6.1 Benign Transform

For benign transformations, we applied different levels of Gaussian blur, JPEG compression, and different Instagram filters, namely Aden, Brooklyn, and Clarendon. We utilized the open-source Pilgram library [2], which provides a range of image processing filters, including Instagram-style filters, to implement these filters and test their impact on our proposed watermarking scheme. Figure 4(a) illustrates Gaussian blur on invisible watermarked images using different kernel sizes and \(\sigma\) values. Figure 4(b) illustrates JPEG compression on invisible watermarked images at different compression rates varying from \(25\%\) to \(75\%\). For Figure 4, the watermarked samples are generated using our U-Net+C+\(A_{adv}\) model trained only on benign transformations.
Fig. 4.
Fig. 4. (a) Illustration of Gaussian blur on invisible watermarked images using different kernel sizes and the \(\sigma\) values. (b) Application of JPEG compression to invisible watermarked images at different compression rates ranging from 25 → 75 [best viewed in Zoom].
Table 3 tabulates the BRA in \(\%\) for different watermarking techniques after applying Gaussian blur as a benign transform at different kernel sizes and \(\sigma\) values. Table 4 tabulates BRA in \(\%\) for different watermarking techniques after applying JPEG compression (benign transform) at compression rates of 25\(\%\), 50\(\%\), and 75\(\%\), respectively. Table 5 tabulates the effect of unseen benign transformations on invisible watermarked images in terms of BRA using different Instagram filters. All models are trained and tested on the CelebA dataset.
Table 3.
Gaussian Blur (BRA \(\%\))
Method | None | Kernel_size = 3, \(\sigma=-1\) | Kernel_size = 5, \(\sigma=-1\) | Kernel_size = 19, \(\sigma=-1\) | Kernel_size = 23, \(\sigma=-1\)
SemiFragile DCT | 99.43 | 76.24 | 72.19 | 67.63 | 63.49
Hidden | 97.65 | 84.97 | 82.29 | 79.45 | 73.62
StegaStamp | 99.62 | 99.14 | 98.65 | 94.95 | 92.48
FaceSigns (Semi-Fragile) | 99.49 | 98.24 | 96.58 | 94.12 | 91.65
U-Net\(+\)C\(+\)\(A_{adv}\) (Baseline) | 99.29 | 95.56 | 92.61 | 89.37 | 85.95
U-Net\(+\)C\(+\)\(A_{adv}\) (\(g_{bt}\)) | 99.45 | 99.18 | 98.76 | 96.59 | 93.87
U-Net\(+\)C\(+\)\(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | 99.31 | 98.84 | 97.28 | 95.58 | 92.26
Table 3. The Effect of Gaussian Blur on the Invisible Watermarked Images, in Terms of BRA with Different Kernel Sizes and \(\sigma\) Values on Different Versions of Our Proposed Model as Shown in Table 1
Table 4.
JPEG Compression (BRA \(\%\))
Method | None | 25\(\%\) | 50\(\%\) | 75\(\%\)
SemiFragile DCT | 99.43 | 60.13 | 58.28 | 53.86
Hidden | 97.65 | 71.45 | 70.64 | 67.19
StegaStamp | 99.62 | 98.38 | 97.28 | 95.24
FaceSigns (Semi-Fragile) | 99.49 | 98.54 | 95.38 | 93.75
U-Net\(+\)C\(+\)\(A_{adv}\) (Baseline) | 99.29 | 92.49 | 77.78 | 71.59
U-Net\(+\)C\(+\)\(A_{adv}\) (\(g_{bt}\)) | 99.45 | 98.68 | 95.87 | 94.52
U-Net\(+\)C\(+\)\(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | 99.31 | 96.68 | 92.25 | 89.45
Table 4. The Effect of JPEG Compression on the Invisible Watermarked Images in Terms of BRA at Different Compression Rates on Different Versions of Our Proposed Model as Described in Table 1
Table 5.
Instagram Filters (BRA \(\%\))
Method | None | Aden | Brooklyn | Clarendon | Aden\(+\)Brooklyn | Brooklyn\(+\)Clarendon | Aden\(+\)Clarendon | Aden\(+\)Brooklyn\(+\)Clarendon
SemiFragile DCT | 99.43 | 93.47 | 95.79 | 95.12 | 92.39 | 93.42 | 94.74 | 91.56
Hidden | 97.65 | 94.61 | 93.82 | 94.13 | 91.37 | 91.29 | 92.23 | 89.79
StegaStamp | 99.62 | 99.48 | 99.26 | 99.09 | 97.18 | 96.28 | 95.79 | 94.13
FaceSigns (Semi-Fragile) | 99.49 | 99.45 | 99.22 | 99.15 | 98.53 | 97.86 | 97.51 | 96.36
U-Net\(+\)C\(+\)\(A_{adv}\) (Baseline) | 99.29 | 99.27 | 98.87 | 98.64 | 97.24 | 96.02 | 95.09 | 94.61
U-Net\(+\)C\(+\)\(A_{adv}\) (\(g_{bt}\)) | 99.45 | 99.39 | 99.25 | 99.19 | 98.87 | 98.54 | 98.69 | 97.18
U-Net\(+\)C\(+\)\(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | 99.31 | 98.19 | 98.56 | 98.28 | 97.49 | 97.17 | 96.52 | 95.37
Table 5. The Effect of Unseen Benign Transformations on the Invisible Watermarked Images in Terms of BRA When Different Instagram Filters Are Applied on the Watermarked Images Obtained Using Different Versions of our Proposed Model as Described in Table 1
The overall performance in terms of BRA is \(97.1\%\) for the U-Net\(+\)C\(+\) \(A_{adv}\) on Gaussian blur, which is better than \(96.3\%\) BRA of the second-best model, StegaStamp, when only benign transformations \(g_{bt}\) are used during the training stage. Similar observations were seen on JPEG compression and unseen benign transformations.
Figure 5 illustrates the application of Instagram filters, individually and in combination, as unseen benign transforms on invisible watermarked images. In this study, Instagram filters such as Brooklyn, Clarendon, and Aden are used. These filters are designed to enhance facial images with unique aesthetic effects, each altering images in distinctive ways to cater to diverse visual preferences and styles. The symbol (A \(+\) B) in the figure denotes the combined application of the Aden and Brooklyn filters to the watermarked image. Similarly, (B \(+\) C) denotes the combined application of the Brooklyn and Clarendon filters, and (A \(+\) B \(+\) C) denotes the combined application of the Aden, Brooklyn, and Clarendon filters. The evaluation shows that the model incorporating U-Net with a critic (\(C\)) and an adversary (\(A_{adv}\)) network performs best in terms of BRA. The overall BRA is \(98.73\%\) for U-Net\(+\)C\(+\)\(A_{adv}\) on unseen benign transformations, which is better than the \(98.29\%\) BRA of the second-best model, FaceSigns (Semi-Fragile), when only benign transformations \(g_{bt}\) are used during the training stage. This superior performance is consistently observed across various settings that involve different image filters. The consistent performance of the model across all unknown benign transformations can be attributed to its specialized training exclusively on benign transformations such as cropping, compression, or subtle filtering. Training specifically on these types of transformations enables the model to become highly proficient in identifying and managing the specific patterns and distortions they introduce.
Fig. 5.
Fig. 5. (a) Application of unseen benign transforms on invisible watermarked images. Instagram filters like Brooklyn, Clarendon, and Aden are examples of benign transformations shown in this diagram. (b) Combined application of unseen benign transforms on invisible watermarked images. In this work, Instagram filters such as Brooklyn, Clarendon, and Aden are used. The symbol (A \(+\) B) in the figure denotes the combined application of the Brooklyn and Aden filters to the watermarked image. Similarly, (B \(+\) C) in the figure denotes the combined application of Brooklyn and Clarendon filters to the watermarked image. Finally, (A \(+\) B \(+\) C) in the figure denotes the combined application of Aden, Brooklyn, and Clarendon filters to the watermarked image.
Overall, training the model using only benign transformations renders it robust to unseen benign transformations. Further, the integration of the adversary module during the training stage also plays a pivotal role in enhancing the robustness and imperceptibility of the watermarking process to unknown benign transformations. This process renders the watermark resilient against a variety of benign transformations, thereby preserving the integrity of the media content.

6.2 Malicious Transforms

For malicious transformations, we applied facial manipulations to the watermarked images using different generative models, namely autoencoders, GANs, and diffusion models, and calculated the BRA. In this case, a low BRA is preferred, indicating fragility against malicious transformations such as Deepfakes.
FaceSwap Based Malicious Transforms: The Faceswap model is a graphics-based method that aligns the facial landmarks to swap the faces between the source and the target using an encoder and decoder style model. This technology is widely used for various applications ranging from entertainment and media to more serious uses such as personalized advertisements and synthetic data generation for AI training. Figure 6 shows the sample watermarked facial images with identity swaps generated from the Faceswap model. The input to the Faceswap model is the source image (\(x_{sw}\)) (not watermarked) and target image (\(x_{tw}\)) (watermarked). The output is the maliciously transformed facial image \(x_{mt}=g_{mt}(x_{sw},x_{tw})\) with the identity swapped between the source and the target. For detailed technical description and implementation details, please refer to face-swap-based malicious transforms.3
Fig. 6.
Fig. 6. Example of sample watermarked facial images with identity swapped using the Faceswap model. The input to the Faceswap model is the source (\(x_{sw}\)) (not watermarked) and the target image (\(x_{tw}\)) (watermarked). The output is the malicious transformed facial image \(x_{mt}=g_{mt}(x_{sw},x_{tw})\) with the identity swapped between the source and the target.
Table 6 shows the effect of malicious transformations based on FaceSwap (an encoder-decoder model) on the invisible watermarked target images obtained using our proposed model, in terms of BRA. All models are trained on the FF\(++\) dataset and tested on the FF\(++\) and CelebA datasets. From the table, the overall performance in terms of BRA is \(42.62\%\) for U-Net\(+\)C\(+\)\(A_{adv}\) on Faceswap-based malicious transforms, which is lower than the \(43.49\%\) BRA of the second-best model, FaceSigns (Semi-Fragile), when both benign \(g_{bt}\) and malicious transformations \(g_{mt}\) are used during training.
Table 6.
Testing Dataset | Method | None (BRA \(\%\)) | Faceswap (BRA \(\%\))
FF\(++\) | SemiFragile DCT | 99.43 | 84.29
FF\(++\) | Hidden | 97.65 | 79.26
FF\(++\) | StegaStamp | 99.62 | 96.39
FF\(++\) | FaceSigns (Semi-Fragile) | 99.49 | 43.84
FF\(++\) | U-Net\(+\)C\(+\)\(A_{adv}\) (Baseline) | 99.29 | 52.71
FF\(++\) | U-Net\(+\)C\(+\)\(A_{adv}\) (\(g_{bt}\)) | 99.45 | 63.39
FF\(++\) | U-Net\(+\)C\(+\)\(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | 99.31 | 41.28
CelebA | SemiFragile DCT | 99.28 | 85.45
CelebA | Hidden | 97.46 | 71.64
CelebA | StegaStamp | 99.51 | 95.56
CelebA | FaceSigns (Semi-Fragile) | 99.27 | 43.14
CelebA | U-Net\(+\)C\(+\)\(A_{adv}\) (Baseline) | 98.59 | 54.59
CelebA | U-Net\(+\)C\(+\)\(A_{adv}\) (\(g_{bt}\)) | 99.16 | 66.72
CelebA | U-Net\(+\)C\(+\)\(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | 98.82 | 43.97
Table 6. The Effect of Faceswap Encoder-Decoder-Based Malicious Transformation on the Invisible Watermarked Target Images, Obtained Using Our Proposed Models in Table 1, in Terms of BRA
All models are trained on the FF\(++\) dataset.
A lower BRA indicates increased fragility, which is particularly valuable in the context of detecting malicious transformations like Deepfakes. These results are consistent for the FaceSwap model in both intra- and cross-dataset settings. The superior performance in terms of low BRA can be attributed to the malicious transform used during the training stage, which shows that this transform is able to mimic the facial manipulation process in which facial features are perturbed. Similarly, Table 7 shows the BRA for Faceswap model-based malicious transformations when trained on CelebA and tested on the FF\(++\) and CelebA datasets. Again, the overall performance in terms of BRA is \(39.62\%\) for U-Net\(+\)C\(+\)\(A_{adv}\) on Faceswap-based malicious transforms, which is lower than the \(41.95\%\) BRA of the second-best model, FaceSigns (Semi-Fragile), when both benign \(g_{bt}\) and malicious transformations \(g_{mt}\) are used during training. These results are consistent for the Faceswap model in both intra- and cross-dataset settings.
Table 7.
Testing Dataset | Method | None (BRA \(\%\)) | Faceswap (BRA \(\%\))
FF\(++\) | SemiFragile DCT | 99.29 | 81.53
FF\(++\) | Hidden | 97.73 | 78.24
FF\(++\) | StegaStamp | 99.55 | 94.72
FF\(++\) | FaceSigns (Semi-Fragile) | 99.18 | 45.58
FF\(++\) | U-Net\(+\)C\(+\)\(A_{adv}\) (Baseline) | 99.36 | 52.79
FF\(++\) | U-Net\(+\)C\(+\)\(A_{adv}\) (\(g_{bt}\)) | 99.25 | 64.37
FF\(++\) | U-Net\(+\)C\(+\)\(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | 98.69 | 42.72
CelebA | SemiFragile DCT | 99.51 | 84.69
CelebA | Hidden | 98.12 | 75.42
CelebA | StegaStamp | 99.67 | 96.83
CelebA | FaceSigns (Semi-Fragile) | 99.38 | 38.32
CelebA | U-Net\(+\)C\(+\)\(A_{adv}\) (Baseline) | 99.18 | 49.88
CelebA | U-Net\(+\)C\(+\)\(A_{adv}\) (\(g_{bt}\)) | 99.48 | 62.11
CelebA | U-Net\(+\)C\(+\)\(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | 99.27 | 36.52
Table 7. The Effect of Faceswap Encoder-Decoder-Based Malicious Transformation on the Invisible Watermarked Target Images, Obtained Using Our Proposed Models, in Terms of BRA
All models are trained on the CelebA dataset.
GAN-Based Malicious Transforms: For this experiment, we have used three popular GAN variants, namely, FSGAN [43] (identity swap), StarGAN [13], and AttGAN [24] (attribute manipulation) for the generation of manipulated images. These GANs are widely used for identity, expression, and attribute-based facial manipulation generation. For detailed technical description and implementation details, please refer to the original paper on FSGAN [43], StarGAN [13], and AttGAN [24]. This information is not included for the sake of space.
Table 8 shows the effect of FSGAN [43], StarGAN [13], and AttGAN [24] based facial manipulations, in terms of BRA, on invisible watermarked facial images. These manipulations are applied to the three versions of our model listed in Table 1. Figure 7 shows sample watermarked target facial images with identity swaps generated by the FSGAN model. The inputs to FSGAN are the source image \(x_{sw}\) (not watermarked) and the target image \(x_{tw}\) (watermarked); the output is the maliciously transformed facial image \(x_{mt}=g_{mt}(x_{w})\) with the identity swapped between source and target. Similarly, Figure 8 shows sample watermarked facial images with attribute manipulations generated by the StarGAN and AttGAN models. The input to StarGAN and AttGAN is the watermarked image \(x_{w}\), and the output is the maliciously transformed facial image \(x_{mt}=g_{mt}(x_{w})\) with manipulated facial attributes such as eyeglasses, facial expression, and hair color.
Table 8.
Testing Dataset | Method | BRA (\(\%\)): None | FSGAN | StarGAN | AttGAN
FF\(++\) | SemiFragile DCT | 99.43 | 68.54 | 59.78 | 64.68
FF\(++\) | Hidden | 97.65 | 74.96 | 67.45 | 72.86
FF\(++\) | StegaStamp | 99.62 | 96.52 | 97.41 | 96.38
FF\(++\) | FaceSigns (Semi-Fragile) | 99.49 | 51.49 | 50.89 | 48.14
FF\(++\) | U-Net\(+\)C\(+\) \(A_{adv}\) (Baseline) | 99.29 | 51.76 | 50.14 | 48.67
FF\(++\) | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)) | 99.45 | 65.35 | 70.63 | 61.29
FF\(++\) | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | 99.31 | 50.29 | 48.27 | 45.71
CelebA | SemiFragile DCT | 99.28 | 66.86 | 62.14 | 65.41
CelebA | Hidden | 97.46 | 76.29 | 66.83 | 74.19
CelebA | StegaStamp | 99.51 | 97.23 | 96.84 | 97.52
CelebA | FaceSigns (Semi-Fragile) | 99.27 | 53.29 | 52.28 | 49.38
CelebA | U-Net\(+\)C\(+\) \(A_{adv}\) (Baseline) | 98.59 | 53.43 | 52.08 | 50.73
CelebA | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)) | 99.16 | 67.29 | 72.73 | 64.58
CelebA | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | 98.82 | 51.83 | 50.69 | 47.28
Table 8. The Effect of FSGAN, StarGAN, and AttGAN-Based Facial Manipulations on the Invisible Watermarked Images in Terms of BRA
These malicious transformations are applied to watermarked images obtained using different variants of the proposed model as described in Table 1. All the models are trained on the FF\(++\) Dataset.
Fig. 7.
Fig. 7. Example of sample watermarked facial images with identity swapping-based manipulation applied using the FSGAN model. The input to the FSGAN is the source image (\(x_{sw}\)) (not watermarked) and the target image (\(x_{tw}\)) (watermarked). The output is the maliciously transformed facial image \(x_{mt}=g_{mt}(x_{w})\) with the identity swapped between the source and the target.
Fig. 8.
Fig. 8. Example of sample watermarked facial images with attribute manipulations generated using the StarGAN and AttGAN models. The input to the StarGAN and AttGAN models is the watermarked image \(x_{w}\) and the output is the manipulated watermarked facial image \(x_{mt}=g_{mt}(x_{w})\) with the facial attributes, such as eyeglasses, facial expression, and hair color, edited.
All these models, including our U-Net-based models and the GANs, were trained on the FF\(++\) dataset and tested on both the FF\(++\) and CelebA datasets. As can be seen from the table, when both benign \(g_{bt}\) and malicious \(g_{mt}\) transformations are used during training, U-Net\(+\)C\(+\) \(A_{adv}\) attains an overall BRA of \(49.01\%\) under GAN-based malicious transforms, which is lower than the \(50.91\%\) BRA of the second-best model, FaceSigns (Semi-Fragile). These results are consistent across the FSGAN, StarGAN, and AttGAN models in both intra- and cross-dataset settings. Similarly, in Table 9 we use the same FSGAN, StarGAN, and AttGAN models for malicious transformations, with all models trained on CelebA and tested on the FF\(++\) and CelebA datasets. Again, the overall BRA of U-Net\(+\)C\(+\) \(A_{adv}\) under GAN-based malicious transforms is \(47.39\%\), which is lower than the \(49.88\%\) BRA of the second-best model, FaceSigns (Semi-Fragile), when both benign \(g_{bt}\) and malicious \(g_{mt}\) transformations are used during training.
Table 9.
Testing Dataset | Method | BRA (\(\%\)): None | FSGAN | StarGAN | AttGAN
FF\(++\) | SemiFragile DCT | 99.29 | 67.84 | 64.61 | 65.74
FF\(++\) | Hidden | 97.73 | 76.41 | 69.28 | 74.17
FF\(++\) | StegaStamp | 99.55 | 97.72 | 98.15 | 97.28
FF\(++\) | FaceSigns (Semi-Fragile) | 99.18 | 50.16 | 52.78 | 50.54
FF\(++\) | U-Net\(+\)C\(+\) \(A_{adv}\) (Baseline) | 98.69 | 52.57 | 54.26 | 52.25
FF\(++\) | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)) | 99.39 | 68.28 | 73.86 | 71.52
FF\(++\) | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | 99.17 | 48.53 | 51.29 | 47.17
CelebA | SemiFragile DCT | 99.51 | 69.86 | 62.64 | 63.82
CelebA | Hidden | 98.12 | 78.14 | 68.18 | 72.69
CelebA | StegaStamp | 99.67 | 97.58 | 97.64 | 95.48
CelebA | FaceSigns (Semi-Fragile) | 99.38 | 47.27 | 50.97 | 47.56
CelebA | U-Net\(+\)C\(+\) \(A_{adv}\) (Baseline) | 99.18 | 49.65 | 52.54 | 50.49
CelebA | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)) | 99.48 | 67.53 | 72.12 | 67.28
CelebA | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | 99.27 | 45.63 | 48.48 | 43.29
Table 9. The Effect of FSGAN, StarGAN, and AttGAN-Based Facial Manipulations on the Invisible Watermarked Images in Terms of BRA
These malicious transformations are applied to watermarked target images obtained using different variants of the proposed model as described in Table 1. All the models are trained on the CelebA Dataset.
The superior performance of the model, indicated by a low BRA, can be traced back to the inclusion of malicious transformations during the training phase. This allows the model to detect and respond to manipulations akin to those encountered in real-world scenarios, such as Deepfakes and other forms of digital forgery. We also trained the proposed model exclusively on malicious transformations; however, its performance was not as good as that of the model trained on both benign and malicious transformations. A model trained solely on malicious transformations tends to develop a narrow focus, optimizing for specific types of alterations. This specialization limits its capacity to generalize across the wider range of real-world scenarios that includes benign transformations. Lacking exposure to benign transformations, the model struggles to differentiate accurately between genuinely malicious modifications and benign ones, resulting in decreased overall performance.
Diffusion Model-Based Malicious Transforms: In this work, we use Stable Diffusion, a latent text-to-image/image-to-image diffusion model that takes a text prompt (and optionally an input image) and produces realistic-looking images. Figure 9 shows examples of maliciously transformed facial images from Stable Diffusion V \(1.5\) and Stable Diffusion inpainting. The input to these models is the watermarked image \(x_{w}\), and the output is a de-noised synthetic facial image \(x_{mt}=g_{mt}(x_{w})\).
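To illustrate how such diffusion-based malicious transforms can be applied to a watermarked image, the following sketch uses the Hugging Face diffusers library; the model identifiers, prompts, file names, and strength value are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionInpaintPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Image-to-image: re-synthesizes the whole watermarked face, guided by a text prompt.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5").to(device)
x_w = Image.open("watermarked_face.png").convert("RGB").resize((512, 512))   # x_w (assumed file)
x_mt = img2img(prompt="a photo of a person's face",
               image=x_w, strength=0.6, guidance_scale=7.5).images[0]        # x_mt = g_mt(x_w)

# Inpainting: regenerates only the masked facial region of the watermarked image.
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting").to(device)
mask = Image.open("face_region_mask.png").convert("L").resize((512, 512))    # assumed face mask
x_mt_inpaint = inpaint(prompt="a smiling person",
                       image=x_w, mask_image=mask).images[0]
```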
Fig. 9.
Fig. 9. Example of Stable Diffusion V \(1.5\) and inpainting models based malicious transformation on facial images. The input to the Stable Diffusion V \(1.5\) and inpainting models is the watermarked image \(x_{w}\) and the output is the maliciously transformed facial image \(x_{mt}=g_{mt}(x_{w})\).
In our experiments, we used Stable Diffusion V \(1.5\) [26] and Stable Diffusion inpainting [26], which rely on a conditioning mechanism and generative modeling of latent representations following a reverse Markov process. To make training more efficient and faster, we used low-rank adaptation (LoRA),4 a simple training method that drastically lowers the number of trainable parameters of specific computationally expensive layers. Instead of modifying the entire weight matrix \(W\) of a layer, LoRA introduces two low-rank matrices \(A\) and \(B\) that are much smaller than \(W\). As a result, LoRA training is considerably faster and more memory-efficient, and it produces smaller model weights that are simpler to share and store. For detailed technical descriptions and implementation details, please refer to the original work [26]; we omit them here for the sake of space.
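As an illustration of the low-rank update that LoRA applies, the following sketch wraps a frozen linear layer with trainable matrices \(A\) and \(B\); the rank, scaling factor, and layer size are illustrative assumptions and do not correspond to the exact layers we adapted.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update W + (alpha/r) * B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # the original weights stay frozen
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The low-rank path adds far fewer trainable parameters than fine-tuning W itself.
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(768, 768), rank=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable LoRA parameters: {trainable}")   # 6,144 vs. 590,592 for the full layer
```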
Tables 10, 11, and 12 show the effect of Stable Diffusion V \(1.5\) [26] and Stable Diffusion inpainting [26] based malicious transformations on the invisible watermarked images in terms of BRA. The impact of these synthetic manipulations is evaluated on the different versions of our proposed model tabulated in Table 1. All models are trained on the IMDB-WIKI dataset and tested on the FF\(++\), IMDB-WIKI, and CelebA datasets. From Table 10, when both benign \(g_{bt}\) and malicious \(g_{mt}\) transformations are used during training, U-Net\(+\)C\(+\) \(A_{adv}\) attains an overall BRA of \(42\%\) under diffusion-based malicious transforms, which is lower than the \(52.74\%\) BRA of the second-best model, FaceSigns (Semi-Fragile). These results are consistent across the Stable Diffusion V \(1.5\) and Stable Diffusion inpainting models in both intra- and cross-dataset settings.
Table 10.
Testing Dataset | Method | Model | BRA (\(\%\)): None | Stable Diffusion
FF\(++\) | SemiFragile DCT | SD 1.5 | 98.78 | 52.59
FF\(++\) | Hidden | SD 1.5 | 97.56 | 54.74
FF\(++\) | StegaStamp | SD 1.5 | 99.16 | 61.87
FF\(++\) | FaceSigns (Semi-Fragile) | SD 1.5 | 98.58 | 49.81
FF\(++\) | U-Net\(+\)C\(+\) \(A_{adv}\) (Baseline) | SD 1.5 | 99.07 | 55.76
FF\(++\) | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)) | SD 1.5 | 98.22 | 60.76
FF\(++\) | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | SD 1.5 | 98.31 | 37.29
FF\(++\) | SemiFragile DCT | SD Inpainting | 98.78 | 58.87
FF\(++\) | Hidden | SD Inpainting | 97.56 | 54.69
FF\(++\) | StegaStamp | SD Inpainting | 99.16 | 69.26
FF\(++\) | FaceSigns (Semi-Fragile) | SD Inpainting | 98.58 | 55.67
FF\(++\) | U-Net\(+\)C\(+\) \(A_{adv}\) (Baseline) | SD Inpainting | 99.07 | 59.52
FF\(++\) | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)) | SD Inpainting | 98.22 | 67.85
FF\(++\) | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | SD Inpainting | 98.31 | 46.72
Table 10. The Effect of Stable Diffusion V \(1.5\) (SD 1.5) and Stable Diffusion-Based Inpainting Models (SD Inpainting) for Malicious Transformations on the Invisible Watermarked Images, in Terms of BRA, Using Different Versions of Our Proposed Models as Given in Table 1
All these models, including our U-Net-based models and Diffusion models, are trained on the IMDB-WIKI dataset and tested on the FF\(++\) dataset.
Table 11.
Testing Dataset | Method | Model | BRA (\(\%\)): None | Stable Diffusion
CelebA | SemiFragile DCT | SD 1.5 | 99.24 | 54.29
CelebA | Hidden | SD 1.5 | 97.09 | 57.24
CelebA | StegaStamp | SD 1.5 | 99.52 | 70.17
CelebA | FaceSigns (Semi-Fragile) | SD 1.5 | 99.11 | 50.52
CelebA | U-Net\(+\)C\(+\) \(A_{adv}\) (Baseline) | SD 1.5 | 99.32 | 54.59
CelebA | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)) | SD 1.5 | 98.92 | 63.82
CelebA | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | SD 1.5 | 98.56 | 38.74
CelebA | SemiFragile DCT | SD Inpainting | 99.24 | 60.73
CelebA | Hidden | SD Inpainting | 97.09 | 58.26
CelebA | StegaStamp | SD Inpainting | 99.52 | 67.81
CelebA | FaceSigns (Semi-Fragile) | SD Inpainting | 99.11 | 53.15
CelebA | U-Net\(+\)C\(+\) \(A_{adv}\) (Baseline) | SD Inpainting | 99.32 | 57.18
CelebA | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)) | SD Inpainting | 98.92 | 65.39
CelebA | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | SD Inpainting | 98.56 | 47.59
Table 11. The Effect of Stable Diffusion V 1.5 and Stable Diffusion Inpainting-Based Malicious Transformations on the Invisible Watermarked Images, Obtained Using Different Versions of Our Proposed Models as Given in Table 1, in Terms of BRA
All the models are trained on the IMDB-WIKI Dataset and tested on the CelebA Dataset.
Table 12.
Testing Dataset | Method | Model | BRA (\(\%\)): None | Stable Diffusion
IMDB-WIKI | SemiFragile DCT | SD 1.5 | 99.16 | 54.57
IMDB-WIKI | Hidden | SD 1.5 | 97.23 | 52.84
IMDB-WIKI | StegaStamp | SD 1.5 | 99.41 | 68.25
IMDB-WIKI | FaceSigns (Semi-Fragile) | SD 1.5 | 98.91 | 49.24
IMDB-WIKI | U-Net\(+\)C\(+\) \(A_{adv}\) (Baseline) | SD 1.5 | 99.24 | 52.19
IMDB-WIKI | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)) | SD 1.5 | 99.08 | 61.86
IMDB-WIKI | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | SD 1.5 | 98.85 | 36.61
IMDB-WIKI | SemiFragile DCT | SD Inpainting | 99.16 | 62.64
IMDB-WIKI | Hidden | SD Inpainting | 97.23 | 56.45
IMDB-WIKI | StegaStamp | SD Inpainting | 99.41 | 74.24
IMDB-WIKI | FaceSigns (Semi-Fragile) | SD Inpainting | 98.91 | 51.71
IMDB-WIKI | U-Net\(+\)C\(+\) \(A_{adv}\) (Baseline) | SD Inpainting | 99.24 | 57.79
IMDB-WIKI | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)) | SD Inpainting | 99.08 | 64.65
IMDB-WIKI | U-Net\(+\)C\(+\) \(A_{adv}\) (\(g_{bt}\)+\(g_{mt}\)) | SD Inpainting | 98.85 | 44.94
Table 12. The Effect of Stable Diffusion V \(1.5\) and Stable Diffusion Inpainting-Based Malicious Transformations on the Invisible Watermarked Images, Obtained Using Different Versions of Our Proposed Models as Given in Table 1, in Terms of BRA
All the models are trained and tested on the IMDB-WIKI dataset.
Overall, when exposed to malicious transformations, the proposed model trained on both benign and malicious transforms exhibits a lower BRA, which is desirable: the lower BRA indicates increased fragility, a characteristic vital for detecting altered media generated by malicious transformations such as Deepfakes. This behavior is attributed to training with both benign and malicious transformations, which gives the model a comprehensive understanding of both kinds of transforms. We also experimented with training the model solely on malicious transformations, but its performance was notably inferior to that of the model trained on both.

7 Adversarial Attacks and Threat Model

7.1 Adversarial Attacks for Watermark Removal

To further understand the robustness of our proposed watermarking technique and analyze potential threats against it, we evaluated our model against adversarial attacks aimed at watermark removal. We did not reevaluate the other existing watermarking methods under these adversarial conditions, since our model had already demonstrated superior BRA over them under normal conditions. Further, the study in [68] documents the vulnerability of existing invisible watermarking techniques to watermark removal attacks.
Our evaluations have focused on both white-box and black-box scenarios, which are detailed as follows.
White-box attacks: In white-box attacks, the adversary has complete knowledge of the model, including its architecture and parameters [10, 20]. This access allows the attacker to precisely compute the most effective perturbations of the input data to confuse the model. Because the attacker has full information about the model, white-box attacks are generally considered more powerful than black-box attacks. In this series of experiments, we use BRA and DA as evaluation metrics. The white-box attacks used in this study are gradient-based methods, namely FGSM [20], C & W [10], and BPDA with EOT [3], which perturb the input to maximize the model's prediction error. These attacks are applied to the watermarked facial images to evaluate the robustness of our model in terms of BRA and DA. For detailed technical descriptions and implementation details of these white-box attacks, please refer to the original works [3, 10, 20].
Figure 10 illustrates an FGSM adversarial attack: given an input image \(x\), FGSM uses the gradients of the loss function of the individual models from Table 1 with respect to the input image to generate a new image \(x_{adv}\) that maximizes the loss. Similarly, Figure 11 shows samples of watermarked images generated by the FGSM-based adversarial attack, where \(\epsilon\) is the multiplier that keeps the perturbations small. In our experiments, we tested various values of \(\epsilon\) to determine the effectiveness of the attack and ultimately selected \(\epsilon=0.010\) for all experiments involving the FGSM-based white-box attack. This value was chosen because it introduces distortions that are not visible to the human eye, ensuring that the modifications remain imperceptible while still assessing the system's robustness against adversarial attacks.
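A minimal sketch of the one-step FGSM perturbation is given below, assuming a differentiable decoder (a stand-in for any of the model variants in Table 1) and a binary cross-entropy message loss as the objective; the function name and loss choice are our own illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(decoder, x_w: torch.Tensor, secret: torch.Tensor,
                epsilon: float = 0.010) -> torch.Tensor:
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x loss(decoder(x), secret)).

    decoder: a differentiable stand-in for the watermark decoder.
    x_w:     (batch, 3, H, W) watermarked images in [0, 1].
    secret:  (batch, 64) ground-truth message bits (0/1).
    """
    x = x_w.clone().detach().requires_grad_(True)
    logits = decoder(x)                                   # predicted message bits (logits)
    loss = F.binary_cross_entropy_with_logits(logits, secret.float())
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()                   # step in the loss-increasing direction
    return x_adv.clamp(0.0, 1.0).detach()
```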
Fig. 10.
Fig. 10. Example of an adversarial white-box attack using FGSM. Given an input image \(x\), FGSM method utilizes the gradients of the loss function with respect to the input image to generate a new image \(x_{adv}\) that maximizes the prediction error.
Fig. 11.
Fig. 11. Example of sampled watermarked images generated using the FGSM-based white-box adversarial attack using different \(\epsilon\). \(\epsilon\) is the multiplier to control the magnitude of perturbation. All the experiments are based on \(\epsilon\) value of \(0.010\) to ensure perturbation imperceptibility [best viewed in zoom].
In our experiments, we combined BPDA and EOT to make the attack more powerful: BPDA can navigate through non-differentiable operations, while EOT ensures the adversarial example remains effective across a range of expected transformations [3]. This combination is especially useful for attacking systems in which input preprocessing and dynamic transformations are common, such as vision-based machine learning models deployed in real-world scenarios.
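The following sketch shows how BPDA and EOT can be combined in a single attack step against the watermark decoder; the transform list, step size, and identity-backward approximation are illustrative choices rather than the exact attack configuration of [3].

```python
import torch
import torch.nn.functional as F

def bpda_eot_step(decoder, x: torch.Tensor, secret: torch.Tensor, transforms,
                  step_size: float = 0.005, n_samples: int = 8) -> torch.Tensor:
    """One PGD-style step combining EOT and BPDA against a watermark decoder.

    EOT:  the gradient is averaged over randomly sampled benign transforms.
    BPDA: a (possibly non-differentiable) transform is treated as the identity
          in the backward pass, so the gradient at t(x) stands in for the
          gradient at x. Transforms are assumed to preserve the image shape.
    """
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        t = transforms[torch.randint(len(transforms), (1,)).item()]
        x_t = t(x.detach()).clone().requires_grad_(True)   # forward through the transform
        loss = F.binary_cross_entropy_with_logits(decoder(x_t), secret.float())
        loss.backward()
        grad += x_t.grad                                   # identity backward (BPDA)
    x_adv = x + step_size * (grad / n_samples).sign()
    return x_adv.clamp(0.0, 1.0).detach()
```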
Table 13 shows the effect of the FGSM, C & W, and BPDA\(+\)EOT white-box adversarial attacks on the invisible watermarked images generated using our proposed models, in terms of BRA (all models are trained and tested on the CelebA dataset). As can be seen from the table, the U-Net\(+\)C\(+\) \(A_{adv}\) (Baseline) model outperforms the U-Net and U-Net\(+\)C baselines in overall BRA. The models, particularly U-Net\(+\)C\(+\) \(A_{adv}\), maintain a high BRA even under the challenging conditions posed by these white-box adversarial attacks. This enhanced performance of U-Net\(+\)C\(+\) \(A_{adv}\) can be attributed to its integrated design, which combines the basic capabilities of U-Net with the refinement provided by the Critic (C) and Adversary (\(A_{adv}\)) modules. These additional components improve the model's resilience by learning to counteract the specific manipulations introduced by adversarial techniques, thereby ensuring more robust watermark recovery.
Table 13.
Testing Dataset | Adversarial Attack | Method | BRA (\(\%\))
CelebA | None | U-Net | 99.57
CelebA | None | U-Net\(+\)C | 99.43
CelebA | None | U-Net\(+\)C\(+\) \(A_{adv}\) | 99.18
CelebA | FGSM | U-Net | 70.45
CelebA | FGSM | U-Net\(+\)C | 71.68
CelebA | FGSM | U-Net\(+\)C\(+\) \(A_{adv}\) | 74.69
CelebA | Carlini & Wagner | U-Net | 65.29
CelebA | Carlini & Wagner | U-Net\(+\)C | 66.53
CelebA | Carlini & Wagner | U-Net\(+\)C\(+\) \(A_{adv}\) | 68.26
CelebA | BPDA & EOT | U-Net | 56.62
CelebA | BPDA & EOT | U-Net\(+\)C | 58.79
CelebA | BPDA & EOT | U-Net\(+\)C\(+\) \(A_{adv}\) | 62.64
Table 13. The Effect of FGSM, C & W, BPDA, and EOT-Based White-Box Adversarial Attacks on the Invisible Watermarked Images in Terms of BRA on Our Proposed Models
All the models are trained and tested on CelebA dataset.
Black-box attacks: In black-box attacks, the attacker has limited or no access to the target model's details, such as its architecture or parameters [68], and can only interact with the model by providing inputs and observing the corresponding outputs.
In this study, we used the regeneration attacks recently proposed in [68], which aim to remove invisible watermarks. These attacks first add random noise to the image to disrupt the watermark and then use image reconstruction techniques to restore image quality. The authors instantiate the regeneration attack in three variants, namely identity embedding with denoising reconstruction, VAE embedding and reconstruction, and diffusion embedding and reconstruction. In our work, we employ the VAE-based embedding and reconstruction black-box attack proposed in [68] because it is computationally less expensive and more flexible and efficient. In VAE embedding and reconstruction, the VAE is trained with two losses: a prior-matching loss that constrains the latent to follow a pre-specified prior distribution and a reconstruction loss that measures the distance between the reconstructed and original samples. For detailed technical descriptions and implementation details, please refer to the original work [68].
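The following sketch illustrates the regeneration idea using the pretrained Stable Diffusion VAE available in diffusers as the embedder; the model identifier, noise level, and value range are illustrative assumptions rather than the exact attack configuration of [68].

```python
import torch
from diffusers import AutoencoderKL

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(device).eval()

@torch.no_grad()
def vae_regeneration_attack(x_w: torch.Tensor, noise_std: float = 0.1) -> torch.Tensor:
    """Regeneration attack: perturb the watermarked image with random noise, then
    reconstruct it through a VAE so the watermark signal is (partially) destroyed.

    x_w: (batch, 3, H, W) watermarked images scaled to [-1, 1].
    """
    noisy = (x_w + noise_std * torch.randn_like(x_w)).clamp(-1, 1)
    latents = vae.encode(noisy).latent_dist.sample()    # embed into the VAE latent space
    return vae.decode(latents).sample.clamp(-1, 1)      # reconstruct a clean-looking image
```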
Figure 12 shows samples of watermarked images after the application of the VAE embedding and reconstruction-based black-box watermark removal attack. The VAE-based attack succeeds in removing the watermark only to an extent, and at the same time it over-smooths the image, resulting in blurriness.
Fig. 12.
Fig. 12. Example of sampled watermarked images generated after the VAE embedding and reconstruction-based black-box adversarial attack for watermark removal has been applied. The VAE-based attack is successful in removing the watermark to an extent; however, it over-smooths the image, resulting in blurriness [best viewed in zoom].
Table 14 shows the effect of the VAE embedding and reconstruction-based black-box adversarial attack on the invisible watermarked images, in terms of BRA, for our proposed models. All models are trained and tested on the CelebA dataset. As can be seen from the table, the U-Net\(+\)C\(+\) \(A_{adv}\) (Baseline) model outperforms the U-Net and U-Net\(+\)C baselines in overall BRA. The results demonstrate that the U-Net\(+\)C\(+\) \(A_{adv}\) model maintains a high BRA even when faced with this black-box adversarial attack. Its superior performance can be traced back to its integrated design, which merges the foundational attributes of U-Net with the enhancement capabilities provided by the Critic (C) and Adversary (\(A_{adv}\)) modules. These additional components enhance the model's robustness by enabling it to respond to and neutralize the manipulations typical of adversarial attacks, ensuring a more resilient watermark recovery process and preserving the integrity of the watermarked images under adversarial conditions.
Table 14.
Testing Dataset | Adversarial Attack (Black Box) | Method | BRA (\(\%\))
CelebA | None | U-Net | 99.57
CelebA | None | U-Net\(+\)C | 99.43
CelebA | None | U-Net\(+\)C\(+\) \(A_{adv}\) | 99.18
CelebA | VAE Embedding | U-Net | 71.25
CelebA | VAE Embedding | U-Net\(+\)C | 70.89
CelebA | VAE Embedding | U-Net\(+\)C\(+\) \(A_{adv}\) | 72.84
Table 14. The Effect of VAE Embedding and Reconstruction-Based Black-Box Adversarial Attack on the Invisible Watermarked Images in Terms of BRA of Our Proposed Models
All the models are trained and tested on the CelebA dataset.
Additionally, while the BRA is lower than it was without adversarial attacks, it remains above the threshold necessary for detecting authentic media. This indicates that despite the impact of the attacks, the system’s ability to verify authenticity through watermark recovery is still effective.
Overall, the resilience of our proposed model to adversarial attacks again stems from adversarial training, in which the adversary network attempts to remove the watermark while the encoder strives to preserve it. This adversarial interaction trains the model to embed watermarks that are more difficult to remove or manipulate. This dynamic architecture not only guarantees the presence of the watermark but also significantly boosts its robustness, enabling it to withstand a variety of adversarial attacks. Such robustness is crucial for maintaining the integrity and security of the embedded data across different scenarios.

7.2 Threat Model

Our watermark embedding model is very likely to encounter adversarial threats from attackers trying to evade detection of altered media. Below, we list a few potential threat scenarios that our model might face and discuss possible defenses.
Attack 1. Querying the decoder network to launch adversarial attacks: The attacker can query the decoder network with an image to obtain the decoded message and then adversarially manipulate the query image until the decoded message matches a target message.
Defense: The attacker lacks knowledge of the specific target messages that validate media authenticity, as these messages can be kept confidential and regularly updated. Even if the attacker obtains the secret message by querying the decoder with a watermarked image, the secrecy of the encryption key prevents the attacker from identifying the target encrypted message for the decoder. Additionally, the decoder network can be securely hosted and restricted to producing only a binary label indicating whether the image is authentic or manipulated, obtained by comparing the decoded secret with a list of trusted secrets (a minimal sketch of such an endpoint is given below). Consequently, the decoder's output becomes impractical for executing adversarial attacks to match a target message from the vast pool of \(2^{64}\) possible messages.
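A minimal sketch of such a securely hosted verification endpoint, which exposes only a binary authentic/manipulated decision; the threshold, message packing, and function names are illustrative assumptions.

```python
def verify_media(decoded_bits: list[int], trusted_secrets: set[bytes],
                 bra_threshold: float = 0.90) -> bool:
    """Return only a binary authentic/manipulated label; never expose the raw bits."""
    # Pack the 64 decoded bits into 8 bytes (LSB-first within each byte).
    decoded = bytes(
        sum(bit << i for i, bit in enumerate(decoded_bits[b:b + 8]))
        for b in range(0, 64, 8)
    )
    for secret in trusted_secrets:
        matches = sum(
            ((decoded[i] >> j) & 1) == ((secret[i] >> j) & 1)
            for i in range(8) for j in range(8)
        )
        if matches / 64.0 >= bra_threshold:
            return True    # watermark recovered above threshold: authentic
    return False           # below threshold for every trusted secret: manipulated or unwatermarked
```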
Attack 2. Training a proxy encoder: The attacker can collect a dataset of original and watermarked image pairs and use it to train an encoder-decoder neural network that performs image-to-image translation. Such a network could map any new image to a watermarked image.
Defense: To keep an attacker from obtaining pairs of original and watermarked images, one protection strategy is to store only watermarked images on devices. Furthermore, the attack outlined above can only succeed if all encoded images carry the same secret message, which is what would allow the adversary to learn a generator that watermarks new images with that message. To counter this, some message components can be kept dynamic, such as device-specific codes and a distinct timestamp, ensuring that every embedded bit-string is unique, as sketched below. Another defense is to update the encryption key or the trusted message on a regular basis.
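A minimal sketch of how such a dynamic \(64\)-bit message could be derived from a device code and a timestamp with a keyed hash; the field layout, key handling, and function names are illustrative assumptions.

```python
import hmac
import hashlib
import time

def make_secret_message(device_id: str, key: bytes) -> bytes:
    """Derive a 64-bit watermark message from a device code and a timestamp.

    Because the timestamp changes per image and the HMAC key stays server-side,
    every embedded bit-string is distinct and cannot be replayed by an attacker
    who has only observed previously watermarked images.
    """
    timestamp_ms = int(time.time() * 1000)
    payload = f"{device_id}:{timestamp_ms}".encode()
    return hmac.new(key, payload, hashlib.sha256).digest()[:8]   # 8 bytes = 64 bits

secret = make_secret_message("phone-1234", key=b"server-side-secret-key")
print(secret.hex())
```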
Attack 3. Transferring the watermark perturbation between images: To have altered media verified as authentic, the adversary can try to extract the added watermark perturbation from a watermarked image and apply it to a Deepfake image.
Defense: We speculate that, because our model produces a perturbation specific to a particular message and image, the decoder should not be able to recover the embedded message when that perturbation is applied to other images. To test this, we extracted the added perturbations from 50 watermarked images and applied them to 50 alternate images. Such an attack achieves a BRA of only \(18.5\%\), which is lower than random prediction.
Attack 4. Model Inversion Attacks: Attackers use output data from watermarking or Deepfake detection systems to reconstruct the original input data or sensitive attributes about the data, compromising privacy.
Defense: Incorporating differential privacy techniques during the training of watermarking and detection models helps safeguard the confidentiality of the input data by preventing the models from disclosing sensitive information. Additionally, introducing noise to the generated outputs of these systems further enhances privacy protection by ensuring that the output cannot be used to accurately reconstruct the input data, or reveal precise details about it.
Attack 5. Side-Channel Attacks: An attacker exploits side-channel information such as computation time, power consumption, or electromagnetic emissions to gain insights into the watermarking or detection algorithms, potentially revealing secret keys or operations.
Defense: To mitigate timing attacks, it is essential to design algorithms that execute in constant time, ensuring that their operation duration does not vary based on the input. This approach prevents attackers from deducing sensitive information based on how long the algorithm takes to process different inputs. Additionally, to safeguard against side-channel attacks, implementing physical security measures such as shielding techniques and restricting physical access to systems is crucial.
Attack 6. Reverse Engineering Attacks: Attackers deconstruct the watermarking or Deepfake detection system to fully understand its mechanism. With this knowledge, they could develop more effective methods to remove or bypass watermarks or to create more convincing Deepfakes that evade detection.
Defense: Applying code obfuscation techniques makes reverse engineering more difficult and time-consuming. In addition, critical parts of the watermarking or detection process can run inside secure hardware environments such as Trusted Execution Environments, shielding them from reverse engineering attempts.

8 Ablation Study

In this section, an ablation study is conducted to assess the individual contributions of the modules in the proposed model, which consists of different configurations of the U-Net architecture augmented with the Critic and Adversary modules. This methodical approach allows a clearer understanding of how each component affects the overall performance of the model.
Note that in this series of experiments, we utilized the baseline model along with the Critic and Adversary modules, none of which were trained on benign or malicious transformations. Employing only the baseline configurations, without exposure to these transformations, demonstrates the fundamental capabilities and limitations of each module prior to any influence from benign or malicious changes. This is crucial for analyzing the inherent effectiveness of each component of our proposed model and provides a foundational understanding of each component's impact before the additional complexities introduced by specific transformations are considered.
(1) U-Net: Initially, the U-Net model along with the discriminator is trained without benign or malicious transformations and without the Critic or Adversary modules. This setup serves as the control group, providing a benchmark to measure the impact of adding the other modules.
(2) U-Net\(+\)C: To this baseline, a Critic module is added, creating a second variant of the model. The Critic module assesses the quality of the output and guides the network toward generating more realistic images.
(3) U-Net\(+\)C\(+\) \(A_{adv}\): The most complex variant includes both the Critic and the Adversary modules alongside the baseline U-Net. The Adversary module simulates potential attacks or challenges the model might face, aiming to ensure that the watermarks are robust against various types of manipulations, particularly those encountered in adversarial environments.
The results, shown in Table 15, provide a comparative analysis of each configuration when tested on the CelebA dataset. The model equipped with both the Critic and Adversary modules (U-Net\(+\)C\(+\) \(A_{adv}\)) shows the most favorable BRA behavior, notably excelling when evaluated under the malicious transformation (Faceswap), where a lower BRA indicates the desired fragility.
Table 15.
Testing Dataset | Method | BRA (\(\%\)): None | Faceswap
CelebA | U-Net | 99.57 | 54.68
CelebA | U-Net\(+\)C | 99.43 | 51.82
CelebA | U-Net\(+\)C\(+\) \(A_{adv}\) | 99.18 | 49.88
Table 15. Ablation Study on the Impact of Each Module (Network) Used in Our Proposed Model
The introduction of the Critic and Adversary modules influences the BRA in various scenarios. The Critic module, aimed at enhancing visual fidelity, can slightly compromise the BRA because it may prioritize image quality over exact watermark retrieval. This effect is observed both under normal conditions and under the malicious transformation (Faceswap), where the Critic helps maintain realistic reconstructions but may lower BRA by prioritizing visual authenticity. Similarly, the Adversary module, which simulates attacks to test robustness, can lower BRA by making the watermark more secure but harder to decode under standard conditions. This module is particularly useful for strengthening the system's resilience against malicious changes, such as those in Faceswap, although this robustness can also complicate watermark extraction, leading to lower BRA in typical detection scenarios. Adding benign and malicious transformations during the training stage further enhances the robustness and fragility of our model to benign and malicious transformations, respectively, as discussed in the previous set of experiments.
Based on the observed performance, the U-Net model augmented with both the Critic and Adversary modules is selected as the baseline for all subsequent experiments. This decision reflects the model's demonstrated ability to handle both benign and malicious transformations effectively, ensuring higher fidelity in watermark recovery under adversarial conditions. This methodical approach not only emphasizes the importance of each module but also demonstrates how integrating them yields substantial improvements in the robustness and accuracy of our proposed watermarking model, which is especially pertinent where maintaining the integrity and authenticity of digital content is paramount.

9 Conclusion and Future Work

With the volume of Deepfakes growing at a staggering rate, advanced proactive defense mechanisms are required for media authentication and for controlling the spread of misinformation in advance. In this paper, we introduced a novel deep learning-based semi-fragile invisible image watermarking technique as a proactive defense that enables media authentication by verifying an invisible secret message embedded in the image pixels. Our approach systematically integrates a U-Net-based encoder-decoder architecture with discriminator, critic, and adversarial networks for efficient watermark embedding and robustness against watermark removal. Thorough experimental investigations on popular facial Deepfake datasets demonstrate that our watermarking technique generates highly imperceptible watermarks that are recoverable with high BRA under benign image processing operations, while the watermark is not recoverable when facial-manipulation-based Deepfakes, generated using different generative algorithms, are applied. Cross-comparison with existing invisible image watermarking techniques confirms the efficacy of our approach in terms of imperceptibility and BRA. In addition, the watermarked images obtained using our model are resilient to several white-box and black-box watermark removal attacks, which is attributed to the adversarial network used during training that mimics an adversary's efforts to remove the watermark embedded by the encoder. Our work thus advances the SOTA in watermarking as a proactive defense for media authentication and for combating Deepfakes. The proposed technique can be vital to media authenticators at social media platforms, news agencies, and legal offices, helping create more trustworthy and responsible platforms and establishing consumer trust in digital media. Our work has two primary limitations. First, the complexity of our model necessitates advanced hardware and GPU support, which we plan to address in future iterations by optimizing the model for improved generalizability. Second, we were unable to simulate all potential attacks outlined in the threat model discussed in Section 7.2. As part of future work, we aim to address these limitations and to extend our semi-fragile technique to watermarking multi-modal audio-visual data streams in videos.

References

[1]
Shruti Agarwal, Hany Farid, Yuming Gu, Mingming He, Koki Nagano, and Hao Li. 2019. Protecting world leaders against deep fakes. In Proceedings of the CVPR Workshops.
[2]
Akiomik. 2024. GitHub - akiomik/pilgram: A python library for instagram filters. Retrieved from https://github.com/akiomik/pilgram
[3]
Anish Athalye, Nicholas Carlini, and David Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proceedings of the International Conference on Machine Learning. PMLR, 274–283.
[4]
Shumeet Baluja. 2017. Hiding images in plain sight: Deep steganography. In Proceedings of the 31st International Conference on Neural Information Processing Systems. I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30. Curran Associates, Inc. Retrieved from https://proceedings.neurips.cc/paper_files/paper/2017/file/838e8afb1ca34354ac209f53d90c3a43-Paper.pdf
[5]
Dor Bank, Noam Koenigstein, and Raja Giryes. 2023. Autoencoders. In Machine Learning for Data Science Handbook: Data Mining and Knowledge Discovery Handbook. Lior Rokach, Oded Maimon, Erez Shmueli (Eds.), Springer, Cham, 353–374.
[6]
Mahbuba Begum and Mohammad Shorif Uddin. 2020. Digital image watermarking techniques: A review. Information 2 (2020). 2078–2489
[7]
Oussama Benrhouma, Houcemeddine Hermassi, and Safya Belghith. 2014. Tamper detection and self-recovery scheme by DWT watermarking. Nonlinear Dynamics 79 (2014), 1817–1833. Retrieved from https://api.semanticscholar.org/CorpusID:120498588
[8]
Siddharth Bhalerao, Irshad Ahmad Ansari, and Anil Kumar. 2020. A secure image watermarking for tamper detection and localization. Journal of Ambient Intelligence and Humanized Computing 12 (2020), 1057–1068. Retrieved from https://api.semanticscholar.org/CorpusID:219734648
[9]
Ning Bi, Qiyu Sun, Daren Huang, Zhihua Yang, and Jiwu Huang. 2007. Robust image watermarking based on multiband wavelets and empirical mode decomposition. IEEE Transactions on Image Processing 16, 8 (2007), 1956–1966. DOI:
[10]
Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In Proceedings of the IEEE Symposium on Security and Privacy (SP). IEEE, 39–57.
[11]
Caroline Chan, Shiry Ginosar, Tinghui Zhou, and Alexei A. Efros. 2018. Everybody dance now. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 5932–5941.
[12]
B. Chen, T. Li, and W. Ding. 2022. Detecting deepfake videos based on spatiotemporal attention and convolutional LSTM. Information Sciences 601 (2022), 58–70.
[13]
Yunjey Choi, Min-Je Choi, Mun Su Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. 2017. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE CVPR, 8789–8797.
[14]
François Chollet. 2017. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE Computer Society, 1800–1807. DOI:
[15]
Ingemar J. Cox, Joe Kilian, Frank Thomson Leighton, and Talal Shamoon. 1997. Secure spread spectrum watermarking for multimedia. IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society 612 (1997), 1673–1687. DOI: https://api.semanticscholar.org/CorpusID:5291243
[16]
Brian Dolhansky, Joanna Bitton, Ben Pflaum, Jikuo Lu, Russ Howes, Menglin Wang, and Cristian Canton Ferrer. 2020. The DeepFake Detection Challenge (DFDC) dataset. arXiv:2006.07397.
[17]
Xiaoyi Dong, Jianmin Bao, Dongdong Chen, Weiming Zhang, Nenghai Yu, Dong Chen, Fang Wen, and Baining Guo. 2020. Identity-driven deepfake detection. arXiv:2012.03930.
[18]
Jessica Fridrich. 2009. Steganography in Digital Media: Principles, Algorithms, and Applications. Cambridge University Press.
[19]
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems.
[20]
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv:1412.6572.
[21]
Alexandros Haliassos, Konstantinos Vougioukas, Stavros Petridis, and Maja Pantic. 2021. Lips don’t lie: A generalisable and robust approach to face forgery detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 5037–5047. DOI:
[22]
Jamie Hayes and George Danezis. 2017. Generating steganographic images via adversarial training. In Proceedings of the 31st International Conference on Neural Information Processing Systems.
[23]
Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–778.
[24]
Zhenliang He, Wangmeng Zuo, Meina Kan, S. Shan, and Xilin Chen. 2017. AttGAN: Facial attribute editing by only changing what you want. IEEE Transactions on Image Processing 28 (2017), 5464–5478.
[25]
Chi Kin Ho and Chang-Tsun Li. 2004. Semi-fragile watermarking scheme for authentication of JPEG images. In Proceedings of the International Conference on Information Technology: Coding and Computing (ITCC ’04), Vol. 1, DOI:.
[26]
Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. In Proceedings of the 34th International Conference on Neural Information Processing Systems, 6840–6851.
[27]
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. 2017. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1125–1134.
[28]
Davis E. King. 2009. Dlib-ml: A machine learning toolkit. Journal of Machine Learning Research 10 (Dec. 2009), 1755–1758.
[29]
Chunlei Li, Aihua Zhang, Zhoufeng Liu, Liang Liao, and Di Huang. 2015. Semi-fragile self-recoverable watermarking algorithm based on wavelet group quantization and double authentication. Multimedia Tools and Applications 74, 23 (Dec. 2015), 10581–10604. DOI:
[30]
Lingzhi Li, Jianmin Bao, Ting Zhang, Hao Yang, Dong Chen, Fang Wen, and Baining Guo. 2020. Face X-ray for more general face forgery detection. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 5000–5009. DOI:
[31]
Yuezun Li, Ming-Ching Chang, and Siwei Lyu. 2018. In Ictu Oculi: Exposing AI created fake videos by detecting eye blinking. In Proceedings of the IEEE International Workshop on Information Forensics and Security (WIFS). 1–7. DOI:
[32]
Yuezun Li and Siwei Lyu. 2019. Exposing deepfake videos by detecting face warping artifacts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.
[33]
Eugene T. Lin, Christine Podilchuk, and Edward J. Delp. 2000. Detection of image alterations using semifragile watermarks. In Electronic Imaging. Retrieved from https://api.semanticscholar.org/CorpusID:2686286
[34]
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. 2015. Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV).
[35]
Xiyang Luo, Ruohan Zhan, Huiwen Chang, Feng Yang, and Peyman Milanfar. 2020. Distortion agnostic deep watermarking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13548–13557.
[36]
Ferdinando Di Martino and Salvatore Sessa. 2012. Fragile watermarking tamper detection with images compressed by fuzzy transform. Information Sciences 195 (2012), 62–90. Retrieved from https://api.semanticscholar.org/CorpusID:35069151
[37]
Aakash Varma Nadimpalli and Ajita Rattani. 2022. GBDF: Gender balanced deepfake dataset towards fair deepfake detection. In Proceedings of the International Conference on Pattern Recognition. Springer, 320–337.
[38]
Aakash Varma Nadimpalli and Ajita Rattani. 2022. On improving cross-dataset generalization of deepfake detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 91–99.
[39]
Aakash Varma Nadimpalli and Ajita Rattani. 2023. Facial forgery-based deepfake detection using fine-grained features. In Proceedings of the International Conference on Machine Learning and Applications (ICMLA), 2174–2181.
[40]
Aakash Varma Nadimpalli and Ajita Rattani. 2023. ProActive DeepFake detection using GAN-based visible watermarking. ACM Transactions on Multimedia Computing, Communications, and Applications (Sep. 2023). DOI:
[41]
Paarth Neekhara, Shehzeen Hussain, Xinqiao Zhang, Ke Huang, Julian McAuley, and Farinaz Koushanfar. 2022. FaceSigns: Semi-fragile neural watermarks for media authentication and countering deepfakes. arXiv:2204.01960.
[42]
Thanh Thi Nguyen, Quoc Viet Hung Nguyen, Dung Tien Nguyen, Duc Thanh Nguyen, Thien Huynh-The, Saeid Nahavandi, Thanh Tam Nguyen, Quoc-Viet Pham, and Cuong M. Nguyen. 2022. Deep learning for deepfakes creation and detection: A survey. Computer Vision and Image Understanding 223 (2022), 103525. DOI:
[43]
Yuval Nirkin, Yosi Keller, and Tal Hassner. 2019. FSGAN: Subject agnostic face swapping and reenactment. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 7183–7192.
[44]
Augustus Odena, Vincent Dumoulin, and Chris Olah. 2016. Deconvolution and Checkerboard Artifacts. Distill. Retrieved from http://distill.pub/2016/deconv-checkerboard/
[45]
Bo Peng, Wei Xiang, Yue Jiang, Wei Wang, Jing Dong, Zhen Sun, Zhen Lei, and Siwei Lyu. 2022. DFGC 2022: The second DeepFake game competition. In Proceedings of the IEEE International Joint Conference on Biometrics (IJCB), 1–10.
[46]
Shelby Pereira and Thierry Pun. 2000. Robust template matching for affine resistant image watermarks. IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society 96 (2000), 1123–9. Retrieved from https://api.semanticscholar.org/CorpusID:1556380
[47]
Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. 2024. SDXL: Improving latent diffusion models for high-resolution image synthesis. In Proceedings of the 12th International Conference on Learning Representations. Retrieved from https://openreview.net/forum?id=di52zR8xgf
[48]
Yuyang Qian, Guojun Yin, Lu Sheng, Zixuan Chen, and Jing Shao. 2020. Thinking in frequency: Face forgery detection by mining frequency-aware clues. In Proceedings of the 16th European Conference on Computer Vision (ECCV ’20). Springer-Verlag, Berlin, 86–103. DOI:
[49]
Sreeraj Ramachandran, Aakash Varma Nadimpalli, and Ajita Rattani. 2021. An experimental evaluation on deepfake detection using deep face recognition. In Proceedings of the IEEE International Carnahan Conference on Security Technology (ICCST), 1–6. DOI:
[50]
Rasmus Rothe, Radu Timofte, and Luc Van Gool. 2015. DEX: Deep expectation of apparent age from a single image. In Proceedings of the IEEE International Conference on Computer Vision Workshop (ICCVW), 252–257. DOI:
[51]
Andreas Rössler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, and Matthias Niessner. 2019. FaceForensics\(++\): Learning to detect manipulated facial images. In Proceedings of the I IEEE/CVF International Conference on Computer Vision (ICCV), 1–11. DOI:
[52]
Bruce Schneier. 1996. Applied Cryptography: Protocols, Algorithms, and Source Code in C (2nd ed.). John Wiley & Sons, Inc., New York, NY. 265–279.
[53]
Richard Shin. 2017. JPEG-Resistant Adversarial Images. Retrieved from https://api.semanticscholar.org/CorpusID:204804905
[54]
Rui Sun, Hong Sun, and Tianren Yao. 2002. A SVD- and quantization based semi-fragile watermarking technique for image authentication. In Proceedings of the 6th International Conference on Signal Processing, Vol. 2, 1592–1595. DOI:
[55]
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2818–2826. DOI:
[56]
Matthew Tancik, Ben Mildenhall, and Ren Ng. 2020. Stegastamp: Invisible hyperlinks in physical photographs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2117–2126.
[57]
Justus Thies, Michael Zollhöfer, and Matthias Nießner. 2019. Deferred neural rendering: Image synthesis using neural textures. ACM Transactions on Graphics 38, 4 (2019), 1–12.
[58]
Justus Thies, Michael Zollhofer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. 2016. Face2face: Real-time face capture and reenactment of rgb videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2387–2395.
[59]
Rubén Tolosana, Rubén Vera-Rodríguez, Julian Fierrez, Aythami Morales, and Javier Ortega-Garcia. 2020. DeepFakes and beyond: A survey of face manipulation and fake detection. Information Fusion 64 (2020), 131–148.
[60]
Loc Trinh and Y. Liu. 2021. An examination of fairness of AI models for deepfake detection. In Proceedings of the International Joint Conference on Artificial Intelligence. Retrieved from https://api.semanticscholar.org/CorpusID:233481637
[61]
Run Wang, Zi-Shun Huang, Zhikai Chen, Li Liu, Jing Chen, and Lina Wang. 2022a. Anti-Forgery: Towards a stealthy and robust deepfake disruption attack via adversarial perceptual-aware perturbations. In Proceedings of the International Joint Conference on Artificial Intelligence.
[62]
Xueyu Wang, Jiajun Huang, Siqi Ma, Surya Nepal, and Chang Xu. 2022b. DeepFake disrupter: The detector of deepfake is my friend. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 14920–14929.
[63]
Mika Westerlund. 2019. The emergence of deepfake technology: A review. Technology Innovation Management Review 9 (2019), 40–53. DOI:
[64]
Jun Xiao and Ying Wang. 2008. A semi-fragile watermarking tolerant of laplacian sharpening. In Proceedings of the International Conference on Computer Science and Software Engineering, Vol. 3, 579–582. DOI:
[65]
Nadirah Zaidi. 2023. As Singapore Faces Deepfake Surge, Authorities Warn of Threats from Cybercrime, Online Scams. Retrieved from https://www.channelnewsasia.com/singapore/combating-risks-ai-and-deepfakes-experts-cybercrime-3966226
[66]
Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. 2016. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters 23, 10 (2016), 1499–1503.
[67]
Hanqing Zhao, Tianyi Wei, Wenbo Zhou, Weiming Zhang, Dongdong Chen, and Nenghai Yu. 2021. Multi-attentional deepfake detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2185–2194.
[68]
Xuandong Zhao, Kexun Zhang, Yu-Xiang Wang, and Lei Li. 2023. Generative autoencoders as watermark attackers: Analyses of vulnerabilities and threats. arXiv:2306.01953.
[69]
Yuan Zhao, Bo Liu, Tianqing Zhu, Ming Ding, Xin Yu, and Wanlei Zhou. 2024. Proactive image manipulation detection via deep semi-fragile watermark. Neurocomputing 585 (2024), 127593. DOI:
[70]
Jiren Zhu, Russell Kaplan, Justin Johnson, and Li Fei-Fei. 2018. HiDDeN: Hiding data with deep networks. In Proceedings of the European Conference on Computer Vision (ECCV), 657–672.
[71]
Jiren Zhu, Russell Kaplan, Justin Johnson, and Li Fei-Fei. 2018. HiDDeN: Hiding data with deep networks. In Proceedings of the European Conference on Computer Vision. Retrieved from https://api.semanticscholar.org/CorpusID:50784854
