
Supplementary Document: Large-capacity Image Steganography Based on Invertible Neural Networks

Shao-Ping Lu1∗ Rong Wang1∗ Tao Zhong1 Paul L. Rosin2
∗ indicates equal contribution.

1 TKLNDST, CS, Nankai University, Tianjin, China
2 School of Computer Science & Informatics, Cardiff University, UK

[email protected]; [email protected]; [email protected]; [email protected]

We provide more experimental results in this supplementary document. Sec. 1 presents ablation experiments on different numbers of invertible blocks and different structures of our sub-modules; Sec. 2 shows how the loss weights affect the reconstruction quality of the container and revealed hidden images; Sec. 3 gives the exact hyper-parameters of our models; Sec. 4 shows more visual results comparing our method against [1]'s method and the ground truths.

1. Submodule Selection

For the network structure, our experiments are carried out with different numbers of invertible blocks and different sub-modules, where either a Dense Block or a Residual Block is employed as the invertible block sub-module.

Table 1. Ablation experiments for the network architecture (PSNR/SSIM).

Inv Blocks   Sub-module       Container Image   Revealed Image
16           Dense Block      39.48/.968        37.26/.964
8            Dense Block      38.81/.965        38.65/.976
4            Dense Block      35.27/.965        40.28/.980
16           Residual Block   36.62/.968        38.46/.972
8            Residual Block   37.44/.953        39.17/.973
4            Residual Block   36.50/.956        40.23/.977
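All quantitative results in Tabs. 1 and 3 are reported as PSNR/SSIM pairs. For reference, PSNR can be computed as below (this is the standard textbook definition, not code from our implementation; SSIM is typically computed with a library such as scikit-image):

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """Standard PSNR (in dB) between two uint8 images of equal shape."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```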

Figure 1. The architecture of our Dense Block sub-module: the input (H × W × C_in) passes through five conv layers, with a LeakyReLU after each of the first four, producing the output (H × W × C_out).


Table 2. Sub-module architectures.

Conv (input channels, output channels, kernel size, stride, activation)
Conv2D (C_in,          32,    3, 1, LeakyReLU)
Conv2D (C_in + 32,     32,    3, 1, LeakyReLU)
Conv2D (C_in + 32 × 2, 32,    3, 1, LeakyReLU)
Conv2D (C_in + 32 × 3, 32,    3, 1, LeakyReLU)
Conv2D (C_in + 32 × 4, C_out, 3, 1, None)
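For concreteness, the 5-layer Dense Block of Tab. 2 can be sketched in PyTorch as follows (a minimal re-implementation from the table, not our released code; the growth rate of 32 and the 3 × 3, stride-1 convolutions come directly from Tab. 2, and padding = 1 is assumed so that the spatial size H × W is preserved):

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """5-layer Dense Block following Tab. 2: each 3x3 conv sees the
    concatenation of the block input and all previously produced features."""
    def __init__(self, c_in: int, c_out: int, growth: int = 32):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv2d(c_in + i * growth, growth, kernel_size=3, stride=1, padding=1)
            for i in range(4)
        ])
        self.conv_out = nn.Conv2d(c_in + 4 * growth, c_out,
                                  kernel_size=3, stride=1, padding=1)
        self.act = nn.LeakyReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for conv in self.convs:
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        return self.conv_out(torch.cat(feats, dim=1))  # last layer: no activation
```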

Let us take hiding a single image as an example to discuss the network architecture, setting all the loss weights to 1. In Tab. 1, as the number of invertible blocks increases, the PSNR of the generated container image increases, but the PSNR of the revealed image gradually decreases. It is easy to see that increasing the loss weight of the container image, i.e. α_co, can improve the quality of the container image. Therefore, we choose to use 4 invertible blocks when hiding an image. It can also be observed from Tab. 1 that choosing a Dense Block or a Residual Block as the sub-module has only a slight influence on the results. We thus adopt a 5-layer Dense Block (Tab. 2, Fig. 1) as our sub-module. For the sub-module φ(·), C_in is the number of feature channels in the hidden branch b_2, and C_out is the number of feature channels in the host branch b_1. Here C_out is always set to 3. For the sub-modules ρ(·) and η(·), the values of C_in and C_out are equal to C_out and C_in of the sub-module φ(·), respectively.
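To show how these channel counts fit together, below is a sketch of one invertible block built from three such Dense Blocks, using the imports and the DenseBlock class from the previous sketch. The exact coupling equations are defined in the main paper; here the common additive/affine coupling form of invertible networks is assumed purely for illustration, and the clamp bound of 5 is an arbitrary stabilizer:

```python
class InvBlock(nn.Module):
    """One invertible block over host branch b1 (3 channels) and hidden
    branch b2. A generic additive/affine coupling is assumed here; the
    paper's exact update rules may differ."""
    def __init__(self, c1: int = 3, c2: int = 3):
        super().__init__()
        self.phi = DenseBlock(c2, c1)  # phi: C_in = channels of b2, C_out = channels of b1
        self.rho = DenseBlock(c1, c2)  # rho, eta: C_in/C_out swapped relative to phi
        self.eta = DenseBlock(c1, c2)

    def forward(self, b1, b2):
        b1 = b1 + self.phi(b2)                  # update host branch from hidden branch
        s = torch.clamp(self.rho(b1), -5, 5)    # scale, clamped for numerical stability
        b2 = b2 * torch.exp(s) + self.eta(b1)   # affine update of hidden branch
        return b1, b2

    def inverse(self, b1, b2):
        s = torch.clamp(self.rho(b1), -5, 5)
        b2 = (b2 - self.eta(b1)) * torch.exp(-s)  # undo the affine update
        b1 = b1 - self.phi(b2)                    # undo the additive update
        return b1, b2
```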

2. Loss Function Adjustment

Table 3. Ablation experiments for loss weights (PSNR/SSIM).

(α_co, α_hi)   Container Image   Revealed Image
(1, 1)         38.81/.965        38.65/.976
(2, 1)         40.11/.974        35.49/.950
(1, 2)         31.78/.949        40.76/.980
(2, 2)         36.44/.963        36.13/.965

Ablation experiments are also designed for different loss-function weights. In addition to improving the container image quality by adjusting the loss weights with 4 invertible blocks (as described in the paper), we also perform the experiments with 8 invertible blocks. However, as shown in Tab. 3, increasing the loss weight α_co can indeed improve the PSNR of the container image, but the quality of the revealed hidden image declines significantly, making a good trade-off difficult. Therefore, it is reasonable for us to use 4 invertible blocks when hiding an image.
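The weighted objective behind Tab. 3 can be sketched as follows (hedged: the exact loss terms are defined in the main paper; plain L2 reconstruction losses are assumed here purely for illustration):

```python
import torch.nn.functional as F

def total_loss(container, host, revealed, hidden, alpha_co=1.0, alpha_hi=1.0):
    """Weighted training objective, assuming L2 reconstruction terms."""
    loss_co = F.mse_loss(container, host)    # container should resemble the host
    loss_hi = F.mse_loss(revealed, hidden)   # revealed image should match the hidden one
    return alpha_co * loss_co + alpha_hi * loss_hi
```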

3. Parameter Settings

The detailed parameter settings of our models are shown in Tab. 4.

Table 4. The detailed parameter settings.

Number of hidden images   1    2    3    4    5
Channels of b_1           3    3    3    3    3
Channels of b_2           3    6    9    12   15
Inv Blocks                4    8    16   16   16
α_co                      32   64   64   64   64
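Note that in Tab. 4 the channels of b_2 are always 3 × the number of hidden images, consistent with the hidden images being concatenated along the channel axis. A hypothetical configuration table mirroring Tab. 4 (the dict and its field order are ours, for illustration only):

```python
# (channels_b1, channels_b2, inv_blocks, alpha_co) per number of hidden images
CONFIGS = {
    1: (3, 3, 4, 32),
    2: (3, 6, 8, 64),
    3: (3, 9, 16, 64),
    4: (3, 12, 16, 64),
    5: (3, 15, 16, 64),
}
```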

4. Comparisons

Here we add some visual comparisons. Fig. 2 shows that the container images and the revealed hidden images created by our method and [1]'s method are both visually plausible; however, our results contain fewer reconstruction errors. Moreover, as shown in Fig. 3, the method in [1] is prone to color bias in some cases when hiding two images, while both the container image and the revealed hidden images generated by our method are of higher quality. Fig. 4, Fig. 5 and Fig. 6 show examples where we hide 3 ∼ 5 images, respectively.

References

[1] Shumeet Baluja. Hiding images within images. IEEE Trans. Pattern Anal. Mach. Intell., 2019.


Figure 2. Visual comparison for hiding and revealing an image. Rows: host and hidden images; columns: (a) originals, (b)/(c) images generated by [1] and by our method, (d)/(e) the corresponding errors magnified ×50.
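The error maps in Figs. 2 to 6 are ×50-magnified absolute differences. A minimal sketch of such a visualization (standard practice, not our exact rendering code):

```python
import numpy as np

def error_map(original: np.ndarray, generated: np.ndarray, mag: int = 50) -> np.ndarray:
    """Absolute per-pixel error between two uint8 images, magnified for display."""
    err = np.abs(original.astype(np.int32) - generated.astype(np.int32))
    return np.clip(err * mag, 0, 255).astype(np.uint8)
```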


Figure 3. Visual comparison for hiding and revealing two images. Rows: host, hidden-1 and hidden-2 images; columns: (a) originals, (b)/(c) container and revealed images generated by [1] and by our method, (d)/(e) the corresponding errors magnified ×50.


Figure 4. Two examples for hiding and revealing three images, with a blue border on the host images and an orange border on the hidden images. In each example, the top row is the original images and the middle row is our generated results, while the third row is the ×50 magnified errors between them.


Figure 5. Two examples for hiding and revealing four images, with a blue border on the host images and an orange border on the hidden images. In each example, the top row is the original images and the middle row is our generated results, while the third row is the ×50 magnified errors between them.


Figure 6. Two examples for hiding and revealing five images, with a blue border on the host images and an orange border on the hidden images. In each example, the top row is the original images and the middle row is our generated results, while the third row is the ×50 magnified errors between them.