Face image-sketch synthesis via generative adversarial fusion.

Sun, J., Yu, H., Zhang, J. J., Dong, J., Yu, H. and Zhong, G., 2022. Face image-sketch synthesis via generative adversarial fusion. Neural Networks, 154, 179-189.

Full text available as:

PDF
Face_image_sketch_synthesis_via_generative_adversarial_fusion_pp.pdf - Accepted Version
Available under License Creative Commons Attribution Non-commercial No Derivatives.

3MB

DOI: 10.1016/j.neunet.2022.07.013

Abstract

Face image-sketch synthesis is widely applied in law enforcement and digital entertainment. Despite extensive progress in face image-sketch synthesis, few methods focus on generating a color face image from a sketch, and existing methods pay little attention to learning the illumination or highlight distribution over the face region. However, illumination is the key factor that makes a generated color face image look vivid and realistic. Moreover, existing methods tend to rely on image preprocessing techniques and facial-region patching to generate high-quality face images, which results in high complexity and memory consumption in practice. In this paper, we propose a novel end-to-end generative adversarial fusion model, called GAF, which fuses two U-Net generators and a discriminator by jointly learning content and adversarial loss functions. In particular, we propose a parametric tanh activation function to learn and control the illumination highlight distribution over faces, which is integrated between the two U-Net generators by an illumination distribution layer. Additionally, we fuse an attention mechanism into the second U-Net generator of GAF to preserve identity consistency and refine the generated facial details. Qualitative and quantitative experiments on public benchmark datasets show that the proposed GAF outperforms existing image-sketch synthesis methods in synthesized face image quality (FSIM) and face recognition accuracy (NLDA). The good generalization ability of GAF is also verified. To further demonstrate the reliability and authenticity of the face images generated by GAF, we use them to attack a well-known face recognition system; the results show that the face images generated by GAF maintain identity consistency and preserve each subject's unique facial characteristics, and can therefore be further used in benchmarks for facial spoofing. Moreover, experiments verify the effectiveness and rationality of the proposed parametric tanh activation function and attention mechanism in GAF.
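
The abstract describes a parametric tanh activation placed between the two U-Net generators to modulate the illumination/highlight distribution over the face. The exact parametrization is not given here, so the following is only a minimal sketch assuming per-channel learnable scale and shift parameters around a tanh; the names and granularity are hypothetical, not the paper's definitive formulation.

    import torch
    import torch.nn as nn

    class ParametricTanh(nn.Module):
        """Hypothetical parametric tanh activation.

        Assumed form: y = alpha * tanh(beta * x + gamma) + delta, with
        per-channel learnable parameters that scale and shift the response,
        allowing the network to adjust highlight intensity on feature maps.
        The actual GAF layer may use a different parametrization.
        """

        def __init__(self, channels: int):
            super().__init__()
            # Per-channel parameters broadcast over (N, C, H, W) feature maps.
            self.alpha = nn.Parameter(torch.ones(1, channels, 1, 1))
            self.beta = nn.Parameter(torch.ones(1, channels, 1, 1))
            self.gamma = nn.Parameter(torch.zeros(1, channels, 1, 1))
            self.delta = nn.Parameter(torch.zeros(1, channels, 1, 1))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.alpha * torch.tanh(self.beta * x + self.gamma) + self.delta

    # Usage sketch: apply to the feature map passed between the two generators.
    layer = ParametricTanh(channels=64)
    features = torch.randn(1, 64, 128, 128)
    modulated = layer(features)
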

Item Type: Article
ISSN: 0893-6080
Uncontrolled Keywords: Attention mechanism; Generated face image; Illumination distribution layer; U-net generator; Algorithms; Face; Facial Recognition; Image Processing; Computer-Assisted; Lighting; Reproducibility of Results
Group: Faculty of Media & Communication
ID Code: 37686
Deposited By: Symplectic RT2
Deposited On: 21 Oct 2022 09:34
Last Modified: 26 Jul 2023 01:08
