ABSTRACT
Recently, deep learning-based image matting methods have emerged. However, existing methods cannot provide precise matting for anime-style illustrations because their networks are trained primarily on photo-realistic images. In this paper, we introduce a new anime image dataset, Chara-1M, designed for matting. In addition, we propose AniCropify, a new matting method for anime character images. Exploiting the representational commonalities between anime and photo-realistic images, AniCropify first converts an anime image into a photo-realistic one. A trimap is then generated from the converted image to identify the human regions. Using this trimap in the matting process yields precise alpha masks for anime images. Experiments show that, in a quality evaluation of matting results, the proposed method received the highest rating compared to other state-of-the-art techniques.
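The trimap described in the abstract partitions an image into definite foreground, definite background, and an unknown band to be resolved by the matting network. The following is a minimal sketch of that idea, not the paper's implementation: it derives a trimap from a binary foreground mask using simple morphological erosion/dilation, with the function names and the band width chosen purely for illustration.

```python
import numpy as np

def dilate(mask, r):
    """Morphological dilation of a boolean mask with a (2r+1)x(2r+1) square
    structuring element, built from shifted copies via np.roll.
    Note: np.roll wraps around at the borders, which is fine for this toy
    example but would need padding in a real pipeline."""
    out = mask.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def make_trimap(mask, band=1):
    """Turn a boolean foreground mask into a trimap:
    255 = definite foreground, 0 = definite background, 128 = unknown band."""
    fg = ~dilate(~mask, band)       # erosion = complement of dilated complement
    bg = ~dilate(mask, band)        # background = outside the dilated mask
    trimap = np.full(mask.shape, 128, dtype=np.uint8)
    trimap[fg] = 255
    trimap[bg] = 0
    return trimap

# Toy example: a 4x4 square of foreground inside an 8x8 image.
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
tm = make_trimap(mask, band=1)
```

In a full pipeline, the unknown (128) band around the character silhouette is where the matting network estimates fractional alpha values such as hair strands and outlines.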
Index Terms
- AniCropify: Image Matting for Anime-Style Illustration