Abstract
Purpose
Random forests and dictionary-based statistical regression share common characteristics, including non-linear mapping and supervised learning. To reduce the reconstruction error of high-resolution (HR) images, we integrate random forests and coupled dictionary learning.
Methods
Textural differences among image blocks are taken into account by classifying patches with an auto-encoder network. The proposed algorithm partitions an input low-resolution (LR) image into 5 × 5 blocks and classifies training patches into six categories. A separate random forest (RF) regressor is then trained for each class, and its output serves as an initial estimate of the HR slice. If a slice's representation is sparse in the discrete cosine transform (DCT) domain, the initial reconstruction is further refined by a coupled dictionary.
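The per-class regression stage described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch class labels here are random stand-ins for the auto-encoder classifier's output, the patch data are synthetic, and the forest hyperparameters are assumed for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_patches, patch_dim, n_classes = 300, 25, 6  # 5 x 5 patches -> 25-d vectors

# Synthetic LR/HR training patches (stand-ins for real CT patch pairs).
lr_patches = rng.random((n_patches, patch_dim))
hr_patches = lr_patches + 0.1 * rng.random((n_patches, patch_dim))

# Random class labels in place of the auto-encoder's texture classes.
labels = rng.integers(0, n_classes, size=n_patches)

# Train one random forest regressor per texture class.
forests = {}
for c in range(n_classes):
    mask = labels == c
    rf = RandomForestRegressor(n_estimators=20, random_state=0)
    rf.fit(lr_patches[mask], hr_patches[mask])
    forests[c] = rf

# The initial HR estimate of a new patch comes from its class's forest;
# the coupled-dictionary step would then refine this estimate.
test_patch = rng.random((1, patch_dim))
initial_hr = forests[0].predict(test_patch)
print(initial_hr.shape)  # (1, 25)
```

Keeping one regressor per texture class lets each forest specialize on patches with similar local structure, which is the motivation the abstract gives for classifying patches before regression.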
Results
In this study, we applied our method to abdominal CT scans and compared the results with conventional and recent approaches. We achieved an average improvement of 0.06 in SSIM (2.37 dB in PSNR) over the random forest + dictionary learning method.
Conclusion
The low standard deviation of the results also reveals the stability of the proposed method. The proposed algorithm demonstrates the effectiveness of classifying image patches and treating each class individually.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Human and animal rights
All human and animal studies have been approved and performed in accordance with ethical standards.
Cite this article
Akbari, M., Foruzan, A.H., Chen, YW. et al. Reducing reconstruction error of classified textural patches by integration of random forests and coupled dictionary nonlinear regressors: with applications to super-resolution of abdominal CT images. Int J CARS 16, 1469–1480 (2021). https://doi.org/10.1007/s11548-021-02449-3