Reducing reconstruction error of classified textural patches by integration of random forests and coupled dictionary nonlinear regressors: with applications to super-resolution of abdominal CT images

  • Original Article

International Journal of Computer Assisted Radiology and Surgery


Abstract

Purpose

Random forests and dictionary-based statistical regressors share common characteristics, including non-linear mapping and supervised learning. To reduce the reconstruction error of high-resolution images, we integrate random forests with coupled dictionary learning.

Methods

Textural differences among image blocks are accounted for by classifying patches with an Auto-Encoder network. The proposed algorithm partitions an input low-resolution (LR) image into 5 × 5 blocks and classifies the training patches into six categories. A separate random forest (RF) regressor is then trained for each class. The output of the RF is taken as an initial estimate of the high-resolution (HR) slice. If a slice’s representation is sparse in the Discrete Cosine Transform domain, the initial reconstruction is further refined by a coupled dictionary.
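As a rough sketch of the per-class training described above, the hypothetical Python fragment below partitions a toy image into 5 × 5 patches, assigns six texture classes (here k-means stands in for the paper's Auto-Encoder classifier), and trains one random forest regressor per class to map LR patches to HR patches. The coupled-dictionary refinement step and the DCT-sparsity check are omitted; all data and parameter choices are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

PATCH = 5        # patch size from the paper (5 x 5 blocks)
N_CLASSES = 6    # number of texture categories

def extract_patches(img, size=PATCH):
    """Split an image into non-overlapping size x size patches, flattened."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, size)
                     for j in range(0, w - size + 1, size)])

rng = np.random.default_rng(0)
lr = rng.random((40, 40))                 # toy LR image (stand-in data)
hr = lr + 0.05 * rng.random((40, 40))     # toy HR target on the same grid

X = extract_patches(lr)   # (64, 25) LR patches
Y = extract_patches(hr)   # (64, 25) HR patches

# Stand-in texture classifier: k-means over LR patches
# (the paper uses an Auto-Encoder network for this step).
labels = KMeans(n_clusters=N_CLASSES, n_init=10, random_state=0).fit_predict(X)

# One random forest regressor per texture class.
forests = {}
for c in range(N_CLASSES):
    mask = labels == c
    if mask.any():
        forests[c] = RandomForestRegressor(
            n_estimators=10, random_state=0).fit(X[mask], Y[mask])

# Initial HR estimate: route each patch to its class's forest.
est = np.vstack([forests[labels[k]].predict(X[k:k + 1]) for k in range(len(X))])
```

In the full method, the estimate `est` would only be accepted as final where the slice is not sparse in the DCT domain; sparse slices would be passed on to the coupled-dictionary stage for further refinement.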

Results

In this study, we applied our method to abdominal CT scans and compared it with conventional and recent approaches. We achieved an average improvement of 0.06 (2.37 dB) in the SSIM (PSNR) index compared with the random forest + dictionary learning method.
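The two evaluation indices reported above can be computed as in the minimal NumPy sketch below. Note that `global_ssim` here is a simplified single-window variant computed over the whole image; the standard SSIM index averages the same statistic over local windows (e.g. via `skimage.metrics.structural_similarity`).

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """Single-window (global) SSIM; the standard index averages local windows."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    c1 = (0.01 * data_range) ** 2   # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For example, a test image offset from the reference by a constant 0.1 (on a [0, 1] range) has an MSE of 0.01 and hence a PSNR of exactly 20 dB, while any image compared with itself yields an SSIM of 1.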

Conclusion

The proposed algorithm demonstrates the effectiveness of classifying image patches and treating each class individually. The low standard deviation of the results also indicates the stability of the proposed method.



Author information

Corresponding author

Correspondence to Amir Hossein Foruzan.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Human and animal rights

All human and animal studies have been approved and performed in accordance with ethical standards.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Akbari, M., Foruzan, A.H., Chen, YW. et al. Reducing reconstruction error of classified textural patches by integration of random forests and coupled dictionary nonlinear regressors: with applications to super-resolution of abdominal CT images. Int J CARS 16, 1469–1480 (2021). https://doi.org/10.1007/s11548-021-02449-3

