
A decentralised approach to scene completion using distributed feature hashgram

Multimedia Tools and Applications

Abstract

Scene completion is the automated reconstruction of missing regions of an image in a visually plausible way. Typically, semantically similar images are retrieved by pair-wise comparison and a completion candidate is then selected from them. The primary challenge in scene completion is the computational cost of these pair-wise comparisons, which grows quadratically with the number of images. A further challenge is the large volume of incoming completion requests that must all be served by a centralised server. In this work, we propose a decentralised scene completion system using a distributed feature hashgram. The system comprises two principal components: (i) a deep signature-based decentralised image retrieval component that retrieves semantically valid images through signature comparison, and (ii) a fog computing enabled scene completion algorithm that selects optimal patches from the most suitable retrieved image and fills in the missing regions using a graph-cut technique. A detailed experimental study on the LabelMe dataset is performed to evaluate the quality of the solution. A further difficulty in scene completion is the absence of ground truth, so we also propose an evaluation method for assessing image completion without ground truth. The results demonstrate the effectiveness of the system and the applicability of the solution to large image repositories.
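To make the retrieval idea concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of signature-based candidate retrieval: deep features are compressed into short binary signatures, and stored images are ranked by Hamming distance to the query signature instead of by exhaustive pair-wise feature comparison. All function names, dimensions, and the random projection used here are assumptions made purely for illustration.

```python
import numpy as np

def binary_signature(features, projection):
    """Project a real-valued feature vector to a compact binary signature.

    features:   (d,) deep feature vector, e.g. from a CNN encoder
    projection: (d, b) random or learned projection matrix producing b bits
    """
    return (features @ projection > 0).astype(np.uint8)

def hamming_distance(sig_a, sig_b):
    """Number of differing bits between two binary signatures."""
    return int(np.count_nonzero(sig_a != sig_b))

def retrieve_candidates(query_sig, signature_index, top_k=5):
    """Rank stored images by signature similarity to the query.

    signature_index: dict mapping image_id -> binary signature
    Returns the top_k image ids with the smallest Hamming distance.
    """
    ranked = sorted(signature_index.items(),
                    key=lambda item: hamming_distance(query_sig, item[1]))
    return [image_id for image_id, _ in ranked[:top_k]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, b = 512, 64                      # feature and signature sizes (illustrative)
    projection = rng.standard_normal((d, b))

    # Stand-in repository of deep features, one vector per image
    repository = {f"img_{i}": rng.standard_normal(d) for i in range(1000)}
    index = {name: binary_signature(feat, projection)
             for name, feat in repository.items()}

    query = binary_signature(rng.standard_normal(d), projection)
    print(retrieve_candidates(query, index, top_k=3))
```

In a decentralised or fog setting such as the one described in the abstract, a signature index of this kind could be partitioned across nodes so that each node answers retrieval queries over its own shard; the sketch above only illustrates the single-node comparison step, not the distributed protocol proposed in the paper.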





Author information

Corresponding author

Correspondence to R. Talat.



Cite this article

Talat, R., Muzammal, M. & Shan, R. A decentralised approach to scene completion using distributed feature hashgram. Multimed Tools Appl 79, 9799–9817 (2020). https://doi.org/10.1007/s11042-019-08403-5

Download citation

  • Received:

  • Revised:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s11042-019-08403-5
