Abstract
Photographic evaluation rests mainly on qualitative factors that are personal and subjective. To develop a digitalized recomposition algorithm, however, these qualitative factors must be converted into quantitative measures, and the theories of photographic composition must be explored so that photo quality evaluation criteria can be digitized. This paper presents a new evaluation algorithm for photographic recomposition based on photo quality evaluation principles. Specifically, we formulate the rule of thirds as an optimization problem over the image's feature vector; we formulate simplicity as a computation of the size of region-of-interest (ROI) segments; and we formulate the rule of space from the ROI size and the moving direction of the foreground object. The presented algorithm can also be extended to the broader field of photographic evaluation. Experimental results demonstrate the effectiveness of the algorithm, and the proposed technique is fully automatic, whereas previous works require manual interaction. The authors expect that the presented algorithm can be applied in practice in the near future, since many of the related state-of-the-art technologies are already embedded in commercial cameras.
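To make the three quantified criteria concrete, the following is a minimal illustrative sketch (not the paper's exact formulation) of how such scores could be computed from a binary ROI mask. The function names, normalizations, and the simple centroid-based scoring are assumptions for illustration only.

```python
import numpy as np

def thirds_score(roi_mask):
    """Rule of thirds: score how close the ROI centroid lies to the
    nearest of the four rule-of-thirds power points (1.0 = exactly on one)."""
    h, w = roi_mask.shape
    ys, xs = np.nonzero(roi_mask)
    cx, cy = xs.mean() / w, ys.mean() / h                 # normalized centroid
    points = [(px, py) for px in (1/3, 2/3) for py in (1/3, 2/3)]
    d = min(np.hypot(cx - px, cy - py) for px, py in points)
    # normalize by the largest possible centroid-to-power-point distance
    return 1.0 - d / np.hypot(2/3, 2/3)

def simplicity_score(roi_mask):
    """Simplicity: fraction of the frame NOT occupied by the ROI segment;
    a compact subject against an uncluttered frame scores higher."""
    return 1.0 - roi_mask.mean()

def space_score(roi_mask, motion_dx):
    """Rule of space: the emptier the frame ahead of the foreground
    object's horizontal motion direction, the higher the score."""
    h, w = roi_mask.shape
    cx = np.nonzero(roi_mask)[1].mean() / w
    return (1.0 - cx) if motion_dx > 0 else cx

# Toy example: a small subject centered near the upper-left power point.
mask = np.zeros((300, 300), dtype=np.uint8)
mask[90:110, 90:110] = 1
print(thirds_score(mask), simplicity_score(mask), space_score(mask, +1.0))
```

A real system would obtain the ROI mask from segmentation and the motion direction from optical flow, then search over candidate crops for the composition maximizing a weighted sum of such scores.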
*(Figures 1–16 of the article appear here.)*
Acknowledgments
This research was supported by the Daegu University Research Grant, 2012.
Jeong, K., Cho, HJ. A Digitalized Recomposition Technique Based on Photo Quality Evaluation Criteria. Wireless Pers Commun 86, 301–314 (2016). https://doi.org/10.1007/s11277-015-2977-y