
A novel hybrid approach of ABC with SCA for the parameter optimization of SVR in blind image quality assessment

Original Article · Neural Computing and Applications

Abstract

Images may be distorted to varying degrees during acquisition, transmission, and reconstruction, which hinders perception and recognition by the human eye. It is therefore necessary to quantify image quality reasonably through image quality assessment, so that image content can be interpreted correctly. In practical applications, blind image quality assessment (BIQA) has attracted widespread attention in the field of image processing because it evaluates an image without any prior knowledge. Support vector regression (SVR) is widely adopted in BIQA: it is an important step in many BIQA methods built on a two-step framework of feature extraction followed by SVR. However, the parameters of SVR strongly affect its predictive performance and generalization ability. Grid search (GS) is therefore usually used to select appropriate SVR parameters in BIQA, but GS may miss promising solutions when searching for the best parameters, degrading the performance of SVR. To search for promising SVR parameters, a novel meta-heuristic algorithm named hybrid artificial bee colony and sine cosine algorithm (ABC-SCA) is proposed. In addition, a feasible scheme for SVR parameter selection is given to further improve the prediction accuracy of the BIQA algorithm. The proposed SVR parameter selection algorithm is compared with GS and several other meta-heuristic algorithms on three image databases (LIVE, TID2013, and CSIQ) with three BIQA schemes. The experimental results show that the proposed ABC-SCA can effectively obtain the optimal parameters of SVR in the field of BIQA.
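To make the tuning problem concrete, the minimal sketch below (not the authors' ABC-SCA implementation) shows the kind of objective a meta-heuristic optimizes when selecting SVR parameters: the cross-validated prediction error as a function of (C, gamma, epsilon). A plain random search stands in for the meta-heuristic; the arrays X (image features) and y (subjective scores), the search ranges, and the scikit-learn usage are illustrative assumptions.

```python
# Illustrative sketch only: tuning SVR's (C, gamma, epsilon) by minimizing
# cross-validated error, i.e. the fitness that a meta-heuristic such as
# ABC-SCA would optimize in place of grid search.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def svr_fitness(params, X, y):
    """Cross-validated mean squared error of SVR for one parameter triple."""
    C, gamma, epsilon = params
    model = SVR(C=C, gamma=gamma, epsilon=epsilon)
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_mean_squared_error")
    return -scores.mean()  # smaller is better

def random_search(X, y, n_iter=50, seed=0):
    """Stand-in for a meta-heuristic: sample log-uniform candidates, keep the best."""
    rng = np.random.default_rng(seed)
    best, best_fit = None, np.inf
    for _ in range(n_iter):
        cand = (10 ** rng.uniform(-2, 3),   # C in [1e-2, 1e3] (assumed range)
                10 ** rng.uniform(-4, 1),   # gamma in [1e-4, 1e1]
                10 ** rng.uniform(-3, 0))   # epsilon in [1e-3, 1]
        fit = svr_fitness(cand, X, y)
        if fit < best_fit:
            best, best_fit = cand, fit
    return best, best_fit
```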


References

1. Li Q, Lin W, Xu J, Fang Y (2016) Blind image quality assessment using statistical structural and luminance features. IEEE Trans Multimed 18(12):2457–2469
2. Gu K, Zhai G, Yang X, Zhang W (2015) Using free energy principle for blind image quality assessment. IEEE Trans Multimed 17(1):50–63
3. Nizami IF, Majid M, Khurshid K (2018) New feature selection algorithms for no-reference image quality assessment. Appl Intell 48(10):3482–3501
4. Fang Y, Ma K, Wang Z, Lin W, Fang Z, Zhai G (2015) No-reference quality assessment of contrast-distorted images based on natural scene statistics. IEEE Signal Process Lett 22(7):838–842
5. Wu Q, Li H, Ngan KN, Ma K (2018) Blind image quality assessment using local consistency aware retriever and uncertainty aware evaluator. IEEE Trans Circ Syst Vid 28(9):2078–2089
6. Liu D, Li F, Song H (2016) Image quality assessment using regularity of color distribution. IEEE Access 4:4478–4483
7. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612
8. Chandler DM, Hemami SS (2007) VSNR: a wavelet-based visual signal-to-noise ratio for natural images. IEEE Trans Image Process 16(9):2284–2298
9. Wang Z, Li Q (2011) Information content weighting for perceptual image quality assessment. IEEE Trans Image Process 20(5):1185–1198
10. Sheikh HR, Bovik AC (2006) Image information and visual quality. IEEE Trans Image Process 15(2):430–444
11. Rehman A, Wang Z (2012) Reduced-reference image quality assessment by structural similarity estimation. IEEE Trans Image Process 21(8):3378–3389
12. Tao D, Li X, Lu W, Gao X (2009) Reduced-reference IQA in contourlet domain. IEEE Trans Syst Man Cy B 39(6):1623–1627
13. Wu J, Lin W, Shi G, Liu A (2013) Reduced-reference image quality assessment with visual information fidelity. IEEE Trans Multimed 15(7):1700–1705
14. Mittal A, Soundararajan R, Bovik AC (2013) Making a "completely blind" image quality analyzer. IEEE Signal Process Lett 20(3):209–212
15. Zhang L, Zhang L, Bovik AC (2015) A feature-enriched completely blind image quality evaluator. IEEE Trans Image Process 24(8):2579–2591
16. Jiang Q, Shao F, Jiang G, Yu M, Peng Z (2015) Supervised dictionary learning for blind image quality assessment. In: 2015 Visual Communications and Image Processing (VCIP), Singapore, pp 1–4
17. Ye P, Kumar J, Kang L, Doermann D (2012) Unsupervised feature learning framework for no-reference image quality assessment. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, pp 1098–1105
18. Saad MA, Bovik AC, Charrier C (2012) Blind image quality assessment: a natural scene statistics approach in the DCT domain. IEEE Trans Image Process 21(8):3339–3352
19. Moorthy AK, Bovik AC (2011) Blind image quality assessment: from natural scene statistics to perceptual quality. IEEE Trans Image Process 20(12):3350–3364
20. Shen J, Li Q, Erlebacher G (2011) Hybrid no-reference natural image quality assessment of noisy, blurry, JPEG2000, and JPEG images. IEEE Trans Image Process 20(8):2089–2098
21. Mittal A, Moorthy AK, Bovik AC (2012) No-reference image quality assessment in the spatial domain. IEEE Trans Image Process 21(12):4695–4708
22. Xue W, Mou X, Zhang L, Bovik AC, Feng X (2014) Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features. IEEE Trans Image Process 23(11):4850–4862
23. Zhang W, Ma K, Yan J, Deng D, Wang Z (2018) Blind image quality assessment using a deep bilinear convolutional neural network. IEEE Trans Circ Syst Vid
24. Ma K, Liu W, Zhang K, Duanmu Z, Wang Z, Zuo W (2018) End-to-end blind image quality assessment using deep neural networks. IEEE Trans Image Process 27(3):1202–1213
25. Zhang L, Gu Z, Liu X, Li H, Lu J (2014) Training quality-aware filters for no-reference image quality assessment. IEEE Multimed 21(4):67–75
26. Li S, Fang H, Liu X (2018) Parameter optimization of support vector regression based on sine cosine algorithm. Exp Syst Appl 91:63–77
27. Freitas G, Akamine WYL, Farias MCQ (2016) No-reference image quality assessment based on statistics of local ternary pattern. In: 2016 8th International Conference on Quality of Multimedia Experience (QoMEX), Lisbon, pp 1–6
28. Gu K, Tao D, Qiao J, Lin W (2018) Learning a no-reference quality assessment model of enhanced images with big data. IEEE Trans Neur Net Lear 29(4):1301–1313
29. Duan K, Keerthi SS, Poo AN (2003) Evaluation of simple performance measures for tuning SVM hyperparameters. Neurocomputing 51:41–59
30. Hsu CW, Chang CC, Lin CJ (2003) A practical guide to support vector classification
31. Tharwat A, Hassanien AE, Elnaghi BE (2017) A BA-based algorithm for parameter optimization of support vector machine. Pattern Recogn Lett 93:13–22
32. Liu HH, Chang LC, Li CW, Yang CH (2018) Particle swarm optimization-based support vector regression for tourist arrivals forecasting. Comput Intell Neurosci 2018:1
33. Buyukyildiz M, Tezel G (2017) Utilization of PSO algorithm in estimation of water level change of Lake Beysehir. Theor Appl Climatol 128(1–2):181–191
34. Liu S, Tai H, Ding Q, Li D, Xu L, Wei Y (2013) A hybrid approach of support vector regression with genetic algorithm optimization for aquaculture water quality prediction. Math Comput Model 58(3–4):458–465
35. Wu CH, Tzeng GH, Lin RH (2009) A novel hybrid genetic algorithm for kernel function and parameter optimization in support vector regression. Exp Syst Appl 36(3):4725–4735
36. Li C, Li S, Liu Y (2016) A least squares support vector machine model optimized by moth-flame optimization algorithm for annual power load forecasting. Appl Intell 45(4):1166–1178
37. Zhao S, Gao L, Yu D, Tu J (2016) Ant lion optimizer with chaotic investigation mechanism for optimizing SVM parameters. J Comput Front Comput Sci Tech-ch 10(5):722–731
38. Kang F, Li J (2015) Artificial bee colony algorithm optimized support vector regression for system reliability analysis of slopes. J Comput Civ Eng 30(3):04045040
39. Vapnik V. The nature of statistical learning theory. Springer
40. Drucker H, Burges CJ, Kaufman L, Smola AJ, Vapnik V (1997) Support vector regression machines. Adv Neural Inf Process Syst 9:155–161
41. Schölkopf B, Smola AJ, Williamson RC, Bartlett PL (2000) New support vector algorithms. Neural Comput 12(5):1207–1245
42. Mohammadi K, Shamshirband S, Anisi MH, Alam KA, Petković D (2015) Support vector regression based prediction of global solar radiation on a horizontal surface. Energ Convers Manag 91:433–441
43. Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. Tech Rep, Erciyes Univ 200:1–10
44. Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl-Based Syst 96:120–133
45. Narwaria M, Lin W (2010) Objective image quality assessment based on support vector regression. IEEE Trans Neural Networ 21(3):515–519
46. Liang JJ, Qu BY, Suganthan PN, Hernández-Díaz AG (2013) Problem definitions and evaluation criteria for the CEC 2013 special session on real-parameter optimization. Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China, Technical Report, January 2013
47. Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Global Optim 39(3):459–471
48. Shi Y, Eberhart R (1998) A modified particle swarm optimizer. In: 1998 IEEE International Conference on Evolutionary Computation Proceedings, IEEE World Congress on Computational Intelligence, Anchorage, AK, USA, pp 69–73
49. Jadon SS, Bansal JC, Tiwari R, Sharma H (2018) Artificial bee colony algorithm with global and local neighborhoods. Int J Syst Assur Eng Manag 9(3):589–601
50. Chang CC, Lin CJ (2011) LIBSVM: a library for support vector machines. ACM Trans Intel Syst Tech 2(3):27
51. Sheikh HR, Wang Z, Cormack L, Bovik AC (2005) LIVE image quality assessment database release 2. Available: http://live.ece.utexas.edu/research/quality/subjective.htm
52. Larson EC, Chandler D (2010) Categorical image quality (CSIQ) database. Available: http://vision.okstate.edu/csiq
53. Ponomarenko N et al (2015) Image database TID2013: peculiarities, results and perspectives. Signal Process Image Commun 30:57–77


Funding

This work was supported by the National Natural Science Foundation of China under Grants 6217022520, 61503177, 81660299, and 61863028, by the China Scholarship Council under the State Scholarship Fund (CSC No. 201606825041), by the Science and Technology Department of Jiangxi Province of China under Grants 2020ABC03A39, 20161ACB21007, 20171BBE50071, and 20171BAB202033, and by the Education Department of Jiangxi Province of China under Grants GJJ14228 and GJJ150197.

Author information

Corresponding author

Correspondence to Chunquan Li.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix of "A novel hybrid approach of ABC with SCA to optimize parameters of blind image quality assessment based on SVR"

1.1 Part I. Details of benchmark functions and performance evaluation index

1.1.1 Details of CEC2013 benchmark functions

The CEC2013 benchmark suite is used to systematically appraise the performance of ABC-SCA. CEC2013 contains 28 benchmark functions [46], including complex shifted or shifted-rotated functions, listed in Table 11. Functions \(F_{1} {-}F_{5}\), \(F_{6} {-}F_{20}\), and \(F_{21} {-}F_{28}\) are unimodal, multimodal, and composition functions, respectively. They are used to evaluate the algorithms under highly complex conditions.

Table 11 The test suite contains the 28 extremely complex CEC2013 benchmark functions; for each function, both the initialization and search range lie in \([-100, 100]^{D}\); \(D\) and \(F_{\min}(x^{*})\) are the dimensionality and the minimum value of each function, detailed in [46]

1.1.2 Performance evaluation index

The error mean value (Mean) and the standard deviation (Std) are two commonly adopted indicators for evaluating the performance of meta-heuristic algorithms. For each benchmark function, a smaller Mean indicates better performance; when Mean values are equal, a smaller Std indicates better performance. The two indicators are calculated as follows:

$${\text{Mean}} = \mathop \sum \limits_{i = 1}^{{{\text{runs}}}} \frac{{\left[ {F_{i} \left( x \right) - F_{\min } \left( {x^{*} } \right)} \right]}}{{{\text{runs}}}}$$
(20)
$${\text{Std}} = \sqrt {\frac{{\mathop \sum \nolimits_{i = 1}^{{{\text{runs}}}} \left[ {F_{i} \left( x \right) - F_{\min } \left( {x^{*} } \right) - {\text{Mean}}} \right]^{2} }}{{{\text{runs}} - 1}}}$$
(21)

where \(i = 1,2, \ldots ,{\text{runs}}\) indexes the independent runs on the benchmark function; \(F_{i} \left( x \right)\) is the best fitness value obtained by the algorithm in the \(i\)-th run; \(F_{\min } \left( {x^{*} } \right)\) is the global optimum fitness value.
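As a concrete illustration of Eqs. (20) and (21), the short sketch below computes Mean and Std from the best fitness values collected over the independent runs; the function and variable names are illustrative.

```python
# Minimal sketch of Eqs. (20)-(21): given the best fitness value from each of
# `runs` independent runs on one benchmark function, compute the error mean
# and the sample standard deviation of the errors.
import numpy as np

def mean_std_error(best_fitness_per_run, f_min):
    errors = np.asarray(best_fitness_per_run) - f_min            # F_i(x) - F_min(x*)
    mean = errors.sum() / errors.size                             # Eq. (20)
    std = np.sqrt(((errors - mean) ** 2).sum() / (errors.size - 1))  # Eq. (21)
    return mean, std

# Usage (illustrative): results holds 30 best fitness values, known_optimum is F_min(x*)
# mean, std = mean_std_error(results, f_min=known_optimum)
```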

1.2 Part II. Parameters setting of ABC-SCA algorithm

The proposed ABC-SCA mainly contains three tunable parameters: \(\omega_{0}\), \({\text{limit}}\), and \(a\). The parameter \(\omega_{0}\) is employed to balance exploration and exploitation during the iterative search; the parameter \({\text{limit}}\) presets the maximum number of iterations for which the global optimum is allowed to stagnate; the parameter \(a\) controls the range of the parameter \(r_{1}\). ABC-SCA shows different performance with different parameter settings. To ascertain appropriate values for the three parameters, ABC-SCA is executed on CEC2013: the algorithm is run 30 times on each of the 28 benchmark functions, the population size is set to 50, and the maximum number of iterations is set to 6000. Tables 12, 13 and 14 give the experimental results (mean error and standard deviation) for the three parameters under different settings. In Tables 12, 13 and 14, the optimal results are highlighted in bold.

Table 12 Comparisons among different settings of \(\omega_{0}\) with \({\text{limit}} = 700\) and \(a = 1\) being unchanged
Table 13 Comparisons among different settings of \({\text{limit}}\) with \(\omega_{0} = 0.6\) and \(a = 1\) being unchanged
Table 14 Comparisons among different settings of \(a\) with \(\omega_{0} = 0.6\) and \({\text{limit}} = 700\) being unchanged

1.2.1 Comparisons among different settings of \({\varvec{\omega}}_{0}\)

In this experiment, we set the parameter \({\text{limit}} = 700\) and the parameter \(a = 1\), and experimented with \(\omega_{0} = 0.5, 0.6, 0.7\), and \(0.8\). Table 12 gives the corresponding experimental results, and the best results are in bold italics. Table 12 shows that \(\omega_{0} = 0.6\) gives better overall performance for ABC-SCA on CEC2013 than \(\omega_{0}\) = 0.5, 0.7, and 0.8.

1.2.2 Comparisons among different settings of limit

In this experiment, we set the parameter \(\omega_{0} = 0.6\) and the parameter \(a = 1\), and experimented with \({\text{limit}} = 500, 600, 700\), and \(800\). Table 13 gives the corresponding experimental results, and the best results are in bold italics. Table 13 shows that \({\text{limit}} = 700\) gives better overall performance for ABC-SCA on CEC2013 than \({\text{limit}}\) = 500, 600, and 800.

1.2.3 Comparisons among different settings of \({\varvec{a}}\)

In this experiment, we set the parameter \(\omega_{0} = 0.6\) and the parameter \({\text{limit}} = 700\), and experimented with \(a = 1, 2, 3\), and \(4\). Table 14 gives the corresponding experimental results, and the best results are in bold italics. Table 14 shows that \(a = 1\) gives better overall performance for ABC-SCA on CEC2013 than \(a\) = 2, 3, and 4.

The experimental results show that ABC-SCA performs best when \(\omega_{0} = 0.6\), \({\text{limit}} = 700\), and \(a = 1\). Therefore, this parameter set (\(\omega_{0} = 0.6\), \({\text{limit}} = 700\), and \(a = 1\)) is recommended on CEC2013. In particular, because \(a = 1\), \(r_{1}\) decreases linearly from 1 to 0.
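For reference, the sketch below reproduces the standard SCA position update from [44], in which \(r_{1} = a - t\,(a/t_{\max})\); with \(a = 1\), \(r_{1}\) therefore decays linearly from 1 to 0, shifting the search from exploration toward exploitation as iterations proceed. This shows only the plain SCA operator, not the authors' ABC-SCA hybrid; the array shapes and names are illustrative assumptions.

```python
# Standard SCA position update (Mirjalili, 2016 [44]); not the ABC-SCA hybrid.
import numpy as np

def sca_update(positions, best, t, t_max, a=1.0, rng=None):
    """One SCA iteration: positions is (n, dim), best is the (dim,) best solution."""
    rng = rng or np.random.default_rng()
    r1 = a - t * (a / t_max)                      # decreases linearly from a to 0
    n, dim = positions.shape
    r2 = rng.uniform(0.0, 2.0 * np.pi, (n, dim))  # random phase
    r3 = rng.uniform(0.0, 2.0, (n, dim))          # random weight on the best solution
    r4 = rng.uniform(0.0, 1.0, (n, dim))          # sine/cosine switch
    step = np.where(r4 < 0.5, np.sin(r2), np.cos(r2)) * np.abs(r3 * best - positions)
    return positions + r1 * step
```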

1.3 Part III: experimental comparisons with four meta-heuristic algorithms

To verify the performance of the proposed ABC-SCA algorithm, ABC-SCA is compared with four meta-heuristic algorithms on CEC2013: ABC [47], SCA [44], PSO [48], and ABCGLN [49]. Each algorithm is run 30 times on each benchmark function, each algorithm's population size is set to 50 (the population size of ABCGLN follows its original paper), and the maximum number of iterations is set to 6000.

On CEC2013, the error mean values and standard deviation values of the five algorithms on the 28 functions are listed in Table 15. For each of the 28 functions, the best result among the five algorithms is shown in bold.

Table 15 Comparison of ABC-SCA and four meta-heuristic algorithms on CEC2013

We compare the performance of ABC-SCA with the four meta-heuristic algorithms ABC, SCA, PSO, and ABCGLN on CEC2013. The performance of the five algorithms on the unimodal, multimodal, and composition functions is analyzed separately.

On the unimodal functions, ABC-SCA ranks first on \(F_{1} ,F_{2} ,F_{5}\) and second on \(F_{3} ,F_{4}\). ABC ranks third on \(F_{1} ,F_{3} ,F_{5}\) and fourth on \(F_{2} ,F_{4}\). SCA ranks fifth on \(F_{1} ,F_{2} ,F_{3} ,F_{5}\) and third on \(F_{4}\). PSO ranks first on \(F_{3} ,F_{4}\), second on \(F_{1} ,F_{5}\), and third on \(F_{2}\). ABCGLN ranks second on \(F_{2}\), fourth on \(F_{1} ,F_{3} ,F_{5}\), and fifth on \(F_{4}\). Therefore, for unimodal functions, ABC-SCA is superior to the other four algorithms. In the overall ranking across all unimodal functions, ABC-SCA, PSO, ABC, ABCGLN, and SCA are first, second, third, fourth, and fifth, respectively.

On the multimodal functions, ABC-SCA ranks first on six functions, including \(F_{8} ,F_{11} ,F_{14} ,F_{15}\), and \(F_{16}\). It performs poorly only on \(F_{12}\), where it ranks fourth. According to the results in Table 15, the final rank of ABC-SCA, ABC, PSO, ABCGLN, and SCA across all multimodal functions is first, second, third, fourth, and fifth, respectively. Therefore, the overall performance of ABC-SCA in solving multimodal functions is relatively good.

On the composition functions, ABC-SCA ranks first on \(F_{21} ,F_{23} ,F_{26}\), third on \(F_{22}\), and second on the remaining four functions. According to the results in Table 15, the final rank of ABC-SCA, ABCGLN, ABC, PSO, and SCA across all composition functions is first, second, third, fourth, and fifth, respectively.

1.4 Part IV: the details of image databases

To measure the performance of image quality assessment (IQA) algorithms, many organizations and researchers have created IQA databases that contain a series of images together with their corresponding human subjective scores, expressed as a mean opinion score (MOS) or a differential mean opinion score (DMOS). A larger MOS indicates better image quality, whereas a larger DMOS indicates worse quality. When constructing such a database, each high-quality original image is employed as a reference image, simulated distortions are introduced into the reference image, all images are subjectively scored, and a database containing various distorted images and their subjective scores is obtained. Commonly added simulated distortions are Gaussian blur (GB), additive white Gaussian noise (WN), JPEG2000 compression (JP2K), and JPEG compression (JPEG). The most commonly used databases for measuring the performance of IQA algorithms are LIVE [51], CSIQ [52], and TID2013 [53], three traditional synthetically distorted image databases. The reference images in the three databases are corrupted by a single type of distortion at a time, and each simulated distortion is introduced into the reference image at several magnitudes. The three databases are described in detail as follows:

1.4.1 LIVE

The LIVE database contains 29 reference images with resolutions ranging from 634 \(\times\) 438 pixels to 768 \(\times\) 512 pixels. These reference images are degraded with five types of distortion: GB, WN, JP2K, JPEG, and fast fading (FF), resulting in a total of 779 distorted images. The human subjective score of each image is given in the range [0, 100] in the form of DMOS. More details can be found in [51].

1.4.2 CSIQ

The CSIQ database contains 30 reference images with a resolution of 512 \(\times\) 512 pixels. These reference images are degraded with GB, WN, JP2K, JPEG, pink Gaussian noise (PGN), and global contrast decrements (GCD). Each distortion type has 4 or 5 levels, resulting in a total of 866 distorted images. The human subjective score of each image is given in the range [0, 1] in the form of DMOS. More details can be found in [52].

1.4.3 TID2013

The TID2013 database is an upgraded version of the TID2008 database that adds seven new distortion types. It contains 25 reference images with a resolution of 512 \(\times\) 384 pixels. These reference images are degraded with 24 types of distortion, each at 5 different levels, resulting in a total of 3000 distorted images. The human subjective score of each image is given in the range [0, 9] in the form of MOS. The 24 distortion types are: T01 WN; T02 WN in color components more intensive than WN in the luminance component; T03 spatially correlated noise; T04 masked noise; T05 high-frequency noise; T06 impulse noise; T07 quantization noise; T08 GB; T09 image denoising; T10 JPEG; T11 JP2K; T12 JPEG transmission errors; T13 JPEG2000 transmission errors; T14 non-eccentricity pattern noise; T15 local block-wise distortions of different intensities; T16 mean shift (intensity shift); T17 contrast change; T18 change of color saturation; T19 multiplicative Gaussian noise; T20 comfort noise; T21 lossy compression of noisy images; T22 image color quantization with dither; T23 chromatic aberrations; T24 sparse sampling and reconstruction. More details can be found in [53].

The range and trend of human subjective scores differ among image databases; for example, some subjective score ranges are [0, 1], some are [0, 9], and others are [0, 100]. To maintain consistency of the calculated indicators and avoid numerical problems, the human subjective scores in all image databases are normalized to [0, 100].
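A minimal sketch of this normalization, assuming a simple linear min-max mapping (the exact mapping used by the authors is not specified in this excerpt):

```python
# Rescale subjective scores from a database's native range to [0, 100].
import numpy as np

def normalize_scores(scores, lo, hi):
    """Linearly map scores from their native range [lo, hi] to [0, 100]."""
    scores = np.asarray(scores, dtype=float)
    return (scores - lo) / (hi - lo) * 100.0

# Examples (illustrative variable names): CSIQ DMOS lies in [0, 1], TID2013 MOS in [0, 9]
# csiq_norm = normalize_scores(csiq_dmos, 0.0, 1.0)
# tid_norm = normalize_scores(tid2013_mos, 0.0, 9.0)
```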


About this article


Cite this article

Li, C., He, Y., Xiao, D. et al. A novel hybrid approach of ABC with SCA for the parameter optimization of SVR in blind image quality assessment. Neural Comput & Applic 34, 4165–4191 (2022). https://doi.org/10.1007/s00521-021-06435-3



  • DOI: https://doi.org/10.1007/s00521-021-06435-3
