
Incremental small sphere and large margin for online recognition of communication jamming

Applied Intelligence

Abstract

In the anti-jamming field of radio communication, online and multiclass jamming recognition is fundamental to implementing reasonable anti-jamming measures. The incremental small sphere and large margin (IncSSLM) model is proposed. It learns a compact boundary around the system's own communication signals and known jamming types, which relieves the open-set problem of radio data, and it updates the classifier in real time, which avoids the large memory required for vast jamming data and saves considerable training time. The core of the proposed method is the small sphere and large margin (SSLM) approach, which makes the spherical region enclosing each class as compact as possible, like support vector data description (SVDD), while making the margin between classes as large as possible, like the support vector machine (SVM). In other words, it minimizes intra-class divergence and maximizes inter-class separation. As a result, recognition performance is significantly better than that of open classifiers such as SVM, and training efficiency is considerably better than that of the canonical SSLM algorithm. Numerical experiments on synthetic data, practical complex feature data of high-resolution range profiles (HRRP), and jamming data from radio communication demonstrate that IncSSLM is efficient and promising for multiclass, online recognition of vast, open-set radio jamming.
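For context, a minimal sketch of the canonical SSLM primal problem that the abstract alludes to; the notation below is assumed for illustration and is not taken from this paper's text. The target class is enclosed in a small hypersphere with center \(c\) and radius \(R\), while outlier samples (here, other classes) are pushed outside it with a margin controlled by \(\rho\) and slack variables \(\xi_{i},\tilde{\xi}_{j}\):

$$ \begin{array}{rl} \min\limits_{R,\rho,c,\xi,\tilde{\xi}} & R^{2}-\nu\rho^{2}+C_{1}\sum\limits_{i}\xi_{i}+C_{2}\sum\limits_{j}\tilde{\xi}_{j} \\ \text{s.t.} & \left\| \phi(x_{i})-c \right\|^{2}\le R^{2}-\rho^{2}+\xi_{i} \quad (\text{target samples}) \\ & \left\| \phi(x_{j})-c \right\|^{2}\ge R^{2}+\rho^{2}-\tilde{\xi}_{j} \quad (\text{outlier samples}) \\ & \xi_{i}\ge 0,\ \tilde{\xi}_{j}\ge 0 \end{array} $$

Shrinking \(R\) keeps the sphere small (the SVDD-like term), while enlarging \(\rho\) widens the separation between the two groups (the SVM-like margin term).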


References

  1. Yan Q, Zeng H, Jiang T, Li M, Lou W, Hou Y T (2016) Jamming resilient communication using MIMO interference cancellation. IEEE T Inf Foren Sec 11(7):1486–1499

    Article  Google Scholar 

  2. Ho-Van K, Do-Dac T (2018) Reliability-Security Trade-Off Analysis of cognitive radio networks with jamming and licensed interference. Wirel Commun Mob Com 2018:1–15

    Article  Google Scholar 

  3. Wu Z, Zhao Y, Yin Z, Luo H (2017) Jamming Signals Classification Using Convolutional Neural Network. In: Proceedings of IEEE International Symposium on Signal Processing and Information Technology. Bilbao, Spain, pp 62–67

  4. Azami M E, Lartizien C, Canu S (2017) Converting SVDD scores into probability estimates: Application to outlier detection. Neurocomputing 268:64–75

    Article  Google Scholar 

  5. Wang G, Ren Q, Jiang Z, Liu Y, Xu B (2017) Jamming classification and recognition in transform domain communication system based on signal feature space. J Syst Eng Electron 39(9):1950–1958

    Google Scholar 

  6. Yue G, Wang X, Madihian M (2007) Design of Anti-Jamming Coding for Cognitive Radio. In: Proceedings of IEEE Global Telecommunications Conference. Washington, pp 4190– 4194

  7. Huang W, Liu Z, Lv L, Wang L, Zhang S (2018) A novel Anti-Jamming driven sparse Analysis-Based spread spectrum communication methodology. Int J Pattern Recogn Arti 33(01):1958001

    Article  Google Scholar 

  8. Yue G, Wang X (2009) Anti-jamming coding techniques with application to cognitive radio. IEEE T Wirel Commun 8(12):5996–6007

    Article  Google Scholar 

  9. Cauwenberghs G, Poggio T (2001) Incremental and decremental support vector machine learning. in proc of advances in neural information processing systems, Vancouver, pp 409–415

  10. Laskov P, Gehl C, Kruger S, Muller K (2006) Incremental support vector learning: analysis, Implementation and Applications. J Mach Learn Res 7:1909–1936

    MathSciNet  MATH  Google Scholar 

  11. Molina J F G, Zheng L, Sertdemir M, Dinter D J, Schonberg S, Radle M (2014) Incremental learning with SVM for multimodal classification of prostatic adenocarcinoma. Plos One 9(4):e93600

    Article  Google Scholar 

  12. Xie W, Uhlmann S, Kiranyaz S (2014) Incremental learning with support vector data description. In: Proceedings of international conference on pattern recognition, Stockholm, Sweden, pp 3904–3909

  13. Tax D M J, Laskov P (2003) Online SVM Learning: from Classification to Data Description and Back. In: Proceedings of IEEE Workshop on Neural Network for Signal Processing. Toulouse, pp 499–508

  14. Xu J, Xu C, Zou B, Tang Y Y, Peng J, You X (2019) New incremental learning algorithm with support vector machines. IEEE T Syst Man Cy-S 49(11):2230–2241

    Article  Google Scholar 

  15. Cheng S, Shih F (2007) An improved incremental training algorithm for support vector machines using active query. Pattern Recogn 40:964–971

    Article  Google Scholar 

  16. Gu B, Quan X, Gu Y, Sheng V S, Zheng G (2018) Chunk incremental learning for cost-sensitive hinge loss support vector machine. Pattern Recogn 83:196–208

    Article  Google Scholar 

  17. Katagiri S, Abe S (2006) Incremental training of support vector machines using hyperspheres. Pattern Recogn Lett 27:1495–1507

    Article  Google Scholar 

  18. Laxhammar R, Falkman G (2014) Online learning and sequential anomaly detection in trajectories. IEEE T Pattern Anal 36(6):1158–1173

    Article  Google Scholar 

  19. Liu Y, Liu M (2017) An online learning approach to improving the quality of Crowd-Sourcing. IEEE ACM T Netw 25(4):2166–2179

    Article  Google Scholar 

  20. Ristin M, Guillaumin M, Gall J, Van-Gool L (2016) Incremental learning of random forests for Large-Scale image classification. IEEE T Pattern Anal 38(3):490–503

    Article  Google Scholar 

  21. Jain L C, Seera M, Lim C P, Balasubramaniam P (2014) A review of online learning in supervised neural networks. Neural Comput Appl 25(3-4):491–509

    Article  Google Scholar 

  22. Chen C L P, Liu Z (2018) Broad learning system: an effective and efficient incremental learning system without the need for deep architecture. IEEE T Neur Net Lear 29(1):10–24

    Article  MathSciNet  Google Scholar 

  23. Deng W, Hu J, Zhou X, Guo J (2014) Equidistant prototypes embedding for single sample based face recognition with generic learning and incremental learning. Pattern Recogn 47:3738–3749

    Article  Google Scholar 

  24. Krawczyk B, Woźniak M (2015) One-class classifiers with incremental learning and forgetting for data streams with concept drift. Soft Comput 19(12):3387–3400

    Article  Google Scholar 

  25. Vapnik V N (1995) The nature of statistical learning theory. Springer, New York

    Book  Google Scholar 

  26. Maldonado S, Lopez J (2017) Robust kernel-based multiclass support vector machines via second-order cone programming. Appl Intell 46(4):983–992

    Article  Google Scholar 

  27. Wu M, Ye J (2009) A small sphere and large margin approach for novelty detection using training data with outliers. IEEE T Pattern Anal 31(11):2088–2092

    Article  Google Scholar 

  28. Guo Y, Xiao H, Fu Q (2017) Least square support vector data description for HRRP-based radar target recognition. Appl Intell 46(2):365–372

    Article  Google Scholar 

  29. Li C, Liu K, Wang H (2011) The incremental learning algorithm with support vector machine based on hyperplane-distance. Appl Intell 34(1):19–27

    Article  Google Scholar 

  30. Tax D M J, Duin R P W (2004) Support vector data description. Mach Learn 54:45–66

    Article  Google Scholar 

  31. Guo Y, Xiao H (2018) Multiclass multiple kernel learning using hypersphere for pattern recognition. Appl Intell 48(9):2746–2754

    Article  Google Scholar 

  32. Zeng Y, Zhang M, Han F, Gong Y, Zhang J (2019) Spectrum analysis and convolutional neural network for automatic modulation recognition. IEEE Wirel Commun Le 8(3):929–932

    Article  Google Scholar 

  33. Branco P, Torgo L, Ribeiro R (2015) A survey of predictive modelling under imbalanced distributions. ACM Comput Surv 49(2):1:50

    Google Scholar 

  34. Maratea A, Petrosino A, Manzo M (2014) Adjusted F-measure and Kernel Scaling for imbalanced Data Learning. Inform Sci 257:331–341

    Article  Google Scholar 

Download references

Acknowledgements

This research was partly supported by the National Natural Science Foundation of China (Nos. 61801501 and 61801502) and the Research Development Foundation of Naval University of Engineering (Nos. 425317S123 and 425317S126).

Author information

Corresponding author

Correspondence to Yaxing Li.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix


The derivation of \(\overline{G}\) is as follows:

$$ \begin{array}{rl} \overline{G} &= \left[ \begin{array}{cc} Q & \eta_{c} \\ \eta_{c}^{T} & 2H_{cc} \end{array} \right]^{-1} = \left[ \begin{array}{cc} G^{-1} & \eta_{c} \\ \eta_{c}^{T} & 2H_{cc} \end{array} \right]^{-1} \\ &= \left[ \begin{array}{cc} G+G\eta_{c}\left( 2H_{cc}-\eta_{c}^{T}G\eta_{c} \right)^{-1}\eta_{c}^{T}G & -G\eta_{c}\left( 2H_{cc}-\eta_{c}^{T}G\eta_{c} \right)^{-1} \\ -\left( 2H_{cc}-\eta_{c}^{T}G\eta_{c} \right)^{-1}\eta_{c}^{T}G & \left( 2H_{cc}-\eta_{c}^{T}G\eta_{c} \right)^{-1} \end{array} \right] \end{array} $$
(34)
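Equation (34) is an instance of the standard block matrix inversion identity; stating it in generic symbols (with \(A\), \(b\), \(d\) assumed notation, here \(A=G^{-1}\), \(b=\eta_{c}\), \(d=2H_{cc}\)): for invertible \(A\) and scalar Schur complement \(s=d-b^{T}A^{-1}b\neq 0\),

$$ \left[ \begin{array}{cc} A & b \\ b^{T} & d \end{array} \right]^{-1} = \left[ \begin{array}{cc} A^{-1}+\frac{1}{s}A^{-1}bb^{T}A^{-1} & -\frac{1}{s}A^{-1}b \\ -\frac{1}{s}b^{T}A^{-1} & \frac{1}{s} \end{array} \right] $$

Substituting \(A^{-1}=G\) and \(s=\kappa\) recovers each block of (34).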

Let \(\kappa = 2H_{cc}-\eta_{c}^{T}G\eta_{c}\) (the Schur complement appearing in (34)) and \(\beta_{c} = -G\eta_{c}\); then (31) can be rewritten as:

$$ \begin{array}{rl} \overline{G} &= \left[ \begin{array}{cc} G+\frac{1}{\kappa}\beta_{c}\beta_{c}^{T} & \frac{1}{\kappa}\beta_{c} \\ \frac{1}{\kappa}\beta_{c}^{T} & \frac{1}{\kappa} \end{array} \right] \\ &= \left[ \begin{array}{cc} G & 0 \\ 0 & 0 \end{array} \right]+\frac{1}{\kappa}\left[ \begin{array}{c} \beta_{c} \\ 1 \end{array} \right]\left[ \begin{array}{cc} \beta_{c}^{T} & 1 \end{array} \right] \end{array} $$
(35)

The derivation of \(\kappa\) is as follows:

$$ \begin{array}{rl} \kappa &= 2H_{cc}-\eta_{c}^{T}G\eta_{c} = 2H_{cc}+\eta_{c}^{T}\beta_{c} \\ &= 2H_{cc}+\left[ \begin{array}{c} y_{c} \\ 2H_{cs}^{T} \end{array} \right]^{T}\left[ \begin{array}{c} \beta_{\mu} \\ \beta_{s} \end{array} \right] \\ &= 2H_{cc}+2H_{cs}\beta_{s}+y_{c}\beta_{\mu} \\ &= \gamma_{c} \end{array} $$
(36)
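As a quick numerical sanity check of the rank-one update form in (35), the sketch below compares it against a direct inverse of the augmented matrix; all names and dimensions are illustrative stand-ins, not the quantities maintained by the actual IncSSLM bookkeeping.

import numpy as np

rng = np.random.default_rng(0)
n = 5

# Illustrative stand-ins: G is the current inverse block (symmetric positive
# definite), eta_c the vector coupling in the new sample, H_cc its scalar term.
A = rng.standard_normal((n, n))
G = np.linalg.inv(A @ A.T + np.eye(n))
eta_c = rng.standard_normal((n, 1))
H_cc = 10.0  # arbitrary scalar; the identity only needs kappa != 0

# Direct inverse of the augmented matrix [[G^{-1}, eta_c], [eta_c^T, 2*H_cc]], as in (34)
M = np.block([[np.linalg.inv(G), eta_c],
              [eta_c.T, np.array([[2.0 * H_cc]])]])
G_bar_direct = np.linalg.inv(M)

# Rank-one update form of (35): kappa = 2*H_cc - eta_c^T G eta_c, beta_c = -G eta_c
kappa = 2.0 * H_cc - (eta_c.T @ G @ eta_c).item()
beta_c = -G @ eta_c
u = np.vstack([beta_c, [[1.0]]])
G_bar_update = np.block([[G, np.zeros((n, 1))],
                         [np.zeros((1, n)), np.zeros((1, 1))]]) + (u @ u.T) / kappa

print(np.allclose(G_bar_direct, G_bar_update))  # expected: True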


Cite this article

Guo, Y., Meng, J., Li, Y. et al. Incremental small sphere and large margin for online recognition of communication jamming. Appl Intell 50, 3429–3440 (2020). https://doi.org/10.1007/s10489-020-01717-0
