Abstract
In this work, we propose a digital symbol synchronizer for M-PSK modulations based on the Q-learning algorithm. Through reinforcement learning, the system autonomously adapts to changes in the environment, learning the correct behavior of the Timing Recovery Loop. The proposed synchronizer was tested over an additive white Gaussian noise (AWGN) channel, and its performance was evaluated in terms of modulation error rate and signal-to-noise ratio. The obtained results show improved timing recovery capabilities, with a shorter locking time.
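To make the abstract's idea concrete, the sketch below shows a generic tabular Q-learning loop applied to a toy timing-recovery setting. It is not the authors' implementation: the state quantization (N_STATES), the advance/hold/retard action set, the reward, the step() environment, and the hyperparameters ALPHA, GAMMA, EPS are all illustrative assumptions.

```python
# Hypothetical sketch: tabular Q-learning driving a sampling-phase adjustment.
# The timing error is quantized into discrete states, the agent advances or
# retards the sampling instant, and the reward favors a smaller residual error.
import numpy as np

N_STATES = 16          # assumed quantization levels of the timing error
ACTIONS = (-1, 0, +1)  # retard, hold, or advance the sampling phase
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # assumed learning hyperparameters

Q = np.zeros((N_STATES, len(ACTIONS)))

def step(state, action, rng):
    """Toy environment stand-in: the action shifts the timing-error state
    (plus small noise); reward is higher near the zero-error centre state."""
    next_state = int(np.clip(state + action + rng.integers(-1, 2), 0, N_STATES - 1))
    reward = -abs(next_state - N_STATES // 2)
    return next_state, reward

rng = np.random.default_rng(0)
state = int(rng.integers(N_STATES))
for _ in range(10_000):
    # epsilon-greedy action selection
    a_idx = int(rng.integers(len(ACTIONS))) if rng.random() < EPS else int(np.argmax(Q[state]))
    next_state, reward = step(state, ACTIONS[a_idx], rng)
    # standard Q-learning update rule
    Q[state, a_idx] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, a_idx])
    state = next_state
```

In a real Timing Recovery Loop the state would come from a timing error detector and the action would drive the interpolator or sampling clock; the loop above only illustrates the learning mechanism.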
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Cardarilli, G.C. et al. (2022). A M-PSK Timing Recovery Loop Based on Q-Learning. In: Saponara, S., De Gloria, A. (eds) Applications in Electronics Pervading Industry, Environment and Society. ApplePies 2021. Lecture Notes in Electrical Engineering, vol 866. Springer, Cham. https://doi.org/10.1007/978-3-030-95498-7_6
DOI: https://doi.org/10.1007/978-3-030-95498-7_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-95497-0
Online ISBN: 978-3-030-95498-7
eBook Packages: Engineering, Engineering (R0)