Abstract
English is the most spoken language in the world, yet only about 1.35 billion people worldwide use it as their first or second language of communication: although English is the world's main language, roughly 83% of the world's population does not speak it. As a result, most people must spend considerable money and time to learn English, and continuous practice is needed for their language skills to keep developing. The Pocket English Master (PEM) mobile app was developed to address these problems by assuming the role of a full-fledged English teacher. The application first conducts an assessment of the student's present English competence and directs him or her to the appropriate level. Using artificial intelligence, a curriculum is created that matches the student's knowledge level. The application enables students to complete learning activities at their convenience: it evaluates each student's learning style using artificial intelligence and uses reinforcement learning to make the student's daily activity schedule efficient and engaging. An artificial-intelligence component improves students' ability to speak and read English and to engage in natural conversation. The app also incorporates augmented reality, real-time handwritten text scanning, and real-time grammar and spelling error correction.
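The abstract does not detail how reinforcement learning selects a student's daily activities. As a minimal illustrative sketch only (the activity names, reward model, and functions below are assumptions, not the authors' implementation), an epsilon-greedy multi-armed bandit could learn which activities keep a given student engaged:

```python
import random

def recommend_activity(q_values, epsilon=0.1):
    """Epsilon-greedy choice: explore with probability epsilon, else exploit."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def update(q_values, counts, action, reward):
    """Incremental-mean update of the estimated engagement of an activity."""
    counts[action] += 1
    q_values[action] += (reward - q_values[action]) / counts[action]

# Hypothetical daily activities; engagement probabilities are simulated.
activities = ["vocabulary quiz", "speaking drill", "AR reading game"]
q = [0.0] * len(activities)
n = [0] * len(activities)
true_engagement = [0.3, 0.7, 0.5]  # hidden from the recommender

random.seed(0)
for _ in range(2000):
    a = recommend_activity(q)
    reward = 1.0 if random.random() < true_engagement[a] else 0.0
    update(q, n, a, reward)

best = activities[max(range(len(activities)), key=lambda i: q[i])]
```

After enough interactions the estimated values converge toward each activity's true engagement rate, so the scheduler favors the activities a particular student responds to while still occasionally exploring the others.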
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Imasha, A., Wimalaweera, K., Maddumage, M., Gunasekara, D., Manathunga, K., Ganegoda, D. (2023). Pocket English Master – Language Learning with Reinforcement Learning, Augmented Reality and Artificial Intelligence. In: González-González, C.S., et al. Learning Technologies and Systems. ICWL/SETE 2022. Lecture Notes in Computer Science, vol 13869. Springer, Cham. https://doi.org/10.1007/978-3-031-33023-0_7
DOI: https://doi.org/10.1007/978-3-031-33023-0_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-33022-3
Online ISBN: 978-3-031-33023-0
eBook Packages: Computer Science (R0)