
ChatGPT: potential, prospects, and limitations

  • Comment
  • Published:
Frontiers of Information Technology & Electronic Engineering



Author information

Contributions

Jie ZHOU, Pei KE, and Junping ZHANG drafted the paper. Xipeng QIU and Minlie HUANG helped organize, revise, and finalize the paper.

Corresponding authors

Correspondence to Jie Zhou (周杰) or Junping Zhang (张军平).

Ethics declarations

Jie ZHOU, Pei KE, Xipeng QIU, Minlie HUANG, and Junping ZHANG declare that they have no conflict of interest.

Additional information

Project supported by the National Natural Science Foundation of China (No. 62176059)

About this article

Cite this article

Zhou, J., Ke, P., Qiu, X. et al. ChatGPT: potential, prospects, and limitations. Front Inform Technol Electron Eng 25, 6–11 (2024). https://doi.org/10.1631/FITEE.2300089

  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1631/FITEE.2300089
