
Is "Deep Learning" Fraudulent in Statistics?

Published: 12 June 2024

Abstract

This is the third theoretical paper on “Deep Learning” misconduct, addressing its statistical aspect; the first and second papers on Deep Learning misconduct are [26, 27]. Regardless of the learning mode, e.g., supervised, reinforcement, adversarial, or evolutionary, almost all Deep Learning projects are rooted in the same misconduct, cheating and hiding: cheating by reporting the fit error as the test error, and hiding by suppressing bad data. This paper presents new mathematical results that explain why Deep Learning is fraudulent in statistics. Furthermore, it presents new statistical reasons why authors must report at least the average error of all trained networks, good and bad, on the validation set, along with the standard deviation. For the first time, this paper reveals that both PSUTS (Post-Selection Using the Test Set) and PSUVS (Post-Selection Using the Validation Set) egregiously replace the mean of random samples with the smallest sample. The paper further alleges that more recent Deep Learning systems, such as Transformer, ChatGPT, Bard, and AlphaDev, are also fraudulent because they are built on the same Deep Learning fraud. Detailed evidence for the alleged frauds is beyond the scope of this paper and should be heard by a court.
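
To make the abstract's central statistical claim concrete, here is a minimal Monte Carlo sketch in Python (illustrative only: the error distribution, its parameters, and all variable names are assumptions of this sketch, not taken from the paper). It models each trained network's validation error as an independent random sample and compares the honest report (the mean over all trained networks) with the post-selected report (the minimum):

    # Illustrative sketch (assumed setup, not the paper's code): model each
    # trained network's validation error as an i.i.d. draw around a true
    # expected error of 0.30 with standard deviation 0.05.
    import numpy as np

    rng = np.random.default_rng(0)
    n_networks = 20     # networks trained from different random initializations
    n_trials = 10_000   # Monte Carlo repetitions

    errors = rng.normal(loc=0.30, scale=0.05, size=(n_trials, n_networks))

    honest = errors.mean(axis=1)        # mean error over all trained networks
    post_selected = errors.min(axis=1)  # only the luckiest network is reported

    print(f"average honest report:        {honest.mean():.4f}")         # about 0.30
    print(f"average post-selected report: {post_selected.mean():.4f}")  # about 0.21
    print(f"optimistic bias:              {(honest - post_selected).mean():.4f}")

Because E[min(X_1, ..., X_n)] < E[(X_1 + ... + X_n)/n] for any non-degenerate i.i.d. errors, the post-selected figure sits systematically below the mean and no longer estimates the expected error on unseen data; this is the bias behind the abstract's demand to report the average error of all trained networks together with its standard deviation.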

Supplemental Material

ZIP File
Zipped directory with LaTeX source files.

References

[1]
R. Berk, L. Brown, A. Buja, K. Zhang, and L. Zhao. 2013. Valid Post-Selection Inference. The Annals of Statistics 41, 2 (2013), 802–837.
[2]
V. Cherkassky and F. Mulier. 1998. Learning from Data. Wiley, New York.
[3]
R. O. Duda, P. E. Hart, and D. G. Stork. 2001. Pattern Classification (2nd ed.). Wiley, New York.
[4]
A. A. Giordano and F. M. Hsu. 1985. Least Square Estimation with Applications to Digital Signal Processing. John Wiley & Sons, New York.
[5]
A. Graves, G. Wayne, M. Reynolds, D. Hassabis, et al. 2016. Hybrid computing using a neural network with dynamic external memory. Nature 538 (2016), 471–476.
[6]
S. Hochreiter and J. Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9, 8 (1997), 1735–1780.
[7]
G. B. Huang and C. K. Siew. 2006. Universal Approximation Using Incremental Constructive Feedforward Networks With Random Hidden Nodes. IEEE Transactions on Neural Networks 17, 4 (2006), 879–892.
[8]
G. B. Huang, K. Z. Mao, C. K. Siew, and D. S. Huang. 2005. Fast modular network implementation for support vector machines. IEEE Transactions on Neural Networks 16, 6 (2005), 1651–1663.
[9]
A. Krizhevsky, I. Sutskever, and G. E. Hinton. 2017. ImageNet classification with deep convolutional neural networks. Commun. ACM 60, 6 (2017), 84–90.
[10]
D. J. Mankowitz, A. Michi, D. Silver, et al. 2023. Faster sorting algorithms discovered using deep reinforcement learning. Nature 618 (2023), 257–263.
[11]
G. J. McLachlan. 1992. Discriminant Analysis and Statistical Pattern Recognition. Wiley, New York.
[12]
O. Russakovsky, J. Deng, L. Fei-Fei, et al. 2015. ImageNet Large Scale Visual Recognition Challenge. Int’l Journal of Computer Vision 115 (2015), 211–252.
[13]
V. Saggio, B. E. Asenbeck, P. Walther, et al. 2021. Experimental quantum speed-up in reinforcement learning agents. Nature 591, 7849 (March 11 2021), 229–233.
[14]
J. Schrittwieser, I. Antonoglou, D. Silver, et al. 2020. Mastering Atari, Go, chess and shogi by planning with a learned model. Nature 588, 7839 (2020), 604–609.
[15]
A. W. Senior, R. Evans, D. Hassabis, et al. 2020. Improved protein structure prediction using potentials from deep learning. Nature 577 (2020), 706–710.
[16]
D. Silver, A. Huang, D. Hassabis, et al. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529 (January 27 2016), 484–489.
[17]
D. Silver, T. Hubert, D. Hassabis, et al. 2018. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362, 6419 (2018), 1140–1144.
[18]
D. Silver, J. Schrittwieser, D. Hassabis, et al. 2017. Mastering the game of Go without human knowledge. Nature 550 (2017), 354–359.
[19]
N. Slonim, Y. Bilu, C. Alzate, R. Aharonov, et al. 2021. An autonomous debating system. Nature 591, 7850 (March 18 2021), 379–384.
[20]
A. Vaswani, N. Shazeer, I. Polosukhin, et al. 2017. Attention Is All You Need. In Proc. Neural Info. Proc. Systems (NIPS). NIPS Foundation, Long Beach, CA, 1–15.
[21]
J. Weng. 2011. Why Have We Passed “neural networks do not abstract well”? Natural Intelligence: the INNS Magazine 1, 1 (2011), 13–22.
[22]
J. Weng. 2015. Brain as an Emergent Finite Automaton: A Theory and Three Theorems. Int’l Journal of Intelligence Science 5, 2 (2015), 112–131.
[23]
J. Weng. 2021. Data Deletions in AI Papers in Nature since 2015 and the Appropriate Protocol. http://www.cse.msu.edu/~weng/research/2021-06-28-Report-to-Nature-specific-PSUTS.pdf. Submitted to Nature, June 28, 2021.
[24]
J. Weng. 2021. Data Deletions in AI Papers in Science since 2015 and the Appropriate Protocol. http://www.cse.msu.edu/~weng/research/2021-12-13-Report-to-Science-specific-PSUTS.pdf. Submitted to Science, Dec. 13, 2021.
[25]
J. Weng. 2022. 20 million-dollar problems for any brain models and a holistic solution: Conscious learning. In Proc. Int’l Joint Conference on Neural Networks. NJ: IEEE Press, Padua, Italy, 1–9. http://www.cse.msu.edu/~weng/research/20M-IJCNN2022rvsd-cite.pdf.
[26]
J. Weng. 2022. 3D-to-2D-to-3D Conscious Learning. In Proc. IEEE 40th Int’l Conference on Consumer Electronics. NJ: IEEE Press, Las Vegas, NV, USA, 1–6.
[27]
J. Weng. 2022. An Algorithmic Theory of Conscious Learning. In 2022 3rd Int’l Conf. on Artificial Intelligence in Electronics Engineering. NY: ACM Press, Bangkok, Thailand, 1–10.
[28]
J. Weng. 2022. A Developmental Network Model of Conscious Learning in Biological Brains. Research Square, 32 pages.
[29]
J. Weng. 2022. On “Deep Learning” Misconduct. In Proc. 2022 3rd International Symposium on Automation, Information and Computing (ISAIC 2022). SciTePress, Beijing, China, 1–8. arXiv:2211.16350.
[30]
J. Weng. 2023. A Protocol for Testing Conscious Learning Robots. In Proc. Int’l Joint Conference on Neural Networks. NJ: IEEE Press, Queensland, Australia, 1–8.
[31]
J. Weng. 2023. Why Deep Learning’s Performance Data Are Misleading. In 2023 4th Int’l Conf. on Artificial Intelligence in Electronics Engineering. NY: ACM Press, Haikou, China, 1–10. arXiv:2208.11228.
[32]
J. Weng, N. Ahuja, and T. S. Huang. 1997. Learning recognition and segmentation using the Cresceptron. Int’l Journal of Computer Vision 25, 2 (Nov. 1997), 109–143.



    Published In

    AIEE '24: Proceedings of the 2024 5th International Conference on Artificial Intelligence in Electronics Engineering
    January 2024
    89 pages
    ISBN: 9798400716850
    DOI: 10.1145/3658835

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Data Availability

    Zipped directory with LaTeX source files. https://dl.acm.org/doi/10.1145/3658835.3658836#aiee24-3002.zip

    Conference

    AIEE 2024
