Abstract
This paper proposes a new method for minimizing and maximizing information and its associated cost, alongside ordinary error minimization. These computational procedures are run as independently of one another as possible. The method addresses a contradiction in conventional approaches, in which many procedures are so intertwined that it is hard to compromise among them. In particular, we minimize information at the expense of cost and then maximize information, in order to reduce the humanly biased information introduced through artificially created input variables. The new method was applied to detecting relations between mission statements and firms' financial performance. Although this relation has been considered one of the main factors in strategic planning in management, past studies could confirm only very small positive relations, and those results proved highly dependent on operationalization and variable selection. These studies suggest that there may be indirect and mediating variables or factors through which organizational members internalize mission statements. If neural networks can infer those mediating variables or factors, new insight into the relation can be obtained. With this in mind, experiments were performed to infer positive relations. The new method, based on minimizing humanly biased effects in the inputs, produced linear, non-linear, and indirect relations that conventional methods could not extract. This study thus shows that neural networks may be able to interpret complex phenomena in the human and social sciences that, in principle, conventional models cannot deal with.
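The alternating control of information and cost described above can be illustrated with a minimal sketch. The information measure (`neuron_information`), the cost (`weight_cost`), the finite-difference gradients, and the two-phase schedule below are all illustrative assumptions made for this sketch, not the authors' actual formulation: information is approximated by how far the hidden neurons' normalized mean firing rates deviate from uniform, and each phase adjusts the weights for its own objective independently of the others.

```python
import numpy as np

rng = np.random.default_rng(0)


def neuron_information(W, X):
    """Normalized information: 0 when all neurons fire equally, 1 when one neuron dominates."""
    h = np.maximum(W @ X.T, 0).mean(axis=1) + 1e-12  # mean ReLU firing rate per neuron
    p = h / h.sum()                                   # treat firing rates as probabilities
    entropy = -(p * np.log(p)).sum()
    max_entropy = np.log(len(p))
    return (max_entropy - entropy) / max_entropy


def weight_cost(W):
    """Cost of the weights, here simply their squared sum."""
    return (W ** 2).sum()


def numerical_grad(f, W, eps=1e-5):
    """Central finite-difference gradient of a scalar objective f with respect to W."""
    g = np.zeros_like(W)
    for idx in np.ndindex(W.shape):
        Wp, Wm = W.copy(), W.copy()
        Wp[idx] += eps
        Wm[idx] -= eps
        g[idx] = (f(Wp) - f(Wm)) / (2 * eps)
    return g


X = rng.normal(size=(50, 4))                 # toy inputs
W = rng.normal(scale=0.5, size=(3, 4))       # hidden-layer weights

# Phase 1: minimize information together with its cost.
for _ in range(30):
    W -= 0.05 * numerical_grad(
        lambda M: neuron_information(M, X) + 0.01 * weight_cost(M), W)
info_after_min = neuron_information(W, X)

# Phase 2: maximize information as a separate, independent procedure.
for _ in range(30):
    W += 0.05 * numerical_grad(lambda M: neuron_information(M, X), W)
info_after_max = neuron_information(W, X)
```

In this sketch the error-minimization step is omitted; it would run as a third independent phase on a supervised loss, in keeping with the paper's aim of decoupling the procedures so that no single combined objective forces a compromise among them.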
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Kamimura, R., Kitajima, R. (2023). Min-Max Cost and Information Control in Multi-layered Neural Networks. In: Arai, K. (eds) Proceedings of the Future Technologies Conference (FTC) 2022, Volume 1. FTC 2022 2022. Lecture Notes in Networks and Systems, vol 559. Springer, Cham. https://doi.org/10.1007/978-3-031-18461-1_1
Print ISBN: 978-3-031-18460-4
Online ISBN: 978-3-031-18461-1
eBook Packages: Intelligent Technologies and Robotics; Intelligent Technologies and Robotics (R0)