Abstract
Recent evaluations have highlighted the tapered posit number format as a promising alternative to uniform-precision IEEE 754 floating-point numbers, which suffer from well-documented deficiencies. Although the posit encoding offers superior coding efficiency for values close to unity, this efficiency diminishes markedly as values deviate from unity. The resulting suboptimal encodings reduce the dynamic range, rendering posits less suitable for general-purpose computer arithmetic.
This paper introduces and formally analyses 'takum', a novel general-purpose logarithmic tapered-precision number format that combines the advantages of posits in low-bit applications with high encoding efficiency for numbers far from unity. Takums exhibit a dynamic range that is asymptotically constant in the bit-string length, which the paper argues is suitable for a general-purpose number format. Takums are shown to match or surpass existing alternatives, and they address several issues previously identified in posits while exhibiting novel and beneficial arithmetic properties.
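To make the dynamic-range contrast concrete, the following minimal sketch (not taken from the paper) computes how the largest representable value of a standard posit grows with bit-string length. It assumes only the 2022 posit standard's exponent-size parameter es = 2, under which maxpos = 2^(4(n-2)); the point is that this bound grows without limit in n, whereas the abstract states that takums approach a fixed dynamic range.

```python
def posit_maxpos_log2(n: int, es: int = 2) -> int:
    """log2 of the largest finite value of an n-bit posit.

    Per the 2022 posit standard (es = 2), useed = 2^(2^es) = 16 and
    maxpos = useed^(n - 2), so log2(maxpos) = (2^es) * (n - 2).
    """
    return (2 ** es) * (n - 2)

# The exponent of maxpos grows linearly with the bit-string length n,
# i.e. the posit dynamic range widens without bound as n increases.
for n in (8, 16, 32, 64):
    print(f"posit<{n}>: maxpos = 2^{posit_maxpos_log2(n)}")
```

For example, a 32-bit posit reaches 2^120, while a 64-bit posit reaches 2^248; a format with an asymptotically constant dynamic range would instead converge to a fixed bound as n grows.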
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Hunhold, L. (2024). Beating Posits at Their Own Game: Takum Arithmetic. In: Michalewicz, M., Gustafson, J., De Silva, H. (eds) Next Generation Arithmetic. CoNGA 2024. Lecture Notes in Computer Science, vol 14666. Springer, Cham. https://doi.org/10.1007/978-3-031-72709-2_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-72708-5
Online ISBN: 978-3-031-72709-2