
Beating Posits at Their Own Game: Takum Arithmetic

  • Conference paper

Next Generation Arithmetic (CoNGA 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14666)


Abstract

Recent evaluations have highlighted the tapered-precision posit number format as a promising alternative to uniform-precision IEEE 754 floating-point numbers, which suffer from various deficiencies. Although the posit encoding scheme offers superior coding efficiency for values close to unity, this efficiency diminishes markedly with the distance from unity. The resulting loss of efficiency yields wasteful encodings and a reduced dynamic range, limiting the suitability of posits for general-purpose computer arithmetic.
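To make the efficiency claim concrete, the following minimal sketch (not taken from the paper; it assumes the es = 2 layout of the 2022 Posit Standard and a hypothetical helper name) estimates how many fraction bits an n-bit posit retains for a value of a given magnitude. Because the regime is unary-coded, its run length grows with the distance from unity and crowds out the fraction bits.

```python
# Minimal sketch (assumption: posit layout per the 2022 Posit Standard, es = 2).
# Estimates the fraction bits an n-bit posit keeps for a value whose magnitude
# is roughly 2**scale_exp; the unary regime run grows with |scale_exp|.

def posit_fraction_bits(n: int, scale_exp: int, es: int = 2) -> int:
    r = scale_exp >> es                        # regime value: scale = 16**r * 2**e
    regime_bits = r + 2 if r >= 0 else 1 - r   # unary run plus its terminating bit
    return max(0, n - 1 - regime_bits - es)    # sign, regime and exponent come first

if __name__ == "__main__":
    for e in (0, 8, 32, 64, 120):
        print(f"|value| ~ 2^{e:3d}: {posit_fraction_bits(32, e):2d} fraction bits")
```

Under these assumptions a 32-bit posit keeps 27 fraction bits near unity but none at all around 2^120, which is the efficiency loss the abstract refers to.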

This paper introduces ‘takum’, a novel general-purpose logarithmic tapered-precision number format, and formally proves its properties, combining the advantages that posits offer in low-bit applications with high encoding efficiency for numbers far from unity. Takums exhibit a dynamic range that is asymptotically constant in the bit-string length and, as argued in the paper, well suited to a general-purpose number format. It is demonstrated that takums match or surpass existing alternatives. Moreover, takums address several issues previously identified in posits while exhibiting novel and beneficial arithmetic properties.
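The dynamic-range claim can be illustrated in the same spirit. The sketch below is not code from the paper: it contrasts the largest magnitude of an n-bit posit (es = 2), which grows without bound as roughly 2^(4(n-2)), with the saturating bound of a base-√e logarithmic format whose characteristic is capped in magnitude at 255; that cap follows the takum proposal as understood here and should be treated as an assumption.

```python
import math

# Largest finite magnitude of an n-bit posit with es = 2: useed**(n - 2), useed = 16.
def posit_max(n: int) -> float:
    return 16.0 ** (n - 2)

# Asymptotic takum bound (assumption: base sqrt(e) with the characteristic capped
# in magnitude at 255); once the bit string is long enough for the characteristic
# field to saturate, this bound no longer grows with n.
TAKUM_MAX = math.exp(255 / 2)   # about 2.4e55

for n in (16, 32, 64):
    print(f"n = {n:2d}: posit max ~ 1e{math.log10(posit_max(n)):.0f}, "
          f"takum bound ~ 1e{math.log10(TAKUM_MAX):.0f}")
```

The posit dynamic range keeps widening with the word size, whereas the takum bound levels off, which is what an asymptotically constant dynamic range means in practice.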



Author information


Correspondence to Laslo Hunhold.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Hunhold, L. (2024). Beating Posits at Their Own Game: Takum Arithmetic. In: Michalewicz, M., Gustafson, J., De Silva, H. (eds) Next Generation Arithmetic. CoNGA 2024. Lecture Notes in Computer Science, vol 14666. Springer, Cham. https://doi.org/10.1007/978-3-031-72709-2_1


  • DOI: https://doi.org/10.1007/978-3-031-72709-2_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72708-5

  • Online ISBN: 978-3-031-72709-2

  • eBook Packages: Computer Science, Computer Science (R0)
