Abstract
Spectral analysis plays an important role in applications ranging from structural damage detection to deep learning. The IEEE Std 754™ floating-point format (IEEE 754 for short) is supported in hardware by most major vendors, at least for "normal" floats, but it has several well-known limitations; the posit™ format has been proposed as an alternative. The choice of floating-point format plays a crucial role in determining both the accuracy and the performance of spectral analysis. Previous work has evaluated the posit format with respect to accuracy and performance, and its accuracy advantage over IEEE 754 has been established for a variety of applications. For spectral analysis in particular, 32-bit posits are substantially more accurate than 32-bit IEEE 754 floats: our analysis of the Fast Fourier Transform (FFT), a critical component of spectral analysis, shows \(2\times\) better accuracy for a 32-bit posit than for a 32-bit IEEE 754 float. However, a fair comparison of posit with IEEE 754 on a real hardware implementation has been lacking so far: a software simulation of the posit format on an x86 CPU is about \(\mathbf {69.3\times }\) slower than native IEEE 754 hardware for normal floats on an FFT of \(\mathbf {2^{28}}\) points. We propose the use of a software-defined dataflow architecture to evaluate the performance and accuracy of posits in spectral analysis. Our dataflow architecture uses reconfigurable logical elements that express algorithms using only integer operations; it has no floating-point arithmetic unit, and we express both IEEE 754 and posit arithmetic using the same integer operations within the hardware. On this architecture, the posit format is only \(\mathbf {1.8\times }\) slower than IEEE 754 for an FFT of \(\mathbf {2^{28}\approx 268}\) million points, even though the number of operations for posit is almost \(\mathbf {5\times }\) higher than for IEEE 754. With this implementation, we empirically propose a new lower bound for the performance of posits relative to the IEEE 754 format.
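The IEEE 754 side of the accuracy comparison can be illustrated with a small experiment: run the same radix-2 Cooley-Tukey FFT in 32-bit and 64-bit IEEE 754 arithmetic and measure the round-off against a reference transform. This is a minimal sketch using NumPy and a toy `fft_radix2` helper of our own naming, not the paper's implementation, and it says nothing about posits (which NumPy does not support).

```python
import numpy as np

def fft_radix2(x: np.ndarray) -> np.ndarray:
    """Iterative radix-2 Cooley-Tukey FFT that preserves the caller's
    complex dtype, so complex64 input exercises 32-bit arithmetic."""
    n = len(x)
    assert n & (n - 1) == 0, "length must be a power of two"
    x = x.copy()
    # Bit-reversal permutation of the input.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            x[i], x[j] = x[j], x[i]
    # Butterfly stages; twiddle factors are rounded to the working dtype.
    m = 2
    while m <= n:
        w = np.exp(-2j * np.pi * np.arange(m // 2) / m).astype(x.dtype)
        for k in range(0, n, m):
            t = w * x[k + m // 2 : k + m]
            x[k + m // 2 : k + m] = x[k : k + m // 2] - t
            x[k : k + m // 2] += t
        m *= 2
    return x

rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)
ref = np.fft.fft(signal)                              # double-precision reference
err32 = np.max(np.abs(fft_radix2(signal.astype(np.complex64)) - ref))
err64 = np.max(np.abs(fft_radix2(signal.astype(np.complex128)) - ref))
print(f"32-bit error: {err32:.2e}, 64-bit error: {err64:.2e}")
```

The gap between `err32` and `err64` grows with transform length, which is why format accuracy matters at the \(2^{28}\)-point scale discussed above.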
Notes
- 1.
The significand is sometimes called the mantissa, but we prefer to reserve that term for the context of logarithms and hence discourage its use for the floating-point representation of numbers.
- 2.
Formerly called denormalized numbers.
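The abstract's claim that posit arithmetic can be expressed using only integer operations can be made concrete with a decoding sketch. The fields of a 32-bit posit (sign, variable-length regime, 2-bit exponent per the 2022 Posit Standard, and fraction) are all extracted with integer and bit manipulation; `decode_posit32` is a hypothetical helper name for illustration, not code from the paper, and encoding and rounding are omitted.

```python
def decode_posit32(bits: int, es: int = 2) -> float:
    """Decode a 32-bit posit bit pattern (es = 2) into a Python float."""
    N = 32
    if bits == 0:
        return 0.0
    if bits == 1 << (N - 1):
        return float("nan")            # NaR ("Not a Real")
    sign = bits >> (N - 1)
    if sign:                            # negative posits use two's complement
        bits = (1 << N) - bits
    s = format(bits, f"0{N}b")[1:]      # bits after the sign bit
    r0 = s[0]                           # regime: run of identical bits
    run = len(s) - len(s.lstrip(r0))
    k = run - 1 if r0 == "1" else -run  # regime value
    rest = s[run + 1:]                  # skip the regime-terminating bit
    # Missing exponent bits are treated as zero per the standard.
    e = int(rest[:es].ljust(es, "0"), 2) if rest else 0
    frac_bits = rest[es:]
    f = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0
    value = (1.0 + f) * 2.0 ** ((1 << es) * k + e)
    return -value if sign else value

print(decode_posit32(0x40000000))  # 1.0
print(decode_posit32(0x48000000))  # 2.0
```

Unlike IEEE 754's fixed field widths, the regime length varies, which is the main source of the extra integer operations the abstract reports for posit arithmetic.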
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Deshmukh, S., Khankin, D., Killian, W., Gustafson, J., Raz, E. (2025). Evaluation of Posits for Spectral Analysis Using a Software-Defined Dataflow Architecture. In: Dolev, S., Elhadad, M., Kutyłowski, M., Persiano, G. (eds) Cyber Security, Cryptology, and Machine Learning. CSCML 2024. Lecture Notes in Computer Science, vol 15349. Springer, Cham. https://doi.org/10.1007/978-3-031-76934-4_15
Print ISBN: 978-3-031-76933-7
Online ISBN: 978-3-031-76934-4