ABSTRACT
People quit saying "computers don't make mistakes" after the Y2K scare, which generated billions of dollars in hardware upgrades and thereby helped inflate the "dot-com bubble". Computer arithmetic with finite-length bit strings has inherent limitations, and these create legitimate concerns. In computational mathematics it is a pity to abandon rigor just at the point where we have reduced the problem to computer algorithms. In fact, we need not do so. By using pairs of machine numbers as upper and lower bounds on the unknown numbers of interest, and by rounding upper bounds up and lower bounds down, we can modify ordinary floating-point machine arithmetic to produce mathematically rigorous results. This is called "interval arithmetic with outward rounding", and it has been implemented and used for several decades. In particular, it has been used successfully in non-trivial computer-aided proofs in mathematical analysis, such as the proof of the Kepler conjecture (a problem that had been outstanding for more than 300 years), among many others. Practical applications also abound in chemical engineering, structural engineering, economics, aircraft control circuitry design, beam physics, global optimization, differential equations, etc. Rigor in computing depends on the integrity of order relations, and commonly used floating-point hardware can lose that integrity. A few examples are presented, and it is shown how we can remedy the situation and regain rigor in computing by using outwardly rounded interval arithmetic.
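The bound-pair idea described above can be sketched in a few lines. The snippet below is an illustrative sketch, not the implementation discussed in the paper: since pure Python cannot switch the hardware rounding mode, it over-approximates outward rounding by nudging each computed bound one unit in the last place outward with `math.nextafter` (Python 3.9+), which still guarantees a rigorous enclosure.

```python
import math

def interval_add(x, y):
    """Add two intervals (lo, hi) with simulated outward rounding.

    Each bound is nudged one ulp outward after the floating-point
    addition, so the returned interval is guaranteed to contain the
    exact real sum even though the additions themselves may round.
    """
    lo = math.nextafter(x[0] + y[0], -math.inf)  # round lower bound down
    hi = math.nextafter(x[1] + y[1], math.inf)   # round upper bound up
    return (lo, hi)

# 0.1 has no exact binary representation; enclose the real number 0.1
# between the nearest machine numbers below and above it.
tenth = (math.nextafter(0.1, 0.0), math.nextafter(0.1, math.inf))

# Summing 0.1 ten times in plain floating point does NOT give 1.0,
# but the interval sum rigorously encloses the exact answer.
acc = (0.0, 0.0)
for _ in range(10):
    acc = interval_add(acc, tenth)

print(acc[0] <= 1.0 <= acc[1])  # True: 1.0 lies inside the enclosure
```

A production implementation would instead set the processor's rounding direction (e.g. via C's `fesetround`) before each bound computation, avoiding the extra ulp of pessimism; the one-ulp nudge shown here trades a slightly wider interval for portability.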