New trends in building numerical programs

https://doi.org/10.1016/j.compchemeng.2010.07.004

Abstract

This manuscript focuses attention on the possibility that some basic programs for solving numerical problems need to be revised.

The fundamental case of linear system solution and related concepts is proposed as an example of the different ways to approach a problem using object-oriented and procedural programming. The comparison shows that the traditional approach to assessing the conditioning of a system, as well as all existing programs for solving linear systems, needs to be revised. A brief discussion deals with the possibility of parallelizing programs for personal computers and with the interaction between parallel computing and object-oriented programming.

Introduction

Calculation programs are nowadays essential in every scientific area, and their use has spread widely thanks to computers that are ever more powerful and economically accessible to most of the population.

Together with computers, specific programming languages were introduced to solve numerical problems; two of the most important ones were FORTRAN and Pascal, while C joined them only in a later period. All these languages adopt a procedural programming philosophy, which is based on the possibility of writing generalized pieces of code to solve problems of different natures. In practice, it consists of implementing generalized algorithms in a SUBROUTINE so as to solve a family of similar problems involving different physical and chemical phenomena. Years ago, such an approach based on numerical methods radically changed earlier manual computing.

A noticeable example is given by nonlinear algebraic systems: manual calculations required ad hoc techniques for each problem, whereas procedural programming gave the opportunity to implement all available algorithms in one or more generalized subroutines, leading to collections (or libraries) of methods ready to use. Coding the nonlinear system in a specific user subroutine gave scientists and engineers the opportunity to iteratively call the algorithms within these libraries, and a wide category of problems became solvable by means of a common solver.
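As a minimal illustration of this procedural pattern (a sketch, not taken from any specific library; the function names and the convergence test are assumptions), the algorithm, here a scalar Newton iteration, is written once in a generalized routine, and each new problem only requires a different user function:

    #include <cmath>
    #include <cstdio>

    // Generalized procedural solver: Newton's method for f(x) = 0.
    // The routine knows nothing about the physical problem; the user
    // supplies f and its derivative through function pointers.
    double newton(double (*f)(double), double (*df)(double),
                  double x0, double tol = 1e-10, int maxIter = 50) {
        double x = x0;
        for (int i = 0; i < maxIter; ++i) {
            double step = f(x) / df(x);
            x -= step;
            if (std::fabs(step) < tol) break;
        }
        return x;
    }

    // User subroutine coding one specific problem, here x^3 - 2 = 0.
    double f(double x)  { return x * x * x - 2.0; }
    double df(double x) { return 3.0 * x * x; }

    int main() {
        std::printf("root = %.12f\n", newton(f, df, 1.0)); // cube root of 2
    }

The same solver serves any equation whose residual can be coded in a user function, which is exactly the separation between problem formulation and algorithm implementation discussed below.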

The use of generalized subroutines for solving numerical problems brought an important side effect: a clear separation arose between the problem formulation and the algorithm implementation, the latter being accomplished a priori and independently of the problem formulation.

Accordingly, the problem formulation is unaffected by the method adopted to solve it. Conversely, these two steps were often merged with each other in manual calculations. This different philosophy of numerically approaching a problem had two effects, one positive and one negative:

  • The positive one is that calculation programs are more readable to third parties, since the numerical methods are separated from the problem formulation.

  • The negative one is that some features of the specific problem to be solved are unavoidably lost by focusing attention on the development of general subroutines for generic problems. Moreover, there is a certain separation between those who formulate the problem mathematically and those who write the general subroutines, which highlights the following issue: the person formulating the problem is usually not an expert in numerical methods, whereas the person solving it numerically usually has no experience with the original physical problem.

In the last two decades, we have been undergoing two silent revolutions that directly involve the Process Systems Engineering (PSE) and Computer-Aided Process Engineering (CAPE) communities, besides many other scientific and industrial areas: the use of object-oriented programming and of parallel computing on personal computers. Both these transformations have been widely discussed in the literature, as they significantly modify numerical analysis as it was conceived since the second half of the previous century, together with the way numerical methods and algorithms are applied to solve increasingly complex problems and multifaceted issues.

It is worth underlining that the opportunity to exploit shared memory on personal computers for parallel computing has two important aspects that make it different from parallel computing on clusters and distributed memory architectures: the use of graphical processing units (GPUs) and the presence of multiple processors on personal computers.

The present paper focuses only marginally on parallel computing, especially on those topics that are less discussed in the literature: the synergic effect of coupling parallel computing with object-oriented programming, and the new possibilities provided by the spread of shared memory in personal computers as opposed to clusters.

The paper is mainly aimed at an apparently marginal aspect of object-oriented programming: the possibility of considering a numerical problem as a single object and not as a problem to be solved by means of subroutines.

Section snippets

Procedural and object-oriented programming: which one?

Object-oriented programming dramatically changed the way numerical programs are conceived and developed, and it is slowly gaining the upper hand over the traditional procedural philosophy, even though many FORTRAN and C users oppose it or sometimes refuse it a priori. The official motivation is the large amount of programs developed in half a century of procedural programming, especially in the FORTRAN language; the real motivation is, in my opinion, the refusal to learn a new programming language,

Interactions between object-oriented and parallel programming

Object-oriented programming seems to be the optimal partner for parallel computing, since each object can be seen as a micro-program that includes its own data and functions, and all these micro-programs are joined to each other by means of well-defined interfaces: some data can be shared while others cannot. Thinking of parallel and object-oriented programming together, it is possible to write programs for many new algorithms for solving numerical problems which were not
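A minimal sketch of this synergy (illustrative only; the Reactor class and its members are invented for the example, and OpenMP is assumed for the shared-memory part): each object owns its private data, independent objects are updated concurrently, and read-only data are safely shared among threads.

    #include <cstdio>
    #include <vector>

    // Each object is a small self-contained "micro-program": it owns its
    // data (state) and the functions that operate on them.
    class Reactor {
        double temperature_; // private data, not shared across objects
    public:
        explicit Reactor(double t0) : temperature_(t0) {}
        // advance() touches only the object's own state, so distinct
        // objects can be updated concurrently without synchronization.
        void advance(double dt, double ambient) {
            temperature_ += dt * (ambient - temperature_);
        }
        double temperature() const { return temperature_; }
    };

    int main() {
        std::vector<Reactor> reactors;
        for (int i = 0; i < 8; ++i) reactors.emplace_back(300.0 + 10.0 * i);

        const double ambient = 350.0; // read-only, safely shared by all threads

        // OpenMP shared-memory loop: one independent object per iteration.
        #pragma omp parallel for
        for (int i = 0; i < (int)reactors.size(); ++i)
            for (int step = 0; step < 1000; ++step)
                reactors[i].advance(0.001, ambient);

        for (const auto& r : reactors)
            std::printf("%.4f\n", r.temperature());
    }

Compiled with OpenMP support (e.g. -fopenmp), the loop runs in parallel; without it, the pragma is ignored and the program runs serially, which makes the object-per-iteration decomposition a low-risk design choice.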

Parallel programming on personal computers

Years ago, only those (few and lucky) users who had access to clusters used parallel computing and, despite their significant impact, programs exploiting multi-processor architectures were not widespread, as they could be used only on a few machines for complex calculations. Nowadays everything has changed, and the new generations of personal computers should account for it.

It is important to discuss the more recent trend of moving to GPUs (graphical processing units) or hybrid

Numerical problem considered as an object

The third point that strongly recommends re-formulating existing programs is not easy to grasp, as people used to programming with the procedural philosophy cannot fully realize what an object is, what the benefits of the object-oriented approach are, and what really changes conceptually between procedural and object-oriented programming.

Whereas subroutines and functions are the essential elements of procedural programming, objects are micro-programs that exchange information with each
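To make the contrast concrete, the following sketch (under assumed names; it is not the BzzMath interface) models a linear system as an object: the system owns its coefficient matrix, performs the Gauss factorization once on first use, and can then serve any number of right-hand sides, whereas a procedural subroutine would leave this bookkeeping to the caller.

    #include <array>
    #include <cmath>
    #include <cstdio>
    #include <utility>

    // The linear system is the object: it owns its data, remembers whether
    // it has already been factorized, and reuses the factorization.
    template <int N>
    class LinearSystem {
        std::array<std::array<double, N>, N> lu_;
        std::array<int, N> perm_;
        bool factorized_ = false;

        void factorize() { // Gauss factorization with partial pivoting
            for (int i = 0; i < N; ++i) perm_[i] = i;
            for (int k = 0; k < N; ++k) {
                int p = k;
                for (int i = k + 1; i < N; ++i)
                    if (std::fabs(lu_[i][k]) > std::fabs(lu_[p][k])) p = i;
                std::swap(lu_[k], lu_[p]);
                std::swap(perm_[k], perm_[p]);
                for (int i = k + 1; i < N; ++i) {
                    lu_[i][k] /= lu_[k][k];
                    for (int j = k + 1; j < N; ++j)
                        lu_[i][j] -= lu_[i][k] * lu_[k][j];
                }
            }
            factorized_ = true;
        }
    public:
        explicit LinearSystem(const std::array<std::array<double, N>, N>& A)
            : lu_(A) {}

        std::array<double, N> solve(const std::array<double, N>& b) {
            if (!factorized_) factorize(); // done once, reused afterwards
            std::array<double, N> x;
            for (int i = 0; i < N; ++i) { // forward substitution (permuted b)
                x[i] = b[perm_[i]];
                for (int j = 0; j < i; ++j) x[i] -= lu_[i][j] * x[j];
            }
            for (int i = N - 1; i >= 0; --i) { // back substitution
                for (int j = i + 1; j < N; ++j) x[i] -= lu_[i][j] * x[j];
                x[i] /= lu_[i][i];
            }
            return x;
        }
    };

    int main() {
        std::array<std::array<double, 2>, 2> A = {{{2.0, 1.0}, {1.0, 3.0}}};
        LinearSystem<2> sys(A);
        auto x = sys.solve({5.0, 10.0});
        std::printf("x = (%g, %g)\n", x[0], x[1]); // expect (1, 3)
    }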

How can we decide whether a linear system is well- or ill-conditioned?

Whether a system is well- or ill-conditioned should be independent of the algorithm adopted to solve it and, hence, a fortiori independent of whether the system is solved by means of a factorization or an iterative method.

It is well known that the matrix determinant plays a fundamental role in classical analysis, but in practice it is useless for providing any kind of information about the system's well- or ill-conditioning.

The current way is based on the calculation
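The snippet breaks off here; the current way presumably refers to the classical condition number, recalled for completeness (a standard definition, not taken from the truncated text):

    \kappa(A) = \|A\| \, \|A^{-1}\|

A large \kappa(A) is traditionally read as a warning of ill-conditioning; the sections below argue that this reading is meaningful only when the system is written in its standard form.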

Is it possible to improve the conditioning of a linear system?

Object-oriented programming forces one to focus attention on an additional aspect that should always be considered, even in procedural programming: since the physical problem to be solved is the main object in object-oriented programming, the user has the task of formulating it as well as possible. Let us consider the following trivial example to clarify this concept.

An exact interpolation is performed to get a second-order polynomial passing through three support points: t1 = 10000., y1 = 1./6., t2 = 10500., y2 = 
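The data after y2 are truncated in this excerpt, so the following sketch uses stand-in values to reproduce the idea (assumed support points and a hand-rolled 3x3 Gauss solver): written directly in powers of t, the Vandermonde system has entries spanning eight orders of magnitude, whereas shifting the abscissae by their midpoint, a pure reformulation of the same problem, keeps the entries comparable and the solution far more accurate in low precision.

    #include <cmath>
    #include <cstdio>
    #include <utility>

    // Solve a 3x3 system by Gaussian elimination with partial pivoting,
    // templated on the floating-point type to expose conditioning effects.
    template <typename T>
    void solve3(T A[3][3], T b[3], T x[3]) {
        for (int k = 0; k < 3; ++k) {
            int p = k;
            for (int i = k + 1; i < 3; ++i)
                if (std::fabs(A[i][k]) > std::fabs(A[p][k])) p = i;
            for (int j = 0; j < 3; ++j) std::swap(A[k][j], A[p][j]);
            std::swap(b[k], b[p]);
            for (int i = k + 1; i < 3; ++i) {
                T m = A[i][k] / A[k][k];
                for (int j = k; j < 3; ++j) A[i][j] -= m * A[k][j];
                b[i] -= m * b[k];
            }
        }
        for (int i = 2; i >= 0; --i) {
            x[i] = b[i];
            for (int j = i + 1; j < 3; ++j) x[i] -= A[i][j] * x[j];
            x[i] /= A[i][i];
        }
    }

    // Build and solve the interpolation system in type T, either in raw
    // powers of t or in powers of the shifted abscissa s = t - 10500.
    template <typename T>
    void interpolate(bool shifted) {
        const double t[3] = {10000., 10500., 11000.};
        const double y[3] = {1. / 6., 1. / 7., 1. / 8.}; // stand-in ordinates
        T A[3][3], b[3], c[3];
        for (int i = 0; i < 3; ++i) {
            T s = T(shifted ? t[i] - 10500. : t[i]);
            A[i][0] = T(1); A[i][1] = s; A[i][2] = s * s;
            b[i] = T(y[i]);
        }
        solve3(A, b, c);
        std::printf("%s: c0=%g c1=%g c2=%g\n", shifted ? "shifted" : "raw    ",
                    double(c[0]), double(c[1]), double(c[2]));
    }

    int main() {
        // In single precision the raw formulation (entries up to ~1.2e8)
        // loses most significant digits; the shifted one (entries below
        // ~2.5e5) behaves far better.
        interpolate<float>(false);
        interpolate<float>(true);
    }

Note that the shift leaves the Vandermonde determinant unchanged, since it depends only on the differences t_j - t_i; this echoes the earlier remark that the determinant provides no information about conditioning.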

Matrix conditioning and linear system conditioning

At this point we are able to answer the question about the conditioning of a linear system, and the object-oriented approach leads us to an answer that is far from the one traditionally accepted.

It is worth noting that the condition number is traditionally used according to relation (1) independently of its meaning: either solving the linear system to get the vector x, or performing the product Ax to get the right-hand side b. Whereas the system solution always allows bringing back the
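Relation (1) is not reproduced in this excerpt; the two standard bounds it presumably refers to, one for each meaning, are

    \frac{\|\delta x\|}{\|x\|} \le \kappa(A)\,\frac{\|\delta b\|}{\|b\|}    (solving Ax = b for x)

    \frac{\|\delta b\|}{\|b\|} \le \kappa(A)\,\frac{\|\delta x\|}{\|x\|}    (evaluating the product b = Ax)

Both involve the same \kappa(A), but they answer different questions, which is why the conditioning of the matrix and the conditioning of the linear system must be kept distinct.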

If the system is well-conditioned, how can we ensure that the algorithm adopted to solve it is stable?

Optimal pivoting during Gauss factorization is now discussed. As will be clarified in the following, the same considerations apply to both direct and iterative algorithms.

As discussed in the previous examples, pivoting plays a fundamental role in obtaining a good or a bad system solution.

It is not sufficient to ensure that the pivot is nonzero or significantly far from zero at each iteration: some row and column interchanges should be carried out in order to obtain the best pivot. The

What is the sequence of operations to solve a large and sparse system in a stable and efficient way?

Direct methods adopted to solve sparse matrix systems are conceptually equal to those used for problems with dense matrices. From a theoretical point of view, this kind of matrix does not require any specific consideration but, when implementing better performing algorithms, it is advisable to exploit the matrix sparsity to avoid useless calculations and the storage of null coefficients. Only the Gauss factorization will be considered here, since it is the only one able to
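As a reminder of what exploiting sparsity means in practice, here is a minimal sketch of the standard compressed sparse row (CSR) layout (illustrative; it is not the paper's implementation): only the nonzero coefficients are stored, and a matrix-vector product visits them alone.

    #include <cstdio>
    #include <vector>

    // Compressed Sparse Row storage: only nonzeros are kept.
    // rowStart[i] .. rowStart[i+1]-1 index the nonzeros of row i.
    struct CsrMatrix {
        int n;                      // dimension
        std::vector<int> rowStart;  // size n+1
        std::vector<int> col;       // column index of each nonzero
        std::vector<double> val;    // value of each nonzero

        // y = A*x, touching only the stored coefficients.
        std::vector<double> multiply(const std::vector<double>& x) const {
            std::vector<double> y(n, 0.0);
            for (int i = 0; i < n; ++i)
                for (int k = rowStart[i]; k < rowStart[i + 1]; ++k)
                    y[i] += val[k] * x[col[k]];
            return y;
        }
    };

    int main() {
        // 3x3 example: [2 0 1; 0 3 0; 4 0 5] stored with 5 nonzeros.
        CsrMatrix A{3, {0, 2, 3, 5}, {0, 2, 1, 0, 2}, {2, 1, 3, 4, 5}};
        auto y = A.multiply({1.0, 1.0, 1.0});
        std::printf("%g %g %g\n", y[0], y[1], y[2]); // 3 3 9
    }

A direct factorization must additionally control fill-in (nonzeros created during elimination), which is where the choice of pivoting sequence interacts with both stability and efficiency.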

Conclusions

This manuscript showed that the object-oriented programming philosophy forces us to modify some well-known and consolidated concepts of numerical analysis. The fundamental case of linear system solution is discussed by showing the need for revising the following points:

  1. The matrix condition number is useful to check whether a system is well- or ill-conditioned only if the system is written in its standard form.

  2. The user has a task that is too often underestimated: he/she must formulate the system in the

