In situ column generation for a cutting-stock problem

https://doi.org/10.1016/j.cor.2005.09.007

Abstract

Working with an integer bilinear programming formulation of a one-dimensional cutting-stock problem, we develop an ILP-based local-search heuristic. The ILPs holistically integrate the master and subproblem of the usual price-driven pattern-generation paradigm, resulting in a unified model that generates new patterns in situ. We work harder to generate new columns, but we are guaranteed that new columns give us an integer linear-programming improvement (rather than the continuous linear-programming improvement sought by the usual price-driven column generation). The method is well suited to practical restrictions such as when a limited number of cutting patterns should be employed, and our goal is to generate a profile of solutions trading off trim loss against the number of patterns utilized. We describe our implementation and results of computational experiments on instances from a chemical-fiber company.

Introduction

We assume familiarity with the basics of integer linear programming (see [1], for example). Let $m$ be a positive integer, and let $M \equiv \{1,2,\ldots,m\}$. Stock rolls of (usable) width $W_{\max}$ are available for satisfying demands $d_i$ for rolls of width $w_i\ (<W_{\max})$, $i\in M$. The classical (one-dimensional) CSP (cutting-stock problem) is to minimize the trim loss created in covering the demands. Many models for the problem employ the idea of a cutting pattern. A pattern is a $z\in\mathbb{Z}_{+}^{M}$ satisfying
$$W_{\min} \;\le\; \sum_{i\in M} w_i z_i \;\le\; W_{\max},$$
where $W_{\min}$ is a lower bound on the allowed nominal trim loss per stock roll (note that the total actual trim loss may be greater than the sum of the nominal trim losses, since we allow the possibility of overproduction for our solution). The variable $z_i$ represents the number of rolls of width $w_i$ that are obtained from a single stock roll when we employ this pattern. With this concept, most models seek to determine which patterns to use and how many times to repeat each pattern to cover the demand. This is the classical framework of Gilmore and Gomory (GG) [2], who applied a simplex-method based column-generation scheme to solve the LP relaxation, followed by rounding the column utilizations up.
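To make the GG scheme concrete, here is a minimal sketch (ours, not the paper's) of the pricing step for the roll-minimization form of the LP master: given dual prices $\pi_i$ on the demand constraints, an unbounded-knapsack dynamic program looks for a pattern $z$ with reduced cost $1-\sum_{i\in M}\pi_i z_i<0$. The function name and the data in the example call are illustrative, and the knife, narrow-width, and minimum-width restrictions introduced below are omitted.

```python
# Illustrative Gilmore-Gomory pricing (not code from the paper): find the
# pattern that packs the largest total dual value into one stock roll.

def price_pattern(widths, prices, W_max):
    """Maximize sum(prices[i] * z[i]) subject to sum(widths[i] * z[i]) <= W_max,
    with z integer and nonnegative (an unbounded knapsack)."""
    m = len(widths)
    best = [0.0] * (W_max + 1)      # best[w] = max dual value within width w
    choice = [None] * (W_max + 1)   # last item used to attain best[w]
    for w in range(1, W_max + 1):
        for i in range(m):
            if widths[i] <= w and best[w - widths[i]] + prices[i] > best[w]:
                best[w] = best[w - widths[i]] + prices[i]
                choice[w] = i
    z, w = [0] * m, W_max           # recover a best pattern from the choices
    while w > 0 and choice[w] is not None:
        i = choice[w]
        z[i] += 1
        w -= widths[i]
    return best[W_max], z

# The pattern would be added to the LP master only if 1 - value < 0.
value, z = price_pattern(widths=[45, 36, 31], prices=[0.5, 0.4, 0.35], W_max=100)
```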

We allow for some variations of the classical model to accommodate practical application. In actual instances of the CSP, the cutting machines have a limited number, say $\kappa$, of (movable) knives. Ordinarily there is a pair of stationary knives, one at each end, that cut the roll to the standard width of $W_{\max}$. We do not count the boundary scrap trimmed by these stationary knives as trim loss. Hence we have the added restriction
$$\sum_{i\in M} z_i \;\le\; \kappa + \frac{\sum_{i\in M} w_i z_i}{W_{\max}},$$
or, equivalently (multiplying through by $W_{\max}$ and rearranging),
$$\sum_{i\in M} (W_{\max}-w_i)\,z_i \;\le\; \kappa\,W_{\max},$$
since we want to allow $\kappa+1$ pieces to be cut when there is no nominal trim loss for a pattern.

Also, widths up to some threshold $\omega$ are deemed to be narrow, and there is often a limit on the number $C_{\mathrm{nar}}$ of narrows permitted to be cut from a stock roll. So we add the restriction
$$\sum_{i\in M_{\mathrm{nar}}} z_i \;\le\; C_{\mathrm{nar}},$$
where $M_{\mathrm{nar}} \equiv \{i\in M : w_i \le \omega\}$.
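Putting the restrictions of the last three paragraphs together, a candidate pattern can be screened with a few lines of code. The sketch below uses our own identifiers and the notation above; it is illustrative rather than an implementation from the paper.

```python
def is_feasible_pattern(z, widths, W_min, W_max, kappa, narrow_idx, C_nar):
    """Check a candidate pattern z (z[i] = rolls of width widths[i] per stock roll)
    against the width window, knife limit, and narrow-width limit above."""
    used = sum(widths[i] * z[i] for i in range(len(z)))
    if not (W_min <= used <= W_max):              # W_min <= sum w_i z_i <= W_max
        return False
    knife_lhs = sum((W_max - widths[i]) * z[i] for i in range(len(z)))
    if knife_lhs > kappa * W_max:                 # sum (W_max - w_i) z_i <= kappa W_max
        return False
    if sum(z[i] for i in narrow_idx) > C_nar:     # at most C_nar narrow rolls per stock roll
        return False
    return True
```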

Additionally, we allow for the possibility of not requiring demand to be exactly satisfied. This is often customary for real applications, for example in the paper industry. Moreover, allowing this flexibility can lead to a significant decrease in trim loss. We assume that for some small nonnegative integers $q_i$ and $p_i$, the customer will accept delivery of between $d_i-q_i$ and $d_i+p_i$ rolls of width $w_i$, for $i\in M$. We allow for overproduction, beyond $d_i+p_i$ rolls of width $w_i$, but we treat that as trim loss.
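Under one plausible reading of these conventions (ours, not a formula taken from the paper), the trim loss charged to a solution is the nominal per-roll trim of each employed stock roll plus the width of any production beyond the upper tolerance $d_i+p_i$. A small sketch with our own identifiers:

```python
def total_trim_loss(patterns, usage, widths, demands, p, W_max):
    """patterns[j][i] = z_ij, usage[j] = x_j.  Nominal trim of every employed
    stock roll, plus overproduction beyond d_i + p_i charged as trim loss."""
    m, n = len(widths), len(usage)
    produced = [sum(patterns[j][i] * usage[j] for j in range(n)) for i in range(m)]
    nominal = sum(usage[j] * (W_max - sum(widths[i] * patterns[j][i] for i in range(m)))
                  for j in range(n))
    excess = sum(widths[i] * max(0, produced[i] - (demands[i] + p[i])) for i in range(m))
    return nominal + excess
```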

Finally, we limit the number of patterns allowed to some number $n_{\max}$. This kind of limitation can be quite practical for actual applications, but it is difficult to handle computationally (see [3] and the references therein). Heuristic approaches to this type of limitation include: variations on the simplex-method based GG approach (see [4], [5], [6]), incremental heuristics for generating patterns (see [7] and [8]), local-search methods for reducing the number of patterns as a post-processing stage (see [9], [10], [11]), linear-programming based local search (see [12]), and heuristically-based local search (see [13] and [14]). A computationally intensive Branch, Cut and Price methodology is explored in [15], and later a Branch and Price approach in [16]. A further exact ILP formulation and associated computational experiments are presented in [17]. Our goal is to find a good compromise method: more powerful than existing heuristics, but requiring less effort than exact ILP approaches. Such a tool can be used within an exact ILP approach to produce good candidate solutions, and may also be used when other heuristics fail to give good solutions. So, we view our contribution as complementary to existing approaches. Moreover, it is often hard to quantify exactly the cost of changing cutting patterns. Our method is particularly well suited to generating a profile of solutions trading off trim loss against the number of patterns employed. An appropriate solution can then be selected from such a Pareto curve.

Before getting into the specifics of our model and algorithm, we give a high-level view of our philosophy. As we have mentioned, most models and associated algorithms for CSPs employ the idea of a cutting pattern. With this idea, many methods follow the GG pattern-generation approach. Constraints involving coverage of demand using a known set of patterns are treated in a “master problem” and constraints describing feasible patterns are treated in a subproblem. As we seek an LP solution at the master level in the GG approach, we can communicate the necessary information (capturing demand coverage) to the subproblem via prices. But since we are really after an integer solution to the master, prices are not sufficient to convey to the subproblem the optimal demand coverage based on the known patterns. We address this shortcoming of the usual pattern-generation approach by formulating a holistic model that simultaneously seeks a small number of new patterns, their usages, and the usages of the current set of patterns. That is, the unified model generates new patterns in situ. Another benefit of this approach is that it is easily embedded in a local-search framework, which is ideal for working with constraints that are not modelled effectively in an ILP setting (like the cardinality constraint limiting the number of patterns that we may employ), both for finding a point solution and for generating a profile of solutions.

In Section 1, we describe the bilinear model of our CSP. In Section 2, we modify the model by fixing some patterns and linearizing the remaining bilinear terms. In Section 3, we describe our computational methodology for solving the model of Section 2 and for generating a sequence of such models aimed at producing a profile of good solutions trading off trim loss against the number of patterns utilized. In Section 4, we report on preliminary experiments designed to examine the trade-off between economizing on some variables and the quality of our solutions. In Section 5, we describe the results of computational experiments. Finally, in Section 6, we describe some extensions of our approach.

Section snippets

The bilinear model

Let $N \equiv \{1,2,\ldots,n\}$. We describe a set of $n$ patterns by the columns of an $m\times n$ matrix $Z$. So, $z_{ij}$ is the number of rolls of width $w_i$ that is cut from a stock roll when we employ pattern number $j$. Let $x_j$ be a nonnegative integer variable that represents the number of times pattern number $j$ is employed toward satisfying the demands. We let the variable $s_i$ be the part of the deviation (from demand $d_i$) of the production of width $w_i$ that is within the allowed tolerance. The variable $t_i$ is the part of the

Partial linearization

Let $\beta_i$ be an upper bound on the greatest number of bits used for representing $z_{ij}$ in any solution to (6–9). For example, we can take
$$\beta_i \equiv \left\lceil \log_2\!\left(1+\min\left\{\frac{W_{\max}}{w_i},\ \frac{\kappa}{1-w_i/W_{\max}}\right\}\right)\right\rceil,\quad i\in M\setminus M_{\mathrm{nar}},$$
and
$$\beta_i \equiv \left\lceil \log_2\!\left(1+\min\left\{\frac{W_{\max}}{w_i},\ \frac{\kappa}{1-w_i/W_{\max}},\ C_{\mathrm{nar}}\right\}\right)\right\rceil,\quad i\in M_{\mathrm{nar}}.$$
Let $\beta$ be an upper bound on the greatest number of bits used for representing $x_j$ in an optimal solution to (5–13). For example, we can take $\beta=\lceil\log_2(1+\sum_{j\in N}\bar{x}_j)\rceil$, where $(\bar{x},\bar{z},\bar{s},\bar{t})$ is any feasible solution to (5–13). It is desirable to have the $\beta_i$ and $\beta$ as

Computational methodology

For each $i\in M$, $j\in\hat N$, the variables and equations of (15) describe the boolean quadric polytope (see [20]) of the complete bipartite graph of Fig. 1. Recognizing this, we can strengthen our formulation using known valid inequalities. Already, we can get some mileage by including simple ones that are not of the form (23). Specifically, we utilize all of the remaining facets of the boolean quadric polytopes of all of the 4-cycles of the graph:
$$x_{kj} + z_{lij} + y_{k'l'ij} \;\le\; 1 + y_{klij} + y_{kl'ij} + y_{k'lij},$$

Restricting β

Recall our notation that $\hat N$ indexes the new patterns that we seek to generate at each stage, and $\beta$ is the number of bits that we allow in the binary representation of the usage $x_j$ for a pattern indexed by each $j\in\hat N$. As it stands, the number of 0/1 bit variables for these usages is $\beta|\hat N|$, and we need to limit the number of these variables if we are to have some hope of quickly solving each of the ILPs along the way.
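For concreteness, the bit encoding behind this count is the standard binary expansion $x_j=\sum_{k=0}^{\beta-1}2^k x_{kj}$ with $x_{kj}\in\{0,1\}$, and each product of a usage bit with a pattern bit is replaced by a new 0/1 variable constrained by the usual three linearization inequalities. The sketch below is ours (identifiers are illustrative, not the paper's):

```python
# Hypothetical sketch: binary expansion of an integer usage into beta bits,
# and the standard linearization y = a*b for binary a, b via
#   y <= a,   y <= b,   y >= a + b - 1,   y >= 0.

def encode_usage(x, beta):
    """Binary expansion of an integer usage 0 <= x < 2**beta."""
    assert 0 <= x < 2 ** beta
    return [(x >> k) & 1 for k in range(beta)]

def product_linearization_holds(a, b, y):
    """Check the inequalities that force y = a*b for binary a, b, y."""
    return y <= a and y <= b and y >= a + b - 1 and y >= 0

# Example: beta = 4 bits suffice for any usage up to 15.
bits = encode_usage(11, 4)                                   # [1, 1, 0, 1]
assert sum(2 ** k * b for k, b in enumerate(bits)) == 11
assert product_linearization_holds(1, 1, 1) and not product_linearization_holds(1, 1, 0)
```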

We have already suggested taking $|\hat N|$ to be quite small (no more than 3), but

Computational experiments

We have developed an implementation using the optimization modeling/scripting software AMPL, and we have experimented using the ILP solvers CPLEX 9.0, Xpress-MP 14.27, and the open-source solver CBC (available from www.coin-or.org). Our results did not significantly depend on the ILP solver employed. Our implementation is quite practical for many real CSP instances. Faced with the most difficult instances or stringent running-time limits, one may regard our implementation as a prototype.

We set

Extensions

Of course we could implement a Branch&Cut algorithm to solve the ILPs, rather than the Cut&Branch that we employed. But this could not feasibly be implemented within our paradigm of treating the ILP solver as a black box with a few knobs, so it would have to be the subject of a more involved computational study.

Also within a Branch&Cut framework, rather than instantiate the product variables $y_{klij} \equiv z_{lij}\,x_{kj}$, we could implicitly account for them by applying appropriate cutting planes to the binary

Acknowledgements

The author thanks Ronny Aboudi, Christoph Helmberg and Shunji Umetani for educating him about practical issues concerning CSPs, Laci Ladányi, Janny Leung, Robin Lougee-Heimer and François Vanderbeck for stimulating conversations on the CSP in general, and John J. Forrest both for his tuning of CBC and for his invisible hand. Special thanks to Dave Jensen for enabling the use of CBC from AMPL.

References (32)

  • R.W. Haessler, A heuristic programming solution to a nonlinear cutting stock problem, Management Science (1971)
  • R.W. Haessler, Controlling cutting pattern changes in one-dimensional trim problems, Operations Research (1975)
  • R.E. Johnston, Rounding algorithm for cutting stock problems, Journal of Asian-Pacific Operations Research Societies (1986)
  • H. Foerster et al., Pattern reduction in one-dimensional cutting stock problems, International Journal of Production Research (2000)
  • S. Umetani, M. Yagiura, T. Ibaraki, An LP-based local search to the one dimensional cutting stock problem using a given...
  • S. Umetani et al., A local search approach to the pattern restricted one dimensional cutting stock problem
