Compressed data structures for bi-objective {0,1}-knapsack problems
Introduction
Dealing with a large number of solutions for further processing is a key concern in the field of multi-objective combinatorial optimization. Such processing includes, for example, gathering or producing a collection of data sets within a limited (internal or external) memory, extracting important pieces of information from the data as a whole, and managing the data, which involves supporting several operations and processing them in a reasonable amount of CPU-time. These aspects require the use of efficient data structures.
Solution methods for multi-objective combinatorial optimization (MOCO) problems typically make heavy use of memory resources. Parametric and recursive programming (Eben-Chaime, 1996; Przybylski, Gandibleux, and Ehrgott, 2010), approximation methods (Erlebach et al., 2002), metaheuristics (Köksalan and Phelps, 2007), and exact methods (Bazgan, Hugot, and Vanderpooten, 2009; Delort and Spanjaard, 2013; Figueira, Paquete, Simões, and Vanderpooten, 2013) are examples of approaches that require substantial memory due to the large number of potential solutions that must be kept during the search process. For instance, the experimental analysis reported in Figueira et al. (2013) shows that more than five million solutions need to be kept in memory in order to solve bi-objective knapsack problem instances with fewer than one thousand items; similar results are reported in Paquete et al. (2013) for a related problem solved with a similar approach. Notably, the implementations described in the literature keep only the outcome vectors in the objective space in memory, as is the case for the algorithms of Bazgan et al. (2009b) and Figueira et al. (2013). The memory requirements for also keeping the solutions themselves would therefore be much larger than those reported in the literature, exceeding the usual memory capacity of current personal computers.
In this paper, we consider the bi-objective knapsack problem (BOKP). Several approaches for solving the BOKP exactly or with good approximation quality have been proposed. Klamroth and Wiecek (2000) suggested five models for solving multi-objective integer knapsack problems (MOIKP). Each model is based on a network in which each state represents the set of all non-dominated solutions of a sub-problem, and the authors show how these models can be adapted to different variants of the knapsack problem. Captivo et al. (2003) modeled the BOKP as a bi-objective shortest path problem over an acyclic network and solved it with a labeling algorithm. Bazgan et al. (2009b) proposed three complementary dominance relations for use within a dynamic programming algorithm; these relations are applied among potential solutions to fathom states that cannot lead to efficient solutions. Figueira et al. (2013) improved the quality of the fathoming process of Bazgan et al. (2009b) by proposing new pruning techniques for that method. Delort and Spanjaard (2013) proposed a hybrid dynamic programming approach within a two-phase algorithm for solving the BOKP. Other approaches provide approximations with quality guarantees for the BOKP. Erlebach et al. (2002) developed a fully polynomial-time approximation scheme (FPTAS) which guarantees that, for each efficient solution, another solution is found that is within a factor of (1 + ε) on all objective values. Bazgan et al. (2009a) used dominance relations to develop a new FPTAS for the BOKP.
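The dominance-based fathoming at the core of these dynamic programming approaches can be illustrated with a small sketch (our illustration, not taken from any of the cited papers), which filters the non-dominated outcome vectors from a set of candidates, assuming both objectives are maximized:

```python
def non_dominated(points):
    """Return the non-dominated subset of a list of bi-objective outcome
    vectors (both objectives maximized). Sorting by the first objective
    in decreasing order lets a single sweep keep exactly those points
    whose second objective strictly improves on all points kept so far."""
    kept = []
    best_z2 = float("-inf")
    for p in sorted(set(points), key=lambda p: (-p[0], -p[1])):
        if p[1] > best_z2:
            kept.append(p)
            best_z2 = p[1]
    return kept

pts = [(10, 2), (8, 5), (8, 3), (6, 6), (5, 1)]
print(non_dominated(pts))  # [(10, 2), (8, 5), (6, 6)]
```

States such as (8, 3) and (5, 1) are fathomed because another state is at least as good in both objectives.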
In this paper we study the impact of using different data structures for the bi-objective knapsack problem when solutions in the space of decision variables (e.g., binary strings) must be kept in memory. Two main data structures are investigated: binary decision diagrams (Akers, 1978) and differencing methods based on spanning tree structures (Kang et al., 1977). Although these techniques are well known, they have never been applied to the compression of solutions during the optimization process in a multi-objective framework. For benchmarking purposes, we compare them against more naïve approaches, such as compression algorithms based on the Lempel–Ziv–Welch variant (Welch, 1984). We remark that any compression procedure incurs a significant CPU-time overhead, even if the update is performed incrementally. Therefore, we are interested in understanding the effect of these techniques in terms of the trade-off between memory and CPU-time. In fact, our computational results suggest that these techniques occupy different positions along this trade-off.
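To make the differencing idea concrete, the following sketch (ours; Kang et al. (1977) describe the original spanning-tree scheme, and all names below are hypothetical) stores each binary solution as the set of item indices in which it differs from a parent solution, so that only a fully stored root solution and small deltas are kept in memory:

```python
class DiffStore:
    """Store binary solutions as deltas along a tree rooted at one
    explicitly stored reference solution (node id 0)."""

    def __init__(self, root_solution):
        self.root = list(root_solution)   # reference solution, stored in full
        self.parent = {0: None}           # node id -> parent node id
        self.delta = {0: frozenset()}     # node id -> indices flipped vs. parent
        self.next_id = 1

    def add(self, parent_id, solution):
        """Store `solution` as a delta against the solution at `parent_id`."""
        base = self.reconstruct(parent_id)
        flipped = frozenset(
            i for i, (a, b) in enumerate(zip(base, solution)) if a != b
        )
        node_id = self.next_id
        self.next_id += 1
        self.parent[node_id] = parent_id
        self.delta[node_id] = flipped
        return node_id

    def reconstruct(self, node_id):
        """Rebuild a full solution by accumulating flips up to the root."""
        flips = set()
        while node_id is not None:
            flips ^= self.delta[node_id]
            node_id = self.parent[node_id]
        return [b ^ (i in flips) for i, b in enumerate(self.root)]

store = DiffStore([0, 0, 0, 0])
a = store.add(0, [1, 0, 0, 0])   # differs from the root in item 0
b = store.add(a, [1, 0, 1, 0])   # differs from a in item 2
print(store.reconstruct(b))      # [1, 0, 1, 0]
```

When consecutive (partial) solutions differ in only a few items, as is typical during a dynamic programming sweep, each stored delta is far smaller than the full binary string.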
This paper is organized as follows. Section 2 provides theoretical background. Section 3 is devoted to the presentation of the data structures implemented. Section 4 deals with other methods developed for benchmarking purposes. Section 5 presents a computational study. Finally, some conclusions and avenues for future research are provided.
Section snippets
Theoretical background
In the following, we present the fundamental concepts, definitions, and notation for MOCO problems and for the bi-objective knapsack problem as well as the fundamental framework needed for our algorithmic developments, with an illustrative example.
Compressed data structures
Our goal is to modify Procedure 1 so that the memory required for storing solutions is minimized, without considerably changing its run-time. We introduce two compression techniques to store a set of (partial) solutions, exploring two fundamental compression paradigms: differencing methods and binary decision diagrams. Thereafter, we present three other basic techniques to store solutions for benchmarking purposes.
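The decision-diagram paradigm can be sketched as follows (our illustration under simplifying assumptions, not the data structure implemented in the paper): a set of binary solutions is stored as a quasi-reduced diagram in which identical sub-diagrams are shared through hash-consing, so common suffixes of many solutions are stored only once.

```python
class BDDSet:
    """Store a set of fixed-length binary tuples as a decision diagram
    with node sharing. Internal nodes are (lo, hi) pairs; 'T'/'F' are
    the terminal nodes."""

    TRUE, FALSE = "T", "F"

    def __init__(self, n):
        self.n = n        # number of decision variables
        self.cache = {}   # unique table: identical (lo, hi) pairs share one node

    def _node(self, lo, hi):
        return self.cache.setdefault((lo, hi), (lo, hi))

    def build(self, solutions, level=0):
        """Recursively build the diagram for a set of n-tuples."""
        if level == self.n:
            return self.TRUE if solutions else self.FALSE
        lo = self.build({s for s in solutions if s[level] == 0}, level + 1)
        hi = self.build({s for s in solutions if s[level] == 1}, level + 1)
        if lo == self.FALSE and hi == self.FALSE:
            return self.FALSE
        return self._node(lo, hi)

    def contains(self, root, s):
        """Membership test: follow one branch per decision variable."""
        node = root
        for bit in s:
            if node == self.FALSE:
                return False
            node = node[bit]
        return node == self.TRUE

sols = {(0, 1, 1), (1, 0, 1), (1, 1, 1)}
diagram = BDDSet(3)
root = diagram.build(sols)
print(diagram.contains(root, (1, 0, 1)))  # True
print(len(diagram.cache))                 # 4 shared nodes vs. 6 in a plain trie
```

Here the shared suffix (·, ·, 1) of all three solutions collapses into a single sub-diagram, which is the source of the compression.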
For each technique, we provide the additional amount of
Other methods
The following two methods are used to assess the compression quality of the approaches described in the previous section.
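One of the baseline compressors mentioned earlier for benchmarking is the Lempel–Ziv–Welch scheme (Welch, 1984). A minimal sketch of LZW compression applied to a solution string (our illustration, not the paper's implementation) is:

```python
def lzw_compress(s):
    """Compress a string into a list of dictionary codes. The dictionary
    starts with all single bytes and grows with each new phrase seen."""
    dictionary = {chr(i): i for i in range(256)}
    w, out = "", []
    for c in s:
        if w + c in dictionary:
            w += c                          # extend the current phrase
        else:
            out.append(dictionary[w])       # emit code for the known prefix
            dictionary[w + c] = len(dictionary)
            w = c
    if w:
        out.append(dictionary[w])
    return out

codes = lzw_compress("0010" * 8)  # a repetitive binary solution string
print(len(codes), "codes for 32 characters")
```

Repetitive binary strings, such as similar knapsack solutions concatenated together, compress well under this scheme, but the dictionary must be rebuilt or maintained as solutions are added and removed, which is the CPU-time overhead discussed above.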
Computational study
In this section we describe the experiments performed to evaluate the quality of the approaches described in the previous sections. First, we explain the setup of our experiments. Then, we analyze the CPU-time and the maximum amount of memory required by each method to solve the problem. Each run is terminated once it exceeds 30 000 s.
Conclusions
Multi-objective combinatorial optimization problems usually require a large amount of memory to be solved, because a large number of candidate solutions need to be kept in memory during the run of the algorithms. Several previous studies have reported this problem in the literature. However, those studies only generate and keep the outcome vectors of the problem. In this article, we analyze the impact of using different data
Acknowledgements
This work was partially supported by iCIS (CENTRO-07-ST24-FEDER-002003). P.C. wishes to acknowledge the Portuguese funding institution FCT - Fundação para a Ciência e a Tecnologia (Grant SFRH/BD/91647/2012).
References (27)
- Bazgan et al.: Implementing an efficient FPTAS for the 0–1 multi-objective knapsack problem. Eur. J. Oper. Res. (2009)
- Bazgan et al.: Solving efficiently the multi-objective knapsack problem. Comput. Oper. Res. (2009)
- Ehrgott and Gandibleux: Bound sets for biobjective combinatorial optimization problems. Comput. Oper. Res. (2007)
- Paquete et al.: On a biobjective search problem in a line: formulations and algorithms. Theor. Comput. Sci. (2013)
- Akers: Binary decision diagrams. IEEE Trans. Comput. (1978)
- Andersen et al.: A constraint store based on multivalued decision diagrams
- et al.: BDDs in a branch and cut framework
- Behle and Eisenbrand: 0/1 vertex and facet enumeration with BDDs. Proceedings of the Meeting on Algorithm Engineering & Experiments (2007)
- Bergman et al.: BDD-based heuristics for binary optimization. J. Heuristics (2014)
- Bergman et al.: Discrete optimization with decision diagrams. INFORMS J. Comput. (2016)
- Bergman et al.: Variable ordering for the application of BDDs to the maximum independent set problem
- Captivo et al.: Solving bicriteria 0–1 knapsack problems using a labeling algorithm. Comput. Oper. Res.
- Delort and Spanjaard: A hybrid dynamic programming approach to the biobjective binary knapsack problem. ACM J. Exp. Algorithmics
Cited by (6)
- Knapsack problems — An overview of recent advances. Part II: Multiple, multidimensional, and quadratic knapsack problems. Computers and Operations Research (2022)
  Citation excerpt: "Algorithmic improvements of these methods were proposed by Figueira et al. (2013) and Rong and Figueira (2014). The use of special data structures for the BOKP was evaluated by Correia et al. (2018). Daoud and Chaabane (2018) proposed a reduction strategy for the BOKP and showed, through computational experiments, its effectiveness in reducing the search space."
- Greedy backtracking based local dynamic programming for complete 0-1 knapsack problem. Journal of Huazhong University of Science and Technology (Natural Science Edition) (2024)
- Optimizing over the efficient set of the binary bi-objective knapsack problem. Yugoslav Journal of Operations Research (2023)
- Decision space robustness for multi-objective integer linear programming. Annals of Operations Research (2022)
- A generalized decision support framework for large-scale project portfolio decisions. Decision Sciences (2022)
- Optimal Scheduling Algorithm of Wireless Communication Packets Based on Knapsack Theory. Mobile Information Systems (2022)