Abstract
We analyze the competitive ratio and the advice complexity of the online unbounded knapsack problem. An instance is given as a sequence of n items with a size and a value each, and an algorithm has to decide whether or not and how often to pack each item into a knapsack of bounded capacity. The items are given online and the total size of the packed items must not exceed the knapsack’s capacity, while the objective is to maximize the total value of the packed items. While each item can only be packed once in the classical knapsack problem (also called the 0-1 knapsack problem), the unbounded version allows for items to be packed multiple times. We show that the simple unbounded knapsack problem, where the size of each item is equal to its value, allows for a competitive ratio of 2. We also analyze randomized algorithms and show that, in contrast to the 0-1 knapsack problem, one uniformly random bit cannot improve an algorithm’s performance. More randomness lowers the competitive ratio to less than 1.736, but it can never be below 1.693. In the advice complexity setting, we measure how many bits of information (so-called advice bits) the algorithm has to know to achieve some desired solution quality. For the simple unbounded knapsack problem, one advice bit lowers the competitive ratio to \(3/2\). While this cannot be improved with fewer than \(\log _2 n\) advice bits for instances of length n, a competitive ratio of \(1+\varepsilon \) can be achieved with \(O(\varepsilon ^{-1}\cdot \log (n\varepsilon ^{-1}))\) advice bits for any \(\varepsilon >0\). We further show that no amount of advice bounded by a function \(f(n)\) allows an algorithm to be optimal.
We also study the online general unbounded knapsack problem and show that it does not allow for any bounded competitive ratio for both deterministic and randomized algorithms, as well as for algorithms using fewer than \(\log _2 n\) advice bits. We also provide a surprisingly simple algorithm that uses \(O(\varepsilon ^{-1}\cdot \log (n\varepsilon ^{-1}))\) advice bits to achieve a competitive ratio of \(1+\varepsilon \) for any \(\varepsilon >0\).
1 Introduction
Gerhard Woeginger was one of the pioneers in the field of online algorithms. Besides contributing to the research on online scheduling [1,2,3], bin packing [4], and machine covering [5], he is well known for one of the first comprehensive surveys on this topic, which he both co-edited and co-wrote [6]. He also left his trace in the field of knapsack problems, where, among other topics, he contributed to the unbounded knapsack problem [7,8,9] by analyzing the complexity of several of its special cases. We dedicate this paper on the online version of the unbounded knapsack problem to his memory.
The knapsack problem is a prominent optimization problem that has been studied extensively, particularly in the context of approximation algorithms. The input consists of items with different sizes and values and a container, the “knapsack,” with a given size. The goal is to select a subset of the items that fit together into the knapsack, maximizing their combined value. Sometimes the name general knapsack problem or weighted knapsack problem is used to denote this problem, and sometimes just knapsack problem. A variant where the values of the items are proportional to their sizes is sometimes called the proportional knapsack problem, simple knapsack problem, or just knapsack problem. That variant is a natural one: It applies if the items are made of the same material. Another popular name for both variants that emphasizes the fact that every item can be either rejected or taken once is 0-1 knapsack problem. In this paper we will abbreviate the term “knapsack problem” with KP and use the names general KP and simple KP to avoid any confusion.
It is well-known that both the general and simple KP are weakly NP-hard [10, 11]. There is an FPTAS for both problems, which means that they can be approximated arbitrarily well in polynomial time [12,13,14,15,16].
The KP has many practical applications related to packing resources into constrained spaces. Many of these scenarios take place on an industrial level and it is therefore not unreasonable to assume that items can be packed an essentially unlimited number of times. We can therefore analyze these situations even more precisely with a variant of the KP where each item can be packed an arbitrary number of times instead of just once. This variant is known as the unbounded knapsack problem. Again, the problem is NP-hard and allows for an FPTAS [12, 13, 17, 18]. In this paper, we investigate the unbounded KP as an online problem. Instead of seeing all items at once, they arrive one by one and an algorithm has to decide on the spot whether (and how often) to pack an item into the knapsack or to discard it. Every decision is final.
Online algorithms are most commonly judged by their competitive ratio: If we have an input instance I with an optimal solution that fills a fraction \(\textrm{opt}(I)\) of the knapsack and an algorithm packs only a fraction \(\textrm{gain}(I)\), then it has a (strict) competitive ratio of \(\textrm{opt}(I)/\textrm{gain}(I)\) on instance I. The competitive ratio of the algorithm is the supremum of all ratios for all possible input instances. Finally, the competitive ratio of the problem is the best competitive ratio of all algorithms that can solve it. Competitive analysis and the competitive ratio were introduced by Sleator and Tarjan [19] and have been applied to myriads of online problems ever since [6]. For an overview, we also point to the textbooks by Komm [20] and by Borodin and El-Yaniv [21].
Solving both the simple and general 0-1 KP as online problems is very hard. Indeed, there is no algorithm with a bounded competitive ratio. Such problems are called non-competitive. It is easy to see that this is the case [22]: From now on, we will assume that the capacity of our knapsack is exactly 1. We look at two different instances for the online simple 0-1 KP: \(I_1=(\varepsilon )\) and \(I_2=(\varepsilon ,1)\); so the first instance consists of only one tiny item \(\varepsilon >0\), while the second instance starts with the same item, but follows up with an item that fills the whole knapsack on its own. Any algorithm first sees the item \(\varepsilon \) and has to decide whether to pack it or discard it. If the algorithm discards it, the competitive ratio is unbounded for \(I_1\). If it accepts it, the competitive ratio is very large for \(I_2\): The optimum is to pack only the second item, which yields a gain of 1. The algorithm, however, packs only \(\varepsilon \). The competitive ratio is therefore \(1/\varepsilon \), which is arbitrarily large. Note that an average case analysis instead of worst-case analysis paints a much nicer picture [22, 23].
The situation is quite different for the simple unbounded KP. There is a simple algorithm that achieves a competitive ratio of 2: Just pack the first item you see as often as possible. This strategy will fill more than half of the knapsack. The optimum can never be better than 1, resulting in a competitive ratio of at most 2. Conversely, the simple unbounded KP cannot have a competitive ratio of less than 2. Again, consider two input instances \(I_1=(1/2+\varepsilon )\) and \(I_2=(1/2+\varepsilon ,1)\). Any deterministic algorithm will fill the knapsack at most half-optimal on either \(I_1\) or \(I_2\).
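This first-item strategy is easy to make concrete. The following Python sketch (our own illustration, not from the paper; `pack_first_item` is our name for it) simulates it on a stream of item sizes:

```python
def pack_first_item(items, capacity=1.0):
    """Pack the first fitting item as often as possible; everything
    else is discarded. This always fills more than half the knapsack."""
    gain = 0.0
    for x in items:
        if gain == 0.0 and 0 < x <= capacity:
            copies = int(capacity // x)  # as often as it fits
            gain = copies * x
        # decisions are final: all later items are discarded
    return gain

# On I_2 = (0.6, 1.0) the gain is 0.6 while the optimum is 1.0,
# giving a ratio of 5/3 < 2 on this instance.
print(pack_first_item([0.6, 1.0]))  # 0.6
```

If the first item \(x\) satisfies \(x\le 1/2\), the packed copies amount to \(\lfloor 1/x\rfloor \cdot x>1-x\ge 1/2\); otherwise the single copy already exceeds 1/2, matching the argument in the text.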
This is a first example of a variant of the KP that is competitive. There are many other such variants and a curious pattern emerges: Most of them have also a competitive ratio of 2. For example, there is the simple KP with reservations. In this model, the online algorithm can still pack or discard an item, but there is also a third possibility: For an item of size s, it may pay a reservation cost of \(\alpha s\), where \(\alpha \) is a constant – the reservation factor. Reserved items are not packed, but also not discarded forever. They can be packed at a later point in time. The gain of the algorithm is the combined size of all items minus the reservation costs. It is not surprising that the competitive ratio grows arbitrarily large as we make the reservation cost larger and larger. Naturally, if the reservation factor exceeds 1, the problem becomes non-competitive since the option of reserving becomes useless and we are back at the classical KP. What is surprising, however, is what happens if we decrease the reservation factor towards 0: In that case, the competitive ratio does not approach 1, but 2 [24].
The KP with removal allows the online algorithm to remove items from the knapsack. Once removed, they are discarded and cannot be used in the future. Iwama and Taketomi showed that a best possible online algorithm has the golden ratio \(\phi =(1+\sqrt{5})/2\approx 1.618\) as its competitive ratio [25] for the simple KP with removal, while the general KP with removal remains non-competitive. The variant where removals are allowed, but at a cost, has also been investigated [26], as well as a variant where a limited number of removed items can be packed again [27].
We have already seen that the simple unbounded KP has a competitive ratio of 2. To further investigate the problem, we consider randomized algorithms. For the simple 0-1 KP, it is known that the best possible competitive ratio for randomized online algorithms is 2 [28, 29]. Surprisingly, a single random bit is sufficient to achieve this ratio and increasing the number of random bits does not help at all. For the simple unbounded KP, one random bit does not help and the competitive ratio stays at 2. Using two random bits, however, lowers the competitive ratio to \(24/13<1.847\) in Corollary 2. We show that no randomized algorithm can have a competitive ratio of less than 1.693 in Theorem 3 and provide a randomized algorithm with a competitive ratio of less than 1.736 in Theorem 4.
Instead of only using classical competitive analysis, Dobrev, Královič, and Pardubská introduced the notion of advice complexity of online algorithms [30], which was later refined by Emek et al. [31] as well as Hromkovič et al. [32] and Böckenhauer et al. [33]. In this paper, we use the tape model by Böckenhauer et al. An online algorithm with advice is allowed to read additional information about the instance at hand which is provided by an all-knowing oracle. The question is how many advice bits have to be read to achieve a given competitive ratio or even to become optimal. Since its introduction, the concept of advice complexity has seen a myriad of applications. Several variants of the KP were investigated [28, 29, 34]. For an overview of further advice complexity results, see the textbook by Komm [20] and the survey by Boyar et al. [35]. There are also many more recent results. Some of them apply the classical framework of advice complexity to a wealth of other online problems including the minimum spanning tree problem [36], several variants of online matching [37,38,39], node and edge deletion problems [40,41,42], bin covering [43], two-way trading [44], dominating set [45, 46], disjoint path allocation [47], or 2D vector packing [48]. Advice complexity was also used in online-like settings such as exploring a graph by guiding an autonomous agent [49,50,51,52] or analyzing priority algorithms as a model of greedy strategies [53,54,55].
Two recent strands of research focus on relaxing the condition that the advice is given by an all-knowing oracle. In the model of untrusted advice [56,57,58], one tries to guarantee that the online algorithm can make good use of the advice as long as it is of high quality, while the solution quality does not deteriorate by much in the presence of faulty advice. In a closely related model, a machine-learning approach is used for generating the advice; the power of this machine-learned advice has been analyzed for various problems [59,60,61,62,63,64]. In a very recent paper, Emek et al. [65] amend randomized algorithms by substituting some of the random bits by advice bits without revealing this to the algorithm. In this paper, we focus only on the classic advice model.
For the simple 0-1 KP, Böckenhauer et al. [29] showed that one bit of advice yields a competitive ratio of 2. This ratio cannot be improved by increasing the amount of advice up to \(o(\log n)\), where n is the number of items. With \(O(\log n)\) advice bits, however, a competitive ratio arbitrarily close to 1 can be achieved. Another large gap follows: To achieve optimality, at least \(n-1\) advice bits are needed. The latter bound has been recently improved by Frei to n bits [66], which is optimal as n bits can tell the online algorithm for every item whether to take it or to discard it.
The situation looks similar for the general 0-1 KP. We need \(\Theta (\log n)\) advice bits to achieve an arbitrarily good competitive ratio [29].
For the simple unbounded KP, we establish the following results: With one advice bit, the optimal competitive ratio is 3/2. This does not improve for \(o(\log n)\) bits of advice. Again, with \(O(\log n)\) advice bits, a competitive ratio of \(1+\varepsilon \) for an arbitrarily small constant \(\varepsilon >0\) can be achieved. The online general unbounded KP stays non-competitive for a sublogarithmic number of advice bits and becomes \((1+\varepsilon )\)-competitive with \(O(\log n)\) advice bits. The general unbounded KP stays non-competitive for randomized algorithms without advice. Table 1 contains an overview of the new results on the unbounded KP and compares them to the known results on the 0-1 KP.
The paper is organized as follows. In Sect. 2, we introduce all necessary definitions and background information. Section 3 contains our results on randomized online algorithms for the simple unbounded KP. In Sect. 4, we consider the advice complexity of the online simple unbounded KP. Section 5 is devoted to the results for the general case. In Sect. 6, we conclude with a collection of open questions and reflections on the research area in general. Throughout this paper, \(\log (\cdot )\) denotes the binary logarithm.
2 Preliminaries
An online KP instance of length n consists of a sequence of items \(x_1,\ldots ,x_n\), where the algorithm does not know n beforehand. Each item has a size \(s_i\in [0,1]\) and a value \(v_i\ge 0\); for the online simple KP, \(s_i=v_i\) for all \(1\le i\le n\).
An optimal solution is a subset of items that fit together in a knapsack of size 1 and maximize the sum of their values. We denote the total value of an optimal solution for an instance I by
\[\textrm{opt}(I)=\max \Bigl \{\sum _{i\in S} v_i \Bigm | S\subseteq \{1,\ldots ,n\},\ \sum _{i\in S} s_i\le 1\Bigr \}\]
for the 0-1 KP, and by
\[\textrm{opt}(I)=\max \Bigl \{\sum _{i=1}^{n} k_i v_i \Bigm | k_1,\ldots ,k_n\in \mathbb {N},\ \sum _{i=1}^{n} k_i s_i\le 1\Bigr \}\]
for the unbounded KP. An online algorithm \(\textsc {Alg}\) maps a sequence of items to a sequence of decisions. In the 0-1 KP, the decision is to take the last item or to discard it. In the unbounded KP, the algorithm decides how often the last item is packed into the knapsack. We define
\[\textrm{gain}_{\textsc {Alg}}(I)=\sum _{i=1}^{n} k_i v_i,\quad \text {where } k_i \text { is the number of times } \textsc {Alg} \text { packs item } x_i,\]
both for the 0-1 KP and the unbounded KP. In the case of a randomized knapsack algorithm, the total value of the packed items is a random variable. We then define \(\textrm{gain}_{\textsc {Alg}}(I)\) as the expectation of the total value.
For a deterministic algorithm, we define its (strict) competitive ratio on an instance I to express the relationship of what it packs to the optimum:
\[c_{\textsc {Alg}}(I)=\frac{\textrm{opt}(I)}{\textrm{gain}_{\textsc {Alg}}(I)}.\]
If \(\textrm{gain}_{\textsc {Alg}}(I)\) refers to the expected gain of a randomized algorithm \(\textsc {Alg}\), we speak of the competitive ratio in expectation.
Let us look at an example for the simple 0-1 KP. We have a very primitive algorithm \(\textsc {Alg}\) that just packs every item it sees with a probability of 1/2 into the knapsack if it fits. Otherwise, the item is discarded. What are \(\textrm{opt}(I)\) and \(\textrm{gain}(I)\) for \(I=(x_1,x_2,x_3)\) with \(s_1=x_1=1/4\), \(s_2=x_2=1/3\), and \(s_3=x_3=1/2\)? Not all items fit into the knapsack together, but it is clear that the best we can do is to pack \(x_2\) and \(x_3\). So \(\textrm{opt}(I)=1/3+1/2=5/6\).
It is harder to find out what \(\textsc {Alg}\) is packing, but it can be established by looking at all cases:
Probability | Packed items | Total value |
---|---|---|
1/8 | none | 0 |
1/8 | \(x_1\) | 1/4 |
1/8 | \(x_2\) | 1/3 |
1/8 | \(x_3\) | 1/2 |
1/4 | \(x_1,x_2\) | 7/12 |
1/8 | \(x_1,x_3\) | 3/4 |
1/8 | \(x_2,x_3\) | 5/6 |
The expected gain is thus
\[\textrm{gain}_{\textsc {Alg}}(I)=\frac{1}{8}\cdot \left( \frac{1}{4}+\frac{1}{3}+\frac{1}{2}+\frac{3}{4}+\frac{5}{6}\right) +\frac{1}{4}\cdot \frac{7}{12}=\frac{23}{48},\]
and the competitive ratio in expectation is \(\textrm{opt}(I)/\textrm{gain}_{\textsc {Alg}}(I)=(5/6){\bigm /}(23/48)=40/23\approx 1.739\).
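The case analysis above can also be checked mechanically. The following sketch (our own illustration, not from the paper) enumerates the eight equally likely outcomes of the three coin flips with exact rational arithmetic:

```python
from fractions import Fraction as F
from itertools import product

items = [F(1, 4), F(1, 3), F(1, 2)]

def gain(flips, capacity=F(1)):
    """Gain of the coin-flipping algorithm for one outcome of the flips."""
    packed = F(0)
    for x, take in zip(items, flips):
        if take and packed + x <= capacity:
            packed += x
    return packed

# Average over the 8 equally likely outcomes of the three coin flips.
expected = sum(gain(flips) for flips in product([0, 1], repeat=3)) / 8
opt = F(1, 3) + F(1, 2)
print(expected, opt / expected)  # 23/48 40/23
```

Note that the outcomes \((x_1,x_2)\) and \((x_1,x_2,x_3)\) collapse to the same packing because \(x_3\) no longer fits, which is why that row of the table carries probability 1/4.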
Next, we define the competitive ratio of an algorithm. This is basically the worst-case competitive ratio over all possible instances. Finally, the competitive ratio of an online problem is the competitive ratio of the best algorithm:
\[c_{\textsc {Alg}}=\sup _{I}\, c_{\textsc {Alg}}(I)\qquad \text {and}\qquad c=\inf _{\textsc {Alg}}\, c_{\textsc {Alg}}.\]
For a randomized algorithm, the competitive ratio in expectation is defined analogously. Computing the competitive ratio is a typical min-max optimization problem, which can be seen as a game between an algorithm and an adversary who chooses the instance. In the context of online algorithms with advice, the situation stays basically the same. The algorithm, however, also has access to a binary advice string that is provided by an oracle that knows the whole instance beforehand. In that way, we can define the competitive ratio of algorithms with advice in the same way. We say that an algorithm \(\textsc {Alg}\) uses f(n) bits of advice if it reads at most the first f(n) bits from the advice string provided by the oracle on all instances consisting of n items. Note that the length of the advice string must not be less than f(n); otherwise, the oracle could communicate additional information via the length of the advice string.
3 Randomized Algorithms
In this section we consider algorithms for the online unbounded KP that may use randomization, but not advice. As a baseline, we first analyze what can be done without randomness. A simple observation shows that deterministic algorithms can be 2-competitive for the online simple unbounded KP, but not better.
Theorem 1
The competitive ratio for the online simple unbounded KP is 2.
Proof
We start with the upper bound by providing an algorithm with the claimed competitive ratio which simply packs the first item as often as it fits into the knapsack. If the first item is \(x_1> 1/2\), packing it once achieves a gain of at least 1/2. If the first item is \(x_1\le 1/2\), packing it as often as possible achieves a gain of at least \(1-x_1\ge 1/2\). In each case we achieve a gain of at least 1/2 and thus a competitive ratio of at most 2.
Conversely, consider the instances
\[I_1=(1/2+\varepsilon )\qquad \text {and}\qquad I_2=(1/2+\varepsilon ,\,1),\]
where \(0<\varepsilon <1/2\). Any algorithm will either have to pack the item \(1/2+\varepsilon \) or not. If it does not, its competitive ratio on \(I_1\) is unbounded. If it does, its competitive ratio on \(I_2\) is
\[\frac{1}{1/2+\varepsilon }=\frac{2}{1+2\varepsilon },\]
which tends to 2 as \(\varepsilon \) tends to 0. \(\square \)
In the simple 0-1 KP, the competitive ratio improves from “unbounded” to 2 when the algorithm is allowed access to one random bit [29]. In the unbounded variant, this is no longer the case. One random bit is of no use to the algorithm in terms of competitive ratio:
Theorem 2
An online algorithm for the simple unbounded KP that uses one uniformly random bit cannot have a competitive ratio of less than 2.
Proof
Let \(\textsc {Alg}\) be any algorithm and once again consider the instances \(I_1\) and \(I_2\) from the proof of Theorem 1. In both instances, \(\textsc {Alg}\) will choose the item \(1/2+\varepsilon \) with probability p. Since \(\textsc {Alg}\) only has access to one random bit, we know that \(p\in \{0,1/2,1\}\). We have seen in Theorem 1 that the competitive ratio of \(\textsc {Alg}\) is at least 2 on one of these instances if \(p\in \{0,1\}\) and, if \(p=1/2\), its competitive ratio on \(I_1\) is
\[\frac{1/2+\varepsilon }{\frac{1}{2}\cdot (1/2+\varepsilon )}=2.\]
\(\square \)
Additionally, in the 0-1 variant, the competitive ratio of an algorithm did not improve with additional random bits after the first one [29]. This is also no longer true in the unbounded case. For example, with two random bits, we can achieve a competitive ratio of 24/13 as we will see in Corollary 2.
A simple way to prove a lower bound on the competitive ratio of any randomized algorithm is to provide a set of instances and show that the expected gain has to be relatively small on at least one of those instances for every randomized algorithm. Let \(\varepsilon >0\) be very small. We look at two instances \(I_1=(1/2+\varepsilon )\) and \(I_2=(1/2+\varepsilon ,1)\). The optimal strategy is to take the first item for \(I_1\) and the second one for \(I_2\). Every randomized algorithm takes the first item with some probability \(p_0\). Its expected gain on \(I_1\) is therefore \(p_0\cdot (1/2+\varepsilon )\) and its expected gain on \(I_2\) is at most \(p_0\cdot (1/2+\varepsilon )+(1-p_0)\cdot 1\), because it can pack the second item only if it ignored the first one.
Its competitive ratio in expectation is at least
\[\max \left\{ \frac{1/2+\varepsilon }{p_0\cdot (1/2+\varepsilon )},\ \frac{1}{p_0\cdot (1/2+\varepsilon )+(1-p_0)}\right\} =\max \left\{ \frac{1}{p_0},\ \frac{1}{1-p_0\cdot (1/2-\varepsilon )}\right\} .\]
The competitive ratio of a best possible algorithm is then at least
\[\min _{p_0\in [0,1]}\max \left\{ \frac{1}{p_0},\ \frac{1}{1-p_0\cdot (1/2-\varepsilon )}\right\} =\frac{3}{2}-\varepsilon ,\]
attained at \(p_0=2/(3-2\varepsilon )\), and we can conclude that every randomized algorithm has a competitive ratio of at least 3/2 since we can make \(\varepsilon \) arbitrarily small.
While this lower bound is sound, it is not the best possible. We can use the same argument with three instead of two instances in order to improve it. Let \(I_1=(1/2+\varepsilon )\), \(I_2=(1/2+\varepsilon ,3/4)\), and \(I_3=(1/2+\varepsilon ,3/4,1)\). Using the same argument, where \(p_i\) denotes the probability of taking the i-th item, the resulting lower bound is
\[\min _{p_1+p_2+p_3\le 1}\ \max \left\{ \frac{1/2+\varepsilon }{p_1(1/2+\varepsilon )},\ \frac{3/4}{p_1(1/2+\varepsilon )+p_2\cdot 3/4},\ \frac{1}{p_1(1/2+\varepsilon )+p_2\cdot 3/4+p_3}\right\} ,\]
which converges to \(19/12>1.58\) as \(\varepsilon \rightarrow 0\), which is better than 3/2. We can still improve this bound by looking at four instances and so on.
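The equal-ratio calculation behind such bounds can be automated. The sketch below (our own illustration, assuming the equalizing probabilities that make all prefix ratios coincide) reproduces 19/12 and shows the bound approaching \(1+\ln 2\) as the grid of item values is refined:

```python
from fractions import Fraction as F

def equal_ratio_bound(values):
    """Lower bound from the equal-ratio argument: choosing acceptance
    probabilities so that every prefix instance has the same competitive
    ratio c forces c = 1 + sum_k (1 - v_{k-1}/v_k), since the
    probabilities must sum to one."""
    c = F(1)
    for prev, cur in zip(values, values[1:]):
        c += 1 - F(prev) / F(cur)
    return c

# Three instances with item values 1/2, 3/4, 1 (the epsilon is ignored):
print(equal_ratio_bound([F(1, 2), F(3, 4), F(1)]))  # 19/12

# Refining to item values 1/2, 1/2 + 1/2n, ..., 1 pushes the bound
# towards 1 + ln 2:
n = 1000
fine = [F(1, 2) + F(i, 2 * n) for i in range(n + 1)]
print(float(equal_ratio_bound(fine)))
```

For the refined grid the sum telescopes to \(1+H_{2n}-H_n\), which is exactly the quantity appearing in the proof of Theorem 3 below.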
In order to squeeze the last drop out of the lemon, we can look at \(n+1\) instances like that and let n tend to infinity. The instances will be all non-empty prefixes of the sequence \((1/2+\varepsilon ,1/2+1/2n,1/2+2/2n,\ldots ,1/2+n/2n)\); let \(I_k\) denote the prefix of length \(k+1\), for \(k=0,\ldots ,n\). The above calculation becomes quite involved as it leads to a rather complicated recurrence relation. We will therefore first imagine a continuous variant of the online KP. In order to avoid the \(\varepsilon \), we assume for the moment that an item of size exactly 1/2 can be packed only once into the knapsack. Moreover, we assume that we see an item of size 1/2 that gradually grows in size until it reaches some maximum size \(s\in [1/2,1]\) that is unknown to the algorithm. An algorithm sees the item growing and can decide to pack it at any point in time. If the algorithm waits too long, however, it will be too late: As soon as its size reaches s, the item disappears and the knapsack remains empty. As we consider a randomized algorithm, there will be a probability of \(p_0\) that it grabs the item at the very beginning, when its size is only 1/2. There is also some probability that it will grab the item before it reaches some size x. Let p(t) be the density function of this probability; hence, the probability that the item is taken before it reaches size x will be exactly
\[p_0+\int _{1/2}^{x} p(t)\,dt.\]
If we look at the instance with maximum size s, the expected gain and the competitive ratio in expectation are
\[\frac{p_0}{2}+\int _{1/2}^{s} t\,p(t)\,dt\qquad \text {and}\qquad \frac{s}{\frac{p_0}{2}+\int _{1/2}^{s} t\,p(t)\,dt}.\]
To minimize the maximum of the competitive ratio for all s, we choose p(t) and \(p_0\) in such a way that the competitive ratio is the same for every possible s. If \(s=1/2\), the competitive ratio becomes \(1/p_0\). We can determine p(t) by solving the following equation:
\[\frac{x}{\frac{p_0}{2}+\int _{1/2}^{x} t\,p(t)\,dt}=\frac{1}{p_0}\qquad \text {for all } x\in [1/2,1],\]
or, equivalently,
\[p_0\cdot x=\frac{p_0}{2}+\int _{1/2}^{x} t\,p(t)\,dt.\]
Taking the derivative with respect to x on both sides of the equation yields \(p_0=xp(x)\), and we have \(p(t)=p_0/t\) for \(t\in [\frac{1}{2},1]\). It remains to determine \(p_0\). To this end, we can use the additional equation
\[p_0+\int _{1/2}^{1} p(t)\,dt=p_0+\int _{1/2}^{1}\frac{p_0}{t}\,dt=p_0\cdot (1+\ln 2)=1,\]
which results from the fact that the total probability is 1. Solving it yields
\[p_0=\frac{1}{1+\ln 2}.\tag{1}\]
In this game of grabbing continuously growing items, the competitive ratio of our specific algorithm is \(1+\ln 2\). However, this was just a thought experiment, which gives us a clue on how to choose the probabilities \(p_0,\ldots ,p_n\) for a best possible algorithm on the instances \(I_0,\ldots ,I_n\) defined earlier. They should be approximately \(p_k\approx p(1/2+k/2n)/2n= p_0/(n+k)= 1/((1+\ln 2)(n+k))\), where the first equality follows from \(p(t)=p_0/t\) and the second equality follows from (1). Moreover, we have to show that every other algorithm cannot be better than this one. To this end, we have to show that this choice of the values \(p_k\) is optimal and that every other choice leads to a worse competitive ratio for at least one of the instances.
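The continuous thought experiment can be verified numerically. The following sketch (our own check, using a simple midpoint rule, not from the paper) confirms that \(p_0=1/(1+\ln 2)\) normalizes the distribution and makes the ratio \(s/\textrm{gain}(s)\) independent of s:

```python
import math

p0 = 1 / (1 + math.log(2))

def density(t):
    return p0 / t  # p(t) = p0/t on [1/2, 1]

def integrate(f, a, b, steps=20000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

# Total probability: p0 + integral of p(t) dt = p0 * (1 + ln 2) = 1.
total = p0 + integrate(density, 0.5, 1.0)

# The ratio s / gain(s) should equal 1 + ln 2 for every maximum size s.
def ratio(s):
    g = p0 * 0.5 + integrate(lambda t: t * density(t), 0.5, s)
    return s / g

print(round(total, 6))       # 1.0
print(round(ratio(0.7), 4))  # 1.6931
print(round(ratio(1.0), 4))  # 1.6931
```

Analytically, \(\textrm{gain}(s)=p_0/2+p_0(s-1/2)=p_0 s\), so the ratio is the constant \(1/p_0=1+\ln 2\), which the numerics reproduce.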
Theorem 3
The competitive ratio in expectation of every randomized algorithm that solves the online simple unbounded KP is at least \(1+\ln 2>1.693\).
Proof
We consider the instances
\[I_k=\bigl (1/2+\varepsilon ,\ 1/2+1/2n,\ 1/2+2/2n,\ \ldots ,\ 1/2+k/2n\bigr ),\qquad k=0,\ldots ,n,\]
where n is large and \(\varepsilon \) is very small, say, \(\varepsilon =e^{-n}\). We design an algorithm that will work as follows: If it sees an item whose size is in the interval \([1/2+k/2n,1/2+(k+1)/2n)\), where \(0\le k\le n\), it will accept it with a probability of \(p_k\), where
\[p_0=\frac{1}{1+H_{2n}-H_n}\qquad \text {and}\qquad p_k=\frac{p_0}{n+k}=\frac{1}{(1+H_{2n}-H_n)(n+k)}\quad \text {for } 1\le k\le n.\]
We use the difference of Harmonic numbers \(H_{2n}-H_n=\ln 2+O(1/n)\) instead of \(\ln 2\) to make the sum of all probabilities exactly one:
\[\sum _{k=0}^{n} p_k=\frac{1}{1+H_{2n}-H_n}\cdot \Bigl (1+\sum _{k=1}^{n}\frac{1}{n+k}\Bigr )=\frac{1+H_{2n}-H_n}{1+H_{2n}-H_n}=1.\]
What is the expected gain of this algorithm on \(I_k\)? It accepts the first item \(1/2+\varepsilon \) with probability \(p_0\) and the i-th item of size \(1/2+i/2n\) with a probability of \(p_i=1/\bigl ((1+H_{2n}-H_n)(n+i)\bigr )\), for \(1\le i\le k\). So the expected gain turns out to be exactly
\[\sum _{i=0}^{k} p_i\cdot \Bigl (\frac{1}{2}+\frac{i}{2n}\Bigr )+O(\varepsilon )=\frac{1}{1+H_{2n}-H_n}\cdot \Bigl (\frac{1}{2}+\frac{k}{2n}\Bigr )+O(\varepsilon ).\]
The optimal gain for \(I_k\) is \(1/2+k/2n\) (or \(1/2+\varepsilon \) if \(k=0\)). Hence, the competitive ratio in expectation on every \(I_k\) is \(1+H_{2n}-H_n+O(\varepsilon )\), which goes to \(1+\ln (2)\) as \(n\rightarrow \infty \) and \(\varepsilon \rightarrow 0\).
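The discrete probabilities can be checked exactly. The sketch below (our own verification, assuming, as the equal-ratio calculation suggests, that the first item is accepted with probability \(1/(1+H_{2n}-H_n)\) and the i-th later item with probability \(1/((1+H_{2n}-H_n)(n+i))\)) confirms that they sum to one and equalize all ratios:

```python
from fractions import Fraction as F

n = 50
H = lambda m: sum(F(1, j) for j in range(1, m + 1))  # harmonic numbers
c = 1 + H(2 * n) - H(n)  # the target competitive ratio 1 + H_2n - H_n
p = [1 / c] + [1 / (c * (n + i)) for i in range(1, n + 1)]
assert sum(p) == 1  # the acceptance probabilities sum to exactly one

# Expected gain on each prefix I_k (the vanishing epsilon is ignored);
# every instance then exhibits exactly the same ratio c.
values = [F(1, 2) + F(i, 2 * n) for i in range(n + 1)]
gains, g = [], F(0)
for i in range(n + 1):
    g += p[i] * values[i]
    gains.append(g)
ratios = {values[k] / gains[k] for k in range(n + 1)}
print(ratios == {c})  # True
```

With exact rational arithmetic the set of ratios collapses to the single value \(1+H_{2n}-H_n\), as claimed.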
Can there be a different algorithm with a better competitive ratio? If yes, it has to beat our algorithm on every instance \(I_k\), since our algorithm has the same competitive ratio on those instances. We will show that, for an algorithm whose competitive ratios differ on the \(I_k\)’s, there is another, not worse algorithm whose competitive ratios are the same. We prove this by stepwise transforming such a non-uniform algorithm. So let us fix some algorithm for the online simple unbounded KP whose competitive ratio in expectation on \(I_k\) is \(c_k'\) and whose probability of taking the i-th item in \(I_n\) is \(p_i'\).
The following result tells us what happens if we change an algorithm by moving some probability from \(p_i'\) to \(p_{i+1}'\).
Claim 1
Let \(0\le i<n\). Let \(p_i''=p_i'+\delta \), \(p_{i+1}''=p_{i+1}'-\delta \), and \(p_j''=p_j'\) for \(j\notin \{i,i+1\}\), for some \(\delta \in \mathbb {R}\) with sufficiently small absolute value. Let \(c_j''\) be the competitive ratios of the algorithm \(A''\) that uses the probabilities \(p_j''\).
Then \(c_j''=c_j'\), for \(j=0,\ldots ,i-1\).
If \(\delta >0\), then \(c_i''<c_i'\) and \(c_j''>c_j'\), for \(j>i\).
If \(\delta <0\), then \(c_i''>c_i'\) and \(c_j''<c_j'\), for \(j>i\).
Proof
Obviously, \(c_j''=c_j'\) for \(j<i\). Let us look at \(c_i''\) and \(c_i'\) for \(\delta >0\). Writing \(\textrm{gain}'(I_i)\) for the expected gain of the original algorithm on \(I_i\), we have
\[c_i''=\frac{1/2+i/2n}{\textrm{gain}'(I_i)+\delta \cdot (1/2+i/2n)}<\frac{1/2+i/2n}{\textrm{gain}'(I_i)}=c_i'.\]
A similar calculation can be done for \(c_k''\) and \(c_k'\) for \(k>i\):
\[c_k''=\frac{1/2+k/2n}{\textrm{gain}'(I_k)+\delta \cdot \bigl ((1/2+i/2n)-(1/2+(i+1)/2n)\bigr )}=\frac{1/2+k/2n}{\textrm{gain}'(I_k)-\delta /2n}>c_k'.\]
The calculation for \(\delta <0\) is completely analogous. \(\square \)
Using Claim 1, it is relatively easy to modify an algorithm whose competitive ratios \(c_i'\) differ from one another. We look at those \(c_i'\) that are maximum. Let \(c_i'\) be the last one that has maximal value. If \(i<n\), we can use a small \(\delta >0\) and increase \(p_i'\) by \(\delta \) and decrease \(p_{i+1}'\) by \(\delta \). If \(\delta \) is small enough, then \(c_i''<c_i'\) while still \(c_i''>c_j''\) for \(j>i\). If \(c_i'\) was the only maximum ratio, we improved the algorithm. If not, it is no longer the last one and we can apply the same procedure again. Eventually, the algorithm will improve.
There is, however, the other possibility that the last ratio \(c_n'\) is a maximum one. Then \(c_{n+1}'\) does not exist and we cannot do the above transformation. In this case, however, we choose the last ratio \(c_i'\) where \(c_i'<c_n'\) (so \(c_{i+1}'=c_{i+2}'=\cdots =c_n'\)). We decrease \(p_i'\) by a small \(\delta \) and increase \(p_{i+1}'\) by the same \(\delta \). If \(\delta \) is small enough then still \(c_i''<c_j''\) and \(c_j''<c_j'=c_n'\) for all \(j>i\). Again, either the new algorithm has a better competitive ratio or the maximal \(c_i''\) is now not at the right end and the first case applies.
In this way, after finitely many steps, we either get to an algorithm that is better or an algorithm where the competitive ratios agree on all instances \(I_0,\dots ,I_n\). \(\square \)
Note that what an algorithm must do on this family of instances is choose the largest of a series of items. This problem is known as the Online Search Problem and has been studied in much detail [21]. In the classic variant of the problem, an algorithm would be allowed (or forced) to keep the final item presented, while in our model, a gain of 0 is possible. Thus, any lower bound for the Online Search Problem on items in the interval ]1/2, 1] also holds for the online unbounded KP. However, even deterministic search algorithms on that interval can achieve a competitive ratio of \(\sqrt{2}< 1.415\) (see for example Chapter 14.1.2 of [21]). The lower bound we provide is considerably stronger. We now complement this bound by a rather close upper bound.
Theorem 4
There is a randomized algorithm that solves the online simple unbounded KP with a competitive ratio of less than 1.7353.
Proof
The algorithm computes a random variable X with the following distribution:
- \(\Pr [X=1/2]=p_{1/2}\),
- \(f_X(x)=p_{1/2}/x\) if \(1/2<x<2/3\),
- \(\Pr [X=2/3]=p_{2/3}\),
- \(f_X(x)=p_{1/2}\cdot (1+\ln (2-x))/2x\) if \(2/3<x\le 1\),
where \(p_{1/2}\) and \(p_{2/3}\) solve the system of equations
Equation (3) guarantees that this is indeed a probability distribution, while (2) will be used to prove bounds on the competitive ratio. After choosing X, the algorithm packs the first item x it encounters with \(x^*\ge X\) as often as possible, where \(x^*=\lfloor 1/x\rfloor \cdot x\) is defined as the gain that can be achieved by the item x alone. After that, it greedily packs any item that fits.
We now look at a fixed instance \(I=(x_1,\dots ,x_n)\). We define \(x_{\text {min}}=\min _i \{x_i\}\) and \(x_{\text {max}}^*=\max _i \{x_i^*\}\). If \(x_{\text {max}}^*<2/3\), there cannot be any items of size at most 1/2 (packing such an item as often as possible would lead to \(x_{\text {max}}^*\ge 2/3\)). The optimal solution is thus \(x_{\text {max}}^*\). The algorithm has a gain of at least X if \(X\le x_{\text {max}}^*\), so an expected gain of at least
\[p_{1/2}\cdot \frac{1}{2}+\int _{1/2}^{x_{\text {max}}^*} x\cdot \frac{p_{1/2}}{x}\,dx=p_{1/2}\cdot x_{\text {max}}^*.\]
Its competitive ratio in expectation is thus at most \(1/p_{1/2}\).
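The claim that small items force \(x^*\ge 2/3\) is easy to sanity-check. The following sketch (our own illustration, not from the paper) evaluates \(x^*=\lfloor 1/x\rfloor \cdot x\) on a grid of sizes up to 1/2:

```python
from fractions import Fraction as F

def star(x):
    """x* = floor(1/x) * x: the gain achievable by item x alone."""
    return (1 // x) * x  # exact for Fraction inputs

# Any item of size at most 1/2 already fills at least 2/3 of the
# knapsack on its own, so x_max^* < 2/3 forces all items above 1/2.
samples = [F(k, 1000) for k in range(1, 501)]
print(min(star(x) for x in samples) >= F(2, 3))  # True
```

Indeed, for \(x\in (1/(k+1),1/k]\) with \(k\ge 2\) we get \(x^*=kx>k/(k+1)\ge 2/3\).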
If \(x_{\text {max}}^*\ge 2/3\) and \(x_{\text {min}}\ge 1/2\), the optimal solution is still \(x_{\text {max}}^*\). The algorithm has a gain of at least X if \(X\le x_{\text {max}}^*\), and thus a competitive ratio of at most
\[\frac{x_{\text {max}}^*}{\textrm{E}\bigl [X\cdot \mathbb {1}_{\{X\le x_{\text {max}}^*\}}\bigr ]}.\]
It can be shown using standard techniques from calculus that this ratio increases in \(x_{\text {max}}^*\). This is done in full detail in Lemma 1 in the appendix. Hence this bound is maximized for \(x_{\text {max}}^*=1\) and is therefore at most \(1/p_{1/2}\) by (2).
If \(1/3<x_{\text {min}}<1/2\) and \(X\le 1-x_{\text {min}}\), the algorithm will always achieve a gain of at least \(1-x_{\text {min}}\). When it encounters \(x_{\text {min}}\) it either has not packed any items yet, in which case it packs \(x_{\text {min}}^*=2x_{\text {min}}\ge 1-x_{\text {min}}\ge X\), or it has already packed other items. In this case, it packs \(x_{\text {min}}\) greedily as often as possible and still achieves a gain of at least \(1-x_{\text {min}}\). If on the other hand \(1-x_{\text {min}}<X\le 2x_{\text {min}}\), the algorithm will always achieve a gain of at least X. When it encounters \(x_{\text {min}}\) it has already packed items of size at least X or it will be able to pack \(x_{\text {min}}^*=2x_{\text {min}}\ge X\). Its expected gain is therefore at least
\[(1-x_{\text {min}})\cdot \Pr [X\le 1-x_{\text {min}}]+\textrm{E}\bigl [X\cdot \mathbb {1}_{\{1-x_{\text {min}}<X\le 2x_{\text {min}}\}}\bigr ].\]
With \(y=1-x_{\text {min}}\), we can simplify this to
by (2). Its competitive ratio is therefore at most \(1/p_{1/2}\).
If \(x_{\text {min}}\le 1/3\), we argue similarly to the previous case. If \(X\le 1-x_{\text {min}}\), the algorithm achieves a gain of at least \(1-x_{\text {min}}\) and if \(1-x_{\text {min}}\le X\le \lfloor 1/x_{\text {min}}\rfloor x_{\text {min}}\), it has a gain of at least X. Again with \(y=1-x_{\text {min}}\) and since \(y\ge 2/3\), this leads to an expected gain of at least
by (2).
So in each case the competitive ratio of the algorithm is at most \(1/p_{1/2}\). Solving the system of (2) and (3) for \(p_{1/2}\) and \(p_{2/3}\) leads to
so the algorithm has a competitive ratio of at most
\(\square \)
4 Advice Complexity
Recall that, for the online simple 0-1 KP, the competitive ratio turned out to be unbounded without advice, and to be 2 for anything from one up to \(o(\log n)\) advice bits. With \(O(\log n)\) advice bits, the competitive ratio gets arbitrarily close to 1 [29]. For the online simple unbounded KP, the situation is almost analogous. With one advice bit, the competitive ratio is 3/2, and it stays at 3/2 for up to \(o(\log n)\) advice bits. Then, with \(O(\log n)\) advice bits, we again come arbitrarily close to 1.
Theorem 5
There is an algorithm for the online simple unbounded KP that reads only a single bit of advice and achieves a competitive ratio of 3/2. For any fixed number \(k>0\) of advice bits, this ratio cannot be improved. To achieve a competitive ratio better than 3/2 on all instances of length n, an algorithm must read more than \(\log (n-1)\) advice bits on at least one such instance.
Proof
Let us first see why we can achieve a competitive ratio of 3/2 with a single advice bit. An algorithm can use the advice bit to choose between two strategies. Here, those two strategies are: (1) pack greedily and (2) wait for an item whose size is either at most 1/2 or at least 2/3 and pack it as often as possible.
We have to show that, for every possible instance, one of the two strategies manages to pack at least a 2/3-fraction of the optimal solution. So let us assume that there indeed exists at least one item for which strategy (2) is waiting. If its size is at least 2/3, the algorithm has a gain of at least 2/3. If its size s is at most 1/2, the same is true: After packing the item as often as possible, the remaining space in the knapsack is smaller than s, and the knapsack contains at least two items of size s. So if \(s\ge 1/3\), the two items already fill the knapsack sufficiently, and if \(s\le 1/3\), the remaining space is less than 1/3. In either case, the algorithm achieves a gain of at least 2/3 and thus a competitive ratio of at most 3/2.
So strategy (2) succeeds whenever the instance contains an item of size s with \(s\le 1/2\) or \(s\ge 2/3\). Let us assume now that the input contains no such item. Then strategy (1) succeeds: All items have sizes strictly between 1/2 and 2/3. This means in particular that only one item fits into the knapsack and the optimal solution fills the knapsack at most to a level of 2/3. The greedy algorithm obviously fills the knapsack at least half full. The competitive ratio is then at most \((2/3)/(1/2)=4/3\).
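A minimal sketch of this one-advice-bit algorithm; representing the advice bit as a Python boolean and the two helper names are of course only illustrative.

```python
import math

def oracle_bit(items):
    """The oracle sets the bit iff some item has size <= 1/2 or >= 2/3,
    i.e., iff strategy (2) finds an item to wait for."""
    return any(x <= 1/2 or x >= 2/3 for x in items)

def pack_one_bit(items, bit):
    capacity, gain = 1.0, 0.0
    if bit:  # strategy (2): wait for a "good" item, pack it as often as possible
        for x in items:
            if x <= 1/2 or x >= 2/3:
                return math.floor(capacity / x) * x
        return 0.0
    for x in items:  # strategy (1): pack greedily
        count = math.floor(capacity / x)
        gain += count * x
        capacity -= count * x
    return gain
```

On \((0.55,\,0.3)\) the oracle sets the bit and strategy (2) gains \(3\cdot 0.3=0.9\ge 2/3\); on \((0.55,\,0.6)\) the bit is not set and the greedy strategy gains 0.55, while only one item fits, so the optimum is 0.6.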
In order to prove a lower bound, let \(0<\varepsilon <1/2\) and consider \(n-1\) different instances \(I_2,\ldots ,I_n\) where
ending in \(n-k+1\) items of size \(2/3-\varepsilon ^{k-1}\), such that each instance consists of n items. It is easy to see that the optimal gain for every instance is 1. To be optimal on \(I_k\), the algorithm has to reject the first \(k-2\) items, and pack the \((k-1)\)-th item, as well as one of the larger items of size \(2/3-\varepsilon ^{k-1}\). Intuitively, an algorithm that receives fewer than \(\log (n-1)\) advice bits cannot choose between \(n-1\) or more different strategies. More formally, assume that the algorithm reads fewer than \(\log (n-1)\) advice bits on all \(n-1\) instances. Since there are fewer than \(n-1\) binary strings of length less than \(\log (n-1)\), at least two instances \(I_i\) and \(I_j\) for \(2\le i<j\le n\) must receive the same advice string. Any decision of the algorithm, including the number of advice bits to read, can only depend on the prefix of the instance and the advice bits read at that point. This means that, after the common prefix \((1/3+\varepsilon ,\dots ,1/3+\varepsilon ^{i-1})\), the algorithm must make the same decision in \(I_i\) and \(I_j\). However, to be optimal on \(I_i\), it must take the item of size \(1/3+\varepsilon ^{i-1}\), while to be optimal on \(I_j\), it must not take it. Therefore, it will behave suboptimally on at least one of the two instances. For that instance, its competitive ratio will be at least \(1/(2/3+2\varepsilon )\), because the algorithm can pack only two small items or one large one. As \(\varepsilon \) can be chosen arbitrarily small, the competitive ratio of any algorithm cannot be better than 3/2. \(\square \)
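The displayed definition of the instances \(I_k\) did not survive extraction; based on the surrounding description, they can be generated as follows (the helper name is illustrative):

```python
def hard_instance(n, k, eps):
    """I_k for 2 <= k <= n: the prefix 1/3 + eps, 1/3 + eps^2, ...,
    1/3 + eps^(k-1), followed by n - k + 1 items of size 2/3 - eps^(k-1)."""
    prefix = [1/3 + eps**j for j in range(1, k)]
    suffix = [2/3 - eps**(k - 1)] * (n - k + 1)
    return prefix + suffix
```

Each instance has length \((k-1)+(n-k+1)=n\), and packing the last prefix item together with one suffix item fills the knapsack exactly, so the optimal gain is 1.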
Corollary 1
For any \(0<p<1\), there is a randomized algorithm for the online simple unbounded KP that makes a single random decision with probabilities p and \(1-p\) and has a competitive ratio of at most
Proof
The algorithm simulates the algorithm with one advice bit in the proof of Theorem 5 and guesses the advice bit. With a probability of p, it packs items greedily and, with a probability of \(1-p\), it waits for an item \(x_i\) with \(x_i\le 1/2\) or \(x_i\ge 2/3\) and then packs \(x_i\) as often as possible. If there is such an item, its competitive ratio in expectation is at most
If there is no such item, the optimal solution is at most 2/3 and the algorithm’s competitive ratio in expectation is at most \((2/3){\bigm /}(p/2)\). \(\square \)
Corollary 2
There is a randomized algorithm for the online simple unbounded KP that uses exactly two uniformly random bits and has a competitive ratio of at most 24/13.
Proof
The algorithm uses two uniformly random bits to generate a random decision with probability \(p=3/4\), for instance by choosing the greedy strategy unless both bits are zero, and uses the algorithm from Corollary 1. \(\square \)
Corollary 3
There is a randomized algorithm for the online simple unbounded KP that makes a single non-uniformly random decision and has a competitive ratio of at most 11/6.
Proof
This is the algorithm from Corollary 1 for \(p=8/11\). \(\square \)
Theorem 5 is tight: if we provide \(O(\log n)\) advice bits, the competitive ratio can be made arbitrarily close to 1.
Theorem 6
Let \(\varepsilon >0\). Then there is an algorithm using \(O((1/\varepsilon )\log (n/\varepsilon ))\) advice bits that solves the online simple unbounded KP with a competitive ratio of at most \(1+\varepsilon \).
This is a special case of the algorithm we will see in Theorem 10 for the online general unbounded KP.
An interesting phenomenon can be observed when we ask how many advice bits are necessary to solve the problem exactly, i.e., to reach a competitive ratio of exactly 1. Of course, n advice bits achieve this in the 0-1 model: Just use one advice bit per item, telling us whether to take or discard it. It also turned out that n bits are really necessary to be optimal [66].
Surprisingly, the same argument does not work for the unbounded KP. One bit can tell us whether to take or discard an item, but there are more than these two possibilities: The algorithm can discard an item, take it once, take it twice, and so on. No fixed number of advice bits can tell it exactly what to do in the first step because there is an unbounded number of possible actions. Does that mean that the online unbounded KP cannot be solved exactly by any amount of advice, even when we allow the number of advice bits to depend on the number of items? Yes, this is indeed the case, in stark contrast to the classical 0-1 setting.
Theorem 7
Let \(f:\mathbb {N}\rightarrow \mathbb {N}\) be an arbitrary computable function. The online simple unbounded KP cannot be solved exactly with f(n) advice bits on instances of length n.
Proof
Let \(m\in \mathbb {N}\). We look at the instances \(I_k=(1/m-1/m^3, 1-k/m+k/m^3)\), for \(k=0,\ldots ,m\).
An algorithm that sees the first item of size \(1/m-1/m^3\) has to decide how often to take it, i.e., to choose a number between 0 and m. The only possibility to be optimal on the instance \(I_k\) is to take the first item exactly k times, because only then can the knapsack be filled completely: We have \(k\cdot (1/m-1/m^3) + (1-k/m+k/m^3)=1\), and no other multiplicity achieves this. An algorithm thus needs sufficient advice to choose the right one out of \(m+1\) possibilities, which requires at least \(\log (m+1)\) advice bits. The length of the instances \(I_k\) is only 2, so, if \(\log (m+1)>f(2)\), the algorithm will be suboptimal on at least one \(I_k\). \(\square \)
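The key identity in this proof can be verified with exact rational arithmetic; a small sanity check for \(m=10\):

```python
from fractions import Fraction

m = 10
small = Fraction(1, m) - Fraction(1, m**3)           # first item of every I_k
for k in range(m + 1):
    second = 1 - Fraction(k, m) + Fraction(k, m**3)  # second item of I_k
    # taking the first item exactly k times fills the knapsack completely ...
    assert k * small + second == 1
    # ... and no other multiplicity j does
    assert all(j * small + second != 1 for j in range(m + 1) if j != k)
```

Indeed, \(j\cdot (1/m-1/m^3)+(1-k/m+k/m^3)=1+(j-k)(1/m-1/m^3)\), which equals 1 only for \(j=k\).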
5 The Online General Unbounded KP
Finally, let us take a look at the online general unbounded KP. Here, each item \(x_i\) has a size \(0\le s_i\le 1\) and a value \(v_i\ge 0\). Of course, all lower bounds for the online simple unbounded KP transfer immediately to the more general problem. Furthermore, it turns out that the online general unbounded KP is non-competitive both for deterministic and for randomized algorithms, and even for algorithms that use fewer than \(\log n\) advice bits.
Theorem 8
The competitive ratio for any randomized online algorithm for the general unbounded KP is unbounded.
Theorem 9
The competitive ratio of any online algorithm for the general unbounded KP that uses fewer than \(\log n\) advice bits is unbounded.
Both Theorems 8 and 9 follow immediately from results by Böckenhauer et al. for the 0-1 KP [29]: The instances constructed in those proofs use only items of size 1, so the bounds carry over to the unbounded variant.
It might be of interest to compare the general unbounded KP and the general 0-1 KP with respect to their advice complexity. The latter problem is very sensitive to how the input is encoded. One possibility is to assume that both the sizes and values of items are given as real numbers and that the algorithm is able to do basic operations like comparisons and arithmetic on these numbers. To achieve arbitrarily good competitive ratios with logarithmic advice, a more restrictive input encoding was used [29]. The reason is that all optimal, and even all near-optimal, solutions may use a very large number of items. Their indices cannot be communicated with a small amount of advice, and the alternative is to encode the sizes of items.
It turns out that the general unbounded KP is easier to handle. The basic property that we will use in the proof of Theorem 10 is that a near-optimal solution can be built from a constant number of distinct items, whose indices can be communicated with logarithmic advice. Moreover, each of these items is used only a constant number of times, with the exception of a single item. The following intuition shows why this is the case: If two items are both packed a large number of times, both of them have to be tiny. One could then keep only the denser of the two, and the resulting solution cannot be much worse.
Theorem 10
Let \(\varepsilon >0\) be a number that does not have to be constant with respect to the number n of items. Then the online general unbounded KP can be solved with a competitive ratio of at most \(1+\varepsilon \) using \(O((1/\varepsilon )\log (n/\varepsilon ))\) advice bits.
Proof
Let \(\delta =\varepsilon /(\varepsilon +2)\). We fix some optimal solution S to a given instance I of length n. We say that an item in S is small if its size is at most \(\delta \). Let h be the total size of all small items in S and, if \(h>0\), let \(x_m\in S\) be a small item in S with maximum density, i.e., maximizing \(v_m/s_m\). Using \(O(\log (1/\delta ))\) many advice bits, the oracle tells the algorithm a number \(h'\) such that \(h-\delta <h'\le h\) and, using another \(O(\log n)\) advice bits, it communicates the index m.
When the algorithm receives \(x_m\), it packs it into the knapsack \(\lfloor h'/s_m\rfloor \) times, filling the knapsack at most to a size of \(h'\) and packing a value of at least \(v=(h'-\delta )v_m/s_m\ge (h-2\delta )v_m/s_m\). Let us compare this to the total value \(v'\) in S contributed by small items. Since \(x_m\) is a small item with highest density, \(v'\le hv_m/s_m\) and \(v'-v\le 2\delta v_m/s_m\).
Next, we turn to the items in S that are not small. There can be at most \(1/\delta \) many of them and each can be packed at most \(1/\delta \) times. We can therefore encode both their indices and their packing multiplicities by \(1/\delta \) numbers each, of sizes at most n and \(1/\delta \), respectively, which can be done with \(O((1/\delta )(\log n + \log (1/\delta )))=O((1/\delta )\log (n/\delta ))\) many bits. Receiving this information as advice allows the algorithm to pack the large items exactly as in the optimal solution S.
The algorithm has then packed the same large items as S and has lost a value of at most \(2\delta v_m/s_m\) by suboptimally packing small items.
We have to bound \(v_m/s_m\). Intuitively, the density \(v_m/s_m\) cannot be much larger than \(\textrm{opt}(I)\): Packing \(x_m\) as often as possible into an empty knapsack achieves a gain of at least \((1-\delta )v_m/s_m\), which cannot be larger than \(\textrm{opt}(I)\).
Hence, we can assume from now on that \((1-\delta )v_m/s_m\le \textrm{opt}(I)\), which implies
\(2\delta v_m/s_m\le \frac{2\delta }{1-\delta }\cdot \textrm{opt}(I)=\varepsilon \cdot \textrm{opt}(I)\)
by the choice of \(\delta =\varepsilon /(\varepsilon +2)\), which is an upper bound on the gap between the gain of our algorithm and the optimal solution. \(\square \)
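The advice construction and the online packing from this proof can be sketched as follows. The advice is represented as plain Python values here; in the model it would be encoded with \(O((1/\delta )\log (n/\delta ))\) bits, and representing the optimal solution as a dictionary of multiplicities, as well as the function names, are assumptions of this sketch.

```python
import math

def build_advice(items, opt, delta):
    """items: list of (size, value); opt: dict index -> multiplicity in an
    optimal solution S. Returns (m, h_prime, large) as in the proof."""
    small = {i: c for i, c in opt.items() if items[i][0] <= delta}
    large = {i: c for i, c in opt.items() if items[i][0] > delta}
    h = sum(items[i][0] * c for i, c in small.items())  # total size of small items
    if h == 0:
        return None, 0.0, large
    m = max(small, key=lambda i: items[i][1] / items[i][0])  # densest small item
    h_prime = delta * math.floor(h / delta)  # h - delta < h' <= h
    return m, h_prime, large

def replay(items, advice):
    m, h_prime, large = advice
    gain = 0.0
    for i, (s, v) in enumerate(items):
        if i == m:
            gain += math.floor(h_prime / s) * v  # pack densest small item
        elif i in large:
            gain += large[i] * v                 # copy S on the large items
    return gain
```

For \(\varepsilon =1\) (so \(\delta =1/3\)), the instance \(((0.1,0.1),(0.5,0.6))\) with optimal solution "item 0 five times, item 1 once" of value 1.1 is replayed with a gain of 0.9, within the promised factor \(1+\varepsilon =2\).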
6 Conclusion
We have analyzed the hardness of the unbounded KP for online algorithms with and without advice, both in the simple and the general case, and found some significant differences from the 0-1 KP. A simple greedy strategy achieves a constant competitive ratio for the simple unbounded KP even in the deterministic case. Moreover, unlike for the simple 0-1 KP, an unlimited amount of randomness helps to improve the competitive ratio for the online simple unbounded KP.
It remains an open problem to determine the exact competitive ratio in expectation that randomized algorithms can achieve for the online simple unbounded KP. It would also be interesting to see, both for the simple and the general case, what exactly the transition from sublogarithmic to logarithmic advice looks like: What competitive ratio can be achieved with \(\alpha \log n\) advice bits, for some fixed constant \(\alpha \)?
For the online 0-1 KP, considering variants where the algorithm is allowed to remove packed items, or to reserve items for packing them later on, has given valuable structural insights. It would certainly be interesting to study such variants for the online unbounded KP as well.
References
Epstein, L., Noga, J., Seiden, S.S., Sgall, J., Woeginger, G.J.: Randomized online scheduling on two uniform machines. In: Tarjan, R.E., Warnow, T.J. (eds.) Proceedings of the Tenth Annual ACM-SIAM Symposium on Discrete Algorithms, 17-19 January 1999, Baltimore, Maryland, USA, pp. 317–326. ACM/SIAM, Philadelphia (1999). http://dl.acm.org/citation.cfm?id=314500.314581
Seiden, S.S., Sgall, J., Woeginger, G.J.: Semi-online scheduling with decreasing job sizes. Oper. Res. Lett. 27(5), 215–221 (2000). https://doi.org/10.1016/S0167-6377(00)00053-5
Sgall, J., Woeginger, G.J.: Multiprocessor jobs, preemptive schedules, and one-competitive online algorithms. Oper. Res. Lett. 51(6), 583–590 (2023). https://doi.org/10.1016/J.ORL.2023.09.010
Csirik, J., Woeginger, G.J.: Resource augmentation for online bounded space bin packing. J. Algorithms 44(2), 308–320 (2002). https://doi.org/10.1016/S0196-6774(02)00202-X
Ebenlendr, T., Noga, J., Sgall, J., Woeginger, G.J.: A note on semi-online machine covering. In: Erlebach, T., Persiano, G. (eds.) Approximation and Online Algorithms, Third International Workshop, WAOA 2005, Palma de Mallorca, Spain, October 6-7, 2005, Revised Papers. Lecture Notes in Computer Science, vol. 3879, pp. 110–118. Springer, Berlin Heidelberg (2005). https://doi.org/10.1007/11671411_9
Fiat, A., Woeginger, G.J. (eds.): Online Algorithms, The State of the Art. Lecture Notes in Computer Science, vol. 1442. Springer, Berlin Heidelberg (1998). https://doi.org/10.1007/BFb0029561
Zukerman, M., Jia, L., Neame, T.D., Woeginger, G.J.: A polynomially solvable special case of the unbounded knapsack problem. Oper. Res. Lett. 29(1), 13–16 (2001). https://doi.org/10.1016/S0167-6377(01)00076-1
Deineko, V.G., Woeginger, G.J.: Unbounded knapsack problems with arithmetic weight sequences. Eur. J. Oper. Res. 213(2), 384–387 (2011). https://doi.org/10.1016/j.ejor.2011.03.028
Deineko, V.G., Woeginger, G.J.: A well-solvable special case of the bounded knapsack problem. Oper. Res. Lett. 39(2), 118–120 (2011). https://doi.org/10.1016/j.orl.2011.01.006
Karp, R.M.: Reducibility among combinatorial problems. In: Miller, R.E., Thatcher, J.W. (eds.) Proceedings of a Symposium on the Complexity of Computer Computations, pp. 85–103. Plenum Press, New York (1972). https://doi.org/10.1007/978-1-4684-2001-2_9
Garey, M.R., Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, New York, USA (1979)
Ibarra, O.H., Kim, C.E.: Fast approximation algorithms for the knapsack and sum of subset problems. J. ACM 22(4), 463–468 (1975). https://doi.org/10.1145/321906.321909
Lawler, E.L.: Fast approximation algorithms for knapsack problems. Math. Oper. Res. 4(4), 339–356 (1979). https://doi.org/10.1287/moor.4.4.339
Magazine, M.J., Oguz, O.: A fully polynomial approximation algorithm for the 0–1 knapsack problem. Eur J Oper Res 8, 270–273 (1981)
Kellerer, H., Pferschy, U.: Improved dynamic programming in connection with an FPTAS for the knapsack problem. J. Comb. Optim. 8(1), 5–11 (2004). https://doi.org/10.1023/B:JOCO.0000021934.29833.6b
Jin, C.: An improved FPTAS for 0-1 knapsack. In: Baier, C., Chatzigiannakis, I., Flocchini, P., Leonardi, S. (eds.) 46th International Colloquium on Automata, Languages, and Programming. LIPIcs, vol. 132, pp. 76:1–76:14. Patras, Greece (2019). https://doi.org/10.4230/LIPIcs.ICALP.2019.76
Kellerer, H., Pferschy, U., Pisinger, D.: Knapsack Problems. Springer, Berlin Heidelberg (2004). https://doi.org/10.1007/978-3-540-24777-7
Jansen, K., Kraft, S.E.J.: A faster FPTAS for the unbounded knapsack problem. Eur. J. Comb. 68, 148–174 (2018). https://doi.org/10.1016/j.ejc.2017.07.016
Sleator, D.D., Tarjan, R.E.: Amortized efficiency of list update and paging rules. Commun. ACM 28(2), 202–208 (1985). https://doi.org/10.1145/2786.2793
Komm, D.: An Introduction to Online Computation - Determinism, Randomization, Advice. Texts in Theoretical Computer Science. An EATCS Series. Springer, Cham, Switzerland (2016). https://doi.org/10.1007/978-3-319-42749-2
Borodin, A., El-Yaniv, R.: Online Computation and Competitive Analysis. Cambridge University Press, Cambridge (1998)
Marchetti-Spaccamela, A., Vercellis, C.: Stochastic on-line knapsack problems. Math. Program. 68, 73–104 (1995). https://doi.org/10.1007/BF01585758
Lueker, G.S.: Average-case analysis of off-line and on-line knapsack problems. J. Algorithms 29(2), 277–305 (1998). https://doi.org/10.1006/jagm.1998.0954
Böckenhauer, H.-J., Burjons, E., Hromkovič, J., Lotze, H., Rossmanith, P.: Online simple knapsack with reservation costs. In: Bläser, M., Monmege, B. (eds.) 38th International Symposium on Theoretical Aspects of Computer Science. LIPIcs, vol. 187, pp. 16:1–16:18. Saarbrücken (2021). https://doi.org/10.4230/LIPIcs.STACS.2021.16
Iwama, K., Taketomi, S.: Removable online knapsack problems. In: Widmayer, P., Ruiz, F.T., Bueno, R.M., Hennessy, M., Eidenbenz, S.J., Conejo, R. (eds.) Automata, Languages and Programming, 29th International Colloquium. Lecture Notes in Computer Science, vol. 2380, pp. 293–305. Springer, Malaga, Spain (2002). https://doi.org/10.1007/3-540-45465-9_26
Han, X., Kawase, Y., Makino, K.: Online unweighted knapsack problem with removal cost. Algorithmica 70(1), 76–91 (2014). https://doi.org/10.1007/S00453-013-9822-Z
Böckenhauer, H.-J., Klasing, R., Mömke, T., Rossmanith, P., Stocker, M., Wehner, D.: Online knapsack with removal and recourse. In: Hsieh, S.-Y., Hung, L.-J., Lee, C.-W. (eds.) Proceedings of the 34th International Workshop, IWOCA 2023. Lecture Notes in Computer Science, vol. 13889, pp. 123–135. Springer, Cham, Switzerland (2023). https://doi.org/10.1007/978-3-031-34347-6_11
Böckenhauer, H.-J., Komm, D., Královič, R., Rossmanith, P.: On the advice complexity of the knapsack problem. In: Fernández-Baca, D. (ed.) LATIN 2012: Theoretical Informatics - 10th Latin American Symposium. Lecture Notes in Computer Science, vol. 7256, pp. 61–72. Springer, Arequipa, Peru (2012). https://doi.org/10.1007/978-3-642-29344-3_6
Böckenhauer, H.-J., Komm, D., Královič, R., Rossmanith, P.: The online knapsack problem: Advice and randomization. Theor. Comput. Sci. 527, 61–72 (2014). https://doi.org/10.1016/j.tcs.2014.01.027
Dobrev, S., Královič, R., Pardubská, D.: Measuring the problem-relevant information in input. RAIRO Theor. Informatics Appl. 43(3), 585–613 (2009). https://doi.org/10.1051/ita/2009012
Emek, Y., Fraigniaud, P., Korman, A., Rosén, A.: Online computation with advice. Theor. Comput. Sci. 412, 2642–2656 (2011). https://doi.org/10.1016/J.TCS.2010.08.007
Hromkovič, J., Královič, R., Královič, R.: Information complexity of online problems. In: Hlinený, P., Kučerá, A. (eds.) Mathematical Foundations of Computer Science 2010, 35th International Symposium, MFCS. Lecture Notes in Computer Science, vol. 6281, pp. 24–36. Springer, Brno, Czech Republic (2010). https://doi.org/10.1007/978-3-642-15155-2_3
Böckenhauer, H.-J., Komm, D., Královič, R., Královič, R., Mömke, T.: Online algorithms with advice: The tape model. Inf. Comput. 254, 59–83 (2017). https://doi.org/10.1016/j.ic.2017.03.001
Böckenhauer, H.-J., Frei, F., Rossmanith, P.: Removable online knapsack and advice. In: Beyersdorff, O., Kanté, M.M., Kupferman, O., Lokshtanov, D. (eds.) 41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024). LIPIcs, vol. 289, pp. 18:1–18:17. Saarbrücken (2024). https://doi.org/10.4230/LIPIcs.STACS.2024.18
Boyar, J., Favrholdt, L.M., Kudahl, C., Larsen, K.S., Mikkelsen, J.W.: Online algorithms with advice: A survey. ACM Comput. Surv. 50(2), 19:1–19:34 (2017). https://doi.org/10.1145/3056461
Bianchi, M.P., Böckenhauer, H.-J., Brülisauer, T., Komm, D., Palano, B.: Online minimum spanning tree with advice. Int. J. Found. Comput. Sci. 29(4), 505–527 (2018). https://doi.org/10.1142/S0129054118410034
Böckenhauer, H.-J., Di Caro, L., Unger, W.: Fully online matching with advice on general bipartite graphs and paths. In: Böckenhauer, H.-J., Komm, D., Unger, W. (eds.) Adventures Between Lower Bounds and Higher Altitudes - Essays Dedicated to Juraj Hromkovič on the Occasion of His 60th Birthday. Lecture Notes in Computer Science, vol. 11011, pp. 172–190. Springer, Cham, Switzerland (2018). https://doi.org/10.1007/978-3-319-98355-4_11
Jin, B., Ma, W.: Online bipartite matching with advice: Tight robustness-consistency tradeoffs for the two-stage model. In: Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., Oh, A. (eds.) Advances in Neural Information Processing Systems, vol. 35, pp. 14555–14567 (2022)
Lavasani, A.M., Pankratov, D.: Advice complexity of online non-crossing matching. Comput. Geom. 110, 101943 (2023). https://doi.org/10.1016/j.comgeo.2022.101943
Rossmanith, P.: On the advice complexity of online edge- and node-deletion problems. In: Böckenhauer, H.-J., Komm, D., Unger, W. (eds.) Adventures Between Lower Bounds and Higher Altitudes - Essays Dedicated to Juraj Hromkovič on the Occasion of His 60th Birthday. Lecture Notes in Computer Science, vol. 11011, pp. 449–462. Springer, Cham, Switzerland (2018). https://doi.org/10.1007/978-3-319-98355-4_26
Chen, L.-H., Hung, L.-J., Lotze, H., Rossmanith, P.: Online node- and edge-deletion problems with advice. Algorithmica 83(9), 2719–2753 (2021). https://doi.org/10.1007/s00453-021-00840-9
Berndt, N., Lotze, H.: Advice complexity bounds for online delayed F-node-, H-node-and H-edge-deletion problems. In: Hsieh, S.-Y., Hung, L.-J., Lee, C.-W. (eds.) Proceedings of the 34th International Workshop on Combinatorial Algorithms, IWOCA 2023. Lecture Notes in Computer Science, vol. 13889, pp. 62–73. Springer, Cham, Switzerland (2023). https://doi.org/10.1007/978-3-031-34347-6_6
Boyar, J., Favrholdt, L.M., Kamali, S., Larsen, K.S.: Online bin covering with advice. Algorithmica 83(3), 795–821 (2021). https://doi.org/10.1007/s00453-020-00728-0
Fung, S.P.Y.: Online two-way trading: Randomization and advice. Theor. Comput. Sci. 856, 41–50 (2021). https://doi.org/10.1016/j.tcs.2020.12.016
Boyar, J., Eidenbenz, S.J., Favrholdt, L.M., Kotrbcík, M., Larsen, K.S.: Online dominating set. Algorithmica 81(5), 1938–1964 (2019). https://doi.org/10.1007/s00453-018-0519-1
Böckenhauer, H.-J., Hromkovič, J., Krug, S., Unger, W.: On the advice complexity of the online dominating set problem. Theor. Comput. Sci. 862, 81–96 (2021). https://doi.org/10.1016/j.tcs.2021.01.022
Böckenhauer, H.-J., Komm, D., Wegner, R.: Call admission problems on grids with advice. Theor. Comput. Sci. 918, 77–93 (2022). https://doi.org/10.1016/j.tcs.2022.03.022
Nilsson, B.J., Vujovic, G.: Online two-dimensional vector packing with advice. In: Calamoneri, T., Corò, F. (eds.) Algorithms and Complexity - 12th International Conference, CIAC. Lecture Notes in Computer Science, vol. 12701, pp. 381–393. Springer, Virtual event (2021). https://doi.org/10.1007/978-3-030-75242-2_27
Fraigniaud, P., Ilcinkas, D., Pelc, A.: Tree exploration with advice. Inf. Comput. 206(11), 1276–1287 (2008). https://doi.org/10.1016/j.ic.2008.07.005
Dobrev, S., Královič, R., Markou, E.: Online graph exploration with advice. In: Even, G., Halldórsson, M.M. (eds.) Structural Information and Communication Complexity - 19th International Colloquium, SIROCCO 2012, Reykjavik, Iceland, June 30-July 2, 2012, Revised Selected Papers. Lecture Notes in Computer Science, vol. 7355, pp. 267–278. Springer, Berlin Heidelberg (2012). https://doi.org/10.1007/978-3-642-31104-8_23
Gorain, B., Pelc, A.: Deterministic graph exploration with advice. ACM Trans. Algorithms 15(1), 8:1–8:17 (2019). https://doi.org/10.1145/3280823
Böckenhauer, H.-J., Fuchs, J., Unger, W.: Exploring sparse graphs with advice. Inf. Comput. 289(Part), 104950 (2022). https://doi.org/10.1016/j.ic.2022.104950
Borodin, A., Boyar, J., Larsen, K.S., Pankratov, D.: Advice complexity of priority algorithms. Theory Comput. Syst. 64(4), 593–625 (2020). https://doi.org/10.1007/s00224-019-09955-7
Boyar, J., Larsen, K.S., Pankratov, D.: Advice complexity of adaptive priority algorithms. Theor. Comput. Sci. 984, 114318 (2024). https://doi.org/10.1016/J.TCS.2023.114318
Böckenhauer, H.-J., Frei, F., Horvath, S.: Priority algorithms with advice for disjoint path allocation problems (extended abstract). In: Hsieh, S.-Y., Hung, L.-J., Klasing, R., Lee, C.-W., Peng, S.-L. (eds.) New Trends in Computer Technologies and Applications - 25th International Computer Symposium, ICS. Communications in Computer and Information Science, vol. 1723, pp. 25–36. Springer, Taoyuan, Taiwan (2022). https://doi.org/10.1007/978-981-19-9582-8_3
Angelopoulos, S., Dürr, C., Jin, S., Kamali, S., Renault, M.P.: Online computation with untrusted advice. In: Vidick, T. (ed.) 11th Innovations in Theoretical Computer Science Conference, ITCS. LIPIcs, vol. 151, pp. 52:1–52:15. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Seattle, Washington (2020). https://doi.org/10.4230/LIPIcs.ITCS.2020.52
Lee, R., Maghakian, J., Hajiesmaili, M.H., Li, J., Sitaraman, R.K., Liu, Z.: Online peak-aware energy scheduling with untrusted advice. In: Meer, H., Meo, M. (eds.) e-Energy ’21: The Twelfth ACM International Conference on Future Energy Systems, pp. 107–123. ACM, Torino, Italy (2021). https://doi.org/10.1145/3447555.3464860
Angelopoulos, S., Kamali, S.: Rényi-ulam games and online computation with imperfect advice. In: Leroux, J., Lombardy, S., Peleg, D. (eds.) 48th International Symposium on Mathematical Foundations of Computer Science, MFCS 2023. LIPIcs, vol. 272, pp. 13:1–13:15. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Dagstuhl, Germany (2023). https://doi.org/10.4230/LIPICS.MFCS.2023.13
Almanza, M., Chierichetti, F., Lattanzi, S., Panconesi, A., Re, G.: Online facility location with multiple advice. In: Ranzato, M., Beygelzimer, A., Dauphin, Y.N., Liang, P., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, Virtual, pp. 4661–4673 (2021). https://proceedings.neurips.cc/paper/2021/hash/250473494b245120a7eaf8b2e6b1f17c-Abstract.html
Antoniadis, A., Gouleakis, T., Kleer, P., Kolev, P.: Secretary and online matching problems with machine learned advice. Discret. Optim. 48(Part 2), 100778 (2023). https://doi.org/10.1016/J.DISOPT.2023.100778
Rohatgi, D.: Near-optimal bounds for online caching with machine learned advice. In: Chawla, S. (ed.) Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, pp 1834–1845. SIAM, Salt Lake City, USA (2020). https://doi.org/10.1137/1.9781611975994.112
Wang, S., Li, J., Wang, S.: Online algorithms for multi-shop ski rental with machine learned advice. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.-F., Lin, H.-T. (eds.) Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, Virtual (2020). https://proceedings.neurips.cc/paper/2020/hash/5cc4bb753030a3d804351b2dfec0d8b5-Abstract.html
Kasilag, C.G.R., Rey, P.M., Clemente, J.B.: Solving the online assignment problem with machine learned advice. CoRR arXiv:2208.04016 (2022). https://doi.org/10.48550/arXiv.2208.04016
Indyk, P., Mallmann-Trenn, F., Mitrovic, S., Rubinfeld, R.: Online page migration with ML advice. In: Camps-Valls, G., Ruiz, F.J.R., Valera, I. (eds.) International Conference on Artificial Intelligence and Statistics, AISTATS. Proceedings of Machine Learning Research, vol. 151, pp. 1655–1670. PMLR, Virtual event (2022). https://proceedings.mlr.press/v151/indyk22a.html
Emek, Y., Gil, Y., Pacut, M., Schmid, S.: Online algorithms with randomly infused advice. CoRR arXiv:2302.05366 (2023). https://doi.org/10.48550/arXiv.2302.05366
Frei, F.: Beneath, behind, and beyond common complexity classes. PhD thesis, ETH Zurich, Zürich, Switzerland (2021). https://doi.org/10.3929/ethz-b-000542575
Acknowledgements
We would like to thank an anonymous reviewer for valuable suggestions regarding Theorem 5. Ralf Klasing was partially supported by the ANR project TEMPOGRAL (ANR-22-CE48-0001).
Matthias Gehnen, Henri Lotze, and Daniel Mock were partially supported by an IDEA League Short-Term Research Exchange Grant.
Funding
Open access funding provided by Swiss Federal Institute of Technology Zurich.
Contributions
All authors contributed equally in the research, manuscript preparation and reviewing.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Appendix A Technical Result for Theorem 4
In the proof of Theorem 4, we need the following result that can be proven by methods from standard calculus. We assume the preconditions and notation from Theorem 4.
Lemma 1
The function
is increasing for \(\frac{2}{3}\le \xi \le 1\).
Proof
We will show that
for \(\frac{2}{3}\le \xi \le 1\). This is equivalent to showing that
for \(\frac{2}{3}\le \xi \le 1\). When \(\xi =\frac{2}{3}\), we check that
since
Finally, we check that \(h(\xi )\) is itself increasing for \(\frac{2}{3}\le \xi \le 1\), since
for \(\frac{2}{3}\le \xi \le 1\). \(\square \)
Böckenhauer, HJ., Gehnen, M., Hromkovič, J. et al. Online Unbounded Knapsack. Theory Comput Syst 69, 14 (2025). https://doi.org/10.1007/s00224-025-10215-0