A Nearly Exact Propagation Algorithm for Energetic Reasoning in \(\mathcal O(n^2 \log n)\)

Conference paper

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 9892)

Abstract

In constraint programming, energetic reasoning constitutes a powerful start time propagation rule for cumulative scheduling problems (CuSP). This article first presents an improved time interval checking algorithm that is derived from a polyhedral model. In a second step, we extend this algorithm to an energetic reasoning propagation algorithm with time complexity \(\mathcal O(n^2 \log n)\), where n denotes the number of jobs. The idea is based on a new sweep line subroutine that efficiently evaluates the energy overloads of each job on the relevant time intervals. In particular, our algorithm performs energetic reasoning propagations for every job. In addition, we show that on the vast majority of relevant intervals our approach achieves the maximum possible propagations according to the energetic reasoning rule.

References

  1. Artigues, C., Lopez, P.: Energetic reasoning for energy-constrained scheduling with a continuous resource. J. Sched. 18(3), 225–241 (2015)

  2. Bonifas, N.: A \(\mathcal O(n^2 \log n)\) propagation for the energy reasoning. In: ROADEF 2016 (2016)

  3. Derrien, A., Petit, T.: A new characterization of relevant intervals for energetic reasoning. In: O’Sullivan, B. (ed.) CP 2014. LNCS, vol. 8656, pp. 289–297. Springer, Heidelberg (2014)

  4. Berthold, T., Heinz, S., Schulz, J.: An approximative criterion for the potential of energetic reasoning. In: Marchetti-Spaccamela, A., Segal, M. (eds.) TAPAS 2011. LNCS, vol. 6595, pp. 229–239. Springer, Heidelberg (2011)

  5. Baptiste, P., Le Pape, C., Nuijten, W.: Satisfiability tests and time bound adjustments for cumulative scheduling problems. Ann. Oper. Res. 92, 305–333 (1999)

  6. Baptiste, P., Le Pape, C., Nuijten, W.: Applying Constraint Programming to Scheduling Problems, vol. 39. Springer Science & Business Media (2012)

  7. Vilím, P.: Edge finding filtering algorithm for discrete cumulative resources in \(\mathcal O(kn \log n)\). In: Gent, I.P. (ed.) CP 2009. LNCS, vol. 5732, pp. 802–816. Springer, Heidelberg (2009)

  8. Vilím, P.: Timetable edge finding filtering algorithm for discrete cumulative resources. In: Achterberg, T., Beck, J.C. (eds.) CPAIOR 2011. LNCS, vol. 6697, pp. 230–245. Springer, Heidelberg (2011)

  9. Schutt, A., Feydy, T., Stuckey, P.J., Wallace, M.G.: Explaining the cumulative propagator. Constraints 16(3), 250–282 (2011)

  10. Schutt, A., Wolf, A.: A new \(\mathcal O(n^2 \log n)\) not-first/not-last pruning algorithm for cumulative resource constraints. In: Cohen, D. (ed.) CP 2010. LNCS, vol. 6308, pp. 445–459. Springer, Heidelberg (2010)

  11. Schutt, A., Feydy, T., Stuckey, P.J.: Explaining time-table-edge-finding propagation for the cumulative resource constraint. In: Gomes, C., Sellmann, M. (eds.) CPAIOR 2013. LNCS, vol. 7874, pp. 234–250. Springer, Heidelberg (2013)

  12. Ouellet, P., Quimper, C.-G.: Time-table extended-edge-finding for the cumulative constraint. In: Schulte, C. (ed.) CP 2013. LNCS, vol. 8124, pp. 562–577. Springer, Heidelberg (2013)

  13. Kameugne, R., Fotso, L.P., Scott, J., Ngo-Kateu, Y.: A quadratic edge-finding filtering algorithm for cumulative resource constraints. Constraints 19(3), 243–269 (2014)

  14. Mercier, L., Van Hentenryck, P.: Edge finding for cumulative scheduling. INFORMS J. Comput. 20(1), 143–153 (2008)

  15. Letort, A., Beldiceanu, N., Carlsson, M.: A scalable sweep algorithm for the cumulative constraint. In: Milano, M. (ed.) CP 2012. LNCS, vol. 7514, pp. 439–454. Springer, Heidelberg (2012)

  16. Kolisch, R., Sprecher, A.: PSPLIB-a project scheduling problem library: OR software-ORSEP operations research software exchange program. Eur. J. Oper. Res. 96(1), 205–216 (1997)

  17. Godard, D., Laborie, P., Nuijten, W.: Randomized large neighborhood search for cumulative scheduling. In: ICAPS, vol. 5, pp. 81–89, June 2005

  18. Stuckey, P.J.: Homepage. http://people.eng.unimelb.edu.au/pstuckey/rcpsp/

Author information

Correspondence to Alexander Tesch.

Appendices

A Proofs

1.1 A.1 Proof of Lemma 2

We first show the following helping lemma.

Lemma 13

Let \(S \subseteq J\) be a job subset with \(S \ne \emptyset \) and \(P_S \ne \emptyset \). In addition, let \((t_1,t_2,\tilde{\mu }) \in P_S\) be a vertex of \(P_S\) with \(\tilde{\mu }_j > 0\) for all \(j \in S\). Then either one of the following holds:

  1. (i)

    \((t_1,t_2,\tilde{\mu })\) satisfies three inequalities of (8)–(11) with equality and all of them correspond to one job \(j \in S\)

  2. (ii)

    \((t_1,t_2,\tilde{\mu })\) satisfies four inequalities of (8)–(11) with equality where two correspond to one job \(i \in S\) and two correspond to one job \(j \in S\) with \(i \ne j\).

Proof

We first show that \(P_S\) has full dimension. Let \(m = |S|\) and let \(\delta _j \in \{0,1\}^m\) be the j-th unit vector. Furthermore, let \(0_m, 1_m \in \{0,1\}^m\) be the vectors that contain m zeros and m ones, respectively. Consider the \(m+2\) vectors \((-T,T,p_j \cdot \delta _j)_{j \in S}\), \((0,T,0_m)\) and \((T,T,0_m)\) where T denotes a large constant. We verify that these vectors are linearly independent and satisfy inequalities (8)–(13). Consequently, \(P_S\) contains \(m+2\) linearly independent vectors, so it has full dimension \(m+2\).

It follows that the vertex \((t_1,t_2,\tilde{\mu }) \in P_S\) satisfies \(m+2\) inequalities of (8)–(13) with equality. If it satisfies inequality (12) or (13) with equality, then \(\tilde{\mu }_j = 0\) for some \(j \in S\), which contradicts the assumption. Hence, we can restrict ourselves to inequalities (8)–(11), which yields the reduced constraint matrix \(A \in \{0,1\}^{4m \times (m+2)}\) of the form

where the first two columns of A correspond to variables \(t_1,t_2\) and the last m columns correspond to variables \(\tilde{\mu }_j\) with \(j \in S\). Here, \(I_m \in \{0,1\}^{m \times m}\) equals the \(m \times m\) identity matrix.

Thus, the vertex \((t_1,t_2,\tilde{\mu }) \in P_S\) corresponds to a selection of \(m+2\) linearly independent rows of A; we denote the associated submatrix by \(A_B\). Every column of \(A_B\) must contain at least one non-zero entry, and each row of A has exactly one non-zero coefficient for some variable \(\tilde{\mu }_j\). Hence, \(A_B\) contains, for each variable \(\tilde{\mu }_j\) with \(j \in S\), at least one row with a non-zero entry in the corresponding column, which accounts for m of its rows. The remaining two rows of \(A_B\) have their non-zero \(\tilde{\mu }\)-entries either for one job \(j \in S\) or for two distinct jobs \(i,j \in S\). This is equivalent to cases (i) and (ii), which proves the lemma. \(\quad \square \)
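
As a side note, the linear-independence claim at the beginning of the proof is easy to check numerically. The Python sketch below stacks the \(m+2\) vectors \((-T,T,p_j \cdot \delta _j)_{j \in S}\), \((0,T,0_m)\), \((T,T,0_m)\) as rows of a matrix and computes its rank; the processing times and the value of T are hypothetical.

```python
# Quick check (hypothetical data) that the m+2 vectors used in the proof are
# linearly independent: (-T, T, p_j * delta_j) for j in S, (0, T, 0_m) and
# (T, T, 0_m), stacked as rows of an (m+2) x (m+2) matrix.
import numpy as np

p = np.array([4, 3, 5])          # hypothetical processing times p_j, m = 3
m, T = len(p), 10**6

rows = [np.concatenate(([-T, T], p[j] * np.eye(m)[j])) for j in range(m)]
rows += [np.concatenate(([0, T], np.zeros(m))),
         np.concatenate(([T, T], np.zeros(m)))]
A = np.vstack(rows)
print(np.linalg.matrix_rank(A) == m + 2)     # True: the vectors are independent
```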

Lemma 2. Let \(S \subseteq J\) be a job subset with \(S \ne \emptyset \) and \(P_S \ne \emptyset \). In addition, let \((t_1,t_2,\tilde{\mu }) \in P_S\) be a vertex of \(P_S\) with \(\tilde{\mu }_j > 0\) for all \(j \in S\). Then either one of the following holds:

  1. (i)

    There is a job \(j \in S\) such that \((t_1,t_2,\tilde{\mu }_j)\) is a vertex of \(P_j\).

  2. (ii)

    There are two distinct jobs \(i,j \in S\) such that \((t_1,t_2,\tilde{\mu }_i,\tilde{\mu }_j)\) is the intersection of one edge of \(P_i\) and one edge of \(P_j\) and thus a vertex of \(P_{i,j}\).

Proof

Either case (i) or case (ii) of Lemma 13 holds, since the assumptions coincide. From the proof of Lemma 13, the polyhedra \(P_j\) and \(P_{i,j}\) have dimensions three and four, respectively. Case (i) of Lemma 13 implies that the projected vertex \((t_1,t_2, \tilde{\mu }_j)\) is a vertex of \(P_j\).

An edge of \(P_j\) satisfies two inequalities of (8)–(11) with equality that correspond to job j. Therefore, case (ii) of Lemma 13 yields that the projected vertex \((t_1,t_2, \tilde{\mu }_i, \tilde{\mu }_j)\) is the intersection of one edge of \(P_i\) and one edge of \(P_j\) and hence a vertex of \(P_{i,j}\). \(\quad \square \)

1.2 A.2 Proof of Lemma 3

Lemma 3. Given a job \(j \in J\), consider the projection of the polyhedron \(P_j\) onto the \((t_1,t_2)\)-plane. The projected line segments of the edges of \(P_j\) that contain a vertex \((t_1,t_2,\tilde{\mu }_j)\) of \(P_j\) with \(\tilde{\mu }_j > 0\) are given by

$$\begin{aligned} T_1(j)&= \{(e_j,\,t_2) : t_2 \ge l_j\} \\ T_2(j)&= \{(t_1,\,l_j) : t_1 \le e_j\} \\ T_3(j)&= \{(t_1,\,e_j+l_j-t_1) : e_j \le t_1 \le \min \{l_j-p_j,\,e_j+p_j\}\} \\ T_1^M(j)&= \{(l_j-p_j,\,t_2) : l_j-p_j \le t_2 \le e_j+p_j\} \\ T_2^M(j)&= \{(t_1,\,e_j+p_j) : l_j-p_j \le t_1 \le e_j+p_j\}, \end{aligned}$$

where the segments \(T_1^M(j)\) and \(T_2^M(j)\) are non-empty only if \(l_j-p_j \le e_j+p_j\).
Proof

By the proof of Lemma 13, it suffices to restrict to inequalities (8)–(11). An edge of \(P_j\) satisfies two inequalities of (8)–(11) with equality. Thus, there are six possible cases:

  1. (i)

If inequalities (8) and (10) hold with equality, then \(\tilde{\mu }_j = p_j = e_j+p_j-t_1\), which implies \(t_1 = e_j\). By inequalities (11) and (9), it follows that \(t_2 \ge l_j\) and \(t_2 \ge e_j+p_j\).

  2. (ii)

If inequalities (8) and (11) hold with equality, then \(\tilde{\mu }_j = p_j = t_2-l_j+p_j\), which implies \(t_2 = l_j\). By inequalities (10) and (9), it follows that \(t_1 \le e_j\) and \(t_1 \le l_j-p_j\).

  3. (iii)

If inequalities (10) and (11) hold with equality, then \(\tilde{\mu }_j = e_j+p_j-t_1 = t_2-l_j+p_j\), which implies \(t_1+t_2 = e_j+l_j\). By inequalities (8) and (9), it follows that \(t_1 \ge e_j\) and \(t_1 \le l_j-p_j\). In addition, inequality (13) yields \(t_1 \le e_j+p_j\).

  4. (iv)

If inequalities (9) and (11) hold with equality, then \(\tilde{\mu }_j = t_2-t_1 = t_2-l_j+p_j\), which implies \(t_1 = l_j-p_j\). By inequalities (10) and (8), it follows that \(t_2 \le e_j+p_j\) and \(t_2 \le l_j\). In addition, inequality (12) yields \(t_2 \ge l_j-p_j\).

  5. (v)

If inequalities (9) and (10) hold with equality, then \(\tilde{\mu }_j = t_2-t_1=e_j+p_j-t_1\), which implies \(t_2 = e_j+p_j\). By inequalities (11) and (8), it follows that \(t_1 \ge l_j-p_j\) and \(t_1 \ge e_j\). In addition, inequality (12) yields \(t_1 \le e_j+p_j\).

  6. (vi)

If inequalities (8) and (9) hold with equality, then \(\tilde{\mu }_j = p_j = t_2-t_1\). By inequalities (10) and (11), it follows that \(t_1 \le e_j\) and \(t_2 \ge l_j\). Adding both yields \(l_j-e_j \le t_2-t_1 = p_j \le l_j-e_j\), which implies \(p_j = l_j-e_j\). Thus, \(t_1 = e_j\) and \(t_2 = l_j\), and all inequalities of (8)–(11) are satisfied with equality. This case is already included in cases (i)–(v).

Since \(e_j \le l_j-p_j\) and \(e_j+p_j \le l_j\) always hold, the cases (i)–(v), in order of appearance, correspond to the line segments \(T_1(j),T_2(j),T_3(j),T_1^M(j),T_2^M(j)\), which proves the lemma. \(\quad \square \)
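
As a quick numerical illustration of this case analysis, the following Python sketch samples one point on each projected segment of a hypothetical job and reports which of the bounds (8)–(11) are tight there. It assumes, consistent with their use in the proof above, that (8)–(11) are the bounds \(\tilde{\mu }_j \le p_j\), \(\tilde{\mu }_j \le t_2-t_1\), \(\tilde{\mu }_j \le e_j+p_j-t_1\) and \(\tilde{\mu }_j \le t_2-l_j+p_j\); the job data is made up.

```python
# Numerical illustration of the case analysis in the proof of Lemma 3.
# Assumption (matching the proof above): inequalities (8)-(11) are the bounds
# mu <= p_j, mu <= t2 - t1, mu <= e_j + p_j - t1, mu <= t2 - l_j + p_j.
e, l, p = 2, 10, 5          # hypothetical job with l_j - p_j <= e_j + p_j

def bounds(t1, t2):
    return {8: p, 9: t2 - t1, 10: e + p - t1, 11: t2 - l + p}

def tight(t1, t2, eps=1e-9):
    b = bounds(t1, t2)
    mu = min(b.values())    # value of mu_j at a point with mu_j > 0
    return sorted(k for k, v in b.items() if abs(v - mu) < eps) if mu > 0 else []

# one sample point per projected segment, cases (i)-(v)
samples = {
    "T1  (t1 = e_j, t2 >= l_j)":  (e, l + 3),
    "T2  (t2 = l_j, t1 <= e_j)":  (e - 3, l),
    "T3  (t1 + t2 = e_j + l_j)":  (e + 1, l - 1),
    "T1M (t1 = l_j - p_j)":       (l - p, e + p - 1),
    "T2M (t2 = e_j + p_j)":       (l - p + 1, e + p),
}
for name, (t1, t2) in samples.items():
    print(name, "-> tight bounds:", tight(t1, t2))
```

The reported tight pairs are (8),(10) for \(T_1\), (8),(11) for \(T_2\), (10),(11) for \(T_3\), (9),(11) for \(T_1^M\) and (9),(10) for \(T_2^M\), matching cases (i)–(v).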

1.3 A.3 Proof of Theorem 2

Theorem 2. If \((t_1,t_2)\) is a time interval of maximum energy overload, then \((t_1,t_2) \in \mathcal T\).

Proof

By Lemma 1, there exists a job subset \(S \subseteq J\) with \(S \ne \emptyset \) such that \((t_1,t_2,\tilde{\mu })\) is a vertex of \(P_S\) with \(\tilde{\mu }_j > 0\) for all \(j \in S\). Therefore, Lemma 2 applies. We distinguish between cases (i) and (ii) of Lemma 2.

If case (i) holds, then there is a job \(j \in S\) such that \((t_1,t_2,\tilde{\mu }_j)\) is a vertex of \(P_j\). Since \(P_j\) is a three-dimensional polyhedron, the vertex \((t_1,t_2,\tilde{\mu }_j)\) of \(P_j\) has at least three incident edges. By Lemma 3, the only intersection points of at least three projected edges of \(P_j\) are \((t_1,t_2)=(e_j,l_j)\) and, if \(j \in J^M\), \((t_1,t_2)=(l_j-p_j,e_j+p_j)\). This corresponds to \((t_1,t_2) \in \mathcal T_j\) or, if \(j \in J^M\), to \((t_1,t_2) \in \mathcal T_j^M\).

Otherwise, if case (ii) holds, then there are two distinct jobs \(i,j \in S\) such that \((t_1,t_2)\) is an intersection point of a projected edge of \(P_i\) and a projected edge of \(P_j\). Since \(T_1(i),T_1^M(i)\) are vertical, \(T_2(i),T_2^M(i)\) horizontal and \(T_3(i)\) diagonal line segments, the possible intersection relations are vertical-horizontal, vertical-diagonal and horizontal-diagonal. The relation vertical-horizontal corresponds to \((t_1,t_2) \in \mathcal T_{ij}\), and the relations vertical-diagonal and horizontal-diagonal to \((t_1,t_2) \in \mathcal T_{ij}'\). If the line segments of jobs i and j intersect in more than one point, we can always find an intersection point matching one of the previous characterizations along the intersecting line. It follows that \((t_1,t_2) \in \mathcal T\), which proves the theorem. \(\quad \square \)
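
The candidate set \(\mathcal T\) and its subsets \(\mathcal T_j, \mathcal T_j^M, \mathcal T_{ij}, \mathcal T_{ij}'\) are defined in the main part of the paper and are not reproduced here. Purely as an illustration of the \(\mathcal O(n^2)\) enumeration suggested by the proof, the Python sketch below collects the per-job points \((e_j,l_j)\) and \((l_j-p_j,e_j+p_j)\) together with pairwise intersections of vertical, horizontal and diagonal segments; the job data and the filter \(t_1 < t_2\) are hypothetical additions.

```python
# Sketch (hypothetical data) of enumerating O(n^2) candidate intervals along
# the lines of the proof of Theorem 2: per-job points (e_j, l_j) and, for jobs
# with a compulsory part, (l_j - p_j, e_j + p_j), plus pairwise intersections
# of vertical, horizontal and diagonal segments. The exact definition of the
# sets T_j, T_j^M, T_ij, T_ij' is given in the main part of the paper.
from itertools import permutations

jobs = [(0, 9, 4), (2, 7, 3), (1, 12, 5)]      # hypothetical (e_j, l_j, p_j)

def candidates(jobs):
    cand = set()
    for e, l, p in jobs:
        cand.add((e, l))                       # T_j
        if l - p <= e + p:                     # compulsory part exists
            cand.add((l - p, e + p))           # T_j^M
    for (ei, li, pi), (ej, lj, pj) in permutations(jobs, 2):
        for t1 in (ei, li - pi):               # vertical segments of job i
            for t2 in (lj, ej + pj):           # horizontal segments of job j
                cand.add((t1, t2))             # vertical-horizontal
            cand.add((t1, ej + lj - t1))       # vertical-diagonal
        for t2 in (li, ei + pi):               # horizontal segments of job i
            cand.add((ej + lj - t2, t2))       # horizontal-diagonal
    return {(t1, t2) for (t1, t2) in cand if t1 < t2}

print(sorted(candidates(jobs)))
```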

1.4 A.4 Proof of Lemmas 4–7

Lemma 4. For any job \(j \in J\) and \(\overline{t}_1 \le \theta _2\) the piecewise linear function \(f_j(t_2) = d_j \cdot (\mu _j^{left}(\overline{t}_1,t_2)-\mu _j(\overline{t}_1,t_2))\) on the interval \([\theta _1,\theta _4]\) decomposes into the linear function segments

$$\begin{aligned} f_j^1(t_2) = d_j \cdot (t_2-\theta _1)&,&\quad t_2 \in [\theta _1,\theta _2] \\ f_j^2(t_2) = d_j \cdot (\theta _2-\theta _1)&,&\quad t_2 \in [\theta _2,\theta _3] \\ f_j^3(t_2) = -d_j \cdot (t_2-\theta _4)&,&\quad t_2 \in [\theta _3,\theta _4]. \end{aligned}$$

Proof

Since \(\overline{t}_1 \le \theta _2\), the function \(\mu _j^{left}(\overline{t}_1,t_2)\) has slope one in the interval \(t_2 \in [\theta _1,e_j+p_j]\) and slope zero otherwise. The function \(\mu _j(\overline{t}_1,t_2)\) has slope one in the interval \(t_2 \in [l_j-p_j,\theta _4]\) and slope zero otherwise. Hence, \(\mu _j^{left}(\overline{t}_1,t_2) - \mu _j(\overline{t}_1,t_2)\) has slope one in the interval \([\theta _1,\theta _2]\), slope zero in \([\theta _2,\theta _3]\) and slope minus one in \([\theta _3,\theta _4]\). Scaling by \(d_j\) shows the statement. \(\quad \square \)
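
For a numerical sanity check of this slope pattern, the Python sketch below evaluates \(f_j\) for one hypothetical job. It assumes that \(\mu _j^{left}\) is the overlap of the left-shifted job \([e_j,e_j+p_j]\) with \([t_1,t_2]\) and that \(\mu _j\) is the minimal intersection bounded by (8)–(11) and clipped at zero; since the breakpoints \(\theta _1,\dots ,\theta _4\) from the main text are not reproduced here, the check only confirms that all slopes of \(f_j\) lie in \(\{d_j, 0, -d_j\}\).

```python
# Numerical check of the slope pattern in Lemma 4 for one hypothetical job.
# Assumptions: mu_left is the overlap of the left-shifted job [e_j, e_j + p_j]
# with [t1, t2]; mu is the minimal intersection min(p, t2-t1, e+p-t1, t2-l+p)
# clipped at zero.
e, l, p, d = 3, 14, 6, 2                     # hypothetical (e_j, l_j, p_j, d_j)

def mu_left(t1, t2):
    return max(0, min(e + p, t2) - max(e, t1))

def mu(t1, t2):
    return max(0, min(p, t2 - t1, e + p - t1, t2 - l + p))

t1 = 1                                       # some fixed t1 <= theta_2
f = lambda t2: d * (mu_left(t1, t2) - mu(t1, t2))
slopes = {f(t2 + 1) - f(t2) for t2 in range(t1, 25)}
print(slopes)                                # a subset of {d, 0, -d}
```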

Lemma 5. For any job \(j \in J\) and \(\overline{t}_1 \le \theta _2\) the piecewise linear function \(f_j(t_2) = d_j \cdot (t_2 - \mu _j(\overline{t}_1,t_2))\) on the interval \([\theta _1,\theta _4]\) decomposes into the linear function segments

$$\begin{aligned} f_j^1(t_2) = d_j \cdot t_2&,&\quad t_2 \in [\theta _1,l_j-p_j] \\ f_j^2(t_2) = d_j \cdot (l_j-p_j)&,&\quad t_2 \in [l_j-p_j,\theta _4]. \end{aligned}$$

Proof

Since \(\overline{t}_1 \le \theta _2\), the function \(\mu _j(\overline{t}_1,t_2)\) has slope zero in \([\theta _1,l_j-p_j]\) and slope one in the interval \([l_j-p_j,\theta _4]\). Hence, the function \(t_2-\mu _j(\overline{t}_1,t_2)\) has slope one in the interval \([\theta _1,l_j-p_j]\) and slope zero in the interval \([l_j-p_j, \theta _4]\). Scaling by \(d_j\) shows the statement. \(\quad \square \)

Lemma 6. Let \(\theta ' = \max \{\theta _3, \theta _4,\overline{t}_1\}\). For any job \(j \in J\) and \(\overline{t}_1 \in [e_j,l_j]\) the piecewise linear function \(f_j(t_2) = d_j \cdot (\mu _j^{right}(\overline{t}_1,t_2) - \mu _j(\overline{t}_1,t_2))\) on the interval \([\theta ',\infty )\) decomposes into the linear function segments

$$\begin{aligned} f_j^1(t_2) = d_j \cdot (t_2-\theta ')&,&\quad t_2 \in [\theta ',l_j] \\ f_j^2(t_2) = d_j \cdot (l_j-\theta ')&,&\quad t_2 \in [l_j,\infty ). \end{aligned}$$

Proof

Since \(\overline{t}_1 \in [e_j,l_j]\), the function \(\mu _j^{right}(\overline{t}_1,t_2)\) has slope one in the interval \([\theta ',l_j]\) and slope zero in the interval \([l_j,\infty )\). The function \(\mu _j(\overline{t}_1,t_2)\) is constant for all \(t_2 \in [\theta ',\infty )\). Hence, the function \(\mu _j^{right}(\overline{t}_1,t_2) - \mu _j(\overline{t}_1,t_2)\) has slope one in the interval \([\theta ',l_j]\) and slope zero in the interval \([l_j, \infty )\). Scaling by \(d_j\) shows the statement. \(\quad \square \)

Lemma 7. Let \(\theta ' = \max \{\theta _3, \theta _4,\overline{t}_1\}\). For any job \(j \in J\) and \(\overline{t}_1 \in [e_j,l_j]\) the piecewise linear function \(f_j(t_2) = -d_j \cdot (\overline{t}_1 + \mu _j(\overline{t}_1,t_2))\) is constant on \([\theta ',\infty )\).

Proof

By construction, it holds \(\mu _j(\overline{t}_1,t_2) = \mu _j(\overline{t}_1, \theta ')\) for all \(t_2 \in [\theta ',\infty )\) which is constant. Consequently, \(f_j(t_2)\) is constant for all \(t_2 \in [\theta ',\infty )\). \(\quad \square \)

1.5 A.5 Proof of Lemma 8

Lemma 8. If the slopes of the functions (23) and (24) coincide on an interval, then both functions attain their maximum on the interval \([\underline{t}_2, \overline{t}_2]\) at the same point \((\overline{t}_1,t_2) \in \mathcal T\) with \(t_2 \in [\underline{t}_2, \overline{t}_2]\), if the maximum exists.

Proof

Since the slope of (23) equals the slope of (24), the difference of the functions \(\omega (\overline{t}_1,t_2) + d_j \cdot (\mu _j^{left}(\overline{t}_1,t_2)-\mu _j(\overline{t}_1,t_2))\) and \(\omega (\overline{t}_1,t_2) + d_j \cdot (t_2 - \mu _{j}(\overline{t}_1,t_2))\) is constant for all \(t_2 \in [\underline{t}_2, \overline{t}_2]\). Hence, if there exists an interval \((\overline{t}_1, t_2) \in \mathcal T\) with \(t_2 \in [\underline{t}_2, \overline{t}_2]\) that maximizes (23), it also maximizes (24), and conversely. \(\quad \square \)

1.6 A.6 Proof of Lemmas 9–12

Lemma 9. For fixed value \(\overline{t}_1 \in [e_j,l_j]\) the function \(\mu _j^{right}(\overline{t}_1,t_2) - 2 \cdot \mu _j(\overline{t}_1,t_2)\) has slope zero for all \(t_2 \in [l_j,\infty )\).

Proof

Both functions \(\mu _j^{right}(\overline{t}_1,t_2)\) and \(\mu _j(\overline{t}_1,t_2)\) have slope zero in the interval \(t_2 \in [l_j,\infty )\), so the stated function has slope zero in this interval. \(\quad \square \)

Lemma 10. For fixed value \(\overline{t}_1 \in [e_j, \theta _2]\) the function \(t_2 - \mu _j^{left}(\overline{t}_1,t_2)\) has slope one in the interval \(t_2 \in [e_j+p_j, \theta _4]\).

Proof

The function \(\mu _j^{left}(\overline{t}_1,t_2)\) has slope zero in the interval \(t_2 \in [e_j+p_j, \theta _4]\), so \(t_2 - \mu _j^{left}(\overline{t}_1,t_2)\) has slope one in the interval \(t_2 \in [e_j+p_j, \theta _4]\). \(\quad \square \)

Lemma 11. For fixed value \(\overline{t}_1 \in [e_j, \theta _2]\) the function \(\mu _j^{right}(\overline{t}_1,t_2) - 2 \cdot \mu _j(\overline{t}_1,t_2)\) has slope one in the interval \([\max \{\theta _3,\theta _4\},l_j]\).

Proof

The function \(\mu _j^{right}(\overline{t}_1,t_2)\) has slope one and the function \(\mu _j(\overline{t}_1,t_2)\) has slope zero in the interval \([\max \{\theta _3,\theta _4\},l_j]\), so the stated function has slope one. \(\quad \square \)

Lemma 12. For fixed value \(\overline{t}_1 \in [e_j, \theta _2]\) the function \(t_2 - \mu _j^{left}(\overline{t}_1,t_2)\) has slope zero in the interval \([\overline{t}_1, \theta _2]\).

Proof

The function \(\mu _j^{left}(\overline{t}_1,t_2)\) has slope one in the interval \([\overline{t}_1, \theta _2]\), therefore \(t_2 - \mu _j^{left}(\overline{t}_1,t_2)\) has slope zero. \(\quad \square \)

B Algorithms

Notes for the algorithms:

  • \((j,t_2,\tau _2) \in O_3(t_1) \iff (j,t_2+t_1,\tau _2) \in O_3\)

  • the computation of the energy overloads \(\omega (t_1,t_2)\) is analogous to the checker of Baptiste et al. [5] and involves dynamic slope updates; a simplified illustration follows below
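
For illustration only, the following Python sketch implements a simplified overload check in the spirit of these notes. It is not the \(\mathcal O(n^2 \log n)\) sweep of the paper: it assumes a resource capacity C, jobs given as \((e_j,l_j,p_j,d_j)\), the minimal intersection \(\mu _j\) from (8)–(11) clipped at zero, and a crude candidate set built directly from the job data; all numbers are hypothetical.

```python
# Simplified overload checker (not the O(n^2 log n) sweep of the paper).
# Assumptions: capacity C, jobs (e_j, l_j, p_j, d_j), minimal intersection
# mu_j from (8)-(11) clipped at zero; omega(t1, t2) > 0 signals an energy
# overload on the interval [t1, t2].
C = 3
jobs = [(0, 5, 4, 2), (1, 6, 4, 2), (0, 10, 3, 1)]    # hypothetical (e, l, p, d)

def mu(e, l, p, t1, t2):
    return max(0, min(p, t2 - t1, e + p - t1, t2 - l + p))

def omega(t1, t2):
    return sum(d * mu(e, l, p, t1, t2) for (e, l, p, d) in jobs) - C * (t2 - t1)

# crude candidate interval bounds taken directly from the job data
T1 = {e for (e, l, p, d) in jobs} | {l - p for (e, l, p, d) in jobs}
T2 = {l for (e, l, p, d) in jobs} | {e + p for (e, l, p, d) in jobs}
overloads = {(t1, t2): omega(t1, t2)
             for t1 in T1 for t2 in T2 if t1 < t2 and omega(t1, t2) > 0}
print(overloads or "no overload detected")
```

On this toy instance the sketch reports overloads, for example on the interval [2, 4].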

Figures a–h: listings of the checking and propagation algorithms (not reproduced here).

Copyright information

© 2016 Springer International Publishing Switzerland

Cite this paper

Tesch, A. (2016). A Nearly Exact Propagation Algorithm for Energetic Reasoning in \(\mathcal O(n^2 \log n)\). In: Rueher, M. (ed.) Principles and Practice of Constraint Programming. CP 2016. Lecture Notes in Computer Science, vol. 9892. Springer, Cham. https://doi.org/10.1007/978-3-319-44953-1_32

  • DOI: https://doi.org/10.1007/978-3-319-44953-1_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-44952-4

  • Online ISBN: 978-3-319-44953-1
