Abstract
In this paper we introduce a novel algorithm to iteratively tune annealing offsets for qubits in a D-Wave 2000Q quantum processing unit (QPU). Using a (1+1)-CMA-ES algorithm, we improve the performance of the QPU by up to a factor of 12.4 in the probability of obtaining ground states for small problems, and obtain previously inaccessible (i.e., better) solutions for larger problems. We also make efficient use of QPU samples as a resource, requiring 100 times fewer samples than existing tuning methods. The success of this approach demonstrates how quantum computing can benefit from classical algorithms, and opens the door to new hybrid methods of computing.
Notes
1. Each qubit in the QPU has a different range in \(\varDelta s\) that can be set independently. \(\varDelta s < 0\) is an advance in time and \(\varDelta s > 0\) is a delay. A typical range for \(\varDelta s\) is \(\pm 0.15\). The total annealing time for all qubits is still \(|s| = 1\) (or \(\tau\) in units of time).
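Because each qubit's allowed range for \(\varDelta s\) differs, any candidate offset vector must be clipped to the per-qubit bounds before submission. The sketch below illustrates this with NumPy; the qubit count and the uniform \(\pm 0.15\) bounds are hypothetical placeholder values, not values queried from an actual QPU:

```python
import numpy as np

def clip_offsets(candidate, ranges):
    """Clip candidate annealing offsets to each qubit's allowed range.

    candidate : array of shape (n,), proposed offsets (Delta s per qubit)
    ranges    : array of shape (n, 2), per-qubit [min, max] bounds
    """
    ranges = np.asarray(ranges)
    return np.clip(candidate, ranges[:, 0], ranges[:, 1])

# Hypothetical example: 4 qubits, each with the typical +/-0.15 range.
ranges = np.tile([-0.15, 0.15], (4, 1))
offsets = clip_offsets(np.array([0.2, -0.3, 0.05, 0.0]), ranges)
# offsets -> [0.15, -0.15, 0.05, 0.0]
```

In practice the per-row bounds would be read from the solver's properties rather than assumed uniform.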
References
King, J., Yarkoni, S., Nevisi, M.M., Hilton, J.P., McGeoch, C.C.: Benchmarking a quantum annealing processor with the time-to-target metric. arXiv:1508.05087 (2015)
Bian, Z., Chudak, F., Israel, R.B., Lackey, B., Macready, W.G., Roy, A.: Mapping constrained optimization problems to quantum annealing with application to fault diagnosis. Front. ICT 3, 14 (2016)
Neukart, F., Compostella, G., Seidel, C., von Dollen, D., Yarkoni, S., Parney, B.: Traffic flow optimization using a quantum annealer. Front. ICT 4, 29 (2017)
Raymond, J., Yarkoni, S., Andriyash, E.: Global warming: temperature estimation in annealers. Front. ICT 3, 23 (2016)
Venturelli, D., Marchand, D.J.J., Rojo, G.: Quantum annealing implementation of job-shop scheduling. arXiv:1506.08479 (2015)
King, A.D., et al.: Observation of topological phenomena in a programmable lattice of 1,800 qubits. Nature 560(7719), 456–460 (2018)
Yarkoni, S., Plaat, A., Bäck, T.: First results solving arbitrarily structured maximum independent set problems using quantum annealing. In: 2018 IEEE Congress on Evolutionary Computation (CEC), (Rio de Janeiro, Brazil), pp. 1184–1190 (2018)
Johnson, M.W., et al.: Quantum annealing with manufactured spins. Nature 473, 194–198 (2011)
Barahona, F.: On the computational complexity of Ising spin glass models. J. Phys. A: Math. Gen. 15(10), 3241 (1982)
Lucas, A.: Ising formulations of many NP problems. Front. Phys. 2, 5 (2014)
Lanting, T., King, A.D., Evert, B., Hoskinson, E.: Experimental demonstration of perturbative anticrossing mitigation using non-uniform driver Hamiltonians. arXiv:1708.03049 (2017)
Andriyash, E., Bian, Z., Chudak, F., Drew-Brook, M., King, A.D., Macready, W.G., Roy, A.: Boosting integer factoring performance via quantum annealing offsets. https://www.dwavesys.com/resources/publications
Hsu, T.-J., Jin, F., Seidel, C., Neukart, F., Raedt, H.D., Michielsen, K.: Quantum annealing with anneal path control: application to 2-SAT problems with known energy landscapes. arXiv:1810.00194 (2018)
Susa, Y., Yamashiro, Y., Yamamoto, M., Nishimori, H.: Exponential speedup of quantum annealing by inhomogeneous driving of the transverse field. J. Phys. Soc. Jpn. 87(2), 023002 (2018)
Kadowaki, T., Nishimori, H.: Quantum annealing in the transverse Ising model. Phys. Rev. E 58, 5355–5363 (1998)
Hansen, N.: The CMA evolution strategy: a comparing review. In: Lozano, J.A., Larrañaga, P., Inza, I., Bengoetxea, E. (eds.) Towards a New Evolutionary Computation: Advances in the Estimation of Distribution Algorithms, pp. 75–102. Springer, Heidelberg (2006). https://doi.org/10.1007/3-540-32494-1_4
Igel, C., Suttorp, T., Hansen, N.: A computational efficient covariance matrix update and a (1+1)-CMA for evolution strategies. In: Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, GECCO 2006, pp. 453–460. ACM, New York (2006)
Auger, A., Hansen, N.: Benchmarking the (1+1)-CMA-ES on the BBOB-2009 Noisy Testbed. In: Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, GECCO 2009, pp. 2467–2472. ACM, New York (2009)
Appendices
A Evaluating the Fitness Function of \((1+1)\)-CMA-ES using the QPU
Here we show an example of a single tuning run of \((1+1)\)-CMA-ES for a 40-node graph, with the configuration of initial offsets set to all zeroes. As explained in Algorithm 1, we use the mean energy of 100 samples returned by the QPU as the evaluation of the fitness function of the CMA-ES. The sample size of 100 was determined empirically as the minimum number of samples needed to reliably estimate the mean energy, and is consistent with previous results [4]. In Fig. 3 (left) we show the progression of the CMA-ES routine and the associated fitness function. The tuning shows a clear improvement in mean energy, visible both in the fitness function and in its cumulative minimum. Every time the objective function improves, the annealing offsets used in that sample set are recorded. The evolution of the annealing offsets for this 40-variable instance is shown in Fig. 3 (right). The final offsets obtained after tuning were then used in the performance tests.
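The tuning loop described above can be sketched as follows. This is a simplified \((1+1)\)-ES with success-based step-size adaptation standing in for the full \((1+1)\)-CMA-ES (which additionally adapts a covariance matrix), and the `sample_energies` stub replaces the QPU call with a noisy synthetic objective; the objective, its optimum, and all constants are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_energies(offsets, num_reads=100):
    # Stub standing in for a QPU call: a noisy quadratic whose
    # minimum lies at offsets = 0.05 for every qubit (hypothetical).
    base = np.sum((offsets - 0.05) ** 2)
    return base + 0.01 * rng.standard_normal(num_reads)

def tune_offsets(n, bounds=0.15, iterations=100):
    """Simplified (1+1)-ES tuning of n annealing offsets.

    Fitness is the mean energy of num_reads samples, as in Algorithm 1.
    Offsets are clipped to the typical +/-0.15 range.
    """
    parent = np.zeros(n)                       # initial offsets: all zeroes
    sigma = 0.05                               # global mutation step size
    best_fit = sample_energies(parent).mean()
    for _ in range(iterations):
        child = np.clip(parent + sigma * rng.standard_normal(n),
                        -bounds, bounds)
        fit = sample_energies(child).mean()
        if fit <= best_fit:                    # (1+1) selection: keep improvements
            parent, best_fit = child, fit
            sigma *= 1.1                       # success: expand step size
        else:
            sigma *= 0.9                       # failure: shrink step size
    return parent, best_fit

offsets, fitness = tune_offsets(n=8)
```

On a real QPU, `sample_energies` would submit the problem with the candidate offsets and return the energies of the 100 returned samples; the noisy fitness is why the mean over a full sample set, rather than a single read, is used.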
B Analysis of Tuned Annealing Offsets
Here we present the aggregated results of all the annealing offsets post tuning. Figure 4 (left and right) shows the final offset values for all problem instances using the CMA-ES routine with initial offsets set to zero and uniform, respectively. We found that the final offset values were not correlated with chain length or success probability. However, we did see a systematic shift in the final offset values with respect to the degree of the node in the graph, and as a function of problem size. In both figures, we see divergent behavior in offsets for very low and very high degree nodes, with consistent stability in the mid-range. The main difference between the two figures is the final value of the offsets in this middle region. In Fig. 4 (left), the average offset value rises from 0 at small problems to roughly 0.02 for problems with 40 variables, then back down to 0 for the largest problems. There is also a slight increase in average offset value from degree 3 to degree 14, found consistently for all problem sizes. In contrast, Fig. 4 (right) shows that the final offset values were roughly 0.02 at all problem sizes, apart from the divergent behavior at the extrema of the degree axis. The difference between the two configurations could explain why initial offsets set to zero performed slightly better than the uniform initial offsets. Given the fixed budget of 10,000 samples for calibration per MIS instance, escaping from a local optimum (such as the null initial configuration) becomes increasingly difficult at larger problem sizes, thus degrading the uniform configuration's performance. Beyond the results shown here, we were not able to extract any meaningful information with respect to other parameters of interest.
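The degree-wise aggregation underlying this analysis can be sketched as a simple group-by-and-average; the example data below is hypothetical and does not reproduce numbers from the paper:

```python
import numpy as np

def mean_offset_by_degree(degrees, offsets):
    """Group final tuned offsets by node degree and average within each group.

    degrees : sequence of node degrees (one per tuned variable)
    offsets : sequence of final offset values, same length
    Returns a dict mapping degree -> mean final offset.
    """
    degrees = np.asarray(degrees)
    offsets = np.asarray(offsets)
    return {int(d): float(offsets[degrees == d].mean())
            for d in np.unique(degrees)}

# Hypothetical example, not data from the paper:
stats = mean_offset_by_degree([3, 3, 5, 5, 14],
                              [0.01, 0.03, 0.02, 0.02, 0.05])
# stats maps each degree to its mean offset, roughly {3: 0.02, 5: 0.02, 14: 0.05}
```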
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Yarkoni, S., Wang, H., Plaat, A., Bäck, T. (2019). Boosting Quantum Annealing Performance Using Evolution Strategies for Annealing Offsets Tuning. In: Feld, S., Linnhoff-Popien, C. (eds) Quantum Technology and Optimization Problems. QTOP 2019. Lecture Notes in Computer Science(), vol 11413. Springer, Cham. https://doi.org/10.1007/978-3-030-14082-3_14
Print ISBN: 978-3-030-14081-6
Online ISBN: 978-3-030-14082-3