Abstract
Supervised machine learning is the classification of new data on the basis of already classified training data, and the support vector machine is one of its most significant models. In 2014, Rebentrost, Mohseni and Lloyd showed that, for a suitably specified N × M data set, there is a quantum algorithm that constructs a support vector machine model in the quantum setting. If the data set is well-defined, their algorithm runs in time poly(logMN, 1/ε), where ε is the desired precision of the output state. In this paper we present an improved quantum support vector machine model whose running time is polynomial in log(1/ε), exponentially improving the dependence on precision while keeping essentially the same dependence on the other parameters.
1 Introduction
Dating back to the 1980s, quantum computing has been shown to be more powerful than classical computing at solving certain kinds of problems. In the past decades, to achieve computational advantages, quantum computing has been brought into the field of machine learning, giving birth to a new interdisciplinary research field: quantum machine learning. It lies at the intersection of computer science and quantum physics, studying how to learn from training data and make predictions on new data in quantum settings [1,2,3,4,5, 16, 17]. Since its inception, quantum machine learning has become a hot topic attracting worldwide attention, and a number of efficient quantum algorithms have been proposed for various machine learning tasks [6,7,8,9].
Quantum mechanics is well known to produce atypical patterns in data. Classical machine learning methods such as deep neural networks frequently have the feature that they can both recognize statistical patterns in data and produce data that possess the same statistical patterns: they recognize the patterns that they produce. This observation suggests the following hope. If small quantum information processors can produce statistical patterns that are computationally difficult for a classical computer to produce, then perhaps they can also recognize patterns that are equally difficult to recognize classically. The realization of this hope depends on whether efficient quantum algorithms can be found for machine learning. A quantum algorithm is a set of instructions solving a problem, such as determining whether two graphs are isomorphic, that can be performed on a quantum computer. Quantum machine learning software makes use of quantum algorithms as part of a larger implementation. By analysing the steps that quantum algorithms prescribe, it becomes clear that they have the potential to outperform classical algorithms for specific problems (that is, reduce the number of steps required). This potential is known as quantum speedup. The notion of a quantum speedup depends on whether one takes a formal computer science perspective—which demands mathematical proofs—or a perspective based on what can be done with realistic, finite size devices—which requires solid statistical evidence of a scaling advantage over some finite range of problem sizes [18,19,20,21,22,23,24,25]. For the case of quantum machine learning, the best possible performance of classical algorithms is not always known. This is similar to the case of Shor’s polynomial-time quantum algorithm for integer factorization: no sub-exponential-time classical algorithm has been found, but the possibility is not provably ruled out. 
Determining a scaling advantage contrasting quantum and classical machine learning would rely on the existence of a quantum computer and is called a ‘benchmarking’ problem. Such advantages could include improved classification accuracy and sampling of classically inaccessible systems. Accordingly, quantum speedups in machine learning are currently characterized using idealized measures from complexity theory: query complexity and gate complexity. Query complexity measures the number of queries to the information source for the classical or quantum algorithm. A quantum speedup results if the number of queries needed to solve a problem is lower for the quantum algorithm than for the classical algorithm. To determine the gate complexity, the number of elementary quantum operations (or gates) required to obtain the desired result is counted [26,27,28,29,30,31].
The most fundamental examples of supervised machine learning algorithms are linear support vector machines (SVMs) and perceptrons. The task of these methods is to find an optimal separating hyperplane between two classes of data such that, with high probability, all training examples of one class are found only on one side of the hyperplane. The most robust classifier is obtained when the margin between the hyperplane and the data is maximized. The SVM model can be solved in time \( {\mathcal{O}}(\log (1/\varepsilon) \,poly(N,M)) \) [10], where N is the dimension of the feature space, M is the number of training vectors and \( \varepsilon \) is the accuracy. However, classical computing becomes impractical when the data set grows to millions of examples.
Similar to its classical counterpart, the quantum SVM is a paradigmatic example of quantum machine learning [8]. The first quantum SVM algorithm was proposed in the early 2000s, using a variant of Grover’s search for function minimization; finding the s support vectors out of N vectors takes \( \sqrt {N/s} \) iterations [11]. More recently, Rebentrost et al. showed that a quantum SVM can be implemented in \( {\mathcal{O}}(\log (MN) \, poly(1/\varepsilon)) \) time for training and classification. In this paper, we present an improved quantum SVM algorithm with \( {\mathcal{O}}(poly(\varepsilon^{ - 1} \log (1/\varepsilon) \log MN)) \) running time. Specifically, we handle the dimensions M, N by a quantum matrix-inversion algorithm, and reduce the dependence on the precision \( \varepsilon \) by the Fourier approach of [12]. In cases where a low-rank approximation is appropriate, our quantum SVM operates on the full training set in logarithmic runtime.
2 Review of SVM
The fundamental task of the SVM is to classify a vector into one of two classes, given M training data points of the form \( \{ \left( {{\mathbf{x}}_{{\mathbf{j}}} ,y_{j} } \right):{\mathbf{x}}_{{\mathbf{j}}} \in \mathbb{R}^{N} ,y_{j} = \pm 1, j = 1, \ldots ,M\} \), where the label \( y_{j} = 1 \,{\text{or}}\, - 1 \) indicates the class to which \( {\mathbf{x}}_{{\mathbf{j}}} \) belongs. For the classification, the SVM finds a maximum-margin hyperplane with normal vector w that divides the data set into two classes. The SVM aims to find two parallel supporting hyperplanes, separated by the maximum possible distance \( 2/\left\| {\mathbf{w}} \right\| \), with the \( y_{j} = 1 \) and \( y_{j} = -1 \) examples on opposite sides. Which side \( {\mathbf{x}}_{{\mathbf{j}}} \) lies on is determined by the constraints \( {\mathbf{w}} \cdot {\mathbf{x}}_{{\mathbf{j}}} + b \ge 1 \) for \( y_{j} = 1 \) and \( {\mathbf{w}} \cdot {\mathbf{x}}_{{\mathbf{j}}} + b \le - 1 \) for \( y_{j} = - 1 \). Therefore, finding the maximum-margin hyperplane consists of minimizing \( \left\| {\mathbf{w}} \right\|^{2} /2 \) subject to the inequality constraints \( y_{j} ({\mathbf{w}} \cdot {\mathbf{x}}_{{\mathbf{j}}} + b) \ge 1 \) for all indices j. This is the primal formulation of the problem. The dual formulation is maximizing over the Karush-Kuhn-Tucker multipliers \( \alpha = \left( {\alpha_{1} , \ldots ,\alpha_{M} } \right)^{T} \) the function:
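The display equation is the standard dual objective, consistent with the decomposition \( {\mathbf{w}} = \sum\nolimits_{j} \alpha_{j} {\mathbf{x}}_{{\mathbf{j}}} \) used below:

\[ L(\vec{\alpha}) = \sum_{j=1}^{M} y_{j}\alpha_{j} - \frac{1}{2}\sum_{j,k=1}^{M} \alpha_{j} K_{jk}\, \alpha_{k} , \]

where \( K_{jk} = {\mathbf{x}}_{{\mathbf{j}}} \cdot {\mathbf{x}}_{{\mathbf{k}}} \) is the kernel matrix introduced below.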
The constraints can be expressed as
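\[ \sum_{j=1}^{M} \alpha_{j} = 0 \quad \text{and} \quad y_{j}\alpha_{j} \ge 0, \quad j = 1, \ldots, M. \]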
The hyperplane parameters w, b are recovered as \( {\mathbf{w}} = \sum\nolimits_{j} {\alpha_{j} {\mathbf{x}}_{{\mathbf{j}}} } \) and \( b = y_{j} - {\mathbf{w}} \cdot {\mathbf{x}}_{{\mathbf{j}}} \) for any \( j \) with \( \alpha_{j} \ne 0 \). We then introduce the kernel matrix, a central quantity for supervised machine learning problems, \( K_{j,k} = k({\mathbf{x}}_{{\mathbf{j}}} ,{\mathbf{x}}_{{\mathbf{k}}} ) \). In this paper we show how to prepare the quantum kernel matrix whose kernel function is the inner product \( k\left( {{\mathbf{x}}_{{\mathbf{j}}} ,{\mathbf{x}}_{{\mathbf{k}}} } \right) = {\mathbf{x}}_{{\mathbf{j}}} \cdot {\mathbf{x}}_{{\mathbf{k}}} \) or the Gaussian kernel \( k\left( {{\mathbf{x}}_{{\mathbf{j}}} ,{\mathbf{x}}_{{\mathbf{k}}} } \right) = \exp ( - \left\| {{\mathbf{x}}_{{\mathbf{j}}} - {\mathbf{x}}_{{\mathbf{k}}} } \right\|^{2} /2\sigma^{2} ) \). Solving the resulting quadratic program takes \( {\mathcal{O}}(M^{3}) \) operations, and evaluating each kernel entry takes \( {\mathcal{O}}(N) \) overhead, so the classical support vector machine algorithm runs in time \( {\mathcal{O}}(\log (1/\varepsilon) \,M^{2} (M + N)) \) with accuracy ε. The classification result can be computed as
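\[ y(\mathbf{x}) = \mathrm{sgn}\Big( \sum_{j=1}^{M} \alpha_{j}\, k({\mathbf{x}}_{{\mathbf{j}}}, \mathbf{x}) + b \Big). \]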
3 Quantum Kernel Matrix
3.1 Construct Quantum Inner-Product Kernel Matrix
In the quantum setting, assume there exists a mechanism that encodes the classical data into quantum states:
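\[ \left| {\mathbf{x}}_{{\mathbf{j}}} \right\rangle = \frac{1}{\left\| {\mathbf{x}}_{{\mathbf{j}}} \right\|} \sum_{k=1}^{N} ({\mathbf{x}}_{{\mathbf{j}}})_{k} \left| k \right\rangle . \]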
Here the notation \( \left( {{\mathbf{x}}_{{\mathbf{j}}} } \right)_{k} \) denotes the k-th component of the vector \( {\mathbf{x}}_{{\mathbf{j}}} \). For the quantum mechanical preparation, we utilize the QRAM mechanism to obtain the quantum state
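\[ \left| \chi \right\rangle = \frac{1}{\sqrt{N_{\chi}}} \sum_{i=1}^{M} \left\| {\mathbf{x}}_{{\mathbf{i}}} \right\| \left| i \right\rangle \left| {\mathbf{x}}_{{\mathbf{i}}} \right\rangle , \]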
where the normalization factor is \( N_{\chi} = \sum\nolimits_{i = 1}^{M} {\left\| {{\mathbf{x}}_{{\mathbf{i}}} } \right\|}^{2} \), with \( {\mathcal{O}}(\log MN) \) running time. Discarding the second register, we obtain the quantum inner-product kernel matrix
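\[ \hat{K} = \mathrm{tr}_{2}\!\left\{ \left| \chi \right\rangle \left\langle \chi \right| \right\} = \frac{1}{N_{\chi}} \sum_{i,j=1}^{M} {\mathbf{x}}_{{\mathbf{i}}} \cdot {\mathbf{x}}_{{\mathbf{j}}} \left| i \right\rangle \left\langle j \right| = \frac{K}{\mathrm{tr}(K)} . \]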
3.2 Construct Quantum Gaussian Kernel Matrix
Suppose the data set \( {\mathbf{x}}_{{\mathbf{1}}} , \ldots ,{\mathbf{x}}_{{\mathbf{M}}} \) is stored in a specially designed binary-tree data structure [13]; thus we can assume there exists a pair of oracles \( U_{1}, U_{2} \) such that
Applying the oracle \( U_{1}^{\dag } U_{2} U_{1} \) to the state \( \left| {\chi^{\prime}} \right\rangle = \left| \chi \right\rangle \left| 0 \right\rangle \), the system becomes
Adding an ancilla qubit and performing a controlled rotation, we obtain
Uncomputing the second register by invoking \( U_{2}^{\dag } \), the system finally reaches the desired state
4 Quantum Least-Squares SVM
The main idea of this work is to adopt the least-squares reformulation of the SVM, which circumvents the quadratic programming and obtains the parameters from the solution of a linear equation system. The principal simplification is to introduce slack variables \( e_{j} \) and replace the inequality constraints with equality constraints:
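\[ y_{j} ({\mathbf{w}} \cdot {\mathbf{x}}_{{\mathbf{j}}} + b) = 1 - e_{j} , \quad j = 1, \ldots, M. \]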
Besides the constraints, the implied Lagrange function contains a penalty term \( \frac{\gamma}{2}\sum\nolimits_{i = 1}^{M} e_{i}^{2} \), where the user-specified parameter \( \gamma \) determines the relative weight of the training error and the SVM objective. Taking partial derivatives of the Lagrange function and eliminating the variables \( {\mathbf{w}} \) and \( e_{j} \), the least-squares approximation of the problem is introduced:
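\[ F \begin{pmatrix} b \\ \vec{\alpha} \end{pmatrix} \equiv \begin{pmatrix} 0 & \mathbf{1}^{T} \\ \mathbf{1} & K + \gamma^{-1}\mathbb{1} \end{pmatrix} \begin{pmatrix} b \\ \vec{\alpha} \end{pmatrix} = \begin{pmatrix} 0 \\ \mathbf{y} \end{pmatrix} . \]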
Here \( K \) is the kernel matrix constructed above, \( \mathbf{y} = \left( {y_{1} , \cdots ,y_{M} } \right)^{T} \) denotes the class labels and \( \mathbf{1} = \left( {1, \cdots ,1} \right)^{T} \). The matrix \( F \) is an \( \left( {M + 1} \right) \times \left( {M + 1} \right) \) matrix. Thus the quantum SVM parameters \( \left( {b,\vec{\alpha} } \right) \) are determined by
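\[ \left( b, \vec{\alpha}^{T} \right)^{T} = F^{-1} \left( 0, \mathbf{y}^{T} \right)^{T} . \]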
To invert the matrix \( F \), we now describe the Fourier approach of [12], which is based on an approximation of \( 1/F \) as a linear combination of unitaries \( e^{{ - iFt_{i} }} ,t_{i} \in \mathbb{R} \). These unitaries can be implemented using Hamiltonian simulation methods [7, 14, 15]. Our quantum algorithm builds on the following Fourier expansion of the function \( 1/x \) on the domain \( D_{\kappa} := [-1, -1/\kappa] \cup [1/\kappa, 1] \).
Theorem 1:
Let the function \( h\left( x \right) \) be defined as
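\[ h(x) = \frac{i}{\sqrt{2\pi}} \sum_{j=0}^{J-1} \varDelta_{y} \sum_{k=-K}^{K} \varDelta_{z}\, z_{k}\, e^{-z_{k}^{2}/2}\, e^{-i x y_{j} z_{k}} , \]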
where \( y_{j} : = j\varDelta_{y} \), \( z_{k} : = k\varDelta_{z} \), for some fixed \( J = \varTheta \left( {\frac{\kappa}{\varepsilon }\log \left( {\kappa/\varepsilon } \right)} \right) \), \( K = \varTheta \left( {\kappa\log \left( {\kappa/\varepsilon } \right)} \right) \), \( \varDelta_{y} = \varTheta \left( {\varepsilon /\sqrt {\log \left( {\kappa/\varepsilon } \right)} } \right) \) and \( \varDelta_{z} = \varTheta \left( {\left( {\kappa\sqrt {\log \left( {\kappa/\varepsilon } \right)} } \right)^{ - 1} } \right) \). Then \( h\left( x \right) \) is \( \varepsilon \)-close to \( 1/x \) on the domain \( D_{\kappa} \).
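As a numerical sanity check of Theorem 1 (ours, not part of the algorithm), the truncated double sum can be evaluated directly; the grid constants below are chosen for numerical convenience rather than the exact \( \varTheta(\cdot) \) scalings of the theorem, and reproduce \( 1/x \) on \( D_{\kappa} \) with \( \kappa = 4 \) to within about one percent.

```python
import numpy as np

# Discretized Fourier representation of 1/x (Theorem 1):
#   h(x) = (i / sqrt(2*pi)) * sum_j Dy * sum_k Dz * z_k
#          * exp(-z_k**2 / 2) * exp(-1j * x * y_j * z_k)
# Grid constants chosen for numerical comfort (not the Theta(.) values).
Dy, Dz = 0.05, 0.05
y = Dy * np.arange(0, 320)       # y_j = j * Dy, truncated near y ~ 16
z = Dz * np.arange(-100, 101)    # z_k = k * Dz, truncated at |z| = 5

def h(x):
    """Truncated double sum approximating 1/x for 1/4 <= |x| <= 1."""
    phases = np.exp(-1j * x * np.outer(y, z))   # e^{-i x y_j z_k}
    weights = Dz * z * np.exp(-z ** 2 / 2.0)    # Dz * z_k * e^{-z_k^2/2}
    return ((1j / np.sqrt(2.0 * np.pi)) * Dy * (phases @ weights)).sum()
```

By the antisymmetry of the \( z \)-sum, \( h(x) \) is real up to floating-point error, and it approximates \( 1/x \) for both signs of \( x \).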
Based on the above theorem, the matrix \( F^{ - 1} \) can be expressed as the linear combination of some unitaries:
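\[ F^{-1} \approx \frac{i}{\sqrt{2\pi}} \sum_{j=0}^{J-1} \sum_{k=-K}^{K} \varDelta_{y} \varDelta_{z}\, z_{k}\, e^{-z_{k}^{2}/2}\, e^{-i F y_{j} z_{k}} , \]

a linear combination of the unitaries \( e^{-i F y_{j} z_{k}} \), writing \( c_{jk} = \frac{i}{\sqrt{2\pi}} \varDelta_{y} \varDelta_{z}\, z_{k}\, e^{-z_{k}^{2}/2} \) for the coefficients.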
To implement this expansion, we need the following theorem.
Theorem 2:
Let \( A \) be a Hermitian operator with eigenvalues in a domain \( D\, \subseteq \,\mathbb{R} \). Suppose the function \( f:D \to \mathbb{R} \) satisfies \( \left| {f\left( x \right)} \right| \ge 1 \) for all \( x \in D \), and that \( f \) is \( \varepsilon \)-close to \( \sum\nolimits_{i} {\alpha_{i} T_{i}} \) on \( D \) for some \( \varepsilon \in \left( {0,1/2} \right) \), coefficients \( \alpha_{i} > 0 \), and functions \( T_{i} :D \to \mathbb{C} \). Let \( U_{i} \) be a set of unitaries such that
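\[ U_{i} \left| 0 \right\rangle \left| \varphi \right\rangle = \left| 0 \right\rangle T_{i}(A) \left| \varphi \right\rangle + \left| \varphi^{\perp} \right\rangle \]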
for all states \( \left| \varphi \right\rangle \), where \( \left( {\left| 0 \right\rangle \left\langle 0 \right| \otimes \mathbb{1}} \right)\left| {\varphi^{ \bot } } \right\rangle = 0 \). Given an algorithm for creating a quantum state \( \left| b \right\rangle \), there is a quantum algorithm that prepares a quantum state \( 4\varepsilon \)-close to \( f\left( A \right)\left| b \right\rangle /\left\| {f\left( A \right)\left| b \right\rangle } \right\| \), succeeding with constant probability, and making an expected \( {\mathcal{O}}\left( {\alpha /\left\| {f\left( A \right)\left| b \right\rangle } \right\|} \right) \) uses of the state-preparation algorithm, \( U \), and \( V \), where \( \alpha = \sum\nolimits_{i} \alpha_{i} \).
According to Theorem 2, we introduce a state-preparation unitary and a selection operator corresponding to \( 1/F \) under the Fourier expansion. Suppose there exists a unitary \( V \) that maps the initial state \( \left| 0 \right\rangle \) to the quantum state
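\[ V \left| 0 \right\rangle = \frac{1}{\sqrt{\alpha}} \sum_{j=0}^{J-1} \sum_{k=-K}^{K} \sqrt{ \frac{\varDelta_{y} \varDelta_{z} \left| z_{k} \right| e^{-z_{k}^{2}/2}}{\sqrt{2\pi}} }\; \left| j \right\rangle \left| k \right\rangle , \]

following the linear-combination-of-unitaries construction of [12], with the coefficients of the Fourier expansion above.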
where the parameter \( \alpha \) is the \( L_{1} \) norm of the coefficients of this linear combination
Theorem 2 also requires the unitary
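\[ U = \sum_{j=0}^{J-1} \sum_{k=-K}^{K} \left| j \right\rangle \left\langle j \right| \otimes \left| k \right\rangle \left\langle k \right| \otimes e^{-i F y_{j} z_{k}} , \]

where the phases of the expansion coefficients are absorbed into the controlled evolutions.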
The term \( e^{{ - iFy_{j} z_{k} }} \) can be implemented utilizing the qubitization method [14] in running time \( {\mathcal{O}}\left( y_{j} z_{k} + \log (1/\varepsilon) \right) \).
Thus we can decompose \( 1/F \) in the eigenbasis of \( F \):
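\[ F^{-1} = \sum_{m} \frac{1}{\lambda_{m}} \left| u_{m} \right\rangle \left\langle u_{m} \right| , \]

where \( \lambda_{m} \) and \( \left| u_{m} \right\rangle \) denote the eigenvalues and eigenstates of \( F \).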
Performing \( F^{ - 1} \) on the label-sequence state \( \left| y \right\rangle \), we obtain the SVM parameters. According to the training-set labels, the expansion coefficients of the resulting state are the desired SVM parameters:
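\[ \left| b, \vec{\alpha} \right\rangle = \frac{1}{\sqrt{C}} \Big( b \left| 0 \right\rangle + \sum_{k=1}^{M} \alpha_{k} \left| k \right\rangle \Big), \]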
where \( C = b^{2} + \sum\nolimits_{k = 1}^{M} {\alpha_{k}^{2} } . \)
5 Classification
Here, we have trained the quantum SVM model and now classify a query state \( \left| {\mathbf{x}} \right\rangle \). Starting from the computed state \( \left| {b,\vec{\alpha} } \right\rangle \), we add a register and apply the QRAM mechanism to encode the training data in the entangled state:
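\[ \left| u \right\rangle = \frac{1}{\sqrt{N}} \Big( b \left| 0 \right\rangle \left| 0 \right\rangle + \sum_{k=1}^{M} \alpha_{k} \left\| {\mathbf{x}}_{{\mathbf{k}}} \right\| \left| k \right\rangle \left| {\mathbf{x}}_{{\mathbf{k}}} \right\rangle \Big), \]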
with the normalization factor \( N = b^{2} + \sum\nolimits_{k = 1}^{M} {\alpha_{k}^{2} \left| {\mathbf{x}_{k} } \right|^{2} } \). Then, after constructing the query state \( \left| x \right\rangle \) utilizing the QRAM again, we can finally perform a swap test to carry out the classification. Construct the ancillary state \( \left| \psi \right\rangle = \frac{1}{\sqrt 2 }\left( {\left| 0 \right\rangle \left| u \right\rangle + \left| 1 \right\rangle \left| x \right\rangle } \right) \) and measure the ancilla in the state \( \left| \phi \right\rangle = \frac{1}{\sqrt 2 }(\left| 0 \right\rangle - \left| 1 \right\rangle ) \). The measurement succeeds with probability \( P = \frac{1}{2}(1 - \left\langle u | x \right\rangle ) \). Thus if \( P < 1/2 \), the new vector \( \mathbf{x} \) is assigned to class \( +1 \); otherwise \( -1 \).
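The classical analogue of this training-plus-classification pipeline can be sketched in a few lines. The toy data, the choice \( \gamma = 10 \), and the explicit vector embeddings of \( \left| u \right\rangle \) and the query state are our illustrative assumptions; the decision rule \( P = \frac{1}{2}(1 - \left\langle u | x \right\rangle) < 1/2 \Leftrightarrow \) class \( +1 \) is exactly as above.

```python
import numpy as np

# Toy training set (illustrative): two linearly separable clusters in R^2.
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -1.5], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
M = len(y)
gamma = 10.0  # user-specified penalty parameter

# Least-squares SVM: solve F (b, alpha)^T = (0, y)^T with
# F = [[0, 1^T], [1, K + I/gamma]] and linear kernel K_jk = x_j . x_k.
K = X @ X.T
F = np.zeros((M + 1, M + 1))
F[0, 1:] = 1.0
F[1:, 0] = 1.0
F[1:, 1:] = K + np.eye(M) / gamma
sol = np.linalg.solve(F, np.concatenate(([0.0], y)))
b, alpha = sol[0], sol[1:]

# Normalized embeddings mirroring the quantum states: |u> carries the
# amplitudes (b, alpha_k x_k); the query state carries (1, x, ..., x),
# so their inner product is proportional to b + sum_k alpha_k x_k . x.
def embed_u():
    v = np.concatenate([[b]] + [alpha[k] * X[k] for k in range(M)])
    return v / np.linalg.norm(v)

def embed_query(x):
    v = np.concatenate([[1.0]] + [x] * M)
    return v / np.linalg.norm(v)

def swap_test_classify(x):
    # P = (1 - <u|x>)/2 ; class +1 iff P < 1/2, i.e. <u|x> > 0.
    P = 0.5 * (1.0 - embed_u() @ embed_query(x))
    return 1.0 if P < 0.5 else -1.0
```

With this setup, all four training points are recovered correctly and nearby queries inherit the label of their cluster.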
6 Complexity Analysis
We now show that quantum matrix inversion essentially applies the operators \( e^{-iFt} \), and we analyze the running time of our algorithm. The matrix F contains the kernel matrix K and an additional row and column owing to the offset b. From [14], we know that the gate complexity of simulating the Hamiltonian F for time t with error \( \varepsilon \) is \( {\mathcal{O}}(t + \log (1/\varepsilon)) \). Noting that \( t = y_{j} z_{k} \), F can be efficiently simulated in time \( \varTheta (\kappa \log (\kappa/\varepsilon) + \log (1/\varepsilon)) \). Note that the maximum absolute eigenvalue of \( F/\mathrm{tr}(F) \) is \( \le 1 \) and the minimum absolute eigenvalue is \( \le {\mathcal{O}}(1/M) \), so the condition number \( \kappa \) is \( {\mathcal{O}}(M) \) in this case. Resolving eigenvalues that small would be prohibitively expensive, so, as in [4], eigenvalues below \( \varepsilon \) are filtered out. Considering the relationship \( \varepsilon \le \max \left( \lambda \right) \le 1 \), \( \kappa = {\mathcal{O}}(1/\varepsilon) \) then holds. Taking into account the preparation of the kernel matrix in \( {\mathcal{O}}(\log MN) \), the run time is thus
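\[ {\mathcal{O}}\!\left( \varepsilon^{-1} \log (1/\varepsilon) \log MN \right), \]

in agreement with the bound stated in the introduction.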
Compared to Rebentrost’s algorithm, we achieve an exponentially improved dependence on the precision \( \varepsilon \).
7 Conclusion
In this work, we have shown that the support vector machine can be implemented by quantum mechanics, and that the complexity of the proposed algorithm is logarithmic in the feature size and the amount of training data. Moreover, the presented algorithm improves the dependence on the precision ε. When the training-data kernel matrix is simulated with the optimal Hamiltonian simulation method, the speedup of the quantum algorithm is maximized. Furthermore, our algorithm also avoids the complexity caused by phase estimation and controlled rotation. In summary, the quantum SVM is an important machine learning algorithm that can be efficiently implemented. It also provides advantages in data privacy and could be an important component of quantum neural networks.
References
Yu, C.H., Gao, F., Wen, Q.Y.: An improved quantum algorithm for ridge regression. arXiv preprint arXiv:1707.09524 (2017)
Schuld, M., Sinayskiy, I., Petruccione, F.: Prediction by linear regression on a quantum computer. Phys. Rev. A 94, 022342 (2016)
Wiebe, N., Kapoor, A., Svore, K.M.: Quantum deep learning. arXiv preprint arXiv:1412.3489 (2014)
Rebentrost, P., Mohseni, M., Lloyd, S.: Quantum support vector machine for big data classification. Phys. Rev. Lett. 113(13), 130503 (2014)
Rebentrost, P., Schuld, M., Wossnig, L., Petruccione, F., Lloyd, S.: Quantum gradient descent and Newton’s method for constrained polynomial optimization. arXiv preprint arXiv:1612.01789 (2016)
Lloyd, S., Mohseni, M., Rebentrost, P.: Quantum algorithms for supervised and unsupervised machine learning. arXiv preprint arXiv:1307.0411 (2013)
Lloyd, S., Mohseni, M., Rebentrost, P.: Quantum principal component analysis. Nat. Phys. 10(9), 631–633 (2014)
Harrow, A.W., Hassidim, A., Lloyd, S.: Quantum algorithm for linear systems of equations. Phys. Rev. Lett. 103(15), 150502 (2009)
Biamonte, J., Wittek, P., Pancotti, N., Rebentrost, P., Wiebe, N., Lloyd, S.: Quantum machine learning. Nature 549(7671), 195–202 (2017)
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)
Anguita, D., Ridella, S.: Quantum optimization for training support vector machines. Neural Netw. 16(5–6), 763–770 (2003)
Childs, A.M., Kothari, R., Somma, R.D.: Quantum algorithm for systems of linear equations with exponentially improved dependence on precision. SIAM J. Comput. 46(6), 1920–1950 (2017)
Kerenidis, I., Prakash, A.: Quantum recommendation systems. arXiv preprint arXiv:1603.08675 (2016)
Low, G.H., Chuang, I.L.: Hamiltonian simulation by qubitization. arXiv preprint arXiv:1610.06546 (2016)
Low, G.H., Chuang, I.L.: Optimal Hamiltonian simulation by quantum signal processing. Phys. Rev. Lett. 118(1), 010501 (2017)
Yu, C.H., et al.: Quantum algorithm for association rules mining. Phys. Rev. A 94(4), 042311 (2016)
Yu, C.H., et al.: Quantum algorithm for visual tracking. Phys. Rev. A 99(2), 022301 (2019)
August, M., Ni, X.: Using recurrent neural networks to optimize dynamical decoupling for quantum memory. Preprint at https://arxiv.org/abs/1604.00279 (2016)
Amstrup, B., Toth, G.J., Szabo, G., Rabitz, H., Loerincz, A.: Genetic algorithm with migration on topology conserving maps for optimal control of quantum systems. J. Phys. Chem. 99, 5206–5213 (1995)
Hentschel, A., Sanders, B.C.: Machine learning for precise quantum measurement. Phys. Rev. Lett. 104, 063603 (2010)
Lovett, N.B., Crosnier, C., Perarnau-Llobet, M., Sanders, B.C.: Differential evolution for many-particle adaptive quantum metrology. Phys. Rev. Lett. 110, 220501 (2013)
Palittapongarnpim, P., Wittek, P., Zahedinejad, E., Vedaie, S., Sanders, B.C.: Learning in quantum control: high-dimensional global optimization for noisy quantum dynamics. Neurocomputing (in press). https://doi.org/10.1016/j.neucom.2016.12.087
Carrasquilla, J., Melko, R.G.: Machine learning phases of matter. Nat. Phys. 13, 431–434 (2017)
Broecker, P., Carrasquilla, J., Melko, R.G., Trebst, S.: Machine learning quantum phases of matter beyond the fermion sign problem. Preprint at https://arxiv.org/abs/1608.07848 (2016)
Carleo, G., Troyer, M.: Solving the quantum many-body problem with artificial neural networks. Science 355, 602–606 (2017)
Brunner, D., Soriano, M.C., Mirasso, C.R., Fischer, I.: Parallel photonic information processing at gigabyte per second data rates using transient states. Nat. Commun. 4, 1364 (2013)
Cai, X.-D., et al.: Entanglement-based machine learning on a quantum computer. Phys. Rev. Lett. 114, 110504 (2015)
Hermans, M., Soriano, M.C., Dambre, J., Bienstman, P., Fischer, I.: Photonic delay systems as machine learning implementations. J. Mach. Learn. Res. 16, 2081–2097 (2015)
Tezak, N., Mabuchi, H.: A coherent perceptron for all-optical learning. EPJ Quantum Technol. 2, 10 (2015)
Neigovzen, R., Neves, J.L., Sollacher, R., Glaser, S.J.: Quantum pattern recognition with liquid-state nuclear magnetic resonance. Phys. Rev. A 79, 042321 (2009)
Pons, M., et al.: Trapped ion chain as a neural network: error resistant quantum computation. Phys. Rev. Lett. 98, 023003 (2007)
Chen, J., et al.: Binary image steganalysis based on distortion level co-occurrence matrix. CMC: Comput. Mater. Continua 055(2), 201–211 (2018)
Xiong, Z., Shen, Q., Wang, Y., Zhu, C.: Paragraph vector representation based on word to vector and CNN learning. CMC: Comput. Mater. Continua 055(2), 213–227 (2018)
Acknowledgments
This paper is supported by the Development of Power Quantum Security Chip project.
© 2019 Springer Nature Switzerland AG
Feng, X. et al. (2019). Quantum Algorithm for Support Vector Machine with Exponentially Improved Dependence on Precision. In: Sun, X., Pan, Z., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2019. Lecture Notes in Computer Science(), vol 11635. Springer, Cham. https://doi.org/10.1007/978-3-030-24268-8_53