Abstract:
In machine learning (ML) inference, two parties, Alice and Bob, engage in a transaction in which Alice owns a decision tree model but does not want to reveal its parameters to Bob, who holds private data. Bob wants to use Alice's model for inference but does not want to reveal his data. Given the heavy computational cost of fully homomorphic encryption, Alice and Bob agree to use order-preserving encryption (OPE) to run the inference engine in full confidentiality, revealing neither the decision tree model nor the private data. In this paper, we describe how such an OPE computation is established between Alice and Bob. Specifically, we demonstrate how to ensure full confidentiality in decision tree inference on an FPGA accelerator embodying an OPE protocol. A finite-state machine design of the encrypted decision tree is evaluated for throughput and resource utilization on an Intel Cyclone V FPGA using the MNIST dataset.
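The key property the abstract relies on is that OPE preserves the ordering of plaintexts, so the threshold comparisons at each decision tree node give the same branch decisions on ciphertexts as on plaintexts. A minimal Python sketch of this idea follows; the random-table "encryption", the toy tree, and Bob's feature values are all hypothetical illustrations, not the paper's actual OPE scheme or FPGA design, and this toy mapping is not cryptographically secure.

```python
import random

def gen_ope_key(domain_size, expansion=4, seed=0):
    """Toy order-preserving 'encryption': a random strictly increasing
    mapping from the plaintext domain [0, domain_size) into a larger
    ciphertext range. Illustration only -- not a secure OPE scheme."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(domain_size * expansion), domain_size))

def ope_encrypt(key, x):
    # Encryption is a table lookup; order of ciphertexts matches plaintexts.
    return key[x]

# Alice's (hypothetical) decision tree: internal nodes hold a feature
# index and a threshold, leaves hold a class label.
tree = {
    "feat": 0, "thr": 120,
    "left": {"label": "dark"},
    "right": {"feat": 1, "thr": 60,
              "left": {"label": "mid"},
              "right": {"label": "bright"}},
}

def encrypt_tree(key, node):
    """Alice encrypts every threshold under the shared OPE key."""
    if "label" in node:
        return dict(node)
    return {"feat": node["feat"],
            "thr": ope_encrypt(key, node["thr"]),
            "left": encrypt_tree(key, node["left"]),
            "right": encrypt_tree(key, node["right"])}

def infer(node, features):
    """Traverse the tree; works identically on plaintexts or on
    OPE ciphertexts, because comparisons are order-preserved."""
    while "label" not in node:
        node = node["left"] if features[node["feat"]] <= node["thr"] else node["right"]
    return node["label"]

key = gen_ope_key(256)           # e.g. 8-bit pixel intensities, as in MNIST
enc_tree = encrypt_tree(key, tree)
bob_features = [200, 30]         # Bob's private data
enc_features = [ope_encrypt(key, v) for v in bob_features]
print(infer(enc_tree, enc_features))
```

Running inference on the encrypted tree with encrypted features yields the same label as plaintext inference, which is exactly why the comparisons in a finite-state-machine tree traversal can be carried out directly on ciphertexts in hardware.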
Published in: 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)
Date of Conference: 11-13 June 2023
Date Added to IEEE Xplore: 07 July 2023