Abstract:
Outsourcing inference enables users to delegate neural network inference tasks to a service provider (e.g., a remote server). This paradigm brings enormous convenience and effectively alleviates users' resource constraints, especially for mobile devices. However, it still faces two challenges: (1) the user's input data and inference results contain a large amount of private information, which should not be disclosed; (2) the server in this setting may be malicious and deviate from the inference procedure. For example, the server may use a low-quality model to reduce costs or return wrong inference results. While several privacy-preserving inference schemes have been proposed, they cannot solve both problems at the same time. In this work, we propose PPVI, a secure and verifiable outsourcing inference scheme against malicious service providers. PPVI designs a hybrid check technique for inference integrity verification and employs leveled homomorphic encryption to protect users' privacy. Together, these ingredients make it possible to simultaneously protect users' privacy and verify inference correctness in outsourcing inference. Extensive experimental results demonstrate that our scheme achieves excellent performance in terms of verification accuracy as well as communication and computational overhead.
Date of Conference: 04-08 December 2023
Date Added to IEEE Xplore: 26 February 2024