Frozen CLIP Transformer Is an Efficient Point Cloud Encoder

Authors

  • Xiaoshui Huang, Shanghai AI Laboratory
  • Zhou Huang, Jiangxi University of Finance and Economics
  • Sheng Li, University of Electronic Science and Technology of China
  • Wentao Qu, Nanjing University of Science and Technology
  • Tong He, Shanghai AI Laboratory
  • Yuenan Hou, Shanghai AI Laboratory
  • Yifan Zuo, Jiangxi University of Finance and Economics
  • Wanli Ouyang, Shanghai AI Laboratory

DOI:

https://doi.org/10.1609/aaai.v38i3.28013

Keywords:

CV: 3D Computer Vision, CV: Large Vision Models, CV: Multi-modal Vision

Abstract

The pretrain-finetune paradigm has achieved great success in NLP and 2D vision owing to the high-quality representations and transferability of pretrained models. However, pretraining such a strong model is difficult in the 3D point cloud field because of the limited amount of point cloud data. This paper introduces Efficient Point Cloud Learning (EPCL), an effective and efficient point cloud learner that directly trains high-quality point cloud models with a frozen CLIP transformer. EPCL connects the 2D and 3D modalities by semantically aligning image features and point cloud features without paired 2D-3D data. Specifically, the input point cloud is divided into a sequence of local patches, which the designed point cloud tokenizer converts into token embeddings. These token embeddings are concatenated with a task token and fed into the frozen CLIP transformer to learn the point cloud representation. The intuition is that the proposed point cloud tokenizer projects the input point cloud into a unified token space similar to that of 2D images. Comprehensive experiments on 3D detection, semantic segmentation, classification, and few-shot learning demonstrate that the frozen CLIP transformer can serve as an efficient point cloud encoder, and our method achieves promising performance on both indoor and outdoor benchmarks. In particular, EPCL improves over contemporary pretrained models by 19.7 AP50 on ScanNet V2 detection, 4.4 mIoU on S3DIS segmentation, and 1.2 mIoU on SemanticKITTI segmentation. Code is available at https://github.com/XiaoshuiHuang/EPCL.
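The pipeline described in the abstract (patchify, tokenize, prepend a task token, encode with a frozen transformer) can be sketched in pure Python. This is a shape-level illustration only: the grouping, tokenizer, and encoder below are hypothetical stand-ins, not the paper's actual FPS/k-NN grouping, learned tokenizer, or CLIP weights.

```python
import math
import random

def make_patches(points, num_patches, patch_size):
    """Group a point cloud (list of (x, y, z)) into local patches.
    Stand-in for the paper's sampling + neighborhood grouping:
    here we just take the nearest points to random centers."""
    centers = random.sample(points, num_patches)
    return [sorted(points, key=lambda p: math.dist(p, c))[:patch_size]
            for c in centers]

def tokenize(patch, dim):
    """Toy tokenizer: mean-pool patch coordinates and tile to width `dim`.
    The paper's tokenizer is a learned network; this only mimics shapes."""
    mean = [sum(axis) / len(patch) for axis in zip(*patch)]
    return [mean[i % 3] for i in range(dim)]

def frozen_clip_encoder(tokens):
    """Stub for the frozen CLIP transformer: identity mapping here,
    standing in for fixed pretrained attention blocks."""
    return tokens

# Pipeline: point cloud -> patches -> tokens -> [task token] + tokens -> encoder
dim = 8
cloud = [(random.random(), random.random(), random.random()) for _ in range(256)]
patches = make_patches(cloud, num_patches=16, patch_size=32)
tokens = [tokenize(p, dim) for p in patches]
task_token = [0.0] * dim           # learnable in the paper; a constant here
sequence = [task_token] + tokens   # 1 + 16 tokens, each of width `dim`
features = frozen_clip_encoder(sequence)
```

The key design point the sketch reflects is that only the tokenizer (and task token) would be trained; the transformer itself stays frozen, so the point cloud tokens must land in a space the 2D-pretrained model already understands.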

Published

2024-03-24

How to Cite

Huang, X., Huang, Z., Li, S., Qu, W., He, T., Hou, Y., Zuo, Y., & Ouyang, W. (2024). Frozen CLIP Transformer Is an Efficient Point Cloud Encoder. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 2382-2390. https://doi.org/10.1609/aaai.v38i3.28013

Section

AAAI Technical Track on Computer Vision II