Towards End-to-End Image Compression and Analysis with Transformers

Authors

  • Yuanchao Bai (Harbin Institute of Technology; Peng Cheng Laboratory)
  • Xu Yang (Harbin Institute of Technology)
  • Xianming Liu (Harbin Institute of Technology; Peng Cheng Laboratory)
  • Junjun Jiang (Harbin Institute of Technology; Peng Cheng Laboratory)
  • Yaowei Wang (Peng Cheng Laboratory)
  • Xiangyang Ji (Tsinghua University)
  • Wen Gao (Peng Cheng Laboratory; Peking University)

DOI:

https://doi.org/10.1609/aaai.v36i1.19884

Keywords:

Computer Vision (CV)

Abstract

We propose an end-to-end image compression and analysis model with Transformers, targeting cloud-based image classification applications. Instead of placing an existing Transformer-based image classification model directly after an image codec, we redesign the Vision Transformer (ViT) model to perform image classification from the compressed features and to facilitate image compression with the long-term information from the Transformer. Specifically, we first replace the patchify stem (i.e., image splitting and embedding) of the ViT model with a lightweight image encoder modelled by a convolutional neural network. The compressed features generated by the image encoder are injected with a convolutional inductive bias and fed to the Transformer for image classification, bypassing image reconstruction. Meanwhile, we propose a feature aggregation module that fuses the compressed features with selected intermediate features of the Transformer, and feeds the aggregated features to a deconvolutional neural network for image reconstruction. The aggregated features obtain long-term information from the self-attention mechanism of the Transformer, which improves the compression performance. The resulting rate-distortion-accuracy optimization problem is solved by a two-step training strategy. Experimental results demonstrate the effectiveness of the proposed model on both the image compression and the classification tasks.
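The data flow described in the abstract (convolutional encoder in place of the patchify stem, Transformer features for classification, aggregation of compressed and intermediate features for reconstruction) can be sketched in shape-level pseudocode. The sketch below is a toy NumPy illustration of that pipeline, not the paper's implementation: all dimensions (256×256 input, stride-16 downsampling, 192 channels), the pooling-based encoder, the single-head attention blocks, the averaging fusion, and the nearest-neighbour upsampling decoder are simplifying assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper): 256x256 RGB input,
# a stride-16 convolutional encoder, d = 192 feature channels.
H = W = 256
d = 192
h = w = H // 16  # spatial size of the compressed features

def conv_encoder(img):
    """Stand-in for the lightweight CNN encoder replacing the patchify
    stem: stride-16 average pooling plus a random linear projection."""
    pooled = img.reshape(3, h, 16, w, 16).mean(axis=(2, 4))  # (3, h, w)
    Wp = rng.standard_normal((d, 3)) * 0.02
    return np.einsum("dc,chw->dhw", Wp, pooled)  # (d, h, w)

def transformer_blocks(tokens, n_blocks=4):
    """Placeholder Transformer: toy single-head self-attention blocks
    that also return every intermediate feature map, so selected ones
    can be aggregated for reconstruction later."""
    feats, x = [], tokens
    for _ in range(n_blocks):
        q = k = v = x
        attn = np.exp(q @ k.T / np.sqrt(d))
        attn /= attn.sum(axis=-1, keepdims=True)
        x = x + attn @ v  # residual self-attention update
        feats.append(x)
    return x, feats

def aggregate(compressed, feats, select=(1, 3)):
    """Simplified feature aggregation: average the compressed features
    with the selected intermediate Transformer features."""
    stacked = [compressed.reshape(d, -1).T] + [feats[i] for i in select]
    return np.mean(stacked, axis=0)  # (h*w, d)

def deconv_decoder(agg):
    """Stand-in for the deconvolutional decoder: project back to 3
    channels and nearest-neighbour upsample to the input resolution."""
    Wr = rng.standard_normal((3, d)) * 0.02
    low = np.einsum("cd,nd->cn", Wr, agg).reshape(3, h, w)
    return low.repeat(16, axis=1).repeat(16, axis=2)  # (3, H, W)

img = rng.standard_normal((3, H, W))
z = conv_encoder(img)                         # compressed features
tokens = z.reshape(d, -1).T                   # (h*w, d) token sequence
cls_feat, feats = transformer_blocks(tokens)
logits = cls_feat.mean(axis=0)                # pooled features for classification
recon = deconv_decoder(aggregate(z, feats))   # reconstruction branch
print(z.shape, logits.shape, recon.shape)
```

Note how classification consumes the token sequence directly, without ever reconstructing the image, while the reconstruction branch reuses both the compressed features and the Transformer's intermediate (long-range) features, mirroring the two heads of the proposed model.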

Published

2022-06-28

How to Cite

Bai, Y., Yang, X., Liu, X., Jiang, J., Wang, Y., Ji, X., & Gao, W. (2022). Towards End-to-End Image Compression and Analysis with Transformers. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 104-112. https://doi.org/10.1609/aaai.v36i1.19884

Section

AAAI Technical Track on Computer Vision I