A Hybrid Quantum-Classical Approach based on the Hadamard Transform for the Convolutional Layer

Hongyi Pan, Xin Zhu, Salih Furkan Atici, Ahmet Cetin
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:26891-26903, 2023.

Abstract

In this paper, we propose a novel Hadamard Transform (HT)-based neural network layer for hybrid quantum-classical computing. It implements the regular convolutional layers in the Hadamard transform domain. The idea is based on the HT convolution theorem, which states that the dyadic convolution between two vectors is equivalent to the element-wise multiplication of their HT representations. Computing the HT is simply the application of a Hadamard gate to each qubit individually, so the HT computations of our proposed layer can be implemented on a quantum computer. Compared to the regular Conv2D layer, the proposed HT-perceptron layer is computationally more efficient. Compared to a CNN with the same number of trainable parameters and 99.26% test accuracy, our HT network reaches 99.31% test accuracy with 57.1% fewer MACs on the MNIST dataset; and in our ImageNet-1K experiments, our HT-based ResNet-50 exceeds the accuracy of the baseline ResNet-50 by 0.59% in center-crop top-1 accuracy, using 11.5% fewer parameters and 12.6% fewer MACs.
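The HT convolution theorem the abstract relies on can be checked numerically in a few lines. The sketch below (an illustration of the theorem itself, not the authors' HT-perceptron layer; the helper `dyadic_conv` is a hypothetical name) compares a direct XOR-indexed dyadic convolution against element-wise multiplication in the Hadamard domain, using the unnormalized Sylvester-Hadamard matrix from SciPy:

```python
import numpy as np
from scipy.linalg import hadamard

def dyadic_conv(x, y):
    """Direct dyadic (XOR) convolution: z[n] = sum_m x[m] * y[n ^ m]."""
    N = len(x)
    z = np.zeros(N)
    for n in range(N):
        for m in range(N):
            z[n] += x[m] * y[n ^ m]
    return z

N = 8  # must be a power of two for the Hadamard matrix
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
y = rng.standard_normal(N)

H = hadamard(N)  # Sylvester-Hadamard matrix with +-1 entries; H @ H == N * I

# HT convolution theorem: HT(x dyadic-conv y) = HT(x) * HT(y) element-wise,
# so the convolution is recovered via the inverse transform H / N.
z_ht = H @ ((H @ x) * (H @ y)) / N
z_direct = dyadic_conv(x, y)

print(np.allclose(z_ht, z_direct))  # the two results agree
```

With the unnormalized transform, the element-wise product in the HT domain matches the direct dyadic convolution exactly (up to the 1/N factor of the inverse transform), which is what lets a convolution-like layer be replaced by cheap pointwise multiplications between a forward and an inverse HT.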

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-pan23d,
  title     = {A Hybrid Quantum-Classical Approach based on the Hadamard Transform for the Convolutional Layer},
  author    = {Pan, Hongyi and Zhu, Xin and Atici, Salih Furkan and Cetin, Ahmet},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {26891--26903},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/pan23d/pan23d.pdf},
  url       = {https://proceedings.mlr.press/v202/pan23d.html},
  abstract  = {In this paper, we propose a novel Hadamard Transform (HT)-based neural network layer for hybrid quantum-classical computing. It implements the regular convolutional layers in the Hadamard transform domain. The idea is based on the HT convolution theorem which states that the dyadic convolution between two vectors is equivalent to the element-wise multiplication of their HT representation. Computing the HT is simply the application of a Hadamard gate to each qubit individually, so the HT computations of our proposed layer can be implemented on a quantum computer. Compared to the regular Conv2D layer, the proposed HT-perceptron layer is computationally more efficient. Compared to a CNN with the same number of trainable parameters and 99.26% test accuracy, our HT network reaches 99.31% test accuracy with 57.1% MACs reduced in the MNIST dataset; and in our ImageNet-1K experiments, our HT-based ResNet-50 exceeds the accuracy of the baseline ResNet-50 by 0.59% center-crop top-1 accuracy using 11.5% fewer parameters with 12.6% fewer MACs.}
}
Endnote
%0 Conference Paper
%T A Hybrid Quantum-Classical Approach based on the Hadamard Transform for the Convolutional Layer
%A Hongyi Pan
%A Xin Zhu
%A Salih Furkan Atici
%A Ahmet Cetin
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-pan23d
%I PMLR
%P 26891--26903
%U https://proceedings.mlr.press/v202/pan23d.html
%V 202
%X In this paper, we propose a novel Hadamard Transform (HT)-based neural network layer for hybrid quantum-classical computing. It implements the regular convolutional layers in the Hadamard transform domain. The idea is based on the HT convolution theorem which states that the dyadic convolution between two vectors is equivalent to the element-wise multiplication of their HT representation. Computing the HT is simply the application of a Hadamard gate to each qubit individually, so the HT computations of our proposed layer can be implemented on a quantum computer. Compared to the regular Conv2D layer, the proposed HT-perceptron layer is computationally more efficient. Compared to a CNN with the same number of trainable parameters and 99.26% test accuracy, our HT network reaches 99.31% test accuracy with 57.1% MACs reduced in the MNIST dataset; and in our ImageNet-1K experiments, our HT-based ResNet-50 exceeds the accuracy of the baseline ResNet-50 by 0.59% center-crop top-1 accuracy using 11.5% fewer parameters with 12.6% fewer MACs.
APA
Pan, H., Zhu, X., Atici, S.F., & Cetin, A. (2023). A Hybrid Quantum-Classical Approach based on the Hadamard Transform for the Convolutional Layer. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:26891-26903. Available from https://proceedings.mlr.press/v202/pan23d.html.