Dimension Mixer: Group Mixing of Input Dimensions for Efficient Function Approximation

Suman Sapkota, Binod Bhattarai
Conference on Parsimony and Learning, PMLR 280:373-391, 2025.

Abstract

The recent success of multiple neural architectures such as CNNs, Transformers, and MLP-Mixers motivates us to look for similarities and differences between them. We find that these architectures can be interpreted through the lens of a general concept of dimension mixing. Research on coupling flows, ShuffleNet, and the Butterfly Transform shows that partial and hierarchical signal mixing schemes are sufficient for efficient and expressive function approximation. In this work, we study group-wise sparse, non-linear, multi-layered, and learnable mixing schemes of inputs and find that they are complementary to many standard neural architectures. Following these observations and drawing inspiration from the Fast Fourier Transform, we generalize the Butterfly Structure to use a non-linear mixing function, allowing an MLP as the mixer; we call this Butterfly MLP. We also mix sparsely along the sequence dimension of Transformer-based architectures, yielding Butterfly Attention. Experiments on the CIFAR and LRA datasets demonstrate that the proposed Non-Linear Butterfly Mixers are efficient and scale well when the host architecture is used as the mixing function. We further devise datasets of increasing complexity toward solving the Pathfinder-X task. Additionally, we propose a Patch-Only MLP-Mixer for processing spatial 2D signals, demonstrating a different dimension mixing strategy.
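To make the butterfly-style group mixing described in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation: the class name ButterflyMLPMixing, the radix and hidden parameters, and the residual connection are illustrative assumptions. Each layer groups features at FFT-butterfly strides and mixes every group of `radix` features with a shared small MLP, so all input dimensions interact after log_radix(D) sparse layers.

```python
# Minimal sketch (assumed, not the authors' code) of butterfly-style group
# mixing with a small MLP as the mixing function, for input dim D = radix**depth.
import torch
import torch.nn as nn

class ButterflyMLPMixing(nn.Module):
    """Mix D input features over log_radix(D) sparse layers.

    At layer l, features are grouped with stride radix**l (the FFT butterfly
    pattern) and each group of size `radix` is mixed by a shared small MLP.
    """
    def __init__(self, dim, radix=2, hidden=16):
        super().__init__()
        self.dim, self.radix = dim, radix
        self.depth, d = 0, dim
        while d > 1:
            assert d % radix == 0, "dim must be a power of radix"
            d //= radix
            self.depth += 1
        # one small MLP per butterfly layer, shared across all groups in that layer
        self.mixers = nn.ModuleList(
            nn.Sequential(nn.Linear(radix, hidden), nn.GELU(), nn.Linear(hidden, radix))
            for _ in range(self.depth)
        )

    def forward(self, x):                              # x: (batch, dim)
        b = x.shape[0]
        for l, mlp in enumerate(self.mixers):
            stride = self.radix ** l
            # view groups of `radix` features spaced `stride` apart (butterfly pattern)
            x = x.view(b, -1, self.radix, stride)      # (b, dim/(radix*stride), radix, stride)
            x = x.transpose(2, 3)                      # move the radix axis last for the MLP
            x = x + mlp(x)                             # residual group-wise non-linear mixing
            x = x.transpose(2, 3).reshape(b, -1)
        return x

if __name__ == "__main__":
    layer = ButterflyMLPMixing(dim=16, radix=2)
    print(layer(torch.randn(4, 16)).shape)             # torch.Size([4, 16])
```

The same grouping pattern could, in principle, drive which blocks of tokens attend to each other along the sequence dimension, which is the idea behind the Butterfly Attention variant named in the abstract.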

Cite this Paper

BibTeX
@InProceedings{pmlr-v280-sapkota25a, title = {Dimension Mixer: Group Mixing of Input Dimensions for Efficient Function Approximation}, author = {Sapkota, Suman and Bhattarai, Binod}, booktitle = {Conference on Parsimony and Learning}, pages = {373--391}, year = {2025}, editor = {Chen, Beidi and Liu, Shijia and Pilanci, Mert and Su, Weijie and Sulam, Jeremias and Wang, Yuxiang and Zhu, Zhihui}, volume = {280}, series = {Proceedings of Machine Learning Research}, month = {24--27 Mar}, publisher = {PMLR}, pdf = {https://raw.githubusercontent.com/mlresearch/v280/main/assets/sapkota25a/sapkota25a.pdf}, url = {https://proceedings.mlr.press/v280/sapkota25a.html}, abstract = {The recent success of multiple neural architectures like CNNs, Transformers, and MLP-Mixers motivates us to look for similarities and differences between them. We find that these architectures can be interpreted through the lens of a general concept of dimension mixing. Research on coupling flows, shufflenet and the butterfly transform shows that partial and hierarchical signal mixing schemes are sufficient for efficient and expressive function approximation. In this work, we study group-wise sparse, non-linear, multi-layered and learnable mixing schemes of inputs and find that they are complementary to many standard neural architectures. Following our observations and drawing inspiration from the Fast Fourier Transform, we generalize Butterfly Structure to use non-linear mixer function allowing for MLP as mixing function called Butterfly MLP. We are also able to sparsely mix along sequence dimension for Transformer-based architectures called Butterfly Attention. Experiments on CIFAR and LRA datasets demonstrate that the proposed Non-Linear Butterfly Mixers are efficient and scale well when the host architectures are used as mixing function. We devise datasets with increasing complexity to solve Pathfinder-X task. Additionally, we propose Patch-Only MLP-Mixer for processing spatial 2D signals demonstrating a different dimension mixing strategy.} }
Endnote
%0 Conference Paper %T Dimension Mixer: Group Mixing of Input Dimensions for Efficient Function Approximation %A Suman Sapkota %A Binod Bhattarai %B Conference on Parsimony and Learning %C Proceedings of Machine Learning Research %D 2025 %E Beidi Chen %E Shijia Liu %E Mert Pilanci %E Weijie Su %E Jeremias Sulam %E Yuxiang Wang %E Zhihui Zhu %F pmlr-v280-sapkota25a %I PMLR %P 373--391 %U https://proceedings.mlr.press/v280/sapkota25a.html %V 280 %X The recent success of multiple neural architectures like CNNs, Transformers, and MLP-Mixers motivates us to look for similarities and differences between them. We find that these architectures can be interpreted through the lens of a general concept of dimension mixing. Research on coupling flows, shufflenet and the butterfly transform shows that partial and hierarchical signal mixing schemes are sufficient for efficient and expressive function approximation. In this work, we study group-wise sparse, non-linear, multi-layered and learnable mixing schemes of inputs and find that they are complementary to many standard neural architectures. Following our observations and drawing inspiration from the Fast Fourier Transform, we generalize Butterfly Structure to use non-linear mixer function allowing for MLP as mixing function called Butterfly MLP. We are also able to sparsely mix along sequence dimension for Transformer-based architectures called Butterfly Attention. Experiments on CIFAR and LRA datasets demonstrate that the proposed Non-Linear Butterfly Mixers are efficient and scale well when the host architectures are used as mixing function. We devise datasets with increasing complexity to solve Pathfinder-X task. Additionally, we propose Patch-Only MLP-Mixer for processing spatial 2D signals demonstrating a different dimension mixing strategy.
APA
Sapkota, S. & Bhattarai, B.. (2025). Dimension Mixer: Group Mixing of Input Dimensions for Efficient Function Approximation. Conference on Parsimony and Learning, in Proceedings of Machine Learning Research 280:373-391 Available from https://proceedings.mlr.press/v280/sapkota25a.html.
