Intractability of Learning the Discrete Logarithm with Gradient-Based Methods

Rustem Takhanov, Maxat Tezekbayev, Artur Pak, Arman Bolatov, Zhibek Kadyrsizova, Zhenisbek Assylbekov
Proceedings of the 15th Asian Conference on Machine Learning, PMLR 222:1321-1336, 2024.

Abstract

The discrete logarithm problem is a fundamental challenge in number theory with significant implications for cryptographic protocols. In this paper, we investigate the limitations of gradient-based methods for learning the parity bit of the discrete logarithm in finite cyclic groups of prime order. Our main result, supported by theoretical analysis and empirical verification, reveals that the gradient of the loss function concentrates around a fixed point that does not depend on the base of the logarithm. This concentration property severely restricts the ability of gradient-based methods to learn the parity bit efficiently, irrespective of the complexity of the network architecture being trained. Our proof relies on the Boas–Bellman inequality in inner product spaces and establishes the approximate orthogonality of the discrete logarithm’s parity-bit functions via the spectral norm of certain matrices. Empirical experiments using a neural network-based approach further verify the limitations of gradient-based learning, demonstrating a decreasing success rate in predicting the parity bit as the group order increases.
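
For concreteness, the sketch below (our own illustration, not the authors' code) constructs the learning target on a toy instance: the subgroup of quadratic residues modulo the safe prime p = 23, which is cyclic of prime order q = 11. It computes the parity bit of the discrete logarithm by brute force and checks the near-orthogonality of the parity-bit functions for two different bases, the property at the heart of the gradient-concentration argument; the choice of group and all names are our own.

# A minimal sketch (not the authors' code): the learning target is the
# parity bit of the discrete logarithm in a cyclic group of prime order.
# We use the quadratic residues modulo the safe prime p = 2q + 1 = 23,
# which form a cyclic group of prime order q = 11.

p, q = 23, 11
group = sorted({pow(x, 2, p) for x in range(1, p)})  # quadratic residues mod p
assert len(group) == q                               # cyclic of prime order q

def dlog(g, x):
    """Brute-force discrete logarithm of x to base g (fine for tiny groups)."""
    h, k = 1, 0
    while h != x:
        h, k = (h * g) % p, k + 1
    return k

def parity_bit(g, x):
    """The label studied in the paper: the parity of log_g(x)."""
    return dlog(g, x) % 2

# Approximate orthogonality of parity-bit functions for two different bases:
# encode bit b as (-1)**b and take the normalized inner product over the group.
# Any non-identity element generates a group of prime order, so 2 and 3
# (both quadratic residues mod 23) are valid bases.
g1, g2 = 2, 3
corr = sum((-1) ** parity_bit(g1, x) * (-1) ** parity_bit(g2, x)
           for x in group) / q
print(f"correlation of parity bits for bases {g1}, {g2}: {corr:+.3f}")

On this toy group the correlation is already small in magnitude; the paper’s spectral-norm argument quantifies how this near-orthogonality across bases forces the loss gradient to concentrate, and hence degrades gradient-based learning, as the group order grows.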

Cite this Paper


BibTeX
@InProceedings{pmlr-v222-takhanov24a,
  title     = {Intractability of Learning the Discrete Logarithm with Gradient-Based Methods},
  author    = {Takhanov, Rustem and Tezekbayev, Maxat and Pak, Artur and Bolatov, Arman and Kadyrsizova, Zhibek and Assylbekov, Zhenisbek},
  booktitle = {Proceedings of the 15th Asian Conference on Machine Learning},
  pages     = {1321--1336},
  year      = {2024},
  editor    = {Yanıkoğlu, Berrin and Buntine, Wray},
  volume    = {222},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v222/takhanov24a/takhanov24a.pdf},
  url       = {https://proceedings.mlr.press/v222/takhanov24a.html},
  abstract  = {The discrete logarithm problem is a fundamental challenge in number theory with significant implications for cryptographic protocols. In this paper, we investigate the limitations of gradient-based methods for learning the parity bit of the discrete logarithm in finite cyclic groups of prime order. Our main result, supported by theoretical analysis and empirical verification, reveals that the gradient of the loss function concentrates around a fixed point that does not depend on the base of the logarithm. This concentration property severely restricts the ability of gradient-based methods to learn the parity bit efficiently, irrespective of the complexity of the network architecture being trained. Our proof relies on the Boas–Bellman inequality in inner product spaces and establishes the approximate orthogonality of the discrete logarithm’s parity-bit functions via the spectral norm of certain matrices. Empirical experiments using a neural network-based approach further verify the limitations of gradient-based learning, demonstrating a decreasing success rate in predicting the parity bit as the group order increases.}
}
Endnote
%0 Conference Paper
%T Intractability of Learning the Discrete Logarithm with Gradient-Based Methods
%A Rustem Takhanov
%A Maxat Tezekbayev
%A Artur Pak
%A Arman Bolatov
%A Zhibek Kadyrsizova
%A Zhenisbek Assylbekov
%B Proceedings of the 15th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Berrin Yanıkoğlu
%E Wray Buntine
%F pmlr-v222-takhanov24a
%I PMLR
%P 1321--1336
%U https://proceedings.mlr.press/v222/takhanov24a.html
%V 222
%X The discrete logarithm problem is a fundamental challenge in number theory with significant implications for cryptographic protocols. In this paper, we investigate the limitations of gradient-based methods for learning the parity bit of the discrete logarithm in finite cyclic groups of prime order. Our main result, supported by theoretical analysis and empirical verification, reveals that the gradient of the loss function concentrates around a fixed point that does not depend on the base of the logarithm. This concentration property severely restricts the ability of gradient-based methods to learn the parity bit efficiently, irrespective of the complexity of the network architecture being trained. Our proof relies on the Boas–Bellman inequality in inner product spaces and establishes the approximate orthogonality of the discrete logarithm’s parity-bit functions via the spectral norm of certain matrices. Empirical experiments using a neural network-based approach further verify the limitations of gradient-based learning, demonstrating a decreasing success rate in predicting the parity bit as the group order increases.
APA
Takhanov, R., Tezekbayev, M., Pak, A., Bolatov, A., Kadyrsizova, Z. & Assylbekov, Z. (2024). Intractability of Learning the Discrete Logarithm with Gradient-Based Methods. Proceedings of the 15th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 222:1321-1336. Available from https://proceedings.mlr.press/v222/takhanov24a.html.
