$\beta$-th order Acyclicity Derivatives for DAG Learning

Madhumitha Shridharan, Garud Iyengar
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:208-216, 2025.

Abstract

We consider a non-convex optimization formulation for learning the weighted adjacency matrix $W$ of a directed acyclic graph (DAG), in which the acyclicity constraints are functions of $|W_{ij}|^\beta$ for $\beta \in \mathbb{N}$. State-of-the-art algorithms for this problem use gradient-based Karush-Kuhn-Tucker (KKT) optimality conditions, which yield useful search directions only for $\beta = 1$. Therefore, constraints with $\beta \geq 2$ have been ignored in the literature, and their empirical performance remains unknown. We introduce $\beta$-th Order Taylor Series Expansion Based Local Search ($\beta$-LS), which yields actionable descent directions for any $\beta \in \mathbb{N}$. Our empirical experiments show that $2$-LS obtains solutions of higher quality than $1$-LS, $3$-LS, and $4$-LS. $2$-LSopt, an optimized version of $2$-LS, obtains high-quality solutions significantly faster than the state of the art, which uses $\beta = 1$. Moreover, $2$-LSopt does not need any graph-size-specific hyperparameter tuning. We prove that $\beta$-LSopt is guaranteed to converge to a Coordinate-Wise Local Stationary Point (Cst) for any $\beta \in \mathbb{N}$. If the objective function is convex, $\beta$-LSopt converges to a local minimum.
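
The abstract does not spell out the acyclicity constraint itself. A common choice in this line of work is a trace-exponential function of the matrix with entries $|W_{ij}|^\beta$, which for $\beta = 2$ reduces to the NOTEARS constraint $\mathrm{tr}(e^{W \circ W}) - d = 0$ (Zheng et al., 2018). The sketch below illustrates that family for general $\beta$; whether this is exactly the constraint family used by $\beta$-LS is an assumption here, and the function name acyclicity_h is hypothetical.

import numpy as np
from scipy.linalg import expm

def acyclicity_h(W: np.ndarray, beta: int = 2) -> float:
    """Trace-exponential acyclicity measure h(W) = tr(exp(A)) - d,
    with A_ij = |W_ij|**beta; h(W) = 0 exactly when W encodes a DAG.
    (Assumed form: for beta = 2 this matches the NOTEARS constraint.)"""
    A = np.abs(W) ** beta        # elementwise |W_ij|^beta, so A >= 0
    d = W.shape[0]
    return float(np.trace(expm(A)) - d)

# A 3-node example: the cycle 0 -> 1 -> 2 -> 0 gives h(W) > 0;
# deleting the edge 2 -> 0 makes the graph acyclic and h(W) = 0.
W_cyclic = np.array([[0.0, 0.8, 0.0],
                     [0.0, 0.0, 0.5],
                     [0.3, 0.0, 0.0]])
W_dag = W_cyclic.copy()
W_dag[2, 0] = 0.0
print(acyclicity_h(W_cyclic, beta=2))   # > 0: cycle detected
print(acyclicity_h(W_dag, beta=2))      # ~ 0: acyclic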

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-shridharan25a,
  title     = {$\beta$-th order Acyclicity Derivatives for DAG Learning},
  author    = {Shridharan, Madhumitha and Iyengar, Garud},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {208--216},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/shridharan25a/shridharan25a.pdf},
  url       = {https://proceedings.mlr.press/v258/shridharan25a.html},
  abstract  = {We consider a non-convex optimization formulation for learning the weighted adjacency matrix $W$ of a directed acyclic graph (DAG) that uses acyclicity constraints that are functions of $|W_{ij}|^\beta$, for $\beta \in \mathbb{N}$. State-of-the-art algorithms for this problem use gradient-based Karush-Kuhn-Tucker (KKT) optimality conditions which only yield useful search directions for $\beta =1$. Therefore, constraints with $\beta \geq 2$ have been ignored in the literature, and their empirical performance remains unknown. We introduce $\beta$-th Order Taylor Series Expansion Based Local Search ($\beta$-LS) which yields actionable descent directions for any $\beta \in \mathbb{N}$. Our empirical experiments show that $2$-LS obtains solutions of higher quality than $1$-LS, $3$-LS and $4$-LS. $2$-LSopt, an optimized version of $2$-LS, obtains high quality solutions significantly faster than the state of the art which uses $\beta=1$. Moreover, $2$-LSopt does not need any graph-size specific hyperparameter tuning. We prove that $\beta$-LSopt is guaranteed to converge to a Coordinate-Wise Local Stationary Point (Cst) for any $\beta \in \mathbb{N}$. If the objective function is convex, $\beta$-LSopt converges to a local minimum.}
}
APA
Shridharan, M. & Iyengar, G. (2025). $\beta$-th order Acyclicity Derivatives for DAG Learning. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:208-216. Available from https://proceedings.mlr.press/v258/shridharan25a.html.
