Adversarial Machine Learning Attacks on Financial Reporting via Maximum Violated Multi-Objective Attack

Edward Raff, Karen Kukla, Michel Benaroch, Joseph Comprix
Proceedings of the 2025 Conference on Applied Machine Learning for Information Security, PMLR 299:1-27, 2025.

Abstract

Bad actors, primarily distressed firms, have the incentive and desire to manipulate their financial reports to hide their distress and derive personal gains. As attackers, these firms are motivated by potentially millions of dollars and the availability of many publicly disclosed and used financial modeling frameworks. Existing attack methods do not work on this data due to anti-correlated objectives that must both be satisfied for the attacker to succeed. We introduce Maximum Violated Multi-Objective (MVMO) attacks that adapt the attacker’s search direction to find 20$\times$ more satisfying attacks compared to standard attacks. The result is that in $\approx$ 50% of cases, a company could inflate their earnings by 100-200%, while simultaneously reducing their fraud scores by 15%. By working with lawyers and professional accountants, we ensure our threat model is realistic to how such frauds are performed in practice.
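The abstract does not spell out the algorithm, but the name suggests a search that, at each step, follows the objective currently furthest from being satisfied. Below is a minimal, hypothetical sketch of that "maximum violated" idea on a toy pair of anti-correlated objectives; the function names, thresholds, and finite-difference gradients are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def violation(f, x, threshold):
    """How far objective f is from its threshold (<= 0 means satisfied)."""
    return f(x) - threshold

def num_grad(f, x, eps=1e-6):
    """Central finite-difference gradient of scalar f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def mvmo_step(x, objectives, thresholds, lr=0.1):
    """Take one descent step on the currently most-violated objective."""
    viols = [violation(f, x, t) for f, t in zip(objectives, thresholds)]
    worst = int(np.argmax(viols))
    if viols[worst] <= 0:  # every objective already satisfied
        return x, True
    return x - lr * num_grad(objectives[worst], x), False

# Toy anti-correlated pair: raising x[0] helps one objective, hurts the other.
f1 = lambda x: -x[0]            # satisfied when x[0] >= 1
f2 = lambda x: x[0] - 2 * x[1]  # satisfied when x[0] <= 2 * x[1]

x = np.array([0.0, 0.0])
done = False
for _ in range(200):
    x, done = mvmo_step(x, [f1, f2], thresholds=[-1.0, 0.0])
    if done:
        break
```

Because the two objectives pull `x[0]` in opposite directions, a fixed weighted sum can stall; switching the descent direction to whichever constraint is most violated lets the search trade progress between them until both are met.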

Cite this Paper


BibTeX
@InProceedings{pmlr-v299-raff25a,
  title     = {Adversarial Machine Learning Attacks on Financial Reporting via Maximum Violated Multi-Objective Attack},
  author    = {Raff, Edward and Kukla, Karen and Benaroch, Michel and Comprix, Joseph},
  booktitle = {Proceedings of the 2025 Conference on Applied Machine Learning for Information Security},
  pages     = {1--27},
  year      = {2025},
  editor    = {Raff, Edward and Rudd, Ethan M.},
  volume    = {299},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Oct},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v299/main/assets/raff25a/raff25a.pdf},
  url       = {https://proceedings.mlr.press/v299/raff25a.html},
  abstract  = {Bad actors, primarily distressed firms, have the incentive and desire to manipulate their financial reports to hide their distress and derive personal gains. As attackers, these firms are motivated by potentially millions of dollars and the availability of many publicly disclosed and used financial modeling frameworks. Existing attack methods do not work on this data due to anti-correlated objectives that must both be satisfied for the attacker to succeed. We introduce Maximum Violated Multi-Objective (MVMO) attacks that adapt the attacker's search direction to find 20$\times$ more satisfying attacks compared to standard attacks. The result is that in $\approx$ 50% of cases, a company could inflate their earnings by 100-200%, while simultaneously reducing their fraud scores by 15%. By working with lawyers and professional accountants, we ensure our threat model is realistic to how such frauds are performed in practice.}
}
Endnote
%0 Conference Paper
%T Adversarial Machine Learning Attacks on Financial Reporting via Maximum Violated Multi-Objective Attack
%A Edward Raff
%A Karen Kukla
%A Michel Benaroch
%A Joseph Comprix
%B Proceedings of the 2025 Conference on Applied Machine Learning for Information Security
%C Proceedings of Machine Learning Research
%D 2025
%E Edward Raff
%E Ethan M. Rudd
%F pmlr-v299-raff25a
%I PMLR
%P 1--27
%U https://proceedings.mlr.press/v299/raff25a.html
%V 299
%X Bad actors, primarily distressed firms, have the incentive and desire to manipulate their financial reports to hide their distress and derive personal gains. As attackers, these firms are motivated by potentially millions of dollars and the availability of many publicly disclosed and used financial modeling frameworks. Existing attack methods do not work on this data due to anti-correlated objectives that must both be satisfied for the attacker to succeed. We introduce Maximum Violated Multi-Objective (MVMO) attacks that adapt the attacker’s search direction to find 20$\times$ more satisfying attacks compared to standard attacks. The result is that in $\approx$ 50% of cases, a company could inflate their earnings by 100-200%, while simultaneously reducing their fraud scores by 15%. By working with lawyers and professional accountants, we ensure our threat model is realistic to how such frauds are performed in practice.
APA
Raff, E., Kukla, K., Benaroch, M. & Comprix, J. (2025). Adversarial Machine Learning Attacks on Financial Reporting via Maximum Violated Multi-Objective Attack. Proceedings of the 2025 Conference on Applied Machine Learning for Information Security, in Proceedings of Machine Learning Research 299:1-27. Available from https://proceedings.mlr.press/v299/raff25a.html.