Generalizing Orthogonalization for Models with Non-Linearities

David Rügamer, Chris Kolb, Tobias Weber, Lucas Kook, Thomas Nagler
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:42796-42817, 2024.

Abstract

The complexity of black-box algorithms can lead to various challenges, including the introduction of biases. These biases present immediate risks when such algorithms are deployed. For instance, it has been shown that neural networks can deduce racial information solely from a patient’s X-ray scan, a task beyond the capability of medical experts. If medical experts are unaware of this, automated decision-making based on such an algorithm could lead to a treatment being prescribed (purely) on the basis of racial information. While current methodologies allow for the "orthogonalization" or "normalization" of neural networks with respect to such information, existing approaches are grounded in linear models. Our paper advances the discourse by introducing corrections for non-linearities such as ReLU activations. Our approach also encompasses scalar and tensor-valued predictions, facilitating its integration into neural network architectures. Through extensive experiments, we validate our method’s effectiveness in safeguarding sensitive data in generalized linear models, normalizing convolutional neural networks for metadata, and rectifying pre-existing embeddings for undesired attributes.
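
For context, the linear orthogonalization that this paper generalizes amounts to projecting model outputs (or features) onto the orthogonal complement of the column space of the protected attributes. Below is a minimal NumPy sketch of that linear baseline only; the function and variable names are illustrative and not taken from the authors' code, and the paper's actual contribution, the corrections for non-linearities, is not shown here.

import numpy as np

def linear_orthogonalize(features, protected):
    # Stack an intercept column with the protected attributes Z.
    Z = np.column_stack([np.ones(len(protected)), protected])
    # Projection onto the column space of Z: P = Z (Z^T Z)^+ Z^T.
    P = Z @ np.linalg.pinv(Z.T @ Z) @ Z.T
    # Residualize: (I - P) @ features removes all linear
    # dependence on the protected attributes.
    return features - P @ features

# Toy check: features linearly correlated with a protected attribute.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
x = 2.0 * z + rng.normal(size=(500, 3))
x_orth = linear_orthogonalize(x, z)
print(np.corrcoef(z.ravel(), x[:, 0])[0, 1])       # clearly non-zero
print(np.corrcoef(z.ravel(), x_orth[:, 0])[0, 1])  # ~0 up to numerics

Note that this projection only guarantees zero linear correlation with the protected attributes; non-linear transformations downstream (e.g., ReLU activations) can reintroduce the protected information, which is precisely the gap the paper addresses.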

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-rugamer24a,
  title     = {Generalizing Orthogonalization for Models with Non-Linearities},
  author    = {R\"{u}gamer, David and Kolb, Chris and Weber, Tobias and Kook, Lucas and Nagler, Thomas},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {42796--42817},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/rugamer24a/rugamer24a.pdf},
  url       = {https://proceedings.mlr.press/v235/rugamer24a.html}
}
Endnote
%0 Conference Paper
%T Generalizing Orthogonalization for Models with Non-Linearities
%A David Rügamer
%A Chris Kolb
%A Tobias Weber
%A Lucas Kook
%A Thomas Nagler
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-rugamer24a
%I PMLR
%P 42796--42817
%U https://proceedings.mlr.press/v235/rugamer24a.html
%V 235
APA
Rügamer, D., Kolb, C., Weber, T., Kook, L., & Nagler, T. (2024). Generalizing Orthogonalization for Models with Non-Linearities. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:42796-42817. Available from https://proceedings.mlr.press/v235/rugamer24a.html.
