Improving Neural Logic Machines via Failure Reflection

Zhiming Li, Yushi Cao, Yan Zheng, Xu Liu, Bozhi Wu, Tianlin Li, Xiufeng Xu, Junzhe Jiang, Yon Shin Teo, Shang-Wei Lin, Yang Liu
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:27457-27473, 2024.

Abstract

Reasoning is a fundamental ability towards artificial general intelligence (AGI). Fueled by the success of deep learning, neural logic machines (NLMs) have introduced novel neural-symbolic structures and demonstrated strong performance and generalization on reasoning and decision-making tasks. However, the original training approaches for NLMs are still far from perfect: the models repeat similar mistakes during training, which leads to sub-optimal performance. To mitigate this issue, we present a novel framework named Failure Reflection Guided Regularizer (FRGR). FRGR first dynamically identifies and summarizes the root cause when the model repeats similar mistakes during training. It then penalizes the model if it makes mistakes with a similar root cause in future training iterations. In this way, the model is expected to avoid repeating errors with similar root causes and to converge faster to a better-performing optimum. Experimental results on multiple relational reasoning and decision-making tasks demonstrate the effectiveness of FRGR in improving performance, generalization, training efficiency, and data efficiency.
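To make the training scheme described above more concrete, the following is a minimal, hypothetical Python (PyTorch) sketch, not the authors' implementation: it keeps a running summary of the prediction patterns behind repeated training failures and adds a penalty term that discourages the model from reproducing them. All names (FailureMemory, training_step, the momentum and lam parameters) and the cosine-similarity penalty are illustrative assumptions standing in for FRGR's root-cause summarization and regularization.

    import torch
    import torch.nn as nn

    class FailureMemory:
        """Running summary of the prediction patterns behind repeated failures.

        Illustrative stand-in for FRGR's root-cause summarization; not the
        method described in the paper.
        """

        def __init__(self, momentum: float = 0.9):
            self.momentum = momentum
            self.summary = None  # exponential moving average of failure patterns

        def update(self, failed_logits: torch.Tensor) -> None:
            # Summarize the current batch of mistakes and fold it into the memory.
            pattern = failed_logits.detach().mean(dim=0)
            if self.summary is None:
                self.summary = pattern
            else:
                self.summary = self.momentum * self.summary + (1.0 - self.momentum) * pattern

        def penalty(self, logits: torch.Tensor) -> torch.Tensor:
            # Penalize similarity between current predictions and the failure summary,
            # nudging the model away from repeating the summarized mistake pattern.
            if self.summary is None:
                return logits.new_zeros(())
            sim = torch.cosine_similarity(logits, self.summary.unsqueeze(0), dim=-1)
            return sim.clamp(min=0).mean()

    def training_step(model, batch, memory, base_loss=nn.BCEWithLogitsLoss(), lam=0.1):
        x, y = batch  # y: float targets in {0, 1}
        logits = model(x)
        loss = base_loss(logits, y)

        # Record examples the model currently gets wrong ("failures").
        with torch.no_grad():
            wrong = (logits.sigmoid().round() != y).any(dim=-1)
        if wrong.any():
            memory.update(logits[wrong])

        # Regularized objective: task loss plus the failure-reflection penalty.
        return loss + lam * memory.penalty(logits)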

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-li24f,
  title     = {Improving Neural Logic Machines via Failure Reflection},
  author    = {Li, Zhiming and Cao, Yushi and Zheng, Yan and Liu, Xu and Wu, Bozhi and Li, Tianlin and Xu, Xiufeng and Jiang, Junzhe and Teo, Yon Shin and Lin, Shang-Wei and Liu, Yang},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {27457--27473},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24f/li24f.pdf},
  url       = {https://proceedings.mlr.press/v235/li24f.html},
  abstract  = {Reasoning is a fundamental ability towards artificial general intelligence (AGI). Fueled by the success of deep learning, the neural logic machines models (NLMs) have introduced novel neural-symbolic structures and demonstrate great performance and generalization on reasoning and decision-making tasks. However, the original training approaches of the NLMs are still far from perfect, the models would repeat similar mistakes during the training process which leads to sub-optimal performance. To mitigate this issue, we present a novel framework named Failure Reflection Guided Regularizer (FRGR). FRGR first dynamically identifies and summarizes the root cause if the model repeats similar mistakes during training. Then it penalizes the model if it makes similar mistakes in future training iterations. In this way, the model is expected to avoid repeating errors of similar root causes and converge faster to a better-performed optimum. Experimental results on multiple relational reasoning and decision-making tasks demonstrate the effectiveness of FRGR in improving performance, generalization, training efficiency, and data efficiency.}
}
Endnote
%0 Conference Paper
%T Improving Neural Logic Machines via Failure Reflection
%A Zhiming Li
%A Yushi Cao
%A Yan Zheng
%A Xu Liu
%A Bozhi Wu
%A Tianlin Li
%A Xiufeng Xu
%A Junzhe Jiang
%A Yon Shin Teo
%A Shang-Wei Lin
%A Yang Liu
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-li24f
%I PMLR
%P 27457--27473
%U https://proceedings.mlr.press/v235/li24f.html
%V 235
%X Reasoning is a fundamental ability towards artificial general intelligence (AGI). Fueled by the success of deep learning, the neural logic machines models (NLMs) have introduced novel neural-symbolic structures and demonstrate great performance and generalization on reasoning and decision-making tasks. However, the original training approaches of the NLMs are still far from perfect, the models would repeat similar mistakes during the training process which leads to sub-optimal performance. To mitigate this issue, we present a novel framework named Failure Reflection Guided Regularizer (FRGR). FRGR first dynamically identifies and summarizes the root cause if the model repeats similar mistakes during training. Then it penalizes the model if it makes similar mistakes in future training iterations. In this way, the model is expected to avoid repeating errors of similar root causes and converge faster to a better-performed optimum. Experimental results on multiple relational reasoning and decision-making tasks demonstrate the effectiveness of FRGR in improving performance, generalization, training efficiency, and data efficiency.
APA
Li, Z., Cao, Y., Zheng, Y., Liu, X., Wu, B., Li, T., Xu, X., Jiang, J., Teo, Y. S., Lin, S.-W., & Liu, Y. (2024). Improving Neural Logic Machines via Failure Reflection. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:27457-27473. Available from https://proceedings.mlr.press/v235/li24f.html.