Errors are Robustly Tamed in Cumulative Knowledge Processes

Anna Brandenberger, Cassandra Marcussen, Elchanan Mossel, Madhu Sudan
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:630-631, 2024.

Abstract

We study processes of societal knowledge accumulation, where the validity of a new unit of knowledge depends both on the correctness of its derivation and on the validity of the units it depends on. A fundamental question in this setting is: if a constant fraction of the new derivations is wrong, can investing a constant fraction of effort, bounded away from one, ensure that a constant fraction of knowledge in society is valid? Ben-Eliezer, Mikulincer, Mossel, and Sudan (ITCS 2023) introduced a concrete probabilistic model to analyze such questions and answered this question affirmatively. Their study, however, focuses on the simple case where each new unit depends on just one existing unit, and units attach according to a preferential attachment rule. In this work, we consider much more general families of cumulative knowledge processes, where new units may attach according to varied attachment mechanisms and may depend on multiple existing units. We also allow a (random) fraction of insertions of adversarial nodes. We give a robust affirmative answer to the above question by showing that, for all of these models, as long as many of the units follow simple heuristics for checking a bounded number of the units they depend on, all errors will eventually be eliminated. Our results indicate that preserving the quality of large interdependent collections of units of knowledge is feasible, as long as careful but not too costly checks are performed when new units are derived/deposited.
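To make the kind of process studied here concrete, the following is a minimal toy simulation, not the authors' model: it assumes a simplified single-dependency variant with preferential attachment, i.i.d. derivation errors with probability eps, and, with probability p, a bounded check of up to k ancestors that eliminates the first false unit found. The parameter names, the elimination rule, and all defaults are illustrative assumptions.

```python
import random

def simulate_ckp(steps=5000, eps=0.1, p=0.5, k=3, seed=0):
    """Toy single-dependency CKP sketch (assumed variant, not the paper's model):
    grow a tree by preferential attachment, inject derivation errors w.p. eps,
    and w.p. p check up to k ancestors of each new unit."""
    rng = random.Random(seed)
    parent = [None]          # parent[i] = the unit that unit i depends on
    derivation_ok = [True]   # did unit i's own derivation succeed?
    alive = [True]           # has unit i survived elimination?
    weight = [1]             # attachment weight = 1 + #children; 0 once eliminated

    def is_valid(i):
        # A unit is valid iff its own derivation and every ancestor's are correct.
        while i is not None:
            if not derivation_ok[i]:
                return False
            i = parent[i]
        return True

    for _ in range(steps):
        # Preferential attachment restricted to surviving units
        # (eliminated units have weight 0 and are never chosen).
        u = rng.choices(range(len(weight)), weights=weight)[0]
        parent.append(u)
        derivation_ok.append(rng.random() >= eps)
        alive.append(True)
        weight.append(1)
        weight[u] += 1
        # Bounded check: with probability p, walk up to k ancestors and
        # eliminate the first unit whose derivation failed (a simplification;
        # the paper's checking heuristics are more general).
        if rng.random() < p:
            v, hops = u, 0
            while v is not None and hops < k:
                if not derivation_ok[v]:
                    alive[v] = False
                    weight[v] = 0
                    break
                v, hops = parent[v], hops + 1

    live = [i for i in range(len(parent)) if alive[i]]
    return sum(is_valid(i) for i in live) / len(live)

if __name__ == "__main__":
    # Raising the checking effort p should raise the surviving-valid fraction.
    for p in (0.0, 0.3, 0.7):
        print(f"p = {p:.1f}: valid fraction among survivors "
              f"= {simulate_ckp(p=p):.3f}")
```

In this sketch, a constant per-unit checking effort (p bounded away from zero, k bounded) visibly suppresses the invalid fraction; the paper establishes the qualitative phenomenon rigorously and in far greater generality, including multiple dependencies, varied attachment mechanisms, and adversarial insertions.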

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-brandenberger24a,
  title     = {Errors are Robustly Tamed in Cumulative Knowledge Processes},
  author    = {Brandenberger, Anna and Marcussen, Cassandra and Mossel, Elchanan and Sudan, Madhu},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {630--631},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/brandenberger24a/brandenberger24a.pdf},
  url       = {https://proceedings.mlr.press/v247/brandenberger24a.html},
  abstract  = {We study processes of societal knowledge accumulation, where the validity of a new unit of knowledge depends both on the correctness of its derivation and on the validity of the units it depends on. A fundamental question in this setting is: If a constant fraction of the new derivations is wrong, can investing a constant fraction, bounded away from one, of effort ensure that a constant fraction of knowledge in society is valid? Ben-Eliezer, Mikulincer, Mossel, and Sudan (ITCS 2023) introduced a concrete probabilistic model to analyze such questions and showed an affirmative answer to this question. Their study, however, focuses on the simple case where each new unit depends on just one existing unit, and units attach according to a {\em preferential attachment rule}. In this work, we consider much more general families of cumulative knowledge processes, where new units may attach according to varied attachment mechanisms and depend on multiple existing units. We also allow a (random) fraction of insertions of adversarial nodes. We give a robust affirmative answer to the above question by showing that for \textit{all} of these models, as long as many of the units follow simple heuristics for checking a bounded number of units they depend on, all errors will be eventually eliminated. Our results indicate that preserving the quality of large interdependent collections of units of knowledge is feasible, as long as careful but not too costly checks are performed when new units are derived/deposited.}
}
Endnote
%0 Conference Paper
%T Errors are Robustly Tamed in Cumulative Knowledge Processes
%A Anna Brandenberger
%A Cassandra Marcussen
%A Elchanan Mossel
%A Madhu Sudan
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-brandenberger24a
%I PMLR
%P 630--631
%U https://proceedings.mlr.press/v247/brandenberger24a.html
%V 247
%X We study processes of societal knowledge accumulation, where the validity of a new unit of knowledge depends both on the correctness of its derivation and on the validity of the units it depends on. A fundamental question in this setting is: If a constant fraction of the new derivations is wrong, can investing a constant fraction, bounded away from one, of effort ensure that a constant fraction of knowledge in society is valid? Ben-Eliezer, Mikulincer, Mossel, and Sudan (ITCS 2023) introduced a concrete probabilistic model to analyze such questions and showed an affirmative answer to this question. Their study, however, focuses on the simple case where each new unit depends on just one existing unit, and units attach according to a preferential attachment rule. In this work, we consider much more general families of cumulative knowledge processes, where new units may attach according to varied attachment mechanisms and depend on multiple existing units. We also allow a (random) fraction of insertions of adversarial nodes. We give a robust affirmative answer to the above question by showing that for all of these models, as long as many of the units follow simple heuristics for checking a bounded number of units they depend on, all errors will be eventually eliminated. Our results indicate that preserving the quality of large interdependent collections of units of knowledge is feasible, as long as careful but not too costly checks are performed when new units are derived/deposited.
APA
Brandenberger, A., Marcussen, C., Mossel, E. & Sudan, M. (2024). Errors are Robustly Tamed in Cumulative Knowledge Processes. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:630-631. Available from https://proceedings.mlr.press/v247/brandenberger24a.html.