Online Control of the False Discovery Rate under "Decision Deadlines"

Aaron J. Fisher
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:8340-8359, 2022.

Abstract

Online testing procedures aim to control the extent of false discoveries over a sequence of hypothesis tests, allowing for the possibility that early-stage test results influence the choice of hypotheses to be tested in later stages. Typically, online methods assume that a permanent decision regarding the current test (reject or not reject) must be made before advancing to the next test. We instead assume that each hypothesis requires an immediate preliminary decision, but also allows us to update that decision until a preset deadline. Roughly speaking, this lets us apply a Benjamini-Hochberg-type procedure over a moving window of hypotheses, where the threshold parameters for upcoming tests can be determined based on preliminary results. We show that our approach can control the false discovery rate (FDR) at every stage of testing, even under arbitrary p-value dependencies. That said, our approach offers much greater flexibility if the p-values exhibit a known independence structure. For example, if the p-value sequence is finite and all p-values are independent, then we can additionally control FDR at adaptively chosen stopping times.
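For intuition only, the sketch below (in Python; all function and variable names are hypothetical, not from the paper) illustrates the general flavor of re-running a Benjamini-Hochberg-type step-up rule over a trailing window of recent p-values, with each decision remaining revisable until its test leaves the window. It is a simplified toy under an assumed fixed window length, not the paper's actual procedure, and it is not claimed to inherit the paper's FDR guarantees. Recall that the FDR is the expected proportion of false discoveries among all rejections, E[V / max(R, 1)].

# Toy illustration (not the paper's procedure): a sliding-window variant of the
# Benjamini-Hochberg (BH) step-up rule. Each hypothesis receives a preliminary
# decision when its p-value arrives and may be revised until `deadline` further
# tests have been observed, after which the decision is frozen.

from typing import List


def bh_reject(pvals: List[float], alpha: float) -> List[bool]:
    """Standard BH step-up rule applied to one batch of p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices sorted by p-value
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k_max = rank
    if k_max == 0:
        return [False] * m
    threshold = pvals[order[k_max - 1]]
    return [p <= threshold for p in pvals]


def windowed_decisions(pvals: List[float], alpha: float, deadline: int) -> List[bool]:
    """Re-run BH on the trailing window each time a new p-value arrives.

    A decision may flip while its test is still inside the window; once the
    test falls out of the window (its "deadline"), the decision is final.
    """
    decisions = [False] * len(pvals)
    for t in range(len(pvals)):
        start = max(0, t - deadline + 1)
        window = list(range(start, t + 1))
        rejects = bh_reject([pvals[i] for i in window], alpha)
        for idx, i in enumerate(window):
            decisions[i] = rejects[idx]
    return decisions


# Example: preliminary decisions are announced as each p-value arrives; the
# returned vector reflects any revisions made before each test's deadline.
print(windowed_decisions([0.001, 0.20, 0.004, 0.60, 0.03], alpha=0.1, deadline=3))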

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-fisher22a,
  title     = {Online Control of the False Discovery Rate under "Decision Deadlines"},
  author    = {Fisher, Aaron J.},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {8340--8359},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/fisher22a/fisher22a.pdf},
  url       = {https://proceedings.mlr.press/v151/fisher22a.html},
  abstract  = {Online testing procedures aim to control the extent of false discoveries over a sequence of hypothesis tests, allowing for the possibility that early-stage test results influence the choice of hypotheses to be tested in later stages. Typically, online methods assume that a permanent decision regarding the current test (reject or not reject) must be made before advancing to the next test. We instead assume that each hypothesis requires an immediate preliminary decision, but also allows us to update that decision until a preset deadline. Roughly speaking, this lets us apply a Benjamini-Hochberg-type procedure over a moving window of hypotheses, where the threshold parameters for upcoming tests can be determined based on preliminary results. We show that our approach can control the false discovery rate (FDR) at every stage of testing, even under arbitrary p-value dependencies. That said, our approach offers much greater flexibility if the p-values exhibit a known independence structure. For example, if the p-value sequence is finite and all p-values are independent, then we can additionally control FDR at adaptively chosen stopping times.}
}
Endnote
%0 Conference Paper
%T Online Control of the False Discovery Rate under "Decision Deadlines"
%A Aaron J. Fisher
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-fisher22a
%I PMLR
%P 8340--8359
%U https://proceedings.mlr.press/v151/fisher22a.html
%V 151
%X Online testing procedures aim to control the extent of false discoveries over a sequence of hypothesis tests, allowing for the possibility that early-stage test results influence the choice of hypotheses to be tested in later stages. Typically, online methods assume that a permanent decision regarding the current test (reject or not reject) must be made before advancing to the next test. We instead assume that each hypothesis requires an immediate preliminary decision, but also allows us to update that decision until a preset deadline. Roughly speaking, this lets us apply a Benjamini-Hochberg-type procedure over a moving window of hypotheses, where the threshold parameters for upcoming tests can be determined based on preliminary results. We show that our approach can control the false discovery rate (FDR) at every stage of testing, even under arbitrary p-value dependencies. That said, our approach offers much greater flexibility if the p-values exhibit a known independence structure. For example, if the p-value sequence is finite and all p-values are independent, then we can additionally control FDR at adaptively chosen stopping times.
APA
Fisher, A. J. (2022). Online Control of the False Discovery Rate under "Decision Deadlines". Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:8340-8359. Available from https://proceedings.mlr.press/v151/fisher22a.html.