Sharp Optimality of Simple, Plug-in Estimation of the Fisher Information of a Smoothed Density

Subhodh Kotekal
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:31590-31618, 2025.

Abstract

Given independent and identically distributed data from a compactly supported, $\alpha$-Hölder density $f$, we study estimation of the Fisher information of the Gaussian-smoothed density $f*\varphi_t$, where $\varphi_t$ is the density of $N(0, t)$. We derive the minimax rate including the sharp dependence on $t$ and show some simple, plug-in type estimators are optimal for $t > 0$, even though extra debiasing steps are widely employed in the literature to achieve the sharp rate in the unsmoothed ($t = 0$) case. Due to our result’s sharp characterization of the scaling in $t$, plug-in estimators of the mutual information and entropy are shown to achieve the parametric rate by way of the I-MMSE and de Bruijn’s identities.
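The estimand and the plug-in idea described in the abstract can be illustrated numerically. The sketch below (my own illustration, not code from the paper) forms the natural plug-in estimator: convolve the empirical measure with the $N(0,t)$ kernel to estimate $f*\varphi_t$, then integrate $(\hat f_t')^2/\hat f_t$ to estimate the Fisher information. The function name, grid choice, and sanity check are all illustrative assumptions.

```python
import numpy as np

def fisher_information_plugin(x, t, grid_size=1000):
    """Hypothetical plug-in estimate of the Fisher information of f * phi_t.

    The smoothed density is estimated by convolving the empirical
    measure with the N(0, t) density:
        f_hat_t(y) = (1/n) sum_i phi_t(y - x_i),
    and I(f * phi_t) is approximated by numerically integrating
    (f_hat_t')^2 / f_hat_t over a grid.
    """
    x = np.asarray(x, dtype=float)
    pad = 5.0 * np.sqrt(t)
    grid = np.linspace(x.min() - pad, x.max() + pad, grid_size)
    d = grid[:, None] - x[None, :]                 # (m, n) pairwise differences
    phi = np.exp(-d**2 / (2 * t)) / np.sqrt(2 * np.pi * t)
    f_hat = phi.mean(axis=1)                       # estimate of f * phi_t
    f_hat_prime = (-d / t * phi).mean(axis=1)      # its derivative in y
    integrand = f_hat_prime**2 / np.maximum(f_hat, 1e-300)
    # trapezoid rule (written out to avoid NumPy-version API differences)
    return float(np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(grid)))

# Sanity check: if X ~ N(0, s), then f * phi_t is N(0, s + t),
# whose Fisher information is 1 / (s + t); here 1 / (1 + 1) = 0.5.
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=5000)
est = fisher_information_plugin(samples, t=1.0)
```

Note that the Gaussian example above violates the paper's compact-support assumption; it is used only because the smoothed Fisher information is available in closed form there.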

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-kotekal25a,
  title     = {Sharp Optimality of Simple, Plug-in Estimation of the {F}isher Information of a Smoothed Density},
  author    = {Kotekal, Subhodh},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {31590--31618},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/kotekal25a/kotekal25a.pdf},
  url       = {https://proceedings.mlr.press/v267/kotekal25a.html},
  abstract  = {Given independent and identically distributed data from a compactly supported, $\alpha$-Hölder density $f$, we study estimation of the Fisher information of the Gaussian-smoothed density $f*\varphi_t$, where $\varphi_t$ is the density of $N(0, t)$. We derive the minimax rate including the sharp dependence on $t$ and show some simple, plug-in type estimators are optimal for $t > 0$, even though extra debiasing steps are widely employed in the literature to achieve the sharp rate in the unsmoothed ($t = 0$) case. Due to our result’s sharp characterization of the scaling in $t$, plug-in estimators of the mutual information and entropy are shown to achieve the parametric rate by way of the I-MMSE and de Bruijn’s identities.}
}
Endnote
%0 Conference Paper
%T Sharp Optimality of Simple, Plug-in Estimation of the Fisher Information of a Smoothed Density
%A Subhodh Kotekal
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-kotekal25a
%I PMLR
%P 31590--31618
%U https://proceedings.mlr.press/v267/kotekal25a.html
%V 267
%X Given independent and identically distributed data from a compactly supported, $\alpha$-Hölder density $f$, we study estimation of the Fisher information of the Gaussian-smoothed density $f*\varphi_t$, where $\varphi_t$ is the density of $N(0, t)$. We derive the minimax rate including the sharp dependence on $t$ and show some simple, plug-in type estimators are optimal for $t > 0$, even though extra debiasing steps are widely employed in the literature to achieve the sharp rate in the unsmoothed ($t = 0$) case. Due to our result’s sharp characterization of the scaling in $t$, plug-in estimators of the mutual information and entropy are shown to achieve the parametric rate by way of the I-MMSE and de Bruijn’s identities.
APA
Kotekal, S. (2025). Sharp Optimality of Simple, Plug-in Estimation of the Fisher Information of a Smoothed Density. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:31590-31618. Available from https://proceedings.mlr.press/v267/kotekal25a.html.