Explainable Pathomics Feature Visualization via Correlation-aware Conditional Feature Editing

Yuechen Yang, Junlin Guo, Ruining Deng, Junchao Zhu, Zhengyi Lu, Chongyu Qu, Yanfan Zhu, Xingyi Guo, Yu Wang, Shilin Zhao, Haichun Yang, Yuankai Huo
Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, PMLR 315:2563-2581, 2026.

Abstract

Pathomics is a recent approach that offers rich quantitative features beyond what black-box deep learning can provide, supporting more reproducible and explainable biomarkers in digital pathology. However, many derived features (e.g., “second-order moment”) remain difficult to interpret, especially across different clinical contexts, which limits their practical adoption. Conditional diffusion models show promise for explainability through feature editing, but they typically assume feature independence, an assumption violated by intrinsically correlated pathomics features. Consequently, editing one feature while fixing others can push the model off the biological manifold and produce unrealistic artifacts. To address this, we propose a Manifold-Aware Diffusion (MAD) framework for controllable and biologically plausible cell nuclei editing. Unlike existing approaches, our method regularizes feature trajectories within a disentangled latent space learned by a variational auto-encoder (VAE). This ensures that manipulating a target feature automatically adjusts correlated attributes to remain within the learned distribution of real cells. These optimized features then guide a conditional diffusion model to synthesize high-fidelity images. Experiments demonstrate that our approach is able to navigate the manifold of pathomics features when editing those features. The proposed method outperforms baseline methods in conditional feature editing while preserving structural coherence.
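The core idea of manifold-aware editing, adjusting a target feature by moving through a learned latent space so that correlated features co-vary realistically, can be illustrated with a toy sketch. This is not the paper's method: it substitutes a linear PCA encoder/decoder for the VAE and diffusion model, and the two features (nucleus area and perimeter) are illustrative stand-ins for the actual pathomics feature set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy correlated features: nucleus perimeter grows roughly like sqrt(area).
# These two features are illustrative, not the paper's actual feature set.
area = rng.uniform(50, 200, size=500)
perimeter = 2 * np.sqrt(np.pi * area) + rng.normal(0, 1, size=500)
X = np.column_stack([area, perimeter])

# Stand-in for the learned latent space: linear PCA instead of a VAE.
mu, sd = X.mean(axis=0), X.std(axis=0)
_, _, Vt = np.linalg.svd((X - mu) / sd, full_matrices=False)

def encode(x):
    return ((x - mu) / sd) @ Vt.T

def decode(z):
    return (z @ Vt) * sd + mu

x0 = X[0]
target_area = 1.5 * x0[0]

# Naive independent edit: scale area, freeze perimeter -> off-manifold point.
naive_edit = np.array([target_area, x0[1]])

# Manifold-aware edit: move along the dominant latent direction until the
# decoded area hits the target; the correlated perimeter adjusts on its own.
z0 = encode(x0)
e0 = np.eye(2)[0]
area_per_unit = decode(z0 + e0)[0] - decode(z0)[0]  # linear sensitivity
z_edit = z0 + e0 * (target_area - x0[0]) / area_per_unit
manifold_edit = decode(z_edit)

true_perimeter = 2 * np.sqrt(np.pi * target_area)  # noise-free ground truth
print("naive perimeter error:   ", abs(naive_edit[1] - true_perimeter))
print("manifold perimeter error:", abs(manifold_edit[1] - true_perimeter))
```

The manifold-aware edit reaches the target area while landing much closer to the physically consistent perimeter than the naive edit, which is the behavior the paper enforces (via latent-trajectory regularization) before conditioning the diffusion model.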

Cite this Paper


BibTeX
@InProceedings{pmlr-v315-yang26a,
  title     = {Explainable Pathomics Feature Visualization via Correlation-aware Conditional Feature Editing},
  author    = {Yang, Yuechen and Guo, Junlin and Deng, Ruining and Zhu, Junchao and Lu, Zhengyi and Qu, Chongyu and Zhu, Yanfan and Guo, Xingyi and Wang, Yu and Zhao, Shilin and Yang, Haichun and Huo, Yuankai},
  booktitle = {Proceedings of The 9th International Conference on Medical Imaging with Deep Learning},
  pages     = {2563--2581},
  year      = {2026},
  editor    = {Huo, Yuankai and Gao, Mingchen and Kuo, Chang-Fu and Jin, Yueming and Deng, Ruining},
  volume    = {315},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v315/main/assets/yang26a/yang26a.pdf},
  url       = {https://proceedings.mlr.press/v315/yang26a.html},
  abstract  = {Pathomics is a recent approach that offers rich quantitative features beyond what black-box deep learning can provide, supporting more reproducible and explainable biomarkers in digital pathology. However, many derived features (e.g., “second-order moment”) remain difficult to interpret, especially across different clinical contexts, which limits their practical adoption. Conditional diffusion models show promise for explainability through feature editing, but they typically assume feature independence, an assumption violated by intrinsically correlated pathomics features. Consequently, editing one feature while fixing others can push the model off the biological manifold and produce unrealistic artifacts. To address this, we propose a Manifold-Aware Diffusion (MAD) framework for controllable and biologically plausible cell nuclei editing. Unlike existing approaches, our method regularizes feature trajectories within a disentangled latent space learned by a variational auto-encoder (VAE). This ensures that manipulating a target feature automatically adjusts correlated attributes to remain within the learned distribution of real cells. These optimized features then guide a conditional diffusion model to synthesize high-fidelity images. Experiments demonstrate that our approach is able to navigate the manifold of pathomics features when editing those features. The proposed method outperforms baseline methods in conditional feature editing while preserving structural coherence.}
}
Endnote
%0 Conference Paper
%T Explainable Pathomics Feature Visualization via Correlation-aware Conditional Feature Editing
%A Yuechen Yang
%A Junlin Guo
%A Ruining Deng
%A Junchao Zhu
%A Zhengyi Lu
%A Chongyu Qu
%A Yanfan Zhu
%A Xingyi Guo
%A Yu Wang
%A Shilin Zhao
%A Haichun Yang
%A Yuankai Huo
%B Proceedings of The 9th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2026
%E Yuankai Huo
%E Mingchen Gao
%E Chang-Fu Kuo
%E Yueming Jin
%E Ruining Deng
%F pmlr-v315-yang26a
%I PMLR
%P 2563--2581
%U https://proceedings.mlr.press/v315/yang26a.html
%V 315
%X Pathomics is a recent approach that offers rich quantitative features beyond what black-box deep learning can provide, supporting more reproducible and explainable biomarkers in digital pathology. However, many derived features (e.g., “second-order moment”) remain difficult to interpret, especially across different clinical contexts, which limits their practical adoption. Conditional diffusion models show promise for explainability through feature editing, but they typically assume feature independence, an assumption violated by intrinsically correlated pathomics features. Consequently, editing one feature while fixing others can push the model off the biological manifold and produce unrealistic artifacts. To address this, we propose a Manifold-Aware Diffusion (MAD) framework for controllable and biologically plausible cell nuclei editing. Unlike existing approaches, our method regularizes feature trajectories within a disentangled latent space learned by a variational auto-encoder (VAE). This ensures that manipulating a target feature automatically adjusts correlated attributes to remain within the learned distribution of real cells. These optimized features then guide a conditional diffusion model to synthesize high-fidelity images. Experiments demonstrate that our approach is able to navigate the manifold of pathomics features when editing those features. The proposed method outperforms baseline methods in conditional feature editing while preserving structural coherence.
APA
Yang, Y., Guo, J., Deng, R., Zhu, J., Lu, Z., Qu, C., Zhu, Y., Guo, X., Wang, Y., Zhao, S., Yang, H. & Huo, Y. (2026). Explainable Pathomics Feature Visualization via Correlation-aware Conditional Feature Editing. Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 315:2563-2581. Available from https://proceedings.mlr.press/v315/yang26a.html.