Transformer-based conformal predictors for paraphrase detection

Patrizio Giovannotti, Alex Gammerman
Proceedings of the Tenth Symposium on Conformal and Probabilistic Prediction and Applications, PMLR 152:243-265, 2021.

Abstract

Transformer architectures have established themselves as the state-of-the-art in many areas of natural language processing (NLP), including paraphrase detection (PD). However, they do not include a confidence estimation for each prediction and, in many cases, the applied models are poorly calibrated. These features are essential for numerous real-world applications. For example, in those cases when PD is used for sensitive tasks, like plagiarism detection, hate speech recognition or in medical NLP, mistakes might be very costly. In this work we build several variants of transformer-based conformal predictors and study their behaviour on a standard PD dataset. We show that our models are able to produce \emph{valid} predictions while retaining the accuracy of the original transformer-based models. The proposed technique can be extended to many more NLP problems that are currently being investigated.
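The paper's own predictors are not reproduced here, but the general split (inductive) conformal recipe the abstract alludes to can be sketched as follows. The nonconformity score (one minus the softmax probability of the true label) and the threshold logic below are common illustrative choices, not necessarily the authors' implementation:

```python
import math

def calibrate(cal_probs, cal_labels, epsilon):
    """Return the nonconformity threshold for significance level epsilon.

    cal_probs:  list of [p_class0, p_class1] softmax outputs on a held-out
                calibration set (e.g. from a fine-tuned transformer)
    cal_labels: list of 0/1 true labels (e.g. non-paraphrase / paraphrase)
    """
    # Nonconformity score: 1 - probability assigned to the true label.
    scores = sorted(1.0 - p[y] for p, y in zip(cal_probs, cal_labels))
    n = len(scores)
    # Conformal quantile: the ceil((n + 1) * (1 - epsilon))-th smallest score.
    k = math.ceil((n + 1) * (1 - epsilon))
    return scores[min(k, n) - 1]

def predict_set(probs, threshold):
    # Include every label whose nonconformity score is within the threshold;
    # under exchangeability the set covers the true label with
    # probability at least 1 - epsilon (validity).
    return [y for y in (0, 1) if 1.0 - probs[y] <= threshold]

# Toy usage with made-up calibration data:
cal_probs = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
cal_labels = [0, 1, 0, 1]
t = calibrate(cal_probs, cal_labels, epsilon=0.2)
print(predict_set([0.8, 0.2], t))  # → [0]
```

Note that the prediction is a *set* of labels rather than a single label: at high significance levels it may contain both labels (or neither), which is how a conformal predictor expresses its uncertainty.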

Cite this Paper


BibTeX
@InProceedings{pmlr-v152-giovannotti21a,
  title     = {Transformer-based conformal predictors for paraphrase detection},
  author    = {Giovannotti, Patrizio and Gammerman, Alex},
  booktitle = {Proceedings of the Tenth Symposium on Conformal and Probabilistic Prediction and Applications},
  pages     = {243--265},
  year      = {2021},
  editor    = {Carlsson, Lars and Luo, Zhiyuan and Cherubin, Giovanni and An Nguyen, Khuong},
  volume    = {152},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Sep},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v152/giovannotti21a/giovannotti21a.pdf},
  url       = {https://proceedings.mlr.press/v152/giovannotti21a.html},
  abstract  = {Transformer architectures have established themselves as the state-of-the-art in many areas of natural language processing (NLP), including paraphrase detection (PD). However, they do not include a confidence estimation for each prediction and, in many cases, the applied models are poorly calibrated. These features are essential for numerous real-world applications. For example, in those cases when PD is used for sensitive tasks, like plagiarism detection, hate speech recognition or in medical NLP, mistakes might be very costly. In this work we build several variants of transformer-based conformal predictors and study their behaviour on a standard PD dataset. We show that our models are able to produce \emph{valid} predictions while retaining the accuracy of the original transformer-based models. The proposed technique can be extended to many more NLP problems that are currently being investigated.}
}
Endnote
%0 Conference Paper
%T Transformer-based conformal predictors for paraphrase detection
%A Patrizio Giovannotti
%A Alex Gammerman
%B Proceedings of the Tenth Symposium on Conformal and Probabilistic Prediction and Applications
%C Proceedings of Machine Learning Research
%D 2021
%E Lars Carlsson
%E Zhiyuan Luo
%E Giovanni Cherubin
%E Khuong An Nguyen
%F pmlr-v152-giovannotti21a
%I PMLR
%P 243--265
%U https://proceedings.mlr.press/v152/giovannotti21a.html
%V 152
%X Transformer architectures have established themselves as the state-of-the-art in many areas of natural language processing (NLP), including paraphrase detection (PD). However, they do not include a confidence estimation for each prediction and, in many cases, the applied models are poorly calibrated. These features are essential for numerous real-world applications. For example, in those cases when PD is used for sensitive tasks, like plagiarism detection, hate speech recognition or in medical NLP, mistakes might be very costly. In this work we build several variants of transformer-based conformal predictors and study their behaviour on a standard PD dataset. We show that our models are able to produce \emph{valid} predictions while retaining the accuracy of the original transformer-based models. The proposed technique can be extended to many more NLP problems that are currently being investigated.
APA
Giovannotti, P. & Gammerman, A. (2021). Transformer-based conformal predictors for paraphrase detection. Proceedings of the Tenth Symposium on Conformal and Probabilistic Prediction and Applications, in Proceedings of Machine Learning Research 152:243-265. Available from https://proceedings.mlr.press/v152/giovannotti21a.html.