BERT-Sort: A Zero-shot MLM Semantic Encoder on Ordinal Features for AutoML

Mehdi Bahrami, Wei-Peng Chen, Lei Liu, Mukul Prasad
Proceedings of the First International Conference on Automated Machine Learning, PMLR 188:11/1-26, 2022.

Abstract

Data pre-processing is one of the key steps in creating machine learning pipelines for tabular data. One of the common data pre-processing operations implemented in AutoML systems is to encode categorical features as numerical features. Typically, this is implemented using a simple alphabetical sort on the categorical values, using functions such as OrdinalEncoder and LabelEncoder in Scikit-Learn and H2O. However, there often exist semantic ordinal relationships among the categorical values, such as quality level (i.e., ['very good' $\succ$ 'good' $\succ$ 'normal' $\succ$ 'poor']) or month (i.e., ['Jan' $\prec$ 'Feb' $\prec$ 'Mar']). Such semantic relationships are not exploited by previous AutoML approaches. In this paper, we introduce BERT-Sort, a novel approach to semantically encode ordinal categorical values via zero-shot Masked Language Models (MLMs) and apply it to AutoML for tabular data. We created a new benchmark, the first of its kind, of 42 features from 10 public data sets for sorting categorical ordinal values, on which BERT-Sort significantly improves the semantic encoding of ordinal values over existing approaches, with a 27% improvement. We perform a comprehensive evaluation of BERT-Sort on different public MLMs, such as RoBERTa, XLM, and DistilBERT. We also compare the performance of raw data sets against data sets encoded with BERT-Sort on different AutoML platforms, including AutoGluon, FLAML, H2O, and MLJAR, to evaluate the proposed approach in an end-to-end scenario, where BERT-Sort achieved performance close to that of hard-encoded features. The artifacts of BERT-Sort are available at https://github.com/marscod/BERT-Sort.
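To make the problem concrete, the following minimal sketch (assuming Scikit-Learn is installed; the quality values are taken from the abstract's example) shows how OrdinalEncoder's default alphabetical ordering scrambles the semantic order, and how supplying the order explicitly, as a BERT-Sort-style encoder would infer it, restores a monotone encoding:

from sklearn.preprocessing import OrdinalEncoder

quality = [["poor"], ["normal"], ["good"], ["very good"]]

# Default behavior: categories are sorted alphabetically, so the learned
# order is good < normal < poor < very good, which is semantically wrong.
alphabetical = OrdinalEncoder()
print(alphabetical.fit_transform(quality).ravel())  # [2. 1. 0. 3.]

# Passing the semantic order explicitly (hard-coded here; BERT-Sort aims
# to infer such an order automatically) yields codes that grow with quality.
semantic = OrdinalEncoder(categories=[["poor", "normal", "good", "very good"]])
print(semantic.fit_transform(quality).ravel())      # [0. 1. 2. 3.]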

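The abstract does not spell out the scoring mechanism, but the general idea of zero-shot ordinal ranking with an MLM can be sketched with Hugging Face transformers. The template, the choice of bert-base-uncased, and the single-pole scoring below are illustrative assumptions, not the paper's exact algorithm:

from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

values = ["poor", "normal", "good", "excellent"]
template = "The quality of the product was remarkably [MASK]."

# Restrict the MLM's mask predictions to the candidate values and read off
# the probability each value receives in the positively anchored template.
preds = fill(template, targets=values)
scores = {p["token_str"]: p["score"] for p in preds}

# Higher probability under the positive anchor ~ more positive value, so an
# ascending sort by score approximates the negative-to-positive ordering.
ranking = sorted(values, key=lambda v: scores[v])
print(ranking)  # expected (not guaranteed): ['poor', 'normal', 'good', 'excellent']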
Cite this Paper


BibTeX
@InProceedings{pmlr-v188-bahrami22a,
  title     = {BERT-Sort: A Zero-shot MLM Semantic Encoder on Ordinal Features for AutoML},
  author    = {Bahrami, Mehdi and Chen, Wei-Peng and Liu, Lei and Prasad, Mukul},
  booktitle = {Proceedings of the First International Conference on Automated Machine Learning},
  pages     = {11/1--26},
  year      = {2022},
  editor    = {Guyon, Isabelle and Lindauer, Marius and van der Schaar, Mihaela and Hutter, Frank and Garnett, Roman},
  volume    = {188},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v188/bahrami22a/bahrami22a.pdf},
  url       = {https://proceedings.mlr.press/v188/bahrami22a.html}
}
Endnote
%0 Conference Paper
%T BERT-Sort: A Zero-shot MLM Semantic Encoder on Ordinal Features for AutoML
%A Mehdi Bahrami
%A Wei-Peng Chen
%A Lei Liu
%A Mukul Prasad
%B Proceedings of the First International Conference on Automated Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Isabelle Guyon
%E Marius Lindauer
%E Mihaela van der Schaar
%E Frank Hutter
%E Roman Garnett
%F pmlr-v188-bahrami22a
%I PMLR
%P 11/1--26
%U https://proceedings.mlr.press/v188/bahrami22a.html
%V 188
APA
Bahrami, M., Chen, W., Liu, L. & Prasad, M. (2022). BERT-Sort: A Zero-shot MLM Semantic Encoder on Ordinal Features for AutoML. Proceedings of the First International Conference on Automated Machine Learning, in Proceedings of Machine Learning Research 188:11/1-26. Available from https://proceedings.mlr.press/v188/bahrami22a.html.
