Learning Weighted Top-$k$ Support Vector Machine
Proceedings of The Eleventh Asian Conference on Machine Learning, PMLR 101:774-789, 2019.
Nowadays, top-$k$ accuracy is a major performance criterion when benchmarking multi-class classifiers on datasets with a large number of categories. The top-$k$ multiclass SVM was designed to minimize the empirical risk based on the top-$k$ accuracy. Two SDCA-based algorithms already exist for learning the top-$k$ SVM, and they enjoy several properties favorable for optimization, yet both suffer from two disadvantages. One weakness is that, because the design of these algorithms is specialized to the top-$k$ hinge loss, their applicability to other variants is limited. The other is that, owing to theoretical imperfections, neither algorithm can attain the optimal solution in most cases. In this study, a weighted extension of the top-$k$ SVM is considered, and novel learning algorithms based on the Frank-Wolfe algorithm are devised. The new learning algorithms possess all the favorable properties of SDCA as well as applicability not only to the original top-$k$ SVM but also to the weighted extension. Geometric convergence is achieved by smoothing the loss functions. Numerical simulations demonstrate that only the proposed Frank-Wolfe algorithms converge to the optimum, in contrast with the failure of the two existing SDCA-based algorithms. Finally, analytical results on the two existing algorithms are presented to shed light on the meaning of the solutions they produce.
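For readers unfamiliar with the loss at the center of this work, a minimal sketch of one common formulation of the top-$k$ hinge loss may help: given a score vector $s$ and true class $y$, form the margin violations $a_j = 1 + s_j - s_y$ for $j \neq y$ (with $a_y = 0$) and average the $k$ largest, clipped at zero. The function name and signature below are illustrative, not the paper's implementation, and other variants (e.g. clipping before averaging) exist:

```python
import numpy as np

def topk_hinge_loss(scores: np.ndarray, y: int, k: int = 5) -> float:
    """Illustrative top-k hinge loss: mean of the k largest margin
    violations, clipped at zero. The loss is zero exactly when the
    true class outscores all but at most k-1 classes by margin 1."""
    # Margin violations a_j = 1 + s_j - s_y for j != y; a_y = 0.
    a = 1.0 + scores - scores[y]
    a[y] = 0.0
    # Average of the k largest entries, then clip at zero.
    topk = np.sort(a)[-k:]
    return max(0.0, float(topk.mean()))
```

For instance, `topk_hinge_loss(np.array([5.0, 0.0, 0.0, 0.0]), y=0, k=2)` evaluates to `0.0`, since the true class wins by a margin larger than 1; with all-zero scores the loss is `1.0`, reflecting a full margin violation for every competing class.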