When can unlabeled data improve the learning rate?

Christina Göpfert, Shai Ben-David, Olivier Bousquet, Sylvain Gelly, Ilya Tolstikhin, Ruth Urner
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1500-1518, 2019.

Abstract

In semi-supervised classification, one is given access both to labeled and unlabeled data. As unlabeled data is typically cheaper to acquire than labeled data, this setup becomes advantageous as soon as one can exploit the unlabeled data in order to produce a better classifier than with labeled data alone. However, the conditions under which such an improvement is possible are not fully understood yet. Our analysis focuses on improvements in the \emph{minimax} learning rate in terms of the number of labeled examples (with the number of unlabeled examples being allowed to depend on the number of labeled ones). We argue that for such improvements to be realistic and indisputable, certain specific conditions should be satisfied and previous analyses have failed to meet those conditions. We then demonstrate examples where these conditions can be met, in particular showing rate changes from $1/\sqrt{\ell}$ to $e^{-c\ell}$ and from $1/\sqrt{\ell}$ to $1/\ell$. These results improve our understanding of what is and isn’t possible in semi-supervised learning.
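
To make the object of study concrete, the display below sketches the two minimax rates being compared. The notation is assumed here for illustration and is not taken verbatim from the paper: $\mathcal{H}$ is the hypothesis class, $\mathcal{P}$ the family of data distributions, $P_X$ the marginal of $P$ over the instance space, $\mathrm{err}_P$ the risk under $P$, and $u(\ell)$ the unlabeled budget, which may depend on the number of labeled examples $\ell$.

% Sketch under assumed notation: worst-case expected excess risk of the best
% supervised learner A given \ell labeled examples, versus its semi-supervised
% counterpart that additionally receives u(\ell) unlabeled draws from P_X.
\[
  \mathrm{Rate}_{\mathrm{SL}}(\ell)
    \;=\; \inf_{A}\, \sup_{P \in \mathcal{P}}\;
      \mathbb{E}_{S \sim P^{\ell}}
      \Bigl[ \mathrm{err}_P\bigl(A(S)\bigr) - \inf_{h \in \mathcal{H}} \mathrm{err}_P(h) \Bigr],
\]
\[
  \mathrm{Rate}_{\mathrm{SSL}}(\ell)
    \;=\; \inf_{A}\, \sup_{P \in \mathcal{P}}\;
      \mathbb{E}_{S \sim P^{\ell},\; U \sim P_X^{u(\ell)}}
      \Bigl[ \mathrm{err}_P\bigl(A(S, U)\bigr) - \inf_{h \in \mathcal{H}} \mathrm{err}_P(h) \Bigr].
\]

Under this reading, the improvements claimed in the abstract are gaps between the two quantities: for suitable classes, $\mathrm{Rate}_{\mathrm{SL}}(\ell)$ behaves like $1/\sqrt{\ell}$ while $\mathrm{Rate}_{\mathrm{SSL}}(\ell)$ decays like $e^{-c\ell}$ or $1/\ell$.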

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-gopfert19a,
  title     = {When can unlabeled data improve the learning rate?},
  author    = {G{\"o}pfert, Christina and Ben-David, Shai and Bousquet, Olivier and Gelly, Sylvain and Tolstikhin, Ilya and Urner, Ruth},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages     = {1500--1518},
  year      = {2019},
  editor    = {Beygelzimer, Alina and Hsu, Daniel},
  volume    = {99},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v99/gopfert19a/gopfert19a.pdf},
  url       = {https://proceedings.mlr.press/v99/gopfert19a.html}
}
Endnote
%0 Conference Paper
%T When can unlabeled data improve the learning rate?
%A Christina Göpfert
%A Shai Ben-David
%A Olivier Bousquet
%A Sylvain Gelly
%A Ilya Tolstikhin
%A Ruth Urner
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-gopfert19a
%I PMLR
%P 1500--1518
%U https://proceedings.mlr.press/v99/gopfert19a.html
%V 99
APA
Göpfert, C., Ben-David, S., Bousquet, O., Gelly, S., Tolstikhin, I. & Urner, R. (2019). When can unlabeled data improve the learning rate? Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research 99:1500-1518. Available from https://proceedings.mlr.press/v99/gopfert19a.html.