PhoBERT: Pre-trained language models for Vietnamese (https://arxiv.org/abs/2003.00744)
Pre-trained models are available at: https://github.com/VinAIResearch/PhoBERT
Pre-trained PhoBERT models are the state-of-the-art language models for Vietnamese (Pho, i.e. “Phở”, is a popular food in Vietnam):
- Two versions of PhoBERT, “base” and “large”, are the first public large-scale monolingual language models pre-trained for Vietnamese. The PhoBERT pre-training approach is based on RoBERTa, which optimizes the BERT pre-training procedure for more robust performance.
- PhoBERT outperforms previous monolingual and multilingual approaches, obtaining new state-of-the-art performance on three downstream Vietnamese NLP tasks: part-of-speech tagging, named-entity recognition, and natural language inference.
- We release our PhoBERT models in popular open-source libraries, hoping that PhoBERT can serve as a strong baseline for future Vietnamese NLP research and applications.
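As a minimal sketch of using the released models, the snippet below loads PhoBERT through the Hugging Face `transformers` library and extracts contextual features for one sentence. It assumes the `vinai/phobert-base` checkpoint name on the Hugging Face Hub and an already word-segmented input sentence (PhoBERT is trained on word-segmented Vietnamese text, e.g. produced by a segmenter such as VnCoreNLP's RDRSegmenter).

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint name on the Hugging Face Hub.
phobert = AutoModel.from_pretrained("vinai/phobert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")

# PhoBERT expects word-segmented input; here underscores already join the
# syllables of multi-syllable words ("Chúng_tôi" = "we").
sentence = "Chúng_tôi là những nghiên_cứu_viên ."

input_ids = torch.tensor([tokenizer.encode(sentence)])
with torch.no_grad():
    features = phobert(input_ids)  # contextual embeddings for each token
```

`features.last_hidden_state` then holds one contextual vector per subword token, which can be fed into task-specific layers for tagging or classification.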