XLM-R is a transformer model in the BERT family that uses self-supervised training to achieve state-of-the-art performance in cross-lingual understanding. XLM improves on previous multilingual approaches by training on larger datasets covering more languages.
- Project: XLM
- Author: Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov
- Initial Release: 2019
- Type: NLP
- Contains: training objectives — Causal Language Modeling (CLM), Masked Language Modeling (MLM), Translation Language Modeling (TLM) — plus fine-tuning on the GLUE and XNLI benchmarks
- Language: Python, Jupyter Notebook, Shell
- GitHub: /XLM with 2.5k stars and 11 contributors
- Twitter: None
- Applications: understanding content posted in many languages on social media platforms, with better performance than monolingual models
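The MLM objective listed above can be sketched in a few lines. This is a minimal illustration of BERT-style token masking (select ~15% of positions; replace with a mask token 80% of the time, a random token 10%, or keep the original 10%), not code from the XLM repository; the `mask_tokens` helper and its parameters are hypothetical names chosen for this example.

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", mask_prob=0.15, seed=0):
    """BERT-style masking sketch (hypothetical helper, not XLM repo code).

    Returns (masked_tokens, labels): labels holds the original token at
    masked positions (what the model must predict) and None elsewhere.
    """
    rng = random.Random(seed)
    vocab = sorted(set(tokens))  # stand-in for a real vocabulary
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)          # this position is scored
            r = rng.random()
            if r < 0.8:
                masked.append(mask_token)   # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(rng.choice(vocab))  # 10%: random token
            else:
                masked.append(tok)      # 10%: keep the original token
        else:
            labels.append(None)         # this position is not scored
            masked.append(tok)
    return masked, labels
```

TLM, XLM's cross-lingual extension, applies the same masking to a concatenated pair of parallel sentences in two languages, so the model can attend across languages to recover masked tokens.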