Why does BERT work so well, and what underlying principle explains it? At the AI ProCon 2019 conference, Zhang Junlin, head of the machine learning team at Sina Weibo AI Lab, gave a talk titled "What did BERT and Transformer learn?"
2019's latest Transformer models: XLNet, ERNIE 2.0, and RoBERTa
Large pre-trained language models are undoubtedly the main trend in recent natural language processing (NLP) research.
New Google machine translation paper: an evolved Transformer architecture that is better and more efficient, lifting machine translation to new levels
The latest research from Google Brain proposes using neural architecture search to find a better Transformer and achieve stronger performance. The search produced a new architecture called the Evolved Transformer, which outperformed the original Transformer on four well-established language tasks (WMT 2014 English-German, English-French, and English-Czech translation, and LM1B language modeling).