Why does BERT work so well, and where exactly does its strength come from? At the AI ProCon 2019 conference, Zhang Junlin, head of the AI Lab of Sina Weibo's machine learning team, gave a talk titled "What Have BERT and Transformer Actually Learned?"
Google's stronger NLP model XLNet is open-sourced: it sweeps past BERT on 20 tasks!
The new XLNet model from CMU and Google surpasses BERT on 20 tasks and achieves state-of-the-art results on 18 of them. Even more exciting, the XLNet training code and large pre-trained models are now publicly available.
Microsoft proposes MASS, a universal pre-training model that surpasses BERT and GPT on natural language generation tasks!
Researchers at Microsoft Research Asia presented MASS, a new universal pre-training method, at ICML 2019; on sequence-to-sequence natural language generation tasks it outperforms BERT and GPT. In the WMT19 machine translation competition, MASS helped Microsoft's submissions take first place on the Chinese-English and English-Lithuanian language pairs.
How interpretable are the BERT features of Google's NLP deep learning model?
In early October 2018, Google AI proposed a new contextual word representation: BERT features. This article introduces BERT features and briefly analyzes their interpretability.
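For readers who want to see what these contextual representations look like in practice, here is a minimal sketch (not from the article itself) that extracts per-token BERT features; it assumes the Hugging Face transformers library and the bert-base-uncased checkpoint.

```python
# Minimal sketch: extract contextual BERT features for each token in a sentence.
# Assumes the Hugging Face "transformers" library and the "bert-base-uncased" checkpoint.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

sentence = "The bank raised interest rates."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state holds one contextual vector per (sub)word token,
# shape (batch_size, sequence_length, hidden_size); hidden_size is 768 for bert-base.
token_embeddings = outputs.last_hidden_state
print(token_embeddings.shape)
```

Unlike static word vectors, the vector for "bank" here depends on the whole sentence, which is precisely the property whose interpretability the article examines.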
[Practice] Multi-label text classification with BERT (with code)
We will use Kaggle's spam classification challenge to measure BERT's performance on multi-label text classification, as sketched below.
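The article walks through its own notebook; the following is only a minimal sketch of the general approach, assuming the Hugging Face transformers library. The number of labels, the example text, and the label vector are hypothetical placeholders, not the challenge's actual data.

```python
# Minimal sketch: fine-tuning-style setup for multi-label text classification with BERT.
# Assumes the Hugging Face "transformers" library; NUM_LABELS, texts, and labels are placeholders.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

NUM_LABELS = 6  # hypothetical number of labels in the challenge

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=NUM_LABELS,
    problem_type="multi_label_classification",  # uses a sigmoid + BCE loss per label
)

texts = ["free prize, click now!!!"]                # placeholder example text
labels = torch.tensor([[1., 0., 1., 0., 0., 0.]])   # one 0/1 vector per text

inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
outputs = model(**inputs, labels=labels)

# Each label gets an independent probability; a label is predicted when it exceeds 0.5.
probs = torch.sigmoid(outputs.logits)
predictions = (probs > 0.5).int()
print(outputs.loss, predictions)
```

The key difference from ordinary single-label classification is that each text can carry several labels at once, so the model scores every label independently instead of choosing one with a softmax.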