PaddleNLP Transformer API
With the development of deep learning, a large number of high-quality Transformer-based pretrained models have emerged in the NLP field, repeatedly setting new SOTA (State of the Art) results on a wide range of NLP tasks. PaddleNLP provides pretrained models with classic architectures such as BERT, ERNIE, ALBERT, RoBERTa, and XLNet, letting developers apply all kinds of Transformer pretrained models to their downstream tasks quickly and conveniently.
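As a quick taste, a model and its tokenizer can be loaded by weight name alone. A minimal sketch, assuming PaddleNLP 2.x with the Auto classes available (the weight name ernie-1.0 is one of the built-in weights listed below):

from paddlenlp.transformers import AutoModel, AutoTokenizer

# A single weight name selects both the network architecture and the
# pretrained parameters; no model-specific class is needed.
model = AutoModel.from_pretrained("ernie-1.0")
tokenizer = AutoTokenizer.from_pretrained("ernie-1.0")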
Summary of Transformer Pretrained Models
The table below summarizes the pretrained models currently supported by PaddleNLP and their pretrained weights. We currently provide 32 network architectures and 136 sets of pretrained weights, 59 of which are pretrained weights for Chinese language models.
Model | Pretrained Weight | Language | Details of the model
---|---|---|---
ALBERT | | English | 12 repeating layers, 128 embedding, 768-hidden, 12-heads, 11M parameters. ALBERT base model
ALBERT | | English | 24 repeating layers, 128 embedding, 1024-hidden, 16-heads, 17M parameters. ALBERT large model
ALBERT | | English | 24 repeating layers, 128 embedding, 2048-hidden, 16-heads, 58M parameters. ALBERT xlarge model
ALBERT | | English | 12 repeating layers, 128 embedding, 4096-hidden, 64-heads, 223M parameters. ALBERT xxlarge model
ALBERT | | English | 12 repeating layers, 128 embedding, 768-hidden, 12-heads, 11M parameters. ALBERT base model (version2)
ALBERT | | English | 24 repeating layers, 128 embedding, 1024-hidden, 16-heads, 17M parameters. ALBERT large model (version2)
ALBERT | | English | 24 repeating layers, 128 embedding, 2048-hidden, 16-heads, 58M parameters. ALBERT xlarge model (version2)
ALBERT | | English | 12 repeating layers, 128 embedding, 4096-hidden, 64-heads, 223M parameters. ALBERT xxlarge model (version2)
ALBERT | | Chinese | 4 repeating layers, 128 embedding, 312-hidden, 12-heads, 4M parameters. ALBERT tiny model (Chinese)
ALBERT | | Chinese | 6 repeating layers, 128 embedding, 384-hidden, 12-heads, _M parameters. ALBERT small model (Chinese)
ALBERT | | Chinese | 12 repeating layers, 128 embedding, 768-hidden, 12-heads, 12M parameters. ALBERT base model (Chinese)
ALBERT | | Chinese | 24 repeating layers, 128 embedding, 1024-hidden, 16-heads, 18M parameters. ALBERT large model (Chinese)
ALBERT | | Chinese | 24 repeating layers, 128 embedding, 2048-hidden, 16-heads, 60M parameters. ALBERT xlarge model (Chinese)
ALBERT | | Chinese | 12 repeating layers, 128 embedding, 4096-hidden, 16-heads, 235M parameters. ALBERT xxlarge model (Chinese)
BART | | English | 12-layer, 768-hidden, 12-heads, 217M parameters. BART base model (English)
BART | | English | 24-layer, 768-hidden, 16-heads, 509M parameters. BART large model (English).
BERT | | English | 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on lower-cased English text.
BERT | | English | 24-layer, 1024-hidden, 16-heads, 336M parameters. Trained on lower-cased English text.
BERT | | English | 12-layer, 768-hidden, 12-heads, 109M parameters. Trained on cased English text.
BERT | | English | 24-layer, 1024-hidden, 16-heads, 335M parameters. Trained on cased English text.
BERT | | Multilingual | 12-layer, 768-hidden, 12-heads, 168M parameters. Trained on lower-cased text in the top 102 languages with the largest Wikipedias.
BERT | | Multilingual | 12-layer, 768-hidden, 12-heads, 179M parameters. Trained on cased text in the top 104 languages with the largest Wikipedias.
BERT | | Chinese | 12-layer, 768-hidden, 12-heads, 108M parameters. Trained on cased Chinese Simplified and Traditional text.
BERT | | Chinese | 12-layer, 768-hidden, 12-heads, 108M parameters. Trained on cased Chinese Simplified and Traditional text using Whole-Word-Masking.
BERT | | Chinese | 12-layer, 768-hidden, 12-heads, 108M parameters. Trained on cased Chinese Simplified and Traditional text using Whole-Word-Masking with extended data.
BERT | | Chinese | 12-layer, 768-hidden, 12-heads, 102M parameters. Finetuned on NER task.
BERT | | Chinese | 12-layer, 768-hidden, 12-heads, 102M parameters. Finetuned on POS task.
BERT | | Chinese | 12-layer, 768-hidden, 12-heads, 102M parameters. Finetuned on WS task.
BERT | | Multilingual | 12-layer, 768-hidden, 12-heads, 167M parameters. Finetuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian.
BERT | | English | 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on pre-k to graduate math language (English) using a masked language modeling (MLM) objective.
BERT | | Chinese | 12-layer, 768-hidden, 12-heads, 102M parameters. Trained with novel MLM as correction pre-training task.
BERT | | Chinese | 24-layer, 1024-hidden, 16-heads, 326M parameters. Trained with novel MLM as correction pre-training task.
BERT | | Chinese | 12-layer, 768-hidden, 12-heads, 108M parameters. Trained on 22 million pairs of similar sentences crawled from Baidu Know.
BERT | | Chinese | 12-layer, 768-hidden, 12-heads, 102M parameters. Trained on 300G Chinese Corpus Datasets.
BERT | | Chinese | 12-layer, 768-hidden, 12-heads, 102M parameters. Trained on 20G Financial Corpus.
BERT | | Japanese | 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on Japanese text.
BERT | | Japanese | 12-layer, 768-hidden, 12-heads, 109M parameters. Trained on Japanese text using Whole-Word-Masking.
BERT | | Japanese | 12-layer, 768-hidden, 12-heads, 89M parameters. Trained on Japanese char text.
BERT | | Japanese | 12-layer, 768-hidden, 12-heads, 89M parameters. Trained on Japanese char text using Whole-Word-Masking.
BigBird | | English | 12-layer, 768-hidden, 12-heads, 127M parameters. Trained on lower-cased English text.
Blenderbot | | English | 26-layer, 32-heads, 3B parameters. The Blenderbot base model.
Blenderbot | | English | 14-layer, 384-hidden, 32-heads, 400M parameters. The Blenderbot distil model.
Blenderbot | | English | 14-layer, 32-heads, 1478M parameters. The Blenderbot Distil 1B model.
Blenderbot-Small | | English | 16-layer, 16-heads, 90M parameters. The Blenderbot small model.
ConvBERT | | English | 12-layer, 768-hidden, 12-heads, 106M parameters. The ConvBERT base model.
ConvBERT | | English | 12-layer, 384-hidden, 8-heads, 17M parameters. The ConvBERT medium small model.
ConvBERT | | English | 12-layer, 128-hidden, 4-heads, 13M parameters. The ConvBERT small model.
CTRL | | English | 48-layer, 1280-hidden, 16-heads, 1701M parameters. The CTRL base model.
CTRL | | English | 2-layer, 16-hidden, 2-heads, 5M parameters. The Tiny CTRL model.
DistilBERT | | English | 6-layer, 768-hidden, 12-heads, 66M parameters. The DistilBERT model distilled from the BERT model.
DistilBERT | | English | 6-layer, 768-hidden, 12-heads, 66M parameters. The DistilBERT model distilled from the BERT model.
DistilBERT | | English | 6-layer, 768-hidden, 12-heads, 200M parameters. The DistilBERT model distilled from the BERT model.
DistilBERT | | English | 2-layer, 2-hidden, 2-heads, 50K parameters. The DistilBERT model.
ELECTRA | | English | 12-layer, 768-hidden, 4-heads, 14M parameters. Trained on lower-cased English text.
ELECTRA | | English | 12-layer, 768-hidden, 12-heads, 109M parameters. Trained on lower-cased English text.
ELECTRA | | English | 24-layer, 1024-hidden, 16-heads, 334M parameters. Trained on lower-cased English text.
ELECTRA | | Chinese | 12-layer, 768-hidden, 4-heads, 12M parameters. Trained on Chinese text.
ELECTRA | | Chinese | 12-layer, 768-hidden, 12-heads, 102M parameters. Trained on Chinese text.
ELECTRA | | Chinese | Discriminator, 12-layer, 768-hidden, 12-heads, 102M parameters. Trained on 180G Chinese text.
ELECTRA | | Chinese | Discriminator, 24-layer, 256-hidden, 4-heads, 24M parameters. Trained on 180G Chinese text.
ELECTRA | | Chinese | Generator, 12-layer, 64-hidden, 1-heads, 3M parameters. Trained on Chinese legal corpus.
ERNIE | | Chinese | 12-layer, 768-hidden, 12-heads, 108M parameters. Trained on Chinese text.
ERNIE | | Chinese | 3-layer, 1024-hidden, 16-heads, _M parameters. Trained on Chinese text.
ERNIE | | English | 12-layer, 768-hidden, 12-heads, 103M parameters. Trained on lower-cased English text.
ERNIE | | English | 12-layer, 768-hidden, 12-heads, 110M parameters. Finetuned on SQuAD text.
ERNIE | | English | 24-layer, 1024-hidden, 16-heads, 336M parameters. Trained on lower-cased English text.
ERNIE-DOC | | Chinese | 12-layer, 768-hidden, 12-heads, 108M parameters. Trained on Chinese text.
ERNIE-DOC | | English | 12-layer, 768-hidden, 12-heads, 103M parameters. Trained on lower-cased English text.
ERNIE-GEN | | English | 12-layer, 768-hidden, 12-heads, 108M parameters. Trained on lower-cased English text.
ERNIE-GEN | | English | 24-layer, 1024-hidden, 16-heads, 336M parameters. Trained on lower-cased English text.
ERNIE-GEN | | English | 24-layer, 1024-hidden, 16-heads, 336M parameters. Trained on lower-cased English text with extended data (430 GB).
ERNIE-GRAM | | Chinese | 12-layer, 768-hidden, 12-heads, 108M parameters. Trained on Chinese text.
GPT | | Chinese | 32-layer, 2560-hidden, 32-heads, 2.6B parameters. Trained on Chinese text.
GPT | | Chinese | 12-layer, 768-hidden, 12-heads, 109M parameters. The model distilled from the GPT model.
GPT | | English | 12-layer, 768-hidden, 12-heads, 117M parameters. Trained on English text.
GPT | | English | 24-layer, 1024-hidden, 16-heads, 345M parameters. Trained on English text.
GPT | | English | 36-layer, 1280-hidden, 20-heads, 774M parameters. Trained on English text.
GPT | | English | 48-layer, 1600-hidden, 25-heads, 1558M parameters. Trained on English text.
GPT | | English | 6-layer, 768-hidden, 12-heads, 81M parameters. Trained on English text.
GPT | | English | 12-layer, 768-hidden, 12-heads, 124M parameters. Trained on English text.
GPT | | English | 24-layer, 1024-hidden, 16-heads, 354M parameters. Trained on English text.
GPT | | English | 36-layer, 1280-hidden, 20-heads, 774M parameters. Trained on English text.
GPT | | Chinese | 12-layer, 768-hidden, 12-heads, 103M parameters. Trained on Chinese poetry corpus.
LayoutLM | | English | 12-layer, 768-hidden, 12-heads, 339M parameters. LayoutLM base uncased model.
LayoutLM | | English | 24-layer, 1024-hidden, 16-heads, 51M parameters. LayoutLM large uncased model.
LayoutLMv2 | | English | 12-layer, 768-hidden, 12-heads, 200M parameters. LayoutLMv2 base uncased model.
LayoutLMv2 | | English | 24-layer, 1024-hidden, 16-heads, _M parameters. LayoutLMv2 large uncased model.
LayoutXLM | | English | 12-layer, 768-hidden, 12-heads, 369M parameters. LayoutXLM base uncased model.
MBart | | English | 12-layer, 1024-hidden, 12-heads, 1123M parameters.
MBart | | English | 12-layer, 768-hidden, 16-heads, 1123M parameters.
MBart | | English | 12-layer, 1024-hidden, 16-heads, 1123M parameters.
MBart | | English | 12-layer, 1024-hidden, 16-heads, 1123M parameters.
MBart | | English | 12-layer, 1024-hidden, 16-heads, 1123M parameters.
MobileBERT | | English | 24-layer, 512-hidden, 4-heads, 24M parameters. MobileBERT uncased model.
MPNet | | English | 12-layer, 768-hidden, 12-heads, 109M parameters. MPNet base model.
NeZha | | Chinese | 12-layer, 768-hidden, 12-heads, 108M parameters. Trained on Chinese text.
NeZha | | Chinese | 24-layer, 1024-hidden, 16-heads, 336M parameters. Trained on Chinese text.
NeZha | | Chinese | 12-layer, 768-hidden, 16-heads, 108M parameters. Trained on Chinese text.
NeZha | | Chinese | 24-layer, 1024-hidden, 16-heads, 336M parameters. Trained on Chinese text.
Reformer | | English | 12-layer, 1024-hidden, 8-heads, 148M parameters.
Reformer | | English | 6-layer, 256-hidden, 2-heads, 3M parameters.
RoBERTa | | Chinese | 12-layer, 768-hidden, 12-heads, 102M parameters. Trained on Chinese text using Whole-Word-Masking with extended data.
RoBERTa | | Chinese | 24-layer, 1024-hidden, 16-heads, 325M parameters. Trained on Chinese text using Whole-Word-Masking with extended data.
RoBERTa | | Chinese | 3-layer, 768-hidden, 12-heads, 38M parameters.
RoBERTa | | Chinese | 3-layer, 1024-hidden, 16-heads, 61M parameters.
RoBERTa | | English | 12-layer, 768-hidden, 12-heads, 124M parameters. Trained on English text.
RoBERTa | | English | 12-layer, 768-hidden, 12-heads, 163M parameters. Trained on English text.
RoBERTa | | English | 24-layer, 1024-hidden, 16-heads, 408M parameters. Trained on English text.
RoBERTa | | English | 2-layer, 2-hidden, 2-heads, 0.25M parameters. Trained on English text.
RoBERTa | | Chinese | 12-layer, 768-hidden, 12-heads, 101M parameters. Trained on Chinese text.
RoBERTa | | Chinese | 12-layer, 768-hidden, 12-heads, 102M parameters. Trained on Chinese text.
RoBERTa | | Chinese | 12-layer, 768-hidden, 12-heads, 101M parameters. Trained on Chinese text.
RoFormer | | Chinese | 6-layer, 384-hidden, 6-heads, 30M parameters. RoFormer Small Chinese model.
RoFormer | | Chinese | 12-layer, 768-hidden, 12-heads, 124M parameters. RoFormer Base Chinese model.
RoFormer | | Chinese | 6-layer, 384-hidden, 6-heads, 15M parameters. RoFormer Chinese Char Small model.
RoFormer | | Chinese | 12-layer, 768-hidden, 12-heads, 95M parameters. RoFormer Chinese Char Base model.
RoFormer | | Chinese | 6-layer, 384-hidden, 6-heads, 15M parameters. RoFormer Chinese Char Ft Small model.
RoFormer | | Chinese | 12-layer, 768-hidden, 12-heads, 95M parameters. RoFormer Chinese Char Ft Base model.
RoFormer | | Chinese | 6-layer, 384-hidden, 6-heads, 15M parameters. RoFormer Chinese Sim Char Small model.
RoFormer | | Chinese | 12-layer, 768-hidden, 12-heads, 95M parameters. RoFormer Chinese Sim Char Base model.
RoFormer | | English | 12-layer, 256-hidden, 4-heads, 13M parameters. RoFormer English Small Discriminator.
RoFormer | | English | 12-layer, 64-hidden, 1-heads, 5M parameters. RoFormer English Small Generator.
SKEP | | Chinese | 24-layer, 1024-hidden, 16-heads, 336M parameters. Trained using the Ernie model.
SKEP | | English | 24-layer, 1024-hidden, 16-heads, 336M parameters. Trained using the Ernie model.
SKEP | | English | 24-layer, 1024-hidden, 16-heads, 355M parameters. Trained using the RoBERTa model.
SqueezeBERT | | English | 12-layer, 768-hidden, 12-heads, 51M parameters. SqueezeBERT uncased model.
SqueezeBERT | | English | 12-layer, 768-hidden, 12-heads, 51M parameters. SqueezeBERT MNLI model.
SqueezeBERT | | English | 12-layer, 768-hidden, 12-heads, 51M parameters. SqueezeBERT MNLI headless model.
T5 | | English | 6-layer, 512-hidden, 8-heads, 93M parameters. T5 small model.
T5 | | English | 12-layer, 768-hidden, 12-heads, 272M parameters. T5 base model.
T5 | | English | 24-layer, 1024-hidden, 16-heads, 803M parameters. T5 large model.
TinyBERT | | English | 4-layer, 312-hidden, 12-heads, 14.5M parameters. The TinyBERT model distilled from the BERT model.
TinyBERT | | English | 6-layer, 768-hidden, 12-heads, 67M parameters. The TinyBERT model distilled from the BERT model.
TinyBERT | | English | 4-layer, 312-hidden, 12-heads, 14.5M parameters. The TinyBERT model distilled from the BERT model.
TinyBERT | | English | 6-layer, 768-hidden, 12-heads, 67M parameters. The TinyBERT model distilled from the BERT model.
TinyBERT | | Chinese | 4-layer, 312-hidden, 12-heads, 14.5M parameters. The TinyBERT model distilled from the BERT model.
TinyBERT | | Chinese | 6-layer, 768-hidden, 12-heads, 67M parameters. The TinyBERT model distilled from the BERT model.
UnifiedTransformer | | Chinese | 12-layer, 768-hidden, 12-heads, 108M parameters. Trained on Chinese text.
UnifiedTransformer | | Chinese | 12-layer, 768-hidden, 12-heads, 108M parameters. Trained on Chinese text (LUGE.ai).
UnifiedTransformer | | Chinese | 6-layer, 768-hidden, 12-heads, 66M parameters. Trained on Chinese text.
UNIMO | | Chinese | 12-layer, 768-hidden, 12-heads, 99M parameters. UNIMO-text-1.0 model.
UNIMO | | Chinese | 12-layer, 768-hidden, 12-heads, 99M parameters. Finetuned on lcsts_new dataset.
UNIMO | | Chinese | 24-layer, 768-hidden, 16-heads, 316M parameters. UNIMO-text-1.0 large model.
XLNet | | English | 12-layer, 768-hidden, 12-heads, 110M parameters. XLNet English model
XLNet | | English | 24-layer, 1024-hidden, 16-heads, 340M parameters. XLNet Large English model
XLNet | | Chinese | 12-layer, 768-hidden, 12-heads, 117M parameters. XLNet Chinese model
XLNet | | Chinese | 24-layer, 768-hidden, 12-heads, 209M parameters. XLNet Medium Chinese model
XLNet | | Chinese | 24-layer, 1024-hidden, 16-heads, _M parameters. XLNet Large Chinese model
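Any weight in the table can also be loaded through its concrete model class rather than the Auto classes. A minimal feature-extraction sketch, assuming PaddleNLP 2.x, where BertModel returns a (sequence_output, pooled_output) pair; the weight name bert-wwm-chinese is the same one used in the fine-tuning example later in this document:

import paddle
from paddlenlp.transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-wwm-chinese")
model = BertModel.from_pretrained("bert-wwm-chinese")

# Tokenize one sentence and add a batch dimension.
inputs = tokenizer("欢迎使用PaddleNLP!")
inputs = {k: paddle.to_tensor([v]) for k, v in inputs.items()}

# sequence_output holds per-token features; pooled_output is a sentence-level feature.
sequence_output, pooled_output = model(**inputs)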
Summary of Tasks Supported by the Transformer Pretrained Models
Model | Sequence Classification | Token Classification | Question Answering | Text Generation | Multiple Choice
---|---|---|---|---|---
ALBERT | ✅ | ✅ | ✅ | ❌ | ✅
BART | ✅ | ✅ | ✅ | ✅ | ❌
BERT | ✅ | ✅ | ✅ | ❌ | ✅
BigBird | ✅ | ✅ | ✅ | ❌ | ✅
Blenderbot | ❌ | ❌ | ❌ | ✅ | ❌
Blenderbot-Small | ❌ | ❌ | ❌ | ✅ | ❌
ConvBERT | ✅ | ✅ | ✅ | ✅ | ✅
CTRL | ✅ | ❌ | ❌ | ❌ | ❌
DistilBERT | ✅ | ✅ | ✅ | ❌ | ❌
ELECTRA | ✅ | ✅ | ❌ | ❌ | ✅
ERNIE | ✅ | ✅ | ✅ | ❌ | ❌
ERNIE-DOC | ✅ | ✅ | ✅ | ❌ | ❌
ERNIE-GEN | ❌ | ❌ | ❌ | ✅ | ❌
ERNIE-GRAM | ✅ | ✅ | ✅ | ❌ | ❌
GPT | ✅ | ✅ | ❌ | ✅ | ❌
LayoutLM | ✅ | ✅ | ❌ | ❌ | ❌
LayoutLMv2 | ❌ | ✅ | ❌ | ❌ | ❌
LayoutXLM | ❌ | ✅ | ❌ | ❌ | ❌
MBart | ✅ | ❌ | ✅ | ❌ | ✅
MobileBERT | ✅ | ❌ | ✅ | ❌ | ❌
MPNet | ✅ | ✅ | ✅ | ❌ | ✅
NeZha | ✅ | ✅ | ✅ | ❌ | ✅
Reformer | ✅ | ❌ | ✅ | ❌ | ❌
RoBERTa | ✅ | ✅ | ✅ | ❌ | ❌
RoFormer | ✅ | ✅ | ✅ | ❌ | ❌
SKEP | ✅ | ✅ | ❌ | ❌ | ❌
SqueezeBERT | ✅ | ✅ | ✅ | ❌ | ❌
T5 | ❌ | ❌ | ❌ | ✅ | ❌
TinyBERT | ✅ | ❌ | ❌ | ❌ | ❌
UnifiedTransformer | ❌ | ❌ | ❌ | ✅ | ❌
XLNet | ✅ | ✅ | ❌ | ❌ | ❌
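Each task column in the table corresponds to a task-specific model class. A minimal sketch, assuming PaddleNLP 2.2+, where the task-specific Auto classes are available; the weight name ernie-1.0 and the num_classes values are illustrative:

from paddlenlp.transformers import (
    AutoModelForQuestionAnswering,
    AutoModelForSequenceClassification,
    AutoModelForTokenClassification,
)

# ERNIE supports all three of these tasks per the table above.
seq_cls = AutoModelForSequenceClassification.from_pretrained("ernie-1.0", num_classes=2)
tok_cls = AutoModelForTokenClassification.from_pretrained("ernie-1.0", num_classes=7)
qa = AutoModelForQuestionAnswering.from_pretrained("ernie-1.0")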
How to Use Pretrained Models
While offering a rich collection of pretrained models, the PaddleNLP Transformer API also lowers the barrier to using them. With the Auto classes, pretrained models of different network architectures can be loaded without looking up the class each model corresponds to. Loading a model and fine-tuning it on a downstream task takes only a dozen or so lines of code.
from functools import partial
import numpy as np
import paddle
from paddlenlp.datasets import load_dataset
from paddlenlp.transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the built-in ChnSentiCorp sentiment classification dataset.
train_ds = load_dataset("chnsenticorp", splits=["train"])

# Load the pretrained model and its tokenizer by weight name.
model = AutoModelForSequenceClassification.from_pretrained("bert-wwm-chinese", num_classes=len(train_ds.label_list))
tokenizer = AutoTokenizer.from_pretrained("bert-wwm-chinese")

# Convert a raw-text example into padded int64 model inputs.
def convert_example(example, tokenizer):
    encoded_inputs = tokenizer(text=example["text"], max_seq_len=512, pad_to_max_seq_len=True)
    return tuple([np.array(x, dtype="int64") for x in [
        encoded_inputs["input_ids"], encoded_inputs["token_type_ids"], [example["label"]]]])

train_ds = train_ds.map(partial(convert_example, tokenizer=tokenizer))

# Shuffle the data and assemble batches.
batch_sampler = paddle.io.BatchSampler(dataset=train_ds, batch_size=8, shuffle=True)
train_data_loader = paddle.io.DataLoader(dataset=train_ds, batch_sampler=batch_sampler, return_list=True)

optimizer = paddle.optimizer.AdamW(learning_rate=0.001, parameters=model.parameters())
criterion = paddle.nn.loss.CrossEntropyLoss()

# Fine-tune on the downstream classification task.
for input_ids, token_type_ids, labels in train_data_loader():
    logits = model(input_ids, token_type_ids)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    optimizer.clear_grad()
The code above gives a brief example of using a pretrained model. For a more complete and detailed example, see: Fine-tune a pretrained model for Chinese text classification.
- Load the dataset: PaddleNLP has many built-in datasets, and the one you need can be imported with a single call.
- Load the pretrained model: PaddleNLP pretrained models are easily loaded via the from_pretrained() method. The Auto classes (AutoModel, AutoTokenizer, and the classes for the various downstream tasks) provide a convenient, easy-to-use interface that loads pretrained models of different network architectures without the model class being specified. The first argument is the Pretrained Weight name from the summary table, which loads the corresponding pretrained weights. The other arguments required by AutoModelForSequenceClassification's __init__, such as num_classes, are also passed in through from_pretrained(). The Tokenizer is loaded with the same from_pretrained() method.
- Through the Dataset's map() function, use the tokenizer to process the dataset from raw text into model inputs.
- Define a BatchSampler and a DataLoader to shuffle the data and assemble batches.
- Define the optimizer, the loss function, and the rest of the training setup, and the model fine-tuning task can begin; a brief inference sketch follows this list.
Reference
Some of the Chinese pretrained models are from: brightmart/albert_zh, ymcui/Chinese-BERT-wwm, huawei-noah/Pretrained-Language-Model/TinyBERT, ymcui/Chinese-XLNet, huggingface/xlnet_chinese_large, Knover/luge-dialogue, huawei-noah/Pretrained-Language-Model/NEZHA-PyTorch, ZhuiyiTechnology/simbert
Lan, Zhenzhong, et al. “Albert: A lite bert for self-supervised learning of language representations.” arXiv preprint arXiv:1909.11942 (2019).
Lewis, Mike, et al. “BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension.” arXiv preprint arXiv:1910.13461 (2019).
Devlin, Jacob, et al. “Bert: Pre-training of deep bidirectional transformers for language understanding.” arXiv preprint arXiv:1810.04805 (2018).
Zaheer, Manzil, et al. “Big bird: Transformers for longer sequences.” arXiv preprint arXiv:2007.14062 (2020).
Roller, Stephen, et al. “Blenderbot: Recipes for building an open-domain chatbot.” arXiv preprint arXiv:2004.13637 (2020).
Roller, Stephen, et al. “Blenderbot-Small: Recipes for building an open-domain chatbot.” arXiv preprint arXiv:2004.13637 (2020).
Jiang, Zihang, et al. “ConvBERT: Improving BERT with Span-based Dynamic Convolution.” arXiv preprint arXiv:2008.02496 (2020).
Keskar, Nitish Shirish, et al. “CTRL: A Conditional Transformer Language Model for Controllable Generation.” arXiv preprint arXiv:1909.05858 (2019).
Sanh, Victor, et al. “DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.” arXiv preprint arXiv:1910.01108 (2019).
Clark, Kevin, et al. “Electra: Pre-training text encoders as discriminators rather than generators.” arXiv preprint arXiv:2003.10555 (2020).
Sun, Yu, et al. “Ernie: Enhanced representation through knowledge integration.” arXiv preprint arXiv:1904.09223 (2019).
Xiao, Dongling, et al. “Ernie-gen: An enhanced multi-flow pre-training and fine-tuning framework for natural language generation.” arXiv preprint arXiv:2001.11314 (2020).
Xiao, Dongling, et al. “ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding.” arXiv preprint arXiv:2010.12148 (2020).
Radford, Alec, et al. “Language models are unsupervised multitask learners.” OpenAI blog 1.8 (2019): 9.
Xu, Yiheng, et al. “LayoutLM: Pre-training of Text and Layout for Document Image Understanding.” arXiv preprint arXiv:1912.13318 (2019).
Xu, Yang, et al. “LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding” arXiv preprint arXiv:2012.14740 (2020).
Xu, Yiheng, et al. “LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding” arXiv preprint arXiv:2104.08836 (2021).
Liu, Yinhan, et al. “MBart: Multilingual Denoising Pre-training for Neural Machine Translation” arXiv preprint arXiv:2001.08210 (2020).
Sun, Zhiqing, et al. “MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices” arXiv preprint arXiv:2004.02984 (2020).
Song, Kaitao, et al. “MPNet: Masked and Permuted Pre-training for Language Understanding.” arXiv preprint arXiv:2004.09297 (2020).
Wei, Junqiu, et al. “NEZHA: Neural contextualized representation for chinese language understanding.” arXiv preprint arXiv:1909.00204 (2019).
Kitaev, Nikita, et al. “Reformer: The efficient Transformer.” arXiv preprint arXiv:2001.04451 (2020).
Liu, Yinhan, et al. “Roberta: A robustly optimized bert pretraining approach.” arXiv preprint arXiv:1907.11692 (2019).
Su, Jianlin, et al. “RoFormer: Enhanced Transformer with Rotary Position Embedding.” arXiv preprint arXiv:2104.09864 (2021).
Tian, Hao, et al. “SKEP: Sentiment knowledge enhanced pre-training for sentiment analysis.” arXiv preprint arXiv:2005.05635 (2020).
Iandola, Forrest N., et al. “SqueezeBERT: What can computer vision teach NLP about efficient neural networks?” arXiv preprint arXiv:2006.11316 (2020).
Raffel, Colin, et al. “T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer.” arXiv preprint arXiv:1910.10683 (2019).
Vaswani, Ashish, et al. “Attention is all you need.” arXiv preprint arXiv:1706.03762 (2017).
Jiao, Xiaoqi, et al. “Tinybert: Distilling bert for natural language understanding.” arXiv preprint arXiv:1909.10351 (2019).
Bao, Siqi, et al. “Plato-2: Towards building an open-domain chatbot via curriculum learning.” arXiv preprint arXiv:2006.16779 (2020).
Yang, Zhilin, et al. “Xlnet: Generalized autoregressive pretraining for language understanding.” arXiv preprint arXiv:1906.08237 (2019).
Cui, Yiming, et al. “Pre-training with whole word masking for chinese bert.” arXiv preprint arXiv:1906.08101 (2019).