Contextualized word embeddings computed from the internal states of a two-layer bidirectional LSTM language model (ELMo).


Peters ME, Neumann M, Iyyer M, Gardner M, Clark C, Lee K, Zettlemoyer L. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. 2018 Feb 15.
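To make the one-line architecture note above concrete, below is a minimal PyTorch sketch of ELMo-style contextual embeddings: two stacked bidirectional LSTM layers over a token embedding, with a softmax-weighted sum of the layer outputs (the "scalar mix" of the paper's ELMo_k = gamma * sum_j s_j h_{k,j}). The class name, dimensions, and token-embedding stand-in are illustrative assumptions; the model in the paper uses a character-CNN token encoder and separately trained forward and backward language models with residual connections, so this is only a sketch of the idea, not the authors' implementation.

```python
# Simplified ELMo-style encoder (assumption: stand-in for the paper's model,
# which uses a char-CNN token encoder and separate forward/backward LMs).
import torch
import torch.nn as nn

class ElmoLikeEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=512, hidden_dim=512, num_layers=2):
        super().__init__()
        # Token embeddings stand in for the paper's character-CNN layer.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Two stacked bidirectional LSTM layers, kept separate so each
        # layer's hidden states can be exposed for the weighted combination.
        self.layers = nn.ModuleList([
            nn.LSTM(embed_dim if i == 0 else 2 * hidden_dim,
                    hidden_dim, batch_first=True, bidirectional=True)
            for i in range(num_layers)
        ])
        # Scalar mix: softmax-normalized weights over (token layer + LSTM
        # layers) plus a global scale, as in ELMo_k = gamma * sum_j s_j h_{k,j}.
        self.scalar_weights = nn.Parameter(torch.zeros(num_layers + 1))
        self.gamma = nn.Parameter(torch.ones(1))
        self.proj = nn.Linear(embed_dim, 2 * hidden_dim)  # match layer widths

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        layer_outputs = [self.proj(x)]      # layer 0: projected token layer
        h = x
        for lstm in self.layers:
            h, _ = lstm(h)                  # (batch, seq_len, 2 * hidden_dim)
            layer_outputs.append(h)
        s = torch.softmax(self.scalar_weights, dim=0)
        stacked = torch.stack(layer_outputs, dim=0)   # (L+1, batch, seq, 2*hidden)
        mixed = (s.view(-1, 1, 1, 1) * stacked).sum(dim=0)
        return self.gamma * mixed           # per-token contextual embeddings

# Hypothetical usage on a toy batch:
encoder = ElmoLikeEncoder(vocab_size=10000)
tokens = torch.randint(0, 10000, (2, 7))    # 2 sentences of 7 tokens each
embeddings = encoder(tokens)                # shape (2, 7, 1024)
```

In the paper the mixing weights are learned per downstream task while the biLSTM language model stays frozen; the sketch simply exposes them as parameters of the encoder.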

Summary

Architecture Details

Tokenization

Embeddings for multi-segment tasks

Positional Encoding

Attention Mechanism

Training Details

Pre-training Data

Pre-training Method

Finetuning Data

Finetuning Method

Evaluation

Evaluation Datasets

Evaluation Results

Authors’ Conclusions

Tags: architecture