Topic: Retrieval-Augmented Pre-training and Fine-tuning for Knowledge-Intensive NLP Tasks (REALM, RAG, DPR, FiD; Jing)
Paper: REALM: Retrieval-Augmented Language Model Pre-Training
Paper: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
Topic: Denoising Sequence-to-Sequence Pre-training for Language Understanding and Generation (Jing)
Paper: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Paper: BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
Topic: Contrastive Learning of Sentence Embeddings (Kha-Dinh)
Paper: DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations
Topic: Parameter-Efficient Fine-tuning for NLP
Paper: Parameter-Efficient Transfer Learning for NLP (Liu)
Paper: Prefix-Tuning: Optimizing Continuous Prompts for Generation (Zheng)
Topic: Conditional Natural Language Generation with Conditional Training (Xianjun)
Paper: CTRL: A Conditional Transformer Language Model for Controllable Generation
Topic: Conditional Natural Language Generation with Guided Decoding
Paper: Plug and Play Language Models: A Simple Approach to Controlled Text Generation (Xuan)
Topic (optional): Conditional Natural Language Generation with Prompting
Paper: AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts
Prompt-Tuning
Topic: Transformer Applications: Object Tracking and Document Understanding
Paper: TransTrack: Multiple Object Tracking with Transformer (Chengyuan)
Paper: LayoutLM: Pre-training of Text and Layout for Document Image Understanding (Pranjali)
Topic: Knowledge Extraction from Pretrained Language Models
Topic: Make It Smaller: Knowledge Distillation
Paper: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter (Navya)
Paper: TinyBERT: Distilling BERT for Natural Language Understanding (Dan)
Topic: Pretrained Models for Long Documents
Paper: Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context (Jiachen)
Topic: Architecture Idea (Hong)