Language Modeling, Recurrent Neural Nets and Seq2Seq

January 20, 2023 - 2 minute read - Category: Intro - Tags: Deep learning


This post covers the third lecture in the course: “Language Modeling, Recurrent Neural Nets and Seq2Seq.”

This lecture covers the history of language modeling prior to transformers. Even though you should typically use a transformer-based language model, this lecture provides valuable context for understanding NLP and how the field has evolved. It also introduces sequence-to-sequence models and attention, key prerequisites for understanding transformers and various applications.

The lecture is broken up into three parts:

  • Language Modeling, covering early examples of language modeling
  • Word Embeddings, covering GloVe, Word2Vec, etc.
  • Seq2Seq, covering sequence-to-sequence models (LSTMs)
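To give a flavor of the "early examples of language modeling" the first part covers, here is a minimal count-based bigram language model sketched in Python. This is an illustrative toy (the corpus and function names are my own, not from the lecture): it estimates P(word | previous word) by counting bigrams.

```python
from collections import Counter, defaultdict

# Toy corpus for illustration only.
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each preceding word.
bigram_counts = defaultdict(Counter)
for prev, curr in zip(corpus, corpus[1:]):
    bigram_counts[prev][curr] += 1

def bigram_prob(prev, curr):
    """Maximum-likelihood estimate of P(curr | prev)."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][curr] / total if total else 0.0

# "the" is followed by "cat" twice and "mat" once,
# so P(cat | the) = 2/3.
print(bigram_prob("the", "cat"))
```

In practice such models are smoothed (e.g. add-one or Kneser-Ney) to handle unseen bigrams; the neural approaches covered later in the lecture sidestep sparse counts entirely.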

Lecture Videos

Language Modeling

Watch the video

Word Embeddings

Watch the video

Sequence to Sequence Models

Watch the video

Lecture notes: Language Modeling, Word Embeddings, Seq2Seq

References Cited in Lecture 3: Language Modeling, Recurrent Neural Nets and Seq2Seq

Academic Papers

Other Resources

Code Bases
