MOOC List is learner-supported. When you buy through links on our site, we may earn an affiliate commission.
We will explore the basics of how language models work and the specifics of how newer neural approaches are built. We will examine the key innovations that have enabled Transformer-based large language models to become dominant across a wide range of language tasks. Finally, we will consider the challenges of applying these large language models to real problems, including the ethical issues raised by their construction and use.
Through hands-on labs, you will learn about the building blocks of Transformers and apply them to generate new text. These Python exercises walk you through working with a smaller language model and show how it can be evaluated and applied to different problems. Regular practice quizzes reinforce the material and prepare you for the graded assessments.
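For a flavor of what such a building block looks like (this is an illustrative sketch under our own assumptions, not the course's actual lab code), the core operation inside a Transformer is scaled dot-product attention, which mixes value vectors according to how well each query matches each key:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Row-wise softmax (shifted by the max for numerical stability)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 3 tokens with dimension 4, random values for illustration only
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): each output row is a weighted mix of the value rows
```

Each row of the attention weights sums to one, so every output token is a convex combination of the value vectors.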
What you'll learn
You will learn the foundations, implementations, applications, and risks of GPT.
Syllabus
Language Modeling
Module 1
This module introduces the concept of language modeling, which is the foundation of models like GPT.
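At its simplest, a language model assigns a probability to the next word given the words before it. As a rough illustration of that idea (a toy sketch, not course material), a bigram model estimates these probabilities from counts of adjacent word pairs:

```python
from collections import Counter, defaultdict

# Toy corpus; the words and counts here are purely illustrative
corpus = "the cat sat on the mat the cat ate".split()

# Count how often each word follows each other word (bigram counts)
counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def next_word_probs(word):
    """Estimate P(next | word) from bigram counts."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probs("the"))  # {'cat': 0.666..., 'mat': 0.333...}
```

Neural language models like GPT replace these raw counts with learned functions of much longer contexts, but the underlying task of predicting the next token is the same.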
Transformers and GPT
Module 2
This module describes the technical background for neural language models and an overview of how they are used to generate text.
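To give a sense of how generation works (an illustrative sketch, not taken from the course), neural language models produce text one token at a time by sampling from the predicted next-token distribution, often scaled by a temperature parameter that trades off diversity against determinism:

```python
import math
import random

def sample_next(logits, temperature=1.0, rng=random):
    """Sample a token index from raw logits after temperature scaling and softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # shift by max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the resulting distribution
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

# Lower temperature sharpens the distribution, making choices more deterministic
logits = [2.0, 1.0, 0.1]  # hypothetical scores for a 3-token vocabulary
print(sample_next(logits, temperature=0.5))
```

A near-zero temperature approaches greedy decoding (always pick the highest-scoring token), while higher temperatures produce more varied, and eventually incoherent, text.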
Applications and Implications
Module 3
This module discusses considerations for using GPT and similar models in real-world contexts, focusing on the risks these models pose and approaches to mitigating them.