Natural Language Processing with Probabilistic Models (Coursera)



In Course 2 of the Natural Language Processing Specialization, offered by deeplearning.ai, you will: a) create a simple auto-correct algorithm using minimum edit distance and dynamic programming; b) apply the Viterbi algorithm for part-of-speech (POS) tagging, which is important for computational linguistics; c) write a better auto-complete algorithm using an N-gram language model; and d) write your own Word2Vec model that uses a neural network to compute word embeddings with a continuous bag-of-words (CBOW) model.


Please make sure that you’re comfortable programming in Python and have a basic knowledge of machine learning, matrix multiplications, and conditional probability.

By the end of this Specialization, you will have designed NLP applications that perform question-answering and sentiment analysis, created tools to translate languages and summarize text, and even built a chatbot!

This Specialization is designed and taught by two experts in NLP, machine learning, and deep learning. Younes Bensouda Mourri is an Instructor of AI at Stanford University who also helped build the Deep Learning Specialization. Łukasz Kaiser is a Staff Research Scientist at Google Brain and the co-author of TensorFlow, the Tensor2Tensor and Trax libraries, and the Transformer paper.

Course 2 of 4 in the Natural Language Processing Specialization.


Syllabus


WEEK 1

Autocorrect

Learn about autocorrect, minimum edit distance, and dynamic programming, then build your own spellchecker to correct misspelled words!
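
To make the Week 1 ideas concrete, here is a minimal sketch of minimum edit distance computed with dynamic programming. This is not the course's assignment code; it assumes insert and delete costs of 1 and a replace cost of 2, one common convention.

def min_edit_distance(source, target, ins_cost=1, del_cost=1, rep_cost=2):
    # D[i][j] = minimum cost of turning source[:i] into target[:j]
    m, n = len(source), len(target)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i * del_cost          # delete everything from source
    for j in range(1, n + 1):
        D[0][j] = j * ins_cost          # insert everything into an empty source
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if source[i - 1] == target[j - 1] else rep_cost
            D[i][j] = min(D[i - 1][j] + del_cost,    # delete
                          D[i][j - 1] + ins_cost,    # insert
                          D[i - 1][j - 1] + sub)     # match or replace
    return D[m][n]

print(min_edit_distance("play", "stay"))  # 4: two replacements at cost 2 each

A spellchecker can then rank candidate corrections by this distance and suggest the closest words found in its vocabulary.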


WEEK 2

Part of Speech Tagging and Hidden Markov Models

Learn about Markov chains and hidden Markov models, then use them to create part-of-speech tags for a Wall Street Journal text corpus!
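
To illustrate how the Viterbi algorithm decodes the most likely tag sequence under a hidden Markov model, here is a toy sketch; the two-tag setup and all probabilities below are invented for illustration, not taken from the course.

import numpy as np

states = ["NN", "VB"]                       # toy tag set
start_p = np.array([0.6, 0.4])              # P(tag at position 0)
trans_p = np.array([[0.7, 0.3],             # P(next tag | current tag)
                    [0.4, 0.6]])
emit_p = np.array([[0.5, 0.4, 0.1],         # P(word | tag), words indexed 0..2
                   [0.1, 0.3, 0.6]])

def viterbi(obs):
    n_states, T = len(states), len(obs)
    best = np.zeros((n_states, T))          # best path probability ending in each tag
    back = np.zeros((n_states, T), dtype=int)
    best[:, 0] = start_p * emit_p[:, obs[0]]
    for t in range(1, T):
        for s in range(n_states):
            scores = best[:, t - 1] * trans_p[:, s] * emit_p[s, obs[t]]
            back[s, t] = scores.argmax()    # remember the best previous tag
            best[s, t] = scores.max()
    path = [best[:, -1].argmax()]           # trace the best path backwards
    for t in range(T - 1, 0, -1):
        path.insert(0, back[path[0], t])
    return [states[i] for i in path]

print(viterbi([0, 2, 1]))  # ['NN', 'VB', 'VB'] for this toy model

In practice, implementations typically work with log probabilities instead of raw products to avoid numerical underflow on long sentences.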


WEEK 3

Autocomplete and Language Models

Learn about how N-gram language models work by calculating sequence probabilities, then build your own autocomplete language model using a text corpus from Twitter!
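
The heart of N-gram autocomplete is estimating the probability of the next word from counts. Here is a minimal bigram sketch with add-k (Laplace) smoothing; the toy corpus and interface are illustrative, not the course's assignment API.

from collections import Counter

corpus = "i like nlp i like probabilistic models".split()

unigrams = Counter(corpus)                      # word counts
bigrams = Counter(zip(corpus, corpus[1:]))      # adjacent word-pair counts
vocab_size = len(unigrams)

def bigram_prob(prev, word, k=1.0):
    # P(word | prev) with add-k smoothing so unseen pairs get nonzero probability
    return (bigrams[(prev, word)] + k) / (unigrams[prev] + k * vocab_size)

def autocomplete(prev):
    # Suggest the most likely next word after `prev`
    return max(unigrams, key=lambda w: bigram_prob(prev, w))

print(autocomplete("i"))             # 'like'
print(bigram_prob("like", "nlp"))    # 2/7 with k=1 on this corpus

Longer histories (trigrams and beyond) follow the same pattern, just with longer count keys.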


WEEK 4

Word Embeddings with Neural Networks

Learn about how word embeddings carry the semantic meaning of words, which makes them much more powerful for NLP tasks, then build your own continuous bag-of-words (CBOW) model to create word embeddings from Shakespeare text!
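
For intuition, here is a compact continuous bag-of-words sketch in NumPy: it predicts a center word from the average of its context embeddings and learns the embedding matrix by gradient descent. The toy corpus, embedding size, window, and learning rate are arbitrary illustrative choices, not the course's settings.

import numpy as np

corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
w2i = {w: i for i, w in enumerate(vocab)}
V, N, C = len(vocab), 10, 2                 # vocab size, embedding dim, half-window

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(V, N))     # input (embedding) matrix
W2 = rng.normal(scale=0.1, size=(N, V))     # output weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.05
for epoch in range(200):
    for t in range(C, len(corpus) - C):
        context = [w2i[corpus[j]] for j in range(t - C, t + C + 1) if j != t]
        center = w2i[corpus[t]]
        h = W1[context].mean(axis=0)        # hidden layer: averaged context vectors
        y = softmax(h @ W2)                 # predicted distribution over the vocab
        dy = y.copy()                       # cross-entropy gradient w.r.t. logits
        dy[center] -= 1.0                   # (one-hot target at the center word)
        dh = W2 @ dy                        # gradient flowing back to the context
        W2 -= lr * np.outer(h, dy)
        for idx in context:
            W1[idx] -= lr * dh / len(context)

print(W1[w2i["fox"]][:5])                   # a few dimensions of the 'fox' embedding

After training, the rows of W1 serve as the word embeddings.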


