Learner Reviews & Feedback for Natural Language Processing with Sequence Models by DeepLearning.AI

910 ratings
184 reviews

About the Course

In Course 3 of the Natural Language Processing Specialization, you will: a) train a neural network with GloVe word embeddings to perform sentiment analysis of tweets, b) generate synthetic Shakespeare text using a Gated Recurrent Unit (GRU) language model, c) train a recurrent neural network to perform named entity recognition (NER) using LSTMs with linear layers, and d) use so-called 'Siamese' LSTM models to compare questions in a corpus and identify those that are worded differently but have the same meaning. By the end of this Specialization, you will have designed NLP applications that perform question-answering and sentiment analysis, created tools to translate languages and summarize text, and even built a chatbot! This Specialization is designed and taught by two experts in NLP, machine learning, and deep learning. Younes Bensouda Mourri is an Instructor of AI at Stanford University who also helped build the Deep Learning Specialization. Łukasz Kaiser is a Staff Research Scientist at Google Brain and a co-author of TensorFlow, the Tensor2Tensor and Trax libraries, and the Transformer paper....

Top reviews


Sep 27, 2020

Overall it was a great course. A little weak in theory, but sufficient for practical purposes. The question-duplication detection model was very cool. I enjoyed it a lot.


Nov 11, 2021

This is the third course of the NLP Specialization. It was a great course and the instructors were amazing. I really learned and understood everything they taught, like LSTMs, GRUs, Siamese networks, etc.

151 - 175 of 192 Reviews for Natural Language Processing with Sequence Models

By Ahnaf A K

Aug 6, 2020

It largely repeated the 'Sequence Models' course from the Deep Learning Specialization, with the exception of implementing everything in Trax.

By Nishank L

Nov 14, 2021

The assignments are good. Could we have them in PyTorch? Or better: could a person choose their own framework and build the entire code in that?

By Osama A O

Oct 19, 2020

Great course, although it would have been better if the assignments were implemented in Keras or PyTorch. Otherwise, definitely worth it!

By Matthew P

Jan 7, 2021

Great information, but some of the assignments had errors and there wasn't much interaction from the TAs on Slack or the forum.

By Marc G

Feb 10, 2022

Great course! I would have liked Keras/TensorFlow 2.x or PyTorch to be used instead of Trax, which is not as widely used.

By Manuela D

Mar 14, 2022

Interesting and well explained, although the assignment exercises are difficult to understand and they focus only on Trax.

By Mohsen A F

Oct 24, 2020

The clarity of exposition was superb! One star less for using Trax; I would rather have used Keras or TensorFlow.

By Saurabh K

May 24, 2021

The course might have included a bit more detail on the dimensions of the inputs and outputs of the sequence models.

By Mridul G

Jul 14, 2021

The course is very good, but it is not complete in itself. The way the course was taught, and everything else, is good.

By Hair P

Nov 20, 2020

Overall the content was great. Please make sure that errors in the notebooks are corrected.


Sep 18, 2020

The course is designed quite well to boost understanding of Sequence Models in great depth

By Steve H

Apr 3, 2021

Excellent course, but probably worth doing the deep learning specialisation first!

By Ke Z

Feb 24, 2021

I don't like using Trax. If it had used TensorFlow, I would give it 5 stars.

By Alireza S

Dec 11, 2021

I would prefer that the lecturer use TensorFlow instead of Trax for the exercises.

By kerolos E

Mar 23, 2022

Almost perfect. More explanation of the implementation is needed.

By Vitalii S

Jan 21, 2021

Good information, but some assignments were an embarrassment.

By Nikita M

Dec 7, 2020

Not as good as the original courses by Andrew.

By Gonzalo A M

Jan 14, 2021

It was good, but it could be better.

By Ruiwen W

Aug 1, 2020

Some errors in the assignments.

By V B

Sep 24, 2020


By Yaron K

Apr 29, 2022

The fourth week, on Siamese networks, was well done. The weeks on RNNs, GRUs, and LSTMs basically gave the equations and some intuition, but most of the emphasis was on building models with Google's Trax deep learning framework, which the lecturers believe to be better than TensorFlow 2. At least when it comes to debugging, it isn't: make the smallest error (say, with shape parameters) and you get a mass of error messages that don't really help. For shape errors, at least, there is no excuse for this, since all that is needed is to run checks on the first batch of the first epoch that pinpoint exactly where the shape discrepancy is.
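The first-batch shape check this reviewer describes can be sketched in a few lines. This is a minimal, hypothetical NumPy illustration (not Trax's actual API): before training, walk the first batch through each layer's expected input dimension and report the exact layer where a mismatch occurs, instead of letting it surface as a long framework traceback.

```python
import numpy as np

def check_shapes(layers, batch):
    """Validate layer input dims on one batch; report the failing layer.

    `layers` is a list of (name, weight_matrix) pairs, a stand-in for a
    simple feed-forward stack where each layer computes x @ w.
    """
    x = batch
    for i, (name, w) in enumerate(layers):
        if x.shape[-1] != w.shape[0]:
            raise ValueError(
                f"layer {i} ({name}): input has trailing dim "
                f"{x.shape[-1]}, but weights expect {w.shape[0]}"
            )
        x = x @ w  # stand-in for the layer's forward pass
    return x.shape

# Hypothetical layer stack and first batch, for illustration only.
layers = [("dense_1", np.zeros((128, 64))),
          ("dense_2", np.zeros((64, 10)))]
first_batch = np.zeros((32, 128))
print(check_shapes(layers, first_batch))  # (32, 10)
```

Running this once on the first batch of the first epoch costs almost nothing and turns an opaque shape error into a one-line diagnosis.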

By Amlan C

Oct 9, 2020

Despite the theoretical underpinnings, I do not feel this course lets you write an NER algorithm on your own. Most of these courses use data supplied by Coursera, and the same goes for the models. In real life we have to either create this data or use open-source data, for example from Kaggle. I think it would be better to orient the course around publicly available data, with models trained by students and used for actual analysis.

By Maury S

Mar 8, 2021

Like some of the other courses in this specialization, this one has promise but comes off as a somewhat careless effort compared to the usual quality of content from Andrew Ng. The lecturers are OK but not great, and it is unclear what the role of Łukasz Kaiser is beyond reading the introductions to many of the lectures. There is a strange focus on simplifying with Google's Trax at the cost of not really teaching the underlying maths.

By Petru R

Apr 13, 2022

The course requires a solid background in deep learning; it does not explain the LSTMs in detail, nor how the programming keeps the weights of the two branches of the Siamese network identical.

Does Trax provide other ways of generating training data for Siamese networks, other than writing a custom function?
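On the weight-sharing point this reviewer raises: in a Siamese network there is typically nothing to keep "in sync," because both branches literally reuse the same parameters. A minimal plain-NumPy sketch (not the course's Trax code; the encoder and sizes here are made up for illustration):

```python
import numpy as np

# One shared parameter matrix: both questions pass through the SAME
# weights, so the two branches are identical by construction.
rng = np.random.default_rng(0)
W = rng.normal(size=(300, 128))  # hypothetical 300-d input, 128-d encoding

def encode(x):
    v = np.tanh(x @ W)            # shared forward pass for either branch
    return v / np.linalg.norm(v)  # L2-normalize so dot product = cosine

q1 = rng.normal(size=(300,))      # stand-ins for two question vectors
q2 = rng.normal(size=(300,))
similarity = encode(q1) @ encode(q2)  # cosine similarity in [-1, 1]
```

Because `encode` is one function over one `W`, any gradient update to `W` affects both branches at once; frameworks express the same idea by applying a single layer object to both inputs.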

By Business D

Dec 14, 2020

I regret the lack of proper guidance in the coding exercises, compounded by the incomplete documentation of the Trax library. I also feel we could build models with greater performance: an accuracy of 0.54 for identifying question duplicates doesn't seem to be state of the art...

You could do better!