This course is best suited for software engineers, data scientists, and graduate students in computer science or engineering fields who wish to develop expertise in building and deploying natural language processing systems to solve real-world language understanding challenges.

Applied Natural Language Processing in Engineering Part 2

Instructor: Ramin Mohammadi
Details to know

21 assignments
October 2025

There are 7 modules in this course
This module delves into the critical preprocessing step of tokenization in NLP, where text is segmented into smaller units called tokens. You will explore various tokenization techniques, including character-based, word-level, Byte Pair Encoding (BPE), WordPiece, and Unigram tokenization. Then you’ll examine the importance of normalization and pre-tokenization processes to ensure text uniformity and improve tokenization accuracy. Through practical examples and hands-on exercises, you will learn to handle out-of-vocabulary (OOV) issues, manage large vocabularies efficiently, and understand the computational complexities involved. By the end of the module, you will be equipped with the knowledge to implement and optimize tokenization methods for diverse NLP applications.
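To give a feel for how BPE works before the readings, here is a minimal sketch of the merge-learning loop: starting from characters, it repeatedly merges the most frequent adjacent symbol pair. The toy word-frequency table and the function name are hypothetical illustrations, not taken from the course materials.

```python
from collections import Counter

def bpe_merges(words, num_merges):
    """Learn BPE merge rules from a word-frequency table (toy sketch)."""
    vocab = {tuple(w): f for w, f in words.items()}   # words as symbol tuples
    merges = []
    for _ in range(num_merges):
        # count adjacent symbol pairs, weighted by word frequency
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)              # most frequent pair
        merges.append(best)
        # rewrite every word with the chosen pair merged into one symbol
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges, vocab

corpus = {"low": 5, "lower": 2, "newest": 6, "widest": 3}   # hypothetical counts
merges, vocab = bpe_merges(corpus, num_merges=4)
print(merges)   # learned merge rules, applied in the same order at tokenization time
```

At tokenization time, the learned merges are replayed in order on new words, which is how BPE handles out-of-vocabulary items by falling back to subword pieces.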
What's included
1 video • 13 readings • 2 assignments • 1 app item
1 video• Total 1 minute
- Meet Your Faculty• 1 minute
13 readings• Total 69 minutes
- Course Introduction• 2 minutes
- Syllabus - Applied Natural Language Processing in Engineering Part 2• 10 minutes
- Academic Integrity• 1 minute
- Week 8 Overview• 2 minutes
- Introduction• 5 minutes
- Pre-Tokenization• 5 minutes
- Character-based Tokenization• 5 minutes
- Word-level Tokenization• 5 minutes
- Byte Pair Encoding (BPE)• 10 minutes
- WordPiece Tokenization• 10 minutes
- Unigram Tokenization• 10 minutes
- Vocabulary Pruning in Unigram Tokenization• 2 minutes
- Summary and Final Thoughts• 2 minutes
2 assignments• Total 75 minutes
- Assess Your Learning: Tokenization• 30 minutes
- Module 8 Quiz• 45 minutes
1 app item• Total 10 minutes
- The Viterbi Algorithm for Tokenization• 10 minutes
In this module, we will explore foundational models in natural language processing (NLP), focusing on language models, feedforward neural networks (FFNNs), and Hidden Markov Models (HMMs). Language models are crucial in predicting and generating sequences of text by assigning probabilities to words or phrases within a sentence, allowing for applications such as autocomplete and text generation. FFNNs, though limited to fixed-size contexts, are foundational neural architectures used in language modeling, learning complex word relationships through non-linear transformations. In contrast, HMMs model sequences based on hidden states, which influence observable outcomes. They are particularly useful in tasks like part-of-speech tagging and speech recognition. As the module progresses, we will also examine modern advancements like neural transition-based parsing and the evolution of language models into sophisticated architectures such as transformers and large-scale pre-trained models like BERT and GPT. This module provides a comprehensive view of how language modeling has developed from statistical methods to cutting-edge neural architectures.
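The probability-assignment idea behind n-gram language models can be sketched in a few lines. This toy bigram model uses maximum-likelihood estimates with no smoothing, and the tiny corpus is hypothetical:

```python
from collections import Counter
import math

# hypothetical toy corpus, already tokenized
corpus = "the cat sat on the mat . the cat ran .".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p(word, prev):
    # maximum-likelihood estimate: count(prev, word) / count(prev)
    return bigrams[(prev, word)] / unigrams[prev]

def sentence_logprob(words):
    # sum of log P(w_i | w_{i-1}); assumes every bigram was seen (no smoothing)
    return sum(math.log(p(w, prev)) for prev, w in zip(words, words[1:]))

print(p("cat", "the"))                          # 2/3: "the" precedes "cat" twice, "mat" once
print(sentence_logprob("the cat sat".split()))  # log(2/3) + log(1/2)
```

Unseen bigrams make this model assign probability zero, which is exactly the limitation that smoothing and, later, neural language models address.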
What's included
2 videos • 19 readings • 4 assignments
2 videos• Total 8 minutes
- Language Models• 4 minutes
- Hidden Markov Models• 4 minutes
19 readings• Total 183 minutes
- Week 9 Overview• 2 minutes
- Introduction to Language Models• 5 minutes
- Probability Assignment in Language Model• 2 minutes
- Evolution of Language Models• 10 minutes
- State-of-the-Art Models• 2 minutes
- N-Gram• 5 minutes
- Probabilities in Language Models• 10 minutes
- Example: The Cat Sat on the Mat• 10 minutes
- Limitations of N-Gram Models• 5 minutes
- FFNN in Language Modeling• 20 minutes
- Pros and Cons of FFNNs• 5 minutes
- Introduction to HMM• 10 minutes
- Hidden Markov Models• 2 minutes
- Mathematical Representation of HMMs• 15 minutes
- Likelihood Problem: Forward Algorithm• 10 minutes
- Decoding Problem: Viterbi Algorithm• 15 minutes
- Learning Problem: Baum-Welch Algorithm• 15 minutes
- Example of HMM• 20 minutes
- HMMs in Speech Recognition• 20 minutes
4 assignments• Total 120 minutes
- Assess Your Learning: Language Models• 30 minutes
- Assess Your Learning: FFNNs• 15 minutes
- Assess Your Learning: HMMs• 30 minutes
- Module 9 Quiz• 45 minutes
In this module, we will explore Recurrent Neural Networks (RNNs), a fundamental architecture in deep learning designed for sequential data. RNNs are particularly well-suited for tasks where the order of inputs matters, such as time series prediction, language modeling, and speech recognition. Unlike traditional neural networks, RNNs have connections that allow them to “remember” information from previous steps by sharing parameters across time steps. This ability enables them to capture temporal dependencies in data, making them powerful for sequence-based tasks. However, RNNs come with challenges like vanishing and exploding gradients, which affect their ability to learn long-term dependencies. Throughout the module, you will explore different RNN variants such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRUs), which address these challenges. You will also delve into advanced training techniques and applications of RNNs in real-world NLP and time series problems.
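The parameter-sharing idea is easy to see in code: one recurrent step computes h_t = tanh(W_xh x_t + W_hh h_{t-1} + b), and the same weight matrices are reused at every position. The tiny weights below are made-up illustration values, not trained parameters:

```python
import math

def rnn_step(x, h_prev, W_xh, W_hh, bias):
    """One recurrent step: h_t = tanh(W_xh @ x_t + W_hh @ h_prev + bias)."""
    def matvec(M, v):
        return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]
    pre = [u + w + c for u, w, c in zip(matvec(W_xh, x), matvec(W_hh, h_prev), bias)]
    return [math.tanh(v) for v in pre]

# made-up weights: 2-dim input, 2-dim hidden state
W_xh = [[0.5, 0.0], [0.0, 0.5]]
W_hh = [[0.1, 0.2], [0.2, 0.1]]
bias = [0.0, 0.0]

h = [0.0, 0.0]                                    # initial hidden state
for x in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]:    # a 3-step input sequence
    h = rnn_step(x, h, W_xh, W_hh, bias)          # same weights at every step
print(h)   # final hidden state summarizes the whole sequence
```

Because the gradient flows back through this same tanh and the same W_hh at every step, repeated multiplication is what causes the vanishing and exploding gradients the module discusses.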
What's included
2 videos • 22 readings • 2 assignments • 1 app item
2 videos• Total 4 minutes
- Recurrent Neural Networks• 4 minutes
- The RNN Process• 0 minutes
22 readings• Total 221 minutes
- Week 10 Overview• 2 minutes
- Recurrent Neural Networks (RNNs)• 2 minutes
- Challenges & Applications in RNN• 5 minutes
- Parameter Sharing in RNN• 5 minutes
- Dynamic Systems• 5 minutes
- Dynamic Systems to RNN• 10 minutes
- Computing Gradient in RNN• 10 minutes
- RNN Advantages and Disadvantages• 5 minutes
- Training an RNN Language Model• 20 minutes
- Problems with RNN• 15 minutes
- How to Solve These Issues?• 15 minutes
- Gated RNN• 15 minutes
- LSTM Equations• 15 minutes
- Gated Recurrent Unit (GRU)• 10 minutes
- Residual Neural Networks• 20 minutes
- Skip Connection: The Key to Learning Residuals• 15 minutes
- Conventions Used• 2 minutes
- Step-by-Step Breakdown 1 - 2• 5 minutes
- Step-by-Step Breakdown 3 A - G• 15 minutes
- Step-by-Step Breakdown 3 H - N• 15 minutes
- Step-by-Step Breakdown 4 - 6• 10 minutes
- Perplexity Calculation• 5 minutes
2 assignments• Total 75 minutes
- Assess Your Learning: RNNs• 30 minutes
- Module 10 Quiz• 45 minutes
1 app item• Total 10 minutes
- Introduction to LSTM, GRU, and Residual Networks• 10 minutes
This module introduces students to advanced Natural Language Processing (NLP) techniques, focusing on foundational tasks such as Part-of-Speech (PoS) tagging, sentiment analysis, and sequence modeling with recurrent neural networks (RNNs). Students will examine how PoS tagging helps in understanding grammatical structures, enabling applications such as machine translation and named entity recognition (NER). The module delves into sentiment analysis, highlighting various approaches from traditional machine learning models (e.g., Naive Bayes) to advanced deep learning techniques (e.g., bidirectional RNNs and transformers). Students will learn to implement both forward and backward contextual understanding using bidirectional RNNs, which improves accuracy in tasks where sequence order impacts meaning. By the end of the module, students will gain hands-on experience building NLP models for real-world applications, equipping them to handle sequential data and capture complex dependencies in text analysis.
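As a baseline for sentiment analysis, the Naive Bayes approach mentioned above can be sketched with word counts and add-one (Laplace) smoothing; the four training sentences are hypothetical toy data:

```python
from collections import Counter
import math

# hypothetical toy training data for illustration only
train = [
    ("I loved this great movie", "pos"),
    ("great plot and great acting", "pos"),
    ("a boring terrible movie", "neg"),
    ("terrible acting and a boring plot", "neg"),
]

counts = {"pos": Counter(), "neg": Counter()}   # word counts per class
docs = Counter()                                # documents per class
for text, label in train:
    docs[label] += 1
    counts[label].update(text.lower().split())

vocab = set(w for c in counts.values() for w in c)

def predict(text):
    scores = {}
    for label in counts:
        total = sum(counts[label].values())
        # log prior + sum of log likelihoods with add-one smoothing
        s = math.log(docs[label] / sum(docs.values()))
        for w in text.lower().split():
            s += math.log((counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = s
    return max(scores, key=scores.get)

print(predict("a great movie"))        # leans positive
print(predict("boring and terrible"))  # leans negative
```

A bag-of-words model like this ignores word order entirely, which is precisely what motivates the sequence-aware bidirectional RNN approaches covered in the readings.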
What's included
1 video • 15 readings • 4 assignments
1 video• Total 5 minutes
- Introduction to PoS Tagging, Bidirectional RNNs, and Sentiment Analysis• 5 minutes
15 readings• Total 113 minutes
- Week 11 Overview• 2 minutes
- Introduction to PoS Tagging• 10 minutes
- How Does PoS Tagging Work?• 10 minutes
- Challenges in & Advantages of PoS Tagging• 5 minutes
- Using Recurrent Neural Networks (RNNs) for PoS Tagging• 10 minutes
- Steps in PoS Tagging with RNN• 5 minutes
- Using LSTM or GRU in Place of Simple RNNs• 10 minutes
- Conclusion• 10 minutes
- Motivation• 2 minutes
- Bidirectional RNNs• 10 minutes
- Multi-layer RNNs• 10 minutes
- Introduction• 5 minutes
- Approaches with RNNs• 20 minutes
- Other Approaches for Sentiment Analysis• 2 minutes
- Conclusion• 2 minutes
4 assignments• Total 135 minutes
- Assess Your Learning: PoS• 30 minutes
- Assess Your Learning: Bidirectional RNNs• 30 minutes
- Assess Your Learning: Sentiment Analysis• 30 minutes
- Module 11 Quiz• 45 minutes
This module introduces you to core tasks and advanced techniques in Natural Language Processing (NLP), with a focus on structured prediction, machine translation, and sequence labeling. You will explore foundational topics such as Named Entity Recognition (NER), Part-of-Speech (PoS) tagging, and sentiment analysis, and you will use neural network architectures like Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Conditional Random Fields (CRFs). The module will cover key concepts in sequence modeling, such as bidirectional and multi-layer RNNs, which capture both past and future context to enhance the accuracy of tasks like NER and PoS tagging. Additionally, you will delve into Neural Machine Translation (NMT), examining encoder-decoder models with attention mechanisms to address challenges in translating long sequences. Practical implementations will involve integrating these models into real-world applications, focusing on handling complex language structures, rare words, and sequential dependencies. By the end of this module, you will be proficient in building and optimizing deep learning models for a variety of NLP tasks.
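The decoding step shared by CRF taggers and the NER models above is the Viterbi algorithm over per-token emission scores and pairwise transition scores. A simplified sketch, with made-up scores for a three-token input:

```python
def viterbi(emissions, transitions, labels):
    """Highest-scoring label sequence under emission plus transition scores,
    i.e. the decoding step of a (Bi)LSTM-CRF tagger (simplified sketch)."""
    best = {l: emissions[0][l] for l in labels}   # best path score ending in l
    back = []                                     # backpointers per step
    for em in emissions[1:]:
        new_best, ptr = {}, {}
        for l in labels:
            # pick the previous label that maximizes the path score into l
            prev = max(labels, key=lambda p: best[p] + transitions[(p, l)])
            new_best[l] = best[prev] + transitions[(prev, l)] + em[l]
            ptr[l] = prev
        back.append(ptr)
        best = new_best
    # trace backpointers from the best final label
    path = [max(labels, key=best.get)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

labels = ["O", "PER"]
# hypothetical scores for a 3-token sentence; in an LSTM-CRF the emissions
# come from the BiLSTM and the transitions are learned CRF parameters
emissions = [{"O": 0.1, "PER": 2.0}, {"O": 1.5, "PER": 0.2}, {"O": 0.3, "PER": 1.8}]
transitions = {("O", "O"): 0.5, ("O", "PER"): 0.4, ("PER", "O"): 0.4, ("PER", "PER"): -1.0}
print(viterbi(emissions, transitions, labels))   # ['PER', 'O', 'PER']
```

Note how the negative PER-to-PER transition score lets the CRF layer discourage label sequences that per-token predictions alone would happily produce.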
What's included
3 videos • 18 readings • 4 assignments
3 videos• Total 7 minutes
- Introduction to CRF• 3 minutes
- Introduction to NER and NMT• 4 minutes
- Visualization of the NMT Process• 0 minutes
18 readings• Total 164 minutes
- Week 12 Overview• 2 minutes
- Definition of CRF• 10 minutes
- CRF Model with LSTM• 10 minutes
- Combining LSTM with CRF• 20 minutes
- Calculating the Probability of a Sequence, Log-Probability & Training Objective• 15 minutes
- Decoding: Finding the Best Label Sequence• 5 minutes
- Details on LSTM-CRF Components• 15 minutes
- Summary of the Transition Matrix in CRF• 5 minutes
- Named Entity Recognition (NER)• 10 minutes
- NER Using RNNs/LSTMs• 10 minutes
- BiLSTM for NER• 10 minutes
- CRF Layer for Sequence Labeling• 10 minutes
- Attention in NER• 10 minutes
- Table: Alphabetical List of PoS Tags used in the Penn Treebank Project• 5 minutes
- Machine Translation Overview• 5 minutes
- Sequence-to-Sequence Model for NMT• 10 minutes
- Learning in NMT: Optimization and Loss Function• 10 minutes
- Byte Pair Encoding (BPE) for Handling Rare Words• 2 minutes
4 assignments• Total 135 minutes
- Assess Your Learning: CRFs• 30 minutes
- Assess Your Learning: NERs• 30 minutes
- Assess Your Learning: NMTs• 30 minutes
- Module 12 Quiz• 45 minutes
In this module we’ll focus on attention mechanisms and explore the evolution and significance of attention in neural networks, starting with its introduction in neural machine translation. We’ll cover the challenges of traditional sequence-to-sequence models and how attention mechanisms, particularly in Transformer architectures, address issues like long-range dependencies and parallelization, enhancing the model’s ability to focus dynamically on the relevant parts of the input sequence. Then we’ll turn to Transformers and delve into the revolutionary architecture introduced by Vaswani et al. in 2017, which has significantly advanced natural language processing. We’ll cover the core components of Transformers, including self-attention, multi-head attention, and positional encoding, and explain how these innovations address the limitations of traditional sequence models and enable efficient parallel processing and handling of long-range dependencies in text.
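The self-attention computation at the heart of the Transformer, softmax(QK^T / sqrt(d_k)) V, can be sketched without any framework. The 2-dimensional toy query/key/value vectors below are hypothetical:

```python
import math

def softmax(xs):
    m = max(xs)                         # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # similarity of this query to every key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)       # attention distribution over positions
        # output row = weighted mix of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))])
    return out

# toy 3-token sequence with 2-dim queries/keys/values (hypothetical numbers)
Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
out = self_attention(Q, K, V)
print(out)   # each row is a different weighted mix of the three value rows
```

Multi-head attention simply runs several independent copies of this computation on lower-dimensional projections and concatenates the results; the loop over queries here is what the matrix form parallelizes.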
What's included
2 videos • 25 readings • 3 assignments • 2 app items
2 videos• Total 9 minutes
- Attention Mechanisms• 3 minutes
- Transformers• 6 minutes
25 readings• Total 239 minutes
- Week 13 Overview• 2 minutes
- Introduction and Motivation• 5 minutes
- Sequence-to-Sequence Models• 5 minutes
- Challenges of Seq2Seq Models• 15 minutes
- Attention Mechanisms• 5 minutes
- General Seq2Seq Models• 10 minutes
- Detailed Attention Process in Seq2Seq• 15 minutes
- Introduction and Transformer Architecture• 2 minutes
- Applications of Transformer Architectures• 5 minutes
- Key, Query, Value• 3 minutes
- Self-Attention• 15 minutes
- Self-Attention as Routing• 5 minutes
- Computing and Weighting Values• 10 minutes
- Self-Attention in Matrix Form• 10 minutes
- Position Representations• 10 minutes
- The Intuition• 15 minutes
- Elementwise Nonlinearity• 20 minutes
- Multi-head Attention• 10 minutes
- Sequence-Tensor Form• 10 minutes
- Transformers• 15 minutes
- Types of Transformers• 20 minutes
- Cross-Attention• 15 minutes
- Decoder Process with Cross-Attention• 10 minutes
- Drawbacks of Transformers• 5 minutes
- Conclusion• 2 minutes
3 assignments• Total 105 minutes
- Assess Your Learning: Attention• 30 minutes
- Assess Your Learning: Transformer• 30 minutes
- Module 13 Quiz• 45 minutes
2 app items• Total 40 minutes
- Multi-Head Visualization• 20 minutes
- Encoder-Decoder Example• 20 minutes
In this module, we’ll hone in on pre-training and explore the foundational role of pre-training in modern NLP models, highlighting how models are initially trained on large, general datasets to learn language structures and semantics. This pre-training phase, often involving tasks like masked language modeling, equips models with broad linguistic knowledge, which can then be fine-tuned on specific tasks, enhancing performance and reducing the need for extensive task-specific data.
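Masked language modeling, the pre-training task behind BERT, can be illustrated with the corruption step alone: hide some tokens and record what the model must predict. This sketch uses a single [MASK] replacement rule and a hypothetical mask rate; real BERT additionally substitutes random or unchanged tokens for a fraction of the selected positions:

```python
import random

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    """BERT-style masking sketch: hide a fraction of tokens behind [MASK]
    and return (corrupted input, positions the model must predict)."""
    rng = random.Random(seed)           # seeded for reproducibility
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            corrupted.append("[MASK]")
            targets[i] = tok            # the pre-training label at this position
        else:
            corrupted.append(tok)
    return corrupted, targets

tokens = "the model learns to fill in missing words from context".split()
corrupted, targets = mask_tokens(tokens, mask_rate=0.3)
print(corrupted)
print(targets)   # {position: original token} pairs the model is trained to recover
```

The pre-training loss is then computed only at the masked positions, which is why an encoder can attend bidirectionally without trivially copying its input.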
What's included
1 video • 19 readings • 2 assignments
1 video• Total 5 minutes
- Pre-Training• 5 minutes
19 readings• Total 209 minutes
- Week 14 Overview• 2 minutes
- Introduction to Pre-Training• 15 minutes
- Pretrained Word Embeddings• 10 minutes
- Learning from Reconstructing Input• 10 minutes
- Pretraining Through Language Modeling• 20 minutes
- Pretraining for Three Types of Architectures• 10 minutes
- BERT: Bidirectional Encoder Representations from Transformers• 15 minutes
- BERT Pre-training• 10 minutes
- Fine-tuning• 15 minutes
- Full fine-tuning vs Parameter-Efficient Fine-tuning• 15 minutes
- Limitations of Pre-trained Encoders and Extensions of BERT• 10 minutes
- Pretraining Decoders• 10 minutes
- Generative Pretrained Transformer (GPT)• 10 minutes
- Scaling Laws• 15 minutes
- What kinds of things does pretraining teach?• 10 minutes
- Pretraining encoder-decoders: What pretraining objective to use?• 15 minutes
- Span Corruption: T5 model• 10 minutes
- Transfer Learning to Downstream Tasks• 5 minutes
- Congratulations!• 2 minutes
2 assignments• Total 75 minutes
- Assess Your Learning: Pre-training• 30 minutes
- Module 14 Quiz• 45 minutes
Instructor: Ramin Mohammadi

Offered by

Founded in 1898, Northeastern is a global research university with a distinctive, experience-driven approach to education and discovery. The university is a leader in experiential learning, powered by the world’s most far-reaching cooperative education program. The spirit of collaboration guides a use-inspired research enterprise focused on solving global challenges in health, security, and sustainability.