Generative AI and LLMs: Architecture and Data Preparation
Completed by Abdulabin Amer Lahaji
March 2, 2025
Approximately 5 hours
Abdulabin Amer Lahaji's account is verified. Coursera certifies their successful completion of Generative AI and LLMs: Architecture and Data Preparation.
What you will learn
Differentiate between generative AI architectures and models, such as RNNs, transformers, VAEs, GANs, and diffusion models
Describe how LLMs, such as GPT, BERT, BART, and T5, are applied in natural language processing tasks
Implement tokenization to preprocess raw text using NLP libraries like NLTK, spaCy, BertTokenizer, and XLNetTokenizer
Create an NLP data loader in PyTorch that handles tokenization, numericalization, and padding for text datasets
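The tokenization outcome above can be illustrated with a minimal sketch. Real NLP libraries named in the course (NLTK's `word_tokenize`, spaCy, Hugging Face's `BertTokenizer` and `XLNetTokenizer`) implement far richer rules, including subword splitting and special tokens; this hypothetical `tokenize` helper only shows the core idea of splitting raw text into word and punctuation units:

```python
import re

def tokenize(text):
    # Minimal word-level tokenizer: lowercase the text, then split it into
    # runs of word characters and individual punctuation marks.
    # Library tokenizers (NLTK, spaCy, BertTokenizer) are far more robust.
    return re.findall(r"\w+|[^\w\s]", text.lower())

tokens = tokenize("LLMs like GPT tokenize text first!")
# → ['llms', 'like', 'gpt', 'tokenize', 'text', 'first', '!']
```

Subword tokenizers such as BertTokenizer would go further and split rare words into pieces (e.g. WordPiece fragments), which keeps the vocabulary small while still covering unseen words.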
Skills you will gain
- Large Language Modeling
- Recurrent Neural Networks (RNNs)
- Data Preprocessing
- LLM Application
- Data Pipelines
- Generative AI
- PyTorch (Machine Learning Library)
- Natural Language Processing
- Generative Model Architectures
- Hugging Face
- Model Training
- Generative Adversarial Networks (GANs)
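The data-loader learning outcome — tokenization, numericalization, and padding in a PyTorch pipeline — can be sketched as follows. This is a minimal illustration, not the course's reference implementation; the regex tokenizer, the `PAD`/`UNK` index choices, and the toy corpus are all assumptions made for the example:

```python
import re
import torch
from torch.utils.data import DataLoader, Dataset

PAD, UNK = 0, 1  # reserved indices: padding and out-of-vocabulary tokens

def tokenize(text):
    # Simple word/punctuation tokenizer standing in for an NLP library.
    return re.findall(r"\w+|[^\w\s]", text.lower())

class TextDataset(Dataset):
    def __init__(self, texts, vocab):
        self.texts = texts
        self.vocab = vocab

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        # Numericalize: map each token to its vocabulary index (UNK if absent).
        return [self.vocab.get(t, UNK) for t in tokenize(self.texts[idx])]

def collate(batch):
    # Pad every sequence in the batch to the length of the longest one,
    # so the batch can be stacked into a single rectangular tensor.
    max_len = max(len(seq) for seq in batch)
    padded = [seq + [PAD] * (max_len - len(seq)) for seq in batch]
    return torch.tensor(padded)

texts = ["hello world", "generative ai builds text"]
vocab = {w: i + 2 for i, w in  # indices 0 and 1 are reserved above
         enumerate(sorted({t for s in texts for t in tokenize(s)}))}
loader = DataLoader(TextDataset(texts, vocab), batch_size=2, collate_fn=collate)
batch = next(iter(loader))  # shape (2, 4): the shorter sequence is padded with 0
```

The `collate_fn` hook is where PyTorch lets you handle variable-length text: the `Dataset` returns ragged token-index lists, and the collate function turns each batch into one padded tensor ready for a model.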

