IBM
Generative AI Engineering with LLMs Specialization

Advance your ML career with Gen AI and LLMs. Master the essentials of Gen AI engineering and large language models (LLMs) in just 3 months.

Instructors: Sina Nazeri, Fateme Akbari, Wojciech 'Victor' Fulmyk

3,104 already enrolled

Included with Coursera Plus

Get in-depth knowledge of a subject
4.6 (68 reviews)
Intermediate level
Recommended experience
3 months at 4 hours a week
Flexible schedule: learn at your own pace

What you'll learn

  • In-demand, job-ready skills in gen AI, NLP apps, and large language models in just 3 months.

  • How to tokenize and load text data to train LLMs, and how to deploy Skip-Gram, CBOW, Seq2Seq, RNN-based, and Transformer-based models with PyTorch.

  • How to employ frameworks and pre-trained models such as LangChain and Llama for training, developing, fine-tuning, and deploying LLM applications.

  • How to implement a question-answering NLP system by preparing, developing, and deploying NLP applications using RAG.

Details to know

Shareable certificate

Add to your LinkedIn profile

Taught in English
Recently updated! (September 2024)

See how employees at top companies are mastering in-demand skills


Advance your subject-matter expertise

  • Learn in-demand skills from university and industry experts
  • Master a subject or tool with hands-on projects
  • Develop a deep understanding of key concepts
  • Earn a career certificate from IBM

Earn a career certificate

Add this credential to your LinkedIn profile, resume, or CV

Share it on social media and in your performance review


Specialization - 7 course series

Course 1

What you'll learn

  • Differentiate between generative AI architectures and models, such as RNNs, Transformers, VAEs, GANs, and Diffusion Models.

  • Describe how LLMs, such as GPT, BERT, BART, and T5, are used in language processing.

  • Implement tokenization to preprocess raw textual data using NLP libraries such as NLTK, spaCy, BertTokenizer, and XLNetTokenizer.

  • Create an NLP data loader using PyTorch to perform tokenization, numericalization, and padding of text data (see the sketch after this list).
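
As a taste of what this looks like in practice, here is a minimal sketch of tokenization, numericalization, and padding in a PyTorch data loader. The toy corpus, whitespace tokenizer, and vocabulary below are illustrative assumptions, not the course's own materials.

```python
import torch
from torch.utils.data import DataLoader
from torch.nn.utils.rnn import pad_sequence

corpus = ["the cat sat on the mat", "dogs bark"]

# Whitespace tokenization stands in for NLTK, spaCy, or BertTokenizer here.
tokenized = [sentence.split() for sentence in corpus]

# Build a vocabulary; index 0 is reserved for the padding token.
vocab = {"<pad>": 0}
for tokens in tokenized:
    for tok in tokens:
        vocab.setdefault(tok, len(vocab))

def collate(batch):
    # Numericalize each sequence, then pad the batch to a common length.
    ids = [torch.tensor([vocab[tok] for tok in tokens]) for tokens in batch]
    return pad_sequence(ids, batch_first=True, padding_value=vocab["<pad>"])

loader = DataLoader(tokenized, batch_size=2, collate_fn=collate)
for batch in loader:
    print(batch.shape)  # torch.Size([2, 6]) after padding the shorter sentence
```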

Skills you'll gain

Tokenization
Hugging Face Libraries
NLP Data Loader
Large Language Models
PyTorch

Course 2

What you'll learn

  • Explain how to use one-hot encoding, bag-of-words, embedding, and embedding bags to convert words to features (see the embedding-bag sketch after this list).

  • Build and use word2vec models for contextual embedding.

  • Build and train a simple language model with a neural network.

  • Utilize N-gram and sequence-to-sequence models for document classification, text analysis, and sequence transformation.
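
For a concrete picture of the embedding-bag approach mentioned above, here is a minimal sketch of a bag-of-words classifier built on PyTorch's nn.EmbeddingBag. The vocabulary size, class count, and sample batch are illustrative assumptions rather than course material.

```python
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, NUM_CLASSES = 100, 16, 2

class BagOfWordsClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # EmbeddingBag averages the embeddings of every token in a document.
        self.embedding = nn.EmbeddingBag(VOCAB_SIZE, EMBED_DIM, mode="mean")
        self.fc = nn.Linear(EMBED_DIM, NUM_CLASSES)

    def forward(self, token_ids, offsets):
        return self.fc(self.embedding(token_ids, offsets))

model = BagOfWordsClassifier()

# Two documents packed into one flat tensor; offsets mark where each one starts.
token_ids = torch.tensor([4, 8, 15, 16, 23, 42])
offsets = torch.tensor([0, 3])  # document 1 = ids[0:3], document 2 = ids[3:]
logits = model(token_ids, offsets)
print(logits.shape)  # torch.Size([2, 2])
```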

Skills you'll gain

N-Gram
PyTorch torchtext
Generative AI for NLP
Word2Vec Model
Sequence-to-Sequence Model

Generative AI Language Modeling with Transformers

Course 3: 8 hours, 4.5 (21 ratings)

What you'll learn

  • Explain the concept of attention mechanisms in transformers, including their role in capturing contextual information.

  • Describe language modeling with the decoder-based GPT and encoder-based BERT.

  • Implement positional encoding, masking, and attention mechanisms, perform document classification, and create LLMs like GPT and BERT (see the attention sketch after this list).

  • Use transformer-based models and PyTorch functions for text classification, language translation, and modeling.
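
The two building blocks named above can be written in a few lines of PyTorch. This is a minimal sketch of scaled dot-product attention with a causal (GPT-style) mask and sinusoidal positional encoding; the tensor shapes are illustrative assumptions.

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

def positional_encoding(seq_len, d_model):
    # Sinusoidal encodings: even dimensions use sine, odd dimensions use cosine.
    position = torch.arange(seq_len).unsqueeze(1)
    div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe

x = torch.randn(2, 5, 8) + positional_encoding(5, 8)  # batch of 5-token sequences
causal_mask = torch.tril(torch.ones(5, 5))            # GPT-style masking
out = scaled_dot_product_attention(x, x, x, causal_mask)
print(out.shape)  # torch.Size([2, 5, 8])
```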

Skills you'll gain

Bidirectional Encoder Representations from Transformers (BERT)
Positional encoding and masking
Generative pre-trained transformers (GPT)
Language transformation
PyTorch functions

Generative AI Engineering and Fine-Tuning Transformers

Course 4: 8 hours, 4.8 (10 ratings)

What you'll learn

  • Sought-after job-ready skills businesses need for working with transformer-based LLMs for generative AI engineering... in just 1 week.

  • How to perform parameter-efficient fine-tuning (PEFT) using LoRA and QLoRA (see the sketch after this list).

  • How to use pretrained transformers for language tasks and fine-tune them for specific tasks.

  • How to load pretrained models, run inference, and train models with Hugging Face.
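
As a rough illustration of PEFT with LoRA, the sketch below wraps a Hugging Face model with a LoRA adapter using the peft library. The base checkpoint, target modules, and hyperparameters are assumptions chosen for illustration; QLoRA would additionally load the base model in 4-bit precision.

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Assumed base model; any transformer checkpoint with attention projections works.
base_model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

lora_config = LoraConfig(
    r=8,                                 # rank of the low-rank update matrices
    lora_alpha=16,                       # scaling factor for the updates
    lora_dropout=0.05,
    target_modules=["q_lin", "v_lin"],   # attention projections in DistilBERT
    task_type=TaskType.SEQ_CLS,
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable
```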

Skills you'll gain

Fine-tuning LLMs
LoRA and QLoRA
Pretraining transformers
PyTorch
Hugging Face

Course 5

What you'll learn

  • In-demand gen AI engineering skills in fine-tuning LLMs that employers are actively looking for, in just 2 weeks.

  • Instruction-tuning and reward modeling with Hugging Face, plus using LLMs as policies and RLHF.

  • Direct preference optimization (DPO) with the partition function and Hugging Face, and how to derive an optimal solution to a DPO problem (see the loss sketch after this list).

  • How to use proximal policy optimization (PPO) with Hugging Face to create a scoring function and perform dataset tokenization
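
To make the DPO bullet concrete, here is a minimal sketch of the DPO loss on a batch of preference pairs. The log-probabilities are placeholder tensors; in practice they would come from the policy and a frozen reference model, for example via Hugging Face TRL.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # L = -log sigmoid(beta * [(log pi/ref)(chosen) - (log pi/ref)(rejected)])
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Placeholder sequence log-probabilities for a batch of two preference pairs.
loss = dpo_loss(torch.tensor([-12.0, -9.5]), torch.tensor([-14.0, -11.0]),
                torch.tensor([-12.5, -10.0]), torch.tensor([-13.5, -10.5]))
print(loss)
```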

Skills you'll gain

Reinforcement learning
Proximal policy optimization (PPO)
Direct preference optimization (DPO)
Hugging Face
Instruction-tuning

Fundamentals of AI Agents Using RAG and LangChain

Course 6: 6 hours, 4.7 (16 ratings)

What you'll learn

  • In-demand job-ready skills businesses need for building AI agents using RAG and LangChain in just 8 hours.

  • How to apply the fundamentals of in-context learning and advanced methods of prompt engineering to enhance prompt design.

  • Key LangChain concepts, tools, components, chat models, chains, and agents (see the chain sketch after this list).

  • How to apply RAG, PyTorch, Hugging Face, LLMs, and LangChain technologies to different applications.
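
The sketch below shows the kind of LangChain chain the course builds on: a prompt template piped into a chat model and an output parser. It assumes a recent langchain-core release; the ChatOpenAI class and model name are placeholders, since any installed chat-model integration would work.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI  # assumption: any chat model integration works here

prompt = ChatPromptTemplate.from_messages([
    ("system", "You answer questions about {topic} concisely."),
    ("human", "{question}"),
])

llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model; requires an OPENAI_API_KEY

# LangChain Expression Language: pipe prompt -> model -> parser into one chain.
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"topic": "vector databases", "question": "What is an embedding?"}))
```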

Skills you'll gain

Retrieval augmented generation (RAG)
In-context learning and prompt engineering
LangChain
Vector databases
Chatbots

Course 7

What you'll learn

  • Gain practical experience building your own real-world gen AI application that you can talk about in interviews.

  • Get hands-on experience using LangChain to load documents and apply text-splitting techniques with RAG to enhance model responsiveness.

  • Create and configure a vector database to store document embeddings and develop a retriever to fetch document segments based on queries.

  • Set up a simple Gradio interface for model interaction and construct a QA bot using LangChain and an LLM to answer questions from loaded documents (see the sketch after this list).
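
As a rough sketch of the project's overall shape, the code below embeds a few pre-chunked documents into an in-memory vector store, retrieves the closest chunks for a query, and exposes the loop through a Gradio interface. The sentence-transformers embedder and the bare-bones answer step are simplifying assumptions; the course itself uses LangChain, an LLM, and a real vector database.

```python
import gradio as gr
import numpy as np
from sentence_transformers import SentenceTransformer  # assumption: any embedder works

documents = [
    "LoRA adds small low-rank adapter matrices to a frozen base model.",
    "A vector database stores embeddings and supports similarity search.",
    "RAG retrieves relevant chunks and passes them to the LLM as context.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query, k=2):
    # Cosine similarity reduces to a dot product on normalized vectors.
    q = embedder.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(doc_vectors @ q)[::-1][:k]
    return [documents[i] for i in top]

def answer(question):
    # A real QA bot would pass these chunks to an LLM; here we just return them.
    return "\n".join(retrieve(question))

gr.Interface(fn=answer, inputs="text", outputs="text",
             title="Toy RAG retriever").launch()
```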

Skills you'll gain

Generative AI applications
Retrieval augmented generation (RAG)
Vector databases
LangChain
Gradio

Instructors

Sina Nazeri
IBM
2 Courses, 12,310 learners
Fateme Akbari
IBM
4 Courses, 4,934 learners
Wojciech 'Victor' Fulmyk
IBM
4 Courses, 35,921 learners

Offered by

IBM

Why people choose Coursera for their career

Felipe M.
Learner since 2018
"To be able to take courses at my own pace and rhythm has been an amazing experience. I can learn whenever it fits my schedule and mood."
Jennifer J.
Learner since 2020
"I directly applied the concepts and skills I learned from my courses to an exciting new project at work."
Larry W.
Learner since 2021
"When I need courses on topics that my university doesn't offer, Coursera is one of the best places to go."
Chaitanya A.
"Learning isn't just about being better at your job: it's so much more than that. Coursera allows me to learn without limits."

