
Learner Reviews & Feedback for Generative AI Engineering and Fine-Tuning Transformers by IBM

4.5 stars (113 ratings)

About the Course

The demand for technical generative AI (GenAI) skills is increasing, and businesses are actively seeking AI engineers who can work with large language models (LLMs). This IBM course is designed to build job-ready skills that can accelerate your AI career. In this course, you’ll explore transformers and key model frameworks and platforms, including Hugging Face and PyTorch. You’ll begin with a foundational framework for optimizing LLMs and quickly advance to fine-tuning generative AI models. You’ll also learn advanced techniques such as parameter-efficient fine-tuning (PEFT), low-rank adaptation (LoRA), quantized LoRA (QLoRA), and prompting. The hands-on labs will give you valuable, practical experience, including loading, pretraining, and fine-tuning models using industry-standard tools. These skills are directly applicable in real-world AI roles and are great for showcasing in interviews. If you’re ready to take your AI career to the next level and strengthen your resume with in-demand GenAI competencies, enroll today and start applying your new skills in just one week!
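To give a sense of what the LoRA technique named above involves, here is a minimal, illustrative sketch of the core idea in plain NumPy (not course material, and dimensions are made up): a frozen weight matrix W is left untouched, and only a small low-rank update B @ A is trained.

```python
import numpy as np

# Illustrative sketch of low-rank adaptation (LoRA). A frozen pretrained
# weight W of shape (d_out, d_in) is augmented with a trainable low-rank
# update: W_eff = W + (alpha / r) * B @ A. All dimensions are hypothetical.

d_out, d_in, r, alpha = 768, 768, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # zero init: no change at start

W_eff = W + (alpha / r) * (B @ A)

# The payoff: far fewer trainable parameters than full fine-tuning.
full_params = d_out * d_in          # 589,824
lora_params = r * (d_in + d_out)    # 12,288 (~2% of the full matrix)
print(full_params, lora_params)
```

Because B starts at zero, the adapted weight initially equals the pretrained one, so training starts from the base model's behavior.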

Top reviews

RK

Jan 16, 2025

The labs all too often failed on environment issues - packages, version alignment, etc. This should be seamless in your controlled environment.

SS

Nov 16, 2024

The coding part in the labs provided in this course was very helpful and helped me to stabilize my learning.


1 - 15 of 15 Reviews for Generative AI Engineering and Fine-Tuning Transformers

By Roger K

Jan 17, 2025

The labs all too often failed on environment issues - packages, version alignment, etc. This should be seamless in your controlled environment.

By Alexandre E

Jan 2, 2025

The course is good but lacks depth on complex subjects.

By Jens H

Jan 25, 2025

In general, I find the videos very hard to understand due to the mechanical reading of the text and the far too fast pace, and a large number of grammatical errors detract from the overall readability.

By Sajjad

Nov 17, 2024

The coding part in the labs provided in this course was very helpful and helped me to stabilize my learning.

By Mike S

Oct 1, 2025

Interesting approaches to LLM fine-tuning.

By Geetika P

Mar 26, 2025

One of the best courses for sure!!

By Eman H

Jun 17, 2025

Thank you

By Uwiragiye B

Dec 4, 2024

Awesome

By Deepa d

Dec 1, 2025

GOOD

By Aurelio M

May 8, 2026

I recently completed this course on LLM fine-tuning and was impressed by the breadth of topics covered. It strikes a great balance between theoretical foundations and the practical tools currently dominating the industry.

What I liked:

Modern Tech Stack: The course stays relevant by focusing on the Hugging Face Transformers library and PyTorch, which are the gold standard today.

Comprehensive Roadmap: It covers everything from the "why" behind fine-tuning to advanced methodologies like Self-Supervised, Supervised (SFT), and RLHF (Reinforcement Learning from Human Feedback).

Technical Variety: I appreciated the inclusion of diverse techniques. It covers Selective Fine-Tuning (dated for Transformers but great for context), Additive methods, and essential reparameterization techniques like LoRA and QLoRA, which are crucial in the current landscape.

Niche Insights: A big plus for the section on Soft Prompting. It’s a subtle topic that many instructors overlook, yet it’s incredibly useful.

Areas for Improvement: The practical component was the only downside. The labs felt a bit passive; it felt more like "reading through code" than actively building. I found the practical videos a bit too rushed, making it difficult to fully grasp the implementation details.

Suggestion: The learning experience would be much more engaging with an interactive AI tutor guiding you step by step rather than just reading through notebooks.
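The "Soft Prompting" the review above praises can be sketched in a few lines. This is an illustrative toy example, not from the course: instead of updating model weights, a small matrix of trainable "virtual token" embeddings is prepended to the frozen input embeddings, and only those prompt vectors receive gradients. All names and dimensions here are made up.

```python
import numpy as np

# Illustrative sketch of soft prompting (prompt tuning). The model's
# token embeddings stay frozen; only `soft_prompt` would be trained.

d_model = 64   # embedding dimension (hypothetical)
k = 10         # number of soft-prompt "virtual tokens"
seq_len = 5    # length of the real token sequence

rng = np.random.default_rng(0)
token_embeddings = rng.standard_normal((seq_len, d_model))  # frozen
soft_prompt = rng.standard_normal((k, d_model)) * 0.01      # trainable

# The model consumes the concatenation; during training, gradients
# flow only into soft_prompt, leaving the base model untouched.
model_input = np.concatenate([soft_prompt, token_embeddings], axis=0)
print(model_input.shape)  # (15, 64)
```

Because the trainable state is just k * d_model numbers, each downstream task can keep its own tiny prompt while sharing one frozen base model.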

By FRANCISCO J M S

Sep 30, 2025

The automatic translation is not accurate for certain terms.

By Óscar Z R

Jan 25, 2026

There are a lot of import errors in the Jupyter notebooks.

By john l

Aug 17, 2025

The labs constantly fail due to slow pip installs and performance issues.

By Aitor C M d G

Apr 27, 2026

More or less acceptable from the theoretical point of view, absolutely terrible from the practical point of view. Pedagogically, these GenAI courses from IBM are an absolute disaster...

By Conor J C

Aug 10, 2025

Very difficult to understand the course videos. Far too much technical jargon, which can easily confuse the listener. I found it very frustrating to complete and was irritated by the over-emphasis on technical specifics. Do not recommend!!!!