The demand for technical generative AI (GenAI) skills is increasing, and businesses are actively seeking AI engineers who can work with large language models (LLMs). This IBM course is designed to build job-ready skills that can accelerate your AI career.
In this course, you’ll explore transformers and key model frameworks and platforms, including Hugging Face and PyTorch. You’ll begin with a foundational framework for optimizing LLMs and quickly advance to fine-tuning generative AI models. You’ll also learn advanced techniques such as parameter-efficient fine-tuning (PEFT), low-rank adaptation (LoRA), quantized LoRA (QLoRA), and prompting. The hands-on labs will give you valuable, practical experience, including loading, pretraining, and fine-tuning models using industry-standard tools. These skills are directly applicable in real-world AI roles and are great for showcasing in interviews. If you’re ready to take your AI career to the next level and strengthen your resume with in-demand GenAI competencies, enroll today and start applying your new skills in just one week!
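The description above names LoRA but does not include code, so here is an illustrative sketch (not taken from the course) of the core idea: instead of updating a full pretrained weight matrix W, LoRA trains two small matrices A and B of rank r, so the effective weight is W + (alpha/r)·B·A. All variable names and sizes below are hypothetical; the course labs use Hugging Face and PyTorch rather than this NumPy toy.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 64, 64, 8, 16       # rank r much smaller than d_in/d_out

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight (not trained)
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor, small init
B = np.zeros((d_out, r))                    # trainable factor, zero init: adapter starts as a no-op

def lora_forward(x):
    # frozen base path plus low-rank adapter path, scaled by alpha / r
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# with B = 0, the adapted layer reproduces the frozen base exactly
assert np.allclose(lora_forward(x), W @ x)

# parameter savings: train r*(d_in + d_out) values instead of d_in*d_out
print(r * (d_in + d_out), "vs", d_in * d_out)  # 1024 vs 4096
```

Only A and B would receive gradient updates during fine-tuning, which is why LoRA (and its quantized variant QLoRA) fits large models on modest hardware.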
RK
The labs all too often failed on environment issues - packages, version alignment, etc. This should be seamless in your controlled environment.
SS
The coding part in the labs provided in this course was very helpful and helped me to stabilize my learning.
AE
The course is good but lacks depth on complex subjects.
In general, I find the videos very hard to understand due to the mechanical reading of the script and the far too fast tempo; a large number of grammatical errors also detract from the overall readability.
Interesting ways for LLM fine-tuning
One of the best courses for sure!!
Thank you
Awesome
GOOD
I recently completed this course on LLM Fine-Tuning and was impressed by the breadth of topics covered. It strikes a great balance between theoretical foundations and the practical tools currently dominating the industry.

What I liked:
Modern Tech Stack: The course stays relevant by focusing on the Hugging Face Transformers library and PyTorch, which are the gold standard today.
Comprehensive Roadmap: It covers everything from the "why" behind fine-tuning to advanced methodologies like Self-Supervised, Supervised (SFT), and RLHF (Reinforcement Learning from Human Feedback).
Technical Variety: I appreciated the inclusion of diverse techniques. It covers Selective Fine-Tuning (dated for Transformers but great for context), Additive methods, and essential reparameterization techniques like LoRA and QLoRA, which are crucial in the current landscape.
Niche Insights: A big plus for the section on Soft Prompting. It’s a subtle topic that many instructors overlook, yet it’s incredibly useful.

Areas for Improvement: The practical component was the only downside. The labs felt a bit passive; it felt more like "reading through code" rather than actively building. I found the practical videos to be a bit too rushed, making it difficult to fully grasp the implementation details.

Suggestion: The learning experience would be much more engaging with an interactive AI tutor guiding you step-by-step rather than just reading through notebooks.
The automatic translation is not accurate for certain terms.
There are a lot of import errors in the Jupyter notebooks.
The lab constantly fails due to long pip installs and performance issues.
More or less acceptable from the theoretical point of view, absolutely terrible from the practical point of view. Pedagogically, these GenAI courses from IBM are an absolute disaster...
Very difficult to understand the course videos. Far too much technical jargon, which can easily confuse the listener. I found it very frustrating to complete, and the over-emphasis on technical specifics was irritating. Do not recommend!!!!