
Learner Reviews & Feedback for Build, Train, and Deploy ML Pipelines using BERT by DeepLearning.AI

12 ratings
1 review

About the Course

In the second course of the Practical Data Science Specialization, you will learn to automate a natural language processing task by building an end-to-end machine learning pipeline using Hugging Face's highly optimized implementation of the state-of-the-art BERT algorithm with Amazon SageMaker Pipelines. Your pipeline will first transform the dataset into BERT-readable features and store those features in the Amazon SageMaker Feature Store. It will then fine-tune a text classification model on the dataset using a Hugging Face pre-trained model, which has learned to understand human language from millions of Wikipedia documents. Finally, your pipeline will evaluate the model's accuracy and deploy the model only if the accuracy exceeds a given threshold.

Practical data science is geared towards handling massive datasets that do not fit on your local hardware and may originate from multiple sources. One of the biggest benefits of developing and running data science projects in the cloud is the agility and elasticity the cloud offers to scale up and out at minimum cost. The Practical Data Science Specialization helps you develop the practical skills to effectively deploy your data science projects and overcome challenges at each step of the ML workflow using Amazon SageMaker. This Specialization is designed for data-focused developers, scientists, and analysts who are familiar with the Python and SQL programming languages and who want to learn how to build, train, and deploy scalable, end-to-end ML pipelines - both automated and human-in-the-loop - in the AWS cloud.
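The pipeline flow described above (feature preparation, fine-tuning, evaluation, and a conditional deployment gate) can be sketched in plain Python. This is a minimal illustration of the control flow only, not the SageMaker Pipelines or Hugging Face API: all function names (`prepare_features`, `fine_tune`, `evaluate`, `run_pipeline`) and the threshold value are hypothetical placeholders.

```python
# Minimal sketch of the pipeline's control flow. All names and values
# here are hypothetical stand-ins, not actual SageMaker or Hugging Face APIs.

ACCURACY_THRESHOLD = 0.90  # hypothetical deployment gate


def prepare_features(raw_texts):
    """Stand-in for BERT feature engineering (tokenization, input IDs)."""
    return [text.lower().split() for text in raw_texts]


def fine_tune(features, labels):
    """Stand-in for fine-tuning a pre-trained classifier.

    A trivial "model" that always predicts the majority label.
    """
    majority = max(set(labels), key=labels.count)
    return lambda example: majority


def evaluate(model, features, labels):
    """Fraction of examples the model classifies correctly."""
    correct = sum(model(x) == y for x, y in zip(features, labels))
    return correct / len(labels)


def run_pipeline(raw_texts, labels):
    """Feature prep -> training -> evaluation -> conditional deployment."""
    features = prepare_features(raw_texts)
    model = fine_tune(features, labels)
    accuracy = evaluate(model, features, labels)
    deployed = accuracy >= ACCURACY_THRESHOLD  # deploy only above the gate
    return accuracy, deployed


texts = ["great product", "awful service", "great value", "great price"]
labels = ["positive", "negative", "positive", "positive"]
acc, deployed = run_pipeline(texts, labels)
print(acc, deployed)  # 0.75 False: 3/4 correct, below the 0.90 gate
```

In the real course, the conditional deployment step corresponds to SageMaker Pipelines' condition step, which compares the evaluation metric against a threshold before registering and deploying the model.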

1 - 2 of 2 Reviews for Build, Train, and Deploy ML Pipelines using BERT

By Israel T

Jun 19, 2021

Great as an introduction to the AWS SageMaker tools. But if you really want to dive deeper into the tools, you need to explore other resources, since most of the code is already provided in the exercises.

By Magnus M

Jun 14, 2021

The videos are excellent. The labs are way too easy, just copying some variable names.