TensorFlow Serving with Docker for Model Deployment

4.9 stars (15 ratings)
Offered by Coursera Project Network
2,129 already enrolled
In this Guided Project, you will:

Train and export TensorFlow models for text classification

Serve and deploy models with TensorFlow Serving and Docker

Perform model inference with gRPC and REST endpoints

1.5 hours
Intermediate
No download needed
Split-screen video
English
Desktop only

This is a hands-on, guided project on deploying deep learning models using TensorFlow Serving with Docker. In this 1.5-hour project, you will train and export TensorFlow models for text classification, learn how to deploy models with TF Serving and Docker in 90 seconds, and build simple gRPC- and REST-based clients in Python for model inference.

With the worldwide adoption of machine learning and AI by organizations, it is becoming increasingly important for data scientists and machine learning engineers to know how to deploy models to production. While DevOps teams are excellent at scaling applications, they are not experts in ML ecosystems such as TensorFlow and PyTorch. This guided project gives you a solid, real-world foundation for pushing TensorFlow models from development to production in no time.

Prerequisites: To successfully complete this project, you should be familiar with Python and have prior experience building models with Keras or TensorFlow.

Note: This course works best for learners who are based in the North America region. We're currently working on providing the same experience in other regions.
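
To give a feel for the workflow described above, here is a minimal sketch of the export-and-serve steps, assuming TensorFlow 2.x with tf.keras; the model, directory, and model name below are illustrative placeholders, not taken from the project notebook:

    import os
    import tensorflow as tf

    # Placeholder standing in for the trained text classifier built in the project.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

    # TensorFlow Serving loads SavedModels (protobuf) from numbered version subdirectories.
    export_base = "amazon_review_classifier"          # illustrative model name
    export_path = os.path.join(export_base, "1")      # version 1
    tf.saved_model.save(model, export_path)           # writes saved_model.pb + variables/
    # (newer Keras releases recommend model.export(export_path) for the same purpose)

    # The exported directory can then be served with the official Docker image,
    # roughly as in the TensorFlow Serving documentation:
    #
    #   docker run -p 8500:8500 -p 8501:8501 \
    #     --mount type=bind,source=$(pwd)/amazon_review_classifier,target=/models/amazon_review_classifier \
    #     -e MODEL_NAME=amazon_review_classifier -t tensorflow/serving
    #
    # Port 8501 exposes the REST API and port 8500 the gRPC API.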

Skills you will develop

Deep Learning · Docker · TensorFlow Serving · TensorFlow · Model Deployment

Learn step-by-step

In a video that plays in a split-screen with your work area, your instructor will walk you through these steps:

  1. Introduction and Demo Deployment

  2. Load and Preprocess the Amazon Fine Foods Review Data

  3. Build Text Classification Model using Keras and TensorFlow Hub

  4. Define Training Procedure

  5. Train and Export Model as Protobuf

  6. Test Model

  7. TensorFlow Serving with Docker

  8. Set up a REST Client to Perform Model Predictions (see the REST client sketch after this list)

  9. Set up a gRPC Client to Perform Model Predictions (see the gRPC client sketch after this list)

  10. Versioning with TensorFlow Serving (see the versioning sketch after this list)
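
As a companion to step 8, a hedged sketch of a REST client against TensorFlow Serving's HTTP predict endpoint; the model name, port, and payload are assumptions that must match your running container and the exported model's signature (the project's text classifier accepts raw review strings):

    import json
    import requests  # third-party: pip install requests

    MODEL_NAME = "amazon_review_classifier"  # assumed; must match MODEL_NAME passed to Docker
    URL = f"http://localhost:8501/v1/models/{MODEL_NAME}:predict"

    # The REST predict API accepts a JSON body with an "instances" list.
    payload = {"instances": ["This product tastes great!", "Terrible, would not buy again."]}

    response = requests.post(URL, data=json.dumps(payload))
    response.raise_for_status()
    print(response.json()["predictions"])  # one score per review for a binary classifier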
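
Likewise for step 9, a minimal gRPC client sketch using the grpcio and tensorflow-serving-api packages; the model name, input key, and signature name are assumptions that depend on how the model was exported (saved_model_cli show --dir <export_path> --all reveals the actual names):

    import grpc
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    # 8500 is the gRPC port exposed by the Serving container.
    channel = grpc.insecure_channel("localhost:8500")
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = "amazon_review_classifier"   # assumed model name
    request.model_spec.signature_name = "serving_default"  # default Keras export signature

    reviews = ["This product tastes great!", "Terrible, would not buy again."]
    # The input key must match the exported signature's input name (assumed here).
    request.inputs["input_1"].CopyFrom(tf.make_tensor_proto(reviews))

    result = stub.Predict(request, timeout=10.0)
    print(result.outputs)  # map of output names to TensorProtos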
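
Finally, for step 10, TensorFlow Serving's versioning is driven by the numbered subdirectories under the model's base path: by default it loads and serves the highest version it finds, and older versions remain addressable through the REST URL scheme. A short sketch continuing the illustrative names used above:

    import os
    import tensorflow as tf

    # Placeholder standing in for a retrained or updated model.
    model_v2 = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

    # Writing to a higher-numbered subdirectory publishes a new version;
    # the running Serving container picks it up and makes it the default.
    export_base = "amazon_review_classifier"
    tf.saved_model.save(model_v2, os.path.join(export_base, "2"))

    # A specific version can still be queried over REST:
    #   http://localhost:8501/v1/models/amazon_review_classifier/versions/1:predict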

How Guided Projects work

Your workspace is a cloud desktop right in your browser; no download is required

In a split-screen video, your instructor guides you step-by-step

Frequently asked questions

More questions? Visit the Learner Help Center.