
Duration
7 hours (usually 1 day including breaks)
Requirements
- Experience with TensorFlow
- Experience with the Linux command line
Audience
- Developers
- Data scientists
Overview
TensorFlow Serving is a flexible, high-performance system for serving machine learning (ML) models in production.
In this instructor-led, live training (online or onsite), participants will learn how to configure and use TensorFlow Serving to deploy and manage ML models in a production environment.
By the end of this training, participants will be able to:
- Train, export and serve various TensorFlow models
- Test and deploy algorithms using a single architecture and set of APIs
- Extend TensorFlow Serving to serve model types beyond TensorFlow models
Format of the Course
- Part lecture, part discussion, exercises and heavy hands-on practice
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Course Outline
TensorFlow Serving Overview
- What is TensorFlow Serving?
- TensorFlow Serving architecture
- The gRPC Serving API and the REST client API (see the client sketch below)
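As a first taste of the APIs listed above, here is a minimal sketch of a REST prediction call, assuming a model named my_model is already being served on localhost at the default REST port 8501 (both names are placeholders for illustration):

```python
import requests  # third-party HTTP client (pip install requests)

# TensorFlow Serving's REST API exposes predictions at
# /v1/models/<model_name>:predict on the HTTP port (8501 by default).
url = "http://localhost:8501/v1/models/my_model:predict"

# "instances" carries a batch of inputs; the shape must match the
# model's serving signature. A single 4-feature row is shown here.
payload = {"instances": [[1.0, 2.0, 3.0, 4.0]]}

response = requests.post(url, json=payload)
response.raise_for_status()
print(response.json()["predictions"])
```

The gRPC Serving API covers the same ground at lower latency; the REST variant is shown here because it needs no generated stubs.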
Preparing the Development Environment
- Installing and configuring Docker
- Installing ModelServer with Docker
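To sketch what this module covers, the following uses the Docker SDK for Python (pip install docker) to pull the official tensorflow/serving image and start ModelServer; the model name and the host path are placeholders:

```python
import docker

client = docker.from_env()

# Pull the official TensorFlow Serving image.
client.images.pull("tensorflow/serving")

# Start ModelServer, bind-mounting a SavedModel directory from the host.
# /path/to/my_model is a placeholder; it must contain numeric version
# subdirectories such as /path/to/my_model/1/.
container = client.containers.run(
    "tensorflow/serving",
    detach=True,
    ports={"8501/tcp": 8501},  # REST API port
    volumes={"/path/to/my_model": {"bind": "/models/my_model", "mode": "ro"}},
    environment={"MODEL_NAME": "my_model"},  # the image's entrypoint reads this
)
print(container.short_id)
```

In practice the same container is usually started with a one-line docker run command; the SDK version is shown only to keep the course examples in a single language.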
TensorFlow Serving Quick Start
- Training and exporting a TensorFlow model (see the sketch below)
- Monitoring storage systems for new model versions
- Loading the exported model
- Building a TensorFlow ModelServer
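A compressed version of the quick-start flow referenced above: train a small Keras model and export it as a SavedModel into a numbered version directory, which is the layout ModelServer watches. The architecture, data, and paths are illustrative only, assuming TensorFlow 2.x:

```python
import numpy as np
import tensorflow as tf

# Toy regression data, purely for illustration.
x = np.random.rand(100, 4).astype("float32")
y = np.random.rand(100, 1).astype("float32")

# A minimal Keras model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=2, verbose=0)

# Export as a SavedModel under a numeric version directory;
# ModelServer serves the highest version it finds under my_model/.
tf.saved_model.save(model, "/tmp/my_model/1")
```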
Advanced Configuration
- Writing a model configuration file (example below)
- Reloading the ModelServer configuration
- Configuring models
- Working with the monitoring configuration
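Multi-model setups are driven by a text-format ModelConfigList protobuf. Here is a sketch of generating one from Python, with the ModelServer flags that load and periodically re-read it noted in comments (all names and paths are placeholders):

```python
# models.config is a text-format ModelConfigList protobuf.
CONFIG = """\
model_config_list {
  config {
    name: "my_model"
    base_path: "/models/my_model"
    model_platform: "tensorflow"
  }
  config {
    name: "other_model"
    base_path: "/models/other_model"
    model_platform: "tensorflow"
  }
}
"""

with open("/tmp/models.config", "w") as f:
    f.write(CONFIG)

# ModelServer then loads it via (shell shown as a comment):
#   tensorflow_model_server \
#       --model_config_file=/tmp/models.config \
#       --model_config_file_poll_wait_seconds=60
# The poll flag makes the server re-read the file periodically, which is
# one way to reload configuration without restarting the server.
```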
Testing the Application
- Testing and running the server
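One simple smoke test covered in this module: query the model-status endpoint and assert that the model reports AVAILABLE before sending traffic. Host, port, and model name are assumptions:

```python
import requests

def assert_model_available(host="localhost", port=8501, model="my_model"):
    # GET /v1/models/<name> reports the status of every loaded version.
    url = f"http://{host}:{port}/v1/models/{model}"
    status = requests.get(url, timeout=5).json()
    states = [v["state"] for v in status["model_version_status"]]
    assert "AVAILABLE" in states, f"model not ready: {states}"

assert_model_available()
```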
Debugging the Application
- Handling errors
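On the client side, TensorFlow Serving's REST API returns non-2xx responses with a human-readable "error" field in the JSON body, so a debugging-friendly client surfaces that message rather than the bare status code. A sketch, with the URL as a placeholder:

```python
import requests

PREDICT_URL = "http://localhost:8501/v1/models/my_model:predict"

def predict(instances):
    try:
        response = requests.post(PREDICT_URL, json={"instances": instances}, timeout=10)
    except requests.ConnectionError as exc:
        raise RuntimeError(f"ModelServer unreachable: {exc}") from exc

    if response.status_code != 200:
        # Serving puts the failure reason in an "error" field, e.g. for a
        # shape mismatch or an unknown model name.
        try:
            detail = response.json().get("error", response.text)
        except ValueError:
            detail = response.text
        raise RuntimeError(f"prediction failed ({response.status_code}): {detail}")

    return response.json()["predictions"]
```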
TensorFlow Serving with Kubernetes
- Running in Docker containers
- Deploying serving clusters
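A serving cluster typically boils down to a Deployment of the tensorflow/serving image behind a Service. YAML manifests are the more common route; purely to stay in one language, here is the same idea with the Kubernetes Python client (pip install kubernetes), with every name and the replica count illustrative:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig with cluster access

container = client.V1Container(
    name="tf-serving",
    image="tensorflow/serving",
    ports=[client.V1ContainerPort(container_port=8501)],
    env=[client.V1EnvVar(name="MODEL_NAME", value="my_model")],
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "tf-serving"}),
    spec=client.V1PodSpec(containers=[container]),
)
spec = client.V1DeploymentSpec(
    replicas=2,  # illustrative; size to expected load
    selector=client.V1LabelSelector(match_labels={"app": "tf-serving"}),
    template=template,
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="tf-serving"),
    spec=spec,
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Note that the pods still need the model itself, e.g. baked into a custom image or mounted from shared storage.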
Securing the Application
- Protecting sensitive data (see the sketch below)
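One concrete reading of this topic is keeping request and response data encrypted in transit. ModelServer can terminate TLS on its gRPC port (configured server-side via an SSL configuration file), and the client then connects over a secure channel. This sketch shows only the client side with grpcio; the hostname, port, and CA file are placeholders:

```python
import grpc

# CA certificate used to verify the server (placeholder path).
with open("ca.pem", "rb") as f:
    ca_cert = f.read()

credentials = grpc.ssl_channel_credentials(root_certificates=ca_cert)

# 8500 is ModelServer's default gRPC port.
channel = grpc.secure_channel("serving.example.com:8500", credentials)

# The channel is then handed to the Serving gRPC stub, e.g.
# prediction_service_pb2_grpc.PredictionServiceStub(channel), so that
# prediction traffic is encrypted in transit.
```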
Troubleshooting
Summary and Conclusion