What's Happening

Containerizing Neural Networks

During the 2018 Build keynote, Microsoft talked about deploying machine learning models in containers using Docker and Kubernetes.

Machine Learning Model Considerations

  1. Handoff from Data Scientists to DevOps
  2. Portability
  3. Scalability
  4. Ease of Model Refreshing

Preparing a machine learning model for production requires taking several factors into account. There needs to be a clean way to hand off models from the Data Science team to the DevOps team. Making the model portable lets it be deployed quickly and updated easily. The deployment should scale smoothly as usage grows. Finally, there needs to be an established cadence for refreshing the model, along with an easy way to roll back to a previous version.

AI Containers

The typical output of neural network training is a serialized model and the trained weights for that model. To prepare it for use, the model is deserialized and the weights are loaded. We can store the model and its weights in a Docker container separate from the service that actually performs predictions. This allows each to be versioned independently, which opens up a lot of possibilities. The model and the service consuming it can be updated on their own schedules. Containerization also opens the door to using Kubernetes for scale, or to pushing the container to all of your edge devices via Azure IoT Edge. It even allows for A/B testing your models to see which performs better with your customer base.
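To make the split concrete, here is a minimal sketch of the service side, assuming tf.keras and Flask, with the model container exposing its artifacts at /models/current (the paths, file names, and frameworks are illustrative assumptions, not details from the talk):

    # prediction_service.py - a minimal prediction service that deserializes
    # a model shipped in a separate model container, mounted at /models/current.
    import numpy as np
    from flask import Flask, jsonify, request
    from tensorflow.keras.models import model_from_json

    MODEL_DIR = "/models/current"  # assumption: volume shared with the model container

    # Deserialize the model architecture, then apply the trained weights.
    with open(f"{MODEL_DIR}/architecture.json") as f:
        model = model_from_json(f.read())
    model.load_weights(f"{MODEL_DIR}/weights.h5")

    app = Flask(__name__)

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expect a JSON body like {"inputs": [[...], [...]]}.
        features = np.array(request.get_json()["inputs"])
        predictions = model.predict(features)
        return jsonify({"predictions": predictions.tolist()})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)

Because the service container holds only this code and its dependencies, refreshing the model means shipping a new model container, and rolling back means pointing at the previous one.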

For all of these reasons, best practices have emerged for packaging your neural network as a container.
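On the training side, the handoff artifact is just the serialized architecture and trained weights. Here is a matching sketch of the export step, again with assumed file names that line up with the service above:

    # export_model.py - the training job's export step: write the serialized
    # architecture and trained weights into a directory that is then baked
    # into (or mounted by) the versioned model container.
    import os
    from tensorflow.keras import layers, models

    # Stand-in for a real trained network; in practice this comes out of
    # the Data Science team's training run.
    model = models.Sequential([
        layers.Dense(16, activation="relu", input_shape=(4,)),
        layers.Dense(1, activation="sigmoid"),
    ])

    EXPORT_DIR = "model_artifacts"  # assumption: copied into the model image
    os.makedirs(EXPORT_DIR, exist_ok=True)

    # Serialize architecture and weights separately, matching what the
    # prediction service deserializes.
    with open(os.path.join(EXPORT_DIR, "architecture.json"), "w") as f:
        f.write(model.to_json())
    model.save_weights(os.path.join(EXPORT_DIR, "weights.h5"))

Tagging the resulting model image with a version is what makes the rollbacks and A/B tests described above practical.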

I’ve got a couple of events coming up in Milwaukee and Madison. I’m giving my Intro to Artificial Neural Networks talk at the Cream City Code conference at Potawatomi Casino on Oct. 13th, and at a Madison meetup at Herzing University on Oct. 17th. I hope to see you there.
