Model Packaging and Deployment with MLflow Models

Note

This section describes MLflow features that are in Private Preview. To request access to the preview, contact your Azure Databricks sales representative. If you are not participating in the preview, see the MLflow open-source documentation for information on how to run standalone MLflow.

An MLflow Model is a standard format for packaging machine learning models that can be used in a variety of downstream tools—for example, batch inference on Apache Spark and real-time serving through a REST API. The format defines a convention that lets you save a model in different flavors that can be understood by different model serving and inference platforms.

This topic provides examples of using a model for inference and deploying models to Azure ML.

Quick start model inference

This notebook is part 2a of a Quick Start guide based on the MLflow tutorial. As in part 1, Quick start training, this notebook uses ElasticNet models trained on the diabetes dataset in scikit-learn. This part of the tutorial shows how to:

  • Select a model to deploy using the MLflow tracking UI
  • Load the trained model as a scikit-learn model
  • Export the model as a PySpark UDF
  • Apply the UDF to add a prediction column to a DataFrame

Quick start model deployment

This notebook is part 2b of the same Quick Start guide based on the MLflow tutorial. As in part 1, Quick start training, this notebook uses ElasticNet models trained on the diabetes dataset in scikit-learn. This part of the tutorial shows how to:

  • Select a model to deploy using the MLflow tracking UI
  • Deploy the model to Azure ML using the MLflow API
  • Query the deployed model
  • Repeat the deployment and query process for another model
  • Delete the deployment using the MLflow API
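The query step above amounts to an HTTP POST of serialized input rows to the deployment's scoring URI. The sketch below builds a request body from the diabetes features; it assumes an endpoint that accepts pandas split-oriented JSON (newer MLflow versions wrap the same payload under a `dataframe_split` key), and the scoring URI in the comments is a hypothetical placeholder, not a real endpoint.

```python
# A sketch of querying a deployed model over REST. The scoring URI below
# is a hypothetical placeholder; only the payload construction runs here.
import json

from sklearn.datasets import load_diabetes

X, _ = load_diabetes(return_X_y=True, as_frame=True)
sample = X.head(2)

# Build the request body from the first two rows of the dataset.
payload = json.dumps(
    {"dataframe_split": json.loads(sample.to_json(orient="split"))}
)
print(payload[:80])

# With the deployment's scoring URI in hand (not run here):
#   import requests
#   resp = requests.post(
#       "https://<hypothetical-scoring-uri>/score",  # placeholder
#       data=payload,
#       headers={"Content-Type": "application/json"},
#   )
#   print(resp.json())
```

Sending the same payload to two different deployments is how the notebook compares models before deleting the one it no longer needs.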