Saving, Loading, and Deploying Models


  • This feature is in Public Preview.
  • The R API is not supported in the Public Preview, but is under development.

An MLflow Model is a standard format for packaging machine learning models that can be used in a variety of downstream tools; for example, batch inference on Apache Spark and real-time serving through a REST API. The format defines a convention that lets you save a model in different flavors (Python function, PyTorch, scikit-learn, and so on) that can be understood by different model serving and inference platforms.

Saving, loading, and deploying models

Most models are logged to a tracking server using the mlflow.<model-type>.log_model(model, ...) API, loaded using mlflow.<model-type>.load_model(modelpath), and deployed using mlflow.<model-type>.deploy().

See the notebooks in Tracking Examples for examples of saving models and the notebooks below for examples of loading and deploying models.

You can also save models locally and load them in a similar way using the mlflow.<model-type>.save_model(model, modelpath) API. For local models, MLflow requires that modelpath be a DBFS FUSE path. For example, if you use the DBFS location dbfs:/diabetes_models to store diabetes regression models, you must use the model path /dbfs/diabetes_models:

# Save the model under the DBFS FUSE path /dbfs/diabetes_models
modelpath = "/dbfs/diabetes_models/model-%f-%f" % (alpha, l1_ratio)
mlflow.sklearn.save_model(lr, modelpath)