Quick Start


  • This feature is in Public Preview.
  • The R API is not supported in the Public Preview, but is under development.

MLflow is an open source platform for managing the end-to-end machine learning lifecycle. It has three primary components: Tracking, Models, and Projects. The MLflow Tracking component lets you log and query machine learning model training sessions (runs) using Java, Python, R, and REST APIs. An MLflow run is a collection of parameters, metrics, tags, and artifacts associated with a machine learning model training process.

Experiments are the primary unit of organization in MLflow – all MLflow runs belong to an experiment. Each experiment lets you visualize, search, and compare runs, as well as download run artifacts or metadata for analysis in other tools. Experiments are maintained in an Azure Databricks-hosted MLflow tracking server.

Experiments are located in the Workspace file tree. You manage experiments using the same tools you use to manage other workspace objects, such as folders, notebooks, and libraries. The /Shared/experiments folder is for sharing experiments across your organization.


The following notebooks provide a quick start that demonstrates how to create and log to an MLflow run using MLflow's tracking APIs, as well as how to use the experiment UI to view the run. These notebooks are available in both Python and Scala.


The notebooks assume that you have a /Shared/experiments folder.

  1. Go to the Shared folder. See Special folders.
  2. If you do not have an experiments subfolder, select Create > Folder.
  3. Enter experiments.
  4. Click Create Folder.