This section shows how to work with data in Azure Databricks.

You can import data into the Databricks File System (DBFS), a distributed file system that is mounted into your Azure Databricks workspace and available on its clusters.
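Files in DBFS are addressed two ways on a cluster: Spark APIs use the `dbfs:/` URI scheme, while local file APIs on the driver see the same files under the `/dbfs/` mount. A minimal sketch of that mapping (the helper name and example path are illustrative, not part of any Databricks API):

```python
def dbfs_to_local(uri: str) -> str:
    """Map a dbfs:/ URI to the /dbfs/ mount path used by local file APIs."""
    prefix = "dbfs:/"
    if not uri.startswith(prefix):
        raise ValueError(f"not a DBFS URI: {uri}")
    return "/dbfs/" + uri[len(prefix):]

# Spark APIs read the URI form; open() and os.* read the mounted form.
spark_path = "dbfs:/FileStore/tables/diamonds.csv"
local_path = dbfs_to_local(spark_path)  # "/dbfs/FileStore/tables/diamonds.csv"
```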

You can create tables directly from imported data; the table schema is stored in the default Azure Databricks internal metastore. You can also configure and use external metastores.
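Creating a table over imported data is what registers its schema with the metastore. In a notebook this is typically a single SQL statement; the sketch below just assembles such a statement in Python so the pieces are visible (the table name, format, and path are hypothetical):

```python
def create_table_ddl(table: str, fmt: str, location: str, header: bool = True) -> str:
    """Build a CREATE TABLE statement that registers existing files with the metastore."""
    options = f'OPTIONS (header "{str(header).lower()}", inferSchema "true")'
    return (
        f"CREATE TABLE IF NOT EXISTS {table} "
        f"USING {fmt} {options} "
        f'LOCATION "{location}"'
    )

ddl = create_table_ddl("diamonds", "CSV", "dbfs:/FileStore/tables/diamonds.csv")
# In a notebook you would execute it with: spark.sql(ddl)
```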

You can also use a wide variety of Apache Spark data sources to access data in your notebooks.
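As one example of a Spark data source, the JDBC reader takes its connection details as an option map passed to `spark.read.format("jdbc")`. The sketch below assembles such a map for an Azure SQL Database (the server, database, table, and credentials are placeholders):

```python
def jdbc_options(server: str, database: str, table: str,
                 user: str, password: str) -> dict:
    """Option map for Spark's JDBC data source, here targeting Azure SQL Database."""
    return {
        "url": f"jdbc:sqlserver://{server}.database.windows.net:1433;database={database}",
        "dbtable": table,
        "user": user,
        "password": password,
    }

opts = jdbc_options("myserver", "mydb", "SalesLT.Product", "admin_user", "<password>")
# In a notebook: df = spark.read.format("jdbc").options(**opts).load()
```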