You can connect business intelligence (BI) tools to Azure Databricks clusters to query data in tables. Every Azure Databricks cluster runs a JDBC/ODBC server on the driver node. This topic covers general installation and configuration instructions for most BI tools. For tool-specific connection instructions, see Business Intelligence Tools.
To access a cluster via JDBC/ODBC, you must have Can Attach To permission. If you connect to a terminated cluster using JDBC/ODBC and have Can Restart permission, the cluster is restarted.
For most BI tools, you need a JDBC or ODBC driver, depending on the tool's requirements, to connect to Azure Databricks clusters.
1. Go to the Databricks JDBC / ODBC Driver Download page.
2. Fill out the form and submit it. You will receive an email that includes multiple download options.
3. In the email, select the driver that you want and download it.
4. Install the driver. For JDBC, a JAR is provided, which does not require installation. For ODBC, an installation package is provided for your chosen platform and must be installed on your system.
5. Configure your BI tool to use the driver: depending on the tool, point it to the JAR (JDBC) or to the installed library (ODBC).
Here are some of the parameters a JDBC/ODBC driver might require:
| Parameter | Value |
| --- | --- |
| Username/password | See Username and password. |
| HTTP Path | See Construct the JDBC URL. |

The following are usually specified in the `httpPath` for JDBC and in the DSN configuration for ODBC:

| Parameter | Value |
| --- | --- |
| Spark Server Type | Spark Thrift Server |
| Authentication Mechanism (`AuthMech`) | Username and password authentication |

The following parameter is for performance; ask your vendor to change it if you cannot access it:

| Parameter | Value |
| --- | --- |
| (Batch) Fetch Size | 100000 |
- To turn off SSL, set `SSL=0` (the DSN example below uses `SSL=1`).
- To use binary transport, set `ThriftTransport=0` (instead of `2`, which selects HTTP).
To establish the connection, you use a personal access token to authenticate to the cluster gateway:
On the cluster detail page, click the JDBC/ODBC tab. It contains the hostname, port, protocol, and HTTP path.
Construct a JDBC connection string (URL) from these values.
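A template, assuming the Simba Spark JDBC driver (the `jdbc:spark:` prefix and the parameter names are specific to that driver; consult your driver's documentation if you use another one):

```
jdbc:spark://<server-hostname>:<port>/default;transportMode=http;ssl=1;httpPath=<http-path>;AuthMech=3;UID=token;PWD=<personal-access-token>
```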
Replace `<server-hostname>`, `<port>`, and `<http-path>` with the values from the cluster detail page, set `UID` to the string `token`, and replace `<personal-access-token>` with your personal access token.
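For a quick smoke test outside a BI tool, you can open a connection from a minimal Java client. This is a sketch, assuming the Simba JDBC JAR from the download step is on the classpath and the URL is filled in as described above:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DatabricksJdbcSmokeTest {
    public static void main(String[] args) throws Exception {
        // Replace the placeholders with the values from the cluster detail page
        // and with your personal access token.
        String url = "jdbc:spark://<server-hostname>:<port>/default;transportMode=http;"
                + "ssl=1;httpPath=<http-path>;AuthMech=3;UID=token;PWD=<personal-access-token>";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1)); // prints 1 when the round trip succeeds
            }
        }
    }
}
```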
The Data Source Name (DSN) configuration contains the parameters for communicating with a specific database. BI tools like Tableau usually provide a friendly user interface for entering these parameters. If you have to install and manage the Simba ODBC driver yourself, you might need to create the configuration files and also allow your Driver Manager (odbc32.dll on Windows and unixODBC/iODBC on Unix) to access them.
After you download and install the Simba ODBC driver, create two files, `/etc/odbc.ini` and `/etc/odbcinst.ini`. The content in `/etc/odbc.ini` can be:
```
[Databricks-Spark-2-x]
Driver=Simba
Server=<server-hostname>
HOST=<server-hostname>
PORT=<port>
SparkServerType=3
Schema=default
ThriftTransport=2
SSL=1
AuthMech=3
UID=token
PWD=<personal-access-token>
HTTPPath=<http-path>
```
The content in `/etc/odbcinst.ini` can be:
```
[ODBC Drivers]
Simba = Installed

[Simba]
Driver = <driver-path>
```
Set `<driver-path>` according to the type of operating system you chose when you downloaded the driver; the installation package places the driver library at a platform-specific path.
You can specify the paths of the two files in your environment variables so that they can be used by the Driver Manager:
```
export ODBCINI=/etc/odbc.ini
export ODBCSYSINI=/etc/odbcinst.ini
export SIMBASPARKINI=<simba-ini-path>/simba.sparkodbc.ini # Contains the configuration for debugging the Simba driver
```
- Fetching the result set is slow after statement execution

After a query executes, you can fetch result rows by calling the `next()` method on the returned `ResultSet` repeatedly. Each call past the buffered rows triggers a request to the Thrift server on the cluster's driver node to fetch the next batch. The size of this batch significantly affects performance. The default value in most JDBC/ODBC drivers is too conservative, and we recommend that you set it to at least 100,000. Contact the BI tool provider if you cannot access this configuration; a sketch of setting it from a JDBC client follows below.
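If you control the JDBC client code directly, you can raise the batch size through the standard `java.sql` fetch-size hint. A minimal sketch, assuming the driver honors the hint and using a hypothetical table name:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FetchSizeExample {
    public static void main(String[] args) throws SQLException {
        // "<jdbc-url>" stands for the connection URL constructed earlier.
        try (Connection conn = DriverManager.getConnection("<jdbc-url>");
             Statement stmt = conn.createStatement()) {
            // Ask the driver to fetch rows in large batches; each next() call
            // past the buffered rows then triggers one large request instead
            // of many small ones.
            stmt.setFetchSize(100000);
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM default.my_table")) {
                while (rs.next()) {
                    // process the current row, e.g. rs.getString(1)
                }
            }
        }
    }
}
```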
- Timeout/Exception when creating the connection
Once you have the server hostname, you can run the following test from a terminal to check connectivity to the endpoint.
```
curl https://<server-hostname>:<port>/sql/protocolv1/o/0/<cluster-id> -H "Authorization: Basic $(echo -n 'token:<personal-access-token>' | base64)"
```
If the connection times out, check that your network settings are correct.
If the response contains a `TTransportException` like the following (this error is expected), the gateway is functioning properly and you have passed in valid credentials. If you are not able to connect with the same credentials, check that the client you are using is properly configured and is using the latest Simba drivers (version >= 1.2.0):
```
<h2>HTTP ERROR: 500</h2>
<p>Problem accessing /cliservice. Reason:
<pre>    javax.servlet.ServletException: org.apache.thrift.transport.TTransportException</pre></p>
```
- Referencing temporary views
If the response contains the message `Table or view not found: SPARK..temp_view`, a temporary view is not properly referenced in the client application. Simba has an internal configuration parameter called `UseNativeQuery` that decides whether the query is translated before being submitted to the Thrift server. By default, the parameter is set to 0, in which case Simba can modify the query. In particular, Simba creates a custom `#temp` schema for temporary views and expects the client application to reference a temporary view with this schema. You can avoid this special alias by setting `UseNativeQuery=1`, which prevents Simba from modifying the query and makes it send the query directly to the Thrift server. In that case, the client must make sure that queries are written in the dialect that Spark expects, that is, HiveQL.
To sum up, you have the following options to handle temporary views over Simba and Spark:
- Set `UseNativeQuery=0` and reference the view by prefixing its name with `#temp.` (for example, `SELECT * FROM #temp.temp_view`).
- Set `UseNativeQuery=1` and make sure the query is written in the dialect that Spark expects (HiveQL); see the sketch after this list.
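As an illustration of the second option for the ODBC driver, the `UseNativeQuery` parameter discussed above can be added to the DSN definition. A sketch extending the `/etc/odbc.ini` entry shown earlier (the elided lines are unchanged):

```
[Databricks-Spark-2-x]
...
UseNativeQuery=1
```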
- Other errors
If you get the error `401 Unauthorized`, check the credentials you are using:
```
<h2>HTTP ERROR: 401</h2>
<p>Problem accessing /sql/protocolv1/o/0/test-cluster. Reason:
<pre>    Unauthorized</pre></p>
```
Verify that the username is `token` (not your username) and that the password is a personal access token (it should start with `dapi`).
Responses such as `404 Not Found` usually indicate problems with locating the specified cluster:
```
<h2>HTTP ERROR: 404</h2>
<p>Problem accessing /sql/protocolv1/o/0/missing-cluster. Reason:
<pre>    RESOURCE_DOES_NOT_EXIST: No cluster found matching: missing-cluster</pre></p>
```