Chapter 7. Advanced Connector Examples
Now you know what functionality connectors provide to Trino and how to configure them from Chapter 6. Let's expand that knowledge to some of the more complex usage scenarios and connectors. These are typically connectors that need to be smart enough to translate storage patterns and concepts from the underlying data source that do not easily map to the table-oriented model of SQL and Trino.
Learn more by jumping right to the section about the system you want to connect to with Trino and query with SQL:
Then you can round out your understanding by learning about query federation and the related ETL usage in "Query Federation in Trino".
Connecting to HBase with Phoenix
The distributed, scalable, big data store Apache HBase builds on top of HDFS. Users are, however, not restricted to low-level HBase access or to reading the underlying HDFS files with the Hive connector. The Apache Phoenix project provides a SQL layer on top of HBase, and thanks to the Trino Phoenix connector, you can therefore access HBase databases from Trino just like any other data source.
As usual, you simply need a catalog file like etc/catalog/bigtables.properties:
connector.name=phoenix5
phoenix.connection-url=jdbc:phoenix:zookeeper1,zookeeper2:2181:/hbase
...
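Once the catalog is configured, tables exposed by Phoenix can be queried like any other Trino data source. The following is a hypothetical sketch: the catalog name bigtables comes from the properties filename above, but the schema and table names (default, users) and columns are assumptions for illustration only:

-- List the schemas the Phoenix connector exposes in this catalog
SHOW SCHEMAS FROM bigtables;

-- Query a hypothetical Phoenix table like any other Trino table
SELECT user_id, count(*) AS visit_count
FROM bigtables.default.users
GROUP BY user_id
ORDER BY visit_count DESC;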