How to get a Hive table name from an HDFS location path, without connecting to the metastore?

I know how to get a Hive table name from an HDFS location through the metastore. For example, if I need to get the table name for the HDFS location hdfs://

  1. I connect to the Hive metastore using JDBC.
  2. I run a query joining the metastore tables TBLS and SDS; SDS.LOCATION holds the table's location, and TBLS.TBL_NAME gives the table name.
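For reference, the query in step 2 can be sketched roughly as follows. The join on SD_ID and the column names come from the standard metastore schema; the exact schema can differ between Hive versions, and the location placeholder is hypothetical:

```sql
-- Find the table whose storage descriptor points at a given HDFS location
SELECT t.TBL_NAME
FROM TBLS t
JOIN SDS s ON t.SD_ID = s.SD_ID
WHERE s.LOCATION = 'hdfs://<namenode>/<path>';
```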

But I need another way to get the table name.

Is it possible?


This way we can get the table name for a given HDFS location... :)

import java.util.List;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.ql.metadata.Table;
// on older Hive releases this is org.apache.hcatalog.common.HCatUtil
import org.apache.hive.hcatalog.common.HCatUtil;

HiveConf hc = new HiveConf(YourClass.class);
hc.set("hive.metastore.local", "false");
// set "hive.metastore.uris" to the value from your hive-site.xml
hc.set("hive.metastore.uris", "thrift://");
hc.setBoolVar(HiveConf.ConfVars.METASTORE_USE_THRIFT_SASL, false);
HiveMetaStoreClient hiveClient = HCatUtil.getHiveClient(hc);
// get all tables in the "default" database
List<String> tables = hiveClient.getAllTables("default");
String path = "hdfs://"; // HDFS path whose table name we want to find
// loop through the tables and compare each table's location to the path
for (String table : tables) {
    Table ht = HCatUtil.getTable(hiveClient, "default", table);
    if (path.equals(ht.getMetadata().get("location"))) {
        System.out.println("Found table name: " + ht.getTableName());
    }
}
The name of the directory within your Hive warehouse location is the table name.

For example, if you create a table testTable in Hive, a directory with the same name is created within the Hive warehouse directory.

Further, if you create partitions on your Hive table, each partition maps to a sub-directory within your testTable directory, i.e., <hive_warehouse_path>/testTable/<partition>. All data under a particular partition is stored in files under the partition sub-directory. This is the way Hive manages its data on HDFS. Of course, it stores the table schema in the metastore, but the actual data is stored in HDFS as described above.
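Given that layout, the table name can be read straight off the path. Below is a minimal sketch of that idea; the class and method names are hypothetical, and it assumes the default <warehouse>/<table>/<partition>/<file> layout described above (it will not work for external tables or a relocated table):

```java
public class WarehousePathParser {

    // Derives the table name from an HDFS path, assuming the default
    // warehouse layout: <warehouseRoot>/<table>/<partition dirs>/<file>.
    // warehouseRoot is e.g. "/user/hive/warehouse".
    public static String tableNameFromPath(String path, String warehouseRoot) {
        int idx = path.indexOf(warehouseRoot);
        if (idx < 0) {
            return null; // path is outside the warehouse; not derivable this way
        }
        String relative = path.substring(idx + warehouseRoot.length());
        // strip leading slashes; the first remaining component is the table directory
        while (relative.startsWith("/")) {
            relative = relative.substring(1);
        }
        int slash = relative.indexOf('/');
        return slash < 0 ? relative : relative.substring(0, slash);
    }
}
```

For example, `tableNameFromPath("hdfs://nn:8020/user/hive/warehouse/testTable/dt=2014-01-01/part-00000", "/user/hive/warehouse")` yields `testTable`.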

In your question, you indicated that you want to get the table name for the HDFS location hdfs://. In this case the table name should be test.

There is also the possibility that /user/hive/warehouse, the default warehouse location Hive uses to store table data (defined in hive-default.xml), has been overridden, and Hive might actually be using a different location for its warehouse. You should check hive-site.xml in your environment to be certain of the warehouse location.
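The property to look for in hive-site.xml is hive.metastore.warehouse.dir; the value shown here is just the default:

```xml
<!-- hive-site.xml: overrides the default warehouse location -->
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
```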
