HCatalog Tutorial
HCatalog - CLI
The HCatalog Command Line Interface (CLI) can be invoked with the command $HIVE_HOME/HCatalog/bin/hcat, where $HIVE_HOME is the home directory of Hive. hcat is the command used to initialize the HCatalog command line.
Use the following command to initialize the HCatalog command line −
cd $HCAT_HOME/bin
./hcat
If the installation has been done correctly, then you will get the following output −
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
usage: hcat { -e "<query>" | -f "<filepath>" }
[ -g "<group>" ] [ -p "<perms>" ]
[ -D"<name> = <value>" ]
-D <property = value> use hadoop value for given property
-e <exec> hcat command given from command line
-f <file> hcat commands in file
-g <group> group for the db/table specified in CREATE statement
-h,--help Print help information
-p <perms> permissions for the db/table specified in CREATE statement
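Note − The cd command above assumes that the HCAT_HOME environment variable points to the HCatalog installation. If it has not been set yet, it can be exported first; the following is a minimal sketch that assumes HCatalog lives under the Hive home directory mentioned above −
export HCAT_HOME=$HIVE_HOME/HCatalog
export PATH=$PATH:$HCAT_HOME/bin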
The HCatalog CLI supports these command-line options −
Sr.No | Option | Example & Description
1 | -g | hcat -g mygroup … The table to be created must have the group "mygroup".
2 | -p | hcat -p rwxr-xr-x … The table to be created must have read, write, and execute permissions.
3 | -f | hcat -f myscript.HCatalog … myscript.HCatalog is a script file containing DDL commands to execute.
4 | -e | hcat -e 'create table mytable(a int);' … Treat the following string as a DDL command and execute it.
5 | -D | hcat -Dkey=value … Passes the key-value pair to HCatalog as a Java system property.
6 | (none) | hcat … Prints a usage message.
Note −
- The -g and -p options are not mandatory.
- Either the -e option or the -f option can be provided, not both.
- The order of options is immaterial; you can specify the options in any order.
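As an illustration of how these options combine, the following invocation is a minimal sketch (the table name employee and the group mygroup are assumed for this example); it runs a single CREATE TABLE statement while also setting the group and the permissions of the new table −
cd $HCAT_HOME/bin
./hcat -g mygroup -p rwxr-xr-x -e "CREATE TABLE employee (id INT, name STRING);"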
The HCatalog CLI supports the following Hive DDL commands −
Sr.No | DDL Command & Description
1 | CREATE TABLE … Create a table using HCatalog. If you create a table with a CLUSTERED BY clause, you will not be able to write to it with Pig or MapReduce.
2 | ALTER TABLE … Supported except for the REBUILD and CONCATENATE options. Its behavior remains the same as in Hive.
3 | DROP TABLE … Supported. Behavior is the same as in Hive (drops the complete table and structure).
4 | CREATE/ALTER/DROP VIEW … Supported. Behavior is the same as in Hive. Note − Pig and MapReduce cannot read from or write to views.
5 | SHOW TABLES … Display a list of tables.
6 | SHOW PARTITIONS … Display a list of partitions.
7 | CREATE/DROP FUNCTION … CREATE and DROP FUNCTION operations are supported, but the created functions must still be registered in Pig and placed in the CLASSPATH for MapReduce.
8 | DESCRIBE … Supported. Behavior is the same as in Hive. Describes the structure of the table.
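For example, several of the DDL commands listed above can be placed in a script file and executed in a single call with the -f option. The file name myscript.HCatalog and the table name sample_table below are assumed purely for illustration. Contents of myscript.HCatalog −
-- Create a table, then verify it (table and column names are illustrative)
CREATE TABLE IF NOT EXISTS sample_table (id INT, name STRING);
SHOW TABLES;
DESCRIBE sample_table;
Run the script from the HCatalog bin directory −
./hcat -f myscript.HCatalog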
Some of the commands from the above table are explained in subsequent chapters.