This example submits the job through HiveServer2, so make sure the HiveServer2 service is up and running first.
```shell
hive --service hiveserver2 &
hive --service metastore &
```

Running hive threw an error here, so let me record it while I'm at it. Attempted fixes: https://blog.csdn.net/qq_35078688/article/details/86137440 — that one didn't solve the problem, so on to the next: https://blog.csdn.net/hhj724/article/details/79094138 — still unsolved. Then I restarted the cluster and the problem went away...
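Before moving on, it can help to confirm HiveServer2 is actually accepting connections. A quick sketch using Beeline — the `node03:10000` address matches the `jdbcURL` configured in job.properties below, and the `root` user is an assumption; adjust both for your cluster:

```shell
# Open a Beeline session against HiveServer2 and run a trivial query.
# node03:10000 matches the jdbcURL in job.properties; -n root is assumed.
beeline -u jdbc:hive2://node03:10000/default -n root -e "SHOW DATABASES;"
```

If this hangs or fails, fix the HiveServer2/metastore setup before submitting the Oozie workflow — the hive2 action will hit the same wall.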
Modify job.properties
```shell
cd /export/servers/oozie-4.1.0-cdh5.14.0/oozie_works/hive2
vim job.properties
```

```properties
nameNode=hdfs://hadoop01:8020
jobTracker=hadoop01:8032
queueName=default
jdbcURL=jdbc:hive2://node03:10000/default
examplesRoot=oozie_works
oozie.use.system.libpath=true
# HDFS path where the workflow files are uploaded;
# this resolves to /user/root/oozie_works/hive2 on HDFS
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/hive2
```

Modify workflow.xml
```shell
vim workflow.xml
```

```xml
<?xml version="1.0" encoding="UTF-8"?>
<workflow-app xmlns="uri:oozie:workflow:0.5" name="hive2-wf">
    <start to="hive2-node"/>
    <action name="hive2-node">
        <hive2 xmlns="uri:oozie:hive2-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${nameNode}/user/${wf:user()}/${examplesRoot}/output-data/hive2"/>
                <mkdir path="${nameNode}/user/${wf:user()}/${examplesRoot}/output-data"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <jdbc-url>${jdbcURL}</jdbc-url>
            <script>script.q</script>
            <param>INPUT=/user/${wf:user()}/${examplesRoot}/input-data/table</param>
            <param>OUTPUT=/user/${wf:user()}/${examplesRoot}/output-data/hive2</param>
        </hive2>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Hive2 (Beeline) action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
```

Edit the Hive SQL script
```shell
vim script.q
```

```sql
DROP TABLE IF EXISTS test;
CREATE EXTERNAL TABLE default.test (a INT) STORED AS TEXTFILE LOCATION '${INPUT}';
INSERT INTO test VALUES (10);
INSERT INTO test VALUES (20);
INSERT INTO test VALUES (30);
```

It worked. Nice!
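For completeness, the hive2 directory still has to land on HDFS at the path given by `oozie.wf.application.path`, and the workflow is then kicked off with the Oozie CLI. A sketch of those steps — the Oozie server endpoint `http://hadoop01:11000/oozie` is an assumption (it is not stated above), so substitute your own:

```shell
# Upload the hive2 workflow directory to the HDFS path referenced by
# oozie.wf.application.path in job.properties.
cd /export/servers/oozie-4.1.0-cdh5.14.0/oozie_works
hdfs dfs -put hive2/ /user/root/oozie_works/

# Submit and start the workflow; the -oozie endpoint (hadoop01:11000)
# is assumed -- point it at your actual Oozie server.
oozie job -oozie http://hadoop01:11000/oozie -config hive2/job.properties -run

# Check the job status using the job ID printed by -run.
oozie job -oozie http://hadoop01:11000/oozie -info <job-id>

# Once it succeeds, the inserted rows can be verified over the same JDBC URL:
beeline -u jdbc:hive2://node03:10000/default -n root -e "SELECT * FROM default.test;"
```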