Pig: overwrite data in a Hive table when using LOAD/STORE

bxjv4tth posted on 2021-05-29 in Hadoop

I am new to Pig and Hive. I need to load the data from a CSV file stored on HDFS into a Hive table using a Pig LOAD/STORE. This is what I am using:

load_resource_csv = LOAD '/user/hadoop/emp.csv' USING PigStorage(',')
 AS
 (dates:chararray,
  shipnode_key:chararray,
  delivery_method:chararray
  );

STORE load_resource_csv
INTO 'employee'
USING org.apache.hive.hcatalog.pig.HCatStorer();

Every time the Pig script runs, I need to overwrite the existing data in the Hive table. How can I do that?

ngynwnxp · #1

Use the fs shell command to delete the target directory before the STORE runs: fs -rm -r -f /path/to/dir

load_resource_csv = LOAD '/user/cloudera/newfile' USING PigStorage(',')
 AS
 (name:chararray,
  skill:chararray
  );

fs -rm -r -f /user/hive/warehouse/stack/

STORE load_resource_csv INTO '/user/hive/warehouse/stack' USING PigStorage(',');
-------------- BEFORE ---------------------------
$ hadoop fs -ls /user/hive/warehouse/stack/
-rwxrwxrwx   1 cloudera supergroup         22 2016-08-05 18:31 /user/hive/warehouse/stack/000000_0

hive> select * from stack;
OK
bigDataLearner  hadoop

$ hadoop fs -cat /user/cloudera/newfile
bigDataLearner,spark
-------------- AFTER -------------------
$ hadoop fs -ls /user/hive/warehouse/stack
Found 2 items
-rw-r--r--   1 cloudera supergroup          0 2016-08-05 18:56 /user/hive/warehouse/stack/_SUCCESS
-rw-r--r--   1 cloudera supergroup         21 2016-08-05 18:56 /user/hive/warehouse/stack/part-m-00000

$ hadoop fs -cat /user/hive/warehouse/stack/*
bigDataLearner,spark

hive> select * from stack;
OK
bigDataLearner  spark
Time taken: 0.183 seconds, Fetched: 1 row(s)
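
If you want to keep writing through HCatStorer as in the question (so Hive continues to manage the table's metadata and storage format) instead of writing files directly into the warehouse directory, the same overwrite idea can be applied by clearing the Hive table before each run. The following is only a sketch: it assumes employee is a managed (non-external) Hive table, that the hive and pig commands are on the PATH of the submitting node, and that the question's script has been saved under the hypothetical name load_employee.pig.

#!/bin/sh
# Sketch only: the table name 'employee' comes from the question, and
# load_employee.pig is an assumed file name for the question's script.
# TRUNCATE TABLE works on managed tables; for an external table, delete
# the underlying files instead (as the fs -rm in the answer above does).
hive -e 'TRUNCATE TABLE employee;'

# Re-run the Pig script with the HCatalog jars on the classpath
# so that HCatStorer is available.
pig -useHCatalog load_employee.pig

Clearing the table up front also sidesteps the error HCatStorer can raise when asked to write into a non-partitioned table that already contains data.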
