Problem with Oozie and Hive



I am trying to use Hive with Oozie via a Hive action. The Oozie workflow is supposed to load data from one Hive table into another. I have a table foo in Hive whose data should be loaded into the table "test".

I am using the Cloudera VM with Hadoop 2.0.0-cdh4.4.0.

I run the workflow with the following command:

    [cloudera@localhost oozie-3.3.2+92]$ oozie job -oozie http://localhost:11000/oozie -config examples/apps/hive/job.properties -run

When I look at the JobTracker log file, it says: Table not found 'foo'. Any help?

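As a quick sanity check (not a fix), the table can be listed from the Hive CLI on the same VM; if foo only shows up there, the Oozie action is most likely not talking to the same metastore:

    # Tables known to the metastore that the local Hive CLI talks to.
    hive -e "SHOW TABLES;"
    # Confirm foo exists and check which database/location it belongs to.
    hive -e "DESCRIBE FORMATTED foo;"

If foo does not appear here either, the problem is upstream of Oozie.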

    cat script.q:
    CREATE EXTERNAL TABLE test (
    id int,
    name string
    )
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '\t'
    STORED AS TEXTFILE
    LOCATION
    '/user/cloudera/test';
    INSERT OVERWRITE table test SELECT * FROM foo;

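Running the same script directly through the Hive CLI is a useful comparison point; if it succeeds there but fails under Oozie, the two Hive sessions are most likely using different metastores (a sketch, assuming the script is in the current directory):

    # Run the same script through the Hive CLI, outside of Oozie, for comparison.
    hive -f script.q

If that direct run works, the script itself is fine and the difference lies in how the Oozie action is configured.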

    cat job.properties:
    nameNode=hdfs://localhost.localdomain:8020
    jobTracker=localhost.localdomain:8021
    queueName=default
    examplesRoot=examples
    oozie.use.system.libpath=true
    oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/hive

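For reference, the application path above expands to /user/cloudera/examples/apps/hive on HDFS, and both workflow.xml and script.q need to be present there; a quick listing (using the nameNode value from job.properties) confirms it:

    # workflow.xml and script.q must both sit under the application path from job.properties.
    hadoop fs -ls hdfs://localhost.localdomain:8020/user/cloudera/examples/apps/hive

The JobTracker log further down shows script.q being localized from exactly this path, which suggests the upload itself is in place.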

    cat workflow.xml:
    <?xml version="1.0" encoding="UTF-8"?>
    <workflow-app xmlns="uri:oozie:workflow:0.2" name="hive-wf">
        <start to="hive-node"/>
        <action name="hive-node">
            <hive xmlns="uri:oozie:hive-action:0.2">
                <job-tracker>${jobTracker}</job-tracker>
                <name-node>${nameNode}</name-node>
                <script>script.q</script>
            </hive>
            <ok to="end"/>
            <error to="fail"/>
        </action>
        <kill name="fail">
            <message>Hive failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
        </kill>
        <end name="end"/>
    </workflow-app>

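One thing worth noting about the action above: it does not attach any Hive configuration, so the launcher may end up with a default (embedded Derby) metastore rather than the one the Hive CLI uses. A hedged sketch of the same action with an explicit hive-site.xml attached (the file name is a placeholder; such a file would have to be uploaded next to workflow.xml):

    <action name="hive-node">
        <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <!-- Hypothetical: ship a hive-site.xml that points at the shared metastore. -->
            <job-xml>hive-site.xml</job-xml>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <script>script.q</script>
        </hive>
        <ok to="end"/>
        <error to="fail"/>
    </action>

Whether that is the actual culprit here depends on which metastore the action ends up using, which is what the answer below addresses.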
    [cloudera@localhost hive]$ pwd
    /usr/share/doc/oozie-3.3.2+92/examples/apps/hive

    Current (local) dir = /mapred/local/taskTracker/cloudera/jobcache/job_201405081447_0019/attempt_201405081447_0019_m_000000_0/work
    ------------------------
    hive-exec-log4j.properties
    .action.xml.crc
    tmp
    hive-log4j.properties
    hive-site.xml
    action.xml
    script.q
    ------------------------
    Script [script.q] content: 
    ------------------------
    CREATE EXTERNAL TABLE test (
    id int,
    name string
    )
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '\t'
    STORED AS TEXTFILE
    LOCATION
    '/user/cloudera/test';
    INSERT OVERWRITE table test SELECT * FROM foo;
        ------------------------
    Hive command arguments :
    fhive-node--hive
    script.q
    =================================================================
    >>> Invoking Hive command line now >>>
    Hadoop Job IDs executed by Hive:
    Intercepting System.exit(10001)
    <<< Invocation of Main class completed <<<
    Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.HiveMain], exit code [10001]
    Oozie Launcher failed, finishing Hadoop job gracefully
    oozie Launcher ends
    stderr logs
    Logging initialized using configuration in jar:file:/mapred/local/taskTracker/distcache/9141962611866023942_1400842701_327187723/localhost.localdomain/user/oozie/share/lib/hive/hive-common-0.10.0-cdh4.4.0.jar!/hive-log4j.properties
    Hive history file=/tmp/mapred/hive_job_log_eecd5d6b-69d3-4dbd-94ed-9c86ef42443d_1563998739.txt
    OK
    Time taken: 9.816 seconds
    FAILED: SemanticException [Error 10001]: Line 3:42 Table not found 'foo'
    Log file: /mapred/local/taskTracker/cloudera/jobcache/job_201405081447_0019/attempt_201405081447_0019_m_000000_0/work/hive-oozie-job_201405081447_0019.log not present. Therefore no Hadoop jobids found
    Intercepting System.exit(10001)
    Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.HiveMain], exit code [10001]
    syslog logs
    2014-05-12 10:12:10,156 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
    2014-05-12 10:12:11,099 INFO org.apache.hadoop.mapred.TaskRunner: Creating symlink: /mapred/local/taskTracker/distcache/-2339055663322524001_1176285901_1902801582/localhost.localdomain/user/cloudera/examples/apps/hive/script.q <- /mapred/local/taskTracker/cloudera/jobcache/job_201405081447_0019/attempt_201405081447_0019_m_000000_0/work/script.q
    2014-05-12 10:12:11,231 WARN org.apache.hadoop.conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
    2014-05-12 10:12:11,231 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
    2014-05-12 10:12:11,544 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
    2014-05-12 10:12:11,549 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@375e293a
    2014-05-12 10:12:11,755 INFO org.apache.hadoop.mapred.MapTask: Processing split: hdfs://localhost.localdomain:8020/user/cloudera/oozie-oozi/0000014-140508144817449-oozie-oozi-W/hive-node--hive/input/dummy.txt:0+5
    2014-05-12 10:12:11,773 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and BYTES_READ as counter name instead
    2014-05-12 10:12:11,775 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 0


Thanks,

Rio de Janeiro

Which metastore is Hive using?

If you are using Derby (the default), it is a local metastore that is only visible from the node where Hive ran and created the table. The Oozie action may run on a different machine, connect to its own local metastore, and therefore not see the table that was defined in the earlier step.

You need to install and configure a remote metastore backed by a database such as MySQL or Postgres.

See the instructions here: http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/latest/CDH5-Installation-Guide/cdh5ig_hive_metastore_configure.html
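
As a rough sketch of what this looks like in hive-site.xml (host names, port, and database URL below are placeholders, not values taken from the question), a shared metastore is usually wired up either through a remote metastore service URI or through a JDBC connection to the MySQL/Postgres database:

    <!-- hive-site.xml: point every Hive client, including the Oozie Hive action, at one shared metastore. -->
    <configuration>
        <!-- Option 1: a remote metastore service (placeholder host/port). -->
        <property>
            <name>hive.metastore.uris</name>
            <value>thrift://metastore-host.example.com:9083</value>
        </property>
        <!-- Option 2: connect directly to a MySQL-backed metastore (placeholder URL). -->
        <property>
            <name>javax.jdo.option.ConnectionURL</name>
            <value>jdbc:mysql://db-host.example.com:3306/metastore</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionDriverName</name>
            <value>com.mysql.jdbc.Driver</value>
        </property>
    </configuration>

Distributing the same hive-site.xml to the Oozie action (for example via a <job-xml> element in the workflow) makes the action resolve tables against the same metastore that the interactive Hive session used when it created foo.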
