Error while doing bulk load in HBase



I am trying to do a bulk load in HBase, but I am getting the following exception while loading the data…

Application application_1439213972129_0080 initialization failed (exitCode=255) with output: Requested user root is not whitelisted and has id 0, which is below the minimum allowed 500. Failing this attempt. Failing the application.
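For context: this message comes from YARN's LinuxContainerExecutor, which refuses to launch containers for root and for any uid below its configured minimum. A minimal sketch of the container-executor.cfg settings involved is below; the file location and the right values depend on your distribution, so treat this as an assumption, not your actual config:

  # container-executor.cfg (path varies by install, typically under the Hadoop conf dir)
  min.user.id=500             # container launch is refused for uids below this value
  allowed.system.users=root   # explicit whitelist for low-uid users; use with care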

  hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,personal:Name,Profession:Position_Title,Profession:Department,personal:Employee_Annual_Salary -Dimporttsv.separator=',' /tables/emp_salary_new1 /mapr/MapRDev/apps/Datasets/Employee_Details.csv
 2015-08-13 18:24:33,076 INFO  [main] mapreduce.TableMapReduceUtil: Setting speculative execution off for bulkload operation
 2015-08-13 18:24:33,123 INFO  [main] mapreduce.TableMapReduceUtil: Configured 'hbase.mapreduce.mapr.tablepath' to /tables/emp_salary_new1
 2015-08-13 18:24:33,220 INFO  [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
 2015-08-13 18:24:33,372 INFO  [main] client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
 2015-08-13 18:24:33,735 INFO  [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
 2015-08-13 18:24:33,770 INFO  [main] mapreduce.TableOutputFormat: Created table instance for /tables/emp_salary_new1
 2015-08-13 18:24:34,252 INFO  [main] input.FileInputFormat: Total input paths to process : 1
 2015-08-13 18:24:34,294 INFO  [main] mapreduce.JobSubmitter: number of splits:1
 2015-08-13 18:24:34,535 INFO  [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1439213972129_0055
 2015-08-13 18:24:34,792 INFO  [main] security.ExternalTokenManagerFactory: Initialized external token manager class - com.mapr.hadoop.yarn.security.MapRTicketManager
 2015-08-13 18:24:35,031 INFO  [main] impl.YarnClientImpl: Submitted application application_1439213972129_0055
 2015-08-13 18:24:35,114 INFO  [main] mapreduce.Job: The url to track the job: http://hadoop-c02n02.ss.sw.ericsson.se:8088/proxy/application_1439213972129_0055/
 2015-08-13 18:24:35,115 INFO  [main] mapreduce.Job: Running job: job_1439213972129_0055
 2015-08-13 18:24:53,253 INFO  [main] mapreduce.Job: Job job_1439213972129_0055 running in uber mode : false
 2015-08-13 18:24:53,256 INFO  [main] mapreduce.Job:  map 0% reduce 0%
 2015-08-13 18:24:53,281 INFO  [main] mapreduce.Job: Job job_1439213972129_0055 failed with state FAILED due to: Application application_1439213972129_0055 failed 2 times due to AM Container for appattempt_1439213972129_0055_000002 exited with  exitCode: 1
 For more detailed output, check application tracking page:http://hadoop-c02n02.ss.sw.ericsson.se:8088/cluster/app/application_1439213972129_0055Then, click on links to logs of each attempt.
 Diagnostics: Exception from container-launch.
 Container id: container_e02_1439213972129_0055_02_000001
 Exit code: 1
 Stack trace: ExitCodeException exitCode=1:
         at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
         at org.apache.hadoop.util.Shell.run(Shell.java:456)
         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
         at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:304)
         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:354)
         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:87)
         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         at java.lang.Thread.run(Thread.java:745)
 Shell output: main : command provided 1
 main : user is mapradm
 main : requested yarn user is mapradm

 Container exited with a non-zero exit code 1
 Failing this attempt. Failing the application.
 2015-08-13 18:24:53,320 INFO  [main] mapreduce.Job: Counters: 0

It looks like you are loading data into MapR-DB rather than HBase, but the hbase command is compatible with MapR-DB. I have made a few small changes to your command below; see whether it works for you.

  hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,personal:Name,Profession:Position_Title,Profession:Department,personal:Employee_Annual_Salary -Dimporttsv.separator=, /tables/emp_salary_new1 /mapr/MapRDev/apps/Datasets/Employee_Details.csv
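If the original "user root ... below the minimum allowed 500" failure comes back, one workaround is to run the import as a regular cluster user instead of root; the container-launch output above shows mapradm as the YARN user, so that account is a natural candidate. A sketch, assuming mapradm exists on all nodes and can read the CSV:

  su - mapradm -c "hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
      -Dimporttsv.columns=HBASE_ROW_KEY,personal:Name,Profession:Position_Title,Profession:Department,personal:Employee_Annual_Salary \
      -Dimporttsv.separator=, \
      /tables/emp_salary_new1 /mapr/MapRDev/apps/Datasets/Employee_Details.csv"

  # the aggregated application logs usually show the real cause of an AM exit code 1
  yarn logs -applicationId application_1439213972129_0055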
