How to enable additional logging when running "hadoop fs" against MapRFS



When I run this command:

hadoop fs -copyFromLocal /tmp/1GB.img 'maprfs://maprfs.example.com/tmp/1GB.img'

I get the following errors:

2014-11-05 01:21:08,7669 ERROR Client fs/client/fileclient/cc/writebuf.cc:154 Thread: 240 FlushWrite failed: File 1GB.img, error: Invalid argument(22), pfid 4484.66.266002, off 65536, fid 5189.87.131376
14/11/05 01:21:08 ERROR fs.Inode: Write failed for file: /tmp/1GB.img, error: Invalid argument
14/11/05 01:21:08 ERROR fs.Inode: Marking failure for: /tmp/1GB.img, error: Invalid argument
14/11/05 01:21:08 ERROR fs.Inode: Throwing exception for: /tmp/1GB.img, error: Invalid argument
14/11/05 01:21:08 ERROR fs.Inode: Flush failed for file: /tmp/1GB.img, error: Invalid argument
14/11/05 01:21:08 ERROR fs.Inode: Marking failure for: /tmp/1GB.img, error: Invalid argument
14/11/05 01:21:08 ERROR fs.Inode: Throwing exception for: /tmp/1GB.img, error: Invalid argument
copyFromLocal: 4484.66.266002 /tmp/1GB.img (Invalid argument)

Can anyone suggest how to enable additional verbose/debug logging?

The errors above appear to come from MapR's Hadoop classes. It would be great to enable more verbose logging in those packages, as well as in org.apache.*.

I tried modifying /opt/mapr/conf/logging.properties, but it didn't seem to help.

BTW, I'm running Hadoop 1.0.3 and MapR 3.1.1.26113.GA.

Thanks,

Fi

p.s. This is related to the issue I described at http://answers.mapr.com/questions/11374/write-to-maprfs-with-hadoop-cli-fails-inside-docker-while-running-on-a-data-node#

You can also pass the option directly on the command line:

hadoop mfs -Dfs.mapr.trace=DEBUG -ls maprfs://maprfs.example.com/tmp/1GB.img
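The same generic -D option can be applied to the original failing command to get debug traces for the write itself (a sketch reusing the host and paths from the question above):

```shell
# Enable MapR client trace output for a single command via the
# generic -Dfs.mapr.trace=DEBUG option, then retry the copy.
hadoop fs -Dfs.mapr.trace=DEBUG \
  -copyFromLocal /tmp/1GB.img 'maprfs://maprfs.example.com/tmp/1GB.img'
```

This avoids editing core-site.xml when you only need trace output for one invocation.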

Found the answer, thanks to http://answers.mapr.com/answer_link/6181/

All that was needed was the fs.mapr.trace=debug property in /opt/mapr/hadoop/hadoop-0.20.2/conf/core-site.xml:

<configuration>
  <property>
    <name>fs.mapr.trace</name>
    <value>debug</value>
  </property>
</configuration>
