Rolling by file size: Flume takes too long copying data to HDFS



I have a use case where I want to copy remote files to HDFS using Flume. I also want the copied files to align with the HDFS block size (128 MB/256 MB). The total size of the remote data is 33 GB.

I am using an Avro source and sink to copy the remote data to HDFS. On the sink side I am rolling by file size (128/256 MB). But Flume takes, on average, 2 minutes to copy a file from the remote machine and store it in HDFS at that file size.

Flume configuration, agent on the remote machine (spooldir source, Avro sink):

### Agent1 - Spooling Directory Source and File Channel, Avro Sink  ###
# Name the components on this agent
Agent1.sources = spooldir-source  
Agent1.channels = file-channel
Agent1.sinks = avro-sink
# Describe/configure Source
Agent1.sources.spooldir-source.type = spooldir
Agent1.sources.spooldir-source.spoolDir = /home/Benchmarking_Simulation/test

# Describe the sink
Agent1.sinks.avro-sink.type = avro
# IP address of the destination machine (trailing comments are not
# stripped in properties files, so keep the comment on its own line)
Agent1.sinks.avro-sink.hostname = xx.xx.xx.xx
Agent1.sinks.avro-sink.port = 50000
#Use a channel which buffers events in file
Agent1.channels.file-channel.type = file
Agent1.channels.file-channel.checkpointDir = /home/Flume_CheckPoint_Dir/
Agent1.channels.file-channel.dataDirs = /home/Flume_Data_Dir/
Agent1.channels.file-channel.capacity = 10000000
Agent1.channels.file-channel.transactionCapacity=50000
# Bind the source and sink to the channel
Agent1.sources.spooldir-source.channels = file-channel
Agent1.sinks.avro-sink.channel = file-channel

Agent on the machine running HDFS (Avro source, HDFS sink):

### Agent1 - Avro Source and File Channel, HDFS Sink  ###
# Name the components on this agent
Agent1.sources = avro-source1  
Agent1.channels = file-channel1
Agent1.sinks = hdfs-sink1
# Describe/configure Source
Agent1.sources.avro-source1.type = avro
Agent1.sources.avro-source1.bind = xx.xx.xx.xx
Agent1.sources.avro-source1.port = 50000
# Describe the sink
Agent1.sinks.hdfs-sink1.type = hdfs
Agent1.sinks.hdfs-sink1.hdfs.path = /user/Benchmarking_data/multiple_agent_parallel_1
Agent1.sinks.hdfs-sink1.hdfs.rollInterval = 0
Agent1.sinks.hdfs-sink1.hdfs.rollSize = 130023424
Agent1.sinks.hdfs-sink1.hdfs.rollCount = 0
Agent1.sinks.hdfs-sink1.hdfs.fileType = DataStream
Agent1.sinks.hdfs-sink1.hdfs.batchSize = 50000
Agent1.sinks.hdfs-sink1.hdfs.txnEventMax = 40000
Agent1.sinks.hdfs-sink1.hdfs.threadsPoolSize=1000
Agent1.sinks.hdfs-sink1.hdfs.appendTimeout = 10000
Agent1.sinks.hdfs-sink1.hdfs.callTimeout = 200000

#Use a channel which buffers events in file
Agent1.channels.file-channel1.type = file
Agent1.channels.file-channel1.checkpointDir = /home/Flume_Check_Point_Dir
Agent1.channels.file-channel1.dataDirs = /home/Flume_Data_Dir
Agent1.channels.file-channel1.capacity = 100000000
Agent1.channels.file-channel1.transactionCapacity=100000

# Bind the source and sink to the channel
Agent1.sources.avro-source1.channels = file-channel1
Agent1.sinks.hdfs-sink1.channel = file-channel1
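As an aside on the `hdfs.rollSize` value above: 130023424 bytes works out to exactly 124 MiB, slightly under a 128 MiB HDFS block, so each rolled file fits in a single block. A quick check of that arithmetic:

```python
# Verify that the configured hdfs.rollSize sits just under a 128 MiB HDFS block.
roll_size = 130023424               # hdfs.rollSize from the config above, in bytes
mib = roll_size / (1024 * 1024)     # convert bytes to MiB
print(mib)                          # 124.0
print(mib < 128)                    # True: a rolled file fits in one 128 MiB block
```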

The network bandwidth between the two machines is 686 Mbps.
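As a back-of-the-envelope sanity check (a sketch, not a measurement): at 686 Mbps the network alone should move a 128 MB file in well under two seconds, so a 2-minute average suggests the bottleneck is not raw bandwidth:

```python
# Ideal transfer time for one rolled file over the stated link,
# ignoring protocol, replication, and disk overhead.
link_mbps = 686                  # link speed in megabits/s (from the question)
file_mb = 128                    # one rolled file, in megabytes
link_mb_per_s = link_mbps / 8    # convert megabits/s to megabytes/s
seconds = file_mb / link_mb_per_s
print(f"{seconds:.1f} s")        # about 1.5 s per 128 MB file
```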

Can anyone help me determine whether something is wrong with this configuration, or suggest an alternative configuration, so that the copy does not take so long?

Both agents use a file channel, so the data is written to disk twice before it ever reaches HDFS. You could try a memory channel on each agent and see whether performance improves.
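A minimal sketch of that change, reusing the component names from the configs above (note the trade-off: events held in a memory channel are lost if the agent process dies, so this trades durability for speed):

```
# Receiving agent shown; apply the same change on the sending agent.
# Replace the file channel with a memory channel.
Agent1.channels = mem-channel1
Agent1.channels.mem-channel1.type = memory
Agent1.channels.mem-channel1.capacity = 100000
Agent1.channels.mem-channel1.transactionCapacity = 50000

# Re-bind the source and sink to the new channel
Agent1.sources.avro-source1.channels = mem-channel1
Agent1.sinks.hdfs-sink1.channel = mem-channel1
```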
