This sets up a source that runs `tail` and sinks that data via Avro RPC to port 10000.
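A minimal flume.conf for this application-side agent might look like the following sketch. The agent name (`agent1`), the log path, and the collector hostname are placeholders to adjust for your environment:

```properties
# Hypothetical agent "agent1": tail a log file and forward it via Avro RPC.
agent1.sources = tail-source
agent1.channels = mem-channel
agent1.sinks = avro-sink

# Exec source running tail -F on an example log path.
agent1.sources.tail-source.type = exec
agent1.sources.tail-source.command = tail -F /var/log/app.log
agent1.sources.tail-source.channels = mem-channel

# In-memory channel buffering events between source and sink.
agent1.channels.mem-channel.type = memory

# Avro sink pointing at the collecting agent on port 10000.
agent1.sinks.avro-sink.type = avro
agent1.sinks.avro-sink.hostname = collector.example.com
agent1.sinks.avro-sink.port = 10000
agent1.sinks.avro-sink.channel = mem-channel
```

A memory channel is used here for simplicity; a file channel would survive agent restarts at the cost of some throughput.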

The collecting Flume agent on the Hadoop cluster will need a flume.conf with an Avro source and an HDFS sink — this is the agent that writes the data to HDFS.
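A corresponding flume.conf for this collecting agent might look like the sketch below. The agent name (`collector`) and the HDFS path are placeholders; the port and 30-second roll interval match the setup described here:

```properties
# Hypothetical agent "collector": receive Avro events and write them to HDFS.
collector.sources = avro-source
collector.channels = mem-channel
collector.sinks = hdfs-sink

# Avro source listening on port 10000 for the upstream agent.
collector.sources.avro-source.type = avro
collector.sources.avro-source.bind = 0.0.0.0
collector.sources.avro-source.port = 10000
collector.sources.avro-source.channels = mem-channel

# In-memory channel buffering events between source and sink.
collector.channels.mem-channel.type = memory

# HDFS sink rolling the output file every 30 seconds.
collector.sinks.hdfs-sink.type = hdfs
collector.sinks.hdfs-sink.hdfs.path = hdfs://namenode/flume/events
collector.sinks.hdfs-sink.hdfs.rollInterval = 30
collector.sinks.hdfs-sink.channel = mem-channel
```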

On this side we've defined a source that reads Avro messages from port 10000 and writes the results into HDFS, rolling the file every 30 seconds. It's just like our setup in Flume OG, but now multi-hop forwarding is a snap.

The two Flume agents are chained together via Avro: the first agent is deployed on the application server, and the second is deployed on the Hadoop cluster, where it aggregates the data from the upstream agent and writes it to HDFS.
