flume-ng Application Scenario Analysis


                                                        Figure: Flume configuration that forwards a command's console output to an Avro sink;

This sets up a source that runs "tail" and sinks that data via Avro RPC to 10.1.1.100 on port 10000.
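A minimal flume.conf sketch for this first agent could look like the following. The agent name `agent1`, the log file path, and the channel capacity are assumptions for illustration; the source, channel, and sink property names follow standard Flume NG conventions.

```properties
# Assumed agent name "agent1"; components: one exec source, one memory channel, one avro sink
agent1.sources = tail-source
agent1.channels = mem-channel
agent1.sinks = avro-sink

# Exec source running "tail -F" on an example log file (path is an assumption)
agent1.sources.tail-source.type = exec
agent1.sources.tail-source.command = tail -F /var/log/app.log
agent1.sources.tail-source.channels = mem-channel

# Memory channel buffering events between source and sink
agent1.channels.mem-channel.type = memory
agent1.channels.mem-channel.capacity = 1000

# Avro sink forwarding events over Avro RPC to the collector at 10.1.1.100:10000
agent1.sinks.avro-sink.type = avro
agent1.sinks.avro-sink.hostname = 10.1.1.100
agent1.sinks.avro-sink.port = 10000
agent1.sinks.avro-sink.channel = mem-channel
```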


The collecting Flume agent on the Hadoop cluster will need a flume.conf with an Avro source and an HDFS sink; this is the agent that writes the data into HDFS.



On this side we've defined a source that reads Avro messages from port 10000 on 10.1.1.100 and writes the results into HDFS, rolling the file every 30 seconds. It's just like our setup in Flume OG, but now multi-hop forwarding is a snap.
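The collector side could be sketched as the config below. The agent name `agent2` and the HDFS path are assumptions; the 30-second roll maps to the `hdfs.rollInterval` property.

```properties
# Assumed agent name "agent2"; components: avro source, memory channel, HDFS sink
agent2.sources = avro-source
agent2.channels = mem-channel
agent2.sinks = hdfs-sink

# Avro source listening on 10.1.1.100:10000 for events from the upstream agent
agent2.sources.avro-source.type = avro
agent2.sources.avro-source.bind = 10.1.1.100
agent2.sources.avro-source.port = 10000
agent2.sources.avro-source.channels = mem-channel

# Memory channel buffering events between source and sink
agent2.channels.mem-channel.type = memory
agent2.channels.mem-channel.capacity = 1000

# HDFS sink rolling the output file every 30 seconds (path is an assumption)
agent2.sinks.hdfs-sink.type = hdfs
agent2.sinks.hdfs-sink.hdfs.path = hdfs://namenode/flume/events
agent2.sinks.hdfs-sink.hdfs.rollInterval = 30
agent2.sinks.hdfs-sink.channel = mem-channel
```

Each agent is then launched with the standard `flume-ng agent` command, passing its config file and agent name (`-f` and `-n`).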




The two Flume agents are chained together via Avro: the first agent is deployed on the application server, while the second is deployed on a Hadoop cluster server, where it aggregates the data from the upstream agent and writes it into HDFS.




Another architecture:













