Flume + Kafka System Setup


1. Set up Kafka; see the separate Kafka cluster deployment guide.

2. Flume version: apache-flume-1.6.0-bin.tar.gz

3. Flume installation:

     First, unpack apache-flume-1.6.0-bin.tar.gz.

     Then edit the environment configuration:

cp conf/flume-env.sh.template conf/flume-env.sh
vi conf/flume-env.sh

Set the JDK path:
export JAVA_HOME=/usr/java/jdk1.7.0_67
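With JAVA_HOME set, the install can be sanity-checked from the Flume home directory; this should print the Flume version banner:

bin/flume-ng version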

4. Connect to Kafka: create a configuration file, e.g. xxx.conf (the name is arbitrary, but it must be passed at startup; the examples below use fl.conf). The full agent definition follows, with a durable-channel alternative sketched after it.

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = sto1
a1.sources.r1.port = 41414

# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = testflume
a1.sinks.k1.brokerList = sto1:9092,sto2:9092,sto3:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000000
a1.channels.c1.transactionCapacity = 10000

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
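The memory channel above favors throughput but loses any buffered events if the agent process dies. Where durability matters more, Flume's file channel is a drop-in replacement; a minimal sketch (the directory paths are assumptions, choose ones that exist on your host):

a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /data/flume/checkpoint
a1.channels.c1.dataDirs = /data/flume/data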

5. Start the cluster

Start the ZooKeeper cluster first.
A. Start the Kafka brokers (run on each broker node):
bin/kafka-server-start.sh config/server.properties
B. Start the Flume agent with the configuration file created above (a background-run sketch follows):
bin/flume-ng agent -n a1 -c conf -f conf/fl.conf -Dflume.root.logger=DEBUG,console
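The command above logs to the console, which is handy for debugging. For long-running use, both daemons can be pushed to the background; a sketch (the -daemon flag is available in recent Kafka releases, nohup is plain shell):

bin/kafka-server-start.sh -daemon config/server.properties
nohup bin/flume-ng agent -n a1 -c conf -f conf/fl.conf > flume.log 2>&1 &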

6. Test the system

Start a Kafka console consumer. The topic name must match the one configured in the sink (testflume); with automatic topic creation enabled, it does not need to be created by hand first:
bin/kafka-console-consumer.sh --zookeeper sto1:2181,sto2:2181,sto3:2181 --from-beginning --topic testflume
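The Kafka half of the pipeline can also be smoke-tested without Flume by typing messages into the console producer that ships with the same release:

bin/kafka-console-producer.sh --broker-list sto1:9092,sto2:9092,sto3:9092 --topic testflume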

 

The same console tools work for managing any other topic, e.g. mylog_cmcc:

Create the topic:
bin/kafka-topics.sh --zookeeper sto1:2181,sto2:2181,sto3:2181 --create --replication-factor 2 --partitions 1 --topic mylog_cmcc
List topics:
bin/kafka-topics.sh --zookeeper sto1:2181,sto2:2181,sto3:2181 --list
Start a consumer (with or without replaying from the beginning):
bin/kafka-console-consumer.sh --zookeeper sto1:2181,sto2:2181,sto3:2181 --from-beginning --topic mylog_cmcc
bin/kafka-console-consumer.sh --zookeeper sto1:2181,sto2:2181,sto3:2181 --topic mylog_cmcc
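kafka-topics.sh can also report partition and replica placement for a topic:

bin/kafka-topics.sh --zookeeper sto1:2181,sto2:2181,sto3:2181 --describe --topic mylog_cmcc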

 

Java client code:
package com.sgb.flume;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;
import java.nio.charset.Charset;

/**
 * Based on the RPC client example in the Flume Developer Guide:
 * http://flume.apache.org/FlumeDeveloperGuide.html
 * @author root
 */
public class RpcClientDemo {
	
	public static void main(String[] args) {
		MyRpcClientFacade client = new MyRpcClientFacade();
		client.init("sto1", 41414);
		for (int i = 10; i < 20; i++) {
			String sampleData = "Hello Flume!ERROR" + i;
			client.sendDataToFlume(sampleData);
			System.out.println("senddata" + sampleData);
		}
		client.cleanUp();
	}
}

class MyRpcClientFacade {
	private RpcClient client;
	private String hostname;
	private int port;

	public void init(String hostname, int port) {
		// Setup the RPC connection
		this.hostname = hostname;
		this.port = port;
		this.client = RpcClientFactory.getDefaultInstance(hostname, port);
	}
	public void sendDataToFlume(String data) {
		Event event = EventBuilder.withBody(data, Charset.forName("UTF-8"));

		try {
			client.append(event);
		} catch (EventDeliveryException e) {
			// Delivery failed: the event is dropped here, and the
			// connection is rebuilt so later sends can succeed.
			client.close();
			client = null;
			client = RpcClientFactory.getDefaultInstance(hostname, port);
		}
	}
	public void cleanUp() {
		client.close();
	}
}
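The client only needs the Flume SDK on the classpath (the binary distribution ships it under lib/). With Maven, the coordinates for this release are:

<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-sdk</artifactId>
    <version>1.6.0</version>
</dependency>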

 

     While the Java client runs, you can watch events flow from the client through Flume into Kafka and finally appear at the console consumer. From there, Storm can pull the data out of Kafka for in-memory computation; a spout sketch follows.
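A minimal sketch of the Storm side, using the storm-kafka module's 0.9.x-era API (the zkRoot and consumer-id strings are illustrative assumptions):

import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.topology.TopologyBuilder;
import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;

public class KafkaSpoutDemo {
	public static void main(String[] args) {
		// Same ZooKeeper ensemble the Kafka brokers register in
		BrokerHosts hosts = new ZkHosts("sto1:2181,sto2:2181,sto3:2181");
		// topic, zkRoot for offset storage, consumer id (last two are arbitrary names)
		SpoutConfig spoutConf = new SpoutConfig(hosts, "testflume", "/kafka-spout", "flume-demo");
		spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());

		TopologyBuilder builder = new TopologyBuilder();
		builder.setSpout("kafka-spout", new KafkaSpout(spoutConf), 1);
		// attach bolts here to do the in-memory computation
	}
}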
