I created a Vagrant/Ansible playbook to build a single-node Kafka VM.
The idea is to give us some flexibility while prototyping: whenever we want a quick-and-dirty Kafka message queue, we can git clone [my 'kafka in a box' repo], cd into it, and vagrant up.
Here's what I have so far.
The Vagrantfile:
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.network "forwarded_port", guest: 9092, host: 9092
  config.vm.provider "virtualbox" do |vb|
    vb.customize ["modifyvm", :id, "--memory", "2048"]
  end
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "kafkaPlaybook.yml"
  end
end
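Since the Vagrantfile forwards the broker port to the host, a quick way to confirm the forwarding works after boot is a small TCP probe from the host machine (a sketch; the port_open helper is mine, not part of the repo):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except socket.error:
        return False

# After `vagrant up`, with the broker running inside the guest,
# port_open("127.0.0.1", 9092) should return True on the host.
```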
... and the Ansible playbook, kafkaPlaybook.yml:
---
- hosts: all
  user: vagrant
  sudo: True
  tasks:
    - name: install linux packages
      action: apt update_cache=yes pkg={{item}} state=installed
      with_items:
        - vim
        - openjdk-7-jdk
    - name: make /usr/local/kafka directory
      shell: "mkdir /usr/local/kafka"
    - name: download kafka (the link is from an apache mirror)
      get_url: url=http://apache.spinellicreations.com/kafka/0.8.1.1/kafka-0.8.1.1-src.tgz dest=/usr/local/kafka/kafka-0.8.1.1-src.tgz mode=0440
    - name: untar file
      shell: "tar -xvf /usr/local/kafka/kafka-0.8.1.1-src.tgz -C /usr/local/kafka"
    - name: build kafka with gradle
      shell: "cd /usr/local/kafka/kafka-0.8.1.1-src && ./gradlew jar"
When I vagrant up, the box is built and ready. I can vagrant ssh in and run a basic producer/consumer test locally, e.g.:
cd /usr/local/kafka/kafka-0.8.1.1-src
bin/zookeeper-server-start.sh config/zookeeper.properties #start zookeeper
bin/kafka-server-start.sh config/server.properties #start kafka
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test #start a producer
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning #start a consumer
When I type messages into the producer window, they appear in the consumer window. Great.
I then tried to connect to Kafka from the host machine using the kafka-python package:
>>> from kafka import KafkaClient, SimpleProducer
>>> kafka = KafkaClient("127.0.0.1:9092", timeout=120)
>>> kafka.ensure_topic_exists('turkey')
No handlers could be found for logger "kafka"
>>> kafka.ensure_topic_exists('turkey')
>>> producer = SimpleProducer(kafka)
>>> producer.send_messages("turkey", "gobble gobble")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/awoolford/anaconda/lib/python2.7/site-packages/kafka/producer.py", line 261, in send_messages
return super(SimpleProducer, self).send_messages(topic, partition, *msg)
File "/Users/awoolford/anaconda/lib/python2.7/site-packages/kafka/producer.py", line 188, in send_messages
timeout=self.ack_timeout)
File "/Users/awoolford/anaconda/lib/python2.7/site-packages/kafka/client.py", line 312, in send_produce_request
resps = self._send_broker_aware_request(payloads, encoder, decoder)
File "/Users/awoolford/anaconda/lib/python2.7/site-packages/kafka/client.py", line 148, in _send_broker_aware_request
conn = self._get_conn(broker.host, broker.port)
File "/Users/awoolford/anaconda/lib/python2.7/site-packages/kafka/client.py", line 55, in _get_conn
timeout=self.timeout
File "/Users/awoolford/anaconda/lib/python2.7/site-packages/kafka/conn.py", line 60, in __init__
self.reinit()
File "/Users/awoolford/anaconda/lib/python2.7/site-packages/kafka/conn.py", line 195, in reinit
self._raise_connection_error()
File "/Users/awoolford/anaconda/lib/python2.7/site-packages/kafka/conn.py", line 75, in _raise_connection_error
raise ConnectionError("Kafka @ {0}:{1} went away".format(self.host, self.port))
kafka.common.ConnectionError: Kafka @ precise64:9092 went away
Note that kafka.ensure_topic_exists is called twice. The first call emits a logging warning and then creates the topic, so I can see that Python is talking to Kafka on port 9092. However, I can't send messages to the queue.
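The last line of the traceback is suggestive: the client ends up trying to reach the broker at precise64, the guest's internal hostname, rather than at 127.0.0.1. A quick way to check whether the host can resolve that name (the resolves helper is a sketch of mine):

```python
import socket

def resolves(hostname):
    """Return True if this machine's resolver can map hostname to an IP address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# resolves("precise64")  # very likely False on the host, which would explain
#                        # why the client's connection attempt fails
```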
Can you see what I'm doing wrong?
1 Answer
The answer: advertised.host.name and advertised.port need to be set in config/server.properties. I added two lines to my playbook to do this:
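The answer's exact lines aren't shown here, but a sketch of the fix as Ansible lineinfile tasks might look like the following (the use of lineinfile and the specific values are my assumptions; the forwarded port is 9092, and 127.0.0.1 works for host-side clients thanks to the port forwarding):

    # Sketch, not the author's exact lines: append the advertised listener
    # settings to server.properties so clients outside the VM get a reachable
    # address in the broker metadata, instead of the guest hostname.
    - name: set advertised host name
      lineinfile: dest=/usr/local/kafka/kafka-0.8.1.1-src/config/server.properties line="advertised.host.name=127.0.0.1"
    - name: set advertised port
      lineinfile: dest=/usr/local/kafka/kafka-0.8.1.1-src/config/server.properties line="advertised.port=9092"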
... and now it provisions a working single-node Kafka cluster.
If I were starting over, I would probably use wirbelsturm to provision a Kafka lab instead.