Big Data Environment Setup: ZK, Kafka, and Hadoop HA

Setting up the environment is an unavoidable first step for anyone working in big data. This post records the full process of setting up ZooKeeper, Kafka, and Hadoop HA. Note that it assumes the basic environment is already in place: hostnames configured, hosts-file mappings, passwordless SSH login, JDK installed, and so on.

Note: the versions of the three components used in this post are zookeeper-3.4.10, kafka-0.10.2.0, and hadoop-2.6.5.

My cluster is laid out as follows; each line lists the internal IP, hostname, Hadoop roles, ZK node, and Kafka node:

192.168.4.114 bonree-zq-1 namenode resourcemanager nodemanager datanode zk kafka
192.168.4.117 bonree-zq-2 namenode resourcemanager nodemanager datanode zk kafka
192.168.4.116 bonree-zq-3 nodemanager datanode zk kafka

I. ZooKeeper

1. Download the package

URL: https://archive.apache.org/dist/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz

2. Extract and create the required directories

$ cd /usr/local/env
$ tar -zxvf zookeeper-3.4.10.tar.gz
$ cd zookeeper-3.4.10
$ mkdir data logs

3. Create a myid file in the data directory

Set it to 1, 2, and 3 on bonree-zq-1, bonree-zq-2, and bonree-zq-3, respectively.
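
For example, a minimal way to write the file on each host, assuming the install path used above:

$ echo 1 > /usr/local/env/zookeeper-3.4.10/data/myid   # on bonree-zq-1
$ echo 2 > /usr/local/env/zookeeper-3.4.10/data/myid   # on bonree-zq-2
$ echo 3 > /usr/local/env/zookeeper-3.4.10/data/myid   # on bonree-zq-3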

4. Create zoo.cfg under conf

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=50
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=25
maxSessionTimeout=180000
zookeeper.sasl=false
minSessionTimeout=30000
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/env/zookeeper-3.4.10/data
dataLogDir=/usr/local/env/zookeeper-3.4.10/logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
maxClientCnxns=1000
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=1

server.1=bonree-zq-1:2888:3888
server.2=bonree-zq-2:2888:3888
server.3=bonree-zq-3:2888:3888

5. Start

You can start the ZK service on each node individually with the commands below, or write your own startup script (a sketch follows the commands):

$ bin/zkServer.sh start
$ bin/zkServer.sh stop
$ bin/zkServer.sh status   # check the node's status (leader/follower)
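
If you would rather drive all three nodes from one place, here is a minimal sketch, assuming passwordless SSH between the nodes and the same install path everywhere (the script name and loop are my own; adjust hostnames and paths to your cluster):

#!/bin/bash
# zk-cluster.sh - run a zkServer.sh action (default: start) on all three nodes
ZK_HOME=/usr/local/env/zookeeper-3.4.10
for host in bonree-zq-1 bonree-zq-2 bonree-zq-3; do
    ssh "$host" "$ZK_HOME/bin/zkServer.sh ${1:-start}"
done

Run it as ./zk-cluster.sh start (or stop / status).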

II. Kafka

1. Download the package

URL: https://archive.apache.org/dist/kafka/0.10.2.0/kafka_2.11-0.10.2.0.tgz

Note: 2.11 is the Scala version.

2. Extract and create the required directories

$ cd /usr/local/env
$ tar -zxvf kafka_2.11-0.10.2.0.tgz
$ cd kafka_2.11-0.10.2.0
$ mkdir logs

3. Create server.properties under config (back up the original file first)
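
For example, a quick way to keep the original file around (the path assumes the extracted directory above):

$ cd /usr/local/env/kafka_2.11-0.10.2.0/config
$ cp server.properties server.properties.bak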

# Note: keep broker.id consistent with this node's ZK myid (1, 2, 3 on the three hosts)
broker.id=1
port=9092

num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=10485760
socket.receive.buffer.bytes=10485760
socket.request.max.bytes=1610612736
#socket.request.max.bytes=471859200
fetch.purgatory.purge.interval.requests=100
producer.purgatory.purge.interval.requests=100

############################# Log Basics #############################
num.partitions=3
log.dirs=/usr/local/env/kafka_2.11-0.10.2.0/logs

log.cleaner.enable=true
log.cleanup.policy=delete
log.retention.bytes=20737418240
log.retention.hours=8

log.flush.interval.messages=20000
log.flush.interval.ms=10000
log.flush.scheduler.interval.ms=2000
log.retention.check.interval.ms=300000
log.segment.bytes=1073741824
message.max.bytes=1048576
#num.recovery.threads.per.data.dir=1

############################# Zookeeper #############################
zookeeper.connect=bonree-zq-1:2181,bonree-zq-2:2181,bonree-zq-3:2181
zookeeper.connection.timeout.ms=6000
zookeeper.sync.time.ms=5000

# Replication configurations
default.replication.factor=2
num.replica.fetchers=4
replica.fetch.max.bytes=1073741824
replica.fetch.wait.max.ms=5000
replica.high.watermark.checkpoint.interval.ms=5000
replica.socket.timeout.ms=60000
replica.socket.receive.buffer.bytes=10485760
replica.lag.time.max.ms=10000
replica.lag.max.messages=4000
leader.imbalance.check.interval.seconds=1800

4. Start

You can start the Kafka service on each node individually with the commands below, or write your own startup script (a cluster-wide sketch follows the jps note below):

APPHOME=/usr/local/env/kafka_2.11-0.10.2.0
$ nohup sh ${APPHOME}/bin/kafka-server-start.sh ${APPHOME}/config/server.properties > /dev/null 2>&1 &
$ sh ${APPHOME}/bin/kafka-server-stop.sh

Use jps to check that the Kafka process is running.
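
As with ZK, the per-node start can be wrapped in a small script. A minimal sketch, assuming passwordless SSH and identical install paths on all nodes; the -daemon flag tells kafka-server-start.sh to run the broker in the background:

#!/bin/bash
# kafka-cluster-start.sh - start the Kafka broker on all three nodes over SSH
APPHOME=/usr/local/env/kafka_2.11-0.10.2.0
for host in bonree-zq-1 bonree-zq-2 bonree-zq-3; do
    ssh "$host" "${APPHOME}/bin/kafka-server-start.sh -daemon ${APPHOME}/config/server.properties"
done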

III. Hadoop

1. Download the package

URL: https://archive.apache.org/dist/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz

2. Extract

$ cd /usr/local/env
$ tar -zxvf hadoop-2.6.5.tar.gz

3. Modify the configuration files

3.1 hadoop-env.sh

export JAVA_HOME=/usr/local/env/jdk1.8.0_141

3.2 core-site.xml

<property>
<name>fs.defaultFS</name>
<value>hdfs://ns1</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/local/env/hadoop-2.6.5/data/tmp</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>bonree-zq-1:2181,bonree-zq-2:2181,bonree-zq-3:2181</value>
</property>

3.3 hdfs-site.xml

<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<!-- HDFS block size -->
<property>
<name>dfs.block.size</name>
<value>67108864</value>
</property>
<!-- Where the NameNode stores HDFS metadata -->
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/env/hadoop-2.6.5/data/namenode</value>
</property>
<!-- Where the DataNode stores HDFS data blocks -->
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/env/hadoop-2.6.5/data/datanode</value>
</property>
<!-- Logical nameservice ID for HDFS; the name must match the one in fs.defaultFS -->
<property>
<name>dfs.nameservices</name>
<value>ns1</value>
</property>
<!-- NameNode IDs under this nameservice -->
<property>
<name>dfs.ha.namenodes.ns1</name>
<value>nn1,nn2</value>
</property>
<!-- RPC addresses of the two NameNodes -->
<property>
<name>dfs.namenode.rpc-address.ns1.nn1</name>
<value>bonree-zq-1:9000</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ns1.nn2</name>
<value>bonree-zq-2:9000</value>
</property>
<!-- Web UI addresses of the two NameNodes -->
<property>
<name>dfs.namenode.http-address.ns1.nn1</name>
<value>bonree-zq-1:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.ns1.nn2</name>
<value>bonree-zq-2:50070</value>
</property>
<!-- JournalNode quorum that stores the shared edit log -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://bonree-zq-1:8485;bonree-zq-2:8485;bonree-zq-3:8485/ns1</value>
</property>
<!-- Local path where each JournalNode stores the shared data -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/usr/local/env/hadoop-2.6.5/data/journal</value>
</property>
<!-- Enable automatic failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- Class the client uses to locate the active NameNode; the property suffix must be the nameservice name (ns1) -->
<property>
<name>dfs.client.failover.proxy.provider.ns1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods used to kill the old active NameNode on split-brain; multiple methods may be listed, one per line. shell(/bin/true) is a fallback that always reports success -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<!-- Private key of the user that runs the sshfence method -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<!-- Timeout, in milliseconds, for the fencing SSH connection -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
<property>
<name>dfs.qjournal.start-segment.timeout.ms</name>
<value>600000</value>
</property>
<property>
<name>dfs.qjournal.prepare-recovery.timeout.ms</name>
<value>600000</value>
</property>
<property>
<name>dfs.qjournal.accept-recovery.timeout.ms</name>
<value>600000</value>
</property>
<property>
<name>dfs.qjournal.finalize-segment.timeout.ms</name>
<value>600000</value>
</property>
<property>
<name>dfs.qjournal.select-input-streams.timeout.ms</name>
<value>600000</value>
</property>
<property>
<name>dfs.qjournal.get-journal-state.timeout.ms</name>
<value>600000</value>
</property>
<property>
<name>dfs.qjournal.new-epoch.timeout.ms</name>
<value>600000</value>
</property>
<property>
<name>dfs.qjournal.write-txns.timeout.ms</name>
<value>600000</value>
</property>

3.4 yarn-site.xml

<!-- Enable YARN ResourceManager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- Enable automatic failover (defaults to true when HA is enabled) -->
<property>
<name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- Logical cluster ID for the YARN HA service -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yrc</value>
</property>
<!-- ResourceManager IDs under the HA service -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<!-- Host that runs rm1 -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>bonree-zq-1</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>bonree-zq-2</value>
</property>
<!-- Web UI address of rm1 -->
<property>
<name>yarn.resourcemanager.webapp.address.rm1</name>
<value>bonree-zq-1:8088</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm2</name>
<value>bonree-zq-2:8088</value>
</property>
<!-- ZooKeeper quorum the ResourceManagers depend on (no spaces allowed in the list) -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>bonree-zq-1:2181,bonree-zq-2:2181,bonree-zq-3:2181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

3.5 mapred-site.xml

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- JobTracker RPC port (legacy MRv1 setting; not used when mapreduce.framework.name is yarn) -->
<property>
<name>mapred.job.tracker</name>
<value>hdfs://bonree-zq-1:40020/</value>
</property>
<!-- JobTracker web UI port (legacy MRv1 setting) -->
<property>
<name>mapred.job.tracker.http.address</name>
<value>0.0.0.0:40021</value>
</property>
<!-- TaskTracker HTTP port (legacy MRv1 setting) -->
<property>
<name>mapred.task.tracker.http.address</name>
<value>0.0.0.0:40022</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>0.0.0.0:40023</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>0.0.0.0:40024</value>
</property>
<property>
<name>mapreduce.shuffle.port</name>
<value>40025</value>
</property>

3.6 slaves

bonree-zq-1
bonree-zq-2
bonree-zq-3

3.7 Hadoop environment variables

vim /etc/profile

# java
export JAVA_HOME=/usr/local/env/jdk1.8.0_141
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/jre/lib/:$JAVA_HOME/lib/

# hadoop
export HADOOP_HOME=/usr/local/env/hadoop-2.6.5
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export PATH=$PATH:$HADOOP_HOME/bin/:$HADOOP_HOME/sbin/

After editing, run source /etc/profile to apply the changes; a quick sanity check follows.
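
For example:

$ source /etc/profile
$ hadoop version     # should report Hadoop 2.6.5
$ echo $HADOOP_HOME  # should print /usr/local/env/hadoop-2.6.5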

4. Start

4.1 Start ZK

Run on each node:

$ cd /usr/local/env/zookeeper-3.4.10
$ bin/zkServer.sh start

4.2 Start the JournalNodes

Run on each node:

$ cd /usr/local/env/hadoop-2.6.5
$ sbin/hadoop-daemon.sh start journalnode

4.3 Format the NameNode

Run on bonree-zq-1:

$ bin/hdfs namenode -format

Note: an IPC exception when formatting the NN is usually caused by incomplete passwordless-SSH configuration.

4.4 Format ZKFC

Run on bonree-zq-1:

$ bin/hdfs zkfc -formatZK

4.5 Start the NN

Run on bonree-zq-1:

$ sbin/start-dfs.sh

4.6 Bootstrap the standby NN with metadata from the active NN and start it

Run on bonree-zq-2:

$ bin/hdfs namenode -bootstrapStandby
$ sbin/hadoop-daemon.sh start namenode

4.7 Start YARN

Run on bonree-zq-1:

$ sbin/start-yarn.sh

The RM on the standby NN's host must be started separately, so run on bonree-zq-2:

$ sbin/yarn-daemon.sh start resourcemanager
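
You can then confirm which RM is active (rm1 and rm2 are the IDs configured in yarn-site.xml); expect one "active" and one "standby":

$ bin/yarn rmadmin -getServiceState rm1
$ bin/yarn rmadmin -getServiceState rm2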

5. Check the processes

If everything is configured correctly, the three nodes should show the following processes:

bonree-zq-1

[root@bonree-zq-1 hadoop]# jps
12498 DataNode
12691 JournalNode
1720 QuorumPeerMain
13033 ResourceManager
12395 NameNode
12875 DFSZKFailoverController
14253 Jps
13135 NodeManager

bonree-zq-2

[root@bonree-zq-2 hadoop]# jps
9392 DataNode
9604 DFSZKFailoverController
11685 Jps
1750 QuorumPeerMain
9319 NameNode
9483 JournalNode
11643 ResourceManager
9742 NodeManager

bonree-zq-3

[root@bonree-zq-3 hadoop]# jps
5424 JournalNode
5840 Jps
5537 NodeManager
1763 QuorumPeerMain
5333 DataNode

6. Check the web UI

Once the configuration is complete, open the web UI on port 50070. Normally one NN should be active and the other standby. If both NameNodes are in the standby state, you can manually force one of them to become active:

$ bin/hdfs haadmin -transitionToActive --forcemanual nn1
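
You can also query each NameNode's state directly (nn1 and nn2 are the IDs from hdfs-site.xml):

$ bin/hdfs haadmin -getServiceState nn1
$ bin/hdfs haadmin -getServiceState nn2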

Log in to ZooKeeper on bonree-zq-1 and you should see a new hadoop-ha znode; at that point the configuration should be fine:

[zk: localhost:2181(CONNECTED) 0] ls /
[cluster, controller_epoch, brokers, zookeeper, yarn-leader-election, hadoop-ha, admin, isr_change_notification, consumers, config]

Now you can kill one NN or RM and check whether the corresponding standby takes over; a minimal check is sketched below.
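
For example (the PID placeholder is whatever jps reports on the active NN's host; nn2 should report active shortly after nn1's process dies):

$ jps | grep NameNode                      # note the NameNode PID on the active host
$ kill -9 <NameNode-pid>                   # simulate a crash
$ bin/hdfs haadmin -getServiceState nn2    # should now print "active"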

7. Summary

  1. As long as something is feasible in principle, scientifically sound, and logically coherent, then whatever problems we run into during the actual implementation, we can always solve them or find an alternative. We just need the firm confidence to keep going.
  2. We need to overcome the fear of unknown territory and learn to carry the methodology and experience from fields we know well into fields we don't. Venturing into the unknown, we will inevitably hit all kinds of problems, but if we calm down and analyze them carefully, we will find that almost every problem in the unknown territory has a similar or related solution back in the territory we already know.
  3. Li Xiaolai has a view that execution is the ability to keep doing something even while you are still doing it badly. I quite agree. I studied both Java and C++ in college, but at first I had no experience with this kind of environment setup. The first time I set up Hadoop HA, I hit a pile of baffling errors and even felt like giving up. But since I had to use it, I resolved to solve the problem and cut down every obstacle in my way. Once I had solved it, I found the problem was not complicated at all, in fact quite simple; it was only the initial fear of the unknown that had kept me from pushing straight through.