I. Installed versions:

JDK 1.8.0_111-b14
Hadoop 2.7.3
ZooKeeper 3.5.2-alpha

II. Installation steps:

    JDK installation and the cluster's dependency environment configuration are not covered again here; see https://my.oschina.net/u/2500254/blog/806114

1. Hadoop configuration

    The Hadoop configuration mainly involves four files: hdfs-site.xml, core-site.xml, mapred-site.xml, and yarn-site.xml. Each file's configuration is described in detail below.

  1. Configuring core-site.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    
    <!-- Put site-specific property overrides in this file. -->
    
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://cluster1</value>
            <description>Logical name of the HDFS nameservice (the NameNode HA pair); must match dfs.nameservices in hdfs-site.xml</description>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/home/hadoop/bigdata/tmp</value>
            <description>Default base directory for NameNode and DataNode data in HDFS; both can also be set individually in hdfs-site.xml</description>
        </property>
        <property>
            <name>ha.zookeeper.quorum</name>
            <value>m7-psdc-kvm01:2181,m7-psdc-kvm02:2181,m7-psdc-kvm03:2181</value>
            <description>Addresses and ports of the ZooKeeper ensemble; the ensemble should have an odd number of nodes</description>
        </property>
    </configuration>
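
    With core-site.xml in place, you can sanity-check that clients resolve the HA nameservice; a quick check with the stock getconf tool, run from the Hadoop install directory:

    bin/hdfs getconf -confKey fs.defaultFS
    # expected output: hdfs://cluster1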


  2. Configuring hdfs-site.xml (the key configuration)
    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    
    <!-- Put site-specific property overrides in this file. -->
    
    <configuration>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>/home/hadoop/bigdata/nn</value>
            <description>Directory where the NameNode stores its data (dfs.name.dir is the deprecated Hadoop 1.x name)</description>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>/home/hadoop/bigdata/dn</value>
            <description>Directory where the DataNode stores its data (dfs.data.dir is the deprecated Hadoop 1.x name)</description>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>2</value>
            <description>Number of block replicas; the default is 3</description>
        </property>
        <property>
            <name>dfs.nameservices</name>
            <value>cluster1</value>
            <description>Logical name of the HDFS nameservice (the NameNode HA pair)</description>
        </property>
        <property>
            <name>dfs.ha.namenodes.cluster1</name>
            <value>ns1,ns2</value>
            <description>Logical names of the NameNodes within the nameservice</description>
        </property>
        <property>
            <name>dfs.namenode.rpc-address.cluster1.ns1</name>
            <value>m7-psdc-kvm01:8020</value>
            <description>RPC address and port of NameNode ns1</description>
        </property>
        <property>
            <name>dfs.namenode.http-address.cluster1.ns1</name>
            <value>m7-psdc-kvm01:50070</value>
            <description>Web UI address and port of NameNode ns1</description>
        </property>
        <property>
            <name>dfs.namenode.rpc-address.cluster1.ns2</name>
            <value>m7-psdc-kvm02:8020</value>
            <description>RPC address and port of NameNode ns2 (use the same RPC port on both NameNodes)</description>
        </property>
        <property>
            <name>dfs.namenode.http-address.cluster1.ns2</name>
            <value>m7-psdc-kvm02:50070</value>
            <description>Web UI address and port of NameNode ns2</description>
        </property>
        <property>
            <name>dfs.namenode.shared.edits.dir</name>
            <value>qjournal://m7-psdc-kvm01:8485;m7-psdc-kvm02:8485;m7-psdc-kvm03:8485/cluster1</value>
            <description>URI of the JournalNode group: the active NameNode writes its edit log to these JournalNodes, and the standby NameNode reads those edits and applies them to its in-memory namespace. Run an odd number of JournalNodes, and leave no whitespace inside the value</description>
        </property>
        <property>
            <name>dfs.journalnode.edits.dir</name>
            <value>/home/hadoop/bigdata/journal</value>
            <description>Local directory on each JournalNode for storing edit logs and other state</description>
        </property>
        <property>
            <name>dfs.ha.automatic-failover.enabled</name>
            <value>true</value>
            <description>Enable automatic failover. Automatic failover relies on the ZooKeeper ensemble and the ZKFailoverController (ZKFC), a ZooKeeper client that monitors NameNode state; every node running a NameNode must also run a zkfc process</description>
        </property>
        <property>
            <name>dfs.client.failover.proxy.provider.cluster1</name>
            <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
            <description>Java class that HDFS clients use to locate the active NameNode</description>
        </property>
        <property>
            <name>dfs.ha.fencing.methods</name>
            <value>sshfence</value>
            <description>Guards against split-brain in the HA cluster (two masters serving as active at once, leaving the system inconsistent). In HDFS HA the JournalNodes only let one NameNode write, so two active NameNodes cannot both commit edits; however, during a failover the previous active NameNode may still be serving client RPC requests, so a fencing mechanism is needed to kill it. The common fencing method is sshfence, which requires the SSH key in dfs.ha.fencing.ssh.private-key-files and a connect timeout</description>
        </property>
        <property>
            <name>dfs.ha.fencing.ssh.private-key-files</name>
            <value>/home/hadoop/.ssh/id_rsa</value>
            <description>SSH private key used for fencing</description>
        </property>
        <property>
            <name>dfs.ha.fencing.ssh.connect-timeout</name>
            <value>30000</value>
            <description>SSH connect timeout in milliseconds</description>
        </property>
    </configuration>
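
    Once both NameNodes are running (see section III), their HA state can be queried from the command line with the stock haadmin tool, using the logical names ns1/ns2 defined above:

    bin/hdfs haadmin -getServiceState ns1    # prints "active" or "standby"
    bin/hdfs haadmin -getServiceState ns2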


     

  3. Configuring mapred-site.xml
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    
    <!-- Put site-specific property overrides in this file. -->
    
    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
            <description>Run MapReduce on YARN, a fundamental change from Hadoop 1</description>
        </property>
        <property>
            <name>mapreduce.jobhistory.address</name>
            <value>m7-psdc-kvm01:10020</value>
            <description>IPC address and port of the MR JobHistory Server</description>
        </property>
        <property>
            <name>mapreduce.jobhistory.webapp.address</name>
            <value>m7-psdc-kvm01:19888</value>
            <description>Web address for browsing records of completed MapReduce jobs; the history server must be running</description>
        </property>
        <property>
            <name>mapreduce.jobhistory.done-dir</name>
            <value>/home/hadoop/bigdata/done</value>
            <description>Where the MR JobHistory Server keeps logs of finished jobs; default: /mr-history/done</description>
        </property>
        <property>
            <name>mapreduce.jobhistory.intermediate-done-dir</name>
            <value>hdfs://cluster1/mapred/tmp</value>
            <description>Where logs of in-flight MapReduce jobs are staged; default: /mr-history/tmp. The nameservice here must match dfs.nameservices</description>
        </property>
    </configuration>
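
    Note that start-yarn.sh does not start the JobHistory Server; it has to be launched separately on the node configured above (m7-psdc-kvm01 here):

    sbin/mr-jobhistory-daemon.sh start historyserver
    # then browse http://m7-psdc-kvm01:19888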
  4. Configuring yarn-site.xml

    <?xml version="1.0"?>
    <configuration>
    <!-- Site specific YARN configuration properties -->
        <property>
            <name>yarn.resourcemanager.ha.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>yarn.resourcemanager.cluster-id</name>
            <value>yarn-cluster1</value>
        </property>
        <!-- Logical IDs of the ResourceManagers -->
        <property>
            <name>yarn.resourcemanager.ha.rm-ids</name>
            <value>rm1,rm2</value>
        </property>
        <!-- Host of each ResourceManager -->
        <property>
            <name>yarn.resourcemanager.hostname.rm1</name>
            <value>m7-psdc-kvm03</value>
        </property>
        <property>
            <name>yarn.resourcemanager.hostname.rm2</name>
            <value>m7-psdc-kvm02</value>
        </property>
        <property>
            <name>yarn.resourcemanager.recovery.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>yarn.resourcemanager.store.class</name>
            <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
        </property>
        <!-- Address of the ZooKeeper ensemble -->
        <property>
            <name>yarn.resourcemanager.zk-address</name>
            <value>m7-psdc-kvm01:2181,m7-psdc-kvm02:2181,m7-psdc-kvm03:2181</value>
        </property>
        <property>
            <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
            <value>true</value>
        </property>
        <property>
            <name>yarn.resourcemanager.admin.address.rm1</name>
            <value>m7-psdc-kvm03:8033</value>
        </property>
        <property>
            <name>yarn.resourcemanager.admin.address.rm2</name>
            <value>m7-psdc-kvm02:8033</value>
        </property>
        <property>
            <name>yarn.resourcemanager.address.rm1</name>
            <value>m7-psdc-kvm03:8032</value>
        </property>
        <property>
            <name>yarn.resourcemanager.address.rm2</name>
            <value>m7-psdc-kvm02:8032</value>
        </property>
        <property>
            <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
            <value>m7-psdc-kvm03:8031</value>
        </property>
        <property>
            <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
            <value>m7-psdc-kvm02:8031</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.address.rm1</name>
            <value>m7-psdc-kvm03:8088</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.address.rm2</name>
            <value>m7-psdc-kvm02:8088</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.address.rm1</name>
            <value>m7-psdc-kvm03:8030</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.address.rm2</name>
            <value>m7-psdc-kvm02:8030</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
            <description>Default shuffle service</description>
        </property>
        <property>
            <name>yarn.nodemanager.pmem-check-enabled</name>
            <value>false</value>
        </property>
        <property>
            <name>yarn.log-aggregation-enable</name>
            <value>true</value>
        </property>
    <!--
        yarn.resourcemanager.ha.id identifies the local RM and must differ per node (rm1 on kvm03, rm2 on kvm02) if set explicitly:
        <property>
            <name>yarn.resourcemanager.ha.id</name>
            <value>rm1</value>
        </property>
    -->
        <property>
            <name>yarn.nodemanager.delete.debug-delay-sec</name>
            <value>86400</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
            <name>yarn.nodemanager.resource.memory-mb</name>
            <value>102400</value>
        </property>
        <property>
            <name>yarn.nodemanager.resource.cpu-vcores</name>
            <value>20</value>
        </property>
        <property>
            <name>yarn.scheduler.maximum-allocation-mb</name>
            <value>102400</value>
        </property>
    </configuration>
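
    After the cluster is started, ResourceManager HA can be verified the same way as NameNode HA, via the stock yarn rmadmin tool and the rm-ids defined above:

    bin/yarn rmadmin -getServiceState rm1    # prints "active" or "standby"
    bin/yarn rmadmin -getServiceState rm2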

2. ZooKeeper configuration

    The ZooKeeper configuration mainly involves two files: zoo.cfg and myid.

  1. conf/zoo.cfg: first copy zoo_sample.cfg to zoo.cfg
    cp zoo_sample.cfg zoo.cfg
  2. vi zoo.cfg and set dataDir (the data directory) and dataLogDir (the log directory):
    initLimit=10
    syncLimit=5
    clientPort=2181
    tickTime=2000
    dataDir=/usr/zookeeper/tmp/data
    dataLogDir=/usr/zookeeper/tmp/log
    server.1=master:2888:3888
    server.2=slave1:2888:3888
    server.3=slave2:2888:3888
  3. On each node [master, slave1, slave2], create a file named myid in the dataDir directory
vi myid

    On the master node the file contains: 1

    On slave1: 2

    On slave2: 3

    For example:

[hadoop@master data]$ vi myid 

1
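
    The myid files can also be written without an editor, and once all three servers are started the ensemble can be probed with ZooKeeper's four-letter ruok command; a minimal sketch, assuming the dataDir above and that nc is available:

    # on master (write 2 on slave1 and 3 on slave2)
    echo 1 > /usr/zookeeper/tmp/data/myid

    # after starting the servers, a healthy node answers "imok"
    echo ruok | nc localhost 2181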

III. Starting the cluster

 1. Starting the ZooKeeper ensemble

    1. Start ZooKeeper on all three nodes:
bin/zkServer.sh start
    2. Check the ensemble state with zkServer.sh status: there should be one leader and two followers.
[hadoop@master hadoop-2.7.3]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.2-alpha/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
[hadoop@slave1 root]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.2-alpha/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
[hadoop@slave2 root]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.2-alpha/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
    3. Verify ZooKeeper (optional): run zkCli.sh
[hadoop@slave1 root]$ zkCli.sh
Connecting to localhost:2181
2016-12-18 02:05:03,115 [myid:] - INFO  [main:Environment@109] - Client environment:zookeeper.version=3.5.2-alpha-1750793, built on 06/30/2016 13:15 GMT
2016-12-18 02:05:03,118 [myid:] - INFO  [main:Environment@109] - Client environment:host.name=salve1
2016-12-18 02:05:03,118 [myid:] - INFO  [main:Environment@109] - Client environment:java.version=1.8.0_111
2016-12-18 02:05:03,120 [myid:] - INFO  [main:Environment@109] - Client environment:java.vendor=Oracle Corporation
2016-12-18 02:05:03,120 [myid:] - INFO  [main:Environment@109] - Client environment:java.home=/usr/local/jdk1.8.0_111/jre
2016-12-18 02:05:03,120 [myid:] - INFO  [main:Environment@109] - Client environment:java.class.path=/usr/local/zookeeper-3.5.2-alpha/bin/../build/classes:/usr/local/zookeeper-3.5.2-alpha/bin/../build/lib/*.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/slf4j-log4j12-1.7.5.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/slf4j-api-1.7.5.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/servlet-api-2.5-20081211.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/netty-3.10.5.Final.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/log4j-1.2.17.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/jline-2.11.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/jetty-util-6.1.26.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/jetty-6.1.26.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/javacc.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/jackson-mapper-asl-1.9.11.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/jackson-core-asl-1.9.11.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../lib/commons-cli-1.2.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../zookeeper-3.5.2-alpha.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../src/java/lib/*.jar:/usr/local/zookeeper-3.5.2-alpha/bin/../conf:.:/usr/local/jdk1.8.0_111/lib/dt.jar:/usr/local/jdk1.8.0_111/lib/tools.jar:/usr/local/zookeeper-3.5.2-alpha/bin:/usr/local/hadoop-2.7.3/bin
2016-12-18 02:05:03,120 [myid:] - INFO  [main:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:java.io.tmpdir=/tmp
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:java.compiler=<NA>
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:os.name=Linux
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:os.arch=amd64
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:os.version=3.10.0-327.22.2.el7.x86_64
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:user.name=hadoop
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:user.home=/home/hadoop
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:user.dir=/tmp/hsperfdata_hadoop
2016-12-18 02:05:03,121 [myid:] - INFO  [main:Environment@109] - Client environment:os.memory.free=52MB
2016-12-18 02:05:03,123 [myid:] - INFO  [main:Environment@109] - Client environment:os.memory.max=228MB
2016-12-18 02:05:03,123 [myid:] - INFO  [main:Environment@109] - Client environment:os.memory.total=57MB
2016-12-18 02:05:03,146 [myid:] - INFO  [main:ZooKeeper@855] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@593634ad
Welcome to ZooKeeper!
2016-12-18 02:05:03,171 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1113] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2016-12-18 02:05:03,243 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:56184, server: localhost/127.0.0.1:2181
2016-12-18 02:05:03,252 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1381] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x200220f5fe30060, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] 
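    From this prompt you can inspect the znode tree. On a fresh ensemble, ls / typically shows only the built-in /zookeeper node; the /hadoop-ha znode appears later, after hdfs zkfc -formatZK (section 2 below):

[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 1] quit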

2. Starting the Hadoop cluster

    1. First-time startup

        1.1 Start the JournalNode daemons on the three nodes, then run jps; a JournalNode process should appear.

sbin/hadoop-daemon.sh start journalnode
jps

JournalNode

        1.2 Format the NameNode on master (either NameNode node will do), then start the NameNode on that node.

bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode

        1.3 On the other NameNode node, slave1, sync the metadata from master:

bin/hdfs namenode -bootstrapStandby

        1.4 Stop all HDFS services:

sbin/stop-dfs.sh

        1.5 Initialize the ZKFC state in ZooKeeper:

bin/hdfs zkfc -formatZK
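
        To confirm that formatZK created its znode, reconnect with zkCli.sh and list the HA root; a quick check from any ZooKeeper node:

zkCli.sh -server localhost:2181
# inside the zkCli prompt:
ls /hadoop-ha        # should list the nameservice, e.g. [cluster1]
quit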

        1.6 Start HDFS:

sbin/start-dfs.sh

        1.7 Start YARN:

sbin/start-yarn.sh
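
        With everything up, automatic failover can be smoke-tested; a sketch, run on the node hosting the currently active NameNode (assuming that is ns1):

bin/hdfs haadmin -getServiceState ns1    # suppose this prints "active"
sbin/hadoop-daemon.sh stop namenode      # stop the active NameNode
bin/hdfs haadmin -getServiceState ns2    # should report "active" shortly after
sbin/hadoop-daemon.sh start namenode     # bring the stopped NameNode back as standby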
    2. Subsequent startups

        2.1 Simply start HDFS; the NameNode, DataNode, JournalNode, and DFSZKFailoverController processes all start automatically.

sbin/start-dfs.sh

        2.2 Start YARN:

sbin/start-yarn.sh
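
    To check that every daemon came up without logging in to each node, a small loop works; a sketch assuming passwordless SSH from master, matching the jps listings in the next section:

for h in master slave1 slave2; do
    echo "== $h =="
    ssh "$h" jps
done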

IV. Check the processes on each node

    4.1 master

[hadoop@master hadoop-2.7.3]$ jps
26544 QuorumPeerMain
25509 JournalNode
25704 DFSZKFailoverController
26360 Jps
25306 DataNode
25195 NameNode
25886 ResourceManager
25999 NodeManager

    4.2 slave1

[hadoop@slave1 root]$ jps
2289 DFSZKFailoverController
9400 QuorumPeerMain
2601 Jps
2060 DataNode
2413 NodeManager
2159 JournalNode
1983 NameNode

    4.3 slave2

[hadoop@slave2 root]$ jps
11984 DataNode
12370 Jps
2514 QuorumPeerMain
12083 JournalNode
12188 NodeManager

V. View in the browser

    http://master:8088/cluster/cluster, the YARN resource management page

    http://master:50070/dfshealth.html#tab-overview, the HDFS page of the master node

    http://slave1:50070/dfshealth.html#tab-overview, the HDFS page of the slave1 node
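
    The HA state shown on these pages can also be read from the NameNode's JMX endpoint, which is handy for scripts; a minimal example with curl (the NameNodeStatus MBean is present in Hadoop 2.7):

curl -s 'http://master:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus'
# the returned JSON contains "State" : "active" (or "standby")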

 

 

 

 

