The examples below all assume a three-node cluster.

1. First, prepare three machines. Either Ubuntu or CentOS works; the two differ in how Docker is installed, but the remaining steps are essentially the same.

2. Pin the IP of each machine by configuring a static address. First, the CentOS 8 setup:
```bash
cd /etc/sysconfig/network-scripts
vim ifcfg-ens192
```

Edit the file to the following and save it:

```
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens192
UUID=3e4aae63-11ad-4213-ac97-1d5685ba9d6a
DEVICE=ens192
ONBOOT=yes
IPADDR=172.16.2.201
NETMASK=255.255.255.0
GATEWAY=172.16.2.1
PREFIX=24
```

Then reload the network with `nmcli c reload` and run `ifconfig` to check that the settings took effect.
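Only `IPADDR` changes on the other two nodes (172.16.2.202 and 172.16.2.203, matching the hosts file in the next step). A minimal sketch for applying and verifying the change; note that `nmcli c reload` only re-reads the ifcfg files, so the connection also has to be brought up (or the machine rebooted) before the new address is active:

```bash
nmcli c reload          # re-read the ifcfg files
nmcli c up ens192       # bring the connection up with the new settings
ip addr show ens192     # confirm the static address is in place
```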
3. Configure the hosts file on the three machines; it is identical on all three:

```
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.2.201 hadoop-master
172.16.2.202 hadoop-slave1
172.16.2.203 hadoop-slave2
172.16.2.201 spark-master
172.16.2.2   tech.ciwei
```

Copy the same file to the other two machines with `scp /etc/hosts root@ip:/etc/hosts` (substituting each slave's address for `ip`).
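A sketch of that distribution step, using the slave IPs from the hosts file above (IPs rather than hostnames, since the slaves cannot resolve the names until they have received the file), followed by a quick resolution check:

```bash
for h in 172.16.2.202 172.16.2.203; do
  scp /etc/hosts root@$h:/etc/hosts
done
ping -c 1 hadoop-slave1   # should resolve to 172.16.2.202
ping -c 1 hadoop-slave2   # should resolve to 172.16.2.203
```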
4. Set up passwordless SSH login among the three machines.

On hadoop-master:

```bash
ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
```

On hadoop-slave1:

```bash
ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa
scp /root/.ssh/id_rsa.pub root@hadoop-master:/root/.ssh/id_rsa.pub01
```

On hadoop-slave2:
```bash
ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa
scp /root/.ssh/id_rsa.pub root@hadoop-master:/root/.ssh/id_rsa.pub02
```

Back on hadoop-master, merge the slave keys and distribute the combined file:

```bash
cat /root/.ssh/id_rsa.pub01 >> /root/.ssh/authorized_keys
cat /root/.ssh/id_rsa.pub02 >> /root/.ssh/authorized_keys
scp /root/.ssh/authorized_keys root@hadoop-slave1:/root/.ssh/
scp /root/.ssh/authorized_keys root@hadoop-slave2:/root/.ssh/
```

On all three machines, set the ~/.ssh directory to mode 700 and authorized_keys to mode 600:
```bash
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```

That completes the passwordless login setup for the three machines; a quick check is sketched below.
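Runnable from any of the three nodes once the keys are in place; each ssh should print the remote hostname without asking for a password (the very first connection to a host may still prompt to accept its host key):

```bash
for h in hadoop-master hadoop-slave1 hadoop-slave2; do
  ssh root@$h hostname
done
```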
5. Upload and unpack the Hadoop installation packages.

6. Configure the environment variables:

```bash
vim ~/.bash_profile
```

```
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
	. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin
export PATH

export HADOOP_HOME=$HOME/bigdata/hadoop
export JAVA_HOME=$HOME/bigdata/jdk
export HIVE_HOME=$HOME/bigdata/hive
export HBASE_HOME=$HOME/bigdata/hbase
export SPARK_HOME=$HOME/bigdata/spark
export FLUME_HOME=$HOME/bigdata/flume
export KAFKA_HOME=$HOME/bigdata/kafka
export SQOOP_HOME=$HOME/bigdata/sqoop
export ZOOKEEPER_HOME=/root/bigdata/zookeeper
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$HIVE_HOME/bin:$HBASE_HOME/bin:$SPARK_HOME/bin:$SPARK_HOME/sbin:$FLUME_HOME/bin:$KAFKA_HOME/bin:$SQOOP_HOME/bin:$ZOOKEEPER_HOME/bin:/usr/local/src/git/bin
export HIVE_CONF_DIR=$HIVE_HOME/conf
#export PYSPARK_PYTHON=/miniconda2/envs/py365/bin/python
export PYSPARK_PYTHON=/root/anaconda3/envs/reco_sys/bin/python
#export PYSPARK_DRIVER_PYTHON=/miniconda2/envs/py365/bin/python
export PYSPARK_DRIVER_PYTHON=/root/anaconda3/envs/reco_sys/bin/python
export PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.7-src.zip:$PYTHONPATH
export JAVA_LIBRARY_PATH=/root/bigdata/hadoop/lib/native
```

Copy it to the other two machines:
```bash
scp ~/.bash_profile root@hadoop-slave1:~/.bash_profile
scp ~/.bash_profile root@hadoop-slave2:~/.bash_profile
```

Then on the other two machines run:
```bash
source ~/.bash_profile
```
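A minimal sanity check, assuming the JDK and Hadoop tarballs have already been unpacked to the paths referenced above; run it on every node, hadoop-master included, since the variables only take effect in shells that have sourced the file:

```bash
source ~/.bash_profile
echo "$JAVA_HOME" "$HADOOP_HOME"
java -version           # should report the JDK under ~/bigdata/jdk
hadoop version          # should report the Hadoop release just unpacked
```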