Setting Up an Elasticsearch Distributed Cluster

it · 2026-03-20

Table of Contents

1. Modify System Settings
  1.1 Upgrade the System Kernel
    1.1.1 Update nss
    1.1.2 Update curl
    1.1.3 Install the public key
    1.1.4 Install elrepo
    1.1.5 Install the updated kernel
    1.1.6 Edit grub.conf
  1.2 Prevent Other Possible Errors
    1.2.1 Edit limits.conf
    1.2.2 Edit 90-nproc.conf
    1.2.3 Edit sysctl.conf
  1.3 Reboot
  1.4 Repeat on Every Node
2. Install Elasticsearch
  2.1 Upload the Package
  2.2 Extract
  2.3 Rename
  2.4 Add User Environment Variables
  2.5 Edit elasticsearch.yml
  2.6 Send the Configuration to the Other Nodes
  2.7 Modify node.name
  2.8 Create the Directories
  2.9 Start
  2.10 Verify

1. Modify System Settings

This section deals with the issues you run into when installing Elasticsearch 5.x and later.

1.1 Upgrade the System Kernel

The CentOS 6.x kernel is too old: Elasticsearch needs CentOS 7, or a CentOS 6.x kernel upgraded to at least 3.5. My system is CentOS 6, so I chose to upgrade the kernel. (If you are on CentOS 7, or your CentOS 6 kernel is already above 3.5, you can skip this and go straight to 1.2.)

uname -r

The output shows a kernel below 3.5, which does not satisfy the Elasticsearch install requirement.
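The same check can be scripted; here is a small sketch that compares the running kernel against the 3.5 minimum using `sort -V` (the numeric part before the first dash is enough for the comparison):

```shell
# Decide whether the kernel upgrade is needed for Elasticsearch 5.x+
kernel=$(uname -r | cut -d- -f1)   # e.g. 2.6.32 out of 2.6.32-754.el6.x86_64
if [ "$(printf '%s\n' "$kernel" 3.5 | sort -V | head -n1)" = "3.5" ]; then
    echo "kernel $kernel meets the 3.5 minimum; skip to 1.2"
else
    echo "kernel $kernel is below 3.5; upgrade required"
fi
```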

To avoid the error curl: (35) SSL connect error later on, first update the nss package.

1.1.1 Update nss
sudo yum -y update nss
1.1.2 Update curl
sudo yum -y update curl
1.1.3 Install the public key
sudo rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
1.1.4 Install elrepo
sudo rpm -Uvh https://www.elrepo.org/elrepo-release-6-8.el6.elrepo.noarch.rpm
1.1.5 Install the updated kernel
sudo yum --enablerepo=elrepo-kernel install kernel-lt -y
1.1.6 Edit grub.conf
sudo vi /etc/grub.conf

Change the GRUB boot order: in the default line, change the original 1 to 0 so the newly installed kernel boots first.
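After installing kernel-lt, the new kernel is added as the topmost entry in grub.conf, so pointing default at 0 boots it first. An illustrative excerpt (titles and versions will differ on your machine):

```
default=0    # previously: default=1
timeout=5
title CentOS (elrepo kernel-lt)     <-- entry 0: the newly installed kernel
title CentOS (2.6.32-...)           <-- entry 1: the original kernel
```

Entries are numbered from 0, top to bottom.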

1.2 Prevent Other Possible Errors

1.2.1 Edit limits.conf

The error this guards against: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

sudo vi /etc/security/limits.conf

Append the following at the end of the file:

* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096

1.2.2 Edit 90-nproc.conf

The error this guards against: max number of threads [1024] for user [bigdata] is too low, increase to at least [4096]

sudo vi /etc/security/limits.d/90-nproc.conf

Change the original line

* soft nproc 1024

to

* soft nproc 4096

1.2.3 Edit sysctl.conf

The error this guards against: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

sudo vi /etc/sysctl.conf

Append the following at the end:

vm.max_map_count = 262144
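Once all three files are edited (the limits take effect at the next login, the sysctl value after a reboot or sysctl -p), the effective values can be checked in one pass against the minimums Elasticsearch asks for. A sketch:

```shell
# Compare the effective limits against Elasticsearch's minimums
nofile=$(ulimit -n)                      # open file descriptors, needs >= 65536
nproc=$(ulimit -u)                       # user processes/threads, needs >= 4096
mmap=$(cat /proc/sys/vm/max_map_count)   # virtual memory areas, needs >= 262144

for check in "nofile $nofile 65536" "nproc $nproc 4096" "vm.max_map_count $mmap 262144"; do
    set -- $check
    if [ "$2" = "unlimited" ] || [ "$2" -ge "$3" ] 2>/dev/null; then
        echo "$1 = $2 (OK)"
    else
        echo "$1 = $2 (too low, need at least $3)"
    fi
done
```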

1.3 Reboot

sudo reboot

1.4 Repeat on Every Node

Every machine must go through steps 1.1 to 1.3 by hand; simply pushing the files over with scp is not recommended here!

2. Install Elasticsearch

Note: since 2.x, Elasticsearch can only be installed and run under a non-root user.

2.1 Upload the Package

rz -E

(rz does not take the Windows path as an argument; it pops up a file dialog in the local terminal client, where you pick C:/elasticsearch-6.5.2.tar.gz. This needs a ZMODEM-capable client such as Xshell or SecureCRT; plain scp from your workstation works just as well.)

2.2 Extract

tar -zxvf elasticsearch-6.5.2.tar.gz -C ~/apps/

2.3 Rename

cd ~/apps
mv elasticsearch-6.5.2 elasticsearch

2.4 Add User Environment Variables

vi ~/.bash_profile

Add the following:

export ELASTICSEARCH_HOME=/home/hadoop/apps/elasticsearch
export PATH=$PATH:$ELASTICSEARCH_HOME/bin

Reload the profile:

source ~/.bash_profile
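A quick sanity check that the variables are in effect; with the paths above, both commands should point into /home/hadoop/apps/elasticsearch:

```shell
# Confirm the profile took effect; the fallback message means the PATH entry is missing
echo "ELASTICSEARCH_HOME=$ELASTICSEARCH_HOME"
command -v elasticsearch || echo "elasticsearch not on PATH yet"
```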

2.5 Edit elasticsearch.yml

vi $ELASTICSEARCH_HOME/config/elasticsearch.yml

Settings to change (annotations inline):

cluster.name: bde-es                   # cluster name
node.name: hadoop01                    # node name
node.master: true                      # master-eligible node
node.data: true                        # data (worker) node
node.attr.rack: r1                     # custom attribute, carried over from the stock template
path.data: /home/hadoop/data/elastic   # create this directory by hand
path.logs: /home/hadoop/logs/elastic   # create this directory by hand
bootstrap.memory_lock: false           # keep this line above bootstrap.system_call_filter
bootstrap.system_call_filter: false    # keep this line below bootstrap.memory_lock
network.host: 0.0.0.0                  # bind address (all interfaces)
http.port: 9200                        # HTTP port
discovery.zen.ping.unicast.hosts: ["hadoop01", "hadoop02", "hadoop03"]   # cluster hosts

The resulting elasticsearch.yml in full:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: bde-es
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: hadoop01
node.master: true
node.data: true
#
# Add custom attributes to the node:
#
node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /home/hadoop/data/elastic
#
# Path to log files:
#
path.logs: /home/hadoop/logs/elastic
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["hadoop01", "hadoop02", "hadoop03"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

2.6 Send the Configuration to the Other Nodes

scp -r ~/.bash_profile hadoop02:~/.bash_profile
scp -r ~/.bash_profile hadoop03:~/.bash_profile
scp -r /home/hadoop/apps/elasticsearch hadoop02:/home/hadoop/apps/
scp -r /home/hadoop/apps/elasticsearch hadoop03:/home/hadoop/apps/
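The four copies above can be expressed as one loop. A dry-run sketch that prints each command (drop the echo to actually execute; hostnames as in this cluster):

```shell
# Print the scp commands for every remaining node; remove "echo" to run them
for host in hadoop02 hadoop03; do
    echo scp ~/.bash_profile "$host":~/.bash_profile
    echo scp -r /home/hadoop/apps/elasticsearch "$host":/home/hadoop/apps/
done
```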

Then reload the profile on all three machines:

source ~/.bash_profile

2.7 Modify node.name

On the second and third machines, edit node.name in elasticsearch.yml.

On hadoop02, change it to

node.name: hadoop02

On hadoop03, change it to

node.name: hadoop03
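Since the hostnames match the node names here, the per-node edit can also be scripted instead of done by hand. A sketch using sed, guarded so it is a no-op when the file is absent (assumes ELASTICSEARCH_HOME is set as in section 2.4):

```shell
# Rewrite node.name in elasticsearch.yml to the local hostname
CFG="$ELASTICSEARCH_HOME/config/elasticsearch.yml"
if [ -f "$CFG" ]; then
    sed -i "s/^node\.name:.*/node.name: $(hostname)/" "$CFG"
    grep '^node\.name:' "$CFG"
else
    echo "config not found: $CFG"
fi
```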

2.8 Create the Directories

Run this on all three machines:

mkdir -p /home/hadoop/data/elastic && mkdir -p /home/hadoop/logs/elastic

2.9 Start

Note: do not start this as root; Elasticsearch currently refuses to run under the root user.

Start in the background (for a full cluster, run this on all three machines):

elasticsearch -d
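Because -d detaches immediately, it is worth confirming that each daemon actually came up. A quick sketch (the bracketed grep pattern keeps grep from matching itself):

```shell
# Give the JVM a moment, then look for the Elasticsearch process
sleep 5
ps -ef | grep '[o]rg.elasticsearch.bootstrap.Elasticsearch' || echo "elasticsearch is not running"
```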

2.10 Verify

Using curl:

curl -XGET http://localhost:9200

or open http://hadoop01:9200 directly in a browser.

If you see output like the following, the installation and configuration succeeded:

{
  "name" : "hadoop01",
  "cluster_name" : "bde-es",
  "cluster_uuid" : "-Em6aw25S4mIHu0SsN8hug",
  "version" : {
    "number" : "6.5.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "9434bed",
    "build_date" : "2018-11-29T23:58:20.891072Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
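Beyond the single-node banner, two standard REST endpoints confirm that all three nodes actually joined the cluster (the || fallback just keeps the check usable when a node is down):

```shell
# List the nodes that joined the cluster, then the overall health
curl -s 'http://localhost:9200/_cat/nodes?v'           || echo "node not reachable"
curl -s 'http://localhost:9200/_cluster/health?pretty' || echo "node not reachable"
```

With all three machines up, _cat/nodes should list hadoop01 through hadoop03 and the health status should report green.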

Tip: to finish, here is a handy Chrome extension for Elasticsearch. If you can reach Google, install it directly from the Chrome Web Store; if not, download the extension package (extraction code: j5dy) and install it offline. Installing a Chrome extension manually is straightforward; look up the steps online.

Plugin screenshot:
