These are notes on the problems you may hit when installing Elasticsearch 5.x and later.
The CentOS 6.x kernel is too old; you need CentOS 7, or a CentOS 6.x kernel upgraded to 3.5 or above. My system is CentOS 6, so I chose to upgrade the kernel. (If you are on CentOS 7, or your CentOS 6 kernel is already above 3.5, skip straight to 1.2.)
`uname -r` shows the kernel is below 3.5, which does not meet the ES installation requirement.
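As a quick scripted check, the major.minor pair from `uname -r` can be compared against 3.5. The helper below is my own sketch, not part of the original steps:

```shell
# Print "ok" when a kernel version string meets the >= 3.5 requirement.
kernel_ok() {
  ver="$1"                           # e.g. "2.6.32-696.el6.x86_64"
  major=$(echo "$ver" | cut -d. -f1)
  minor=$(echo "$ver" | cut -d. -f2)
  if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 5 ]; }; then
    echo "ok"
  else
    echo "too old"
  fi
}

kernel_ok "$(uname -r)"
```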
To avoid the `curl: (35) SSL connect error` failure, first update the nss package (on CentOS 6, `sudo yum update -y nss` is the usual fix).
Edit the GRUB boot order, changing `default` from the original 1 to 0.
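On CentOS 6 with legacy GRUB, the boot order lives in `/boot/grub/grub.conf` (also reachable as `/etc/grub.conf`); entries are zero-indexed, so `default=0` selects the first `title` entry, which is where a freshly installed kernel usually lands. A sketch, with a hypothetical entry name:

```
# /boot/grub/grub.conf (excerpt)
default=0        # was 1: boot the first title entry, i.e. the new kernel
timeout=5
title CentOS (new 3.x kernel)    # hypothetical title added by the kernel install
        root (hd0,0)
        kernel /vmlinuz-... ro root=...
```

After changing it, reboot and re-check `uname -r`.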
A possible error: `max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]`
Run `sudo vi /etc/security/limits.conf` and append the following at the end of the file:
```
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
```

Another possible error: `max number of threads [1024] for user [bigdata] is too low, increase to at least [4096]`
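These limits only apply to new login sessions; after logging out and back in they can be spot-checked with `ulimit` (the expected values are the ones configured above):

```shell
# Show the soft limits inherited by the current shell.
ulimit -Sn   # open files; should report 65536 after re-login
ulimit -Su   # max user processes; should report 2048 after re-login
```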
Run `sudo vi /etc/security/limits.d/90-nproc.conf` and change the original
`* soft nproc 1024`

to
`* soft nproc 4096`

Another possible error: `max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]`
Run `sudo vi /etc/sysctl.conf` and append at the end:
`vm.max_map_count = 262144`

Every machine must go through steps 1.1-1.3 by hand; do not just scp the edited files across!!!
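To apply the sysctl change without a reboot, run `sudo sysctl -p` and then confirm the live value; the check below (my own sketch) reads it straight from `/proc`:

```shell
# Confirm vm.max_map_count is at least what Elasticsearch needs.
required=262144
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -ge "$required" ]; then
  echo "vm.max_map_count=$current: OK"
else
  echo "vm.max_map_count=$current: too low, run: sudo sysctl -p"
fi
```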
Note: since 2.x, Elasticsearch can only be installed and run under a non-root user.
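Since a root launch only fails later with a Java exception, a small guard at the top of any start script (my own addition, not part of the original setup) fails fast:

```shell
# Refuse to continue when run as root; Elasticsearch 5.x+ will not start as uid 0.
if [ "$(id -u)" -eq 0 ]; then
  echo "refusing to start elasticsearch as root" >&2
else
  echo "running as $(whoami): ok"
fi
```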
Add the following to `~/.bash_profile`:
```
export ELASTICSEARCH_HOME=/home/hadoop/apps/elasticsearch
export PATH=$PATH:$ELASTICSEARCH_HOME/bin
```

Reload the profile:
`source ~/.bash_profile`

Next, configure `elasticsearch.yml`:
```yaml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: bde-es
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: hadoop01
node.master: true
node.data: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /home/hadoop/data/elastic
#
# Path to log files:
#
path.logs: /home/hadoop/logs/elastic
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["hadoop01", "hadoop02", "hadoop03"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
```

Send the following command to all three machines at once:
`source ~/.bash_profile`

Now change `node.name` in `elasticsearch.yml` on the second and third machines.
On hadoop02, change it to
`node.name: hadoop02`

On hadoop03, change it to
`node.name: hadoop03`

Send the following command to all three machines at once:
`mkdir -p /home/hadoop/data/elastic && mkdir -p /home/hadoop/logs/elastic`

Note: do not start it as the root user; Elasticsearch currently does not support being started by root.
Start it in the background (to bring up the cluster, run this on all three machines):
`elasticsearch -d`

Check with curl:
`curl -XGET http://localhost:9200`

or visit http://hadoop01:9200 directly.
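Besides the banner response, the `/_cluster/health` endpoint reports whether all three nodes actually formed a cluster. This tiny parser (function name my own) extracts the `status` field without needing `jq`:

```shell
# Extract the "status" field ("green"/"yellow"/"red") from a
# /_cluster/health JSON body read on stdin.
health_status() {
  sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([a-z]*\)".*/\1/p'
}

# Against a live node:
#   curl -s http://hadoop01:9200/_cluster/health | health_status
# Offline demonstration:
echo '{"cluster_name":"bde-es","status":"green","number_of_nodes":3}' | health_status
# prints: green
```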
If you see output like the following, the installation and configuration succeeded:
```json
{
  "name" : "hadoop01",
  "cluster_name" : "bde-es",
  "cluster_uuid" : "-Em6aw25S4mIHu0SsN8hug",
  "version" : {
    "number" : "6.5.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "9434bed",
    "build_date" : "2018-11-29T23:58:20.891072Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
```

Tip: to wrap up, here is a handy Elasticsearch Chrome extension. If you can reach Google, install it directly from the Chrome Web Store; otherwise, download the plugin (extraction code: j5dy) and install it offline. Installing a Chrome extension is straightforward; look up the steps yourself if needed.
Plugin screenshot:
