elasticsearch + elasticsearch-head + logstash + kibana

it · 2023-03-23

Elasticsearch cluster installation


Environment variable configuration

JDK setup

Extract the archive:

[root@localhost jdk1.8]# tar -zxvf jdk-8u171-linux-x64.tar.gz

Set the Java environment variables:

[root@localhost jdk1.8.0_171]# vi /etc/profile

Append at the end of the file:

export JAVA_HOME=/home/elk1/jdk1.8/jdk1.8.0_171
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

Reload the environment:

[root@localhost jdk1.8.0_171]# source /etc/profile

Verify that it took effect:

[root@localhost jdk1.8.0_171]# java -version
java version "1.8.0_171"
Java(TM) SE Runtime Environment (build 1.8.0_171-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.171-b11, mixed mode)
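Once /etc/profile has been sourced, a small sanity check can catch a mistyped path early. This is just a sketch; the JDK path below is the one used in this guide and may differ on your machine.

```shell
# Sanity check for the JDK setup; the path is the one assumed
# in this guide -- adjust it to your actual install location.
JAVA_HOME=/home/elk1/jdk1.8/jdk1.8.0_171
if [ -x "${JAVA_HOME}/bin/java" ]; then
  "${JAVA_HOME}/bin/java" -version
else
  echo "JAVA_HOME does not point at a JDK: ${JAVA_HOME}"
fi
```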

System parameter configuration

Adjust the parameters in sysctl.conf

[root@localhost bin]# vi /etc/sysctl.conf — add the following:

vm.max_map_count=655360
fs.file-max=655360

Apply the change: [root@localhost bin]# sysctl -p
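To confirm the kernel parameter actually took effect, the live value can be read back; a sketch (reads from /proc, which mirrors sysctl):

```shell
# /proc/sys mirrors sysctl settings, so the live value can be read directly.
current=$(cat /proc/sys/vm/max_map_count)
echo "vm.max_map_count is ${current}"
# Elasticsearch refuses to start below 262144; this guide sets 655360.
if [ "${current}" -ge 262144 ]; then
  echo "sufficient for Elasticsearch"
else
  echo "too low -- re-run sysctl -p"
fi
```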

Adjust limits.conf

The default per-process limit on simultaneously open files is too low; check the current values with these two commands:

ulimit -Hn
ulimit -Sn

[root@localhost bin]# vi /etc/security/limits.conf — modify or add the following:

* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
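limits.conf only affects new login sessions, so after logging in again (e.g. as esuser) the values can be re-checked. A quick sketch:

```shell
# These limits apply per login session; re-login before checking.
echo "soft nofile: $(ulimit -Sn)"
echo "hard nofile: $(ulimit -Hn)"
echo "soft nproc:  $(ulimit -Su)"
echo "hard nproc:  $(ulimit -Hu)"
```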

Create esuser (Elasticsearch cannot be started as root)

1. Add the user: useradd esuser
2. Set its password: passwd esuser
3. Give esuser ownership of the extracted ES directory: chown -R esuser:esuser /opt/app/elasticsearch-6.8.1

Elasticsearch configuration

Edit the elasticsearch.yml file

vim /opt/app/elasticsearch-6.8.1/config/elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: tyhy
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node03
#node.master: true
#node.data: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /opt/app/elasticsearch-6.8.1/data
#
# Path to log files:
#
path.logs: /opt/app/elasticsearch-6.8.1/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.0.233
#
# Set a custom port for HTTP:
#
http.port: 9200
transport.tcp.port: 9300
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["192.168.0.231", "192.168.0.232","192.168.0.233"]
discovery.zen.ping.unicast.hosts: ["node03"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

Start the ES cluster

1. Switch to the esuser account: su - esuser
2. Start in daemon mode: /opt/app/elasticsearch-6.8.1/bin/elasticsearch -d
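Once the daemons are up, the cluster health API is a quick way to confirm the nodes found each other. A sketch, using the node address from this guide (adjust the host for your setup):

```shell
# Query cluster health; a formed cluster reports green or yellow status.
ES=http://192.168.0.233:9200
echo "checking ${ES}"
curl -s --connect-timeout 3 "${ES}/_cluster/health?pretty" \
  || echo "cluster not reachable at ${ES}"
```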

Install the elasticsearch-head plugin

Download and extract elasticsearch-head

Download directly: wget https://github.com/mobz/elasticsearch-head/archive/master.zip

or get it from GitHub: https://github.com/mobz/elasticsearch-head

unzip elasticsearch-head-master.zip    # extract the zip archive

mv elasticsearch-head-master /home/ntc/code/elasticsearch-head    # move the extracted directory to a custom location and rename it elasticsearch-head

Install Node.js

The head plugin is essentially a Node.js project, so Node.js must be installed and npm is used to pull in its dependencies (think of npm as Node's Maven).

wget https://npm.taobao.org/mirrors/node/latest-v4.x/node-v4.4.7-linux-x64.tar.gz
tar -zxvf node-v4.4.7-linux-x64.tar.gz

Edit /etc/profile and add:

export NODE_HOME=/export/servers/node-v4.4.7-linux-x64
export PATH=$NODE_HOME/bin:$PATH

Then run:

source /etc/profile

Check that node is available: node -v

Install grunt

grunt is a handy build tool for packaging, minification, testing, and running tasks; since ES 5.0 the head plugin is started through grunt, so it needs to be installed: 1. cd into the elasticsearch-head directory 2. install the dependencies:

npm install -g grunt-cli    # installs the grunt command-line tool
npm install                 # creates the node_modules directory with head's dependencies

Modify the head source

1. Add a hostname property and set it to *

vi /home/ntc/code/elasticsearch-head/Gruntfile.js

connect: {
    server: {
        options: {
            port: 9100,
            hostname: '*',
            base: '.',
            keepalive: true
        }
    }
}

2. Change the service connection address

Edit the file: vi /home/ntc/code/elasticsearch-head/_site/app.js

Change head's connection address:

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://localhost:9200";

Replace localhost with your ES server's address, e.g.:

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.40.133:9200";

3. Update the Elasticsearch config file

vim /opt/app/elasticsearch-6.8.1/config/elasticsearch.yml — add:

http.cors.enabled: true
http.cors.allow-origin: "*"

4. Run head by starting Node.js

Create run.sh with the following content:

nohup grunt server &

5. Open es-head

http://192.168.0.233:9100/

Installing Logstash

Logstash configuration

nginx_to_es

# Sample Logstash configuration for creating a simple
# file -> Logstash -> Elasticsearch pipeline.
input {
  file {
    path => "/opt/logs/tyhy-pay-dev.log"
    start_position => "end"
    type => "tyhy_pay_dev_log"
    codec => "json"
    tags => ['tyhy_pay']
  }
}
filter {
  if [type] == "tyhy_pay_dev_log" {
    mutate {
      convert => ["status","integer"]
      # if @timestamp were removed, %{+YYYY.MM.dd} in the index name could not be resolved
      remove_field => ["@version","tags"]
    }
    date {
      target => "runtime"
      locale => en
      match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
    }
    geoip {
      source => "ip"
    }
  }
}
output {
  if [type] == "tyhy_pay_dev_log" {
    elasticsearch {
      hosts => ["http://192.168.0.233:9200","http://192.168.0.232:9200","http://192.168.0.231:9200"]
      index => "nginx_%{type}_%{+YYYY.MM.dd}"
      document_type => "%{type}"
    }
    #user => "elastic"
    #password => "changeme"
  }
  #else {
  #  exec {
  #    command => "/usr/bin/echo '%{message}'"
  #  }
  #}
  stdout { codec => rubydebug }
  # optionally save events into redis
  # redis {
  #   host => "192.168.0.233"
  #   port => "6379"
  #   key => "nginx_tyhy_pay_dev"
  #   data_type => "list"
  # }
}
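Before starting Logstash with a pipeline like the one above, the file can be syntax-checked with --config.test_and_exit. A sketch; the install directory is the one from this guide and the config path is an assumption based on the startup command used later:

```shell
# Syntax-check a pipeline file without actually starting the pipeline.
LS=/opt/app/logstash-6.8.1                  # install dir used in this guide
CONF=${LS}/config/myconfig/nginx_to_es      # assumed location of the file above
if [ -x "${LS}/bin/logstash" ]; then
  "${LS}/bin/logstash" -f "${CONF}" --config.test_and_exit
else
  echo "logstash not found at ${LS}"
fi
```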

mysql_to_es

# Sample Logstash configuration for syncing a MySQL table into
# Elasticsearch via the jdbc input plugin.
input {
  jdbc {
    jdbc_driver_library => "/opt/app/logstash-6.8.1/lib/thirdlib/mysql-connector-java-5.1.47.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://192.168.0.233:8066/orderCenter"
    jdbc_user => "root"
    jdbc_password => "root"
    schedule => "* * * * *"
    statement => "SELECT * FROM pay_order WHERE payment_time > :sql_last_value"
    use_column_value => true
    tracking_column_type => "timestamp"
    tracking_column => "payment_time"
    last_run_metadata_path => "syncpoint_product_table"
    type => "mysql_pay_log"
  }
}
filter {
  if [type] == "mysql_pay_log" {
    # fields to match
    # grok {
    #   match => {"message"=>"%{DATA:raw_datetime}"}
    # }
    # fields to drop
    mutate {
      remove_field => ["@version","@timestamp"]
    }
  }
}
output {
  if [type] == "mysql_pay_log" {
    elasticsearch {
      hosts => ["http://192.168.0.233:9200","http://192.168.0.232:9200","http://192.168.0.231:9200"]
      index => "pay_order"
      document_id => "%{id}"
      document_type => "pay_order_type"
    }
  }
  # stdout {
  #   codec => json_lines
  # }
}
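After the schedule has fired at least once, a document count against the target index gives a quick spot-check that rows are flowing. A sketch using the node address from this guide:

```shell
# Count documents in the synced index (pay_order) via the _count API.
ES=http://192.168.0.233:9200
echo "checking ${ES}/pay_order/_count"
curl -s --connect-timeout 3 "${ES}/pay_order/_count?pretty" \
  || echo "es not reachable at ${ES}"
```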

filebeat_to_es

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    add_field => {"type" => "filebeat"}
    port => 5044
  }
}
output {
  if [type] == "filebeat" {
    elasticsearch {
      hosts => ["http://192.168.0.231:9200","http://192.168.0.232:9200","http://192.168.0.233:9200"]
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      #user => "elastic"
      #password => "changeme"
    }
  }
}

Filebeat configuration (nginx -> filebeat -> logstash)

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/order-service-dev.log

# ship the collected logs to logstash
output.logstash:
  hosts: ["192.168.0.233:5044"]

# or write straight to es
#output.elasticsearch:
#  hosts: ["192.168.1.42:9200"]

# kibana endpoint
#setup.kibana:
#  host: "localhost:5601"

# es/kibana with security enabled
#output.elasticsearch:
#  hosts: ["myEShost:9200"]
#  username: "filebeat_internal"
#  password: "{pwd}"
#setup.kibana:
#  host: "mykibanahost:5601"
#  username: "my_kibana_user"
#  password: "{pwd}"
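Filebeat can validate this file itself before a real run ("test config" checks syntax, "test output" probes the Logstash endpoint). A sketch; the install directory below is an assumption, adjust it to wherever the filebeat binary actually lives:

```shell
# Validate the filebeat config and its output endpoint before starting it.
FB_HOME=/usr/local/filebeat                      # hypothetical install dir
CFG=${FB_HOME}/myconfig/nginx_log_filebeat.yml   # config file from this guide
if [ -x "${FB_HOME}/filebeat" ]; then
  "${FB_HOME}/filebeat" test config -c "${CFG}"
  "${FB_HOME}/filebeat" test output -c "${CFG}"
else
  echo "filebeat not found at ${FB_HOME}"
fi
```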

Start Logstash

With a single config file

nohup ./bin/logstash -f ./config/myconfig/nginx_to_es &

With multiple config files

nohup ./bin/logstash -f ./config/myconfig/ &

Note: myconfig is a directory you create that holds several config files. Each event stream must be tagged with a distinct type; otherwise events from different sources will all pass through the same filter/output blocks.

Start Filebeat

With filebeat configured as above, start it:

nohup ./filebeat -e -c ./myconfig/nginx_log_filebeat.yml &

Install Kibana

Configure Kibana

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.0.233"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://192.168.0.231:9200","http://192.168.0.232:9200","http://192.168.0.233:9200"]

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
#i18n.locale: "en"

Start Kibana

nohup ./kibana &
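To confirm Kibana came up before opening a browser, its status API can be probed. A sketch, using the address configured above:

```shell
# Probe Kibana's status endpoint; a healthy instance returns JSON.
KIBANA=http://192.168.0.233:5601
echo "checking ${KIBANA}"
curl -s --connect-timeout 3 "${KIBANA}/api/status" \
  || echo "kibana not reachable at ${KIBANA}"
```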

Open the Kibana URL

http://192.168.0.233:5601/
