Deploying an Elasticsearch Cluster + Kibana + Logstash with Docker Compose
Installing Elasticsearch
1. Adjust host kernel parameters
sysctl -w vm.max_map_count=262144
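The change made by sysctl -w is lost on reboot. To persist it, the same setting can be written to a sysctl configuration file (the path below is an assumption for distros that read /etc/sysctl.d/; apply it with sysctl --system):

```
# /etc/sysctl.d/99-elasticsearch.conf  (assumed path)
vm.max_map_count=262144
```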
2. Create directories and set permissions
# create the elasticsearch base directory and work from it
mkdir /home/elasticsearch
cd /home/elasticsearch
# create the mount directories for each node; envsubst expects the
# elasticsearch.yml template (shown below) in the current directory
for port in `seq 1 3`; do \
mkdir -p elastic0${port}/data; \
mkdir -p elastic0${port}/conf \
&& envsubst < elasticsearch.yml > elastic0${port}/conf/elasticsearch.yml; \
mkdir -p elastic0${port}/logs; \
done
mkdir -p cert
# grant the elastic user ownership of the elasticsearch directory
# (the container runs Elasticsearch as UID 1000, so the host's
# elastic user should map to that UID)
chown -R elastic:elastic /home/elasticsearch/
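The directory loop can be sanity-checked in a throwaway location first. A minimal sketch (mktemp keeps it out of /home, and the envsubst step is skipped since it needs the template file):

```shell
# Recreate the per-node directory layout under a temporary base directory
base=$(mktemp -d)
for port in $(seq 1 3); do
  mkdir -p "$base/elastic0${port}/data" \
           "$base/elastic0${port}/conf" \
           "$base/elastic0${port}/logs"
done
mkdir -p "$base/cert"
# Show the resulting tree
find "$base" -mindepth 1 -type d | sort
```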
3. Write the docker-compose file
version: "3"
networks:
  es:
    driver: bridge
services:
  elastic01:
    image: elasticsearch:7.17.2
    container_name: elastic01
    restart: always
    volumes:
      - /home/elasticsearch/elastic01/data:/usr/share/elasticsearch/data
      - /home/elasticsearch/elastic01/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /home/elasticsearch/elastic01/logs:/usr/share/elasticsearch/logs
      - /home/elasticsearch/cert/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9201:9200"
      - "9301:9300"
    environment: # environment variables, the equivalent of -e in docker run
      - node.name=elastic01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=elastic02,elastic03
      - cluster.initial_master_nodes=elastic01,elastic02,elastic03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - es
  elastic02:
    image: elasticsearch:7.17.2
    container_name: elastic02
    restart: always
    volumes:
      - /home/elasticsearch/elastic02/data:/usr/share/elasticsearch/data
      - /home/elasticsearch/elastic02/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /home/elasticsearch/elastic02/logs:/usr/share/elasticsearch/logs
      - /home/elasticsearch/cert/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9202:9200"
      - "9302:9300"
    environment: # environment variables, the equivalent of -e in docker run
      - node.name=elastic02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=elastic01,elastic03
      - cluster.initial_master_nodes=elastic01,elastic02,elastic03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - es
  elastic03:
    image: elasticsearch:7.17.2
    container_name: elastic03
    restart: always
    volumes:
      - /home/elasticsearch/elastic03/data:/usr/share/elasticsearch/data
      - /home/elasticsearch/elastic03/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /home/elasticsearch/elastic03/logs:/usr/share/elasticsearch/logs
      - /home/elasticsearch/cert/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9203:9200"
      - "9303:9300"
    environment: # environment variables, the equivalent of -e in docker run
      - node.name=elastic03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=elastic01,elastic02
      - cluster.initial_master_nodes=elastic01,elastic02,elastic03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - es
4. Configure elasticsearch.yml
network.host: 0.0.0.0
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/elastic-certificates.p12
5. Generate certificates
Start a temporary container, generate the certificates inside it, then copy them out for the whole cluster to use.
Start the container:
docker run -dit --name=tmpes elasticsearch:7.17.2 /bin/bash
Enter the container:
docker exec -it tmpes bash
Generate elastic-stack-ca.p12:
./bin/elasticsearch-certutil ca
Then generate elastic-certificates.p12:
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
Press Enter through the prompts of both commands; the keys do not need an extra password.
Once the certificates are generated, exit the container and copy elastic-certificates.p12 to the host. The temporary tmpes container can then be disposed of however you like.
docker cp tmpes:/usr/share/elasticsearch/elastic-certificates.p12 /home/elasticsearch/cert
The preparation is now complete.
6. Start the ES cluster
docker-compose up -d
Then open another SSH session and set the cluster passwords:
docker exec -it elastic01 /bin/bash
Elasticsearch ships with several built-in accounts for managing its companion components: apm_system, beats_system, elastic, kibana, logstash_system, and remote_monitoring_user. Before they can be used, their passwords must be set with one of two modes:
- interactive: set each user's password one by one.
- auto: generate passwords automatically.
root@fd7635ffd8c9:/usr/share/elasticsearch/bin# elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
root@fd7635ffd8c9:/usr/share/elasticsearch/bin#
Once the passwords are configured, the ES service can be reached at:
http://10.18.63.109:9202/_cluster/health?pretty
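With X-Pack security enabled the endpoint requires credentials (e.g. curl -u elastic:<password> against the URL above). A healthy three-node cluster reports status green; a minimal sketch of pulling the status field out of a response with sed (the JSON below is a sample, not live output):

```shell
# Sample _cluster/health response body (abridged)
resp='{"cluster_name":"es-docker-cluster","status":"green","number_of_nodes":3}'
# Extract the status field without needing jq
status=$(echo "$resp" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "$status"
```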
7. Troubleshooting
Problem 1
The following error means the data mount directories on the host lack write permissions:
Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes
[root@uids elasticsearch]# chmod 777 /home/elasticsearch/elastic01/data/
[root@uids elasticsearch]# chmod 777 /home/elasticsearch/elastic02/data/
[root@uids elasticsearch]# chmod 777 /home/elasticsearch/elastic03/data/
(A tighter fix than 777 is chown -R 1000:1000 on the same directories, since the container runs Elasticsearch as UID 1000.)
Installing Kibana
docker-compose (add the following service to the compose file above):
  kibana:
    image: kibana:7.17.2
    container_name: kibana-uids
    environment:
      SERVER_NAME: kibana
      ELASTICSEARCH_URL: http://elastic01:9200 # use the in-network service name and port here
      XPACK_MONITORING_ENABLED: "true"
    ports:
      - "5601:5601"
    depends_on:
      - elastic01
    volumes:
      - /home/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
    networks:
      - es
kibana.yml
server.port: 5601
server.name: "kibana-uids"
server.host: "0"
server.publicBaseUrl: "http://10.18.63.109:5601/"
elasticsearch.username: "elastic"
elasticsearch.password: "uids#@876"
elasticsearch.hosts: ["http://elastic01:9200"]
monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: "zh-CN"
Start it:
docker-compose up -d
Installing Logstash
1. docker-compose (add the following to the services section):
  logstash:
    image: logstash:7.17.2
    container_name: logstash-uids
    restart: always
    ports:
      - "4560:4560"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx1g -Xms1g"
    depends_on:
      - elastic01
    volumes:
      - /home/logstash/pipeline:/usr/share/logstash/pipeline
      - /home/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml
    networks:
      - es
When installing Logstash, files created inside the container may not be synced back to the mounted host directories, so the following files may need to be created by hand:
2. Required files
jvm.options
## JVM configuration
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms1g
-Xmx1g
################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################
## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
## Locale
# Set the locale language
#-Duser.language=en
# Set the locale country
#-Duser.country=US
# Set the locale variant, if any
#-Duser.variant=
## basic
# set the I/O temp directory
#-Djava.io.tmpdir=$HOME
# set to headless, just in case
-Djava.awt.headless=true
# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8
# use our provided JNA always versus the system one
#-Djna.nosys=true
# Turn on JRuby invokedynamic
-Djruby.compile.invokedynamic=true
# Force Compilation
-Djruby.jit.threshold=0
# Make sure joni regexp interruptability is enabled
-Djruby.regexp.interruptible=true
## heap dumps
# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError
# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=${LOGSTASH_HOME}/heapdump.hprof
## GC logging
#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime
# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${LS_GC_LOG_FILE}
# Entropy source for randomness
-Djava.security.egd=file:/dev/urandom
# Copy the logging context from parent threads to children
-Dlog4j2.isThreadContextMapInheritable=true
logstash.yml
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: uids#@876
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elastic01:9200" ] # use the in-network service name and port here
pipelines.yml
- pipeline.id: micro-service-log
  path.config: "./pipeline/uids-service-es.conf" # pipeline config path; each pipeline pairs one id with one config file
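The referenced uids-service-es.conf is not shown in this article; below is a hypothetical minimal pipeline matching the compose file above. The TCP port matches the published 4560, while the codec and index name are assumptions to adjust for your services:

```
input {
  tcp {
    port  => 4560
    codec => json_lines   # assumes services ship JSON lines, e.g. via logstash-logback-encoder
  }
}
output {
  elasticsearch {
    hosts    => ["http://elastic01:9200"]
    user     => "elastic"
    password => "uids#@876"
    index    => "uids-service-%{+YYYY.MM.dd}"   # assumed index pattern
  }
}
```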
Installing Metricbeat
Stack Monitoring in Kibana 7.17.2 requires Metricbeat.
Pull the metricbeat image:
docker pull docker.elastic.co/beats/metricbeat:7.17.2
Create a metricbeat directory:
mkdir metricbeat
Configure metricbeat.yml:
metricbeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    # Reload module configs as they change:
    reload.enabled: false

metricbeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

metricbeat.modules:
  - module: docker
    metricsets:
      - "container"
      - "cpu"
      - "diskio"
      - "healthcheck"
      - "info"
      #- "image"
      - "memory"
      - "network"
    hosts: ["unix:///var/run/docker.sock"]
    period: 10s
    enabled: true

processors:
  - add_cloud_metadata: ~

output.elasticsearch:
  hosts: ["elastic01:9200"]
  username: 'elastic'
  password: 'uids#@876'
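If Metricbeat should also load its bundled Kibana dashboards, metricbeat.yml can additionally point at Kibana (the host below assumes the kibana-uids container is reachable on the same Docker network):

```yaml
# Optional: lets `metricbeat setup` load the bundled dashboards into Kibana
setup.kibana:
  host: "kibana-uids:5601"
```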
Start the container:
docker run -d \
  --name=metricbeat \
  --user=root \
  --volume="$(pwd)/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  --volume="/sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro" \
  --volume="/proc:/hostfs/proc:ro" \
  --volume="/:/hostfs:ro" \
  --network=elasticsearch_es \
  docker.elastic.co/beats/metricbeat:7.17.2
Configure Metricbeat to integrate with Kibana and Elasticsearch
Enter the container:
docker exec -it metricbeat bash
Enable the system and docker monitoring modules:
metricbeat modules enable system
metricbeat modules enable docker