Installing ELK (Elasticsearch + Logstash + Kibana) with Docker, tested and working

    Tech  2022-07-13

    I. What is ELK

    ELK is a complete solution for collecting and visualizing logs. The name is an acronym of three products: ElasticSearch, Logstash, and Kibana.

    ElasticSearch: abbreviated ES, a real-time distributed search and analytics engine built on top of Lucene and written in Java. It supports full-text search, structured search, and analytics.
    Logstash: a data collection engine with real-time pipelining, used to gather data and forward it to ES.
    Kibana: a web platform for analyzing and visualizing Elasticsearch data; among other things, it lets you query the data stored in Elasticsearch indexes.

    II. Setting up ELK with Docker on Linux

    1. Download docker-compose:

    sudo curl -L "https://github.com/docker/compose/releases/download/1.26.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

    The download may be a bit slow; if it fails, retry on a faster network connection. Once it finishes, make docker-compose executable:

    sudo chmod +x /usr/local/bin/docker-compose

    2. Pull the ElasticSearch image:

    docker pull elasticsearch:7.8.0

    3. Pull the Logstash image:

    docker pull logstash:7.8.0

    4. Pull the Kibana image:

    docker pull kibana:7.8.0

    5. Create an elk directory under /opt

    [root@localhost /]# cd /opt/
    [root@localhost opt]# mkdir elk

    6. Create the kibana, logstach, and elasticsearch directories

    [root@localhost elk]# mkdir kibana
    [root@localhost elk]# mkdir logstach
    [root@localhost elk]# mkdir elasticsearch

    7. Enter the kibana directory and create kibana.yml

    [root@localhost kibana]# touch kibana.yml
    [root@localhost kibana]# vim kibana.yml

    7.1 kibana.yml contents:

    ## Default Kibana configuration from Kibana base image.
    ## https://github.com/elastic/kibana/blob/master/src/dev/build/tasks/os_packages/docker_generator/templates/kibana_yml.template.js
    server.name: kibana
    server.host: 0.0.0.0
    ## Localize Kibana to Chinese. This line must not be placed at the bottom;
    ## for reasons I don't fully understand, it has no effect there.
    i18n.locale: "zh-CN"
    ## Replace "elasticsearch" with your host's IP if you are not using Docker DNS
    elasticsearch.hosts: [ "http://elasticsearch:9200" ]
    monitoring.ui.container.elasticsearch.enabled: true
    ## X-Pack security credentials
    elasticsearch.username: elastic
    elasticsearch.password: changeme

    8. Enter the logstach directory and create logstash.yml

    [root@localhost logstach]# touch logstash.yml
    [root@localhost logstach]# vim logstash.yml

    8.1 logstash.yml contents:

    ## Default Logstash configuration from Logstash base image.
    ## https://github.com/elastic/logstash/blob/master/docker/data/logstash/config/logstash-full.yml
    http.host: "0.0.0.0"
    ## Replace "elasticsearch" with your host's IP if needed
    xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
    ## X-Pack security credentials
    xpack.monitoring.enabled: true
    xpack.monitoring.elasticsearch.username: elastic
    xpack.monitoring.elasticsearch.password: changeme

    8.2 Create the pipeline directory

    ## Create a pipeline directory
    [root@localhost logstach]# mkdir pipeline
    ## Enter pipeline
    [root@localhost logstach]# cd pipeline
    ## Create a logstash.conf file
    [root@localhost pipeline]# touch logstash.conf

    logstash.conf contents:

    The user and password settings below are only needed because X-Pack security is enabled in this guide; if you are not using security, delete them. As before, replace "elasticsearch" with your host's IP if needed.

    input {
      tcp {
        port => 5000
      }
    }
    ## Add your filters / logstash plugins configuration here
    output {
      elasticsearch {
        hosts => "elasticsearch:9200"
        user => "elastic"
        password => "changeme"
      }
    }

    9. Likewise, enter the elasticsearch directory and create elasticsearch.yml

    ---
    ## Default Elasticsearch configuration from Elasticsearch base image.
    ## https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/config/elasticsearch.yml
    cluster.name: "docker-cluster"
    network.host: 0.0.0.0
    ## X-Pack settings
    ## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
    xpack.license.self_generated.type: trial
    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true

    10. In the /opt/elk directory, create a docker-compose.yml file
    10.1 docker-compose.yml contents:

    version: "2.2"
    services:
      ## Elasticsearch
      es:
        image: elasticsearch:7.8.0
        container_name: elasticsearch
        ports:
          - "9200:9200"
          - "9300:9300"
        environment:
          discovery.type: single-node
          ## The Elasticsearch password
          ELASTIC_PASSWORD: changeme
          ES_JAVA_OPTS: "-Xmx256m -Xms256m"
        volumes:
          ## Note: to map the Elasticsearch data out of the container, the local
          ## /home/elasticsearch directory must have 777 permissions
          - /home/elasticsearch/:/usr/share/elasticsearch/data
          - /opt/elk/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
        network_mode: host
      ## Logstash
      ls:
        image: logstash:7.8.0
        container_name: logstash
        ports:
          - "5000:5000/tcp"
          - "5000:5000/udp"
          - "9600:9600"
        network_mode: host
        environment:
          discovery.type: single-node
          ES_JAVA_OPTS: "-Xmx256m -Xms256m"
        volumes:
          ## Map the local /opt/elk/logstach/pipeline directory into the container
          - /opt/elk/logstach/pipeline:/usr/share/logstash/pipeline
          - /opt/elk/logstach/logstash.yml:/usr/share/logstash/config/logstash.yml
        depends_on:
          - es
      ## Kibana
      kb:
        image: kibana:7.8.0
        container_name: kibana
        ports:
          - "5601:5601"
        volumes:
          ## Map the local /opt/elk/kibana/kibana.yml file into the container
          - /opt/elk/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
        network_mode: host
        depends_on:
          - es

    discovery.type: single-node means Elasticsearch starts as a standalone node.
    ES_JAVA_OPTS: limits how much host memory (JVM heap) the container uses.
    network_mode: sets the Docker container's network mode.
    depends_on: this service starts only after the listed services have started.

    11. Start the stack:

    docker-compose up
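Once the containers are up, you can confirm that Elasticsearch is reachable and that the elastic/changeme credentials from the compose file above work. A minimal stdlib-only sketch; the host and port are assumed to match the setup in this guide:

```python
import base64
import json
import urllib.request

def basic_auth_header(user, password):
    """Build an HTTP Basic auth header value for the given credentials."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def check_es(host="localhost", port=9200, user="elastic", password="changeme"):
    """Query the Elasticsearch root endpoint and return the parsed JSON info."""
    req = urllib.request.Request(f"http://{host}:{port}/")
    req.add_header("Authorization", basic_auth_header(user, password))
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)

if __name__ == "__main__":
    try:
        info = check_es()
        # For the images pulled above this should report version 7.8.0
        print("Elasticsearch version:", info["version"]["number"])
    except OSError as exc:
        print("Elasticsearch is not reachable:", exc)
```

A 401 response here means the stack is up but the password does not match ELASTIC_PASSWORD in docker-compose.yml.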

    III. Integrating Logstash with Spring Boot

    3.1 Add the following dependency to the pom file so that Spring Boot's logging integrates with Logstash

    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>6.4</version>
    </dependency>

    3.2 Add the logback/Logstash integration file under resources
    3.3 logback-spring.xml contents:

    <?xml version="1.0" encoding="UTF-8" ?>
    <!DOCTYPE configuration>
    <configuration>
        <!-- Address that Logstash log data is sent to -->
        <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
            <destination>localhost:5000</destination>
            <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
        </appender>
        <!-- Spring Boot's default logging -->
        <include resource="org/springframework/boot/logging/logback/base.xml" />
        <!-- Log level -->
        <root level="INFO">
            <appender-ref ref="LOGSTASH" />
            <appender-ref ref="CONSOLE" />
        </root>
    </configuration>

    3.4 application.properties contents:

    logging.config=classpath:logback-spring.xml

    3.5 Start the project and open Kibana
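To verify the pipeline end to end without starting the Spring Boot project, you can push a hand-built test event into the Logstash tcp input on port 5000. A minimal sketch; the JSON field names below only approximate what LogstashEncoder actually emits:

```python
import json
import socket
from datetime import datetime, timezone

def make_event(message, level="INFO", logger="test.Logger"):
    """Build one JSON log line roughly in the shape LogstashEncoder emits.
    The real encoder's field set differs; these names are illustrative."""
    event = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "@version": "1",
        "message": message,
        "logger_name": logger,
        "level": level,
    }
    # The tcp input splits the stream into events on newlines
    return json.dumps(event) + "\n"

def send_event(line, host="localhost", port=5000):
    """Ship one event to the Logstash tcp input defined in logstash.conf."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(line.encode("utf-8"))

if __name__ == "__main__":
    try:
        send_event(make_event("hello from python"))
        print("event sent; check the logstash-* index in Kibana")
    except OSError as exc:
        print("Logstash is not reachable:", exc)
```

If the event arrives, it should show up in Kibana once you create an index pattern for the Logstash indexes.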

    IV. Customizing how Logstash collects log data

    4.1 Modify logback-spring.xml

    <?xml version="1.0" encoding="UTF-8" ?>
    <!DOCTYPE configuration>
    <configuration>
        <!-- Address that Logstash log data is sent to -->
        <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
            <destination>localhost:5000</destination>
            <!-- Custom encoder instead of LogstashEncoder -->
            <encoder>
                <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
                <charset>UTF-8</charset>
            </encoder>
        </appender>
        <!-- Spring Boot's default logging -->
        <include resource="org/springframework/boot/logging/logback/base.xml" />
        <!-- Log level -->
        <root level="INFO">
            <appender-ref ref="LOGSTASH" />
            <appender-ref ref="CONSOLE" />
        </root>
    </configuration>

    4.2 Modify logstash.conf

    input {
      tcp {
        port => 5000
      }
    }
    filter {
      grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:logTime} %{GREEDYDATA:logThread} %{LOGLEVEL:logLevel} %{GREEDYDATA:loggerClass} - %{GREEDYDATA:logContent}" }
      }
      ## Drop everything below ERROR. Note the conditionals must reference the
      ## captured field name (logLevel), not the grok pattern name (LOGLEVEL).
      if [logLevel] == "DEBUG" { drop {} }
      if [logLevel] == "INFO" { drop {} }
      if [logLevel] == "WARN" { drop {} }
    }
    ## Add your filters / logstash plugins configuration here
    output {
      elasticsearch {
        hosts => "localhost:9200"
      }
    }

    For the patterns used after message =>, see the grok patterns reference.
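Before deploying the grok expression, it can help to sanity-check it against a sample log line produced by the logback pattern above. A rough Python equivalent; the sub-patterns below are simplified approximations of grok's TIMESTAMP_ISO8601, LOGLEVEL, and GREEDYDATA, not the exact grok definitions:

```python
import re

# Approximate Python translation of the grok expression, for offline testing
PATTERN = re.compile(
    r"(?P<logTime>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) "
    r"(?P<logThread>\S+) "
    r"(?P<logLevel>TRACE|DEBUG|INFO|WARN|ERROR|FATAL)\s+"
    r"(?P<loggerClass>\S+) - "
    r"(?P<logContent>.*)"
)

# A line in the shape the logback pattern "%d ... [%thread] %-5level %logger{50} - %msg" emits
sample = "2022-07-13 10:15:30.123 [main] INFO  com.example.DemoApplication - started"

match = PATTERN.match(sample)
assert match is not None, "pattern did not match the sample line"
fields = match.groupdict()
print(fields)
```

If the regex matches your real log lines and extracts the fields you expect, the grok expression is likely to behave the same way (keeping in mind grok's own pattern definitions are broader).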
