
Using ELK for logging in a Spring project

Continued from previous post:
Java Logging

1 Introduction to ELK

The ELK stack (Elasticsearch, Logstash, Kibana) is Elastic's open-source log management toolset. It lets us search logs efficiently, which is extremely useful when we need to debug or troubleshoot application problems.
  • Logstash: collects log data from different sources, then forwards it through an API to various destinations (Elasticsearch in our case).
  • Elasticsearch: stores the log data and indexes it automatically, which gives it powerful search capability (as fast as a Google search). It exposes APIs for storing, managing, and searching the data.
  • Kibana: serves a web UI that lets us search the logs with different criteria (time range, keywords, metadata) and visualize the results with charts. It fetches the log data through the Elasticsearch API and renders the results on the page.
So the flow of a log entry is: application to Logstash, to Elasticsearch, to Kibana, where it finally shows up on a web page.

1.1 ELK version

According to endoflife.date, we should stick to version 8, which still has active support.

2 Hands-on

We will be using Spring Boot, Lombok, SLF4J, and Logback.

2.1 Maven dependencies

We use Lombok's @Slf4j annotation to generate the code that constructs the logger object.
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>3.2.3</version>
</parent>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
    </dependency>

    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>7.2</version>
    </dependency>
</dependencies>
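As a quick reference, annotating a class with @Slf4j is equivalent to declaring the SLF4J logger by hand:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogGenerator {

    // This is the field that Lombok's @Slf4j generates at compile time
    private static final Logger log = LoggerFactory.getLogger(LogGenerator.class);
}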

2.2 Application 配置

We can have two Spring Boot profiles:
application.yml (the default profile, used when no profile is specified):
spring.application.name: mick-app
logging:
  config: classpath:logback/logback-prod.xml
application-dev.yml (the dev profile, used only when specified):
spring.application.name: mick-app
logging:
  config: classpath:logback/logback-dev.xml
Note: to activate the dev profile, start the application with the VM argument -Dspring.profiles.active=dev
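Equivalently, Spring Boot also accepts the profile as a program argument (the jar name below is illustrative, not from the original setup):
java -jar mick-app.jar --spring.profiles.active=dev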

2.3 Logback configuration

Create two configuration files in our Maven project:
  • src/main/resources
    • /logback
      • logback-dev.xml
      • logback-prod.xml
logback-dev.xml (console + file):
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <springProperty scope="context" name="app_name" source="spring.application.name" />

    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>[${app_name:-}] %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} (%file:%line\) - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <includeContext>false</includeContext>
            <customFields>{ "host": "${hostname:-}", "app_name": "${app_name:-}" }</customFields>
        </encoder>

        <file>logs/app.log</file>

        <!-- Roll over daily and keep 30 days of archives -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>logs/archived/app.log.%d{yyyy-MM-dd}</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
    </appender>

    <root level="INFO">
        <appender-ref ref="console" />
        <appender-ref ref="file" />
    </root>
</configuration>
logback-prod.xml (console + TCP socket):
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <springProperty scope="context" name="app_name" source="spring.application.name" />

    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>[${hostname:-} ${app_name:-}] %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} (%file:%line\) - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <includeContext>false</includeContext>
            <customFields>{ "host": "${hostname:-}", "app_name": "${app_name:-}" }</customFields>
        </encoder>

        <!-- The TCP input of the Logstash container we deploy in section 3 -->
        <destination>127.0.0.1:5000</destination>
        <keepAliveDuration>5 minutes</keepAliveDuration>
        <reconnectionDelay>10 second</reconnectionDelay>
    </appender>

    <root level="INFO">
        <appender-ref ref="console" />
        <appender-ref ref="stash" />
    </root>
</configuration>
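With LogstashEncoder, each log event becomes a single JSON line. Based on the encoder's default field set plus our customFields, an event should look roughly like this (the timestamp, host, and values are made up for illustration):

{"@timestamp":"2024-03-20T10:15:30.123+08:00","@version":"1","message":"Michael says 1710900930123","logger_name":"code.LogGenerator","thread_name":"scheduling-1","level":"INFO","level_value":20000,"host":"my-laptop","app_name":"mick-app"}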

2.4 Writing the Java code

Project structure:
  • src/main/java
    • /code
      • LogGenerator.java
      • MainApplication.java

2.4.1 A component that generates logs on a schedule

LogGenerator.java
package code;

import lombok.extern.slf4j.Slf4j;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Slf4j
@EnableScheduling
@Component
public class LogGenerator {

    // Spring cron has six fields (second minute hour day month weekday),
    // so this fires once every second
    @Scheduled(cron = "* * * * * *")
    public void generateLog() {
        log.info("Michael says {}", System.currentTimeMillis());
    }
}
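MainApplication.java is not shown here; a minimal sketch of the usual Spring Boot entry point would be:

package code;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class MainApplication {

    public static void main(String[] args) {
        // Boots the Spring context, which picks up LogGenerator and starts the scheduler
        SpringApplication.run(MainApplication.class, args);
    }
}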

3 Deploying ELK

We will use Docker Compose to create a Docker network plus the ELK containers, with all the containers joined to that same network.
We need a working directory:
  • elk-test
    • /elk-config
      • /elasticsearch
        • elasticsearch.yml
      • /kibana
        • kibana.yml
      • /logstash
        • /pipeline
          • logstash.conf
    • /elk-data (ignore this; everything inside is generated by Docker)
      • /elasticsearch
        • /data
      • /kibana
        • /data
    • docker-compose.yml

3.1 Elasticsearch configuration

elk-test/elk-config/elasticsearch/elasticsearch.yml
cluster.name: "elasticsearch"
network.host: 0.0.0.0
xpack.security.enabled: false
xpack.security.enrollment.enabled: false
xpack.security.autoconfiguration.enabled: false
Security is switched off here to keep the local demo simple, which is why no credentials are needed later when we query Elasticsearch or set up Kibana; don't do this on a real production cluster.

3.2 Kibana configuration

elk-test/elk-config/kibana/kibana.yml
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://elasticsearch:9200"]
monitoring.ui.container.elasticsearch.enabled: true
Note that elasticsearch.hosts points at the hostname elasticsearch: inside the shared Docker network, the Compose service name resolves to the container.

3.3 Logstash pipeline configuration

elk-test/elk-config/logstash/pipeline/logstash.conf
input {
  # Listen for JSON events, one per line, on TCP port 5000
  tcp {
    port => "5000"
    codec => json_lines
  }
}

output {
  # Echo to the container's stdout for easy debugging
  stdout {}
  # Ship to Elasticsearch, one index per day
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "mick-elk-%{+YYYY.MM.dd}"
  }
}
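The json_lines codec expects one JSON document per line over the TCP connection, which is the same wire format LogstashTcpSocketAppender emits. As a sketch, this throwaway Java client (hypothetical, not part of the project) pushes a single hand-written event into the pipeline:

import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class LogstashTcpSmokeTest {

    public static void main(String[] args) throws Exception {
        // Connect to the Logstash TCP input published by docker-compose
        try (Socket socket = new Socket("127.0.0.1", 5000);
             Writer out = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8)) {
            // One JSON document per line, terminated by \n
            out.write("{\"message\":\"hello from smoke test\",\"app_name\":\"mick-app\"}\n");
            out.flush();
        }
    }
}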

3.4 Docker Compose configuration

elk-test/docker-compose.yml
version: "3.8"

networks:
  elk:
    driver: bridge

services:
  elasticsearch:
    container_name: elasticsearch
    image: elasticsearch:8.12.2
    mem_limit: 1073741824  # 1 GiB
    ports:
      - "9200:9200"  # HTTP API
      - "9300:9300"  # inter-node transport
    networks:
      - elk
    volumes:
      - ./elk-config/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elk-data/elasticsearch/data:/usr/share/elasticsearch/data
    environment:
      - discovery.type=single-node
      - ELASTIC_PASSWORD=elastic
      - bootstrap.memory_lock=true

  logstash:
    depends_on:
      - elasticsearch
    container_name: logstash
    image: logstash:8.12.2
    mem_limit: 1073741824
    ports:
      - "5000:5000"  # TCP input defined in logstash.conf
      - "9600:9600"  # Logstash monitoring API
    networks:
      - elk
    volumes:
      - ./elk-config/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf

  kibana:
    depends_on:
      - elasticsearch
    container_name: kibana
    image: kibana:8.12.2
    mem_limit: 1073741824
    ports:
      - "5601:5601"  # web UI
    networks:
      - elk
    volumes:
      - ./elk-config/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
      - ./elk-data/kibana/data:/usr/share/kibana/data

3.5 Running Docker Compose

Run this command inside the elk-test folder:
docker-compose up -d
To stop and clean everything up:
docker-compose down
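Once the containers are up (allow a minute or two for Elasticsearch to start), we can sanity-check it from the host; since security is disabled, no credentials are needed:
curl http://localhost:9200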

4 Generating logs

  1. Start the application (with the default Spring profile).
  2. Wait for the console to print a stream of Michael says xxxxxxxxxxxxx log messages; at least 10 is a good baseline.
  3. Open http://localhost:9200/_search?size=100 and you should see a bunch of log documents.
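To query only our application's log indices rather than everything in the cluster, the search can also be scoped to the index pattern the Logstash pipeline writes to:
http://localhost:9200/mick-elk-*/_search?size=100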

5 Setting up Kibana and viewing the results

  1. Open http://localhost:5601
  2. It will ask for an enrollment token; press the "Configure manually" button instead.
  3. Enter http://elasticsearch:9200, then press the "Check address" button.
  4. It will report "Elastic is already configured"; press the "Continue to Kibana" button.
  5. Go to the left-hand navigation menu > Analytics > Discover.
  6. Since we already have some log data, it will show "You have data in Elasticsearch. Now, create a data view.".
  7. Press the "Create data view" button.
    1. Under "Name", enter Mick testing
    2. Under "Index pattern", enter mick-elk-*
    3. Under "Timestamp field", pick @timestamp
  8. The right-hand side will list the index sources that match the index pattern.
  9. Press the "Save data view to Kibana" button.
  10. Add the following fields to the data view, in this order:
    1. host
    2. app_name
    3. level
    4. logger_name
    5. message
  11. Make sure the @timestamp column is sorted in descending order.
  12. Optionally, press the Date quick select at the top right > Refresh every, and set it to 5 seconds.
  13. If the screen feels cramped, press Chart options at the top right > Hide chart.
Note: all of the settings above are stored in Elasticsearch itself, inside the .kibana index (http://localhost:9200/.kibana/_search?size=100).
