For the latest version, please use Certificate Lifecycle Manager 6.2.0!
Connecting ERS to Opensearch
Summary
This page outlines the process of connecting ERS application logs and metrics to an Opensearch cluster. Except for the Docker deployment, it is assumed that a functioning external cluster is already present. Please refer to the Official Opensearch Documentation to learn how to set up Opensearch and Opensearch-Dashboards.
Docker
Provided within the ERS Docker deployment is an Opensearch role that automatically sets up an Opensearch deployment and configures it with data streams to receive logs and metrics from the ERS deployment. To give you a good overview of your environment and your logs, two example dashboards are also provided.
You can enable the profile at the start of your .env file like below:
COMPOSE_PROFILES='embedded-db,cpki,kms,opensearch'
Kubernetes Helm Chart
It is possible to connect an existing Opensearch deployment to the ERS Helm deployment by setting the relevant configuration options in the values of the Helm chart:
global:
  opensearch:
    enabled: true
    host: "opensearch.cluster.local"
    port: "9200"
    user: "opensearch"
    password: "changeit"
Package-based Installations
To connect Linux package-based installations to an Opensearch cluster, you must set up metrics export and log forwarding.
Metrics Setup Process
ERS applications can be configured to send metrics to an Opensearch cluster by editing the application.properties file (e.g., /etc/opt/mtg-clm-server/application.properties) and configuring the following keys:
management.elastic.metrics.export.enabled=true
management.elastic.metrics.export.host=opensearch.cluster.local
management.elastic.metrics.export.user-name=opensearch
management.elastic.metrics.export.password=changeit
Log-Forwarding Setup Process
To make log parsing easier, the log format is first converted to JSON. This can be enabled by editing the application.properties file and adding the following key:
spring.profiles.include=json-file-logging
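With the json-file-logging profile enabled, each log line becomes a single JSON object that downstream tooling can parse field by field. As a minimal sketch (the field names and sample values below are assumptions based on ECS-style JSON logging; only the @timestamp key is referenced later by the Fluent Bit parser in this guide):

```python
# Sketch: parsing one ECS-style JSON log line, as produced with the
# "json-file-logging" profile enabled. The sample line and its field
# names other than "@timestamp" are illustrative assumptions.
import json

sample_line = (
    '{"@timestamp": "2023-05-26T11:05:59.032Z", '
    '"log.level": "INFO", '
    '"message": "Application started"}'
)

record = json.loads(sample_line)
print(record["@timestamp"])  # the key Fluent Bit later uses as Time_Key
print(record["message"])
```

Because every line is self-contained JSON, forwarders such as Fluent Bit can extract the timestamp and other fields without custom regular expressions.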
To forward the logs to the Opensearch deployment, use Fluent Bit. For more information on how to install Fluent Bit, please refer to the Fluent Bit Official Documentation.
Below you will find an example configuration (/etc/fluent-bit/fluent-bit.conf) that can be used to send the logs to Opensearch:
[SERVICE]
    Flush                               1
    Daemon                              off
    Log_Level                           debug
    parsers_file                        parsers.conf
    storage.path                        /var/log/flb-storage/
    storage.sync                        full
    storage.checksum                    on
    storage.backlog.mem_limit           5M
    storage.max_chunks_up               64
    storage.delete_irrecoverable_chunks On

[INPUT]
    Name              tail
    Tag               clm
    storage.type      filesystem
    Read_from_Head    true
    Buffer_Chunk_Size 1m
    Buffer_Max_Size   1m
    Parser            ers-ecs-parser
    Path_Key          logfile
    Path              /var/log/mtg/mtg-clm-server/mtg-clm-server.log.json

[OUTPUT]
    Name               opensearch
    Match              clm
    Host               opensearch
    Port               9200
    HTTP_User          admin
    HTTP_Passwd        admin
    tls                off
    tls.verify         Off
    Retry_Limit        False
    Suppress_Type_Name true
    Index              mtg-ers-logs-clm
This example uses a custom parser (/etc/fluent-bit/parsers.conf) with the following configuration:
[PARSER]
    Name        ers-ecs-parser
    Format      json
    Time_Key    @timestamp
    Time_Format %Y-%m-%dT%H:%M:%S.%L%z
    # matches 2023-05-26T11:05:59.032Z
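The Time_Format string above matches timestamps such as 2023-05-26T11:05:59.032Z, where Fluent Bit's %L stands for milliseconds. As a quick sanity check, the equivalent parse can be sketched in Python, where %f plays the role of %L:

```python
# Sketch: verifying that the parser's Time_Format matches the timestamps
# in the JSON logs. Python's %f (fractional seconds) corresponds to
# Fluent Bit's %L (milliseconds); %z accepts the trailing "Z".
from datetime import datetime

ts = datetime.strptime("2023-05-26T11:05:59.032Z", "%Y-%m-%dT%H:%M:%S.%f%z")
print(ts.isoformat())
```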
Additional [INPUT] and [OUTPUT] blocks can be added for each logfile you want to stream into the Opensearch cluster.
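For example, a second logfile could be streamed by adding another [INPUT]/[OUTPUT] pair alongside the first. The tag, path, and index name below are placeholders for illustration; substitute the values for your own application:

```
[INPUT]
    Name              tail
    Tag               app2
    storage.type      filesystem
    Read_from_Head    true
    Parser            ers-ecs-parser
    Path_Key          logfile
    Path              /var/log/mtg/<your-app>/<your-app>.log.json

[OUTPUT]
    Name               opensearch
    Match              app2
    Host               opensearch
    Port               9200
    HTTP_User          admin
    HTTP_Passwd        admin
    Suppress_Type_Name true
    Index              mtg-ers-logs-app2
```

The Tag in the [INPUT] block and the Match in the [OUTPUT] block must agree so that each logfile is routed to its own index.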
Example Data-Streams
To configure the index templates for the log and metrics data streams, you can run these two commands in the Dev-Tools console of Opensearch-Dashboards:
PUT _index_template/ers-logs-template
{
  "index_patterns": [
    "mtg-ers-logs-*"
  ],
  "template": {
    "settings": {
      "number_of_replicas": 0
    }
  },
  "data_stream": {
    "timestamp_field": {
      "name": "@timestamp"
    }
  }
}

PUT _index_template/ers-metrics-template
{
  "index_patterns": [
    "mtg-ers-metrics-*"
  ],
  "template": {
    "settings": {
      "number_of_replicas": 0
    }
  },
  "data_stream": {
    "timestamp_field": {
      "name": "@timestamp"
    }
  }
}
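For scripted setups, the same two templates can be applied outside the Dev-Tools console. The sketch below builds the template bodies and sends them with the Python standard library; the cluster URL and the unauthenticated, plain-HTTP request are assumptions to adapt to your cluster's credentials and TLS setup:

```python
# Sketch: applying the two index templates from a script instead of
# Dev-Tools. OPENSEARCH_URL is a placeholder; authentication and TLS
# verification are omitted and must be added for a real cluster.
import json
import urllib.request

OPENSEARCH_URL = "http://opensearch.cluster.local:9200"  # placeholder

def template_body(pattern: str) -> dict:
    """Build an index-template body for a data stream with no replicas."""
    return {
        "index_patterns": [pattern],
        "template": {"settings": {"number_of_replicas": 0}},
        "data_stream": {"timestamp_field": {"name": "@timestamp"}},
    }

def put_template(name: str, pattern: str) -> None:
    """PUT one index template to the cluster."""
    req = urllib.request.Request(
        f"{OPENSEARCH_URL}/_index_template/{name}",
        data=json.dumps(template_body(pattern)).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)

# Uncomment to apply against a reachable cluster:
# put_template("ers-logs-template", "mtg-ers-logs-*")
# put_template("ers-metrics-template", "mtg-ers-metrics-*")
```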
Note that setting the number_of_replicas attribute to 0 is only appropriate for a single-node Opensearch deployment.