November 2, 2022

OpenShift 4.6 Automation and Integration: Cluster Logging

Overview

1.1.8. About cluster logging components
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html-single/logging/index#cluster-logging-about-components_cluster-logging

The major components of cluster logging are:

LogStore

The logStore is the Elasticsearch cluster that (see the status-check sketch after this list):

  • Stores the logs into indexes.
  • Provides RBAC access to the logs.
  • Provides data redundancy.
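
A quick way to confirm the log store is healthy is to look at the Elasticsearch custom resource and pods that the OpenShift Elasticsearch Operator manages. A minimal sketch, assuming the default deployment in the openshift-logging namespace and the component=elasticsearch pod label:

$ oc get elasticsearch -n openshift-logging
$ oc get pods -l component=elasticsearch -n openshift-logging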

Collection

Implemented with Fluentd. By default, the log collector uses the following sources:

  • journald for all system logs
  • /var/log/containers/*.log for all container logs

The logging collector is deployed as a daemon set that deploys pods to each OpenShift Container Platform node.
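
To confirm the collector is running on every node, list the daemon set and its pods. A minimal sketch, assuming the default fluentd daemon set name and component=fluentd pod label in the openshift-logging namespace:

$ oc get daemonset fluentd -n openshift-logging
$ oc get pods -l component=fluentd -n openshift-logging -o wide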

Visualization

This is the UI component you can use to view logs, graphs, charts, and so forth. The current implementation is Kibana.
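
Kibana is exposed through a route in the openshift-logging namespace. A hedged sketch for finding its URL, assuming the default route name kibana:

$ oc get route kibana -n openshift-logging -o jsonpath='{.spec.host}'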

Event Routing

The Event Router is a pod that watches OpenShift Container Platform events so they can be collected by cluster logging. The Event Router collects events from all projects and writes them to STDOUT. Fluentd collects those events and forwards them into the OpenShift Container Platform Elasticsearch instance. Elasticsearch indexes the events to the infra index.

You must manually deploy the Event Router.

Installing cluster logging

Chapter 2. Installing cluster logging https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html-single/logging/index#cluster-logging-deploying

Install the OpenShift Elasticsearch Operator

namespace: openshift-operators-redhat
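
If you install the operator from the CLI rather than the web console, the usual pattern is a Namespace, an OperatorGroup, and a Subscription. The following is only a sketch under assumptions: the channel ("4.6") and the redhat-operators catalog source should be adjusted to match your cluster.

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat
spec: {}   # empty spec: the operator watches all namespaces
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: elasticsearch-operator
  namespace: openshift-operators-redhat
spec:
  channel: "4.6"            # assumption: match your cluster version
  installPlanApproval: "Automatic"
  name: elasticsearch-operator
  source: redhat-operators  # assumption: default Red Hat catalog source
  sourceNamespace: openshift-marketplace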

Install the Cluster Logging Operator

namespace: openshift-logging
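
The CLI install follows the same pattern as the Elasticsearch Operator, except the OperatorGroup is scoped to the openshift-logging namespace. Again a hedged sketch; channel and catalog source are assumptions to adjust for your cluster.

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
  - openshift-logging       # operator only watches its own namespace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: "4.6"            # assumption: match your cluster version
  installPlanApproval: "Automatic"
  name: cluster-logging
  source: redhat-operators  # assumption: default Red Hat catalog source
  sourceNamespace: openshift-marketplace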

Deploying a Cluster Logging Instance

This default cluster logging configuration should support a wide array of environments.

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy:
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: "<storage-class-name>"
        size: 200G
      resources:
        limits:
          memory: "16Gi"
        requests:
          memory: "16Gi"
      proxy:
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}

Verify

$ oc get clusterlogging -n openshift-logging instance -o yaml
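
Beyond the ClusterLogging resource itself, it can help to confirm that the component pods and their storage claims are up; a short hedged sketch:

$ oc get pods -n openshift-logging
$ oc get pvc -n openshift-logging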

Install the Event Router

7.1. Deploying and configuring the Event Router
https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html-single/logging/index#cluster-logging-eventrouter-deploy_cluster-logging-curator
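
The linked procedure deploys the Event Router from a template that creates a ServiceAccount, ClusterRole, ClusterRoleBinding, ConfigMap, and Deployment. A hedged sketch of applying and checking it, assuming the template has been saved locally as eventrouter.yaml and that the resulting objects use the default eventrouter names and labels:

$ oc process -f eventrouter.yaml | oc apply -n openshift-logging -f -
$ oc get pods -n openshift-logging --selector component=eventrouter
$ oc logs deployment/eventrouter -n openshift-logging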

Creating Kibana Index Patterns

Index Pattern: app-*
Time Filter Field Name: @timestamp

Index Pattern: infra-*
Time Filter Field Name: @timestamp

Index Pattern: audit-*
Time Filter Field Name: @timestamp
