October 14, 2019

Eclipse Microprofile Metrics with Wildfly 18 and Prometheus

Eclipse Microprofile Metrics

"This specification aims at providing a unified way for Microprofile servers to export Monitoring data ("Telemetry") to management agents and also a unified Java API, that all (application) programmers can use to expose their telemetry data." [https://microprofile.io/project/eclipse/microprofile-metrics]

Source code https://github.com/eclipse/microprofile-metrics/

Eclipse Microprofile Metrics Specification https://github.com/eclipse/microprofile-metrics/releases
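
As a sketch of the application side of the API, a JAX-RS resource can expose its own telemetry with annotations such as @Counted and @Timed from org.eclipse.microprofile.metrics.annotation (the class and metric names below are made up for illustration):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Timed;

@Path("/hello")
public class HelloResource {

    // Counts how often hello() is invoked (exposed in the 'application' scope).
    @Counted(name = "helloCount", description = "Number of hello() invocations")
    // Records how long hello() invocations take.
    @Timed(name = "helloTimer", description = "Duration of hello() invocations")
    @GET
    public Response hello() {
        return Response.ok("Hello, Metrics!").build();
    }
}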

JBoss EAP 7.2

Does not support Eclipse Microprofile Metrics.

Wildfly 18.0.0.Final

Wildfly 18.0.0.Final supports Eclipse Microprofile Metrics 2.0.0 [1], which is part of Eclipse Microprofile 3.0.

[1] $JBOSS_HOME/modules/system/layers/base/org/eclipse/microprofile/metrics/api/main/microprofile-metrics-api-2.0.2.jar

Documentation https://docs.wildfly.org/18/Admin_Guide.html#MicroProfile_Metrics_SmallRye

Configuration


[standalone@localhost:9990 /] /subsystem=microprofile-metrics-smallrye:read-resource(recursive=true, include-defaults=true)
{
    "outcome" => "success",
    "result" => {
        "exposed-subsystems" => ["*"],
        "prefix" => expression "${wildfly.metrics.prefix:wildfly}",
        "security-enabled" => false
    }
}
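
The attributes shown above can be changed with standard write-attribute operations. For example, to require authenticated access to the metrics endpoint (a sketch; reload the server afterwards so the change takes effect):

[standalone@localhost:9990 /] /subsystem=microprofile-metrics-smallrye:write-attribute(name=security-enabled, value=true)
[standalone@localhost:9990 /] reload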

Wildfly exposes metrics via the HTTP Management Interface, i.e. at http://127.0.0.1:9990/metrics.

Prometheus

Prometheus is an open-source monitoring and alerting platform. Its main features are:

  • "Prometheus implements a highly dimensional data model. Time series are identified by a metric name and a set of key-value pairs."
  • "PromQL allows slicing and dicing of collected time series data in order to generate ad-hoc graphs, tables, and alerts."
  • "Prometheus has multiple modes for visualizing data: a built-in expression browser, Grafana integration, and a console template language."
  • "Prometheus stores time series in memory and on local disk in an efficient custom format. Scaling is achieved by functional sharding and federation."
  • "Each server is independent for reliability, relying only on local storage. Written in Go, all binaries are statically linked and easy to deploy."
  • "Alerts are defined based on Prometheus's flexible PromQL and maintain dimensional information. An alertmanager handles notifications and silencing."
  • "Client libraries allow easy instrumentation of services. Over ten languages are supported already and custom libraries are easy to implement."
  • "Existing exporters allow bridging of third-party data into Prometheus. Examples: system statistics, as well as Docker, HAProxy, StatsD, and JMX metrics."

[https://prometheus.io/]

Prometheus can either be installed locally or run via Docker.

For a local installation, download the latest Prometheus release, unpack it, and run './prometheus'.
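
For example (the version number below is only an illustration; pick the current release from https://prometheus.io/download/):

wget https://github.com/prometheus/prometheus/releases/download/v2.13.0/prometheus-2.13.0.linux-amd64.tar.gz
tar xvfz prometheus-2.13.0.linux-amd64.tar.gz
cd prometheus-2.13.0.linux-amd64
./prometheus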

To use Docker, use the Prometheus image at https://hub.docker.com/r/prom/prometheus.

Prometheus Docker Image source (Dockerfile) https://github.com/prometheus/prometheus/blob/master/Dockerfile.

Prometheus Docker Image documentation https://prometheus.io/docs/prometheus/latest/installation/.
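
A minimal Docker invocation, assuming the customized prometheus.yml (see below) is mounted into the container, could look like this. Note that inside the container 127.0.0.1 refers to the container itself, so the Wildfly target address may need to be adjusted:

docker run -p 9090:9090 -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus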

Before we can use Prometheus with Wildfly, we need to add the Wildfly metrics endpoint to the Prometheus configuration. Edit prometheus.yml in the root of the unpacked installation.


# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
    - targets: ['localhost:9090']

  # this is the configuration to poll metrics from WildFly 18
  # https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
  - job_name: 'wildfly18'
    scrape_interval: 2s
    metrics_path: '/metrics'
    scheme: 'http'
    static_configs:
    - targets: ['127.0.0.1:9990']

Then start Prometheus with './prometheus' and open http://127.0.0.1:9090/. To test it, check which metrics Wildfly is exposing by calling its metrics endpoint.


$ curl http://127.0.0.1:9990/metrics

# HELP base_cpu_processCpuLoad Displays the "recent cpu usage" for the Java Virtual Machine process.
# TYPE base_cpu_processCpuLoad gauge
base_cpu_processCpuLoad 1.5940700593791097E-4

Go back to the Prometheus expression browser (http://127.0.0.1:9090/) and enter base_cpu_processCpuLoad.
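
PromQL can then be used to transform the raw gauge, for example to display it as a percentage or smoothed over the last minute (the queries are a sketch, assuming the metric name shown above):

base_cpu_processCpuLoad * 100
avg_over_time(base_cpu_processCpuLoad[1m])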
