(MySQL) Performance Monitoring with Prometheus [UPDATE]


In my last post I was looking for a way to do performance monitoring, and I stumbled upon Prometheus. Prometheus is much more than a tool for monitoring a single node or service. Anyway, let's get the idea of gathering metrics using MySQL as an example.

This is how a simple Prometheus configuration could look:

  global:
    scrape_interval: 1m
    scrape_timeout: 10s
    evaluation_interval: 1m

  scrape_configs:
    - job_name: mysql
      scheme: http
      static_configs:
        - targets:
            - ''
          labels:
            zone: mysql

Every minute Prometheus scrapes the target's /metrics endpoint (/metrics is a Prometheus convention) and labels the result with zone="mysql". When querying the data, you can filter and aggregate by these labels.

This is a simple configuration. The fun of Prometheus starts once you have a lot of targets and jobs.
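Adding another job is just another entry under scrape_configs. A sketch (the exporter host names and the node job are made-up examples, not from the original setup):

```yaml
scrape_configs:
  - job_name: mysql
    static_configs:
      - targets: ['mysql-exporter:9104']
        labels:
          zone: mysql
  - job_name: node
    static_configs:
      - targets: ['node1:9100', 'node2:9100']
        labels:
          zone: linux
```

Each job gets a job label automatically, so you can later slice queries by job or by your custom zone label.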

Let’s have a look at our specific endpoint:

> curl
mysql_global_status_threads_cached 26
mysql_global_status_threads_connected 99
mysql_global_status_threads_created 125
mysql_global_status_threads_running 2

As a MySQL administrator you know what these values mean. The data is provided by an exporter, in our case running in a container :)

> docker run -d -p 9104:9104 --link=mysql:backend \
  -e DATA_SOURCE_NAME="prometheus:secret@(backend:3306)/" \
  prom/mysqld-exporter

This is old-school Docker. Obviously MySQL is also running in a container (named mysql), and we are using the deprecated --link option :)

The mysqld-exporter has a lot of options:

$ docker run --rm prom/mysqld-exporter --help
Usage of /bin/mysqld_exporter:
  -collect.auto_increment.columns
      Collect auto_increment columns and max values from information_schema
  -collect.binlog_size
      Collect the current size of all registered binlog files
  -collect.global_status
      Collect from SHOW GLOBAL STATUS (default true)
  -collect.global_variables
      Collect from SHOW GLOBAL VARIABLES (default true)
  -collect.info_schema.processlist
      Collect current thread state counts from the information_schema.processlist
  -collect.info_schema.tables
      Collect metrics from information_schema.tables (default true)
  -collect.info_schema.tables.databases string
      The list of databases to collect table stats for, or '*' for all (default "*")
  -collect.info_schema.tablestats
      If running with userstat=1, set to true to collect table statistics
  -collect.info_schema.userstats
      If running with userstat=1, set to true to collect user statistics
  -collect.perf_schema.eventsstatements
      Collect metrics from performance_schema.events_statements_summary_by_digest
  -collect.perf_schema.eventsstatements.digest_text_limit int
      Maximum length of the normalized statement text (default 120)
  -collect.perf_schema.eventsstatements.limit int
      Limit the number of events statements digests by response time (default 250)
  -collect.perf_schema.eventsstatements.timelimit int
      Limit how old the 'last_seen' events statements can be, in seconds (default 86400)
  -collect.perf_schema.eventswaits
      Collect metrics from performance_schema.events_waits_summary_global_by_event_name
  -collect.perf_schema.file_events
      Collect metrics from performance_schema.file_summary_by_event_name
  -collect.perf_schema.indexiowaits
      Collect metrics from performance_schema.table_io_waits_summary_by_index_usage
  -collect.perf_schema.tableiowaits
      Collect metrics from performance_schema.table_io_waits_summary_by_table
  -collect.perf_schema.tablelocks
      Collect metrics from performance_schema.table_lock_waits_summary_by_table
  -collect.slave_status
      Collect from SHOW SLAVE STATUS (default true)
  -config.my-cnf string
      Path to .my.cnf file to read MySQL credentials from. (default "/home/golang/.my.cnf")
  -log.level value
      Only log messages with the given severity or above. Valid levels: [debug, info, warn, error, fatal, panic]. (default info)
  -log.slow-filter
      Add a log_slow_filter to avoid excessive MySQL slow logging. NOTE: Not supported by Oracle MySQL.
  -web.listen-address string
      Address to listen on for web interface and telemetry. (default ":9104")
  -web.telemetry-path string
      Path under which to expose metrics. (default "/metrics")

Prometheus ships with an expression browser, giving you the opportunity to access and graph the data. It also provides its own query language :) Even without graphing, the following two queries should be self-explanatory:




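To give an idea, here are two illustrative expressions built from the thread metrics shown above (not necessarily the original queries):

```
mysql_global_status_threads_connected{zone="mysql"}

rate(mysql_global_status_threads_created[5m])
```

The first simply selects the connected-threads gauge for our zone; the second turns the ever-growing threads_created counter into a per-second creation rate.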
Brian Brazil suggested using another [function](https://prometheus.io/docs/querying/functions/), thx \o/


I recommend using Grafana as a dashboard. You just need to add the Prometheus server as a data source and reuse the queries from the expression browser. There is also PromDash, but afaik Grafana is the way to go.

Prometheus rocks. Having a central point for performance analysis of the whole datacenter is awesome. There are a lot of exporters you can use, and even writing your own exporter is quite easy.
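To illustrate how little an exporter actually is: Prometheus just expects an HTTP endpoint serving metrics in a plain-text format. A minimal sketch in Python (the metric name, port, and load-average example are my own choices, not from any real exporter):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import os


def render_metrics():
    # One gauge in the Prometheus text exposition format.
    load1 = os.getloadavg()[0]
    return ('# HELP node_load1 1m load average.\n'
            '# TYPE node_load1 gauge\n'
            'node_load1 %s\n' % load1)


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve metrics only on the conventional /metrics path.
        if self.path != '/metrics':
            self.send_response(404)
            self.end_headers()
            return
        body = render_metrics().encode()
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain; version=0.0.4')
        self.end_headers()
        self.wfile.write(body)


# To actually serve it (blocks forever):
# HTTPServer(('', 9105), MetricsHandler).serve_forever()
```

Point a scrape job at port 9105 and Prometheus will happily collect node_load1. Real exporters use the official client libraries, which handle the format and metric types for you.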

Have fun!



There is a nice presentation; I recommend checking it out to see the nice graphs Grafana builds :)


Percona is always great at adopting new stuff. Today they announced their Percona Monitoring and Management. Of course it also uses some exporters, Prometheus and Grafana. I'm quite sure it could displace other solutions on the market \o/
