For those of us who have been around for a while, it is obvious that well-written Java applications expose a bunch of useful metrics via JMX, and that graphing and alerting on these is the standard way of monitoring such applications. It's so obvious and well-known that it rarely gets written about, and often isn't covered in the software's documentation. And that's a problem for the people for whom it isn't obvious. In reviewing a recent incident involving a Kafka Connect worker, it became clear that there is a gap here: the team running the Kafka Connect cluster was not aware of the rich JMX metrics available, and was instead prodding the source and destination stores in various ways. They weren't doing anything with the JMX metrics because they weren't aware of them. It's true that JMX metrics are not particularly easy to use if you don't have a lot of experience with JVM applications, so let's talk about the metrics we can get from Kafka Connect and how to use them.

Zenreach Engineering have already done most of the work of exposing the Kafka Connect JMX metrics with their Docker image, which includes jmx_exporter for Prometheus. But let's take a look at what's involved as a learning exercise.

JMX metrics are exposed remotely via a TCP port, but most JVM applications don't expose them by default, so we'll need to pass in some additional commandline parameters. The exact parameters, and how you add them, vary by application, but generally look like:

If you plan on exposing these metrics over the network then you will want to investigate the authentication and SSL options, and you may need to set the advertised hostname too, depending on your setup. However, since we are not exposing the JMX port over the network (and are connecting to localhost), we will not cover these here.

We can now restart the application with these new parameters, and we should see that it is additionally listening on port 9010 (or whichever port you picked) by running netstat -lntp.

If you are running the application on your desktop or laptop, then you probably already have jconsole installed. Running it will give you a GUI for connecting to the remote process, along with some nice graphs for all the JMX metrics. This is useful for very quickly checking that JMX metrics are being exposed correctly, but it is not particularly useful for monitoring the application.

If you don't have an environment where you can connect to the JMX port from a GUI application, then jmx-dump will be helpful. As the name suggests, it dumps JMX metrics to the console in JSON format:

java -jar jmx-dump-0.7.3-standalone.jar -dump-all -p 9010 | less

Here is a good place to mention gron, a portable commandline tool which makes JSON greppable and nicely colorises it.
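To illustrate how gron helps here, a sketch of a pipeline combining it with jmx-dump (this assumes both tools are on your PATH and a JVM is listening on JMX port 9010; the metric name being grepped for is just a placeholder):

```shell
# Dump all JMX metrics as JSON, flatten into greppable assignments with gron,
# then filter for a metric of interest ("lag" here is only an example pattern).
java -jar jmx-dump-0.7.3-standalone.jar -dump-all -p 9010 | gron | grep -i lag
```

Because gron turns nested JSON into one flat `path = value;` assignment per line, ordinary grep is enough to pull out a single metric from a large dump.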
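For completeness, the commandline parameters referred to earlier typically look like the following. This is a sketch of the standard `com.sun.management.jmxremote` system properties for local-only, unauthenticated JMX on port 9010; `your-application.jar` is a placeholder, and disabling authentication and SSL is only reasonable because the port is not exposed over the network:

```shell
# Expose JMX on port 9010 with auth and SSL disabled (local use only).
java \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -jar your-application.jar
```

After restarting with flags like these, netstat -lntp should show the JVM listening on the extra port.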