JMH (Java Microbenchmark Harness) is a library for benchmarking Java code. Running a benchmark with a harness such as JMH gives far more reliable results than simply measuring the execution time of methods with System.currentTimeMillis() or System.nanoTime().
The easiest way to run a benchmark using JMH is to use Maven/Gradle plugins. This article shows how to write and run a simple benchmark using the Gradle plugin.
I assume you already have a Gradle project in which you want to use JMH, but if you wish to follow the walkthrough in this article, you can check out the code from here as a starting point – https://github.com/heppydepe/measuring-execution-time-of-java-methods/tree/v1.0 – which was used in the other article ‘Measuring Execution Time of Java Methods’. It is a simple project with a class called ‘RandomNumbers’ that creates a list of 100 random numbers and stores it in memory when an object is instantiated.
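The exact implementation lives in the linked repository; roughly, it is a class like the following sketch (the field name and getter are assumptions for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical reconstruction of the RandomNumbers class from the linked project:
// creates and stores 100 random numbers when an object is instantiated.
public class RandomNumbers {
    private final List<Integer> numbers = new ArrayList<>();

    public RandomNumbers() {
        Random random = new Random();
        for (int i = 0; i < 100; i++) {
            numbers.add(random.nextInt());
        }
    }

    public List<Integer> getNumbers() {
        return numbers;
    }
}
```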
Add The Plugin
The latest version of this plugin is 0.6.5 at the time of writing this post. You should probably use whichever version is the latest when you read this.
plugins {
    id 'me.champeau.jmh' version '0.6.5'
}
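The plugin also adds a jmh configuration block to build.gradle where you can tune how the benchmarks run. A minimal sketch – the values shown are arbitrary examples, not recommendations:

```
jmh {
    warmupIterations = 2   // warmup iterations per fork
    iterations = 5         // measurement iterations per fork
    fork = 1               // number of forked JVMs
}
```

With no configuration at all, the plugin simply runs with JMH's defaults, which is fine for a first benchmark.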
Write Your Benchmarks
The plugin expects all our benchmarks to be placed under the src/jmh directory. I like this because it mirrors how we keep our unit tests – in a separate src/test folder. So create a jmh directory inside src, a java directory inside that, and then the folder structure representing your package. That is, if your package name is com.example.mybenchmark, create the com/example/mybenchmark directory inside src/jmh/java.
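For the package used in this article, the resulting layout would look like this (the benchmark file name is the one created in the next step):

```
src
└── jmh
    └── java
        └── com
            └── example
                └── mybenchmark
                    └── RandomNumbersBenchmark.java
```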
Create a Java file to put your benchmarks in. Here is a benchmark –

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Benchmark)
public class RandomNumbersBenchmark {
    @Benchmark
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @BenchmarkMode(Mode.AverageTime)
    public void initializeRandomNumbers(Blackhole bh) {
        bh.consume(new RandomNumbers());
    }
}
new RandomNumbers() is the piece of code that we are benchmarking – the cost of creating new RandomNumbers objects. The Blackhole.consume() method ensures that JVM optimisations don’t get in the way of our benchmark. Without it, the JVM can see that we never actually use the new RandomNumbers object, and optimise the allocation away (dead-code elimination). JVMs do many such things, and benchmarking frameworks like JMH give us ways to stop such optimisations from skewing our benchmark numbers.
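An alternative to accepting a Blackhole parameter is simply to return the value from the benchmark method – JMH implicitly consumes return values. A sketch, assuming JMH is on the classpath (the method name is made up):

```java
// Returning the object has the same effect as bh.consume(...):
// JMH implicitly consumes benchmark return values, so the JVM
// cannot eliminate the allocation as dead code.
@Benchmark
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@BenchmarkMode(Mode.AverageTime)
public RandomNumbers initializeRandomNumbersByReturn() {
    return new RandomNumbers();
}
```

Blackhole is still useful when a benchmark produces several values that all need consuming.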
Run the Benchmarks
Running the benchmarks is quite simple. Just execute ./gradlew jmh from the project directory (where the build.gradle file is). You should see the benchmarks running and printing their metrics as they go, and the output ends with a summary like so –
Benchmark Mode Cnt Score Error Units
RandomNumbersBenchmark.initializeRandomNumbers avgt 25 2056.808 ± 84.686 ns/op
The full project for trying out can be downloaded from GitHub – https://github.com/heppydepe/measuring-execution-time-of-java-methods/tree/v2.0

Annotations
There are a ton of annotations in JMH, which you can refer to in the documentation, but I’ll briefly describe the ones used in the sample above.
@State is used to assign a “Scope” for the benchmark state. Possible scopes are Benchmark, Group and Thread.
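For example, a state class with per-thread scope might look like the following sketch (assumes JMH on the classpath; the class and field names are made up for illustration):

```java
// Each benchmark thread gets its own instance of this state class,
// so threads never share (or contend on) the list below.
@State(Scope.Thread)
public class ListState {
    List<Integer> numbers;

    // Runs before measurement; setup time is not included in the results.
    @Setup
    public void prepare() {
        numbers = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            numbers.add(i);
        }
    }
}
```

Scope.Benchmark, as used in the sample above, means a single instance is shared by all threads in the run.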
The @Benchmark annotation identifies the method as a benchmark, similar to the @Test annotation that marks unit tests.
The @OutputTimeUnit annotation specifies which time unit your benchmark output should be in.
The @BenchmarkMode annotation specifies in which modes the benchmark will run. Note that this annotation can take multiple modes. Possible modes are AverageTime, SampleTime, SingleShotTime, Throughput and All.
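For example, to measure both average time per operation and throughput in a single run, the annotation can take an array of modes (a sketch, assuming JMH on the classpath):

```java
@Benchmark
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@BenchmarkMode({Mode.AverageTime, Mode.Throughput})
public void initializeRandomNumbersBothModes(Blackhole bh) {
    bh.consume(new RandomNumbers());
}
```

The summary table then shows one row per mode (avgt and thrpt) for the same method.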
For reference, all annotations and their meanings can be found in the JavaDoc for JMH.
Why JMH?
Microbenchmarking is a big step above simple time capturing. JMH handles warmup, iteration and thread management for us. It also helps us avoid common benchmarking traps – for example, side-stepping JVM optimisations so that our benchmarks stay accurate.