Stop using the main method for performance tests; it's really too crude…

From: Nuggets, Author: Richard_Yi

Link: https://juejin.cn/post/6844903936869007368

Foreword

“If you cannot measure it, you cannot improve it”.

In daily development, we often have several candidate implementations or utilities to choose from. When we are unsure about their performance, the first instinct is to measure it, and most of the time we simply call the method in a loop and record the total elapsed time.

However, if you are familiar with how the JVM executes code, you know that by default it mixes interpretation with JIT compilation.

The JVM identifies hot code (frequently called methods, loop bodies, shared modules, and so on) through runtime profiling; the JIT compiler then compiles that hot code to machine code that the CPU executes directly.

[Figure: the JVM's JIT compilation of hot code]

In other words, the JVM keeps compiling and optimizing as the program runs, which makes it hard to know how many repetitions are needed before the measurement stabilizes. That is why experienced developers write warm-up logic before timing the code.
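To make the problem concrete, here is a minimal sketch of such hand-rolled timing (the class, method names, and workload are illustrative, not from the article): call the method many times first so the JIT has a chance to compile it, then time a batch of calls and average.

```java
public class NaiveTiming {
    // Hypothetical workload: sum the integers below n (illustrative only)
    static long workload(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    // Hand-rolled measurement: warm the method up first so the JIT
    // can compile it, then time a batch of calls and average.
    static double measureAvgNanos(int warmupRuns, int measuredRuns) {
        for (int i = 0; i < warmupRuns; i++) {
            workload(100_000);
        }
        long start = System.nanoTime();
        for (int i = 0; i < measuredRuns; i++) {
            workload(100_000);
        }
        return (System.nanoTime() - start) / (double) measuredRuns;
    }

    public static void main(String[] args) {
        System.out.println("avg ns/op: " + measureAvgNanos(10_000, 10_000));
    }
}
```

Even with the warm-up loop, results like this are fragile: dead-code elimination, on-stack replacement, and GC pauses can all skew them, which is exactly what JMH is designed to handle.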

JMH, short for Java Microbenchmark Harness, is a toolkit of APIs dedicated to microbenchmarking Java code, released officially by OpenJDK.

What is a microbenchmark? Simply put, it is a benchmark at the method level, with precision down to the microsecond.

A few points to note when benchmarking Java:

  • warm up before measuring

  • keep irrelevant code out of the measured methods

  • support concurrent testing

  • present the test results clearly

JMH usage scenarios:

  • Quantitatively analyze the optimization effect of a hot function

  • Want to know quantitatively how long a function takes to execute, and the correlation between execution time and input variables

  • Comparing multiple implementations of a function

This article walks through a JMH demo and its common annotations and parameters; I hope it helps.

Demo

Let's start with a demo so that readers unfamiliar with JMH can quickly grasp the tool's general usage.

Test project build

JMH is developed under the OpenJDK project (recent JDKs even bundle it for their own microbenchmarks); Java 8 is used here for illustration. For convenience, let's look at how to build a JMH test project with Maven.

The first option is to build from the command line by executing the following in the target directory:

$ mvn archetype:generate \
      -DinteractiveMode=false \
      -DarchetypeGroupId=org.openjdk.jmh \
      -DarchetypeArtifactId=jmh-java-benchmark-archetype \
      -DgroupId=org.sample \
      -DartifactId=test \
      -Dversion=1.0

A test project will be generated in that directory. Opening it, we see the following project structure.

[Figure: structure of the generated test project]

The second option is to integrate JMH into an existing Maven project by adding the jmh-core and jmh-generator-annprocess dependencies directly.

<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>${jmh.version}</version>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>${jmh.version}</version>
    <scope>provided</scope>
</dependency>

Writing performance tests

As an example, let's write a test class comparing the performance of iterating a LinkedList by index versus with foreach. The annotations involved are explained later.

import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

/**
 * @author Richard_yyf
 * @version 1.0 2019/8/27
 */
@State(Scope.Benchmark)
@OutputTimeUnit(TimeUnit.SECONDS)
@Threads(Threads.MAX)
public class LinkedListIterationBenchMark {
    private static final int SIZE = 10000;

    private List<String> list = new LinkedList<>();

    @Setup
    public void setUp() {
        for (int i = 0; i < SIZE; i++) {
            list.add(String.valueOf(i));
        }
    }

    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    public void forIndexIterate() {
        for (int i = 0; i < list.size(); i++) {
            list.get(i);
            // the print keeps the loop body from being eliminated as dead code
            System.out.print("");
        }
    }

    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    public void forEachIterate() {
        for (String s : list) {
            System.out.print("");
        }
    }
}

Execute the test

There are two ways to run a JMH benchmark: build and run a jar file, or launch it directly from a main method or a unit test.

Generating a jar is mainly for larger tests that demand certain machine performance or need to simulate the real environment: write the benchmark, build it, and execute it on a Linux server.

The specific commands are as follows:

$ mvn clean install
$ java -jar target/benchmarks.jar

Day to day, we usually face small tests such as the example above, which can simply be run inside the IDE.

The way to start it is as follows:

public static void main(String[] args) throws RunnerException {
    Options opt = new OptionsBuilder()
            .include(LinkedListIterationBenchMark.class.getSimpleName())
            .forks(1)
            .warmupIterations(2)
            .measurementIterations(2)
            .output("E:/Benchmark.log")
            .build();

    new Runner(opt).run();
}

Report Results

The final output is as follows:

Benchmark                                      Mode  Cnt     Score  Error  Units
LinkedListIterationBenchMark.forEachIterate   thrpt    2  1192.380         ops/s
LinkedListIterationBenchMark.forIndexIterate  thrpt    2   206.866         ops/s

The full console output:

# Detecting actual CPU count: 12 detected
# JMH version: 1.21
# VM version: JDK 1.8.0_131, Java HotSpot(TM) 64-Bit Server VM, 25.131-b11
# VM invoker: C:\Program Files\Java\jdk1.8.0_131\jre\bin\java.exe
# VM options: -javaagent:D:\Program Files\JetBrains\IntelliJ IDEA 2018.2.2\lib\idea_rt.jar=65175:D:\Program Files\JetBrains\IntelliJ IDEA 2018.2.2\bin -Dfile.encoding=UTF-8
# Warmup: 2 iterations, 10 seconds each
# Measurement: 2 iterations, 10 seconds each
# Timeout: 10 min per iteration
# Threads: 12 threads, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: org.sample.jmh.LinkedListIterationBenchMark.forEachIterate

# Run progress: 0.00% complete, ETA 00:01:20
# Fork: 1 of 1
# Warmup Iteration 1: 1189.267 ops/s
# Warmup Iteration 2: 1197.321 ops/s
Iteration 1: 1193.062 ops/s
Iteration 2: 1191.698 ops/s


Result "org.sample.jmh.LinkedListIterationBenchMark.forEachIterate":
  1192.380 ops/s


# JMH version: 1.21
# VM version: JDK 1.8.0_131, Java HotSpot(TM) 64-Bit Server VM, 25.131-b11
# VM invoker: C:\Program Files\Java\jdk1.8.0_131\jre\bin\java.exe
# VM options: -javaagent:D:\Program Files\JetBrains\IntelliJ IDEA 2018.2.2\lib\idea_rt.jar=65175:D:\Program Files\JetBrains\IntelliJ IDEA 2018.2.2\bin -Dfile.encoding=UTF-8
# Warmup: 2 iterations, 10 seconds each
# Measurement: 2 iterations, 10 seconds each
# Timeout: 10 min per iteration
# Threads: 12 threads, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: org.sample.jmh.LinkedListIterationBenchMark.forIndexIterate

# Run progress: 50.00% complete, ETA 00:00:40
# Fork: 1 of 1
# Warmup Iteration 1: 205.676 ops/s
# Warmup Iteration 2: 206.512 ops/s
Iteration 1: 206.542 ops/s
Iteration 2: 207.189 ops/s


Result "org.sample.jmh.LinkedListIterationBenchMark.forIndexIterate":
  206.866 ops/s


# Run complete. Total time: 00:01:21

REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
experiments, perform baseline and negative tests that provide experimental control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
Do not assume the numbers tell you what you want them to tell.

Benchmark                                      Mode  Cnt     Score  Error  Units
LinkedListIterationBenchMark.forEachIterate   thrpt    2  1192.380         ops/s
LinkedListIterationBenchMark.forIndexIterate  thrpt    2   206.866         ops/s

Introduction to annotations

Let's go through the relevant annotations in detail.

@BenchmarkMode

The benchmark mode. JMH supports the following modes:

  • Throughput: operations per unit of time

  • AverageTime: the average time per operation

  • SampleTime: samples the time of individual operations

  • SingleShotTime: the time of a single operation, with no warm-up (cold start)

  • All: all of the above

Annotations can be at the method level or at the class level.

@BenchmarkMode(Mode.All)
public class LinkedListIterationBenchMark {
    ...
}

@Benchmark
@BenchmarkMode({Mode.Throughput, Mode.SingleShotTime})
public void m() {
    ...
}

@Warmup

Configures the warm-up; iterations = 3 means three warm-up iterations.

@Benchmark
@BenchmarkMode({Mode.Throughput, Mode.SingleShotTime})
@Warmup(iterations = 3)
public void m() {
    ...
}

@Measurement

Configures the measurement iterations:

  • iterations: the number of measurement iterations

  • time: the duration of each iteration

  • timeUnit: the time unit for time

@Benchmark
@BenchmarkMode({Mode.Throughput, Mode.SingleShotTime})
@Measurement(iterations = 3)
public void m() {
    ...
}
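Taken together, a method can carry explicit durations on both annotations; a hypothetical sketch (the values are arbitrary, not from the article):

```java
@Benchmark
@Warmup(iterations = 3, time = 5, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 5, time = 5, timeUnit = TimeUnit.SECONDS)
public void m() {
    ...
}
```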

@Threads

The number of test threads per process.

@Threads(Threads.MAX)
public class LinkedListIterationBenchMark {
    ...
}

@Fork

The number of forks. With a fork count of 3, JMH forks three separate JVM processes for the test.

@Benchmark
@BenchmarkMode({Mode.Throughput, Mode.SingleShotTime})
@Fork(value = 3)
public void m() {
    ...
}

@OutputTimeUnit

The time unit for the benchmark results; typically seconds, milliseconds, or microseconds.

@OutputTimeUnit(TimeUnit.SECONDS)
public class LinkedListIterationBenchMark {
    ...
}

@Benchmark

Method-level annotation marking the method as a benchmark target; its usage is similar to JUnit's @Test.

@Param

Field-level annotation. @Param specifies several values for a field, which is especially useful for testing a function's performance under different inputs.
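As an illustration (a hypothetical benchmark, not from the article), JMH runs the benchmark once for each listed value and injects it into the annotated field:

```java
@State(Scope.Benchmark)
public class StringConcatBenchMark {

    // the benchmark is run once per listed value
    @Param({"10", "100", "1000"})
    private int size;

    @Benchmark
    public String concat() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < size; i++) {
            sb.append(i);
        }
        // returning the result keeps it from being dead-code eliminated
        return sb.toString();
    }
}
```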

@Setup

Method-level annotation. Marks preparatory work to run before the measurement, such as initializing data.

@TearDown

Method-level annotation. Marks clean-up work to run after the measurement, such as shutting down a thread pool or closing database connections; it is mainly used for resource reclamation.
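A hypothetical pairing of @Setup and @TearDown (the thread pool is just an example resource, not from the article):

```java
@State(Scope.Benchmark)
public class ThreadPoolBenchMark {

    private ExecutorService pool;

    @Setup
    public void setUp() {
        // prepare the resource before measurement starts
        pool = Executors.newFixedThreadPool(4);
    }

    @TearDown
    public void tearDown() {
        // reclaim the resource after measurement finishes
        pool.shutdown();
    }

    @Benchmark
    public Future<Integer> submit() {
        return pool.submit(() -> 1);
    }
}
```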

@State

When @Setup is used, the class must also carry this annotation, otherwise JMH reports that it cannot run; that is why the example above declares @State.

State declares a class to be a “state” and accepts a Scope parameter indicating how that state is shared.

Because many benchmarks need classes to represent state, JMH injects these classes into the benchmark method via dependency injection.

Scope is mainly divided into three types:

  • Thread: This state is exclusive to each thread.

  • Group: This state is shared by all threads in the same group.

  • Benchmark: This state is shared among all threads.
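For example, with Scope.Thread each benchmark thread receives its own copy of the state object, which JMH injects as a method parameter (a hypothetical sketch, not from the article):

```java
@State(Scope.Thread)
public class ThreadLocalState {
    // each benchmark thread gets its own counter, so there is no contention
    int counter;
}

@Benchmark
public void measure(ThreadLocalState state) {
    state.counter++;
}
```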

Launch configuration

In the launcher, the parameters discussed above can be set directly, and the test results can be written to a specified file.

/**
 * For running in an IDE only.
 * On the command line, build the jar and start it with java -jar.
 *
 * 1. This is the entry point that launches the benchmark.
 * 2. The JMH test configuration is also done here.
 * 3. By default, JMH looks for methods annotated with @Benchmark;
 *    include and exclude narrow that selection down.
 */
public static void main(String[] args) throws RunnerException {
    Options opt = new OptionsBuilder()
            // include semantics:
            // a method name, or XXX.class.getSimpleName()
            .include("Helloworld")
            // exclude semantics
            .exclude("Pref")
            // 10 warm-up iterations
            .warmupIterations(10)
            // 10 measurement iterations; each fork warms up first
            // and then measures by calling the @Benchmark methods
            .measurementIterations(10)
            // forks(3) runs the whole test in 3 separate JVM processes,
            // because a single run is not representative enough;
            // each fork warms up and then measures
            .forks(3)
            .output("E:/Benchmark.log")
            .build();

    new Runner(opt).run();
}

Conclusion

With JMH, many tools and frameworks can be benchmarked, for example comparing the performance of logging frameworks or of BeanCopy implementations.

For more examples, you can refer to the official JMH samples:

https://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/

From a Java developer's perspective, this article covered some common pitfalls in measuring code, how they relate to the operating system and the Java runtime, and how JMH helps you avoid them.

-End-
