Saturday, December 8, 2018

JVM performance: OpenJ9 uses least memory. GraalVM most. OpenJDK distributions differ.

In a previous blog post I created a setup to compare the performance of several JVMs. I received some valuable feedback on the measurements I conducted and requests to add additional JVMs. In this second post I'll look at some more JVMs, and I've added measurements such as process memory usage and startup time. I've also automated the test and reduced the complexity of the setup by removing haproxy and testing a single JVM at a time.

Setup

Test application

I've used the reactive Spring Boot application from here.

JVMs

The JVMs which were looked at:
  • openjdk:8u181
  • oracle/graalvm-ce:1.0.0-rc9
  • adoptopenjdk/openjdk8:jdk8u172-b11
  • adoptopenjdk/openjdk8-openj9:jdk8u181-b13_openj9-0.9.0
  • azul/zulu-openjdk:8u192
  • store/oracle/serverjre:8
The versions were the latest versions of the respective JVMs available at the time. I also took a quick look at Azul Zing but couldn't get a Docker image with my application running quickly enough, so for now I've skipped it.

Automated tests

I've used SoapUI's load test runner to automate my tests. First I executed a 10s 'primer' load test to reach a steady state. Next I performed a 5 minute test with the following settings:


Dockerfile

I've used the following Dockerfile:

FROM openjdk:8u181
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-XX:+UnlockExperimentalVMOptions","-XX:+UseCGroupMemoryLimitForHeap","-jar","/app.jar"]

And of course I varied the FROM entry. This way of automating also made it a bit more difficult to include Zing, as there are no Docker Hub images available for it. Creating an image myself did not work out of the box with the supplied examples, and since it is just one of the JVMs to look at, I decided to post this without Zing.
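Since the Dockerfile is identical apart from the base image, the variants can be generated in a loop. A minimal sketch (the image list and the Dockerfile.&lt;tag&gt; naming scheme are my own; the actual build command is left commented out):

```shell
#!/bin/sh
# Generate one Dockerfile per base image by substituting the FROM line.
# The list mirrors the JVMs above; extend it as needed.
set -e
for base in openjdk:8u181 azul/zulu-openjdk:8u192 adoptopenjdk/openjdk8-openj9:jdk8u181-b13_openj9-0.9.0; do
  tag=$(echo "$base" | tr '/:' '__')   # make the image name filename/tag friendly
  printf 'FROM %s\nVOLUME /tmp\nARG JAR_FILE\nCOPY ${JAR_FILE} app.jar\nENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-XX:+UnlockExperimentalVMOptions","-XX:+UseCGroupMemoryLimitForHeap","-jar","/app.jar"]\n' "$base" > "Dockerfile.$tag"
  # docker build -f "Dockerfile.$tag" --build-arg JAR_FILE=target/app.jar -t "spring-boot-$tag" .
done
```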

Docker-compose

Docker has reduced the options available in the v3 docker-compose.yml format; resource settings such as memory limits moved under the swarm-only deploy key. In order to set memory limits and configure the network stack in a plain Docker Compose setup, a v2 docker-compose.yml is required. I've used the following:

version: '2'
services:
  spring-boot-jdk:
    image: "spring-boot-jdk"
    container_name: spring-boot-jdk
    ports:
      - "8080:8080"
    networks:
      - dockernet
    mem_limit: 1024M
  prometheus:
    image: "prom/prometheus"
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prom-jdks.yml:/etc/prometheus/prometheus.yml
    networks:
      - dockernet
  grafana:
    image: "grafana/grafana"
    container_name: grafana
    ports:
      - "3000:3000"
    networks:
      - dockernet
networks:
  dockernet:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.0.0/24
          gateway: 192.168.0.1

I've used a memory limit to make sure all the JVMs were running under a similar amount of available memory.

For each JVM, I stopped, removed and recreated the spring-boot-jdk container.
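That swap cycle can be captured in a small helper script. A sketch, assuming each JVM's image was built and tagged beforehand (the run-one-jvm.sh name and the retagging scheme are my own invention):

```shell
#!/bin/sh
# Write a helper that retags one JVM's image as 'spring-boot-jdk' (the name
# the compose file expects) and recreates the container with it.
cat > run-one-jvm.sh <<'EOF'
#!/bin/sh
# usage: ./run-one-jvm.sh <image-for-this-jvm>
set -e
docker-compose stop spring-boot-jdk
docker-compose rm -f spring-boot-jdk
docker tag "$1" spring-boot-jdk      # point the service at this JVM's image
docker-compose up -d spring-boot-jdk
EOF
chmod +x run-one-jvm.sh
```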

process-exporter

Why hardcode the network settings in the docker-compose.yml file? I wanted to measure the memory usage of the complete JVM process. When using, for example, Micrometer, you only get the memory used inside the JVM, not the memory the OS process uses. To measure the latter, I've used process-exporter with the following configuration (process-exporter.yml in the proc-exp folder):

process_names:
  # comm is the second field of /proc/<pid>/stat minus parens.
  # It is the base executable name, truncated at 15 chars.  
  # It cannot be modified by the program, unlike exe.
  - comm:
    - java
    cmdline: 
    - app.jar  

This monitors Java processes which have app.jar in their command line. If I didn't also check the command line, other Java processes on my machine (such as my load-test tooling) would also be included, and I didn't want that.

Next I started process-exporter on my host with:

docker run -d --rm -p 9256:9256 --privileged -v /proc:/host/proc -v `pwd`/proc-exp:/config ncabatoff/process-exporter --procfs /host/proc -config.path /config/process-exporter.yml

I wanted to monitor process-exporter with Prometheus running inside a Docker container. To make this possible, my host (the gateway as seen from within the Docker network) had to be available at the same IP every time, so I could hardcode that address in my Prometheus configuration.
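With the gateway fixed at 192.168.0.1 by the compose file, the scrape targets can be hardcoded. A hypothetical sketch of the prom-jdks.yml scrape configuration (the job names and the Micrometer metrics path are assumptions based on a default Spring Boot 2 actuator setup, not taken from the original file):

```yaml
scrape_configs:
  # process-exporter runs on the host, reachable at the Docker network gateway
  - job_name: 'process-exporter'
    static_configs:
      - targets: ['192.168.0.1:9256']
  # the Spring Boot app exposes Micrometer metrics inside the Docker network
  - job_name: 'spring-boot'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['spring-boot-jdk:8080']
```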

Results

Response time

I did HTTP GET requests from SoapUI. The values below are average response times of the service, measured after a steady state was reached.

The response times reported by Micrometer from within the applications were as follows:



OpenJDK and Oracle JDK were fastest while AdoptOpenJDK was slowest.

When looking at what SoapUI reported as response times, we see something different.


This differs from what I measured previously. In that earlier test GraalVM appeared to provide the slowest response times, while during this test that was clearly not the case: GraalVM was one of the faster JVMs, both when looking at measurements from within the JVM and from outside it.

There was also quite a lot of difference between the inside and outside measurements. The response times of OpenJDK are slowest here instead of fastest. This makes me wonder whether the measurements taken from within the JVM are really comparable across JVMs and whether they measure the same thing; perhaps they differ due to implementation differences. AdoptOpenJDK was slowest in response times in both the within-JVM and the external measurements.

Startup time

These are the durations reported by Spring Boot: how long it took for the application to start, and how long the JVM had been running before the application was actually up.


Here again we see that the results are not quite as reproducible as I would like. AdoptOpenJDK OpenJ9 was clearly slowest in both tests for application startup, followed by GraalVM. There's no clear winner though.
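For reference, both numbers come from the standard Spring Boot startup log line. A small sketch of extracting them, for example from docker logs output (the sample line, its class name and its numbers are made up):

```shell
#!/bin/sh
# Parse 'Started ... in X seconds (JVM running for Y)' from a log line.
line='2018-12-08 10:00:00.000  INFO 1 --- [main] DemoApplication : Started DemoApplication in 6.547 seconds (JVM running for 7.294)'
app_start=$(echo "$line" | sed -n 's/.*in \([0-9.]*\) seconds.*/\1/p')
jvm_start=$(echo "$line" | sed -n 's/.*JVM running for \([0-9.]*\)).*/\1/p')
echo "application: ${app_start}s, JVM: ${jvm_start}s"
# prints: application: 6.547s, JVM: 7.294s
```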

Process memory usage

This is a result from process-exporter showing how much memory the Java process used in total, consisting of virtual, resident and swap memory. Swap usage was zero for all JVMs during the test. Note that virtual memory also includes shared libraries (which are also used by other programs). When looking at resident and virtual memory I saw the following (using https://grafana.com/dashboards/249):


The clear winner here, with the least memory usage, is OpenJ9, followed at a distance by Oracle JDK. OpenJDK and GraalVM use the most memory (both virtual and resident).
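For those who want to query these series directly rather than through the dashboard: process-exporter exposes process memory as namedprocess_namegroup_memory_bytes, broken down by a memtype label. A sketch of the queries (metric and label names as documented for process-exporter at the time; verify them against your version):

```promql
# resident memory of the matched java process group
namedprocess_namegroup_memory_bytes{groupname="java", memtype="resident"}
# virtual and swap memory use the same metric with a different memtype
namedprocess_namegroup_memory_bytes{groupname="java", memtype="virtual"}
namedprocess_namegroup_memory_bytes{groupname="java", memtype="swapped"}
```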

JVM memory usage

This is the heap and non-heap memory inside the JVM, measured with Micrometer and exposed to Prometheus. For these Java 8 JVMs, the non-heap area consists of parts such as Metaspace (the replacement of PermGen since Java 8), the code cache and the compressed class space. The heap consists of several memory areas between which the JVM moves objects around.

Heap


I've used the following Grafana dashboard: https://grafana.com/dashboards/4701. When looking at heap memory, OpenJ9 is clearly the winner, followed again at a distance by Oracle JDK. GraalVM uses the most heap memory for the same application.

When looking at the parts the heap consists of, the different JVMs show some remarkable differences. Especially OpenJ9 behaves really differently compared to the other JVMs.

Non heap


While the other JVMs do not reserve much memory for the non-heap area, OpenJ9 does, although it actually uses less of it than it reserves. GraalVM uses the most non-heap memory. When we look in a bit more detail at what happens in the non-heap area, we see the following:


OpenJ9 (the 4th bar in the graphs) clearly behaves differently.

Threads

When looking at threads, GraalVM uses slightly more threads than the other JVMs, and OpenJ9 a lot more.


It is interesting to notice that even though OpenJ9 uses more threads, it does not use more memory.

Conclusions

Difficult to reproduce / large error

Startup times

OpenJ9 and GraalVM are slowest to start. The results here are also not very reproducible, so I should do more tests on this with larger applications.

Response times

Since the response times measured inside and outside of the JVM differed a lot and the results were not solidly reproducible, I won't draw any conclusions here yet.

Reproducible results / small error

Memory usage

Upon request I also looked at OS process memory using process-exporter, and I've split up heap and non-heap memory. All memory measurements pointed in the same direction: the JVM which used the most memory was GraalVM and the JVM which used the least memory (by far) was OpenJ9. If memory usage is a concern, I recommend considering OpenJ9 as an option.

Some notes

Not looked at yet
  • larger applications containing more complex logic
  • non-reactive Spring Boot
  • only Java 8 JVMs were compared, because for GraalVM no newer version was available at the moment of writing. Is Java 11 faster? (I'm going to skip 9 and 10; they are not Oracle LTS versions)
  • Azul Zing should be added, as it is claimed to be fast
  • GraalVM can produce native executables. It would be interesting to include those in the comparison as well.
  • Garbage collection behavior also differs. I have measurements but did not yet have time to look at them in more detail.
  • I should spend time making the setup and measurements available in a suitable way so others can reproduce them.
  • Measurements might differ when running on different operating systems (Windows, macOS, Linux) or different processor architectures.
GraalVM

Of course GraalVM is much more than just a JVM, in that it allows you to run other languages such as JavaScript (not to be confused with Nashorn or Rhino) and R in a seamless manner, and allows you to create native executables which are supposed to be much faster. I haven't tested this yet though.
