Wednesday, April 8, 2020

Spring: Blocking vs non-blocking: R2DBC vs JDBC and WebFlux vs Web MVC

Spring Framework version 5, released in September 2017, introduced Spring WebFlux, a fully reactive web stack. In December 2019 Spring Data R2DBC, which provides reactive relational database access, was released. In this blog post I'll show that at high concurrency, WebFlux and R2DBC perform better: they give better response times and higher throughput. As additional benefits, they use less memory and CPU per processed request, and since JPA cannot be used with R2DBC, leaving it out makes your fat JAR a lot smaller. At high concurrency, using WebFlux and R2DBC is a good idea!


Method

In this blog post I've looked at four implementations (a sketch of the fully reactive variant follows the list):
  • Spring Web MVC + JDBC database driver
  • Spring Web MVC + R2DBC database driver
  • Spring WebFlux + JDBC database driver
  • Spring WebFlux + R2DBC database driver
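To make the comparison concrete, below is a minimal sketch of what the fully reactive variant (WebFlux + R2DBC) can look like. The Person class, the person table and the /people endpoint are illustrative names and not the actual test code (which is linked at the end of this section).

import org.springframework.data.annotation.Id;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

// Hypothetical entity backed by a "person" table.
class Person {
    @Id
    public Long id;
    public String name;
}

// Spring Data R2DBC generates the non-blocking SQL for simple CRUD operations.
interface PersonRepository extends ReactiveCrudRepository<Person, Long> {}

@RestController
class PersonController {
    private final PersonRepository repository;

    PersonController(PersonRepository repository) {
        this.repository = repository;
    }

    // Nothing blocks here: rows are streamed from Postgres and serialized
    // to JSON as they become available.
    @GetMapping("/people")
    Flux<Person> people() {
        return repository.findAll();
    }
}

In the Spring Web MVC + JDBC variant, a similar endpoint returns a plain List<Person> from a blocking CrudRepository, and the thread handling the request waits for the database.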
I've varied the number of requests in progress (concurrency) from 4 to 500 in steps of 50 and assigned 4 cores to the load generator and 4 to the service (my laptop has 12 cores). I've configured all connection pools to a size of 100. Why a fixed number of cores and a fixed pool size? In a previous exploration of JDBC vs R2DBC, changing those variables did not provide much additional insight, so I kept them fixed for this test, which reduced the total test run time considerably.
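For reference, the pool size of 100 translates to settings along these lines. This assumes Spring Boot's property names for HikariCP (JDBC) and r2dbc-pool (R2DBC); the exact keys may differ for the R2DBC starter version that was current at the time.

# JDBC: HikariCP connection pool
spring.datasource.hikari.maximum-pool-size=100
# R2DBC: r2dbc-pool
spring.r2dbc.pool.enabled=true
spring.r2dbc.pool.max-size=100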

I did a GET request on the service. The service fetched 10 records from the database and returned them as JSON. First I 'primed' the service for 2 seconds by putting heavy load on it. Next I ran a 1 minute benchmark. I repeated every scenario 5 times (separated by other tests, so not 5 times in a row) and averaged the results. I only looked at runs which did not cause errors. When I increased concurrency to more than 1000, the additional concurrent requests failed, without exception, for all implementations. The results appeared reproducible.

As the back-end database I used Postgres (12.2). I used wrk to benchmark the implementations (based on several recommendations). I measured:
  • Response time
    As reported by wrk
  • Throughput (number of requests)
    As reported by wrk
  • Process CPU usage
    User and kernel time (based on /proc/PID/stat; see the sketch after this list)
  • Memory usage
    Private and shared process memory (based on /proc/PID/maps)
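To illustrate the CPU measurement: user and kernel time are fields 14 (utime) and 15 (stime) of /proc/PID/stat, expressed in clock ticks. A minimal sketch of reading them on JDK 11 (the ProcCpu class is illustrative and not part of the actual test script):

import java.nio.file.Files;
import java.nio.file.Path;

public class ProcCpu {
    public static void main(String[] args) throws Exception {
        String pid = args[0];
        String stat = Files.readString(Path.of("/proc/" + pid + "/stat"));
        // Field 2 (comm) is wrapped in parentheses and may contain spaces,
        // so split only after the closing parenthesis to keep indexes stable.
        String[] fields = stat.substring(stat.lastIndexOf(')') + 2).split("\\s+");
        long utime = Long.parseLong(fields[11]); // field 14: user time
        long stime = Long.parseLong(fields[12]); // field 15: kernel time
        System.out.println("user + kernel clock ticks: " + (utime + stime));
    }
}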
You can view the test script used here. You can view the implementations used here.
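A single benchmark run boiled down to a wrk invocation along these lines (illustrative values; the endpoint name and the exact thread/connection counts are driven by the test script linked above):

wrk -t4 -c200 -d60s --latency http://localhost:8080/people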

Results

You can view the raw data which I used for the graphs here.

Response time
It is clear that at higher concurrency, the response times of Spring Web MVC + JDBC start to deteriorate. R2DBC clearly gives the best response times at higher concurrency. Spring WebFlux also does better than a similar implementation using Spring Web MVC.

Throughput
Similar to the response times, Spring Web MVC with JDBC starts doing worse at higher concurrency. R2DBC clearly does best. Moving from Spring Web MVC to Spring WebFlux also helps improve throughput, but not as much as going from JDBC to R2DBC. At low concurrency, Spring Web MVC + JDBC does slightly better than Spring WebFlux + JDBC.

CPU

CPU was measured as the CPU time consumed during the entire run: the sum of process user and kernel time.
Spring WebFlux with JDBC used the least CPU. However, as you've seen above, it also has the lowest throughput. When you look at the CPU used per request processed, you get a measure of how efficiently the code/JVM utilized the CPU:
WebFlux and R2DBC use the least CPU per request; R2DBC clearly uses less CPU per request than JDBC. At low concurrency, however, Web MVC + JDBC makes the most efficient use of CPU. CPU usage per request processed is also more stable when any component is non-blocking (WebFlux or R2DBC is used) than with a completely blocking stack (Web MVC + JDBC).

Memory

Memory was measured as process private memory at the end of the run. Memory usage is garbage collection dependent. G1GC was used on JDK 11.0.6. Xms was 0.5 GB (the default: 1/64 of my available 32 GB). Xmx was 8 GB (the default: 1/4 of my available 32 GB).
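These heap settings are simply the JVM defaults on this machine; set explicitly they would look roughly as follows (service.jar stands in for the actual fat JAR):

java -XX:+UseG1GC -Xms512m -Xmx8g -jar service.jar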

Web MVC starts to take more memory at higher concurrency while WebFlux stays stable. At low concurrency, Web MVC + JDBC does best, but at higher concurrency, WebFlux + R2DBC uses the least memory per processed request.

Fat JAR size

The graph below shows that JPA is a big dependency. Since you can't use it with R2DBC, leaving it out shrinks your fat JAR by around 15 MB!


Summary

R2DBC and WebFlux, a good idea at high concurrency!
  • At high concurrency, the benefits of using R2DBC instead of JDBC and WebFlux instead of Web MVC are obvious:
    • Less CPU is required to process a single request.
    • Less memory is required to process a single request.
    • Response times at high concurrency are better.
    • Throughput at high concurrency is better.
    • The fat JAR size is smaller (no JPA with R2DBC).
  • You're not required to have a completely non-blocking stack to reap the benefits of using R2DBC or WebFlux. At high concurrency, however, it is best to use both. WebFlux + JDBC is not a good idea: it is not efficient in memory and CPU usage and also has low throughput compared to the other tested services.
  • At low concurrency (somewhere below 200 concurrent requests), using Web MVC and JDBC might give better results. Test this to determine your own break-even point!
Some challenges when using R2DBC
  • JPA cannot deal with reactive repositories such as those provided by Spring Data R2DBC. This means you will have to do more things manually when using R2DBC (see the sketch after this list).
  • There are other reactive drivers around, such as the Quarkus Reactive Postgres client (which uses Vert.x). It does not use R2DBC and has different performance characteristics (see here).
  • Limited availability
    Not all relational databases have reactive drivers available. For example, Oracle does not (yet?) have an R2DBC implementation. 
  • Application servers still depend on JDBC.
    Do people still use those for non-legacy applications in this Kubernetes age?
  • When Java fibers are introduced (Project Loom, possibly in Java 15), the driver landscape might change again and R2DBC might not become JDBC's successor after all.
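As an example of the first challenge: where JPA maps relations for you, with Spring Data R2DBC you typically write the SQL yourself. A hedged sketch, reusing the hypothetical Person class and person table from the Method section and assuming a team table to join with:

import org.springframework.data.r2dbc.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import reactor.core.publisher.Flux;

// No entity graph or lazy loading as with JPA: anything beyond simple CRUD,
// such as this (hypothetical) join, is hand-written SQL.
interface PersonQueryRepository extends ReactiveCrudRepository<Person, Long> {

    @Query("SELECT p.id, p.name FROM person p JOIN team t ON p.team_id = t.id WHERE t.name = :team")
    Flux<Person> findByTeamName(@Param("team") String team);
}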
