Spring Boot and the Holy GraalVM
With the release of Spring Boot 3.0, we get official support for GraalVM native builds. Does it mean we can finally free ourselves from the overhead of the JVM? How do native builds improve an app’s performance? Where’s the trade‑off, and is it worth making? In this post, we’ll try to answer those questions—with a few Monty Python references along the way.

The Spring Native project is now officially part of Spring Boot 3.0. With it, we can compile Spring Boot projects directly to OS‑native executables, completely omitting the JVM. We can now run the binary on a system without a JRE installed. That’s pretty wild! But you might ask why we would ever want to do that. Well… there are a couple of reasons.
The JVM was created to streamline building software that runs on any device, regardless of the OS: "write once, run anywhere."
That's all great, but maybe you don't need it. Maybe you're building a web app that will only ever run on Linux somewhere in the cloud. In that case, the JVM is just unnecessary overhead slowing the app down.
With GraalVM AOT (ahead‑of‑time) compilation, we can achieve just that. And since it ships with Spring Boot 3.0, there has never been a better time to run your app directly on bare metal… or on the cloud’s virtualized compute service.
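As a rough sketch of what this looks like in practice: with Spring Boot 3's build plugins and a GraalVM distribution installed, the two kinds of builds compared below can be produced like this (Maven shown; Gradle has an equivalent `nativeCompile` task; the artifact names are the Spring Initializr defaults and will differ in your project):

```shell
# Regular JAR (JIT, needs a JVM to run)
./mvnw clean package
java -jar target/demo-0.0.1-SNAPSHOT.jar

# Native executable (AOT, no JVM needed) -- requires GraalVM on the build machine
./mvnw -Pnative native:compile
./target/demo
```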
Clean start
Let’s start by looking at a blank app, straight from the Spring Initializr with no additional dependencies, to get a reference point. I will be testing compile time, the final file size (native vs JAR), startup time, and memory allocated by the process.
Compilation time
Compiling a native image takes much longer than building a JAR. In my tests, the average native image took 1m 37s to compile, while building a JAR took only 4s, making the native build almost 25× slower. That's a lot, but expected: AOT performs its optimizations at compile time, whereas the default HotSpot JIT approach postpones that work until runtime. The AOT compiler also does things the JIT will never attempt: for example, it analyzes which classes and resources are actually reachable and drops the unused ones from the final build. With JIT, everything on the classpath ends up in the final JAR.
File size
The native build is also significantly larger on disk. In my tests, the blank app's native binary took 45.3MB, while the JAR weighed only 14.4MB. The difference comes from the native build being a standalone executable: it does not need a JRE to start, so everything it might need from the JRE has to be packed into the binary. A JAR, on the other hand, relies on an installed JRE and can contain only our app's bytecode.

Ok. So a native image takes longer to compile and weighs more. So far not so great, but let’s move to the good parts!
Startup time
Spring Boot apps are infamous for their long startup time. Classpath scanning is one of the culprits. Since AOT pushes that work to compile time, we see a massive improvement. During my tests, the JAR needed 0.774s to boot, while the native binary needed only 0.017s. That’s a 45× improvement.
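The startup figures above can be read straight from the line Spring Boot itself logs on boot ("Started … in X seconds"). A minimal sketch of extracting that number from a captured log line, assuming the standard log format (the class and method names here are my own):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StartupLogParser {
    // Matches Spring Boot's standard startup line, e.g.
    // "Started DemoApplication in 0.774 seconds (process running for 1.21)"
    private static final Pattern STARTED =
        Pattern.compile("Started \\S+ in ([0-9.]+) seconds");

    public static double startupSeconds(String logLine) {
        Matcher m = STARTED.matcher(logLine);
        if (!m.find()) {
            throw new IllegalArgumentException("No startup line found");
        }
        return Double.parseDouble(m.group(1));
    }

    public static void main(String[] args) {
        System.out.println(startupSeconds(
            "Started DemoApplication in 0.774 seconds (process running for 1.21)"));
    }
}
```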
Memory usage
Memory usage also improves. The blank JAR allocated 125MB of memory, while the native build needed only 27MB. That’s another place where we can see the benefits of skipping the JVM.
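There are several ways to measure allocated memory; one option on Linux is the `VmRSS` field of `/proc/<pid>/status`, which reports the resident set size. A small sketch of reading it (Linux-only, and the class name is hypothetical):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RssReader {
    // Extracts resident set size in kB from the text of /proc/<pid>/status,
    // which contains a line like "VmRSS:      27312 kB"
    public static long rssKb(String procStatus) {
        for (String line : procStatus.split("\n")) {
            if (line.startsWith("VmRSS:")) {
                return Long.parseLong(line.replaceAll("[^0-9]", ""));
            }
        }
        throw new IllegalStateException("VmRSS not found");
    }

    public static void main(String[] args) throws IOException {
        // Reads our own process's RSS; /proc exists only on Linux
        Path status = Path.of("/proc/" + ProcessHandle.current().pid() + "/status");
        if (Files.exists(status)) {
            System.out.println("RSS: " + rssKb(Files.readString(status)) + " kB");
        }
    }
}
```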
Summary of all the numbers in a table:
| | JIT (JAR) | AOT (native) |
| --- | --- | --- |
| Compile time | 4s | 1m 37s |
| File size | 14.4MB | 45.3MB |
| Startup time | 0.774s | 0.017s |
| Memory usage | 125MB | 27MB |
1000 Beans
So far we tested the blank app, but what happens if we fill the project with some code? To test this, I created 1000 empty beans.

The idea of this test is to see how bean discovery time impacts the startup time of the AOT build.
| | JIT (JAR) | AOT (native) |
| --- | --- | --- |
| Compile time | 11s | 1m 32s |
| File size | 15.9MB | 47MB |
| Startup time | 1.943s | 0.043s |
| Memory usage | 314MB | 42MB |
Comparing those numbers, both versions take a hit on startup time: in both cases, the app needs about 2.5× as long to boot as the blank version. But since the native binary starts from a much smaller base value, the overhead hurts far less. The native build is also much less memory-hungry as the code grows: its RAM usage is only about 1.5× the blank-app baseline, while the JAR allocated 2.5× as much.
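The ratios quoted here follow directly from the two tables; as a quick arithmetic sanity check:

```java
public class RatioCheck {
    public static void main(String[] args) {
        // Startup overhead going from 0 to 1000 beans
        System.out.printf("JIT startup: %.2fx%n", 1.943 / 0.774); // ~2.51x
        System.out.printf("AOT startup: %.2fx%n", 0.043 / 0.017); // ~2.53x
        // Memory growth over the blank-app baseline
        System.out.printf("JIT memory:  %.2fx%n", 314.0 / 125.0); // ~2.51x
        System.out.printf("AOT memory:  %.2fx%n", 42.0 / 27.0);   // ~1.56x
    }
}
```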
Sample app
Let’s test something closer to a real‑life scenario. I will reuse the benchmark app from the previous blog post.
It’s a sample app that repackages DTOs and pushes them from left to right (something that happens way more often in code than we’d like to admit). One conclusion from that post was how helpful JIT is at optimizing our code on the fly. Can we count on the same help with the native build? Well… no.
| | JIT | AOT |
| --- | --- | --- |
| No warmup | 606ms | 681ms |
| Warmup | 373ms | 642ms |

As you can tell by the numbers, if we let the JIT warm up, the same task runs in just over 60% of the original time. The native build shows no such improvement: execution time stays roughly the same no matter how long we "warm up" the code.
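The warmup effect can be observed even without Spring: time a hot loop once cold, then again after many repetitions have given the JIT a chance to compile it. This is only a toy sketch to illustrate the idea (timings vary by machine and JVM; for real measurements use a proper harness such as JMH, which handles warmup correctly):

```java
public class WarmupDemo {
    // Prevents the JIT from eliminating the loop as dead code
    static volatile long sink;

    // Times a single run of the task in milliseconds
    public static long timeMs(Runnable task) {
        long t0 = System.nanoTime();
        task.run();
        return (System.nanoTime() - t0) / 1_000_000;
    }

    public static void main(String[] args) {
        Runnable task = () -> {
            long sum = 0;
            for (int i = 0; i < 5_000_000; i++) sum += i % 7;
            sink = sum;
        };
        System.out.println("cold: " + timeMs(task) + " ms"); // first run, not yet fully JIT-compiled
        for (int i = 0; i < 50; i++) task.run();             // give the JIT time to optimize
        System.out.println("warm: " + timeMs(task) + " ms"); // typically faster on HotSpot
    }
}
```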
If you're thinking about using native builds in production, benchmark your code carefully. You might be surprised to find your app running slower if it relied heavily on JIT optimizations, which AOT cannot apply at runtime.
Conclusion
To sum it up: with AOT, we trade longer compile time and larger file size for much faster boot time and lower memory usage. One place where this trade‑off makes perfect sense is the cloud. If you’re running your code in the cloud, give it a try. Keep in mind that to use GraalVM natively with Spring Boot, you need to upgrade to version 3.0 or higher. That can be a challenge in itself, especially since Spring Boot 3.0 uses Java 17 as a baseline. Before the upgrade, check the full laundry list.