Java compilation models: JIT vs AOT
"In this blog post, I dive deeper into how JIT actually works and when native image yields better results"
Over the last decade, I’ve seen Java evolve in parallel with the architectural shift from large monolithic apps to distributed microservices. This transition brought new design patterns — and new runtime demands. Once known for slow startup, Java has made major progress, driven by innovations like GraalVM’s AOT compilation.
In this post, I’ll examine both runtime models — traditional JIT and newer AOT — and compare their pros, cons, and ideal use cases.
𝐉𝐈𝐓. Java’s traditional compilation strategy since the beginning. It dynamically optimizes code at runtime by observing real usage patterns.
𝐇𝐨𝐰 𝐢𝐭 𝐰𝐨𝐫𝐤𝐬 When a method is first invoked, the JVM interprets its bytecode instruction by instruction. Once it becomes “hot” (frequently executed), the JIT compiler transforms it into optimized native machine code and stores it in a memory region called the code cache. Future invocations bypass the interpreter and jump straight into native execution.
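To see this in action, here’s a minimal sketch (class name and loop counts are my own, for illustration): repeated calls make sum() hot, and running the program with the standard -XX:+PrintCompilation flag prints a log line when the JIT compiles it to native code.

```java
// Run with: java -XX:+PrintCompilation HotLoop
// and watch for the log line showing sum() being compiled.
public class HotLoop {
    // A small, frequently-called method: a typical JIT candidate.
    static long sum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        long total = 0;
        // The first invocations are interpreted; once sum() crosses the
        // hotness threshold, the JIT compiles it into the code cache and
        // later calls run as native machine code.
        for (int i = 0; i < 20_000; i++) total += sum(1_000);
        System.out.println(total);
    }
}
```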
✅ Pros • Best peak performance • Fast iteration during development • Full support for runtime features (reflection, classloading, proxies)
🏗️ Best for • Huge monolithic apps • Long-lived services with stable lifecycles
𝐀𝐎𝐓. A newer model, popularized by GraalVM Native Image. It compiles code ahead of time into a native binary for a specific platform (x64, ARM64).
𝐇𝐨𝐰 𝐢𝐭 𝐰𝐨𝐫𝐤𝐬 At build time, all reachable code is ahead-of-time (AOT) compiled into a standalone native executable. The result includes Substrate VM, a lightweight runtime that replaces the traditional JVM. There’s no interpreter or JIT at runtime, which enables blazing-fast startup times and low memory overhead. While this improves cold start and footprint, it comes at the cost of slightly reduced peak performance due to the lack of runtime profiling and optimization.
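As a minimal illustration (the class name is mine; the build assumes a GraalVM distribution with the native-image tool installed), the whole build-time workflow fits in two commands, shown here as comments:

```java
// Build steps (assuming GraalVM with native-image on PATH):
//   javac Hello.java
//   native-image Hello
// The result is a standalone executable that bundles Substrate VM;
// no separate JVM is needed to run it.
public class Hello {
    // Kept as a method so behavior is identical under JIT and AOT.
    static String greeting() {
        return "Hello from a native binary";
    }

    public static void main(String[] args) {
        // As a native image this starts in milliseconds: no class loading,
        // no interpretation, no JIT warm-up at runtime.
        System.out.println(greeting());
    }
}
```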
✅ Pros • Near-instant cold start • Small and predictable memory footprint • Great for minimal containers and scaling scenarios
🏗️ Best for • Tiny microservices • Serverless functions (pay-per-invocation)
No surprise that the strengths of one model often reveal the weaknesses of the other. As always in software engineering, it’s about choosing the right tool for the job: JIT and AOT each serve specialized use cases and should be applied thoughtfully rather than universally. Framework support for AOT also varies: Spring has made significant progress, but it faced a steeper path than Micronaut and Quarkus, which were designed with AOT in mind from the start. GraalVM Native Image supports virtual threads on JDK 21+, so you can pair lightweight concurrency with near-instant startup, a combination traditionally associated with Go. In practice, Spring Boot and Quarkus expose virtual threads in both JVM and native modes (e.g., spring.threads.virtual.enabled=true, Quarkus VT guides).
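To illustrate the virtual-threads point, here’s a small JDK 21+ sketch (class and method names are mine) that fans out a thousand tasks, one virtual thread each; the same code runs unchanged on a regular JVM and in a GraalVM native image:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualThreadsDemo {
    // Runs n trivial tasks, one virtual thread per task, and sums the results.
    static long fanOut(int n) {
        // The executor creates a new virtual thread for every submitted task;
        // try-with-resources waits for all tasks and shuts the executor down.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<Integer>> futures = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                final int task = i;
                futures.add(executor.submit(() -> task * 2));
            }
            long sum = 0;
            for (Future<Integer> f : futures) sum += f.get();
            return sum;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // 1,000 virtual threads are cheap; 1,000 platform threads would not be.
        System.out.println(fanOut(1_000));
    }
}
```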
What’s your experience? Drop a comment 👇
PS: This post was sparked by a great Spring I/O 2024 talk by Alina Yurenko (GraalVM Advocate @Oracle).