Week 20 - Containers, Native Images, and Deployment
Conceptual Core
The artifact you ship is a container image (and a Kubernetes manifest, usually). Java has gained excellent first-class container support - jlink runtime images, Buildpacks, GraalVM native-image, Spring Boot's bootBuildImage. Picking among them is now a design decision, not an afterthought.
Mechanical Detail
- Plain JRE image - fast to build, large (~200 MB+ for base image plus app). Fine for internal apps.
- `jlink` custom runtime - use `jdeps` to find the required modules, then `jlink --add-modules ... --strip-debug --no-man-pages --no-header-files --compress=2` to produce a ~50 MB JRE.
- Buildpacks (`spring-boot:build-image`, the `pack` CLI) - opinionated, layered, reproducible. The CNCF default.
- GraalVM `native-image` (or `quarkus:native`) - single static binary, ~30-50 MB, sub-100ms cold start, lower peak throughput. Reflection and dynamic proxies need `reachability-metadata.json` (often auto-collected via the tracing agent).
- Project Leyden AOT artifacts (early stages in JDK 25): do a training run and write an AOT cache (`-XX:AOTMode=record` / `-XX:AOTMode=create`), then load it on the next start (`-XX:AOTMode=on`). Closes some of the gap to native-image without giving up the JIT.
- Kubernetes specifics: requests/limits aligned with `-XX:MaxRAMPercentage`, liveness vs readiness probes (don't conflate them), graceful shutdown via `SIGTERM` and `server.shutdown=graceful` in Spring Boot.
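The `jdeps`/`jlink` pipeline above can be wired into a multi-stage build. A minimal sketch, assuming the app is a runnable jar at `target/app.jar` (base images, paths, and the module list are illustrative - derive yours with `jdeps` against your actual jar):

```dockerfile
# Stage 1: compute the module set and build a trimmed runtime
FROM eclipse-temurin:21-jdk AS build
COPY target/app.jar /app.jar
# jdeps prints only the modules the application actually reaches
RUN jdeps --ignore-missing-deps --print-module-deps /app.jar > /modules.txt
RUN jlink --add-modules "$(cat /modules.txt)" \
    --strip-debug --no-man-pages --no-header-files --compress=2 \
    --output /customjre

# Stage 2: ship only the trimmed JRE and the jar
FROM debian:bookworm-slim
COPY --from=build /customjre /opt/jre
COPY --from=build /app.jar /app.jar
ENTRYPOINT ["/opt/jre/bin/java", "-jar", "/app.jar"]
```

Note that `--ignore-missing-deps` papers over unresolved optional dependencies; drop it first to see what `jdeps` complains about.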
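The Leyden AOT flow mentioned above is a three-step command sequence. A sketch, assuming a runnable `app.jar` and a recent JDK with AOT cache support (file names are arbitrary):

```shell
# 1. Training run: record which classes/methods the app touches
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -jar app.jar

# 2. Create the AOT cache from the recorded configuration
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
     -XX:AOTCache=app.aot -jar app.jar

# 3. Subsequent starts load the cache for faster startup
java -XX:AOTMode=on -XX:AOTCache=app.aot -jar app.jar
```

The training run should exercise representative code paths (e.g. hit the main endpoints), otherwise the cache misses exactly the classes you care about.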
Lab
Produce four images of your service: plain JRE Dockerfile; jlink-trimmed; Buildpacks; native-image. Tabulate size, cold start, warm p99, RSS. Pick a winner for an explicit deployment profile (e.g., "always-on internal API" vs "scale-to-zero per-request webhook").
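One way to collect the size, cold-start, and RSS columns is a loop over the four tags. A rough sketch, assuming the images are already built and tagged `svc:jre`, `svc:jlink`, `svc:buildpacks`, `svc:native`, and that the service exposes a health endpoint on 8080 (all names are assumptions; warm p99 still needs a load tool such as `hey` or `wrk`):

```shell
for tag in jre jlink buildpacks native; do
  size=$(docker image inspect "svc:$tag" --format '{{.Size}}')
  start=$(date +%s%N)
  docker run -d --rm --name "svc-$tag" -p 8080:8080 "svc:$tag" >/dev/null
  # cold start = time until the health endpoint first answers
  until curl -sf localhost:8080/actuator/health >/dev/null; do sleep 0.05; done
  cold_ms=$(( ($(date +%s%N) - start) / 1000000 ))
  rss=$(docker stats --no-stream --format '{{.MemUsage}}' "svc-$tag")
  printf '%-11s %12s bytes  cold=%sms  rss=%s\n' "$tag" "$size" "$cold_ms" "$rss"
  docker rm -f "svc-$tag" >/dev/null
done
```

Run each measurement several times; first-pull and page-cache effects make single samples misleading.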
Idiomatic Drill
Write a sane Dockerfile from scratch - multi-stage, non-root user, `tini` as PID 1, `-XX:MaxRAMPercentage=75.0`, JFR enabled. Then note that Buildpacks give you all of that out of the box, and work out when hand-rolling is still the right call.
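A sketch of what the drill is aiming at - base image, user id, and the JFR recording settings are all assumptions, not a canonical answer:

```dockerfile
FROM eclipse-temurin:21-jre
# tini as PID 1: reaps zombies and forwards SIGTERM for graceful shutdown
RUN apt-get update && apt-get install -y --no-install-recommends tini \
    && rm -rf /var/lib/apt/lists/*
RUN useradd --system --uid 1001 appuser
COPY --chown=appuser:appuser target/app.jar /app/app.jar
USER appuser
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD ["java", \
     "-XX:MaxRAMPercentage=75.0", \
     "-XX:StartFlightRecording=maxsize=128m,maxage=1h,filename=/tmp/app.jfr", \
     "-jar", "/app/app.jar"]
```

Compare the result against what `pack build` produces: Buildpacks also layer dependencies separately from application classes, which this single-`COPY` version does not.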
Production Hardening Slice
Add to `hardening/`: a Dockerfile, a `pack build` script, a native profile, and a sample Kubernetes manifest with probes, limits, and shutdown configured.
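For the manifest piece, a minimal sketch tying the probes and limits back to the JVM flags above - names, ports, replica counts, and thresholds are illustrative, and the actuator probe paths assume Spring Boot's health groups are enabled:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hardening-svc
spec:
  replicas: 2
  selector:
    matchLabels: { app: hardening-svc }
  template:
    metadata:
      labels: { app: hardening-svc }
    spec:
      terminationGracePeriodSeconds: 30   # longer than Spring's graceful-shutdown timeout
      containers:
        - name: app
          image: hardening-svc:latest
          ports: [{ containerPort: 8080 }]
          resources:
            requests: { cpu: 500m, memory: 512Mi }
            limits: { memory: 512Mi }     # MaxRAMPercentage sizes the heap off this
          readinessProbe:                 # gates traffic; failing it does NOT restart the pod
            httpGet: { path: /actuator/health/readiness, port: 8080 }
            periodSeconds: 5
          livenessProbe:                  # restarts the pod; keep it strictly weaker than readiness
            httpGet: { path: /actuator/health/liveness, port: 8080 }
            initialDelaySeconds: 20
            periodSeconds: 10
```

Keeping the liveness check weaker than readiness is the "don't conflate" point from the Mechanical Detail list: a temporarily overloaded pod should stop receiving traffic, not be killed.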