Week 11 - Heap Sizing, GOMEMLIMIT-Equivalents, and Container Awareness

Conceptual Core

Heap sizing in a container is a recurring failure mode. The JVM has been container-aware by default since JDK 10 (backported to 8u191), but flags and defaults still bite. Off-heap memory (direct byte buffers, Panama segments, metaspace, code cache, GC structures) is not counted against -Xmx and is the usual cause of "my pod got OOMKilled but the JVM said the heap was fine."
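You can see part of that gap from inside the process with the standard java.lang.management API. A minimal sketch (class name and output format are illustrative); note that "non-heap" here covers metaspace and code cache but not direct buffers or native allocations, so even this undercounts true RSS:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Prints heap vs. non-heap usage as the JVM accounts for it.
// "Non-heap" = metaspace + code cache; direct byte buffers and
// native library allocations are invisible here.
public class HeapVsFootprint {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        System.out.printf("heap:     used=%dMB committed=%dMB max=%dMB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20,
                heap.getMax() >> 20); // max prints -1 if undefined
        System.out.printf("non-heap: used=%dMB committed=%dMB%n",
                nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);
    }
}
```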

Mechanical Detail

  • -Xms / -Xmx - initial / max heap. Setting them equal avoids heap resizing, and the GC behavior shifts that come with it, over the life of the process.
  • -XX:MaxRAMPercentage=75.0 (the default of 25% is far too low for a dedicated container) - the modern way to size the heap in containers; it only takes effect when -Xmx is unset. Pair with -XX:InitialRAMPercentage.
  • Metaspace - class metadata, off-heap, defaults to unbounded. -XX:MaxMetaspaceSize=256m is sane. Classloader leaks show up here.
  • Direct memory (ByteBuffer.allocateDirect) - capped by -XX:MaxDirectMemorySize, which defaults to the -Xmx value. Netty leans on this heavily.
  • Code cache - -XX:ReservedCodeCacheSize=256m (default 240m with tiered compilation). Fills up on long-running apps with lots of JIT'd code; when full, the JIT stops compiling new methods and the app slows.
  • GC overhead - G1 carries ~10% memory overhead for its remembered sets; ZGC's multi-mapped heap can make reported RSS look up to ~3× the committed heap (the same physical pages are mapped three times, not allocated three times), plus space for forwarding tables. Acceptable; the trade is sub-ms pauses.
  • Total RSS budget = Xmx + metaspace + direct memory + code cache + thread stacks (-Xss × thread count) + GC structures + native libraries. Budget every term in a container; a sketch that reads most of these terms from inside the process follows this list.
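A minimal sketch of that budget check, using the platform MXBeans (class name is illustrative). Metaspace, the code cache segments, and the heap generations all appear as memory pools; direct memory appears as a buffer pool; thread stacks and native libraries appear nowhere and must be budgeted by hand:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Walks the JVM's own accounting of its memory pools and buffer pools.
// The committed sizes here are the terms of the RSS budget the JVM can
// see; -Xss * threads and native libs still have to be added manually.
public class MemoryBudget {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-35s committed=%dMB%n",
                    pool.getName(), pool.getUsage().getCommitted() >> 20);
        }
        // The "direct" buffer pool tracks ByteBuffer.allocateDirect usage.
        for (BufferPoolMXBean buf :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("buffer pool %-23s used=%dMB%n",
                    buf.getName(), buf.getMemoryUsed() >> 20);
        }
    }
}
```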

Lab

Take your week-10 app. Run it in a 1GB container with default flags and observe the headroom. Then explicitly size every memory pool and run again. Measure RSS over time.
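On Linux you can sample RSS from inside the container with no extra tooling. A minimal sketch (Linux-only; the 10-second interval and class name are arbitrary choices); VmRSS is a close proxy for the usage the kernel compares against the cgroup limit when deciding to OOM-kill:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Samples the process's resident set size from /proc/self/status every
// 10 seconds, printing a timestamped line suitable for plotting.
public class RssSampler {
    public static void main(String[] args) throws IOException, InterruptedException {
        while (true) {
            for (String line : Files.readAllLines(Path.of("/proc/self/status"))) {
                if (line.startsWith("VmRSS:")) {
                    System.out.println(System.currentTimeMillis() + " " + line.trim());
                }
            }
            Thread.sleep(10_000);
        }
    }
}
```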

Idiomatic Drill

Read the Netty memory docs (PooledByteBufAllocator). Understand why a server with heavy I/O has a non-trivial direct-memory footprint.
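To see those numbers rather than just read about them, Netty exposes allocator metrics. A minimal sketch, assuming io.netty:netty-buffer is on the classpath (class name is illustrative):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

// Allocates one direct buffer from Netty's pooled allocator and reads
// back the allocator's own accounting. The pool reserves whole chunks
// up front, so usedDirectMemory() is typically far larger than the
// bytes actually requested - exactly the footprint the budget must cover.
public class NettyDirectFootprint {
    public static void main(String[] args) {
        PooledByteBufAllocator alloc = PooledByteBufAllocator.DEFAULT;
        ByteBuf buf = alloc.directBuffer(64 * 1024);
        System.out.println("used direct memory: "
                + alloc.metric().usedDirectMemory() + " bytes");
        buf.release(); // pooled buffers must be released, not left to GC
    }
}
```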

Production Hardening Slice

Add a "memory budget" checklist to your hardening/ template: every JVM flag with a justification and a value. Wire -XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heap.hprof.
