
Week 19 - gRPC: Streaming, Interceptors, Deadlines, Retries, Outlier Ejection

19.1 Conceptual Core

  • gRPC runs over HTTP/2, uses Protocol Buffers as its default binary wire format, and offers four call shapes: unary, server-streaming, client-streaming, and bidirectional-streaming.
  • The Go implementation (google.golang.org/grpc) is the canonical one. Read its source: google.golang.org/grpc/server.go, clientconn.go, stream.go.
  • Production gRPC concerns:
      • Deadlines: every call must have one. Set on the client; propagated via context.
      • Retries: configured via service config, with backoff. Retry idempotent calls only.
      • Interceptors: cross-cutting middleware (logging, tracing, metrics, auth). Both unary and stream variants.
      • Health checking: the standard grpc.health.v1.Health service.
      • Load balancing: client-side, via resolver + balancer plugins.
      • Connection management: connections are HTTP/2-multiplexed; the default max concurrent streams is 100, so tune it up for high-fanout clients.

19.2 Mechanical Detail

  • Server setup:
    s := grpc.NewServer(
        grpc.ChainUnaryInterceptor(
            recoveryInterceptor(),
            loggingInterceptor(logger),
            otelgrpc.UnaryServerInterceptor(),
            authInterceptor(),
        ),
        grpc.KeepaliveParams(keepalive.ServerParameters{
            MaxConnectionIdle: 5 * time.Minute,
        }),
    )
    
  • Client setup:
    cc, err := grpc.NewClient("dns:///service.local:50051",
        grpc.WithTransportCredentials(creds),
        grpc.WithChainUnaryInterceptor(otelgrpc.UnaryClientInterceptor()),
        grpc.WithDefaultServiceConfig(`{
            "loadBalancingConfig": [{"round_robin":{}}],
            "methodConfig": [{
                "name": [{"service":"foo.Bar"}],
                "retryPolicy": {
                    "maxAttempts": 3,
                    "initialBackoff": "0.1s",
                    "maxBackoff": "1s",
                    "backoffMultiplier": 2,
                    "retryableStatusCodes": ["UNAVAILABLE"]
                },
                "timeout": "2s"
            }]
        }`),
    )
    
  • Streaming patterns: server-streaming for log tail; client-streaming for batch upload; bidi for chat-like interaction. Always pair stream lifecycle with context.Context so cancellation works.
  • Outlier ejection: at the client balancer level, eject endpoints with high error rates. The xds balancer supports it natively; for simpler setups, implement a Picker wrapper.
  • Backpressure on streams: HTTP/2 has flow control. The Go gRPC implementation respects it. If your server is slow to send, the client will block writes, and vice versa. Do not rely on unbounded internal buffers.

19.3 Lab: "A Hardened gRPC Service"

Build a minimal Echo service with:

  • Unary + server-streaming + bidi methods.
  • Server interceptors for: panic recovery, request logging, OTel tracing, auth, rate limiting.
  • Client config with retries (UNAVAILABLE only), a 2 s default deadline, round-robin load balancing.
  • A grpc.health.v1 health server.
  • A tools/grpc_load_test/ directory with ghz-based load tests; capture latency p50/p95/p99 under 10K QPS.

19.4 Idiomatic & golangci-lint Drill

  • protogetter (use generated getters instead of direct field access), goerr113 (no ad-hoc dynamic errors; use sentinel or wrapped errors), nilerr (don't return a nil error after checking err != nil), errchkjson (check the types passed to JSON encoding).
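The protogetter rule exists because generated getters are nil-safe. A hand-written stand-in (GetUserRequest is hypothetical, shaped like protoc-gen-go output):

```go
package main

import "fmt"

// GetUserRequest mimics a protoc-gen-go message: every generated getter
// checks its receiver for nil before touching a field.
type GetUserRequest struct {
	Name string
}

func (x *GetUserRequest) GetName() string {
	if x == nil {
		return ""
	}
	return x.Name
}

func main() {
	var req *GetUserRequest // e.g. an optional sub-message that was never set
	fmt.Printf("%q\n", req.GetName()) // "" — safe on a nil receiver
	// fmt.Println(req.Name)          // would panic: nil pointer dereference
}
```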

19.5 Production Hardening Slice

  • Wire up deadline-propagation tests: a client request with a 500 ms deadline must arrive at the server as a context carrying a similar deadline (within a budget). Failure here is the single most common gRPC production bug.
