# Cross-References: tutoriaal and Sibling Curricula
This curriculum lives in a multi-curriculum repository. Other curricula share substrate. Use this map to navigate when a topic spans the seams.
## Repository layout
```
self_dev/
├── tutoriaal/                  # applied AI engineer track (this curriculum)
├── AI_SYSTEMS_PLAN/            # AI systems / GPU / inference / training-infra track
├── LINUX/                      # kernel + namespaces + cgroups + eBPF
├── CONTAINER_INTERNALS_PLAN/   # OCI runtimes, image internals
├── KUBERNETES_PLAN/            # control plane, controllers, GitOps
├── RUST_TUTORIAL_PLAN/         # Rust mastery
├── GO_LEARNIN_PLAN/            # Go mastery
└── AI_EXPERT_ROADMAP.md        # parent strategic doc for tutoriaal
```
## When a topic spans curricula
### Inference & serving
- tutoriaal sequence 14 (light, applied): deploy a model on vLLM, measure throughput.
- tutoriaal DEEP_DIVES/05 (LLM applications): serving-side concerns from the application layer.
- AI_SYSTEMS_PLAN/05_MONTH_INFERENCE_SYSTEMS.md: inference-engineer-grade depth.
- AI_SYSTEMS_PLAN/DEEP_DIVES/08_INFERENCE_SERVING.md: vLLM internals, paged attention algorithm, scheduler design.
- Which to use: tutoriaal first if you're shipping the feature; AI_SYSTEMS if you're optimizing throughput, building a custom serving stack, or interviewing for an inference-engineer role. A minimal sketch of the sequence-14 exercise follows this list.
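To anchor what the sequence-14 exercise looks like in practice, here is a minimal offline sketch: load a model in vLLM and compute a rough tokens-per-second number. The model id, prompt, and batch size are placeholder assumptions, not values prescribed by either curriculum.

```python
# Hedged sketch: offline vLLM generation with a crude tokens/sec measure.
# Model id, prompt, and batch size are illustrative assumptions.
import time

from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # assumed model id
params = SamplingParams(temperature=0.0, max_tokens=128)
prompts = ["Explain KV caching in two sentences."] * 32  # toy batch

start = time.perf_counter()
outputs = llm.generate(prompts, params)
elapsed = time.perf_counter() - start

generated = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"{generated / elapsed:.1f} generated tokens/sec (prefill + decode)")
```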
### Distributed training
- tutoriaal sequence 16 (light): ZeRO/FSDP at concept level for breadth.
- tutoriaal DEEP_DIVES/10 (fine-tuning): when you fine-tune at scale, this references the systems track.
- AI_SYSTEMS_PLAN/04_MONTH_DISTRIBUTED_TRAINING.md and DEEP_DIVES/06: full algorithmic depth (ring-allreduce proof, ZeRO memory math, pipeline schedules, 3D parallelism).
- Which to use: tutoriaal for "I need to fine-tune a 7B model"; AI_SYSTEMS for "I need to design a 70B+ training job." The sketch after this list shows the concept-level FSDP entry point.
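At the concept level sequence 16 operates on, FSDP is a wrapper that shards parameters, gradients, and optimizer state across ranks. A minimal sketch, assuming a torchrun-launched NCCL process group; the toy model and shapes are illustrative, and real jobs layer wrapping policies, mixed precision, and checkpointing on top.

```python
# Hedged sketch: concept-level FSDP. Launch with torchrun; the toy
# Transformer and tensor shapes are illustrative assumptions.
import os

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group("nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))  # set by torchrun

model = torch.nn.Transformer(d_model=512, num_encoder_layers=6).cuda()
model = FSDP(model)  # each rank now holds only a shard of the parameters

optim = torch.optim.AdamW(model.parameters(), lr=1e-4)
src = torch.randn(10, 8, 512, device="cuda")  # (seq, batch, d_model)
tgt = torch.randn(20, 8, 512, device="cuda")

loss = model(src, tgt).square().mean()  # toy objective
loss.backward()                         # gradients reduce-scattered across ranks
optim.step()
dist.destroy_process_group()
```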
### Transformers & attention
- tutoriaal sequence 08: build-a-transformer track (Karpathy lineage).
- tutoriaal DEEP_DIVES/04 (deep learning fundamentals): backprop, optimizers, normalization for transformers.
- AI_SYSTEMS_PLAN/DEEP_DIVES/07_ATTENTION_TRANSFORMER.md: attention math, FlashAttention derivation, KV-cache calculus.
- Which to use: tutoriaal if you're learning by building; AI_SYSTEMS when you need to implement a custom attention kernel or understand FlashAttention. The reference computation both tracks build on is sketched below.
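For shared vocabulary, this is scaled dot-product attention in its naive form: softmax(QKᵀ/√d)V, with no KV cache and no FlashAttention tiling (those optimizations are exactly what the AI_SYSTEMS deep dive derives).

```python
# Naive scaled dot-product attention: softmax(QK^T / sqrt(d)) V.
import math

import torch

def attention(q, k, v, causal=True):
    # q, k, v: (batch, heads, seq, head_dim)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if causal:
        seq = q.size(-2)
        mask = torch.triu(
            torch.ones(seq, seq, dtype=torch.bool, device=q.device), diagonal=1
        )
        scores = scores.masked_fill(mask, float("-inf"))  # hide future tokens
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(1, 8, 16, 64)
assert attention(q, k, v).shape == (1, 8, 16, 64)
```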
### Quantization
- tutoriaal sequence 14 + DEEP_DIVES/10: AWQ/GPTQ at decision-matrix level (when to apply for inference vs. fine-tuning).
- AI_SYSTEMS_PLAN/DEEP_DIVES/09_QUANTIZATION.md: full algorithm derivations (AWQ identity proof, GPTQ from Optimal Brain Surgeon, SmoothQuant α derivation, FP8 with delayed scaling, Marlin kernel).
- Which to use: tutoriaal if you're picking a method for your shipping app; AI_SYSTEMS if you're implementing or contributing to a quantization library. The baseline both methods improve on is sketched after this list.
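For orientation, the baseline AWQ and GPTQ both beat is plain round-to-nearest quantization. A minimal per-channel int8 sketch; the symmetric-scale choice is an assumption for illustration, not what either method actually ships.

```python
# Hedged sketch: per-channel symmetric round-to-nearest int8 quantization,
# the baseline AWQ/GPTQ improve on. Scale granularity is an assumption.
import torch

def quantize_int8(w: torch.Tensor):
    # w: (out_features, in_features); one scale per output channel
    scale = w.abs().amax(dim=1, keepdim=True) / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale

w = torch.randn(4096, 4096)
q, s = quantize_int8(w)
print(f"mean abs error: {(dequantize(q, s) - w).abs().mean():.5f}")
print("memory: ~4x smaller than fp32, plus one fp32 scale per row")
```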
### Numerical precision / mixed precision
- tutoriaal DEEP_DIVES/04: mixed-precision overview within optimizer + training-loop context.
- AI_SYSTEMS_PLAN/DEEP_DIVES/11_NUMERICS_AND_MIXED_PRECISION.md: IEEE-754 derivations, FP8 algorithm, loss scaling, catastrophic cancellation, transformer stability tricks.
- Which to use: tutoriaal for AMP usage in your training loop (sketched below); AI_SYSTEMS when something NaNs and you need to debug it.
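The tutoriaal-level usage is a few lines of boilerplate around the training step. A minimal sketch with a toy model, assuming a CUDA device and a recent PyTorch (the `torch.amp.GradScaler` spelling):

```python
# Hedged AMP sketch: autocast the forward pass to fp16, scale the loss so
# small gradients survive the fp16 range. Model and data are toy stand-ins.
import torch

model = torch.nn.Linear(512, 512).cuda()
optim = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.amp.GradScaler("cuda")

x = torch.randn(32, 512, device="cuda")
target = torch.randn(32, 512, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = torch.nn.functional.mse_loss(model(x), target)

scaler.scale(loss).backward()  # scaled loss -> grads that don't underflow
scaler.step(optim)             # unscales first; skips the step on inf/NaN
scaler.update()                # adapts the scale factor for the next step
optim.zero_grad()
```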
### PyTorch
- tutoriaal DEEP_DIVES/02_PYTORCH_FLUENCY.md: user-level fluency (write training and inference code without friction).
- AI_SYSTEMS_PLAN/DEEP_DIVES/04_PYTORCH_INTERNALS.md: the internals (dispatcher, autograd engine, torch.compile pipeline, custom-op registration).
- Which to use: tutoriaal first; AI_SYSTEMS when you need to register a custom CUDA kernel as a PyTorch op or debug a torch.compile failure. The user-level entry point is sketched after this list.
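At the user level, `torch.compile` is a one-line wrapper; everything beneath it (Dynamo tracing, Inductor codegen) is the internals material. A minimal sketch with toy shapes:

```python
# Hedged sketch: compile a plain function. The first call traces and
# generates code; later calls with the same shapes reuse the cached kernel.
import torch

def gelu_mlp(x, w1, w2):
    return torch.nn.functional.gelu(x @ w1) @ w2

compiled = torch.compile(gelu_mlp)

x, w1, w2 = torch.randn(64, 512), torch.randn(512, 2048), torch.randn(2048, 512)
out = compiled(x, w1, w2)  # compilation happens on this first call
assert torch.allclose(out, gelu_mlp(x, w1, w2), atol=1e-4)
```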
### GPU programming
- Out of scope for tutoriaal entirely.
- AI_SYSTEMS_PLAN/02_MONTH_GPU_PROGRAMMING.md and DEEP_DIVES/01-03: GPU architecture, CUDA, Triton.
- Which to use: AI_SYSTEMS, whenever you need to write or read CUDA/Triton kernels. A minimal Triton example is sketched below for a sense of the territory.
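To calibrate what that territory looks like, here is a minimal Triton elementwise-add kernel, a sketch in the style of the standard Triton tutorials (the block size of 1024 is an arbitrary choice):

```python
# Hedged sketch: one Triton program instance handles one BLOCK-sized slice.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n                       # guard the ragged final block
    x = tl.load(x_ptr + offs, mask=mask)
    y = tl.load(y_ptr + offs, mask=mask)
    tl.store(out_ptr + offs, x + y, mask=mask)

x = torch.randn(10_000, device="cuda")
y = torch.randn(10_000, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK=1024)
assert torch.allclose(out, x + y)
```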
### Production deployment
- tutoriaal: how the AI service should behave (chapters 05, 09, 12).
- CONTAINER_INTERNALS_PLAN/: how to package it (Dockerfile, OCI, multi-stage builds, supply chain).
- KUBERNETES_PLAN/: how to orchestrate it (KServe, KubeRay, autoscaling, admission policies).
- LINUX/: when something goes wrong at the host level (PSI memory pressure, eBPF tracing).
## Recommended cross-curriculum reading by month
| Month | Primary | Secondary support |
|---|---|---|
| M01-M03 (foundations) | tutoriaal | tutoriaal DEEP_DIVES 01-04 |
| M04-M06 (applied) | tutoriaal | tutoriaal DEEP_DIVES 05-07; CONTAINER for image build; KUBERNETES for deploy |
| M07-M09 (specialty + infra) | tutoriaal track choice | If Track C (infra): AI_SYSTEMS_PLAN/05 (inference); if Track A (evals): tutoriaal DEEP_DIVES 08-09 |
| M10-M12 (capstone) | tutoriaal | All adjacent curricula as needed for capstone deploy |
## When to skip into a sibling curriculum
If during a tutoriaal week you find yourself wanting to:
- Write a custom CUDA/Triton kernel → switch context to AI_SYSTEMS_PLAN/02.
- Train a model >7B with FSDP → AI_SYSTEMS_PLAN/04.
- Optimize an inference server's scheduler → AI_SYSTEMS_PLAN/DEEP_DIVES/08.
- Debug a NaN in mixed-precision training → AI_SYSTEMS_PLAN/DEEP_DIVES/11.
- Ship a hardened production K8s deploy → KUBERNETES_PLAN.
- Trace a kernel-level failure → LINUX.
- Sign and verify a container image → CONTAINER_INTERNALS_PLAN.
These skips are not detours; they are how you produce production-credible artifacts.
## When the curricula disagree
When two curricula reference the same topic and disagree (e.g., a tool's recommended config), trust:
- For algorithms / math: AI_SYSTEMS DEEP_DIVES (deeper derivations).
- For application patterns: tutoriaal DEEP_DIVES.
- For deployment substrate: LINUX / CONTAINER / KUBERNETES.
- For specific tool versions / APIs: neither; verify against the tool's current docs at use time.
## Year-2 stack composition
A reader who has completed tutoriaal year-1 might extend into year 2 with:
- AI_SYSTEMS_PLAN/05 (inference) + DEEP_DIVES 07-10 → for an inference-engineer pivot.
- AI_SYSTEMS_PLAN/04 (training) + DEEP_DIVES 06 → for a training-infrastructure pivot.
- KUBERNETES_PLAN Months 5-6 + tutoriaal DEEP_DIVES 09 → for a platform-engineer pivot specializing in AI.
The cross-references make these pivots low-friction; you're not starting from scratch in any direction.