14 - Anatomy of an AI OSS Project¶
What this session is¶
Read a real AI OSS repo top to bottom so the next one feels familiar.
Case study: huggingface/peft¶
PEFT (Parameter-Efficient Fine-Tuning) implements LoRA, QLoRA, IA3, prefix-tuning, etc. Small enough to read in a sitting; well-maintained.
Typical top level:
README.md
CONTRIBUTING.md
LICENSE
setup.py / pyproject.toml
src/peft/ # library code
tests/
examples/
docs/
.github/workflows/
What to read, in order¶
1. README.md (5 min)¶
What the project is. Quickstart example. Supported methods.
2. CONTRIBUTING.md (5 min)¶
How to set up the dev environment. Code style. Tests. PR rules.
3. setup.py / pyproject.toml (2 min)¶
Dependencies. Optional extras. Python version.
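To make that concrete, here is a hypothetical `pyproject.toml` fragment in the shape you'll typically find (the names, pins, and extras are illustrative, not copied from peft's actual file):

```toml
[project]
name = "peft"
requires-python = ">=3.9"
dependencies = ["torch", "transformers", "accelerate"]

[project.optional-dependencies]
test = ["pytest", "parameterized"]
docs = ["sphinx"]
```

The `[project.optional-dependencies]` table is where "extras" live: `pip install -e ".[test]"` pulls in the test stack without forcing it on every user.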
4. src/peft/ (15 min)¶
The package itself:
src/peft/
├── __init__.py # public API
├── peft_model.py # main PeftModel class
├── config.py # config classes
├── tuners/
│ ├── lora.py # LoRA implementation
│ ├── ia3.py
│ ├── prefix_tuning.py
│ └── ...
└── utils/
Read `__init__.py` first - it shows the public API surface. Then pick `tuners/lora.py` - LoRA is the most-used technique, and it's one you already understand.
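Before opening `lora.py`, it helps to hold the core LoRA idea in your head: the frozen weight `W` is augmented by a low-rank product `B·A`, scaled by `alpha/r`. A toy sketch in plain Python (no torch; matrices as lists of rows, all names mine):

```python
def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=16, r=2):
    """LoRA forward pass: h = W x + (alpha / r) * B (A x).

    W: frozen (d_out x d_in) base weight.
    A: (r x d_in) down-projection, B: (d_out x r) up-projection;
    only A and B are trained.
    """
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# With B zero-initialized (as in LoRA), the adapter starts as a no-op:
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5], [0.5, -0.5]]   # r = 2
B = [[0.0, 0.0], [0.0, 0.0]]    # zero init => delta is zero
x = [2.0, 3.0]
print(lora_forward(W, A, B, x))  # -> [2.0, 3.0]
```

With that mental model, `lora.py` reads as bookkeeping around this one equation: which modules get `A`/`B` pairs, how they're initialized, and how they merge back.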
5. tests/ (10 min)¶
Pick test_lora.py (or similar). See how the team validates that LoRA still works across model architectures.
6. examples/ (10 min)¶
Working notebooks. Reproducible end-to-end runs.
7. .github/workflows/ (5 min)¶
tests.yml - runs pytest matrix. build_docs.yml - builds docs. release.yml - pushes to PyPI.
CI is the spec: whatever it runs, your PR must pass.
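As a sketch, here is a stripped-down `tests.yml` in the usual shape (hypothetical: a real workflow will have more matrix axes, caching, and possibly GPU runners):

```yaml
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -e ".[test]"
      - run: pytest tests/
```

Reading this tells you exactly what "passing CI" means locally: install the `test` extra, run `pytest tests/` on a supported Python version.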
What to look for¶
- Where does data flow? For a training-time library: model → tuner wrapper → optimizer → save. For RAG: query → embed → search → context → LLM → response.
- Where's the public API? Usually `__init__.py` or a `models.py`/`api.py`.
- Where are model architectures? Usually `models/` or per-architecture files.
- Where are tests? `tests/`. Match each test to a code file.
- What's "magic"? Decorators that register models (`@register_model`), config classes that auto-load. Read the registration logic once.
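The RAG data flow above can be sketched end-to-end with stubs. Every function here is a stand-in of my own naming, not a real library call - the point is the shape of the pipeline:

```python
def embed(text):
    """Stub embedder: maps text to a tiny 'vector' (real code calls a model)."""
    return [len(text), sum(map(ord, text)) % 97]

def search(query_vec, index):
    """Stub nearest-neighbor search over (vector, document) pairs."""
    def dist(v, w):
        return sum((a - b) ** 2 for a, b in zip(v, w))
    return min(index, key=lambda pair: dist(pair[0], query_vec))[1]

def build_context(doc):
    return f"Context: {doc}"

def llm(prompt):
    """Stub LLM: echoes the prompt (real code calls a generation API)."""
    return f"Answer based on -> {prompt}"

def rag(query, index):
    # query -> embed -> search -> context -> LLM -> response
    vec = embed(query)
    doc = search(vec, index)
    return llm(build_context(doc))

index = [(embed("LoRA adds low-rank adapters"), "LoRA adds low-rank adapters"),
         (embed("Prefix tuning prepends tokens"), "Prefix tuning prepends tokens")]
print(rag("What does LoRA add?", index))
```

In a real RAG repo, each stub corresponds to a module you can locate: an embeddings client, a vector store, a prompt template, a generation wrapper.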
Common AI-project patterns¶
- Registry pattern. Models, tuners, and integrations are registered by string. A new addition = add to the registry + implement the interface.
- Hub integration. Models are loaded via `from_pretrained("model-id")`. Look for `_load_pretrained_model` or similar.
- Configuration as a dataclass. `@dataclass class FooConfig`. Serializes to JSON for reproducibility.
- Mixed-precision and device handling. `with torch.cuda.amp.autocast():` blocks; `model.to(device)`.
- Pipeline abstraction. A high-level wrapper over tokenizer + model + generation logic.
Once you see these in one project, you see them everywhere.
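The registry pattern in particular is worth internalizing. A minimal sketch (the name `register_model` is illustrative, modeled on the decorators you'll see in real repos):

```python
MODEL_REGISTRY = {}

def register_model(name):
    """Decorator: a file registers its class under a string key at import time."""
    def wrap(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return wrap

@register_model("lora")
class LoraTuner:
    def apply(self, model):
        return f"{model}+lora"

@register_model("ia3")
class IA3Tuner:
    def apply(self, model):
        return f"{model}+ia3"

def get_tuner(name):
    """Lookup by string - this is what config-driven loading does."""
    return MODEL_REGISTRY[name]()

print(get_tuner("lora").apply("base-model"))  # -> base-model+lora
```

This is why "add a new method" PRs in such repos tend to be mechanical: implement the interface, decorate the class, and the string key works everywhere the registry is consulted.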
Reading the test suite¶
Tests document expected behavior. For peft:
- `test_lora.py::test_lora_save_load` - round-trip preservation.
- `test_lora.py::test_lora_target_modules` - which layers get adapters.
- `test_lora.py::test_lora_merge` - merging LoRA back into base weights.
Each test names the contract. To break the test is to break the contract.
Counter-example: pytorch/pytorch¶
Several million lines. C++/CUDA/Python. Build system alone is a project. Don't read top-to-bottom. Instead, find a specific module (torch/optim/, torch/utils/data/) and read just that.
Counter-example: langchain-ai/langchain¶
Monorepo with ~100 packages. Hundreds of integrations. Don't read top-to-bottom. Pick one integration package (e.g., libs/community/langchain_community/llms/anthropic.py) and read just that.
Exercise¶
- Clone `huggingface/peft`.
- Spend 45 minutes reading per the order above.
- After: explain to yourself, out loud:
  - What does this project do?
  - What's the public API?
  - Where would a new technique (e.g., a new LoRA variant) be added?
  - How is it tested?
- Pick one open `good first issue`. Locate the code it concerns.
What you might wonder¶
"I read it. I don't fully understand it." That's fine. Goal is geography, not mastery. You should know roughly where things live. Mastery comes from changes.
"The code uses techniques I haven't learned (mixin classes, metaclasses, etc.)." Note them. Don't get stuck. Modify a small piece first.
"It uses CUDA / accelerate / DeepSpeed. I can't run on my laptop."
You can still read and contribute. Many PRs are CPU-testable. Look for `@require_torch_gpu` decorators on tests - those are GPU-only; the rest you can run.
Done¶
- Read a real AI OSS repo with a plan.
- Know the typical layout.
- Have a target issue.
Next: Your first contribution →