10 - Your portfolio: 3 projects¶
What this session is¶
The specific 3-project portfolio that opens interviews. What each project demonstrates, how to scope it, how long it should take, and what NOT to build.
Why three¶
Two reasons:
- Hiring managers can't read 10 projects. They open your GitHub, look at 2-3 pinned repos, decide. If the top three look strong, you're in. If they look weak, you're out.
- Three projects = three different proofs. One shows breadth. One shows depth. One shows you can ship.
More than three pinned projects dilutes the signal. Fewer than three feels thin.
The portfolio formula¶
The strongest 3-project portfolios I've seen follow this pattern:
- A clear product demo - proves you can ship something a user could use.
- A reproduced or extended paper - proves you can read research and implement it.
- An OSS contribution - proves you can work in someone else's codebase.
Three different muscles. Together: shippable, technical, collaborative.
Project 1: A clear product demo¶
Scope: A working application that uses LLMs (or your specialization's models) to do something specific and useful. Hosted live. Looks decent. Has a README. Has at least basic evaluation.
Examples that have landed jobs:
- A RAG-powered Q&A bot over a specific corpus (a textbook, the Python docs, a podcast archive).
- A code-review assistant for a specific framework.
- An LLM-powered tool for a niche profession (lawyers reviewing contracts, doctors summarizing notes - with appropriate disclaimers).
- A model-comparison playground for a specific task.
- An agent that automates a real workflow you actually use.
What makes it strong:
- Solves a real, specific problem. Not "an AI chatbot."
- Live and working (Vercel, Modal, Hugging Face Space, Railway).
- Has an "eval" section - how do you know it works?
- Has a write-up that's honest about limitations.
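The "eval" section doesn't need a framework. A minimal sketch, assuming a hypothetical answer() function as the entry point to your pipeline: score a small gold set of questions by checking that expected facts appear in the output, and report the number in your README.

```python
# Minimal eval harness for a RAG Q&A bot (sketch).
# `answer` is a hypothetical stand-in for your pipeline's entry point.

def answer(question: str) -> str:
    # Placeholder: replace with your actual RAG pipeline call.
    return "The capital of France is Paris."

# Gold set: question -> substrings a correct answer must contain.
GOLD = {
    "What is the capital of France?": ["Paris"],
}

def run_eval(gold: dict[str, list[str]]) -> float:
    hits = 0
    for question, expected in gold.items():
        text = answer(question).lower()
        if all(e.lower() in text for e in expected):
            hits += 1
    return hits / len(gold)

print(f"accuracy: {run_eval(GOLD):.0%}")
```

Even twenty hand-written questions like this beats "I tried it a few times and it seemed fine," and it gives the write-up a concrete number to be honest about.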
What kills it:
- Yet another generic chatbot.
- Demo broken or behind a login wall recruiters can't access.
- No write-up - just code.
- Uses OpenAI for everything with no thought about cost/latency.
Time investment: 4-6 weeks.
Project 2: A reproduced or extended paper¶
Scope: Pick a paper from your specialization. Re-implement the key method. Compare your results to the paper's claims. Write up the discrepancies honestly.
Examples:
- Reproduce a LoRA fine-tune from a published paper, on a different base model.
- Reproduce an evaluation result (e.g., a paper claiming model X beats model Y at task Z - does it?).
- Re-implement a serving optimization (FlashAttention from scratch, or a small piece of vLLM's KV cache management).
- Compare two preference algorithms (DPO vs ORPO) on the same dataset.
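Whichever example you pick, the write-up boils down to a table of paper-vs-yours numbers with honest deltas. A sketch of that comparison (all numbers here are illustrative placeholders, not results from any real paper):

```python
# Compare reproduced metrics against a paper's reported numbers (sketch).
# All values are illustrative placeholders, not real results.

PAPER = {"accuracy": 0.78, "f1": 0.74}
MINE = {"accuracy": 0.73, "f1": 0.70}

def compare(paper: dict[str, float], mine: dict[str, float]) -> dict[str, float]:
    # Positive delta means the reproduction fell short of the paper's claim.
    return {metric: paper[metric] - mine[metric] for metric in paper}

for metric, delta in compare(PAPER, MINE).items():
    print(f"{metric}: paper={PAPER[metric]:.2f} mine={MINE[metric]:.2f} delta={delta:+.2f}")
```

Put this table at the top of the write-up, then spend the rest explaining the deltas: data differences, compute budget, hyperparameters the paper didn't report.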
What makes it strong:
- Tackles a paper from the last 12-18 months (current).
- Honest about deltas: "I got 73% where they got 78%; here's why I think the gap exists."
- Code is clean and runnable from scratch.
- Write-up is technical and specific.
What kills it:
- A toy paper, or one that's a decade old.
- Copy-paste of someone else's repo, lightly rebranded.
- Claims to "beat the paper" with hand-wavy methodology.
Time investment: 3-5 weeks.
Project 3: An OSS contribution¶
Scope: A merged PR to a well-known AI OSS project. Doesn't need to be huge. The bar is "real PR in a real codebase."
Strongest targets:
- huggingface/transformers
- huggingface/peft or trl
- vllm-project/vllm
- langchain-ai/langchain or run-llama/llama_index
- axolotl-ai-cloud/axolotl
What makes it strong:
- The PR is actually merged, not stale-and-closed.
- It's not just a typo fix (typo fixes are fine warm-ups, but they're table stakes, not portfolio items).
- The maintainer thread shows back-and-forth - you addressed feedback.
- Your linked GitHub profile shows activity over time.
What kills it:
- Issues opened without PRs, presented as "contributions."
- One-character typo fix as the headline.
- Closed/rejected PRs presented as wins.
Time investment: 4-8 weeks for a first non-trivial merged PR. Often longer for big projects.
This is also the project that takes the longest to start - you'll spend weeks just orienting in the codebase. That's normal. See Open source as resume.
What NOT to put in your portfolio¶
These read as red flags or as "junior":
- ❌ MNIST classifier. Done by every tutorial.
- ❌ Generic chatbot wrapping OpenAI. Done by everyone.
- ❌ Stable Diffusion image generator with no specific application.
- ❌ "Coursera capstone projects." Generic; signals you only do guided work.
- ❌ Anything you can't explain in detail in 10 minutes.
- ❌ Forks with no original work.
Polish standard¶
Each pinned repo needs:
- README with: what it does, screenshots/gif if visual, install + run instructions, eval results, limitations, license.
- Working installation from a fresh clone. Test it on a clean machine before pinning.
- CI that's green (basic tests, lint).
- A LICENSE file.
- No committed secrets, no committed model weights, no committed data. Use .gitignore properly.
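A starting-point .gitignore for this kind of repo (entries are illustrative; adjust to your stack):

```
# Secrets and local config
.env
*.key

# Model weights and data
*.safetensors
*.bin
*.ckpt
data/

# Python noise
__pycache__/
*.pyc
.venv/
```

If a weight file or API key has already been committed, removing it in a later commit isn't enough; it stays in history, so rewrite history or rotate the key before pinning the repo.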
The bar isn't "production-quality." The bar is "someone could clone this and run it without help."
How they fit on a resume / LinkedIn¶
On your CV:
Projects
- {name}: {one sentence}. [link]
- {name}: {one sentence}. [link]
- {name}: {one sentence}. [link]
On LinkedIn featured items, same three. On the GitHub profile README, same three.
Hiring managers see consistency across surfaces and assume there's a coherent person behind it.
What you might wonder¶
"Can I have more than three?" Sure, in your repo list. Pin three. The pinned three are the signal.
"What if my project ideas overlap with my employer's IP?" Don't risk it. Pick a different domain. Many strong portfolios are in unrelated niches.
"What if I'm not great at frontend?" Use Gradio or Streamlit. They look adequate. Nobody expects a designer's UI for an AI project. Make it functional and clean.
"What if my paper-reproduction project doesn't reproduce?" Write that up honestly. "I couldn't reproduce X; here's what I think happened" is a strong portfolio piece. Many published results don't reproduce; honest reproduction work is valuable.
Done¶
- Have the 3-project formula.
- Have target candidates for each.
- Know what to avoid.
Next: Evaluating job postings honestly →