11 - Evaluating job postings honestly
What this session is
How to read AI job postings without getting demoralized or scammed. What requirements actually mean. What "preferred" really translates to. Red flags vs green flags.
The lying isn't intentional
AI job postings are bad signals because:
- Recruiters often write them without understanding the role.
- The "requirements" are aspirational, not literal.
- Companies copy-paste each other's requirements.
- "5+ years of LLM experience" is impossible (the field is 3 years old in its current form) - but it's written anyway.
If you take them literally, you'll never apply. The trick is reading between the lines.
The general decoder
| The post says | Likely means |
|---|---|
| "5+ years experience with LLMs" | "Has shipped something with an LLM." |
| "PhD preferred" | "Bonus, not required, unless it's a research role." |
| "Expert in PyTorch" | "Can debug a training loop." |
| "Strong mathematical background" | "Won't be intimidated by a derivative." |
| "Production experience" | "Has put something behind an HTTP endpoint." |
| "Familiarity with distributed systems" | "Knows what Kubernetes is." |
| "Publications in top venues" | (Only literal for research roles.) |
| "Self-starter" | "Manager won't define your tasks." |
| "Fast-paced environment" | "Understaffed." |
| "Wear many hats" | "Understaffed and not hiring more." |
| "Stock options" | "Pay attention to the equity terms before negotiating." |
If you hit ~60-70% of the listed requirements, apply. Hitting 100% means you're overqualified or the post is too vague.
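The 60-70% rule can be sketched as a tiny decision function. The function name, threshold constant, and return strings are illustrative assumptions, not part of any real screening tool:

```python
# A minimal sketch of the ~60-70% rule, assuming you can count how many
# listed requirements you plausibly meet. All names here are hypothetical.

def should_apply(requirements_met: int, requirements_total: int) -> str:
    """Apply in the roughly 60-100% match band; treat a 100% match as a warning."""
    if requirements_total == 0:
        return "posting too vague"
    ratio = requirements_met / requirements_total
    if ratio >= 1.0:
        return "apply, but you may be overqualified"
    if ratio >= 0.6:
        return "apply"
    return "skip"

print(should_apply(5, 8))  # 62.5% of requirements met -> "apply"
```

The exact cutoff is a judgment call; the point is that anything above roughly three-fifths of a posting's wish list is worth an application.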
Red flags
These mean: pass.
- No salary band listed in jurisdictions that require it. Many US states and the EU increasingly mandate disclosure. Refusal hints at games during negotiation.
- Vague responsibilities ("you'll work on AI things"). Often means the role isn't defined; you'll be whatever the company needs that week - researcher one month, prompt engineer the next.
- "Looking for someone passionate about AI." Code for "we'll underpay you because the work is its own reward."
- No mention of what model / stack / problem. Generic.
- Buzzword salad ("GenAI, AGI, agents, multimodal, RAG, fine-tuning, MLOps"). Means the company doesn't know what it wants.
- "Must be available 24/7" / "thrives under pressure" / "willing to wear many hats." Burnout shop.
- Asks for unpaid take-homes longer than 4 hours, or live-coding rounds with no base salary stated. Disrespectful of your time.
- No engineering blog, no GitHub presence, no public talks. Means you can't validate from the outside.
- Company name you've never heard of with an "AI" suffix added in 2024. Many of these are thin wrappers around the OpenAI API.
- VC-funded but no shipped product after 12+ months. Possible cash burn / pivot ahead.
Green flags
These mean: apply, even if you don't match every line.
- Specific stack mentioned (e.g., "PyTorch, vLLM, AWS, K8s, our LLM evals are in Promptfoo"). Means the team knows their setup.
- Specific problem mentioned ("building RAG over legal contracts" / "optimizing serving latency for our 70B model").
- Engineering blog with technical depth (not just marketing).
- Public GitHub presence with active repos.
- Clear interview process described (sometimes on the careers page).
- Salary band published.
- Mention of evaluation discipline. "We measure our model outputs against [specific benchmarks]" is rare and excellent.
- Hiring manager visible on LinkedIn. You can research them.
Company types and what they offer
Big tech (Google, Meta, Microsoft, Amazon, Apple, Nvidia)
- Pay: highest. $250K-600K+ all-in.
- Stability: moderate. AI teams have been reorged frequently.
- Visibility: high. Resume value.
- Bureaucracy: high. Real-world impact often slow.
- AI work: real, varied. Includes both frontier work and product integrations.
AI labs (Anthropic, OpenAI, DeepMind, xAI, Mistral, Cohere)
- Pay: very high. $300K-1M+ for senior; lower for new grads but still strong.
- Stability: changing fast.
- Visibility: highest in the field.
- Selectivity: brutal. Top 1-2% of applicants.
- AI work: the frontier. Research-engineering blend.
AI startups (Series A-C)
- Pay: moderate to high. $150K-300K + meaningful equity.
- Stability: low to moderate. Many fail.
- Visibility: depends.
- Bureaucracy: low.
- AI work: often more applied than frontier. Sometimes a thin wrapper around APIs.
Non-AI companies adopting AI (most of the market)
- Pay: moderate. $130K-220K.
- Stability: higher.
- Visibility: lower.
- Bureaucracy: depends.
- AI work: mostly applied. Building product features. The bulk of hiring.
"AI-first" early stage startups (seed / pre-seed)¶
- Pay: lower base, more equity.
- Stability: lowest.
- Visibility: varies.
- AI work: intense.
- Risk: highest. Many won't exist in 18 months.
How to research a company before applying
10 minutes per company:
- Engineering blog? Read 2 recent posts.
- GitHub? Active? Open source anything useful?
- Glassdoor + Blind? Salary signals, culture signals. Take with salt.
- Funding stage? If startup, runway?
- Recent layoffs? levels.fyi, Layoffs.fyi, news.
- Hiring manager on LinkedIn? Their background tells you what they value.
- Product actually exists and works? Try the free tier if there is one.
If 4+ of these are positive, apply.
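The checklist and the 4-of-7 cutoff can be sketched as a simple scorer. The check names are shorthand for the bullets above; the example company data is entirely hypothetical:

```python
# A sketch of the 10-minute research checklist. The "4+ positives -> apply"
# cutoff is the rule from the text; everything else is illustrative.

CHECKS = [
    "engineering_blog",      # readable, technical recent posts
    "active_github",         # repos with recent commits
    "culture_signals_ok",    # Glassdoor / Blind, taken with salt
    "funding_or_runway_ok",  # stage and runway, if a startup
    "no_recent_layoffs",     # levels.fyi, Layoffs.fyi, news
    "hiring_manager_found",  # visible background on LinkedIn
    "product_works",         # tried the free tier, if any
]

def worth_applying(results: dict[str, bool]) -> bool:
    """Apply if 4 or more checklist items come back positive."""
    return sum(results.get(check, False) for check in CHECKS) >= 4

# Hypothetical company: 4 of 7 checks came back positive -> apply.
example = {
    "engineering_blog": True,
    "active_github": True,
    "funding_or_runway_ok": True,
    "product_works": True,
}
print(worth_applying(example))  # True
```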
The application math
Honest numbers from 2025-2026 entry-level applicants:
- 80-200 applications per offer.
- 10-20% reply rate (mostly auto-rejection or "we'll keep your resume").
- 5-10% first-round interview rate.
- 1-3% final offer rate.
This is brutal, but normal. If you're getting 0 first-round interviews after 50 applications, the resume + portfolio need work. If you're getting first rounds but no offers, the interview prep is the gap (see page 13).
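As a sanity check, the funnel arithmetic behind these rates looks like this. `applications_needed` is an illustrative helper, not a real tool, and the rates are the rough figures quoted above:

```python
import math

# Funnel arithmetic for the quoted 1-3% final offer rate. Expected
# applications per offer is just 1 / offer_rate, rounded up.

def applications_needed(target_offers: int = 1, offer_rate: float = 0.01) -> int:
    """Expected application count for a target number of offers."""
    return math.ceil(target_offers / offer_rate)

print(applications_needed(1, 0.01))  # 100 - within the quoted 80-200 per offer
print(applications_needed(1, 0.03))  # 34 - note the two quoted ranges diverge
                                     # at the optimistic end
```

This is an expected-value estimate, not a guarantee; the useful takeaway is that double-digit application counts with zero offers are normal, not a verdict on you.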
What you might wonder
"Should I lie about years of experience?" No. They check. Frame your relevant work honestly - projects, OSS, side work - and let the interview demonstrate ability.
"Should I apply to roles I'm slightly under-qualified for?" Yes. The posted requirements are ceilings, not floors.
"What about visa / location restrictions?" Some companies sponsor; many don't. Filter early. Don't waste cycles applying to roles you can't accept.
"Should I take a contract role to get experience?" Yes, often. Contracts to FTE is a common path in AI. Less competition, faster decisions, real production exposure.
Done
- Can decode posting language.
- Have a red-flag / green-flag filter.
- Have realistic application-math expectations.