What you will do
- Design agentic systems & ship AI to production: Build resilient, observable services while optimizing cost and latency budgets. Build tool-using LLM "agents" (task planning, function/tool calling, multi-step workflows, guardrails) for tasks like grant discovery, application drafting, document parsing, and more.
- Own RAG end-to-end: Ingest and normalize content, choose chunking/embedding strategies, and implement hybrid retrieval, re-ranking, citations, and grounding. Continuously improve recall/precision.
- Manage embeddings at scale: Select, evaluate, and migrate embedding models; maintain vector stores (e.g., pgvector, Qdrant, Pinecone); monitor drift and manage rebuild strategies.
- Collaborate cross-functionally while raising engineering standards: Work side by side with Product and Design on scoping, UX, and measurement; run experiments (A/B tests, canaries), interpret results, and iterate. Write clear, maintainable code, add tests and docs, and contribute to reliability practices (alerts, dashboards, incident response).
What we're looking for
- Software engineering background: 5+ years of professional software engineering experience (as an IC), including 2+ years working with modern LLMs.
- Proven production impact: You've taken LLM/RAG systems from prototype to production, owned reliability/observability, and iterated post-launch based on evals and user feedback.
- LLM agentic systems: Experience building tool/function-calling workflows, planning/execution loops, and safe tool integrations (e.g., with LangChain/LangGraph, LlamaIndex, Semantic Kernel, or custom orchestration).
- RAG expertise: Strong grasp of document ingestion, chunking/windowing, embeddings, hybrid search (keyword + vector), re-ranking, and grounded citations. Experience with re-rankers/cross-encoders, hybrid retrieval tuning, or search/recommendation systems.
- Embeddings & vector stores: Hands-on with embedding model selection/versioning and vector DBs (e.g., pgvector, Qdrant, Pinecone, Weaviate, Milvus).
- Evaluation mindset: Comfort designing eval suites (RAG/QA, extraction, summarization) using automated and human-in-the-loop methods; familiarity with frameworks like Ragas, DeepEval, OpenAI Evals, or equivalent.
- Infrastructure & languages: Proficiency in Python (FastAPI, Celery); experience with GCP/AWS, Docker, CI/CD, and observability (logs/metrics/traces).
- Data chops: Comfortable with SQL, schema design, and building/maintaining data pipelines that power retrieval and evaluation.
- Collaborative approach: You thrive in a cross-functional environment and can translate research ideas into shippable, user-friendly features.
- Results-driven: Bias for action and ownership, with an eye for speed, quality, and simplicity.
Nice to have
- Startup experience: Comfort operating in fast, scrappy environments.
- Familiarity with responsible AI, red-teaming, and domain-specific safety policies.
- Fine-tuning: Practical experience with SFT/LoRA or instruction-tuning (and good intuition for when fine-tuning vs. prompting vs. model choice is the right lever).
Compensation & Benefits
- Salary ranges are based on market data, relative to our size, industry, and stage of growth. Salary is one part of total compensation, which also includes equity, perks, and competitive benefits.
- For US-based candidates, our target salary band is $175,000 - $220,000/year + equity. Salary decisions will be based on multiple factors including geographic location, qualifications for the role, skillset, proficiency, and experience level.
- 100% covered health, dental, and vision insurance for employees, 50% for dependents.
- Generous PTO policy, including parental leave.
- 401(k).
- Company laptop + stipend to set up your home workstation.
- Company retreats for in-person time with your colleagues.
- Work with awesome nonprofits around the US. We partner with incredible organizations doing meaningful work, and you get to help power their success.
