About the Role
What you’ll do
- Build content discovery pipelines: Automate discovery and acquisition of grant-related content from the web—foundation websites, RFPs, program announcements—turning the open web into structured, actionable data.
- Build LLM extraction pipelines: Implement production pipelines to transform unstructured text into canonical business objects—including document ingestion (PDFs, HTML, Word), OCR, table extraction, and layout-aware parsing. Partner with product engineers to evolve schemas as domain needs change.
- Own semantic chunking and embeddings: Design chunking strategies optimized for retrieval; select and manage embedding models; maintain vector indices that power downstream search and RAG features.
- Optimize for cost and latency: Profile token usage, implement caching and batching strategies, choose appropriate models for different tasks, and manage the cost/quality tradeoff at scale.
- Maintain data quality and serve downstream consumers: Implement validation, anomaly detection, and alerting for extraction drift. Expose clean data via APIs, materialized views, or event streams that product teams can rely on without needing to understand the extraction complexity. Integrate and normalize data from external providers—resolving entities, mapping to internal schemas, and ensuring "Ford Foundation" and "The Ford Foundation" resolve to the same canonical record.
What we're looking for
- Software engineering background: 5+ years of professional software engineering experience, including 2+ years working with modern LLMs (as an IC). Startup experience and comfort operating in fast, scrappy environments are a plus.
- Proven production impact: You’ve taken LLM/RAG systems from prototype to production, owned reliability/observability, and iterated post‑launch based on evals and user feedback.
- LLM agentic systems: Experience building tool/function‑calling workflows, planning/execution loops, and safe tool integrations (e.g., with LangChain/LangGraph, LlamaIndex, Semantic Kernel, or custom orchestration).
- RAG expertise: Strong grasp of document ingestion, chunking/windowing, embeddings, hybrid search (keyword + vector), re‑ranking, and grounded citations. Experience with re‑rankers/cross‑encoders, hybrid retrieval tuning, or search/recommendation systems.
- Embeddings & vector stores: Hands‑on with embedding model selection/versioning and vector DBs (e.g., pgvector, FAISS, Pinecone, Weaviate, Milvus, Qdrant). Experience with document processing at scale (PDF parsing/OCR), structured extraction with JSON schemas, and schema‑guided generation.
- Evaluation mindset: Comfort designing eval suites (RAG/QA, extraction, summarization), using automated and human‑in‑the‑loop methods; familiarity with frameworks like Ragas/DeepEval/OpenAI Evals or equivalent.
- Infrastructure & languages: Proficiency in Python (FastAPI, Celery) and TypeScript/Node; familiarity with Ruby on Rails (our core platform) or willingness to learn. Experience with AWS/GCP, Docker, CI/CD, and observability (logs/metrics/traces).
- Data chops: Comfortable with SQL, schema design, and building/maintaining data pipelines that power retrieval and evaluation.
- Collaborative approach: You thrive in a cross‑functional environment and can translate researchy ideas into shippable, user‑friendly features.
- Results‑driven: Bias for action and ownership with an eye for speed, quality, and simplicity.
Nice to have:
- Fine‑tuning: Practical experience with SFT/LoRA or instruction‑tuning, and good intuition for when fine‑tuning vs. prompting vs. model choice is the right lever.
- Exposure to open‑source LLMs (e.g., Llama) and providers (e.g., OpenAI, Anthropic, Google, Mistral).
- Familiarity with responsible AI, red‑teaming, and domain‑specific safety policies.
Compensation & Benefits
- For US-based candidates, our target salary band is $175,000 - $220,000 USD + equity. Salary decisions consider experience, location, and technical depth.
- 100% covered health, dental, and vision insurance for employees (50% for dependents)
- Generous PTO, including parental leave
- 401(k)
- Company laptop and home-office stipend
- Bi-Annual Company Retreats for in-person collaboration
