Position Type: Full-Time, Remote
Working Hours: U.S. Business Hours (CST)
About the Role
Our client builds advanced data intelligence systems that map complex ownership, legal, and property relationships using graph technology. Their platform relies heavily on Neo4j to structure, connect, and analyze highly relational data with precision, speed, and traceability.
We are seeking a Graph Database Engineer or Full-Stack Engineer with significant Neo4j and Cypher experience. The ideal candidate has strong backend skills and is capable of owning the graph data model, optimizing complex queries, and building reliable ingestion and integration pipelines. This role is mission-critical in defining how data is modeled, computed, and delivered across people, properties, documents, transactions, and events.
Key Responsibilities
- Graph Data Modeling & Architecture: Design, refine, and maintain Neo4j graph schemas. Ensure data relationships are accurate, normalized, and high-performing. Implement schema updates aligned with product and technical priorities.
- Ownership, Lineage & Relationship Logic: Develop computation logic for ownership changes, temporal updates, and event transitions. Maintain data lineage, transfers, and history through optimized structures.
- Cypher Query Engineering: Build, optimize, and troubleshoot Cypher queries for traversal, lineage, analytics, and validation. Use PROFILE/EXPLAIN to diagnose performance issues and refactor queries. Create migration scripts and support large-scale graph operations.
- Backend & Data Integration (Python or Full Stack): Build backend services and ingestion pipelines (Python preferred; other backend languages acceptable if skilled in Neo4j integration). Integrate Neo4j with APIs, SQL/NoSQL sources, and analytical systems. Collaborate with engineering teams to onboard and structure new data sources.
- Governance, QA & Data Reliability: Implement validation rules, constraints, QA checks, and auditability. Ensure schema compliance, reproducibility, and clean version control across graph changes.
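To give candidates a concrete sense of the ingestion work described above, here is a minimal sketch of a batched, parameterized write using the official Neo4j Python driver. The labels, property names, and the `OWNS` relationship are illustrative assumptions, not the client's actual schema; the pattern shown (`UNWIND` a parameter list into a single `MERGE` statement per batch) is the standard way to avoid per-row round trips.

```python
# Hypothetical sketch of a batched ingestion step.
# The schema (Person, Property, OWNS) is illustrative only.

MERGE_OWNERS = """
UNWIND $rows AS row
MERGE (p:Person {id: row.person_id})
MERGE (pr:Property {id: row.property_id})
MERGE (p)-[o:OWNS]->(pr)
SET o.since = row.since
"""

def batched(rows, size=1000):
    """Yield successive fixed-size batches of input rows."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def ingest(session, rows, size=1000):
    """Write ownership rows in batches.

    `session` is a neo4j driver Session; one parameterized query
    runs per batch rather than one per row.
    """
    for batch in batched(rows, size):
        session.run(MERGE_OWNERS, rows=batch)
```

Batching keeps transaction sizes bounded and is usually the first lever for ingestion throughput before deeper query tuning.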
Required Skills & Experience
- Core Expertise: 3+ years of hands-on experience with graph databases (Neo4j strongly preferred). Deep experience modeling and querying interconnected datasets.
- Cypher & Querying: Advanced Cypher skills (path queries, pattern comprehension, aggregations, profiling).
- Backend Engineering: Strong backend development experience (Python preferred; Node.js/TypeScript/Java also acceptable). Experience building scalable integrations, ingestion pipelines, or backend services.
- Data Modeling: Experience with graph normalization, temporal modeling, or event-sourced architectures.
- Nice to Have: Experience with Dockerized Neo4j, APOC, or automated data pipelines. Full-stack experience (React, TypeScript, or similar).
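The temporal and event-sourced modeling mentioned above can be illustrated with a small, database-free sketch: resolving who owned a property at a given point in time from a log of transfer events. The types and field names here are hypothetical, chosen only to show the point-in-time lookup idea.

```python
# Hypothetical event-sourced ownership sketch; names are illustrative.
from dataclasses import dataclass

@dataclass
class Transfer:
    property_id: str
    new_owner: str
    at: int  # e.g. a Unix timestamp or ordinal date

def owner_as_of(events, property_id, t):
    """Return the owner of `property_id` at time `t`.

    Considers only transfers at or before `t`; returns None if no
    transfer has occurred yet. Events need not be pre-sorted.
    """
    relevant = [e for e in events
                if e.property_id == property_id and e.at <= t]
    if not relevant:
        return None
    return max(relevant, key=lambda e: e.at).new_owner
```

The same "latest event at or before time t" traversal maps naturally onto a Cypher pattern over dated relationships, which is the kind of lineage logic this role owns.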
What You’ll Do
- Serve as the primary architect for graph structures and data logic
- Build scalable backend components driving graph computation and ingestion
- Define and optimize complex real-world relationships within the knowledge graph
- Contribute to the data foundation of an emerging AI-driven intelligence platform
Key Performance Indicators (KPIs)
- Query Optimization: 20–30% reduction in Cypher query execution time. Efficient profiling and refactoring of complex traversals.
- Data Model Quality: Zero critical relationship or integrity issues. On-time implementation of schema updates.
- Pipeline & Backend Reliability: 99%+ ingestion pipeline success rate. Minimal downtime and smooth onboarding of new data sources.
- Delivery & Velocity: Consistent on-time delivery during sprints. Low defect rate and strong ownership.
- Data Governance: Validation errors under 1%. Full compliance with constraints, QA rules, and auditability.
Interview Process
- Initial Phone Screen
- Technical Interview with Pavago Recruiter
- Practical Task (schema design, backend logic, or query optimization)
- Final Client Interview
- Offer & Background Verification
