Join Granica's core engineering team to design and scale the systems that power data workflows, automation, and analytics. Build backend APIs and scalable data pipelines using Python, PySpark, and modern data lakehouse/warehouse technologies. Collaborate across teams and directly with customers to solve complex data challenges and design seamless integrations.
Requirements
- 5+ years of experience in software engineering, data engineering, or infrastructure roles
- Strong Python skills (backend API experience a plus)
- Proven ability to build scalable data pipelines from scratch
- Hands-on experience with Apache Iceberg or Delta Lake, plus Snowflake or Databricks
- Workflow orchestration expertise (Airflow, Luigi, etc.)
- Experience with big data frameworks (Spark, Hadoop)
- Familiarity with monitoring and analytics tools (Prometheus, Grafana, ELK, Datadog)
- Skilled in designing scalable, reliable, cost-efficient systems
- Experience with large-scale distributed data architectures
- Ability to thrive in fast-paced startup environments
- Excellent problem-solving, communication, and customer-facing skills
Benefits
- Competitive salary
- Meaningful equity
- Substantial bonus for top performers
- Flexible time off
- Comprehensive health coverage for you and your family
- Support for research, publication, and deep technical exploration
