We’re looking for a hands-on Databricks Engineer to help design, build, and scale a modern data platform running on Apache Spark and Delta Lake. This role sits at the intersection of data engineering, platform architecture, and performance optimization. You’ll work closely with data scientists, analysts, and backend teams to ensure reliable, high-performance data pipelines and well-governed datasets.
Responsibilities
- Design and implement end-to-end data pipelines using Databricks (Jobs, Workflows, Delta Live Tables)
- Build and maintain scalable ETL/ELT processes leveraging Apache Spark (PySpark / Scala)
- Develop data models on Delta Lake, including schema design, partitioning strategies, Z-ordering, and other optimization techniques
- Manage and optimize Databricks clusters (autoscaling, spot instances, instance pools, cluster policies)
- Implement CI/CD pipelines for Databricks deployments (e.g., using Databricks Repos, Terraform, Azure DevOps / GitHub Actions)
- Work with structured and semi-structured data (JSON, Parquet, Avro) at scale
- Ensure data quality and reliability through validation frameworks, unit/integration testing, and monitoring
- Implement data governance practices (Unity Catalog, access controls, lineage tracking, auditing)
- Troubleshoot job failures and performance issues (data skew, shuffle bottlenecks, memory pressure) and optimize Spark workloads
- Integrate Databricks with cloud-native services (AWS S3, Azure Data Lake Storage, GCP BigQuery)
- Collaborate with data consumers to define SLAs, data contracts, and service interfaces
Requirements
- Strong experience with Databricks (production workloads, not just notebooks)
- Deep understanding of Apache Spark internals (execution plans, the Catalyst optimizer, the Tungsten engine)
- Proficiency in PySpark (preferred) or Scala
- Solid knowledge of Delta Lake (ACID transactions, time travel, compaction, OPTIMIZE, VACUUM)
- Experience with distributed data processing and large-scale datasets (TB+ scale)
- Familiarity with orchestration tools (Databricks Workflows, Airflow, or similar)
- Experience with version control and CI/CD pipelines
- Knowledge of cloud platforms (AWS / Azure / GCP), including IAM and storage services
- Strong SQL skills and understanding of data warehousing concepts
- Experience with data modeling techniques (star schema, medallion architecture)
Nice to Have
- Experience with streaming pipelines (Structured Streaming, Auto Loader)
- Knowledge of ML workflows on Databricks (MLflow, feature stores)
- Infrastructure-as-Code experience (Terraform, ARM, CloudFormation)
- Hands-on experience administering Unity Catalog and broader data governance frameworks
- Experience with cost optimization strategies in Databricks environments
- Familiarity with dbt or similar transformation tools
