We are seeking a Senior Data Engineer with strong expertise in PySpark and Databricks to design, build, and optimize scalable data pipelines that support complex analytics and modeling use cases.
Requirements
- 5–7 years of professional experience in data engineering
- Strong hands-on proficiency with PySpark (intermediate to advanced level)
- Solid experience working with Databricks, including Auto Loader, Python-based workflows, and platform best practices
- Proven experience optimizing data pipelines for performance and cost efficiency
- Strong understanding of ETL processes and large-scale data transformations
- Excellent problem-solving skills with the ability to diagnose and resolve complex data issues
- Strong communication skills and the ability to collaborate effectively with both technical and non-technical stakeholders
Benefits
- 100% Remote Work
- Highly Competitive Pay in USD
- Paid Time Off
- Work with Autonomy
