This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data Engineer (5–7 years; AWS, Databricks, SQL) in India.
We are seeking an experienced Data Engineer to design, build, and optimize modern data pipelines and architectures that empower analytics, AI, and business intelligence initiatives. In this role, you will work closely with product, software, and analytics teams to deliver high-quality, reliable data solutions. You will be responsible for managing data workflows in Databricks and AWS, ensuring strong governance, performance, and scalability. This position offers the opportunity to tackle complex data challenges in a fully remote, collaborative environment, while contributing to impactful projects across e-commerce and other domains. The ideal candidate thrives in a fast-paced, innovative culture and enjoys solving technical challenges with creativity and precision.
Accountabilities
- Design, implement, and maintain data pipelines using Databricks and AWS services (S3, Glue, Lambda, Redshift).
- Architect and manage Medallion architecture (Bronze, Silver, Gold layers) within Databricks.
- Implement and maintain Unity Catalog, Delta Tables, and enforce robust data governance and lineage.
- Develop and optimize SQL queries for large-scale datasets, ensuring performance and efficiency.
- Design and maintain data models to support analytical and reporting requirements.
- Implement Slowly Changing Dimensions (SCD) and apply normalization/denormalization techniques for optimal data storage and retrieval.
- Collaborate with data scientists, analysts, and business stakeholders to deliver actionable data solutions.
- Identify and implement optimization techniques for query performance and resource usage.
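Of the responsibilities above, the Slowly Changing Dimension work is the most mechanical, so a minimal sketch may help candidates gauge the expected level. This is an illustration of the SCD Type 2 pattern in plain Python (all record and field names are hypothetical); in Databricks this logic would typically be expressed as a Delta Lake `MERGE` over a Silver-layer table rather than in-memory records.

```python
from datetime import date

# SCD Type 2: when a tracked attribute changes, close out the current row
# and insert a new versioned row, preserving full history.
HIGH_DATE = date(9999, 12, 31)  # sentinel "end of time" for current rows

def scd2_upsert(dimension, incoming, effective_date):
    """Apply SCD Type 2 changes from `incoming` records to `dimension`.

    Each dimension row: {key, attrs, valid_from, valid_to, is_current}.
    Each incoming record: {key, attrs}.
    """
    current = {r["key"]: r for r in dimension if r["is_current"]}
    for rec in incoming:
        existing = current.get(rec["key"])
        if existing is None:
            # brand-new key: insert as the current version
            dimension.append({
                "key": rec["key"], "attrs": rec["attrs"],
                "valid_from": effective_date, "valid_to": HIGH_DATE,
                "is_current": True,
            })
        elif existing["attrs"] != rec["attrs"]:
            # changed attributes: expire the old row, insert a new version
            existing["valid_to"] = effective_date
            existing["is_current"] = False
            dimension.append({
                "key": rec["key"], "attrs": rec["attrs"],
                "valid_from": effective_date, "valid_to": HIGH_DATE,
                "is_current": True,
            })
        # unchanged records are left untouched
    return dimension

dim = []
scd2_upsert(dim, [{"key": 1, "attrs": {"city": "Pune"}}], date(2024, 1, 1))
scd2_upsert(dim, [{"key": 1, "attrs": {"city": "Mumbai"}}], date(2024, 6, 1))
```

After the second call, `dim` holds two rows for key 1: an expired row for Pune (valid through 2024-06-01) and a current row for Mumbai, which is exactly the history-preserving behavior SCD Type 2 requires.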
Requirements
- 5–7 years of hands-on experience in Data Engineering with AWS and Databricks.
- Strong proficiency in SQL and data modeling best practices.
- Expertise in Python or PySpark for data transformation and ETL processes.
- Experience with Medallion architecture, Unity Catalog, Delta Lake, and Delta Tables.
- Knowledge of SCDs, data normalization/denormalization, and query optimization.
- Familiarity with BI tools (Power BI, Tableau) is a plus.
- Experience with CI/CD pipelines, Terraform, or DevOps workflows for data engineering is desirable.
- Strong problem-solving and analytical skills, and the ability to work in cross-functional teams.
- Excellent communication skills in English.
Benefits
- Fully remote work with flexible working hours.
- Stock options and performance-based incentives.
- Mentorship and continuous learning opportunities.
- Supportive and collaborative work culture.
- Competitive benefits exceeding statutory requirements.
- Exposure to cutting-edge AI and data technologies in a high-growth environment.
Jobgether is a Talent Matching Platform that partners with companies worldwide to efficiently connect top talent with the right opportunities through AI-driven job matching.
When you apply, your profile goes through our AI-powered screening process designed to identify top talent efficiently and fairly.
🔍 Our AI evaluates your CV and LinkedIn profile thoroughly, analyzing your skills, experience, and achievements.
📊 It compares your profile to the job’s core requirements and past success factors to determine your match score.
🎯 Based on this analysis, we automatically shortlist the 3 candidates with the highest match to the role.
🧠 When necessary, our human team may perform an additional manual review to ensure no strong profile is missed.
The process is transparent, skills-based, and free of bias — focusing solely on your fit for the role.
Once the shortlist is completed, we share it directly with the company that owns the job opening. The final decision and next steps (such as interviews or additional assessments) are then made by their internal hiring team.