This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Senior Data Engineer in the United States.
The Senior Data Engineer will play a critical role in designing, building, and maintaining large-scale, high-performance data infrastructure and pipelines. Working within a fully remote, flexible environment, this role supports global cross-functional teams by ensuring accurate, reliable, and actionable data flows. The ideal candidate will have extensive experience in ETL orchestration, distributed data processing, and cloud-native architectures, while guiding and mentoring other engineers. You will implement best practices for data validation, telemetry, and automation, enabling predictive and prescriptive analytics to drive business impact. This position provides an opportunity to shape modern data platforms, collaborate with technical and business stakeholders, and influence strategic data initiatives at scale.
Accountabilities:
- Design, develop, and implement high-volume data pipelines for Data Lake and Data Warehouse environments.
- Build and maintain ETL frameworks and enforce design patterns to improve code quality and maintainability.
- Ensure accuracy, consistency, and reliability of data processing and reporting.
- Develop cloud-native data pipelines, database schemas, and automation routines to support analytics and machine learning.
- Mentor and provide technical guidance to other Data Engineers, serving as a technical owner of parts of the data platform.
- Collaborate with business analysts, engineers, and stakeholders to communicate ideas and solutions effectively.
Requirements:
- Expert knowledge of Python and SQL.
- 7+ years of professional experience, with 5+ years in data engineering, business intelligence, or related roles.
- Experience with ETL orchestration and workflow management tools (e.g., Airflow) on AWS or GCP.
- Proficiency with distributed data processing tools such as Spark or Presto, and streaming technologies like Kafka or Flink.
- Hands-on experience with Snowflake or other big data platforms, cloud service providers (AWS preferred), and container orchestration (Kubernetes).
- Familiarity with DevOps practices and agile methodologies.
- MS in Computer Science, Software Engineering, or related field preferred; BS acceptable.
- Strong analytical, problem-solving, and communication skills, with the ability to work effectively in a remote, flexible environment.
Benefits:
- Competitive salary range: $190,000 – $200,000 annually.
- Fully remote work with flexible hours.
- Opportunity to work on global, high-impact data initiatives.
- Mentorship and professional development opportunities.
- Collaborative and inclusive work culture.
- Access to cutting-edge tools, technologies, and cloud platforms.
Jobgether is a Talent Matching Platform that partners with companies worldwide to efficiently connect top talent with the right opportunities through AI-driven job matching.
When you apply, your profile goes through our AI-powered screening process designed to identify top talent efficiently and fairly.
🔍 Our AI evaluates your CV and LinkedIn profile thoroughly, analyzing your skills, experience, and achievements.
📊 It compares your profile to the job’s core requirements and past success factors to determine your match score.
🎯 Based on this analysis, we automatically shortlist the 3 candidates with the highest match to the role.
🧠 When necessary, our human team may perform an additional manual review to ensure no strong profile is missed.
The process is transparent, skills-based, and free of bias — focusing solely on your fit for the role.
Once the shortlist is completed, we share it directly with the company that owns the job opening. The final decision and next steps (such as interviews or additional assessments) are then made by their internal hiring team.
