In this role, you will:
- Architect complex, high-volume data pipelines for production use.
- Design and implement scalable data models serving multiple product and internal teams.
- Own data quality frameworks and standards across key data products.
- Build reusable patterns for transformations and metrics to drive efficiency.
- Define and maintain core business metrics and Key Performance Indicators (KPIs) in partnership with Analytics.
- Own the data products used across the company, ensuring reliability and performance.
- Set and promote standards for data modeling and pipeline development.
- Partner closely with Analytics, Data Science, and Machine Learning teams on requirements to reduce friction and accelerate their work.
- Mentor engineers and actively participate in the hiring process.
Qualifications
- 5+ years of experience in data engineering, with a proven track record of designing data architectures and building successful pipelines.
- Expertise in handling large-scale data systems and cloud platforms (preferably AWS).
- Proficiency in Python and SQL; Scala and MongoDB are a plus.
- Strong experience with data pipeline and workflow management tools like Apache Airflow, Luigi, or Prefect.
- In-depth knowledge of real-time streaming and processing technologies such as Kinesis, Kafka, Flink, or Spark Streaming. Experience with Segment is a plus.
- Experience with data modeling tools and ETL frameworks, with a strong emphasis on performance optimization.
- Excellent communication and collaboration skills to work effectively across teams.
