- Design, develop, and optimize ETL/ELT data pipelines that extract, transform, and load data from a variety of sources.
- Build and manage data models and data warehouses that support business intelligence, reporting, and analytics needs.
- Leverage cloud technologies such as AWS, Azure, or Google Cloud Platform for building scalable, reliable, and efficient data solutions.
- Develop and maintain automated data workflows using tools like Airflow, AWS Glue, Azure Data Factory, or similar technologies.
- Work with large datasets and complex data structures, ensuring data quality, integrity, and performance.
- Write and optimize SQL queries for complex data extraction, aggregation, and transformation tasks.
- Integrate APIs to connect data sources, extract information, and facilitate real-time data processing.
- Collaborate with business intelligence and data science teams to define data requirements and ensure the availability of clean, accurate data for analysis and decision-making.
- Implement CI/CD pipelines for automated deployment of data pipelines and models.
- Monitor the performance of data systems, ensuring reliability, availability, and scalability of data architectures.
- Create and maintain comprehensive documentation for data pipelines, systems, and processes.
- Stay up to date with emerging trends and technologies in the data engineering field and continuously improve data systems.
- Solid experience as a Data Engineer or in a similar role focused on data architecture and pipeline development.
- Strong experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Advanced knowledge of ETL/ELT processes, data modeling, and data warehousing (e.g., Snowflake, Redshift).
- Proficiency in SQL for complex data transformation and querying.
- Hands-on experience with data pipeline orchestration tools like Azure Data Factory, Apache Airflow, AWS Glue, or similar.
- Strong programming skills in Python for automation, data processing, and integration tasks.
- Experience working with big data technologies such as Hadoop, Spark, or Kafka is a plus.
- Familiarity with Git and GitHub for version control, and with CI/CD pipelines for deployment automation.
- Strong understanding of data security, governance, and compliance best practices.
- Experience with business intelligence tools such as Tableau, Power BI, or similar for reporting and data visualization.
- Ability to work in an agile, fast-paced environment and manage multiple tasks simultaneously.
- Master’s degree in Computer Science, Engineering, Data Science, or a related field.
- Experience with real-time data streaming platforms such as Kafka or AWS Kinesis.
- Exposure to machine learning and AI technologies, and an understanding of how data engineering supports these initiatives.
- Experience with Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.
- Knowledge of data lake architectures and modern data processing frameworks.
- Experience with Tableau for building reports, dashboards, and visual analytics.
- Opportunity to work on cutting-edge data engineering projects with the latest cloud technologies.
- Be part of an innovative and collaborative team that champions data-driven decision-making.
- Competitive salary and comprehensive benefits package.
- Opportunities for career growth and professional development.
- Work in an agile environment, where your contributions make a direct impact on the business.
