Responsibilities
- Develop and maintain ETL pipelines using Python and SQL Server for data ingestion and transformation
- Write, test, and debug Python code for data processing and automation
- Create and maintain T-SQL queries, views, and stored procedures
- Assist in building and managing data workflows using Azure Data Factory, Azure SQL, and Blob Storage
- Monitor data pipelines and support troubleshooting of failures, data quality issues, and performance concerns
- Follow data quality, security, and compliance standards in regulated environments
- Collaborate with data analysts, engineers, and DevOps teams to support data requirements
- Participate in code reviews and continuously improve development practices
- Maintain clear and accurate documentation for data processes and pipelines
Requirements
- 1–4 years of experience in Data Engineering, Software Engineering, or a related role
- Strong proficiency in Python (functions, error handling, debugging)
- Strong working knowledge of SQL (joins, aggregations, subqueries)
- Experience with relational databases, preferably SQL Server
- Exposure to REST APIs and external data integrations
- Familiarity with version control tools such as Git
- Strong analytical and problem-solving skills
- Ability to work in a fast-paced, team-oriented environment
- Good communication skills (written and verbal)
- Willingness to accept a long-term work-from-home arrangement
- Amenable to a permanent night shift schedule
Details
