Join New Era Technology as a Data Platform Engineer III to design, build, and optimize scalable data pipelines and analytics solutions using Microsoft Fabric, PySpark, Python, and SQL. The ideal candidate will have hands-on experience with Lakehouse architecture, data integration, and large-scale data processing to support enterprise analytics and reporting.
Responsibilities
- Design, develop, and maintain data pipelines and ETL/ELT processes using Microsoft Fabric and PySpark.
- Build and manage Lakehouse and Data Warehouse solutions within the Microsoft Fabric ecosystem.
- Develop scalable data processing workflows using Python and PySpark.
- Write optimized SQL queries for data transformation, analysis, and performance tuning.
- Integrate data from various sources such as APIs, databases, cloud storage, and streaming platforms.
- Implement data modeling techniques to support analytics and reporting requirements.
- Ensure data quality, governance, and security across the data platform.
- Monitor and optimize data pipeline performance and reliability.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
- Document architecture, workflows, and technical processes.
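To give a flavor of the data-quality work described above, here is a minimal, purely illustrative sketch in plain Python of a row-level validation gate. In practice this logic would live in a PySpark job inside Microsoft Fabric; the column names and rules here are hypothetical, not taken from an actual New Era pipeline:

```python
# Illustrative data-quality gate: split incoming rows into clean and
# rejected sets before loading downstream. Hypothetical schema: each row
# is a dict with an "id" and a numeric "amount".

def validate_rows(rows):
    """Return (clean, rejected) lists for a batch of row dicts."""
    clean, rejected = [], []
    for row in rows:
        has_id = row.get("id") is not None
        amount_ok = isinstance(row.get("amount"), (int, float))
        if has_id and amount_ok:
            clean.append(row)
        else:
            rejected.append(row)
    return clean, rejected

# Example batch: one valid row, one missing its id.
clean, rejected = validate_rows([
    {"id": 1, "amount": 9.5},
    {"id": None, "amount": 3.0},
])
```

Rejected rows would typically be written to a quarantine table for review rather than silently dropped, which supports the governance and monitoring duties listed above.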
Benefits
- Competitive benefits package
- Continuous training
- Supportive team-oriented culture
- Access to industry-certified experts
