Job Title: Data Engineer
Job Description
Data Engineer – Remote in Mexico
We’re Concentrix, the global technology and services leader that powers the world’s best brands, today and into the future. We’re solution-focused, tech-powered, intelligence-fueled. With unique data and insights, deep industry expertise, and advanced technology solutions, we’re the intelligent transformation partner that powers a world that works, helping companies become refreshingly simple to work, interact, and transact with. We shape game-changing careers in over 70 countries, attracting the best talent.
We embrace our game-changers with open arms: people from diverse backgrounds who are curious and willing to learn. Your natural talent for helping others and going beyond WOW for our customers will fit right in with what we do and who we are.
As part of the Core Engineering Services team, we are seeking a Data Engineer to support the design, development, and maintenance of scalable data pipelines and cloud-based data solutions. We are looking for a hands-on specialist focused on building, optimizing, and supporting data workflows within our Azure and Databricks environments.
Essential Functions/Core Responsibilities
1. Data Pipeline Engineering & Orchestration
- Pipeline Development: Build and maintain robust data pipelines for ingesting, transforming, and loading data into the Azure data lake.
- Workflow Management: Develop and support orchestration and workflow monitoring solutions to ensure reliable data delivery.
- Performance Tuning: Write and optimize complex SQL queries; improve data processing performance through advanced query tuning and indexing.
2. Backend & API Integration
- API Development: Develop backend data APIs and support API management configurations for seamless data exchange.
- External Integration: Integrate with external systems and REST APIs to facilitate diverse data flows.
- Streaming & Events: Manage the ingestion of streaming or event-based data (e.g., Event Hubs) into the ecosystem.
3. Infrastructure, DevOps & Quality
- Infrastructure as Code: Implement and maintain Azure resources using Terraform and YAML-based configurations.
- CI/CD & Versioning: Contribute to CI/CD pipelines using Azure DevOps and maintain strict version control, logging, and monitoring.
- Data Governance: Support rigorous data quality checks and validation processes, and uphold engineering best practices through code reviews.
Candidate Profile
- Education: Bachelor’s degree in Computer Science, Information Systems, or a related field.
- Experience: 3+ years of hands-on experience in data engineering roles.
- Language: Fluent in English.
Technical Core
- Language Mastery: Proficiency in Python (processing scripts and utilities) and SQL (transformation and analysis).
- Cloud Ecosystem: Practical experience with Azure and Databricks; familiarity with Lakehouse architectures.
- Data Modeling: Strong understanding of relational data models (Star/Snowflake, Kimball) and ETL/ELT concepts.
- Big Data Tools: Experience working with Spark or similar big data technologies.
Professional Experience
- Technical Problem-Solver: Strong troubleshooting skills related to pipelines, jobs, and cloud components.
- Collaborative Engineer: Experience working in Agile environments (Jira/GitHub), partnering with technical teams to translate requirements into solutions.
- Self-Directed: Ability to work independently with minimal oversight, meeting deadlines across multiple simultaneous projects.
Technical Stack Summary
- Languages: Python, SQL
- Platforms: Azure, Databricks, Azure Data Lake
- Orchestration/DevOps: Azure DevOps, Terraform, GitHub, YAML
- Data/Messaging: Spark, Event Hubs, RESTful APIs
- Architecture: Lakehouse, Star/Snowflake Schema
Join us and be part of this journey towards greater opportunities and brighter futures.
