Job Title:
DataOps Engineer
Job Description
DataOps Engineer | Remote in Argentina | High-Impact Global Projects
We're Concentrix, the global technology and services leader that powers the world’s best brands, today and into the future. We’re solution-focused, tech-powered, intelligence-fueled. With unique data and insights, deep industry expertise, and advanced technology solutions, we’re the intelligent transformation partner that powers a world that works, helping companies become refreshingly simple to work, interact, and transact with. We shape new game-changing careers in over 70 countries, attracting the best talent.
In our Information Technology and Global Security team, you will deliver the latest technology infrastructure, transformative software solutions, and industry-leading global security for our staff and clients. You will work with the best in the world to design, implement, and strategize IT, security, application development, innovation, and solutions in today’s hyperconnected world. You will be part of the technology team that is core to our vision to develop, build, and run the future of Integrated Services.
We embrace our game-changers with open arms: people from diverse backgrounds who are curious and willing to learn. Your natural talent to help others and go beyond WOW for our customers will fit right in with what we do and who we are.
We are seeking a talented and proactive DataOps Engineer to join our advanced data team. This role is vital in designing, automating, migrating, and operating our data pipelines with a strong focus on modernization and reliability. You will play a key role in enhancing our data architecture and operational excellence.
Essential Functions/Core Responsibilities
- Design and implement automated and reliable data pipelines, focusing on modernization and data quality enhancement.
- Develop and maintain robust CI/CD processes for data applications.
- Utilize AWS services including IAM, Networking (VPC, subnets, routing), Glue, Athena, CloudWatch, and ElastiCache to maintain a seamless data environment.
- Handle data processing using Apache Spark for both batch and streaming processes.
- Manage and enhance our data catalog system, improving schema management, metadata governance, and data governance practices.
- Employ Infrastructure as Code (IaC) techniques using Terraform to provision, version, and maintain data infrastructure.
- Utilize programming and query languages, such as Python, Go (Golang), and SQL, to develop scalable data solutions.
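To give a flavor of the pipeline-reliability and data-quality work described above, here is a minimal sketch in plain Python of an automated quality gate that could run inside a pipeline stage. The column names and rules are hypothetical, for illustration only; in practice the schema would come from a data catalog (e.g. AWS Glue) and the checks would run against Spark datasets rather than raw CSV.

```python
import csv
import io

# Hypothetical required schema for an incoming batch; in a real pipeline
# this would be read from the data catalog, not hard-coded.
REQUIRED_COLUMNS = {"order_id", "customer_id", "amount"}

def validate_batch(raw_csv: str) -> list[str]:
    """Return a list of data-quality issues found in one CSV batch.

    An empty list means the batch passes the gate and may proceed
    to the next pipeline stage.
    """
    issues: list[str] = []
    reader = csv.DictReader(io.StringIO(raw_csv))

    # Schema check: all required columns must be present.
    missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
        return issues

    # Row-level checks: non-empty keys, sane numeric values.
    for line_no, row in enumerate(reader, start=2):
        if not row["order_id"]:
            issues.append(f"line {line_no}: empty order_id")
        try:
            if float(row["amount"]) < 0:
                issues.append(f"line {line_no}: negative amount")
        except ValueError:
            issues.append(f"line {line_no}: non-numeric amount")
    return issues

good = "order_id,customer_id,amount\n1,42,9.99\n"
bad = "order_id,customer_id,amount\n,42,-5\n"
print(validate_batch(good))  # []
print(validate_batch(bad))
```

In a production setup, a check like this would typically be wired into the CI/CD process for data applications, so that a failing batch blocks promotion instead of silently propagating bad records downstream.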
Candidate Profile
- Bachelor’s degree in Computer Science, Data Science, or a related discipline.
- Proven experience with DataOps, emphasizing pipeline automation, reliability, and quality.
- Hands-on expertise with AWS cloud services, specifically IAM, Networking, Glue, Athena, CloudWatch, and ElastiCache.
- Proficient in data processing technologies, particularly Apache Spark.
- Experienced in managing data catalogs, with a focus on schema management and metadata governance.
- Skilled in using Terraform for Infrastructure as Code.
- Proficiency in Python, Go (Golang), and SQL is essential.
Join us and be part of this journey towards greater opportunities and brighter futures.
Location:
ARG Work-at-Home
Language Requirements:
Time Type:
Full time
If you are a California resident, by submitting your information, you acknowledge that you have read and have access to the Job Applicant Privacy Notice for California Residents.
