Description
Our client represents the connected world, offering innovative and customer-centric information technology experiences, enabling Enterprises, Associates, and Society to Rise™.
They are a USD 6 billion company with 163,000+ professionals across 90 countries, helping 1,279 global customers, including Fortune 500 companies. They focus on leveraging next-generation technologies, including 5G, Blockchain, Metaverse, Quantum Computing, Cybersecurity, and Artificial Intelligence, to enable end-to-end digital transformation for global customers.
Our client is one of the fastest-growing brands and among the top 7 IT service providers globally. They have consistently emerged as a leader in sustainability and were recognized among the ‘2021 Global 100 Most Sustainable Corporations in the World’ by Corporate Knights.
We are currently searching for a Senior Data Engineer:
Responsibilities
The Data Engineer is responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support data analytics, reporting, and business intelligence. This role ensures data is accessible, reliable, and optimized for performance across various systems.
- Design, develop, and maintain ETL/ELT pipelines for ingesting and transforming data from multiple sources (see the sketch after this list).
- Build and optimize data models for analytics and reporting.
- Implement and manage data storage solutions (e.g., relational databases, data lakes, cloud storage).
- Ensure data quality, integrity, and security across all systems.
- Collaborate with data scientists, analysts, and business teams to understand requirements and deliver solutions.
- Monitor and improve data pipeline performance and troubleshoot issues.
- Stay updated with emerging technologies and best practices in data engineering and cloud platforms.
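For illustration only, a minimal ETL pipeline of the kind this role builds might look like the Python sketch below. The connection string, table name, and lake path are hypothetical placeholders, not details of the client's actual stack.

```python
# Minimal ETL sketch: pull rows from a relational source, clean them,
# and land them as Parquet for analytics. All names here (source_db,
# sales table, lake path) are hypothetical placeholders.
import pandas as pd
from sqlalchemy import create_engine


def extract(conn_str: str) -> pd.DataFrame:
    """Read the raw source table into a DataFrame."""
    engine = create_engine(conn_str)
    return pd.read_sql("SELECT id, amount, created_at FROM sales", engine)


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Basic cleanup: deduplicate, enforce types, stamp the load date."""
    df = df.drop_duplicates(subset=["id"])
    df["amount"] = df["amount"].astype(float)
    df["load_date"] = pd.Timestamp.now(tz="UTC").date()
    return df


def load(df: pd.DataFrame, path: str) -> None:
    """Write the curated data to columnar storage (requires pyarrow)."""
    df.to_parquet(path, index=False)


if __name__ == "__main__":
    raw = extract("postgresql://user:pass@host:5432/source_db")
    load(transform(raw), "s3://data-lake/curated/sales.parquet")
```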
Requirements
- Proficiency in SQL and experience with relational databases (e.g., Oracle, MySQL, SQL Server).
- Strong programming skills in Python, PL/SQL, Java, or Scala.
- Experience with big data technologies (e.g., Hadoop, Spark, Databricks) and cloud platforms (AWS, Azure, GCP).
- Hands-on experience with OpenShift or other container orchestration platforms (e.g., Kubernetes).
- Knowledge of data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery).
- Familiarity with workflow orchestration tools (e.g., Airflow, Luigi); a minimal Airflow example follows this list.
- Understanding of data governance, security, and compliance.
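To give a rough sense of the orchestration work involved, here is a minimal Airflow DAG sketch, assuming Airflow 2.4+ (where the `schedule` parameter is available). The DAG id, schedule, and task bodies are illustrative assumptions, not the client's actual pipelines.

```python
# Minimal Airflow DAG sketch: one daily extract -> transform -> load chain.
# The dag_id, schedule, and task functions are hypothetical examples.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from source systems")


def transform():
    print("clean and model the data")


def load():
    print("publish to the warehouse")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # run once per day (Airflow 2.4+ parameter name)
    catchup=False,       # skip backfilling historical runs
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3       # linear dependency chain
```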
Preferred
- Experience with streaming data (Kafka, Kinesis); see the consumer sketch after this list.
- Background in DevOps practices for data pipelines.
- Knowledge of machine learning workflows and integration with data pipelines.
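As a small taste of the streaming side, a minimal Kafka consumer using the kafka-python client might look like the sketch below. The topic name and broker address are hypothetical placeholders.

```python
# Minimal streaming sketch with kafka-python: consume JSON events and print them.
# The topic name and broker address are hypothetical placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders-events",                      # hypothetical topic
    bootstrap_servers="localhost:9092",   # hypothetical broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",         # start from the oldest message
)

for message in consumer:
    # Each record carries topic, partition, offset, and the decoded payload.
    print(message.topic, message.offset, message.value)
```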
Languages
- Advanced Oral English.
- Native Spanish.
Note:
- Fully remote
If you meet these qualifications and are pursuing new challenges, start your application to join an award-winning employer. Explore all our job openings | Sequoia Careers Page: https://www.sequoia-connect.com/careers/.
