
Data Engineer

iHorizons is a prominent technology solutions provider based in Qatar, specializing in business transformation and digital services across the MENA region.


Employee count: 201-500

Egypt only


Job Summary
As a Data Engineer at iHorizons, you will design, develop, and maintain scalable data pipelines and architectures that power our AI and advanced analytics initiatives across government and private clients. Working closely with the AI & Data Science team, you will ensure high-quality, reliable, and secure data flow across batch and streaming systems, enabling data-driven model development, deployment, and business intelligence at scale for iHorizons and its clients.

Job Objectives
• Build and maintain robust, scalable data pipelines that power AI/ML workflows and analytics platforms.
• Ensure high-quality, governed, and observable data across the organization’s data ecosystem.
• Optimize distributed data processing systems for performance, reliability, and cost efficiency.
• Collaborate with AI engineers, analysts, and stakeholders to deliver trusted datasets and features.
• Drive continuous improvement in data architecture, tooling, and engineering best practices.

Job Responsibilities
Data Pipeline Development & Engineering
• Design, develop, and deploy scalable ETL/ELT pipelines for structured and unstructured datasets.
• Implement batch and real-time data processing solutions using modern frameworks.
• Build data ingestion systems from multiple sources such as APIs, databases, logs, IoT devices, and streaming platforms.
• Ensure data pipelines support AI/ML feature engineering and training workflows.
• Automate pipeline execution, monitoring, and orchestration using tools such as Apache Airflow.
• Use modern transformation tools such as dbt for SQL-based data modelling and transformation within cloud data warehouses.
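As a concrete illustration of the ETL/ELT pattern the responsibilities above describe, here is a minimal, framework-free sketch in Python. The data and field names are hypothetical; in production each stage would typically run as a task under an orchestrator such as Apache Airflow, reading from real sources rather than an in-memory string.

```python
import csv
import io
import json

# Hypothetical raw feed: in practice this would arrive from an API,
# a database, a log stream, or an IoT device rather than a string.
RAW_CSV = "id,amount\n1,10.5\n2,3.25\n3,7.0\n"

def extract(raw: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: cast types and derive a flag field."""
    return [
        {"id": int(r["id"]), "amount": float(r["amount"]),
         "large": float(r["amount"]) > 5.0}
        for r in rows
    ]

def load(rows: list[dict]) -> str:
    """Load: serialize to the target format (JSON lines here)."""
    return "\n".join(json.dumps(r) for r in rows)

pipeline_output = load(transform(extract(RAW_CSV)))
print(pipeline_output)
```

Keeping each stage a pure function, as above, is what makes pipelines testable and easy to re-run from any step when an orchestrator retries a failed task.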

Big Data & Distributed Systems
• Develop distributed processing jobs using Apache Spark and Hadoop ecosystem tools.
• Work with streaming platforms such as Apache Kafka for real-time data delivery.
• Apply distributed computing principles including scalability, partitioning, and fault tolerance.
• Optimize workloads for performance and reliability across large-scale datasets.
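One of the distributed-computing principles listed above, partitioning, can be sketched in a few lines of stdlib Python. The key names are hypothetical; the point is the deterministic key-to-partition mapping that systems like Kafka and Spark rely on so that all records for one key land on the same worker.

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Map a record key to a partition deterministically, so the same
    key always routes to the same partition (as hash partitioners in
    Kafka or Spark do)."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

# Distribute hypothetical event keys across 4 partitions.
keys = [f"user-{i}" for i in range(1000)]
partitions = [partition_for(k, 4) for k in keys]

# Determinism: the same key always maps to the same partition.
assert partition_for("user-42", 4) == partition_for("user-42", 4)

# A good hash spreads keys roughly evenly across partitions.
counts = {p: partitions.count(p) for p in range(4)}
print(counts)
```

The same idea underpins fault tolerance: because the mapping is stateless and deterministic, a restarted worker can recompute which keys it owns without any coordination.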

Cloud Data Platforms & Infrastructure
• Build and manage cloud-native pipelines and warehousing solutions on GCP and Azure.
• Work with services such as BigQuery, Dataflow, Pub/Sub, Azure Synapse, Databricks, and Data Factory.
• Implement containerized deployments using Docker and Kubernetes.
• Support cost optimization and performance tuning of cloud-based data platforms.

Data Modeling & Architecture
• Design and implement enterprise-grade data lakes and data warehouses.
• Apply medallion architecture principles across bronze, silver, and gold data layers.
• Develop dimensional data models using Kimball methodology, including star and snowflake schemas.
• Ensure strong governance, data quality, lineage, and observability practices.
• Build reusable, scalable data models for analytics and AI feature stores.
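The dimensional modelling called for above can be made concrete with a small Kimball-style star schema. The sketch below uses Python's built-in SQLite for portability; the table and column names are hypothetical, and a real warehouse would live in a platform like BigQuery or Synapse.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical star schema: one fact table referencing two dimensions.
conn.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, full_date TEXT, year INTEGER);
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE fact_sales (
    sale_id INTEGER PRIMARY KEY,
    date_key INTEGER REFERENCES dim_date(date_key),
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    amount REAL
);
""")

conn.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                 [(20240101, "2024-01-01", 2024), (20240102, "2024-01-02", 2024)])
conn.executemany("INSERT INTO dim_customer VALUES (?, ?, ?)",
                 [(1, "Acme", "MENA"), (2, "Globex", "EU")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                 [(1, 20240101, 1, 100.0), (2, 20240102, 1, 50.0),
                  (3, 20240101, 2, 75.0)])

# Typical star-schema query: aggregate the fact by a dimension attribute.
rows = conn.execute("""
    SELECT c.region, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer c ON f.customer_key = c.customer_key
    GROUP BY c.region
    ORDER BY c.region
""").fetchall()
print(rows)  # -> [('EU', 75.0), ('MENA', 150.0)]
```

Because every fact row joins to dimensions through surrogate keys, analysts can slice the same fact table by date, customer, region, or any other dimension attribute without reshaping the data.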

Database Management & Optimization
• Work extensively with relational databases such as PostgreSQL, MySQL, and Oracle.
• Write complex SQL queries with advanced proficiency.
• Apply indexing strategies, query optimization, and performance tuning.
• Design efficient schemas aligned with normalization and warehousing standards.
• Support NoSQL database solutions where required, including MongoDB, Cassandra, Redis, and DynamoDB.
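To illustrate the indexing and query-optimization work described above, the sketch below compares query plans before and after adding an index, again using stdlib SQLite for portability (the table and data are hypothetical; Postgres or Oracle expose the same idea via `EXPLAIN`).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                 [(f"cust-{i % 100}", float(i)) for i in range(10_000)])

def plan(sql: str) -> str:
    """Return SQLite's query plan as a single string (detail column)."""
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(total) FROM orders WHERE customer = 'cust-7'"

before = plan(query)  # full table scan: no index covers the predicate
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
after = plan(query)   # the planner now resolves the predicate via the index

print(before)
print(after)
```

Reading plans this way is the everyday loop of performance tuning: find the scan, decide whether an index (or a schema change) removes it, and verify the planner actually uses it.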

Documentation and Other Responsibilities
• Develop and maintain clear technical documentation for data pipelines, architectures, and implementations.
• Write high-quality, maintainable code aligned with established engineering standards and best practices.
• Ensure all solutions comply with iHorizons’ data security, privacy, and governance policies.
• Troubleshoot and resolve data pipeline and system issues through structured root-cause analysis.
• Collaborate with cross-functional teams to continuously improve platform reliability and delivery outcomes.

Job Requirements
Educational Qualification
• Bachelor’s degree in Computer Science, Software Engineering, Information Systems, Data Science, or a related field.
• Master’s degree is an advantage, particularly in Data Engineering, AI, or Cloud Computing.
Certifications (Optional but Valuable)
Professional certifications are considered a strong advantage, particularly:
• Google Professional Data Engineer (highly valuable)
• Azure Data Engineer Associate
• Databricks Certified Data Engineer
• Apache Spark Certification

Previous Work Experience
• 4-6 years of proven experience implementing production-grade data pipelines, optimizing performance, and working with distributed data systems in real-world environments.

Skills & Abilities
• Strong foundational knowledge in data structures and algorithms, database systems, and distributed computing principles, forming the basis for building scalable and high-performance data platforms.
• Advanced proficiency in programming languages such as Python, Java, Scala, Shell scripting, and especially SQL, which is essential for this role.
• Strong hands-on experience working with relational database systems such as PostgreSQL, MySQL, and Oracle, including expertise in writing complex SQL queries, indexing strategies, query optimization, and data modelling techniques such as 3NF, star schema, and snowflake schema.
• Familiarity with NoSQL database technologies, depending on project needs, including platforms such as MongoDB, Cassandra, Redis, and DynamoDB.
• Proven ability to design and implement scalable ETL/ELT pipelines, with solid understanding of data warehousing concepts and experience building both batch and streaming data workflows.
• Experience using industry-standard tools and platforms such as Apache Airflow, Informatica, Snowflake, Google BigQuery, and Azure Synapse to support enterprise data integration and analytics.
• Strong knowledge of big data and distributed systems, including frameworks such as Apache Spark, the Hadoop ecosystem, and streaming platforms like Apache Kafka, with an understanding of distributed computing principles, scalability, and fault tolerance.
• Hands-on expertise with modern cloud data platforms, particularly Google Cloud Platform (BigQuery, Dataflow, Pub/Sub) and Microsoft Azure (Data Factory, Synapse, Databricks), which are critical for today’s data engineering environments.
• Infrastructure-level understanding of containerization and orchestration tools such as Docker and Kubernetes.
• Demonstrated ability to design data lakes and enterprise data architectures, including implementing medallion architecture (bronze, silver, gold layers), applying dimensional modelling approaches such as Kimball methodology, and ensuring strong practices in data governance, quality, and observability.
• Strong working knowledge of supporting engineering practices, including version control (Git), schema design, and end-to-end data pipeline architecture.

About the job

• Workplace: Remote (Work from Home)
• Job type: Full Time
• Experience level: Mid-level
• Education: Bachelor's degree
• Experience: 4 years minimum
• Location requirements: Egypt only (hiring timezone: Egypt +/- 0 hours)

About iHorizons


iHorizons is a leading provider of business solutions and technology services across MENA and emerging markets. Founded in 1996 and headquartered in Doha, Qatar, iHorizons has been instrumental in helping organizations and governments accelerate their digital transformations. Our mission revolves around improving customer experiences and increasing operational efficiencies through innovative solutions.

With a strong focus on digital services, iHorizons specializes in sectors including telecom, government, and media, and has established itself as a trusted partner to major clients such as Ooredoo and Al Jazeera. Our flagship products, such as the iHorizons Knowledge Server, are crafted to meet the unique demands of large-scale operations, particularly in managing Arabic web content effectively.

Additionally, our custom solutions equip clients with the tools necessary to navigate their digital landscapes, ensuring that they remain competitive in an ever-evolving market. We pride ourselves on our commitment to quality, customer satisfaction, and a culture of innovation. Our diverse team, drawn from varied backgrounds and areas of expertise, works collaboratively to deliver tailored solutions that meet each client's specific needs.
