Verinext has joined forces with Arctiq, combining our technical expertise and shared commitment to delivering innovative infrastructure and cloud solutions. As part of this expanded organization, we’re growing our team and looking for skilled professionals ready to solve complex challenges at scale.
Arctiq is looking for a Data Engineer to lead the development of scalable data pipelines within the Databricks ecosystem. You will be responsible for architecting robust ETL/ELT processes using a "configuration-as-code" approach, ensuring our data lakehouse is governed, performant, and production-ready. Experience migrating data ingestion and transformation workloads to Databricks using Lakeflow Declarative Pipelines is key.
Core Responsibilities
Pipeline Architecture & Automation
- Design and implement declarative, scalable pipelines using Lakeflow and Databricks Asset Bundles (DABs); a brief illustrative sketch follows this list
- Establish reusable pipeline templates and patterns aligned with CI/CD best practices
- Build a “configuration-as-code” approach for onboarding new sources and transformations quickly
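To give a sense of the declarative style this role works in, here is a minimal Lakeflow Declarative Pipelines (DLT) sketch. The table names, landing path, and quality rule are illustrative placeholders, not Arctiq assets, and `spark` is assumed to be the session provided by the pipeline runtime.

```python
# Minimal Lakeflow Declarative Pipelines (DLT) sketch.
# Paths and table names below are hypothetical examples.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested incrementally with Auto Loader")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("s3://example-landing/orders/")  # hypothetical landing bucket
    )

@dlt.table(comment="Cleaned orders with a basic quality expectation")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("ingested_at", F.current_timestamp())
    )
```

In practice the pipeline definition, targets, and environments would be packaged and deployed through a Databricks Asset Bundle rather than configured by hand.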
Data Ingestion & Streaming
- Develop high-volume ingestion pipelines using Databricks Auto Loader
- Implement CDC patterns (incremental loads, merges, deduplication, late-arriving data); see the sketch after this list
- Ensure ingestion is resilient, observable, and optimized for cost/performance
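The Auto Loader plus MERGE pattern referenced above looks roughly like the following PySpark sketch. The bucket paths, table name, and key columns are placeholders chosen for illustration, and `spark` is assumed to be the active Databricks session.

```python
# Illustrative CDC upsert: Auto Loader stream + Delta MERGE via foreachBatch.
# All paths, table names, and keys are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import functions as F
from pyspark.sql.window import Window

def upsert_batch(batch_df, batch_id):
    # Keep only the latest change per key to handle duplicates and late arrivals.
    latest = (
        batch_df.withColumn(
            "rn",
            F.row_number().over(
                Window.partitionBy("customer_id").orderBy(F.col("event_ts").desc())
            ),
        )
        .filter("rn = 1")
        .drop("rn")
    )
    target = DeltaTable.forName(spark, "main.crm.customers")
    (
        target.alias("t")
        .merge(latest.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )

(
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "s3://example-checkpoints/customers/schema")
    .load("s3://example-landing/customers/")
    .writeStream.foreachBatch(upsert_batch)
    .option("checkpointLocation", "s3://example-checkpoints/customers/cdc")
    .start()
)
```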
Lakehouse Governance & Security
- Configure and manage Unity Catalog (example grants shown after this list) for:
  - metadata management
  - access control / RBAC
  - lineage and governance workflows
- Help enforce standards for data quality, naming, and environment separation
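For context, Unity Catalog access control is typically expressed as SQL grants; a small sketch follows, with catalog, schema, and group names as placeholders only.

```python
# Hypothetical Unity Catalog grants issued from a Databricks notebook.
spark.sql("GRANT USE CATALOG ON CATALOG main TO `data-engineers`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.crm TO `analysts`")
spark.sql("GRANT SELECT ON SCHEMA main.crm TO `analysts`")  # applies to all tables in the schema
```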
Orchestration & Workflow Development
- Build and maintain complex workflows using Databricks Workflows / Jobs
- Integrate with external orchestration tools when required
- Improve operational reliability (retry logic, alerting, dependency handling)
Infrastructure as Code (IaC)
- Use Terraform to manage Databricks workspace resources and AWS components
- Support AWS-aligned deployment patterns for:
  - S3 storage
  - compute configuration
  - workspace setup and environment parity
Requirements
- Strong hands-on experience with Databricks in production environments
- Deep expertise in PySpark and advanced SQL
- Experience with:
  - Delta Lake
  - ingestion pipelines (batch + streaming)
  - data transformation frameworks/patterns
- Proven experience implementing:
  - CI/CD in Databricks
  - Databricks Asset Bundles (DABs)
  - declarative pipelines (Lakeflow)
- Strong AWS infrastructure familiarity (S3, IAM, compute patterns)
- Terraform experience specifically with Databricks + AWS resources
- PowerShell scripting experience (an asset)
Benefits
- Retirement Plan (401k, IRA)
- Work From Home
- Health Care Plan
Equal Employment Opportunity:
Arctiq is an equal opportunity employer and does not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other protected characteristic under applicable law.
Employment Disclaimer:
This job description is not intended to create an employment contract. Employment with the Company is at-will, meaning employment may be terminated by either the employee or the Company at any time, with or without cause or notice, subject to applicable law.
Duties Subject to Change:
Arctiq reserves the right to modify, add, or reassign duties and responsibilities at any time based on business needs.
Confidentiality:
This position may require access to confidential or sensitive information. Employees are expected to maintain confidentiality and comply with all Company policies and applicable security requirements.
