About Us:
At CompassX, our clients rely on us to lead high-priority strategic initiatives and transformational projects. Our mission is to build a community of people who develop innovative approaches and deliver the best outcomes for our clients.
You will have the opportunity to leverage your experience, creativity, and skills to make an impact for our clients and shape the trajectory of our firm, driving growth for both the team and your career.
We are honored to be recognized as a “Best Place to Work” in Southern California and named to the Inc. 5000 list of the fastest-growing private companies in the U.S.
We’re looking for a Senior Data Engineer (Python and Snowflake) to support one of our life sciences clients. This role will focus on designing and building scalable data pipelines, integrating data into Snowflake, and enabling downstream analytics and reporting in Power BI.
The client environment is still maturing, so your ability to shape structure, define logic, and deliver value will be key.
What you'll do
- Design, build, and maintain ETL/ELT pipelines using Python, integrating data from APIs, flat files, and relational systems into Snowflake
- Develop and optimize data models and transformations in dbt to support reporting and analytical use cases
- Implement data validation, testing, and quality checks to ensure accuracy and reliability across datasets
- Manage data workflows, orchestration, and automation using modern tools and practices (e.g., Airflow, GitHub Actions)
- Support downstream users and analysts by preparing clean, well-structured datasets for Power BI dashboards and reports
- Contribute to the development and management of containerized environments using Docker and Linux
- Collaborate with BI developers, analysts, and business stakeholders to deliver end-to-end data solutions
- Help define and promote data engineering best practices, frameworks, and standards within a growing data environment
What we're looking for
- 7–10 years of data engineering experience across the full data lifecycle
- Strong programming experience in Python, including data libraries such as Pandas, PySpark, or SQLAlchemy
- Advanced SQL skills and hands-on experience developing transformations using dbt
- Experience with Snowflake or similar cloud data platforms (e.g., Redshift, BigQuery)
- Working knowledge of Linux, Docker, and GitHub Actions for environment management and CI/CD automation
- Understanding of data architecture concepts, including modeling, lineage, and orchestration
- Exposure to Power BI and experience supporting analytics or BI teams
- Comfortable working in a fast-paced and collaborative environment