Overview
The Enterprise Data Warehouse Engineer will play a critical role in designing, building, and maintaining the ASPCA’s enterprise data warehouse ecosystem. This role will be part of the Enterprise Data Engineering Operations team, which sits within the broader Product, Data, and Reporting Solutions (PDRS) department. The Engineer will develop and maintain dbt models and data workflows across the medallion architecture, integrate master data management (MDM) workflows into enterprise pipelines, and partner with cross-functional stakeholders to support the ASPCA’s transition from siloed systems to a centralized enterprise data solution.
Reporting to the Director, Enterprise Data Engineering Operations, the Enterprise Data Warehouse Engineer will ensure that data assets are accurate, performant, reliable, and aligned with organizational standards. The ideal candidate will combine strong technical skills with a collaborative mindset, thriving in a distributed data environment where clarity, consistency, and data quality are essential.
Who We Are
The Information Technology (IT) department maintains and improves a broad portfolio of technologies that support our staff. IT ensures that all ASPCA staff, partners, and communities have the systems they need to work effectively and efficiently to improve animal welfare. The sub-teams within IT include Product, Data and Reporting Solutions; Operations and Information Security; Technical Support; Enterprise Architecture; and Business Operations.
What You’ll Do
The Enterprise Data Warehouse Engineer reports directly to the Director, Enterprise Data Engineering Operations, and has no direct reports.
Where and When You’ll Work
This remote‑based position (which may require periodic travel as described below) is open to all eligible candidates located within the United States.
What You’ll Get
Compensation
Starting pay for the successful applicant will depend on a variety of factors, including but not limited to education, training, experience, location, business needs, internal equity, market demands, and the budgeted amount for the role. The target hiring range applies to new hire offers only, and compensation may increase beyond the maximum of the hiring range based on performance over time. The maximum of the hiring range is reserved for candidates with the highest qualifications and relevant experience. The expected hiring salary range for this role is set forth below and may be modified in the future.
$125,000–$130,000 annually
Benefits
At the ASPCA, you don’t have to choose between your passion and making a living. Our comprehensive benefits package helps ensure you can live a rewarding life at work and at home. Our benefits include, but are not limited to:
Affordable health coverage, including medical, employer-paid dental, and optional vision plans.
Flexible time off that includes vacation time, paid personal time, sick time, bereavement time, paid parental leave, and 10 company-paid holidays, allowing you even more flexibility to observe the days that mean the most to you.
Competitive financial incentives and retirement savings, including a 401(k) plan with generous employer contributions — we match dollar-for-dollar up to 4% and provide an additional 4% contribution toward your future each year.
Robust professional development opportunities, including classes, on-the-job training, coaching and mentorship with industry-leading peers, internal mobility, opportunities to support in the field and so much more.
For more information on our benefits offerings, visit our website.
Please note that a cover letter is requested. Applications will be accepted until 5:00 p.m. ET on Thursday, April 16, 2026.
Responsibilities
Responsibility buckets are listed in general order of importance. They include, but are not limited to:
Medallion Layer Data Modeling and Development
Architect, implement, and maintain dbt models across the medallion architecture, applying appropriate transformation patterns to meet operational, analytical, and dimensional data requirements
Integrate MDM workflows and reference data into medallion-layer transformations in alignment with enterprise data governance standards
Apply mastered entities, harmonized identifiers, survivorship rules, and standardized reference values to Gold‑layer models as defined by enterprise MDM policies
Build data models that power enterprise analytics, reporting, and other downstream uses
Implement modeling best practices (e.g., naming conventions, documentation, testing, and lineage tracking) across all layers to ensure dbt models comply with enterprise governance standards and quality, performance, and security requirements
Optimize SQL code, dbt transformation logic, and Snowflake compute performance to ensure efficient, scalable pipelines
Pipeline Orchestration and Operations
Design, build, and maintain robust pipelines that support data ingestion, transformation, and delivery while adhering to engineering standards for well‑structured code, clear documentation, and idempotent processing
Implement and maintain scalable job orchestration, monitoring, alerting, automated testing, and error‑handling capabilities
Preserve version control for pipeline code artifacts and support CI/CD workflows to ensure reliable deployment of changes
Troubleshoot pipeline problems, such as data quality issues and data freshness concerns, across Microsoft Fabric, dbt, and Snowflake
Conduct root cause analysis and proactively drive pipeline improvements
Collaboration, Alignment, and Data Quality
Work closely with the Data Management & BI team to align on definitions, requirements, and expectations, ensuring that engineered datasets are accurate, trusted, and analytics‑ready
Collaborate with the Strategy & Research team to deliver granular, well‑structured Silver‑layer datasets that support statistical analysis, data science modeling, and operational insights
Support enterprise data governance and data quality through strong metadata practices and clear documentation of transformation logic
Identify opportunities to improve data workflows, automate processes, and reduce technical debt
Participate in code reviews, knowledge-sharing sessions, and team‑wide initiatives that strengthen engineering quality and consistency
Contribute to the advancement of the ASPCA’s data ecosystem by evaluating emerging tools and technologies
Qualifications
Excellent analytical and problem‑solving skills, with a strong commitment to data quality
Ability to collaborate effectively with both technical and non‑technical partners
Solid written and verbal communication skills, with the ability to clearly convey data and technical concepts
Comfortable operating in a highly distributed, cross‑functional environment
Skilled at managing multiple priorities, shifting requirements, and changing timelines
Demonstrates curiosity, creativity, and a willingness to experiment and learn
Takes initiative and works independently while valuing teamwork
Welcomes feedback and proactively seeks opportunities to improve systems and workflows
Values diversity of thought and embraces an inclusive, collaborative team culture
Ability to exemplify the ASPCA’s core values and behavioral competencies
Technical Requirements
Expert‑level proficiency in dbt Core, including advanced model design, macro development, custom tests, documentation practices, performance optimization, and integration with automated deployment pipelines
Advanced proficiency with Snowflake, including warehouse configuration, performance tuning, query optimization, cost management, and implementing role‑based access controls
Proficiency with Microsoft Fabric (or Azure Data Factory) for pipeline orchestration, monitoring, scheduled or event‑driven workflows, and Lakehouse integration
Deep knowledge of data warehousing principles, including dimensional modeling, medallion architecture, and ELT transformation patterns
Strong SQL skills required (Python is a plus)
Familiarity with Git/GitHub workflows and DevOps CI/CD practices
Understanding of data governance, metadata management, and enterprise data quality practices
Additional Information
Must be available for occasional off-hours support for critical data pipelines
Some travel to ASPCA locations and training sites (approximately 5% annually) may be required
Language
English (required)
Education and Work Experience
High School Diploma or GED (required)
3–5 years of experience designing and maintaining data warehouse models and transformation workflows required
3–5 years of experience building and maintaining dbt-based transformation pipelines across medallion layers required
3–5 years of experience working in a cloud data warehouse environment required (Snowflake strongly preferred)
Demonstrated experience applying dimensional modeling and enterprise data modeling practices (e.g., star schemas, SCDs, conformed dimensions)
3–5 years of experience orchestrating data pipelines using Microsoft Fabric, Azure Data Factory, or similar cloud orchestration tools required
Experience integrating with commercial Master Data Management (MDM) platforms; Reltio preferred
Experience working within structured DevOps workflows, version control, and automated delivery pipelines