Crisp is the leading collaborative commerce platform that connects CPG brands to real-time sales and inventory data from 40+ retailers and distributors. We serve 7,000+ brands representing over $2.5T in retail sales across 250K+ stores, working with industry leaders like J.M. Smucker, Nestlé, and Kraft Heinz. Following our recent $72M Series B funding and strategic acquisitions of Atheon Analytics, Cantactix, ClearBox Analytics, Lumidata, and others, we're positioned for aggressive global expansion and product innovation in the $15T+ global supply chain market.
We are looking for a Senior Data Analytics Engineer to design, build, and own our internal data flows and analytics infrastructure. In this role, you will sit at the intersection of data engineering and analytics, enabling data-driven decision-making across Product, Engineering, Sales, Marketing, Finance, and Leadership.
As a Data Analytics Engineer at Crisp, you will play a key role in unlocking the potential of our internal data. You will build reliable, scalable data pipelines, transform raw data into analytics-ready datasets, and partner closely with stakeholders to define the metrics, dashboards, and insights that power the business.
What you’ll be responsible for:
- Data Modeling & Pipelines
  - Design, build, and maintain robust ELT/ETL pipelines for internal data flows
  - Model and transform data using best practices and industry-standard approaches (e.g., star schemas, dimensional modeling, medallion architecture)
  - Ensure data reliability, freshness, and scalability across the analytics stack
- Analytics & Insights
  - Develop analytics-ready datasets that support self-service analytics and reporting
  - Partner with business teams to define KPIs, metrics, and data definitions
  - Build and maintain dashboards, reports, and exploratory analyses for key stakeholders
- Data Quality & Governance
  - Implement data quality checks, monitoring, and alerting
  - Establish and document data definitions, lineage, and best practices
  - Improve trust and adoption of internal analytics through consistency and clarity
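To give a flavor of the data quality work above, here is a minimal, purely illustrative sketch of a freshness-and-null-rate check. The field names, thresholds, and the idea of returning issues as plain strings are all assumptions for the example, not a description of Crisp's actual stack:

```python
from datetime import datetime, timedelta, timezone

def check_freshness_and_nulls(rows, ts_field, required_fields,
                              max_age_hours=24, max_null_rate=0.05):
    """Return a list of human-readable data-quality issues (empty = healthy)."""
    issues = []
    if not rows:
        return ["table is empty"]

    # Freshness: the newest record must be recent enough.
    newest = max(row[ts_field] for row in rows)
    if datetime.now(timezone.utc) - newest > timedelta(hours=max_age_hours):
        issues.append(f"stale data: newest record is {newest.isoformat()}")

    # Null rate: each required field must be mostly populated.
    for field in required_fields:
        nulls = sum(1 for row in rows if row.get(field) is None)
        rate = nulls / len(rows)
        if rate > max_null_rate:
            issues.append(f"{field}: null rate {rate:.1%} exceeds {max_null_rate:.1%}")
    return issues
```

In practice, checks like this typically live in dbt tests or a data observability tool rather than hand-rolled Python, but the shape of the logic is the same.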
Signs of a great candidate for this role:
- Data oriented: You think about data in a rigorously structured manner. You live by the mantra “garbage in, garbage out” and are deeply experienced in the art of data cleansing.
- Technical depth: Experience building and maintaining data pipelines with a modern data stack, and a solid understanding of data warehousing concepts and analytics best practices.
- Focus on the business problem: You are passionate about building robust data pipelines and systems that enable actionable insights. To deliver the right data in the right format, you take the time to understand how the business works, ensuring your solutions align with stakeholder needs and drive meaningful outcomes.
- Communication: Experience working with stakeholders to translate business questions into data solutions. Strong communication skills and ability to explain complex data concepts clearly.
- BI-as-Code Practitioner: Experience with visualization tools like Power BI (preferred), Looker, or Tableau. Crucially, you prefer programmatic and systematic usage (e.g., managing assets via APIs, Tabular Editor, or LookML) over "ClickOps" and manual dashboarding.
- AI-Native Workflows: You have a passion for AI-assisted development. You actively use tools like Cursor, Claude Code, or Gemini to accelerate your day-to-day tasks, and you are a standard-bearer for stakeholders to adopt these behaviors as well.
Must have:
- dbt: Strong proficiency in building and managing data transformations.
- Fivetran: Familiarity with Fivetran for large-scale data integrations, including applying custom transformations to automated pipelines.
- Python: Advanced knowledge of Python for data engineering tasks and automation.
- SQL: Proficiency in SQL and SQL-based languages, with expertise in composing, optimizing, and performance-tuning complex queries, including CTEs.
- git: Proficiency with git and version control workflows, plus familiarity with GitHub, GitHub Actions, and CI/CD pipelines, including experience automating data transformation and pipeline deployments.
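To make the SQL expectations concrete, here is a small, runnable sketch of the CTE style this role uses daily, executed against an in-memory SQLite database. The table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, region TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'east', 120.0),
        (2, 'east', 80.0),
        (3, 'west', 200.0);
""")

# CTEs break a query into named, readable steps -- the same layered
# style dbt models encourage: stage the data first, then aggregate.
query = """
WITH staged_orders AS (
    SELECT region, amount
    FROM orders
    WHERE amount > 0
),
regional_totals AS (
    SELECT region, SUM(amount) AS total_amount
    FROM staged_orders
    GROUP BY region
)
SELECT region, total_amount
FROM regional_totals
ORDER BY region
"""

rows = conn.execute(query).fetchall()
print(rows)  # [('east', 200.0), ('west', 200.0)]
```

The same query, split into a staging model and a mart model, is how it would typically be organized in dbt.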
Nice to have:
- GCP: Experience with Google Cloud Platform services, such as BigQuery, Cloud Run, Dataflow, and Composer.
- Finance, People, and Sales Data: Experience working with internal analytics datasets, such as financial or sales data.
- Data Observability: Familiarity with modern monitoring tools or solutions to track data freshness, volume, profiling output, and anomalies.
- Infrastructure as Code: Familiarity with Terraform for managing cloud resources (BigQuery datasets, IAM roles).
- Reverse ETL: Experience using Census or Hightouch to sync modeled data back into SaaS tools like Salesforce or HubSpot.
Signs of a great candidate for Crisp:
- Collaboration: You believe the best results come from working together. You share ideas, pitch in, and elevate those around you.
- Grit: You’re curious, self-driven, and unafraid to roll up your sleeves. You get the job done even when the path isn’t clear and adapt quickly when things change.
- People: You stay close to those we serve, listening, learning, and building what matters most.
- Feedback: You see it as fuel. You give it with care, take it with humility, and use it to level up.
- Ingenuity: You solve problems with creativity and speed. You look for ways to streamline, automate, or improve without being asked.
We are committed to transparency, diversity, and meritocracy, fostering an environment where every team member is empowered to make an impact, grow personally, and advance in their career. We invite you to join us — not just to take on a role, but to help shape a company you’re proud to be part of.
