Responsibilities:
Quality Strategy and Leadership:
- Define and execute the Quality Engineering vision and roadmap aligned with organizational goals.
- Build and lead a multi-disciplinary QA team (Platform QA, Data Quality, Automation, and Performance Engineering).
- Partner with engineering, product, and customer success leaders to ensure quality is embedded across the SDLC.
- Establish quality KPIs and dashboards for visibility into release readiness, product stability, and production health.
End-to-End Quality Ownership:
- Own functional and regression testing strategy across microservices, SDKs, APIs, and UI layers.
- Drive non-functional testing excellence: performance, scalability, reliability, failover, and compatibility.
- Lead security testing initiatives, collaborating with DevSecOps and compliance teams to ensure adherence to ISO 27001, SOC 2, and GDPR standards.
- Introduce AI-augmented testing frameworks leveraging machine learning for predictive defect detection and data drift analysis.
Data Science and AI Quality:
- Partner with the Data Science org to establish data quality pipelines for training and inference data.
- Define testing standards for AI model accuracy, precision, bias, and drift monitoring.
- Ensure data-driven validation frameworks are integrated into MLOps and deployment cycles.
Customer Success Engineering Quality:
- Oversee QA processes for custom deployments, integrations, and client-specific configurations.
- Build automation frameworks to validate end-to-end customer experience workflows and integrations (Genesys, Salesforce, Zendesk, etc.).
Process and Tooling Excellence:
- Institutionalize CI/CD-driven test automation for faster and safer deployments.
- Evaluate and implement next-gen QA tools for test management, performance benchmarking, and observability.
- Drive a shift-left testing culture and early defect detection through static analysis, contract testing, and chaos testing.
- Foster collaboration among QA, DevOps, and Observability teams on quality-in-production metrics.
Requirements:
- 10+ years of experience in software quality engineering, with 3+ years leading QA teams in a SaaS or platform environment.
- Proven experience in AI/ML or data-centric product quality, including validation of data pipelines and ML models.
- Deep understanding of microservices architecture, cloud platforms (AWS/Azure), and CI/CD pipelines.
- Hands-on experience with automation frameworks (Selenium, Cypress, Playwright, TestNG, PyTest, etc.).
- Expertise in performance testing tools (JMeter, Gatling, Locust) and security testing practices (OWASP guidelines, Burp Suite, ZAP).
- Strong grasp of API and SDK testing, ideally for multi-platform environments (web, mobile, voice).
- Exceptional communication, leadership, and stakeholder management skills with an ability to influence across functions.
Preferred Skills:
- Experience in AI/ML testing, data validation, and model governance frameworks.
- Familiarity with observability stacks (ELK, Grafana, Datadog, OpenTelemetry) for production monitoring.
- Understanding of GenAI systems, prompt testing, and retrieval-augmented generation (RAG) validation.
- Exposure to B2B enterprise platforms and multi-tenant SaaS architectures.
- Knowledge of secure SDLC and compliance frameworks (SOC 2, ISO 27001).
Why Netomi:
- Work at the frontier of AI-driven customer experience.
- Influence quality strategy across platform, AI, and customer success ecosystems.
- Collaborate with top-tier engineers and data scientists in a high-growth, innovation-driven culture.
- Competitive compensation, fast career progression, and global exposure.
