Company Summary
Irth Solutions is a software product company building cutting-edge technology platforms that enable data-driven insights across Damage Prevention, Asset Integrity, Land Management, and Stakeholder Engagement. With a strong product culture, collaborative environment, and high growth potential, Irth offers opportunities to work on enterprise-scale data and AI platforms.
Irth is building a governed, multi-cloud Databricks Lakehouse to support analytics, AI/ML innovation, and customer-facing AI products across AWS, Azure, and GCP.
Job Summary
As an MLOps / LLMOps Engineer, you will design, automate, and operate scalable ML and LLM systems on Irth’s enterprise Lakehouse platform.
You will work closely with Data Science, Engineering, and Product teams to deploy reliable, secure, and production-ready ML and GenAI solutions. This role focuses on operationalizing ML models, building CI/CD pipelines, ensuring governance and compliance, and maintaining high-performance, observable AI systems.
Requirements
Primary Responsibilities
ML/LLM Platform Development
- Operationalize model training, evaluation, packaging, and deployment using Databricks, Delta Lake, and medallion architecture.
- Implement Unity Catalog model governance, lineage tracking, and access control (see the registration sketch after this list).
- Develop reusable job templates, cluster policies, and standardized deployment patterns.
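For illustration only, registering a trained model in Unity Catalog so that lineage and access control apply might look roughly like the minimal sketch below. It assumes a Databricks workspace with Unity Catalog and MLflow available; the catalog, schema, and model names are hypothetical.
```python
# Minimal sketch: register a trained model in Unity Catalog via MLflow.
# Assumes a Databricks workspace with Unity Catalog; the name
# "main.ml_models.risk_scoring" is hypothetical.
import mlflow
import mlflow.sklearn
import numpy as np
from mlflow.tracking import MlflowClient
from sklearn.linear_model import LogisticRegression

# Use the Unity Catalog model registry instead of the legacy workspace registry.
mlflow.set_registry_uri("databricks-uc")

with mlflow.start_run() as run:
    # Stand-in for real training code.
    model = LogisticRegression().fit(np.array([[0.0], [1.0]]), [0, 1])
    mlflow.sklearn.log_model(model, artifact_path="model")

# Register under a three-level UC name so lineage and access control (GRANTs) apply.
version = mlflow.register_model(
    f"runs:/{run.info.run_id}/model", name="main.ml_models.risk_scoring"
)

# Promote via an alias rather than mutable stages; serving jobs resolve the alias.
MlflowClient().set_registered_model_alias(
    name="main.ml_models.risk_scoring", alias="champion", version=version.version
)
```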
ML/LLM Production Deployment
- Deploy and manage ML and GenAI solutions including risk scoring, anomaly detection, predictive maintenance, NLP, and RAG pipelines.
- Build and optimize LLM pipelines using vector databases, model serving endpoints, and inference workflows.
- Optimize models using quantization, caching, and performance tuning techniques.
- Implement batch and real-time inference pipelines with defined SLAs.
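As one example of the batch side of this work, a scheduled scoring job against a registered model might look roughly like the sketch below. The table names, feature layout, and "champion" alias are assumptions, and it presumes a Databricks runtime with Spark and MLflow.
```python
# Minimal sketch: batch inference with a Unity Catalog-registered model as a Spark UDF.
# Table names, the model alias, and the feature columns are hypothetical.
import mlflow
import mlflow.pyfunc
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
mlflow.set_registry_uri("databricks-uc")

# Resolve the "champion" alias to a concrete model version and wrap it as a UDF.
score_udf = mlflow.pyfunc.spark_udf(
    spark,
    model_uri="models:/main.ml_models.risk_scoring@champion",
    result_type="double",
)

features = spark.read.table("main.silver.asset_features")
feature_cols = [c for c in features.columns if c != "asset_id"]

# Score the batch and land results in a gold table that downstream SLAs and alerting read from.
(features
 .withColumn("risk_score", score_udf(*feature_cols))
 .withColumn("scored_at", F.current_timestamp())
 .write.mode("append")
 .saveAsTable("main.gold.asset_risk_scores"))
```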
Reliability, Security & Compliance
- Implement data contracts, schema validation, and data quality checks across ML pipelines (an example check is sketched below this list).
- Ensure secure handling of sensitive data including PII detection, classification, and obfuscation.
- Maintain full lineage from data sources to deployed models and serving endpoints.
- Enforce data residency, governance, and compliance policies.
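A minimal sketch of the kind of data-contract and quality gate described above is shown here; the table name, expected columns, and key check are illustrative assumptions.
```python
# Minimal sketch: enforce a schema "contract" and a basic quality check before a table
# feeds training or inference. The table name and expected columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

EXPECTED = {  # agreed contract: column name -> Spark type
    "asset_id": "string",
    "ticket_count": "bigint",
    "last_inspection_date": "date",
}

df = spark.read.table("main.silver.asset_features")
actual = {f.name: f.dataType.simpleString() for f in df.schema.fields}

missing = set(EXPECTED) - set(actual)
mismatched = {
    c: (EXPECTED[c], actual[c])
    for c in EXPECTED
    if c in actual and actual[c] != EXPECTED[c]
}
if missing or mismatched:
    raise ValueError(f"Data contract violation: missing={missing}, type mismatches={mismatched}")

# Basic data-quality gate: business keys must be populated.
null_keys = df.filter(df.asset_id.isNull()).count()
if null_keys:
    raise ValueError(f"{null_keys} rows have a null asset_id")
```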
CI/CD Automation & Testing
- Implement CI/CD pipelines using GitHub Actions and Databricks Asset Bundles.
- Automate deployments across DEV, QA, and PROD environments.
- Develop unit and integration tests for data pipelines and ML models (a sample test follows this list).
- Ensure version control, reproducibility, and automated deployment workflows.
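For example, a pipeline transformation can be covered by a local PySpark unit test that runs in CI before the bundle is promoted (typically via `databricks bundle validate` and `databricks bundle deploy` per target). The transformation, column names, and thresholds below are illustrative assumptions, not an existing Irth function.
```python
# Minimal sketch: a local PySpark unit test for a pipeline transformation,
# runnable in a GitHub Actions job before deployment.
import pytest
from pyspark.sql import SparkSession, functions as F


def add_risk_band(df):
    """Bucket a numeric risk score into low/medium/high bands."""
    return df.withColumn(
        "risk_band",
        F.when(F.col("risk_score") >= 0.8, "high")
         .when(F.col("risk_score") >= 0.5, "medium")
         .otherwise("low"),
    )


@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()


def test_add_risk_band(spark):
    df = spark.createDataFrame(
        [("a", 0.9), ("b", 0.6), ("c", 0.1)], ["asset_id", "risk_score"]
    )
    result = {r["asset_id"]: r["risk_band"] for r in add_risk_band(df).collect()}
    assert result == {"a": "high", "b": "medium", "c": "low"}
```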
Observability & Operations
- Monitor pipeline health, model performance, drift, and system reliability (a drift-check sketch follows this list).
- Implement alerting, incident response workflows, and automated ticketing.
- Track LLM performance metrics including latency, hallucination rates, and API costs.
- Develop runbooks, disaster recovery procedures, and operational documentation.
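One common drift signal is the Population Stability Index (PSI) between a training baseline and recent scores, sketched below; the 0.2 alert threshold and the synthetic data are assumptions.
```python
# Minimal sketch: Population Stability Index (PSI) as one drift signal between a
# training baseline and recent model scores.
import numpy as np


def population_stability_index(baseline, current, bins=10):
    """PSI over bins derived from the baseline; ~0.2+ is commonly treated as notable drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)  # guard against log(0) on empty bins
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))


baseline_scores = np.random.default_rng(0).normal(0.40, 0.10, 10_000)  # stand-in for training data
recent_scores = np.random.default_rng(1).normal(0.55, 0.10, 10_000)    # stand-in for recent output

psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.2:
    print(f"ALERT: score drift detected (PSI={psi:.3f}); open an incident ticket")
```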
FinOps & Cost Optimization
- Apply tagging policies and cost tracking for ML infrastructure.
- Support budget monitoring, cost optimization, and resource management.
Skills & Experience
Required:
- 3–5 years of experience in MLOps, LLMOps, or ML platform engineering roles.
- Hands-on experience with Databricks, Delta Lake, Unity Catalog, and ML deployment workflows.
- Strong experience with CI/CD pipelines using GitHub Actions and infrastructure automation.
- Experience implementing data quality validation, schema governance, and data contracts.
- Experience building production-grade ML pipelines with monitoring and observability.
- Strong security knowledge including RBAC, encryption, data residency, and governance practices.
- Proficiency in Python, SQL, and distributed data processing frameworks.
Preferred:
- Experience with LLM pipelines, prompt engineering, RAG workflows, and model optimization.
- Experience with vector databases, model serving, and MLflow.
- Experience with Azure and AWS cloud platforms, including security and networking.
- Experience with geospatial data and analytics.
- Familiarity with Power BI, semantic layers, and enterprise analytics platforms.
- Experience with disaster recovery, FinOps, and enterprise-scale ML operations.
EDUCATION
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field, or equivalent professional experience.
Benefits
WHAT IS IN IT FOR YOU
- Being an integral part of a dynamic, growing company that is well respected in its industry.
- Competitive pay based on experience.
