Responsibilities
- Lead the architecture and design of scalable, secure, and high-performing data solutions using Snowflake.
- Design and implement robust data models (star, snowflake, normalized, and denormalized) for reporting and analytics use cases.
- Develop and manage ETL/ELT pipelines using tools like Informatica, dbt, Talend, Matillion, or custom scripts.
- Optimize Snowflake performance through clustering, caching, resource monitors, and query tuning (see the first sketch after this list).
- Integrate Snowflake with diverse data sources including cloud storage (S3, Azure Blob), APIs, RDBMS, and real-time systems.
- Implement security best practices including data masking, access controls, and role-based access control (RBAC), as in the second sketch after this list.
- Collaborate with stakeholders including data engineers, analysts, BI developers, and business users to understand requirements and provide technical guidance.
- Define and enforce data governance, data quality, and metadata management policies.
- Stay updated on emerging Snowflake features and industry trends to continuously improve architecture and processes.
- Coordinate with data scientists, analysts, and other stakeholders to meet ongoing data-related needs.
- Help the Data Science & Analytics Practice grow by mentoring junior Practice members, leading initiatives, and driving Data Practice offerings.
- Provide thought leadership by representing the Practice and the organization on internal and external platforms.
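
For illustration only (not itself a requirement of the role), here is a minimal sketch of the clustering and resource-monitor work described above, issuing Snowflake SQL through the snowflake-connector-python package. All names (the account credentials, sales.orders table, etl_wh warehouse, and monitor) are hypothetical assumptions.

```python
# Minimal sketch: Snowflake performance housekeeping via snowflake-connector-python.
# All object names (sales.orders, etl_wh, daily_etl_monitor) are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # hypothetical account identifier
    user="my_user",
    password="my_password",
    role="ACCOUNTADMIN",         # resource monitors require account-admin privileges
    warehouse="ETL_WH",
)
cur = conn.cursor()

# Define a clustering key so micro-partitions are pruned on common filter columns.
cur.execute("ALTER TABLE sales.orders CLUSTER BY (order_date, region)")

# Cap daily credit spend and suspend the warehouse when the quota is exhausted.
cur.execute("""
    CREATE OR REPLACE RESOURCE MONITOR daily_etl_monitor
      WITH CREDIT_QUOTA = 100
      FREQUENCY = DAILY
      START_TIMESTAMP = IMMEDIATELY
      TRIGGERS ON 90 PERCENT DO NOTIFY
               ON 100 PERCENT DO SUSPEND
""")
cur.execute("ALTER WAREHOUSE etl_wh SET RESOURCE_MONITOR = daily_etl_monitor")

cur.close()
conn.close()
```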
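A second minimal sketch, covering dynamic data masking and RBAC. Masking policies require Snowflake Enterprise Edition or higher, and every role and object name here (PII_READER, ANALYST, analytics.crm.customers) is made up for illustration.

```python
# Minimal sketch: dynamic data masking plus role-based access control.
# Role and object names (PII_READER, ANALYST, analytics.crm.customers) are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password",
    role="SECURITYADMIN", warehouse="ADMIN_WH",
)
cur = conn.cursor()

# Masking policy: only PII_READER sees raw email addresses; everyone else sees a redacted value.
cur.execute("""
    CREATE OR REPLACE MASKING POLICY analytics.crm.pii_email_mask AS (val STRING)
      RETURNS STRING ->
      CASE WHEN CURRENT_ROLE() IN ('PII_READER') THEN val ELSE '***MASKED***' END
""")
cur.execute("""
    ALTER TABLE analytics.crm.customers
      MODIFY COLUMN email SET MASKING POLICY analytics.crm.pii_email_mask
""")

# RBAC: grant an analyst role read-only access, scoped to one schema.
cur.execute("CREATE ROLE IF NOT EXISTS analyst")
cur.execute("GRANT USAGE ON DATABASE analytics TO ROLE analyst")
cur.execute("GRANT USAGE ON SCHEMA analytics.crm TO ROLE analyst")
cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA analytics.crm TO ROLE analyst")

cur.close()
conn.close()
```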
Qualifications
- Ability to translate business requirements into data requests, reports, and dashboards.
- Strong database and data modeling concepts, with exposure to SQL and NoSQL databases.
- Strong grasp of data architecture patterns and principles; ability to design secure, scalable data lakes, data warehouses, data hubs, and event-driven architectures.
- Expertise in designing and writing ETL processes.
- Strong experience with Snowflake and its components.
- Knowledge of Master Data Management (MDM) and related tools.
- Strong exposure to data security and privacy regulations (e.g., GDPR, HIPAA) and best practices.
- Skilled in ensuring data accuracy, consistency, and quality.
- Experience with AWS services such as S3, Redshift, Lambda, DynamoDB, EMR, Glue, Lake Formation, Athena, QuickSight, RDS, Kinesis, MSK (Managed Kafka), OpenSearch/Elasticsearch, ElastiCache, API Gateway, and CloudWatch.
- Ability to implement data validation processes and establish data quality standards (see the sketch following this list).
- Experience with Linux and scripting.
- Proficiency in data visualization tools such as Tableau or Power BI to create meaningful insights.
- Experience working with data ingestion tools such as Fivetran, Stitch, or Matillion.
- Experience with AWS IoT solutions.
- Exposure to Apache NiFi, Talend, and Informatica.
- Knowledge of GCP data services.
- Exposure to AI/ML technologies.
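
For illustration, a minimal sketch of the kind of data validation checks referenced above (completeness, uniqueness, validity), written with pandas; the column names and thresholds are assumptions, not part of this posting.

```python
# Minimal sketch: lightweight data quality checks on an extracted dataset.
# Column names and thresholds are illustrative assumptions.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data quality violations."""
    problems = []

    # Completeness: key columns must not contain nulls.
    for col in ("order_id", "order_date", "amount"):
        nulls = df[col].isna().sum()
        if nulls:
            problems.append(f"{col}: {nulls} null value(s)")

    # Uniqueness: order_id is the business key.
    dupes = df["order_id"].duplicated().sum()
    if dupes:
        problems.append(f"order_id: {dupes} duplicate value(s)")

    # Validity: order amounts must be positive.
    bad_amounts = (df["amount"] <= 0).sum()
    if bad_amounts:
        problems.append(f"amount: {bad_amounts} non-positive value(s)")

    return problems

if __name__ == "__main__":
    sample = pd.DataFrame({
        "order_id": [1, 2, 2, 4],
        "order_date": ["2024-01-01", None, "2024-01-03", "2024-01-04"],
        "amount": [100.0, 250.0, -5.0, 80.0],
    })
    for issue in validate_orders(sample):
        print("FAIL:", issue)
```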