We're looking for Managers (GTM and Cloud/Big Data Architects) with strong technology and data expertise and a proven track record in both delivery and pre-sales. This is a fantastic opportunity to join a leading firm and a growing Data and Analytics team.
Requirements
- Proven experience driving Analytics GTM/pre-sales by collaborating with senior stakeholders across client and partner organizations in BCM, WAM, and Insurance
- Work with clients to convert business problems and challenges into technical solutions, accounting for security, performance, scalability, etc.
- Understand current and future-state enterprise architecture
- Contribute to various technical streams during project implementation
- Provide product- and design-level technical best practices
- Interact with senior client technology leaders to understand their business goals, and create, architect, propose, develop, and deliver technology solutions
- Define and develop client-specific best practices for data management in Hadoop or cloud environments
- Recommend design alternatives for the data ingestion, processing, and provisioning layers
- Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark (a minimal batch sketch follows this list)
- Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies (a streaming sketch also follows this list)
- Tech stack: experience building on AWS using S3, EC2, Redshift, Glue, EMR, DynamoDB, Lambda, QuickSight, etc.
- Experience in PySpark/Spark/Scala
- Experience with version control and CI tools (Git, Jenkins, Apache Subversion)
- AWS certifications or other related professional technical certifications
- Experience with cloud or on-premises middleware and other enterprise integration technologies.
- Experience in writing MapReduce and/or Spark jobs.
- Demonstrated strength in architecting data warehouse solutions and integrating technical components.
- Good analytical skills with excellent knowledge of SQL.
- 8+ years of work experience with very large data warehousing environments
- Excellent communication skills, both written and verbal
- 3+ years of experience with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools
- 3+ years of experience with data modelling concepts
- 3+ years of Python and/or Java development experience
- 3+ years of experience in Big Data stack environments (EMR, Hadoop, MapReduce, Hive)
- Experience architecting highly scalable solutions on AWS
- Strong understanding of and familiarity with AWS/GCP/Big Data ecosystem components
- Strong understanding of underlying AWS/GCP architectural concepts and distributed computing paradigms
- Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming
- Hands-on experience with major components such as cloud ETL services, Spark, and Databricks
- Experience working with NoSQL in at least one of these data stores: HBase, Cassandra, MongoDB
- Knowledge of Spark/Kafka integration, including Spark jobs that consume messages from multiple Kafka partitions (see the streaming sketch below)
- Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems like Cloudera and Databricks.
- Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms
- Good knowledge of Apache Kafka and Apache Flume
- Experience in enterprise-grade solution implementations
- Experience in performance benchmarking of enterprise applications
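As a concrete illustration of the batch-ingestion requirement above, here is a minimal PySpark sketch. The bucket, path, and table names are hypothetical, and a production job would declare an explicit schema rather than relying on header-based inference:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical source path and target table, for illustration only.
RAW_PATH = "s3://example-bucket/raw/orders/"
TARGET_TABLE = "analytics.orders"

spark = (
    SparkSession.builder
    .appName("batch-ingest-orders")
    .enableHiveSupport()  # register output in the Hive metastore
    .getOrCreate()
)

# Read the raw batch extract and stamp each row with its load date.
orders = (
    spark.read
    .option("header", "true")
    .csv(RAW_PATH)
    .withColumn("ingest_date", F.current_date())
)

# Land the data as a partitioned Hive table for the provisioning layer.
(
    orders.write
    .mode("overwrite")
    .partitionBy("ingest_date")
    .saveAsTable(TARGET_TABLE)
)
```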
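And for the real-time requirement, a minimal sketch using Spark Structured Streaming (the successor to the DStream-based Spark Streaming API) against Kafka. Broker addresses, topic, and paths are hypothetical, and the job assumes the spark-sql-kafka connector package is available on the classpath. Because each Kafka partition maps to a Spark task, a single job consumes from all partitions of the topic in parallel:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical brokers, topic, and paths, for illustration only.
BROKERS = "broker1:9092,broker2:9092"
TOPIC = "orders-events"

spark = SparkSession.builder.appName("stream-ingest-orders").getOrCreate()

# Subscribe to the topic; Spark distributes Kafka partitions across tasks.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", BROKERS)
    .option("subscribe", TOPIC)
    .option("startingOffsets", "latest")
    .load()
    # Kafka delivers bytes; cast the payload to a string for parsing.
    .select(
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp"),
    )
)

# Sink to Parquet with a checkpoint so the stream restarts safely.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3://example-bucket/landing/orders-events/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/orders-events/")
    .trigger(processingTime="1 minute")
    .start()
)

query.awaitTermination()
```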
Benefits
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that’s right for you
