What are we looking for?
What's unique about this job?
This role offers the opportunity to build and shape a modern DataOps ecosystem from the ground up, working with a highly scalable AWS stack and real-time data processing.
You’ll operate with a high level of autonomy, directly influencing architectural decisions and best practices in a fast-moving, low-structure environment.
If you enjoy owning systems end-to-end, solving complex data challenges, and working on infrastructure that directly supports business-critical workflows, this role offers exactly that.
Here are some of the exciting day-to-day challenges you will face in this role:
- Design, develop, and maintain data pipelines and datastores that support enterprise analytics, data science, and operational workloads.
- Lead and support large-scale database migration initiatives, including on-premises-to-cloud migrations.
- Monitor, analyze, and optimize the performance and stability of data layer services and platforms.
- Ensure data integrity, quality, and compliance across pipelines and datasets.
- Collaborate closely with peers across engineering, analytics, and technology teams.
- Guide, coach, and mentor data engineers, BI developers, and analysts.
- Design and implement enterprise-scale data solutions with long-term business impact.
- Build and maintain data processing solutions using Python and/or Scala.
- Work with a variety of data ingestion patterns, including SFTP, APIs, streaming, and batch processing (a short illustrative sketch follows this list).
- Design and support database models optimized for analytical and reporting use cases.
- Implement monitoring, alerting, and observability for data pipelines and infrastructure.
- Maintain clear and comprehensive documentation of data architectures, pipelines, and processes.
- Work within an Agile environment, collaborating through tools such as Jira and Git.
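
To make the ingestion and processing bullets concrete, here is a minimal, hypothetical Python sketch of one common streaming pattern on this stack (S3 event → Lambda → Kinesis). The stream name and payload fields are illustrative assumptions, not part of any actual codebase:

```python
import json

import boto3

# Hypothetical stream name, for illustration only.
STREAM_NAME = "example-ingest-stream"

kinesis = boto3.client("kinesis")


def handler(event, context):
    """Forward S3 object-created events into a Kinesis stream.

    A minimal example of the S3 -> Lambda -> Kinesis streaming
    ingestion pattern; error handling and batching are omitted.
    """
    for record in event.get("Records", []):
        payload = {
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
            "size": record["s3"]["object"].get("size"),
        }
        kinesis.put_record(
            StreamName=STREAM_NAME,
            Data=json.dumps(payload).encode("utf-8"),
            PartitionKey=payload["key"],
        )
```

A production version would typically add batching via put_records, retries, a dead-letter queue, and structured logging to CloudWatch.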
You will shine if you have:
- 5+ years of experience in DevOps or DataOps roles with a strong focus on AWS.
- Hands-on experience with AWS services such as EMR (Spark), Redshift, RDS, Glue, Lambda, Kinesis, Step Functions, EventBridge, SNS/SQS, KMS, and CloudWatch.
- Strong proficiency in Python and SQL.
- Experience working with relational, NoSQL, and columnar databases.
- Experience implementing Infrastructure as Code using Terraform or CloudFormation.
- Experience designing and maintaining CI/CD pipelines.
- Familiarity with data quality frameworks, observability tools, and data governance practices (see the sketch after this list).
- Knowledge of handling sensitive data (PII) and compliance standards such as HIPAA.
- Bachelor’s degree in Computer Science, Data Science, or equivalent experience.
- Proven ability to work autonomously and take ownership of infrastructure and data workflows in a production environment.
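
As a hedged illustration of the data-quality and observability expectations above, the sketch below gates a dataset on per-column null rates and publishes the worst rate as a CloudWatch custom metric. The namespace, metric name, and 5% threshold are assumptions chosen for the example:

```python
import boto3
import pandas as pd

cloudwatch = boto3.client("cloudwatch")


def null_rate_gate(df: pd.DataFrame, dataset: str, threshold: float = 0.05) -> bool:
    """Return False if any column's null rate exceeds the threshold.

    Publishes the worst observed null rate as a CloudWatch custom
    metric so dashboards and alarms can track data quality over time.
    """
    worst_rate = float(df.isna().mean().max())  # highest per-column null fraction
    cloudwatch.put_metric_data(
        Namespace="DataQuality",  # assumed namespace, chosen for this example
        MetricData=[
            {
                "MetricName": "MaxNullRate",
                "Dimensions": [{"Name": "Dataset", "Value": dataset}],
                "Value": worst_rate,
            }
        ],
    )
    return worst_rate <= threshold
```

In practice a check like this would run as a pipeline step, with a CloudWatch alarm on the published metric driving alerting.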
It doesn’t hurt if you also have:
- AWS certifications (e.g., Data Engineer – Associate, DevOps Engineer – Professional).
- Experience with AWS DMS, Secrets Manager, SES, and containerization tools such as Docker.
- Experience with BI tools such as Tableau or Power BI.
- Experience with data observability platforms.
- Advanced degree in a related field.
Here are some of the perks we offer you:
- Salary in USD
- Flexible schedule (within US time zones)
- 100% Remote
- Work with a modern AWS stack (EMR, Kinesis, Glue, etc.)
- High-impact role with ownership over key technical decisions
