
Data Engineer

Anika Systems is an outcome-driven technology solutions firm that guides federal agencies in solving complex business challenges and preparing for the future through AI, data intelligence, and automation.

Anika Systems

Employee count: 51-200

United States only


Anika Systems is seeking a skilled Data Engineer to design, build, and optimize scalable data pipelines and platforms supporting federal clients. This role will play a critical part in enabling enterprise data strategies, supporting Office of the Chief Data Officer (OCDO) initiatives, and delivering high-quality, trusted data for analytics, reporting, and mission operations.

This opportunity is 100% remote.

The ideal candidate has hands-on experience with ETL/ELT pipelines, XBRL data processing, Apache Iceberg-based architectures, and advanced data optimization techniques such as materialized views and context-aware data engineering. This role also requires proficiency in AI tools and AI-assisted development workflows, along with experience building and deploying CI/CD pipelines for data and analytics platforms.


Key Responsibilities
Data Pipeline Development & ETL/ELT
  • Design, develop, and maintain robust ETL/ELT pipelines to ingest, transform, and deliver data across enterprise platforms.
  • Build scalable data ingestion frameworks for structured and semi-structured data, including XBRL filings and financial datasets.
  • Implement data transformation logic to support analytics, reporting, and regulatory use cases.
  • Ensure data pipelines are reliable, performant, and scalable in cloud environments.
  • Leverage AI-assisted development tools to accelerate pipeline development, testing, and optimization.
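As a miniature illustration of the ETL/ELT shape described above (the file contents, field names, and transformation rule are hypothetical, not part of the role), a pipeline can be sketched in pure Python:

```python
import csv
import io
import json

def extract(csv_text: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: cast types and normalize field names."""
    return [
        {"company": r["Company"].strip(), "revenue_usd": float(r["Revenue"])}
        for r in rows
    ]

def load(rows: list[dict]) -> str:
    """Load: serialize to newline-delimited JSON (a common lake landing format)."""
    return "\n".join(json.dumps(r) for r in rows)

raw = "Company,Revenue\nAcme Corp ,1000.5\nGlobex,2500.0\n"
output = load(transform(extract(raw)))
print(output)
```

Production pipelines add orchestration, retries, and schema validation around this core, but the extract-transform-load decomposition stays the same.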
Cloud Data Platforms & Iceberg Architecture
  • Develop and manage data solutions leveraging AWS services (e.g., S3, Glue, Lambda, Redshift) and orchestrate workflows with Apache Airflow DAGs.
  • Implement and optimize Apache Iceberg table formats for large-scale, ACID-compliant data lakes.
  • Support lakehouse architectures that unify data lakes and data warehouses.
  • Optimize data storage and retrieval strategies for performance and cost efficiency.
  • Enable data platforms that support AI/ML workloads and downstream generative AI use cases.
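One reason Iceberg tables stay fast at scale is partition pruning: the table metadata records each file's partition values, so a query can skip irrelevant files entirely. The sketch below illustrates the idea very loosely with hypothetical S3 paths; real Iceberg prunes against per-file metadata rather than path names:

```python
# Toy file listing with partition values (paths are hypothetical).
data_files = {
    "s3://lake/sales/event_date=2024-01-01/part-0.parquet": "2024-01-01",
    "s3://lake/sales/event_date=2024-01-02/part-0.parquet": "2024-01-02",
    "s3://lake/sales/event_date=2024-02-01/part-0.parquet": "2024-02-01",
}

def prune(files: dict[str, str], start: str, end: str) -> list[str]:
    """Partition pruning: keep only files whose partition value is in range.
    Iceberg does this from table metadata, never by parsing path strings."""
    return [path for path, d in files.items() if start <= d <= end]

selected = prune(data_files, "2024-01-01", "2024-01-31")
print(len(selected))  # 2
```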
CI/CD & DataOps Engineering
  • Design and implement CI/CD pipelines for data pipelines, infrastructure, and analytics code using tools such as GitHub Actions, GitLab CI, Jenkins, or AWS-native services.
  • Automate build, test, and deployment processes for ETL pipelines and data platform components.
  • Implement DataOps best practices, including version control, automated testing, environment promotion, and rollback strategies.
  • Ensure reproducibility, reliability, and governance of data pipeline deployments across environments.
  • Integrate AI-driven testing and monitoring tools to improve pipeline quality and reduce operational risk.
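One concrete piece of the DataOps practice above is automated testing of pipeline logic in CI: a GitHub Actions or Jenkins job runs unit tests against transformation code before a change is promoted. The sketch below shows the shape of such a test (the dedupe step itself is hypothetical, not a requirement of the role):

```python
def dedupe_latest(records: list[dict]) -> list[dict]:
    """Hypothetical pipeline step: keep only the newest record per key."""
    latest: dict[str, dict] = {}
    for rec in records:
        key = rec["id"]
        if key not in latest or rec["updated"] > latest[key]["updated"]:
            latest[key] = rec
    return list(latest.values())

def test_dedupe_latest():
    """The kind of check a CI runner executes on every commit."""
    records = [
        {"id": "a", "updated": 1, "value": "old"},
        {"id": "a", "updated": 2, "value": "new"},
        {"id": "b", "updated": 1, "value": "only"},
    ]
    result = {r["id"]: r["value"] for r in dedupe_latest(records)}
    assert result == {"a": "new", "b": "only"}

test_dedupe_latest()
```

Gating deployment on tests like this is what makes environment promotion and rollback safe rather than hopeful.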
Data Optimization & Performance Engineering
  • Design and implement materialized views and other performance optimization techniques to improve query efficiency.
  • Tune data pipelines and queries for performance, scalability, and cost.
  • Implement partitioning, indexing, and caching strategies aligned to workload patterns.
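Materialized views trade storage for query speed by precomputing an aggregate once instead of re-scanning base tables on every query. SQLite has no native materialized views, so the sketch below emulates one as a snapshot table refreshed on demand (table names and data are illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('east', 100), ('east', 50), ('west', 75);
""")

def refresh_mv(conn):
    """Rebuild the 'materialized view': a precomputed aggregate table."""
    conn.executescript("""
        DROP TABLE IF EXISTS mv_sales_by_region;
        CREATE TABLE mv_sales_by_region AS
            SELECT region, SUM(amount) AS total
            FROM sales GROUP BY region;
    """)

refresh_mv(conn)
# Queries now hit the small precomputed table instead of re-aggregating.
rows = conn.execute(
    "SELECT region, total FROM mv_sales_by_region ORDER BY region"
).fetchall()
print(rows)  # [('east', 150.0), ('west', 75.0)]
```

Engines like Redshift manage the refresh (and sometimes incremental maintenance) automatically; the performance trade-off is the same.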
XBRL & Financial Data Processing
  • Develop pipelines to ingest, parse, and normalize XBRL (eXtensible Business Reporting Language) data.
  • Support regulatory and financial data use cases requiring high accuracy and traceability.
  • Ensure alignment with data standards and validation rules for financial reporting datasets.
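At its core, XBRL processing means turning tagged facts into flat, queryable records. The sketch below parses a drastically simplified XBRL-style instance with the standard library; real filings also carry taxonomy references, units, and context definitions that a production pipeline must resolve and validate:

```python
import xml.etree.ElementTree as ET

# A drastically simplified XBRL-style instance document (illustrative only).
doc = """<xbrl xmlns:us-gaap="http://fasb.org/us-gaap/2023">
  <us-gaap:Revenues contextRef="FY2023" decimals="0">1000000</us-gaap:Revenues>
  <us-gaap:Assets contextRef="FY2023" decimals="0">5000000</us-gaap:Assets>
</xbrl>"""

GAAP = "{http://fasb.org/us-gaap/2023}"

def parse_facts(xml_text: str) -> list[dict]:
    """Normalize tagged facts into flat records for downstream loading."""
    root = ET.fromstring(xml_text)
    facts = []
    for el in root:
        if el.tag.startswith(GAAP):
            facts.append({
                "concept": el.tag[len(GAAP):],   # strip the namespace prefix
                "context": el.get("contextRef"),
                "value": float(el.text),
            })
    return facts

print(parse_facts(doc))
```

Keeping the `contextRef` alongside each value is what preserves the traceability that regulatory use cases demand.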
Context Engineering & Data Modeling Support
  • Apply context engineering principles to ensure data is enriched with meaningful metadata, lineage, and business context.
  • Collaborate with Data Architects to support data modeling, schema design, and entity relationships.
  • Enable downstream analytics and AI use cases by structuring data for usability, discoverability, and governance.
Metadata, Data Catalog, and Governance Integration
  • Integrate pipelines with enterprise data catalogs and metadata management systems.
  • Support automated metadata capture, lineage tracking, and data quality monitoring.
  • Ensure alignment with data governance frameworks and standards established by OCDO organizations, including AI data readiness and traceability.
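Automated metadata capture can start as simply as instrumenting each pipeline step to emit a lineage record. The sketch below logs source, target, and row counts to an in-memory list; in practice these records would be pushed to an enterprise catalog, and every name here is hypothetical:

```python
import functools
import time

LINEAGE: list[dict] = []  # In practice, pushed to a data catalog / lineage store.

def track_lineage(source: str, target: str):
    """Decorator that records source->target lineage for each pipeline step."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(rows):
            out = fn(rows)
            LINEAGE.append({
                "step": fn.__name__,
                "source": source,
                "target": target,
                "rows_in": len(rows),
                "rows_out": len(out),
                "ts": time.time(),
            })
            return out
        return wrapper
    return decorator

@track_lineage(source="raw.filings", target="curated.filings")
def drop_nulls(rows):
    """Hypothetical cleansing step."""
    return [r for r in rows if r.get("value") is not None]

drop_nulls([{"value": 1}, {"value": None}])
print(LINEAGE[0]["rows_in"], LINEAGE[0]["rows_out"])  # 2 1
```

Row-count deltas per step double as a cheap data-quality signal: an unexpected drop between source and target is often the first sign of a broken feed.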
Stakeholder Collaboration & Agile Delivery
  • Collaborate with data architects, analysts, and business stakeholders to understand data needs and deliver solutions.
  • Participate in stakeholder listening campaigns, workshops, and data discovery efforts.
  • Work in Agile teams to iteratively deliver data capabilities and enhancements.
  • Contribute to identifying and implementing AI-driven efficiencies and automation opportunities across the data lifecycle.
Required Qualifications
  • Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field.
  • 5+ years of experience in data engineering, ETL development, or data platform engineering.
  • Strong hands-on experience with:
    • ETL/ELT tools and frameworks
    • AWS data services (S3, Glue, Lambda, Redshift, etc.)
    • Apache Iceberg and modern data lake architectures
  • Experience designing and implementing CI/CD pipelines for data platforms and ETL workflows.
  • Demonstrated proficiency using AI tools and AI-assisted development workflows (e.g., LLM copilots, automated code generation, pipeline optimization tools).
  • Experience processing XBRL or complex financial/regulatory datasets.
  • Proficiency in SQL and Python.
  • Experience implementing materialized views and query optimization techniques.
  • Understanding of data modeling concepts and metadata management.
  • Familiarity with data governance, data quality practices, and data readiness for AI/ML use cases.
  • Ability to work in Agile, DevOps-oriented environments.
  • U.S. Citizenship required; ability to obtain and maintain a federal clearance.
Preferred Qualifications
  • Experience supporting federal agencies such as SEC, DHS, Treasury, or Federal Reserve System.
  • Familiarity with data catalog tools (e.g., Collibra, Alation, ServiceNow).
  • Experience with Apache Spark, Kafka, or other distributed data processing frameworks.
  • Experience enabling data pipelines for AI/ML or generative AI applications.
  • Knowledge of data maturity frameworks (e.g., the EDM Council’s DCAM, TDWI).
  • Exposure to context engineering or semantic data layer design.
  • AWS or data engineering certifications.
  • Experience with infrastructure-as-code (IaC) tools (e.g., Terraform, CloudFormation) in support of CI/CD pipelines.

About the job

Job type: Full Time
Education: Bachelor’s degree
Experience: 5 years minimum
Hiring timezones: United States +/- 0 hours

About Anika Systems


At Anika Systems, we are at the forefront of technological innovation, pioneering transformative solutions that empower federal agencies to navigate the complexities of the digital age. Through our groundbreaking work in artificial intelligence, intelligent automation, and data intelligence, we are revolutionizing how government operates, enhancing efficiency, and accelerating mission-critical outcomes. Our approach is not merely about implementing new technologies; it's about engineering modern digital ecosystems. We guide our partners to move beyond modernization and into a state of continuous transformation, building resilient, data- and AI-powered platforms designed to scale with the challenges of tomorrow. We specialize in reimagining legacy systems, converting them into intelligent, adaptive platforms that are purpose-built for agility, automation, and AI readiness, thereby reducing technical debt and fostering a culture of innovation.

Our commitment to innovation is embodied in our 'show me' versus 'tell me' philosophy. We don't just present theoretical solutions; we deliver tangible, measurable impact through the rapid development of Minimum Viable Products (MVPs) within our poly-cloud based Virtual Innovation Transition Acceleration Lab (VITAL). This hands-on approach allows us to synthesize ideas into actionable business concepts and implement them using the most appropriate technologies. Anika Systems is a mission-focused innovation firm, dedicated to harnessing the full potential of emerging technologies to enable government teams to work smarter, faster, and more securely. By integrating AI, low-code platforms, and Robotic Process Automation, we streamline government operations, reduce costs, and ultimately improve the delivery of public services, ensuring that our partners are not just keeping up with change, but are actively driving it.

Employee benefits

  • Paid parental leave
  • Company-provided life insurance coverage
  • Employee referral bonus program
  • Paid time off and holidays