Who is Element?
We serve as a partner at the intersection of innovation and our clients' needs, efficiently crafting meaningful user experiences for government and commercial customers. By breaking complex problems down into their fundamental elements, we create modern digital solutions that drive efficiency, maximize taxpayer dollars, and deliver essential outcomes that serve the people.
Why Work at Element?
Make an impact that resonates: join our vibrant team and discover how you can improve lives through digital transformation. Our talented professionals bring unparalleled energy and engagement, setting a higher standard for impactful work. Be a part of our team and shape a better future.
Position Summary
We are seeking a Sr. Data Engineer to support a transformative state government AI initiative focused on leveraging data to drive better decision-making, transparency, and service delivery. This role will be part of a multidisciplinary team building scalable data infrastructure, enabling machine learning applications, and ensuring data quality and governance across state systems. The ideal candidate will bring deep expertise in data engineering, modern cloud architectures, and public-sector data practices, along with a passion for advancing responsible and ethical AI.
Key Responsibilities
- Design, build, and maintain data pipelines and ETL/ELT processes to support AI and analytics initiatives across multiple state agencies.
- Develop and optimize data lakes and data warehouse architectures in cloud environments (e.g., AWS, Azure, GCP).
- Collaborate with data scientists, analysts, and application developers to operationalize AI models and analytical workflows.
- Implement data quality frameworks, governance policies, and metadata management in compliance with state and federal regulations.
- Ensure data security, privacy, and compliance standards (HIPAA, CJIS, FERPA, etc.) are met in all engineering processes.
- Support the integration of AI/ML pipelines, MLOps tools, and model versioning into production environments.
- Lead and mentor junior engineers, fostering a culture of collaboration and continuous improvement.
- Provide technical documentation and participate in technical reviews, architecture discussions, and state reporting requirements.
- Implement data interoperability and sharing pipelines that align with FAIR data principles (Findable, Accessible, Interoperable, and Reusable) to enhance transparency and collaboration across state agencies.
- Engineer and maintain metadata-driven architectures and APIs that enable discoverability, lineage tracking, and reproducibility of datasets in compliance with state governance requirements.
- Support development of synthetic data generation pipelines to enable privacy-preserving testing and innovation within sandbox environments prior to production deployment.
- Automate data validation, schema enforcement, and quality metrics reporting to ensure compliance with governance and audit requirements.
- Collaborate with governance, security, and AI ethics teams to ensure datasets adhere to state privacy laws, NIST SP 800-53, and ethical AI guidelines.
- Support phased rollout of data environments by developing repeatable, version-controlled deployment pipelines that allow for incremental testing and expansion.
Minimum Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 8+ years of experience in data engineering, including design and deployment of production-grade data systems.
- Expertise in Python, SQL, and modern data engineering frameworks (e.g., PySpark, dbt, Airflow, Kafka).
- Hands-on experience with cloud platforms (AWS, Azure, or GCP) and data warehousing technologies (Snowflake, BigQuery, Redshift, Synapse).
- Strong understanding of data modeling, data governance, and API-based data integration.
- Experience supporting AI/ML initiatives, MLOps pipelines, and scalable data science environments.
- Demonstrated experience implementing FAIR-aligned data architectures and metadata cataloging within public-sector or research ecosystems.
- Proven ability to design and maintain compliant data-sharing pipelines that support sandbox testing and phased rollout strategies.
- Familiarity with accessibility and inclusivity requirements (WCAG 2.1, multilingual readiness) for data systems and visualization tools.
- Familiarity with public sector data standards, compliance frameworks, and procurement environments.
- US Citizenship or Permanent Residency required.
- Must reside in the Continental US.
- Depending on the government agency, specific requirements may include a public trust background check or a security clearance.
Preferred Qualifications
- Experience working on government contracts or with state data modernization programs.
- Familiarity with AI ethics, transparency, and bias mitigation practices.
- Experience with infrastructure as code (IaC) tools (Terraform, CloudFormation) and CI/CD pipelines.
- Knowledge of geospatial data systems, real-time data processing, or data API services.
- Experience in engineering or integrating synthetic data frameworks that support privacy-preserving research and model validation.
- Knowledge of metadata standards such as DCAT, schema.org, or CKAN to enable FAIR data publication and interoperability across systems.
- Familiarity with NIST AI RMF or ISO/IEC 5259 frameworks for AI data quality, lineage, and governance.
Location
Be in your Element. We are a remote-first company based in Washington, DC.
Element is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, race, color, religion, sex, sexual orientation, gender identity, national origin, disability, marital status, protected veteran status, or any other legally protected class.
We believe in a world where solutions we build improve the lives of those who use them.