Wade User
@wade
Data Engineering Lead with 9+ years building secure, scalable lakehouse and streaming platforms.
What I'm looking for
I’m a Data Engineering Lead with 9+ years designing and delivering scalable, high-performance data platforms across healthcare and enterprise environments. I’m deeply hands-on with Azure, Databricks, and Snowflake, and I focus on turning complex data challenges into reliable, cost-efficient solutions that drive business value.
In my current role, I lead data engineering strategy and platform architecture for healthcare and enterprise clients, including a multi-tenant lakehouse on Databricks and Snowflake processing 8–15 TB of data daily. I’ve improved query performance by 40%, strengthened reliability to 99.9% SLA, and cut operational costs by 35% through cloud-native migration, lakehouse optimization, and performance tuning across Spark workloads.
I also specialize in real-time and governed data flows—building Kafka streaming pipelines, CDC synchronization, and HIPAA-compliant EHR ingestion frameworks (HL7 v2/v3 and FHIR), including RBAC, PHI masking, encryption, and row-level security. I’ve led and mentored teams of 8–10 engineers while delivering ETL/ELT modernization and large migrations on time and within budget, using infrastructure-as-code (Terraform) and CI/CD to keep platforms dependable.
Experience
Work history, roles, and key accomplishments
Lead Data Engineer
Fynite
Jul 2023 - Present (2 years 9 months)
Architected and deployed a multi-tenant Databricks/Snowflake lakehouse processing 8–15 TB daily, improving query performance by 40% via partitioning, Z-Ordering, and cluster auto-scaling. Led Azure-native migration and platform optimization, cutting operational costs 35% and improving reliability to a 99.9% SLA while reducing job runtime 45% and dashboard latency 70% with ClickHouse.
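The partition-pruning idea behind those query-performance gains can be sketched in plain Python: when data is physically grouped by a partition key, a filtered query only has to scan the matching bucket rather than the whole table. This is an illustrative toy (the dataset, column names, and `query` helper are invented, not the production Databricks setup):

```python
from collections import defaultdict

# Toy illustration of partition pruning: rows are bucketed by a partition
# key (here, event date), so a date-filtered query scans only one bucket
# instead of the full dataset. All names here are hypothetical.
rows = [
    {"event_date": "2024-01-01", "tenant": "a", "value": 10},
    {"event_date": "2024-01-01", "tenant": "b", "value": 20},
    {"event_date": "2024-01-02", "tenant": "a", "value": 30},
]

partitions = defaultdict(list)
for row in rows:
    partitions[row["event_date"]].append(row)  # write path: bucket by key

def query(event_date):
    """Read path: touch only the partition matching the filter."""
    scanned = partitions.get(event_date, [])
    return sum(r["value"] for r in scanned), len(scanned)

total, rows_scanned = query("2024-01-01")
```

On a real lakehouse the same effect comes from table partitioning plus file-level clustering (e.g. Z-Ordering), which lets the engine skip files whose min/max statistics cannot match the filter.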
Delivered real-time data platform solutions by designing Kafka streaming pipelines ingesting 200M+ events daily and building Spark Structured Streaming jobs with checkpointing and idempotent logic for fault-tolerant, exactly-once delivery semantics. Optimized ClickHouse and Snowflake for large-scale analytics (storage -25%, query latency -40%, query time -45%) and implemented CDC with Debezium/Kafka.
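The checkpointing-plus-idempotency pattern mentioned above can be sketched minimally: a checkpoint records which event ids have already been applied, so events replayed after a failure do not double-count. This is a hedged illustration of the idea, not the production pipeline; the function and field names are invented:

```python
# Minimal sketch of an idempotent sink: the checkpoint set stands in for
# durable checkpoint state, and duplicate event ids are skipped so a
# replayed batch leaves the state unchanged. All names are hypothetical.
def apply_events(events, state, checkpoint):
    for event in events:
        if event["id"] in checkpoint:
            continue  # replayed duplicate: skip, making the write idempotent
        state[event["key"]] = state.get(event["key"], 0) + event["amount"]
        checkpoint.add(event["id"])  # a real system persists this atomically
    return state

state, checkpoint = {}, set()
batch = [{"id": 1, "key": "acct", "amount": 5},
         {"id": 2, "key": "acct", "amount": 7}]
apply_events(batch, state, checkpoint)
apply_events(batch, state, checkpoint)  # simulated replay after a crash
```

In Spark Structured Streaming, the checkpoint directory plays the role of the `checkpoint` set, and idempotent (or transactional) sinks turn at-least-once replay into effectively exactly-once results.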
Data Engineer
Agit AI
Apr 2017 - Feb 2019 (1 year 10 months)
Designed and delivered ETL pipelines processing 3B+ historical records for regulatory reporting and analytics, reducing processing time by 60% using parallelization and partition pruning. Built dimensional warehouse models with SCD Type 2 for point-in-time reporting, improved query performance by 60% through SQL optimization, and increased nightly batch reliability from 87% to 99.5% using Airflow.
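The SCD Type 2 pattern referenced above works by end-dating the current dimension row when a tracked attribute changes and appending a new current row, so history is queryable as of any date. A minimal sketch in plain Python, with invented column names (a warehouse would do this with a MERGE statement):

```python
from datetime import date

# Hedged sketch of an SCD Type 2 upsert: close out the current version of
# a dimension row on change and append a new current version, preserving
# point-in-time history. Column and key names are illustrative.
def scd2_upsert(dim, key, new_attrs, as_of):
    current = next((r for r in dim if r["key"] == key and r["is_current"]), None)
    if current and all(current[k] == v for k, v in new_attrs.items()):
        return dim  # no attribute change: keep the current row open
    if current:
        current["is_current"] = False
        current["end_date"] = as_of  # end-date the old version
    dim.append({"key": key, **new_attrs,
                "start_date": as_of, "end_date": None, "is_current": True})
    return dim

dim = []
scd2_upsert(dim, "cust-1", {"tier": "silver"}, date(2018, 1, 1))
scd2_upsert(dim, "cust-1", {"tier": "gold"}, date(2018, 6, 1))
```

A point-in-time query then filters on `start_date <= as_of` and (`end_date` is null or `end_date > as_of`) to see the row exactly as it was on the reporting date.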
Education
Degrees, certifications, and relevant coursework
Wade hasn't added their education
