SUMMARY
The Data Integration Architect designs, implements, and governs our enterprise integration and data ingestion strategy. This role defines the architecture for scalable application-to-application integrations, builds them, and creates robust data ingestion pipelines from business applications into the Enterprise Data Platform (EDP). The role is highly hands-on, requiring deep Confluent Kafka expertise and the ability to lead, mentor, and train the team while managing the Kafka environment.
ESSENTIAL JOB FUNCTIONS
To perform this job successfully, an individual must be able to satisfactorily perform each essential function listed below.
- Integration Architecture & Strategy
◦ Define and implement integration patterns, frameworks, and reference architectures for application-to-application (App2App) and enterprise-wide integrations.
◦ Evaluate when to use Confluent Kafka versus other integration approaches (APIs, file-based, ETL, iPaaS, etc.) based on business and technical requirements.
◦ Ensure integrations and ingestion pipelines are secure, scalable, fault-tolerant, and aligned with enterprise architecture standards.
- Data Ingestion into Enterprise Data Platform
◦ Design and build ingestion pipelines to move data from business applications into the Enterprise Data Platform (EDP).
◦ Leverage Kafka Connect, CDC, and integration tools to capture, stream, and process application data into Snowflake or other enterprise data stores (see the ingestion sketch after this list).
◦ Partner with data engineering and platform teams to optimize ingestion for performance, lineage, and governance.
- Hands-On Kafka Expertise
◦ Design, deploy, and manage Kafka clusters (on-prem and/or Confluent Cloud).
◦ Implement producers, consumers, connectors, and stream processing pipelines (see the producer/consumer sketch after this list).
◦ Monitor, tune, and optimize Kafka performance, capacity, and security.
◦ Lead incident resolution, troubleshooting, and performance improvements.
- Team Leadership & Enablement
◦ Provide technical leadership to developers, architects, and DevOps teams on Kafka and integration best practices.
◦ Conduct knowledge-sharing sessions and train the team on Kafka internals, operations, and advanced use cases.
◦ Drive adoption of standardized integration and ingestion practices across projects and teams.
- Collaboration & Governance
◦ Work closely with application owners, data architects, business stakeholders, and vendors to design integration and ingestion solutions.
◦ Define and enforce naming conventions, standards, and governance for integration and data ingestion.
◦ Partner with security and infrastructure teams to ensure compliance and resiliency.
- Documentation & Best Practices
◦ Create clear, detailed documentation of Confluent Kafka architectures, processes, and solutions.
◦ Share documentation templates and architecture standards with Engineering team leads.
- Performs other related duties and activities as required.
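For context, the following is a minimal sketch of how the ingestion work described above might look in practice: it registers a Snowflake sink connector with a Kafka Connect worker through the Connect REST API. The worker URL, topic name, and Snowflake account details are placeholder assumptions; the connector class and config keys follow Confluent's Snowflake sink connector, and exact settings should be verified against its documentation.

```python
import json
import requests

# Placeholder Connect worker endpoint -- replace with a real worker URL.
CONNECT_URL = "http://connect.example.com:8083/connectors"

# Sketch of a Snowflake sink connector config: streams the hypothetical
# CDC topic "erp.orders" from Kafka into a Snowflake table via Kafka Connect.
connector = {
    "name": "erp-orders-to-snowflake",
    "config": {
        "connector.class": "com.snowflake.kafka.connector.SnowflakeSinkConnector",
        "tasks.max": "2",
        "topics": "erp.orders",
        "snowflake.topic2table.map": "erp.orders:ERP_ORDERS",
        "snowflake.url.name": "myaccount.snowflakecomputing.com:443",
        "snowflake.user.name": "KAFKA_INGEST",
        "snowflake.private.key": "<private-key-here>",
        "snowflake.database.name": "EDP_RAW",
        "snowflake.schema.name": "KAFKA",
        "buffer.flush.time": "60",
    },
}

# POST the config to the Connect REST API; HTTP 201 means the connector
# was created and its tasks will start streaming into Snowflake.
resp = requests.post(
    CONNECT_URL,
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector),
)
resp.raise_for_status()
print(f"Connector created: {resp.json()['name']}")
```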
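Likewise, a minimal producer/consumer sketch using Confluent's Python client (confluent-kafka), illustrating the hands-on development side of the role. The broker address, topic, key/value payload, and consumer group are placeholder assumptions.

```python
from confluent_kafka import Consumer, Producer

BOOTSTRAP = "broker.example.com:9092"  # placeholder broker address
TOPIC = "app.events"                   # placeholder topic

# Produce a single keyed event, with a delivery callback for error handling.
producer = Producer({"bootstrap.servers": BOOTSTRAP})

def on_delivery(err, msg):
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()}[{msg.partition()}]@{msg.offset()}")

producer.produce(TOPIC, key="order-42", value=b'{"status": "shipped"}',
                 callback=on_delivery)
producer.flush()  # block until outstanding messages are delivered

# Consume from the same topic as part of a consumer group.
consumer = Consumer({
    "bootstrap.servers": BOOTSTRAP,
    "group.id": "edp-ingest-demo",     # placeholder consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
try:
    msg = consumer.poll(timeout=10.0)  # wait up to 10s for one message
    if msg is not None and msg.error() is None:
        print(f"Consumed: key={msg.key()} value={msg.value()}")
finally:
    consumer.close()
```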
SUPERVISORY RESPONSIBILITIES
- Manages assigned personnel. Completes performance evaluations, orientation, and training. Makes decisions on employee hires, transfers, promotions, salary changes, discipline, terminations, and similar actions. Resolves employee problems within position responsibilities.
MINIMUM KNOWLEDGE AND SKILLS REQUIRED FOR THE JOB
The requirements listed below are representative of the knowledge, skills, and/or abilities required to perform the job.
Education and Experience:
- Bachelor’s degree in Computer Science or an engineering discipline, and/or equivalent work experience.
- 10+ years of experience in enterprise integration and data ingestion architecture roles.
- Proven expertise in application-to-application integration patterns (Pub/Sub, Event-Driven Architecture, APIs, File-based, etc.).
- Strong hands-on experience in Kafka (Confluent Kafka preferred): cluster setup, scaling, monitoring, administration, and development.
- Experience creating data ingestion pipelines from business applications into enterprise data platforms (e.g., Snowflake or similar).
- Strong knowledge of streaming, messaging, and event-driven design.
- Experience leading technical teams and mentoring developers/engineers.
- Familiarity with cloud platforms (AWS, Azure, GCP) and hybrid integration strategies.
- Strong communication and leadership skills.
Preferred Qualifications:
- Experience with microservices and container orchestration (Kubernetes, Docker).
- Exposure to Snowflake, Azure Data Factory, or other iPaaS/ETL tools.
- Knowledge of schema management, CDC (Change Data Capture), and data governance in integration pipelines.
- Background in security, compliance, and performance tuning for distributed data systems.
Certificates, Licenses, and Registrations:
- N/A
Other Requirements:
- Travel as needed
Physical Requirements:
- Sedentary work. Exerting up to 10 pounds of force occasionally and/or negligible amount of force frequently or constantly to lift, carry, push, pull or otherwise move objects, including the human body. Sedentary work involves sitting most of the time. Jobs are sedentary if walking and standing are required only occasionally and all other sedentary criteria are met.
Sevita is a leading provider of home and community-based specialized health care. We believe that everyone deserves to live a full, more independent life. We provide people with quality services and individualized supports that lead to growth and independence, regardless of the physical, intellectual, or behavioral challenges they face.
We’ve made this our mission for more than 50 years. And today, our 40,000 team members continue to innovate and enhance care for the 50,000 individuals we serve all over the U.S.
As an equal opportunity employer, we do not discriminate on the basis of race, color, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age, disability, genetic information, veteran status, citizenship, or any other characteristic protected by law.
