Data Engineering Lead

Get Well

📍Remote - Worldwide

Summary

Join Get Well as a Team Lead – Data Engineering to drive both the technical evolution and the team development of the data engineering function. This dual role involves owning the architecture and implementation of the data platform, modernizing legacy pipelines, and ensuring readiness for advanced analytics and AI, while coaching, mentoring, and growing a high-performing team of engineers. You will lead a team at the intersection of traditional data reliability and modern data innovation, impacting millions of patients: the data platform processes millions of patient interactions yearly and is central to delivering insights across product lines. The role can be based remotely in the US.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related technical discipline
  • 6+ years of experience in data engineering or platform roles, with 3+ years in a technical leadership or director capacity
  • Proven experience building and managing cloud-native data platforms in healthcare, life sciences, or similarly complex domains
  • Expertise in Python, SQL, Spark, and distributed data processing systems
  • Strong foundation in relational databases (e.g., PostgreSQL, MySQL, SQL Server) and modern data processing frameworks (e.g., Spark, Flink, dbt, Airflow)
  • Hands-on experience with streaming data architectures, especially Apache Kafka (or AWS MSK) and Apache Flink, including event-based modeling and real-time processing pipelines
  • Hands-on experience designing and operating pipelines using AWS-native tools such as Glue, S3, Athena, Redshift, RDS, MSK, and Lambda
  • Experience building and managing CI/CD workflows for data pipelines and platform infrastructure (e.g., GitLab CI/CD, Terraform, CodePipeline, CloudFormation)
  • Commitment to infrastructure-as-code and “everything-as-code” practices to ensure consistency, repeatability, and version control
  • Demonstrated success building resilient, observable, and cost-effective pipelines
  • Deep appreciation for data modeling, quality, and governance, especially within regulated domains
  • Familiarity with SMART on FHIR
  • Understanding of and experience adhering to HIPAA, HITRUST, FedRAMP, and other data compliance frameworks
  • Adherence to all organizational information security policies, protecting all sensitive information, including but not limited to ePHI and PHI, in accordance with organizational policy and Federal, State, and local regulations

Responsibilities

  • Own and evolve the architecture of Get Well’s data product strategy (e.g., data lakehouse, data mesh, or medallion architecture)
  • Define and execute on the roadmap for ingestion, transformation, storage, cataloging, and data access layers
  • Drive decisions around cloud infrastructure, tools, and vendor solutions for scalability, performance, and cost-effectiveness
  • Ensure alignment with broader AI and analytics goals across the company
  • Own the design and delivery of scalable, reliable data pipelines across RDBMS, NoSQL, and modern lakehouse platforms
  • Architect for both batch and streaming workflows, integrating internal and external data sources from APIs, EHRs, and cloud services
  • Drive the evolution from legacy infrastructure to cloud-native and serverless architectures using the AWS ecosystem (e.g., Glue, Athena, Redshift, S3, MSK, Lambda)
  • Balance day-to-day execution with long-term platform thinking—making decisions that scale technically and organizationally
  • Ensure platform readiness for downstream consumers in analytics, data science, AI/ML, and regulatory reporting
  • Lead engineering teams responsible for building and maintaining secure, high-performance data pipelines (batch and streaming)
  • Oversee deployment and orchestration of data infrastructure (e.g., Airflow, dbt, Spark, Databricks, Snowflake)
  • Enforce engineering best practices for CI/CD, observability, incident response, and SLA adherence
  • Champion reusability, standardization, and automation across the data stack
  • Partner with data governance, security, and compliance teams to uphold regulatory requirements (e.g., HIPAA, HITRUST, FedRAMP, etc.)
  • Define and enforce data quality standards, lineage tracking, and access control policies
  • Support metadata management, data cataloging, and documentation for discoverability and stewardship
  • Lead and inspire a team of data engineers, fostering a culture of ownership, growth, and continuous learning
  • Provide coaching, mentorship, and technical guidance to help team members develop both in craft and in career
  • Define and uphold engineering best practices, development standards, and onboarding processes for new team members
  • Collaborate closely with engineering leadership to shape team structure, hiring plans, and performance goals
  • Foster cross-functional partnerships with product, analytics, AI, and clinical informatics teams
  • Facilitate agile development practices and continuous improvement processes
  • Communicate complex technical concepts to both technical and non-technical stakeholders

Benefits

  • Exceptionally generous paid time away from work
  • A variety of paid leave programs
  • Savings opportunities with 401(k) and incentive plans
  • Internal education programs
  • Full array of health benefits
  • Fitness reimbursement
  • Cell phone subsidy
  • Casual offices with snacks and drinks
  • Peer recognition programs
  • Health advocacy and employee assistance programs
  • Pet insurance
