hims & hers is hiring a
Sr. Data Engineer, Remote - Worldwide


💵 $140k-$170k
📍Worldwide

Summary

The role is for a Senior Data Engineer at Hims & Hers Health, Inc., a health and wellness platform. The role involves architecting and developing data pipelines, building and maintaining data infrastructure, designing data processing and integration pipelines, improving data quality, orchestrating data flows across disparate tooling, supporting analytics engineers, and partnering with other teams. It requires 8+ years of experience designing, creating, and maintaining scalable data pipelines.

Requirements

  • 8+ years of professional experience designing, creating and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages
  • Demonstrated experience writing clean, efficient, and well-documented Python code, with a willingness to become effective in other languages as needed
  • Demonstrated experience writing complex, highly optimized SQL queries across large data sets
  • Experience working with customer behavior data
  • Experience with JavaScript, event-tracking tools such as Google Tag Manager (GTM), analytics tools such as Google Analytics and Amplitude, and CRM tools
  • Experience with cloud technologies such as AWS and/or Google Cloud Platform
  • Experience with serverless architecture (Google Cloud Functions, AWS Lambda)
  • Experience with IaC technologies like Terraform
  • Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres
  • Experience building event streaming pipelines using Kafka/Confluent Kafka
  • Experience with modern data stack tools such as Airflow/Astronomer, Fivetran, and Tableau/Looker
  • Experience with containers and container orchestration tools such as Docker or Kubernetes
  • Experience with Machine Learning & MLOps
  • Experience with CI/CD (Jenkins, GitHub Actions, Circle CI)
  • Thorough understanding of SDLC and Agile frameworks
  • Project management skills and a demonstrated ability to work autonomously

Responsibilities

  • Architect and develop data pipelines to optimize performance, quality, and scalability
  • Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources
  • Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources into the data lake
  • Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance
  • Orchestrate sophisticated data flow patterns across a variety of disparate tooling
  • Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics
  • Partner with the rest of the Data Platform team to set best practices and ensure their execution
  • Partner with the analytics engineers to ensure the performance and reliability of our data sources
  • Partner with machine learning engineers to deploy predictive models
  • Partner with the legal and security teams to build frameworks and implement data compliance and security policies
  • Partner with DevOps to build IaC and CI/CD pipelines
  • Support code versioning and code deployments for data pipelines

Preferred Qualifications

  • Experience building data models using dbt
  • Experience designing and developing systems with desired SLAs and data quality metrics
  • Experience with microservice architecture
  • Experience architecting an enterprise-grade data platform

Benefits

  • The actual compensation amount will take into account a range of factors, and H&H offers a comprehensive Total Rewards package that may include an equity grant
  • Consult with your recruiter during any potential screening to determine a more targeted range based on location and job-related factors
This job is filled or no longer available
