Staff Data Engineer

Calendly

💵 $170k-$306k
📍Remote - United States

Summary

Join Calendly's Engineering team as our Staff Data Engineer and lead the evolution of our data platform. You will design and build batch and streaming data pipelines, ensuring reliability and accuracy across our infrastructure. As a technical lead, you'll mentor engineers, drive architectural decisions, and work hands-on with a modern data stack (Apache Flink, Beam, Airflow, Kubernetes, Google Cloud). You will partner with teams across the company to understand their needs and deliver scalable data solutions. The role requires deep experience with streaming and messaging systems and cloud data warehouses, as well as a track record of mentoring data engineers. Calendly offers a competitive salary and benefits package.

Requirements

  • 5+ years of experience with streaming and messaging systems such as Beam, Flink, Spark, Kafka, and/or Pub/Sub (an illustrative sketch of this kind of pipeline follows this list)
  • 8+ years of experience managing enterprise-grade cloud data warehouses (BigQuery, Snowflake, Databricks, etc.) and operating open-source, self-managed change-data-capture systems such as Debezium
  • Expertise in SQL, Python, and ideally Java
  • Experience mentoring high-potential data engineers and contributing to team culture and best practices
  • Availability to participate in an on-call rotation, responding promptly and effectively to business-critical alerts outside regular working hours
  • Authorization to work lawfully in the United States, as Calendly does not engage in immigration sponsorship at this time
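
For a flavor of the streaming work described above, here is a minimal sketch only: an Apache Beam pipeline in Python that reads JSON events from Pub/Sub and appends them to BigQuery. The project, subscription, table, and schema names are hypothetical placeholders, not Calendly's actual infrastructure.

```python
# Minimal illustrative sketch (hypothetical names throughout) of a
# Beam streaming pipeline: Pub/Sub -> parse JSON -> BigQuery.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions


def run():
    options = PipelineOptions()
    # Pub/Sub is an unbounded source, so run in streaming mode.
    options.view_as(StandardOptions).streaming = True

    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                subscription="projects/example-proj/subscriptions/events-sub"  # hypothetical
            )
            | "ParseJson" >> beam.Map(json.loads)  # Pub/Sub delivers bytes; json.loads accepts them
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-proj:analytics.events",  # hypothetical table
                schema="user_id:STRING,event_type:STRING,event_ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```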

Responsibilities

  • Designing and building net-new batch and streaming data pipelines that power analytics, product features, and customer-facing experiences as the company scales
  • Serving as the technical lead on the centralized Data Platform team, mentoring engineers, driving architectural best practices, and raising the bar for code and design reviews
  • Building for scale and reliability by ensuring robust monitoring, alerting, and self-healing systems—identifying issues before they affect users or the business
  • Working hands-on with our modern data stack: Apache Flink, Beam, Airflow, Kubernetes, Google Cloud Storage (GCS), BigQuery, and Datadog (see the Airflow sketch after this list for a taste of the orchestration work)
  • Partnering closely with data consumers across product, engineering, and analytics to understand evolving needs and deliver scalable, reusable data solutions
  • Helping lay the technical foundations for a platform that can support increasing data volume, complexity, and business use cases
  • Contributing to a culture of ownership, quality, and continuous improvement, ensuring the Data Platform is a trusted, high-leverage layer for the company
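
To make the batch-pipeline side of the role concrete, here is an illustrative Airflow DAG sketch for a daily load. It is not Calendly-specific; the DAG, task, and dataset names are hypothetical.

```python
# Illustrative Airflow DAG sketch (hypothetical names throughout):
# a daily batch job that extracts events and loads them into a warehouse table.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events(**context):
    # Hypothetical: pull the day's events from an upstream source.
    print(f"extracting events for {context['ds']}")


def load_to_warehouse(**context):
    # Hypothetical: load the extracted batch into the warehouse.
    print(f"loading batch for {context['ds']}")


with DAG(
    dag_id="daily_events_batch",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ argument; older versions use schedule_interval
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

    extract >> load  # run the load only after extraction succeeds
```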

Benefits

  • Quarterly Corporate Bonus program (or Sales incentive)
  • Equity awards
  • Competitive benefits
