Senior Data Engineer

Calendly

💵 $148k-$267k
📍 Remote - United States

Summary

Join Calendly's Engineering team and contribute to the evolution of our data platform as a Senior Data Engineer. Design and build batch and streaming data pipelines, ensuring high reliability and accuracy. Serve as a technical expert, mentoring engineers and driving architectural best practices. Work with a modern data stack (Apache Flink, Beam, Airflow, Kubernetes, Google Cloud Platform). Partner with various teams to understand their needs and deliver scalable data solutions. Help build a platform to support increasing data volume and complexity. Contribute to a culture of ownership and continuous improvement.

Requirements

  • 2+ years of experience with streaming and messaging systems like Beam, Flink, Spark, Kafka, and/or Pub/Sub
  • 5+ years of experience managing enterprise-grade cloud data warehouses (BigQuery, Snowflake, Databricks, etc.) using open-source, self-managed change-data-capture tooling (e.g., Debezium)
  • Expertise in SQL and Python; Java experience a plus
  • Experience mentoring high-potential data engineers and contributing to team culture and best practices
  • Availability to participate in an on-call rotation, responding promptly and effectively to business-critical alerts outside of regular working hours
  • Authorized to work lawfully in the United States of America as Calendly does not engage in immigration sponsorship at this time

Responsibilities

  • Design and build net-new batch and streaming data pipelines that power analytics, product features, and customer-facing experiences as the company scales
  • Serve as the technical expert on the centralized Data Platform team, mentoring engineers, driving architectural best practices, and raising the bar for code and design reviews
  • Build for scale and reliability by ensuring robust monitoring, alerting, and self-healing systems, identifying issues before they affect users or the business
  • Work hands-on with our modern data stack: Apache Flink, Beam, Airflow, Kubernetes, Google Cloud Storage (GCS), BigQuery, and Datadog
  • Partner closely with data consumers across product, engineering, and analytics to understand evolving needs and deliver scalable, reusable data solutions
  • Help lay the technical foundations for a platform that can support increasing data volume, complexity, and business use cases
  • Contribute to a culture of ownership, quality, and continuous improvement, ensuring the Data Platform is a trusted, high-leverage layer for the company

Benefits

  • Quarterly Corporate Bonus program (or Sales incentive)
  • Equity awards
  • Competitive benefits
