Senior Data Engineer


Honeycomb.io

πŸ’΅ $168k-$200k
πŸ“Remote - United States, Canada

Summary

Join Honeycomb as their first Senior Data Engineer and build the foundation for their data-driven future. Partnering with the Head of Data, you will architect and build a scalable data platform that powers business insights and sets the standard for data quality. You will own the data platform, build scalable systems, collaborate across functions, drive innovation and quality, and lead with impact. The role requires extensive data development experience, expertise in modern data tooling, and experience collaborating with a range of stakeholders. Honeycomb offers a competitive salary, equity, unlimited PTO, a paid sabbatical, a remote-first culture, stipends, comprehensive benefits, parental leave, and an annual development allowance.

Requirements

  • Extensive data development including expert-level SQL and programming experience in a scripting language (preferably Python)
  • Demonstrated experience with modern data tooling, including MPP data warehouses (e.g. Redshift or Snowflake (preferred)), DBT, and workflow automation tools (e.g. Airflow, Dagster, Prefect)
  • Experience implementing structured data models, architectures and marts (e.g. Inmon, Kimball)
  • Experience collaborating with data analysts, data scientists and business users with varying levels of data savvy
  • Comfortable working through ambiguous problems - this is our first data engineering hire, so there will be a fair amount of role shaping

Responsibilities

  • Own the Data Platform: Take full ownership of our Snowflake data warehouse, DBT models, and diverse ingestion platform. You’ll design and maintain end-to-end solutions that enable access to clean, accurate and well-annotated data
  • Build Scalable Systems: Leverage modern technologies to create robust, production-grade data pipelines and models. Your work will enable rapid iteration and empower teams from R&D to Sales, Marketing, Finance, and beyond to make informed, data-driven decisions and have ownership over their data
  • Collaborate Across Functions: Work hand-in-hand with engineering, product, sales, marketing, and business stakeholders to translate complex needs into aligned data architectures and actionable insights. Your collaborative spirit will help bridge gaps and foster a culture of shared success
  • Drive Innovation and Quality: Establish best practices for data quality and reliability by setting meaningful SLO metrics and continuously refining our systems. You’ll have the autonomy to experiment with new technologies and approaches, driving innovation in a fast-paced, evolving environment
  • Lead with Impact: From planning and deployment to long-term maintenance, you’ll lead critical projects with a keen sense of ownership and strategic vision. Your ability to balance technical excellence with business value will be key to our next phase of growth

Preferred Qualifications

  • Experience with any of the following: Spark, Scala, Terraform, AWS/K8s, Debezium/Flink
  • Experience managing production-grade data pipelines powering customer-facing applications
  • Exposure to MLOps and supporting ML/AI team’s data requirements
  • Experience working with CRM, Martech and other GTM datasets and systems

Benefits

  • Base pay range of USD $170,000 - $200,000 (CAD $233,504 - $274,710)
  • A stake in our success - generous equity with employee-friendly stock program
  • It’s not about how strong a negotiator you are - our pay is based on transparent levels relative to experience
  • Time to recharge - Unlimited PTO and paid sabbatical
  • A remote-first mindset and culture (really!)
  • Home office, co-working, and internet stipend
  • 100% coverage for employees and 75% for dependents across all benefits
  • Up to 16 weeks of paid parental leave, regardless of path to parenthood
  • Annual development allowance
This job is filled or no longer available