Staff Software Engineer

closed

Glassdoor

💵 $134k-$178k
📍 Remote - United States

Summary

Join Glassdoor's data platform team as a Staff Software Engineer and contribute to the next generation of data infrastructure supporting Glassdoor and Fishbowl products. This transformational role requires 8+ years of software engineering experience and expertise in various technologies, including AWS, Hadoop, and container orchestration. You will introduce best practices, mentor engineers, and champion data quality. The position offers a competitive salary, generous benefits, and a flexible work environment. Glassdoor is committed to diversity, inclusion, and employee growth. The role involves significant responsibility in building and maintaining a scalable and robust data platform.

Requirements

  • A commitment to add to our culture of DEI
  • 8+ years of experience in software, platform, DataOps, or MLOps engineering, or in a similar role
  • Strong data product and business sense to drive decision-making, with the ability to put yourself in the shoes of the end user
  • Strong interpersonal and collaboration skills, with the ability to work effectively across functions and influence decision-making
  • Experience in stakeholder management and building consensus among diverse groups
  • Expertise in containers and container orchestration tools (Docker and/or Kubernetes)
    • K8s cluster management
    • Performance optimization
    • Updating and deploying Helm charts
  • Expertise with CI/CD (GitHub Actions, GitLab CI, Jenkins, etc.) fundamentals and implementation for big data tools
  • Strong AWS cloud fundamentals
    • Experience with IaC (CloudFormation and/or Terraform)
    • EMR, S3, EC2, EKS, ECS, ECR, VPC, IAM, Route 53, Kinesis, Lambda, Glue, and more
  • Hadoop fundamentals
    • HDFS, Hive, Tez, Spark, and more
  • Strong DevOps and SysOps experience
    • Networking experience (VPC, network peering, TCP/IP, subnets, etc.)
    • Monitoring, observability, and alerting with DataDog, CloudWatch, and/or Grafana
  • Strong software fundamentals
    • Hands-on development & writing code (Python, Java, Scala, etc.)
    • Unit testing, mocking, and patching strategies (pytest, unittest, Mockito, etc.)
    • OOP/OOD and software design patterns (factory, facade, builder, adapter, etc.)
    • Experience with UML diagrams
  • Strong data fundamentals
    • Development of custom Airflow operators and libraries
    • Maintaining the Airflow webserver, scheduler, and metastore
    • Maintaining EMR clusters for Hive and Spark workloads
    • Snowflake fundamentals
    • Streaming data with Kafka and transformations with ksqlDB/Flink
    • Experience with TimescaleDB and/or ClickHouse
    • Understanding of data architecture and modeling fundamentals
  • Strong AIOps experience
    • Understanding of the ML lifecycle
    • Understanding of agentic design
    • Proficiency in managing GPU instances, and in managing and monitoring GPUs in cluster environments
    • GPU concurrency & time-slicing
    • Experience working with tools like MLFlow, Kubeflow, KServe, or BentoML
    • ML model registries, DVC, etc.

Responsibilities

  • Introduce best practices in software development across data engineering and machine learning teams, ensuring technical architecture and design decisions support scalability, performance, and maintainability
  • Champion a culture of quality, continuous improvement, and technical mentorship within the team while partnering closely with various other teams across the engineering organization
  • Set and enforce best practices and standards for the data platform team and ensure that our platform runs smoothly while keeping our cloud environment clean
  • Mentor junior and senior engineers, challenging their technical skills to help them grow into well-rounded engineers
  • Conduct regular code and architecture reviews for the data platform team, championing new approaches and refining older ones to keep us at the cutting edge of technology
  • Introduce a robust data quality strategy aimed at preventing poor-quality data from making its way into production

Preferred Qualifications

  • Experience with hosting, managing, and deploying LLMs
    • AWS Bedrock, LangChain, and more
  • Automated batch training of ML models
    • Airflow, KubeFlow, MLFlow, etc.

Benefits

  • Base salary range*: $134,400 - $178,500
  • Annual bonus target**: 10%
  • **Bonuses are paid in 6-month intervals, aligning with bi-annual performance reviews
  • Generous Restricted Stock Units (RSU)
  • ***Restricted Stock Units (RSU) are awarded at hire and may be refreshed annually. Additionally, as a pay-for-performance company, Glassdoor awards further RSU grants to the very top performers
  • Health and wellness: 100% employer-paid premiums for employee medical, dental, vision, life, short and long-term disability and select well-being programs, along with 80% employer-paid premiums for all dependents
  • Generous paid time-off programs for birthing and non-birthing parents are provided, along with additional company-sponsored leaves such as paid injury/illness leave, family emergency leave, compassionate leave, and domestic & sexual violence leave
  • Coverage begins at the start of employment. After 48 months of continuous employment, 100% of all premiums for you and your dependents can be employer-paid!
  • Wellness programs to support mental, physical, and financial health, such as paid career coaching & mental health therapy, financial coaching, subsidized fitness & wellness appointments, HSA, FSA, commuter benefits, fertility & pregnancy support, employee perks and discounts, legal assistance program, gender-affirming care relocation benefit and work-from-home monthly allowance
  • Work/life balance: Open Paid Time Off policy, in addition to 15-20 paid company holidays/year
  • Investing in your future: 401(k) plan with a company match up to $5,000 per year, subsidized fertility & family planning services and discounted legal assistance services
  • Flexible hours and a where-to-work policy
  • Remote work