Remote Staff Data Engineer


ASAPP

πŸ“Remote - United States

Job highlights

Summary

Join our team at ASAPP as a Staff Data Engineer to design, build, and maintain our mission-critical core data infrastructure and analytics platform. Work with talented researchers, engineers, scientists, and specialists to solve complex problems.

Requirements

  • 12+ years of experience in software development and/or DevOps/SRE roles on AWS
  • 5+ years of experience in data engineering, data systems, and pipeline/stream processing
  • Expertise in at least one flavor of SQL, e.g. Redshift, Postgres, MySQL, Presto/Trino, Spark SQL, Hive
  • Proficiency in one or more high-level programming languages; we use Python, Scala, Java, Kotlin, and Go
  • Experience with CI/CD (continuous integration and deployment)
  • Experience with workflow management systems such as Airflow, Oozie, Luigi, or Azkaban
  • Experience implementing data governance, e.g., access management policies, data retention, IAM
  • Confidence operating in a DevOps capacity with AWS, Kubernetes, Jenkins, Terraform, and similar declarative infrastructure tooling, with attention to automation, alerting, monitoring, and security

Responsibilities

  • Design and deploy improvements to our mission-critical production data pipelines, data warehouses, and data systems
  • Recognize data-flow patterns and opportunities for generalization, automating as much as possible to drive productivity gains
  • Expand our logging and monitoring processes to discover and resolve anomalies before they become problems
  • Develop state-of-the-art automation and data solutions in Python, Spark, and Flink
  • Maintain, manage, and monitor our infrastructure, including Kafka, Kubernetes, Spark, Flink, Jenkins, general OLAP and RDBMS databases, and S3 buckets and their permissions
  • Increase the efficiency, accuracy, and repeatability of our ETL processes

Preferred Qualifications

  • Bachelor's Degree in a field of science, technology, engineering, or math, or equivalent hands-on experience
  • Experience in maintaining and managing Kafka (not just using it)
  • Experience in maintaining and managing OLAP/HA database systems (not just using them)
  • Familiarity with operating Kubernetes clusters for varied jobs, applications, and high-throughput workloads
  • Technical knowledge of data exchange and serialization formats such as Protobuf, Avro, or Thrift
  • Experience creating and deploying Spark (Scala) and/or Flink applications



