Terazo is hiring a Senior Data Engineer

πŸ’΅ ~$200k
πŸ“Remote - United States

Summary

Terazo, a software and platform development firm, is hiring a Data Platform Engineer. The role involves partnering with clients to develop data platforms, creating documentation, and collaborating with API developers. Required skills include designing and building ETL pipelines, data warehouses, RESTful microservices, dimensional data models, and cloud-based streaming and ingestion services, along with using Git, GitHub, or GitLab in a CI/CD development workflow.

Requirements

  • Designing and building ETL pipelines that leverage cloud platform services such as Azure Data Factory, AWS Data Pipeline, AWS Glue, or GCP Dataflow
  • Designing and building data warehouses within cloud-based analytics services such as Azure Synapse, AWS Redshift, BigQuery, Azure Databricks, or Snowflake
  • Developing RESTful microservices using languages like Python, Java, Go, or JavaScript
  • Designing and implementing dimensional data models within cloud-based data warehouses, including schema design and data modeling with SQLAlchemy and Liquibase
  • Designing and implementing cloud-based streaming and ingestion services such as Kafka, Kinesis, or ActiveMQ
  • Communicating complex ideas with clients and technical staff
  • Using Git, GitHub, or GitLab in a CI/CD development workflow
  • Using automated deployment services such as Jenkins, Azure DevOps, or AWS CodePipeline
  • Writing effective technical documentation

Responsibilities

  • Partner with clients to develop and maintain first-class data platforms utilizing cloud data stores and other data science tools
  • Develop streaming data processors that crunch numbers in real time to help our clients make smart decisions
  • Collaborate with API developers to build data-driven microservices for our clients
  • Create valuable documentation and training material to help clients understand our work

Preferred Qualifications

  • Understanding of distributed computing frameworks such as Apache Spark and Hadoop
  • Understanding of CI/CD technologies such as Jenkins, CircleCI, Azure DevOps, or AWS CodePipeline
  • Ability to write scalable PySpark modules
  • Familiarity with queuing services such as Kafka, Kinesis, or ActiveMQ
  • Familiarity with languages such as Scala, Go, or R
  • Ability to build infrastructure-as-code and configuration-as-code modules using Terraform, Ansible, Chef, Puppet, etc.

Benefits

  • Competitive salary
  • Open paid time off policy
  • Health, dental, and vision insurance
  • 401(k) with company match
  • Family Leave
  • Group life insurance and more!
