Remote Data Engineer

Texture

πŸ“Remote - Worldwide

Job highlights

Summary

The job involves designing, constructing, and optimizing data pipelines using technologies such as BigQuery, Redshift, Databricks, or Snowflake. The candidate should have expertise in ELT processes and tools like dbt, experience with big data technologies such as Hadoop, Spark, or Flink, proficiency in SQL and programming languages like Python, and familiarity with real-time analytics and data streaming technologies such as Kafka or Kinesis. They must also prioritize data security and have strong problem-solving skills. The role requires at least 6 years of relevant experience and a proven track record of building and optimizing data pipelines and architectures.

Requirements

  • Expertise in ELT processes and tools like dbt
  • Experience with big data technologies like Hadoop, Spark, or Flink
  • Proficiency in SQL and programming languages like Python
  • Familiarity with data security best practices

Responsibilities

Designing, constructing, and optimizing data pipelines to support various data workflows

Benefits

  • Competitive salary and a compelling benefits package
  • A front-row seat in the revolutionizing of energy data management
  • Ample room for career growth
