Data Engineer II

HashiCorp

πŸ’΅ $79k-$93k
πŸ“Remote - Canada

Summary

Join HashiCorp's Data Analytics & Engineering team as a mid-level engineer! Your mission will be to oversee and govern the expansion of the existing data architecture, optimize data query performance, and develop scalable data pipelines. You will collaborate with analytics and business teams to improve data models, implement data quality monitoring processes, and design data integrations. This role requires proficiency in Snowflake, AWS cloud services, ETL/ELT pipelines, and Python/Go. You'll also contribute to the engineering wiki and documentation. The ideal candidate possesses a Bachelor's or Master's degree in a related field and relevant experience.

Requirements

  • Bachelor's or Master's degree in computer engineering, computer science, or a related area
  • Experience developing and deploying data pipelines, preferably in the cloud
  • Minimum 2 years of experience with Snowflake: Snowflake SQL, Snowpipe, Streams, Stored Procedures, Tasks, hashing, Row-Level Security, Time Travel, etc. (a Snowpark sketch follows this list)
  • Hands-on experience with Snowpark, including app development with Snowpark and Streamlit
  • Proficiency in ETL and ELT data pipelines and the associated concepts and terminology in pure SQL, such as SCD dimensions and delta processing
  • Experience working with AWS cloud services: S3, Lambda, Glue, Athena, IAM, CloudWatch
  • Hands-on experience developing and maintaining RESTful APIs with cloud technologies such as AWS API Gateway and AWS Lambda (a Lambda sketch also follows this list)
  • Experience building pipelines for real-time and near-real-time integration across different data sources: flat files, XML, JSON, Avro files, and databases
  • Fluency in Python or Go, with the ability to write maintainable, reusable, and complex functions for backend data processing
  • Strong written and oral communication skills, with the ability to synthesize, simplify, and explain complex problems to different audiences
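
To make the Snowflake expectations above concrete, here is a minimal Snowpark sketch of the kind of small ELT step the role describes. The connection parameters, the RAW_EVENTS table, and its columns are hypothetical placeholders, not HashiCorp's actual schema.

```python
# A minimal Snowpark ELT step: read a raw table, aggregate, persist.
# All names and connection values below are illustrative placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, count

# In practice these would come from a secrets manager, never source code.
connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}

session = Session.builder.configs(connection_parameters).create()

# Aggregate raw events by type and write the result back to Snowflake,
# the typical shape of a small ELT transform run via Snowpark.
events = session.table("RAW_EVENTS")
summary = (
    events.filter(col("EVENT_TS").is_not_null())
    .group_by(col("EVENT_TYPE"))
    .agg(count(col("EVENT_ID")).alias("EVENT_COUNT"))
)
summary.write.save_as_table("EVENT_TYPE_SUMMARY", mode="overwrite")
session.close()
```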
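
The RESTful API bullet points at a common serverless pattern: an AWS Lambda function behind API Gateway that lands incoming JSON records in S3 for later pipeline pickup. This is a generic sketch under assumed names (the LANDING_BUCKET environment variable and key prefix are placeholders), not the team's actual integration.

```python
# A minimal Lambda handler for an API Gateway proxy integration that
# writes incoming JSON payloads to S3. Bucket and prefix are hypothetical.
import json
import os
import uuid
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = os.environ.get("LANDING_BUCKET", "example-landing-bucket")


def handler(event, context):
    """Validate the request body and land it in S3 as a raw JSON record."""
    try:
        payload = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    # Partition keys by date so downstream jobs (e.g. Glue/Athena) can prune.
    now = datetime.now(timezone.utc)
    key = f"raw/{now:%Y/%m/%d}/{uuid.uuid4()}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(payload))

    return {"statusCode": 202, "body": json.dumps({"stored_as": key})}
```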

Responsibilities

  • Oversee and govern the expansion of the existing data architecture and the optimization of data query performance via best practices
  • Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity
  • Collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization
  • Implement processes and systems to monitor data quality, ensuring production data is always reliable and available to the key stakeholders and business processes that depend on it (see the sketch after this list)
  • Write unit/integration tests, contribute to the engineering wiki, and document work
  • Perform the data analysis required to troubleshoot data-related issues and assist in their resolution
  • Design data integrations and a data quality framework
  • Design and evaluate open source and vendor tools for data lineage
  • Work closely with all business units and engineering teams to develop a strategy for the long-term data platform architecture
  • Develop best practices for data structures to ensure consistency within the system
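
As a flavor of the data quality monitoring responsibility above, here is a minimal sketch of a scheduled check run against Snowflake via Snowpark. The ORDERS table, its columns, and the 1% threshold are illustrative assumptions, not a prescribed framework.

```python
# A minimal data-quality check, assuming an existing Snowpark session and
# a hypothetical ORDERS table with ORDER_ID, CUSTOMER_ID, ORDER_TS columns.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, count, max as max_, when


def check_orders_quality(session: Session) -> list[str]:
    """Return a list of human-readable failures; an empty list means healthy."""
    failures = []
    orders = session.table("ORDERS")

    row = orders.agg(
        max_(col("ORDER_TS")).alias("LATEST_TS"),
        count(when(col("CUSTOMER_ID").is_null(), 1)).alias("NULL_CUSTOMERS"),
        count(col("ORDER_ID")).alias("TOTAL_ROWS"),
    ).collect()[0]

    # Presence: an empty table means upstream loads have stalled. A real
    # check would also compare LATEST_TS against a freshness SLA.
    if row["LATEST_TS"] is None:
        failures.append("ORDERS is empty")

    # Completeness: flag if more than 1% of rows are missing a customer id.
    if row["TOTAL_ROWS"] and row["NULL_CUSTOMERS"] / row["TOTAL_ROWS"] > 0.01:
        failures.append("CUSTOMER_ID null rate above 1%")

    return failures
```

A check like this would typically run as a Snowflake Task or an orchestrated job, with failures routed to alerting so stakeholders learn about bad data before their dashboards do.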

Preferred Qualifications

Front-end development with Python is good to have but not necessary (see the Streamlit sketch below).
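
Given that the requirements mention app development with Snowpark and Streamlit, "front-end development with Python" here most likely means small Streamlit data apps. A minimal, generic sketch follows; the table and column names are hypothetical, and st.connection assumes Snowflake credentials configured in .streamlit/secrets.toml.

```python
# A minimal Streamlit app over a Snowflake table; run with:
#   streamlit run app.py
# Table and column names are hypothetical placeholders.
import streamlit as st

st.title("Event volume by type")

# st.connection("snowflake") reads credentials from .streamlit/secrets.toml
# and exposes a Snowpark session for queries.
conn = st.connection("snowflake")
session = conn.session()

df = session.table("EVENT_TYPE_SUMMARY").to_pandas()
st.bar_chart(df, x="EVENT_TYPE", y="EVENT_COUNT")
st.dataframe(df)
```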

Benefits

  • $110,500 β€” $130,000 CAD
