Remote Senior Data Engineer

Encora

๐Ÿ“Remote - India

Job highlights

Summary

Join our team as a Senior Data Engineer to provide high-quality data sets, build first-class data products, and partner with cross-functional teams. As a key member of our organization, you will be responsible for the data pipelines' SLAs and dependency management, writing technical documentation, and presenting at design reviews.

Requirements

  • Bachelor's degree in Computer Science or a related field
  • 6+ years of experience in commercial software development
  • Experience mentoring and supporting more junior members of the team
  • Experience with Big Data technologies such as Snowflake, Databricks
  • Expert-level skills in writing and optimizing complex SQL
  • Solid experience developing complex ETL processes from concept to implementation, including defining SLAs, performance measurements, and monitoring
  • Advanced query-language and data-exploration skills, with a proven record of writing complex SQL queries across large datasets
  • Hands-on knowledge of the modern AWS Data Ecosystem, including AWS S3 and AWS Lambda
  • Experience with relational and NoSQL databases
  • Experience with programming languages such as Python, Java and/or Scala
  • Proficiency with the Linux command line and system administration
  • Knowledge of cloud data warehouse concepts
  • Excellent verbal and written communication skills; proven interpersonal skills and the ability to convey key insights from complex analyses in summarized business terms
  • Ability to effectively communicate with technical teams
  • Ability to work in a fast-paced and dynamic environment

Responsibilities

  • Providing the organization's data consumers with high-quality data sets through the curation, consolidation, and manipulation of a wide variety of large-scale (terabyte and growing) sources
  • Building first-class data products and ETL processes that interact with terabytes of data on leading platforms such as Snowflake and BigQuery
  • Developing and improving foundational datasets by creating efficient and scalable data models for use across the organization
  • Partnering with our Data Science, Analytics, CRM, and Machine Learning teams
  • Owning the data pipelines' SLAs and dependency management
  • Writing technical documentation for data solutions, and presenting at design reviews
  • Resolving data pipeline failures and implementing anomaly detection
  • Working with teams ranging from Data Science to product owners and front-end developers on tracking solutions and solving technical challenges

Preferred Qualifications

While not mandatory, experience building and operating data pipelines and products in line with the data mesh philosophy would be beneficial, as would demonstrated proficiency in handling data: data lineage, data quality, data observability, and data discoverability.