Remote Senior Data Engineer

Encora

📍Remote - India

Job highlights

Summary

Join our team at Encora as a Data Engineer to develop robust data models, design scalable semantic layers, and collaborate with cross-functional teams to deliver high-quality data products.

Benefits

Work from home

Job description

Important Information

Location: PAN India

Experience: 8+ Years

Job Mode: Full-time

Work Mode: Work from home

Responsibilities and Duties

  • Develop robust and scalable data models that support analytics, data science, reporting and other data workflows. Ensure data models adhere to best practices for maintainability, performance, and security.
  • Design, build and maintain the semantic layer in our cloud-native warehouse that enables self-service analytics and data democratization. Create reusable, business-friendly data abstractions that empower stakeholders across the organization.
  • Identify opportunities to leverage new data technologies and methodologies. Collaborate with cross-functional teams to prototype, test, and implement innovative data solutions that enhance data quality, accessibility, and usability.
  • Design, implement, and maintain automated orchestration of our data workflows that support transformation across several layers in our warehouse, ensuring data integrity and data availability while optimizing for performance at scale (an illustrative orchestration sketch follows this list).
  • Partner closely with data analysts, data scientists, and other stakeholders to understand business requirements, translate them into engineering specifications, and deliver high-quality data products.
  • Implement data governance standards and best practices to ensure data is accurate, consistent, and secure. Support compliance with relevant data privacy and security regulations.
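
To make the orchestration responsibility above concrete, here is a minimal sketch of one plausible shape for such a workflow, using Apache Airflow to run layered dbt transformations. This is an illustration only; the DAG name, schedule, dbt selectors, and project path are assumptions, not details taken from this posting.

```python
# Minimal sketch of layered warehouse orchestration with Apache Airflow.
# DAG name, schedule, dbt selectors, and project path are illustrative
# assumptions, not details from this posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="warehouse_transformations",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Each layer runs as a dbt selector, so downstream models build only
    # after the upstream layer has completed successfully.
    staging = BashOperator(
        task_id="dbt_run_staging",
        bash_command="dbt run --select staging --project-dir /opt/dbt",
    )
    marts = BashOperator(
        task_id="dbt_run_marts",
        bash_command="dbt run --select marts --project-dir /opt/dbt",
    )
    tests = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt",
    )

    # Enforce ordering: staging -> marts -> tests, so each layer's data
    # integrity is verified before the next depends on it.
    staging >> marts >> tests
```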

Qualifications and Skills

  • Bachelor’s degree in Computer Science or related field
  • 6+ years of experience in commercial data engineering or software development

Role-specific Requirements

  • Extensive data modeling experience working with cloud-based data warehousing solutions at scale, particularly Snowflake. Knowledge of data architecture, different data modeling approaches, performance optimization, and cost management.
  • Deep expertise in writing and optimizing complex SQL queries, including performance tuning, clustering strategies, and troubleshooting (see the illustrative sketch after this list).
  • Expertise in creating and maintaining scalable and efficient data models using tools like dbt. Experience in developing star and snowflake schemas, as well as implementing best practices for data modeling and transformation.
  • Strong experience in building, maintaining, and optimizing ETL/ELT pipelines using tools like dbt, Airflow, Matillion, or similar. Familiarity with data orchestration, data quality monitoring, automation, and SLAs.
  • Proficiency in at least one programming language such as Python or Java for data manipulation, automation, and pipeline development.
  • Hands-on experience with cloud platforms such as AWS, Azure, or GCP, particularly with services like S3, Lambda, Glue, and other data-related services.
  • Experience with data visualization, data front-end, and BI tools such as Tableau, Looker, or Streamlit to build and support the semantic layer and self-service analytics.
  • Familiarity with CI/CD and version control systems like Git to automate deployment and testing of data models and pipelines.
  • Understanding of data governance principles, data privacy regulations, and security best practices. Experience implementing role-based access controls and encryption strategies in data environments.
  • Experience working in Agile environments and collaborating with cross-functional teams using tools like JIRA or Confluence.
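
As an illustration of the SQL performance and clustering expertise listed above, the following minimal Python sketch applies a clustering key to a Snowflake table and inspects the result via the standard Snowflake connector. The account, credentials, warehouse, table, and column names are all hypothetical.

```python
# Minimal sketch of a Snowflake clustering change applied from Python.
# Account, credentials, table, and column names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # hypothetical account identifier
    user="my_user",
    password="...",         # in practice, pull from a secrets manager
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="MARTS",
)

try:
    cur = conn.cursor()
    # Cluster a large fact table on the columns most queries filter by,
    # so Snowflake can prune micro-partitions instead of scanning them all.
    cur.execute("ALTER TABLE fct_orders CLUSTER BY (order_date, region)")
    # Report how well the table is clustered on those keys (JSON string).
    cur.execute(
        "SELECT SYSTEM$CLUSTERING_INFORMATION('fct_orders', '(order_date, region)')"
    )
    print(cur.fetchone()[0])
finally:
    conn.close()
```

In practice, a clustering decision like this would be weighed against query profiles and reclustering cost before being applied.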

Communication/people skills

  • Excellent verbal and written communication skills.
  • Proven interpersonal skills and ability to convey key insights from complex analyses in summarized business terms.
  • Ability to build trust and effectively communicate with technical teams.
  • Strong interpersonal skills and the ability to work in a fast-paced and dynamic environment.
  • Ability to make progress on projects independently and enthusiasm for solving difficult problems.

About Encora

Encora is the preferred digital engineering and modernization partner of some of the world’s leading enterprises and digital native companies. With over 9,000 experts in 47+ offices and innovation labs worldwide, Encora’s technology practices include Product Engineering & Development, Cloud Services, Quality Engineering, DevSecOps, Data & Analytics, Digital Experience, Cybersecurity, and AI & LLM Engineering.

At Encora, we hire professionals based solely on their skills and qualifications, and do not discriminate based on age, disability, religion, gender, sexual orientation, socioeconomic status, or nationality.
