Remote Big Data Engineer

Nagarro

📍 Remote - Romania

Job highlights

Summary

Join us at Nagarro as a Senior Data Engineer to collaborate with the Business Analytics & Insights team in delivering high-quality data solutions, leading technical planning for data migration, and designing scalable data pipelines.

Job description

Company Description

👋🏼 We’re Nagarro.

We are a digital product engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale, across all devices and digital media, and our people work all over the world (18,500+ experts across 36 countries, to be exact). Our work culture is dynamic and non-hierarchical. We’re looking for great new colleagues. That’s where you come in!

By this point in your career, it is not just about the tech you know or how well you can code. It is about what more you want to do with that knowledge. Can you help your teammates move in the right direction? Can you tackle the challenges our clients face while pushing our solutions one step further? If so, you may be ready to join us.

Job Description

In this role, you will work with our Business Analytics & Insights (BAI) team, collaborating with cross-functional teams to deliver high-quality data solutions in areas such as Supply Chain, Finance, Operations, Customer Experience, HR, Risk Management, and Global IT.

Responsibilities:

  • Lead the technical planning for data migration, including data ingestion, transformation, storage, and access control within Azure Data Factory and Azure Data Lake.
  • Design and implement scalable, efficient data pipelines to ensure smooth data movement from multiple sources using Azure Databricks.
  • Develop reusable frameworks for the ingestion of large datasets.
  • Ensure data quality and integrity by implementing robust validation and cleansing mechanisms throughout the data pipeline (a minimal sketch follows this list).
  • Work with event-based/streaming technologies to ingest and process data in real time.
  • Provide technical support to the team, resolving challenges during the migration and post-migration phases.
  • Stay current with the latest advancements in cloud computing, data engineering, and analytics technologies; recommend best practices and industry standards for data lake solutions.
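
To give a flavor of the validation and cleansing work described above, here is a minimal PySpark sketch of the kind of step that might run in an Azure Databricks job. The lake paths, column names, and quality rules are illustrative assumptions, not details from this posting:

```python
# Illustrative sketch only: the lake paths, columns, and rules below are
# assumptions for demonstration, not details taken from this posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-cleansing-sketch").getOrCreate()

# Ingest a raw dataset landed in the data lake (hypothetical path).
raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/orders/")

# Validation: flag rows that violate simple quality rules.
checked = raw.withColumn(
    "is_valid",
    F.col("order_id").isNotNull()
    & F.col("amount").between(0, 1_000_000)
    & F.to_date("order_date").isNotNull(),
)

# Cleansing: normalize text fields and de-duplicate on the business key.
clean = (
    checked.filter(F.col("is_valid"))
    .withColumn("customer_name", F.trim(F.initcap("customer_name")))
    .dropDuplicates(["order_id"])
    .drop("is_valid")
)

# Quarantine bad rows for later inspection rather than silently dropping them.
checked.filter(~F.col("is_valid")).write.mode("append").parquet(
    "abfss://quarantine@examplelake.dfs.core.windows.net/orders/"
)
clean.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/"
)
```

In practice, a step like this would typically be triggered from an Azure Data Factory pipeline, with the streaming responsibilities handled by a similar flow built on Spark Structured Streaming.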

Qualifications

  • 8+ years of IT experience.
  • Minimum of 4 years working with Azure Databricks.
  • Proficiency in Data Modeling and Source System Analysis.
  • Strong knowledge of PySpark and SQL.
  • Experience with Azure components: Data Factory, Data Lake, SQL Data Warehouse (DW), and Azure SQL.
  • Experience with Python for data engineering purposes.
  • Ability to conduct data profiling, cataloging, and mapping for technical design and construction of data flows.
  • Familiarity with data visualization/exploration tools.
