Senior Azure Data Engineer

Xebia Poland

๐Ÿ“Remote - Worldwide

Summary

Join Xebia, a global leader in digital solutions, and become a key member of our team. You will be responsible for designing, building, and deploying large-scale infrastructure, focusing on distributed systems and utilizing Big Data and Cloud technologies. Your expertise in data engineering, Azure, and SQL will be crucial in building and maintaining architecture patterns for data processing and system integrations. You will translate technical designs into workable solutions, create reusable artifacts, and write efficient software. Excellent communication skills and a collaborative approach are essential. The ideal candidate will have 8+ years of data engineering experience and 2+ years of experience with Azure.

Requirements

  • Be available to start within a short time frame (a maximum of one month's notice)
  • Have 8+ years' experience with data engineering
  • Have 2+ years' experience with Azure (Data Factory, Databricks, SQL, Data Lake, Power BI, DevOps, Delta Lake, Cosmos DB)
  • Possess strong T-SQL skills
  • Possess Python scripting skills
  • Have hands-on experience with ETL tools and processes
  • Possess strong verbal and written communication skills in English
  • Be based in the European Union region and hold a valid work permit

Responsibilities

  • Be responsible for at-scale infrastructure design, build, and deployment, with a focus on distributed systems
  • Build and maintain architecture patterns for data processing, workflow definitions, and system-to-system integrations using Big Data and Cloud technologies
  • Evaluate and translate technical designs into workable solutions, code, and technical specifications on par with industry standards
  • Drive the creation of reusable artifacts
  • Write efficient and well-organized software to ship products in an iterative, continual release environment
  • Contribute to and promote good software engineering practices across the team
  • Communicate clearly and effectively to technical and non-technical audiences
  • Define data retention policies
  • Monitor performance and advise on any necessary infrastructure changes

Preferred Qualifications

  • Have experience with data transformation tools (such as Databricks and Spark)
  • Have experience with data manipulation libraries (such as Pandas, NumPy, PySpark)
  • Have working knowledge of GitHub Actions
