Senior Azure Data Engineer

Xebia Poland

๐Ÿ“Remote - Worldwide

Summary

Join Xebia, a global leader in digital solutions, and become a key member of our team. We are seeking a highly experienced Data Engineer with a strong background in Azure and data processing. You will be responsible for designing, building, and deploying at-scale infrastructure, developing architecture patterns, and ensuring efficient software development. This role requires extensive experience with data engineering, Azure services, and various programming languages. The ideal candidate will also possess strong communication skills and a collaborative work ethic. We offer a challenging and rewarding work environment with opportunities for professional growth.

Requirements

  • Be available to start within a short time frame (a maximum of one month's notice)
  • Have 8 years' experience with data engineering
  • Have 2+ years' experience with Azure (Data Factory, Databricks, SQL, Data Lake, Power BI, DevOps, Delta Lake, Cosmos DB)
  • Possess strong T-SQL skills
  • Possess Python scripting skills
  • Have hands-on experience with ETL tools and processes
  • Possess strong verbal and written communication skills in English
  • Work from the European Union region and have a work permit

Responsibilities

  • Be responsible for at-scale infrastructure design, build, and deployment, with a focus on distributed systems
  • Build and maintain architecture patterns for data processing, workflow definitions, and system-to-system integrations using Big Data and Cloud technologies
  • Evaluate and translate technical designs into workable technical solutions/code and technical specifications on par with industry standards
  • Drive the creation of reusable artifacts
  • Write efficient and well-organized software to ship products in an iterative, continual release environment
  • Contribute to and promote good software engineering practices across the team
  • Communicate clearly and effectively to technical and non-technical audiences
  • Define data retention policies
  • Monitor performance and advise on any necessary infrastructure changes

Preferred Qualifications

  • Have experience with data transformation tools such as Databricks and Spark
  • Have experience with data manipulation libraries (such as Pandas, NumPy, PySpark)
  • Have working knowledge of GitHub Actions
