Senior/Lead Data Engineer

Mira Search

📍Remote - Poland

Summary

Join Mira Search, a top HR agency in Dubai, for an exciting opportunity with a leading global management consulting firm. This Senior/Lead Data Engineer role involves leading a data migration to Databricks, developing ETL pipelines in Python, designing database schemas, and implementing CI/CD workflows with GitHub Actions. You will ensure data quality, create insightful visualizations, and collaborate effectively with a range of stakeholders. The ideal candidate has 5+ years of data engineering experience, strong SQL Server and Databricks expertise, and proficiency in Python and data visualization tools. The position offers flexible working time, fully remote work, and a range of benefits.

Requirements

  • 5+ years of experience in data engineering
  • Strong expertise in SQL Server and Databricks – hands-on experience with data warehouses
  • Proficiency in Python – ETL development, data processing, and manipulation
  • Data visualization – experience with Python libraries (e.g., Matplotlib, Seaborn) and Power BI
  • CI/CD knowledge – familiarity with automation tools, especially GitHub Actions
  • T-SQL – ability to write complex queries efficiently
  • AWS services for data engineering – experience working with cloud-based data solutions
  • Leadership skills – ability to mentor junior engineers and take ownership of projects
  • Strong communication skills – capable of interacting with both technical and non-technical stakeholders
  • Problem-solving and analytical mindset – ability to identify and resolve complex data challenges
  • Ability to work independently and in an Agile team – adaptability to dynamic environments
  • Organizational skills – ability to manage multiple tasks and projects simultaneously
  • English level B2 or higher – ability to communicate effectively in an international team

Responsibilities

  • Lead the migration and optimization of one of our products from SQL Server to Databricks, ensuring a seamless transition and minimal disruption to operations
  • Design, develop, and maintain ETL pipelines in Python to support data ingestion, transformation, and loading processes (a minimal sketch of such a pipeline follows this list)
  • Create and optimize database schemas to support efficient data storage and retrieval
  • Implement and manage CI/CD workflows using GitHub Actions to automate testing, deployment, and monitoring of data pipelines
  • Ensure data quality through validation, cleansing, and monitoring processes, working closely with data scientists to meet analytical needs (an illustrative validation check follows the pipeline sketch below)
  • Utilize Python visualization libraries such as Matplotlib and Seaborn, as well as Power BI, to create insightful data visualizations for stakeholders
  • Maintain clear and effective communication with technical and non-technical team members, including business stakeholders, data scientists, and IT leadership
  • Track and manage tasks using tools like Jira to ensure timely and efficient project delivery
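
The ETL bullet above is central to the role. As a rough illustration only, the sketch below shows one common shape such a pipeline can take: extract rows from SQL Server with pyodbc and pandas, apply light cleansing, and stage Parquet files that Databricks can then ingest into a Delta table (for example via COPY INTO or Auto Loader). The table name, connection-string variable, and staging path are hypothetical placeholders, not details from this posting.

```python
"""Minimal ETL sketch (illustrative only, not the firm's actual pipeline).

All names are hypothetical placeholders: the table dbo.orders, the
MSSQL_CONN_STR environment variable, and the /mnt/staging/orders path.
"""
import os

import pandas as pd
import pyodbc


def extract(conn_str: str) -> pd.DataFrame:
    # Pull source rows from SQL Server (hypothetical table and columns).
    query = "SELECT order_id, customer_id, amount, created_at FROM dbo.orders"
    with pyodbc.connect(conn_str) as conn:
        return pd.read_sql(query, conn)


def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Light cleansing: de-duplicate, coerce timestamps, derive a date column.
    df = df.drop_duplicates(subset=["order_id"])
    df["created_at"] = pd.to_datetime(df["created_at"], errors="coerce")
    df = df.dropna(subset=["created_at"])
    df["order_date"] = df["created_at"].dt.date
    return df


def load(df: pd.DataFrame, staging_path: str) -> None:
    # Stage Parquet files; Databricks can ingest them from cloud storage
    # into a Delta table (e.g. with COPY INTO or Auto Loader).
    df.to_parquet(os.path.join(staging_path, "orders.parquet"), index=False)


if __name__ == "__main__":
    conn_str = os.environ["MSSQL_CONN_STR"]  # placeholder, set by the runner
    load(transform(extract(conn_str)), staging_path="/mnt/staging/orders")
```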
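
For the data-quality bullet, one lightweight pattern is to run explicit validation checks before loading and fail the run when they do not pass. The check below is a sketch against the same hypothetical orders schema as the pipeline above; real rules would come from the product's data and the data scientists' requirements.

```python
"""Illustrative data-quality checks; the rules and schema are hypothetical."""
import pandas as pd


def validate_orders(df: pd.DataFrame) -> dict:
    # Collect simple metrics that could gate a pipeline run before loading.
    report = {
        "row_count": len(df),
        "duplicate_order_ids": int(df["order_id"].duplicated().sum()),
        "null_customer_ids": int(df["customer_id"].isna().sum()),
        "negative_amounts": int((df["amount"] < 0).sum()),
    }
    report["passed"] = all(
        report[key] == 0
        for key in ("duplicate_order_ids", "null_customer_ids", "negative_amounts")
    )
    return report


if __name__ == "__main__":
    sample = pd.DataFrame(
        {"order_id": [1, 2, 2], "customer_id": [10, None, 12], "amount": [9.5, -1.0, 3.0]}
    )
    print(validate_orders(sample))  # flags the duplicate id, null customer, and negative amount
```

In practice such checks could run as a gating step in the same GitHub Actions workflow that tests and deploys the pipelines.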

Preferred Qualifications

  • Data Quality & Data Governance – understanding of best practices in data management
  • Power BI / Tableau – experience with BI tools for reporting and visualization
  • Agile methodologies (Scrum/Kanban) – experience working with Jira or similar tools

Benefits

  • Flexible working time
  • Professional and ambitious team
  • Learning opportunities, seminars and conferences, and time to explore new technologies
  • Full remote work
  • Partial coverage of medical insurance (Luxmed) and a MultiSport card
  • Co-funding for language courses (Polish and English)
