Senior Data Engineer

YipitData

๐Ÿ“Remote - India

Summary

Join YipitData's dynamic Data Engineering team as a Senior Data Engineer. This remote role, based in India, involves building and optimizing data pipelines in Databricks using Spark and SQL. You will collaborate with analysts and stakeholders, mentor team members, and define technical standards for the team. The ideal candidate has 6+ years of data engineering experience, a strong command of Spark and SQL, and a proven track record of setting best practices for data modeling, governance, and CI/CD. This high-impact role offers opportunities for growth and learning within a fast-paced, innovative environment. You will report to the Senior Manager of Data Engineering and receive hands-on training with cutting-edge data tools and techniques. The position requires flexibility to accommodate meetings with US and LatAm teams 2-3 days per week.

Requirements

  • You hold a Bachelor's or Master's degree in Computer Science, STEM, or a related technical discipline
  • You have 6+ years of experience as a Data Engineer or in other technical functions
  • You are excited about solving data challenges and learning new skills
  • You have a proven track record of setting best practices for data modeling, governance, and CI/CD
  • You are comfortable working with large-scale datasets using PySpark, Delta, and Databricks
  • You are comfortable steering technical roadmaps and negotiating priorities with senior stakeholders
  • You can balance long-term architectural thinking with hands-on execution under tight timelines
  • You are eager to constantly learn new technologies
  • You are a self-starter who enjoys working collaboratively with stakeholders
  • You have exceptional verbal and written communication skills

Responsibilities

  • Report directly to the Senior Manager of Data Engineering, who will provide significant, hands-on training on cutting-edge data tools and techniques
  • Own the design, build, and continuous improvement of end-to-end data pipelines and the surrounding platform
  • Define and evangelize technical standards (coding, data-modeling, monitoring) for the team
  • Mentor and pair-program with other engineers; provide architectural and code reviews
  • Lead deep-dive performance-tuning initiatives (Spark UI analysis, cluster configuration, shuffle reduction)
  • Evaluate and introduce new Databricks / AWS capabilities that improve reliability or efficiency
  • Translate ambiguous business questions into scalable, production-ready data solutions with clear SLAs
  • Produce high-quality documentation, architecture diagrams, and internal training sessions
