Remote Senior Data Engineer


Thoughtworks

πŸ“Remote - Brazil

Job highlights

Summary

Join Thoughtworks and thrive. Together, our extra curiosity, innovation, passion and dedication overcome the ordinary.

Requirements

  • Advanced/fluent English for daily conversation
  • Solid experience as a Data Engineer, with a focus on distributed systems and Big Data
  • Advanced proficiency in Python, PySpark, Databricks and SQL
  • Hands-on experience with AWS services related to Big Data
  • Working with data excites you: you can build and operate data pipelines and maintain data storage, all within distributed systems
  • Hands-on experience with data modeling and modern data engineering tools and platforms
  • Experience in writing clean, high-quality code using the preferred programming language
  • Built and deployed large-scale data pipelines and data-centric applications on distributed storage and distributed processing platforms in a production setting
  • Experience with data visualization techniques and the ability to tailor the communication of insights to the audience
  • Experience with data-driven approaches and the ability to apply data security and privacy strategies to solve business problems
  • Experience with different types of databases and data stores (e.g. SQL, NoSQL, data lakes) and with varied data schemas
  • Understand the importance of stakeholder management and can easily liaise between clients and other key stakeholders throughout projects, ensuring buy-in and gaining trust along the way
  • Resilient in ambiguous situations and can adapt your role to approach challenges from multiple perspectives
  • Don't shy away from risks or conflicts; instead, you take them on and manage them skillfully
  • Eager to coach, mentor and motivate others, and you aspire to influence teammates to take positive action and accountability for their work
  • Enjoy influencing others and always advocate for technical excellence while being open to change when needed

Responsibilities

  • Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
  • Develop intricate data processing pipelines, addressing clients' most challenging problems
  • Collaborate with data scientists to design scalable implementations of their models
  • Write clean, iterative code using TDD and leverage various continuous delivery practices to deploy, support and operate data pipelines (an illustrative sketch of this kind of pipeline code follows this list)
  • Use different distributed storage and computing technologies from the plethora of options available
  • Develop data models by selecting from a variety of modeling techniques and implementing the chosen data model using the appropriate technology stack
  • Collaborate with the team on the areas of data governance, data security and data privacy
  • Incorporate data quality into your day-to-day work
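To give a concrete flavor of the pipeline work described above, here is a minimal, illustrative PySpark sketch; it is not code from Thoughtworks or the posting, and the dataset paths, column names and the build_daily_totals helper are hypothetical. It shows a small batch transformation kept in a pure function so it can be unit-tested, in line with the TDD and continuous delivery practices the role calls for.

```python
# Illustrative sketch only: a small PySpark batch job that reads raw purchase
# events, drops invalid rows, aggregates per-user daily totals and writes Parquet.
# Paths, column names and the build_daily_totals helper are all hypothetical.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F


def build_daily_totals(events: DataFrame) -> DataFrame:
    """Aggregate raw purchase events into per-user daily totals."""
    return (
        events
        .filter(F.col("amount") > 0)                 # discard refunds / malformed rows
        .withColumn("day", F.to_date("event_ts"))    # truncate timestamp to date
        .groupBy("user_id", "day")
        .agg(F.sum("amount").alias("daily_total"))
    )


def main() -> None:
    spark = SparkSession.builder.appName("daily-totals").getOrCreate()
    events = spark.read.parquet("s3://example-bucket/raw/events/")   # hypothetical input
    build_daily_totals(events).write.mode("overwrite").parquet(
        "s3://example-bucket/curated/daily_totals/"                  # hypothetical output
    )
    spark.stop()


if __name__ == "__main__":
    main()
```

Keeping the transformation in a pure function such as build_daily_totals makes it straightforward to unit-test against a local SparkSession before wiring it into a scheduled, continuously delivered pipeline.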


Disclaimer: Please check that the job is real before you apply. Applying might take you to another website that we don't own. Please be aware that any actions taken during the application process are solely your responsibility, and we bear no responsibility for any outcomes.
Please let Thoughtworks know you found this job on JobsCollider. Thanks! 🙏