Principal Data Engineer

Seamless.AI

πŸ“Remote - Costa Rica

Summary

Join Seamless.AI as a Freelance Principal Data Engineer and design, develop, and maintain scalable ETL pipelines. Collaborate with stakeholders to define data requirements and implement data transformation logic using Python and AWS Glue. Optimize ETL processes for large datasets, applying data matching and aggregation techniques, and ensure compliance with data governance and security best practices. This independent contractor role demands strong Python, Spark, and AWS Glue skills, along with experience in data modeling, data warehousing, and machine learning. The position requires a Bachelor's degree (or equivalent experience), 7+ years of relevant experience, and fluency in both English and Spanish.

Requirements

  • Strong proficiency in Python and experience with related libraries and frameworks (e.g., pandas, NumPy, PySpark)
  • Hands-on experience with AWS Glue or similar ETL tools and technologies
  • Solid understanding of data modeling, data warehousing, and data architecture principles
  • Expertise in working with large data sets, data lakes, and distributed computing frameworks
  • Experience developing and training machine learning models
  • Strong proficiency in SQL
  • Familiarity with data matching, deduplication, and aggregation methodologies (a brief illustrative sketch follows this list)
  • Experience with data governance, data security, and privacy practices
  • Strong problem-solving and analytical skills, with the ability to identify and resolve data-related issues
  • Excellent communication and collaboration skills, with the ability to work effectively and independently
  • Highly organized and self-motivated, with the ability to manage multiple projects and priorities simultaneously
  • Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent work experience
  • 7+ years of experience as a Data Engineer, with a focus on ETL processes and data integration
  • Professional experience developing data pipelines with Spark and AWS
  • Fluency in English and Spanish, as this role involves regular communication with international clients and team members; advanced written and verbal English is expected
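
The matching, deduplication, and aggregation familiarity listed above can be illustrated with a minimal pandas sketch. The sample records, the email-normalization matching key, and the column names below are hypothetical, chosen for illustration only; they do not represent Seamless.AI's actual data or methodology.

```python
import pandas as pd

# Hypothetical contact records containing near-duplicate entries.
contacts = pd.DataFrame(
    {
        "name": ["Ana Mora", "ANA MORA", "Luis Vega"],
        "email": ["ana.mora@example.com", "Ana.Mora@example.com ", "luis@example.com"],
        "company": ["Acme", "Acme", "Globex"],
    }
)

# Matching: normalize email casing and whitespace so variant spellings collide on one key.
contacts["match_key"] = contacts["email"].str.strip().str.lower()

# Deduplication: keep the first record per normalized key.
deduped = contacts.drop_duplicates(subset="match_key")

# Aggregation: count surviving contacts per company.
summary = deduped.groupby("company").size().rename("contact_count").reset_index()
print(summary)
```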

Responsibilities

  • Design, develop, and maintain scalable ETL pipelines to acquire, transform, and load data from various sources into the data ecosystem
  • Work with stakeholders to understand data requirements and propose effective data acquisition and integration strategies
  • Implement data transformation logic using Python and relevant frameworks, ensuring efficiency and reliability
  • Utilize AWS Glue or similar tools to create and manage ETL jobs, workflows, and data catalogs (a minimal Glue job sketch appears after this list)
  • Optimize ETL processes to improve performance and scalability, particularly for large datasets
  • Apply data matching, deduplication, and aggregation techniques to enhance data accuracy and quality
  • Ensure compliance with data governance, security, and privacy best practices within the scope of project deliverables
  • Provide recommendations on emerging technologies and tools that enhance data processing efficiency
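
As referenced above, the sketch below shows the shape of a Glue-based ETL job: the standard AWS Glue PySpark bootstrap, extraction from the Glue Data Catalog, a Spark transformation, and a Parquet load to S3. The database, table, deduplication key, and S3 path are hypothetical placeholders; an actual pipeline for this role would differ.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap: resolve the job name passed in by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read a source table registered in the Glue Data Catalog
# ("contacts_db" and "raw_contacts" are hypothetical names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="contacts_db", table_name="raw_contacts"
)

# Transform: drop to a Spark DataFrame for deduplication and aggregation.
df = source.toDF()
deduped = df.dropDuplicates(["email"])  # naive exact-match dedup on one key
per_company = deduped.groupBy("company").agg(F.count("*").alias("contact_count"))

# Load: write the aggregate back to S3 as Parquet (hypothetical bucket/path).
result = DynamicFrame.fromDF(per_company, glue_context, "per_company")
glue_context.write_dynamic_frame.from_options(
    frame=result,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/contact_counts/"},
    format="parquet",
)
job.commit()
```

Converting the DynamicFrame to a Spark DataFrame mid-job, as done here, is a common pattern when a transformation such as dropDuplicates is more natural in the DataFrame API than in Glue's built-in transforms.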
