Principal Data Engineer
Seamless.AI
Remote - United States
Please let Seamless.AI know you found this job on JobsCollider. Thanks!
Job highlights
Summary
Join Seamless.AI as a Principal Data Engineer and leverage your expertise in Python, Spark, AWS Glue, and ETL technologies to design, develop, and maintain robust data pipelines. Collaborate with cross-functional teams, optimize ETL processes, and apply data matching methodologies. You will need a proven track record in data acquisition and transformation, experience with large datasets, and strong organizational skills. This role requires a Bachelor's degree in a related field and 7+ years of experience as a Data Engineer with a focus on ETL processes and data integration. Seamless.AI offers a dynamic work environment and has been recognized for its growth and workplace culture.
Requirements
- Strong proficiency in Python and experience with related libraries and frameworks (e.g., pandas, NumPy, PySpark)
- Hands-on experience with AWS Glue or similar ETL tools and technologies
- Solid understanding of data modeling, data warehousing, and data architecture principles
- Expertise in working with large data sets, data lakes, and distributed computing frameworks
- Strong proficiency in SQL
- Familiarity with data matching, deduplication, and aggregation methodologies
- Experience with data governance, data security, and privacy practices
- Strong problem-solving and analytical skills, with the ability to identify and resolve data-related issues
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams
- Highly organized and self-motivated, with the ability to manage multiple projects and priorities simultaneously
- Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent years of work experience
- 7+ years of experience as a Data Engineer, with a focus on ETL processes and data integration
- Professional experience with Spark and AWS pipeline development required
Responsibilities
- Design, develop, and maintain robust and scalable ETL pipelines to acquire, transform, and load data from various sources into our data ecosystem
- Collaborate with cross-functional teams to understand data requirements and develop efficient data acquisition and integration strategies
- Implement data transformation logic using Python and other relevant programming languages and frameworks
- Utilize AWS Glue or similar tools to create and manage ETL jobs, workflows, and data catalogs
- Optimize and tune ETL processes for improved performance and scalability, particularly with large data sets
- Apply methodologies and techniques for data matching, deduplication, and aggregation to ensure data accuracy and quality
- Implement and maintain data governance practices to ensure compliance, data security, and privacy
- Collaborate with the data engineering team to explore and adopt new technologies and tools that enhance the efficiency and effectiveness of data processing
Preferred Qualifications
- Experience developing and training machine learning models