Summary
Join Wizeline, a global digital services company, as a Data Engineer. You will build and maintain scalable data pipelines, focusing on data migration projects with large datasets. This role requires strong Snowflake experience, proficiency in dbt and Apache Airflow, and expert SQL skills. Proficiency in Python and familiarity with data warehousing tools such as PySpark, AWS Athena, and Google BigQuery are also essential. You will work collaboratively in an agile environment, contributing to CI/CD pipelines and using version control systems. Fluency in English and Portuguese is required.
Requirements
- Hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related quantitative field
- Have 3+ years of experience in data engineering, with a focus on building and maintaining scalable data pipelines
- Possess solid experience with data migration projects and working with large datasets
- Demonstrate strong hands-on experience with Snowflake, including data loading, querying, and performance optimization
- Show proficiency in dbt (data build tool) for data transformation and modeling
- Have proven experience with Apache Airflow for scheduling and orchestrating data workflows
- Possess expert-level SQL skills, including complex joins, window functions, and performance tuning
- Show proficiency in Python for data manipulation, scripting, and automation for edge cases
- Demonstrate familiarity with PySpark, AWS Athena, and Google BigQuery (as source systems)
- Show understanding of data warehousing concepts, dimensional modeling, and ELT principles
- Have knowledge of building CI/CD pipelines for code deployment
- Have experience with version control systems (e.g., GitHub)
- Possess excellent problem-solving, analytical, and communication skills
- Be able to work independently and as part of a collaborative team in an agile environment
- Speak and write English fluently and communicate effectively
- Be proficient in Portuguese
Responsibilities
- Build and maintain scalable data pipelines focusing on data migration projects with large datasets
- Work with Snowflake, including data loading, querying, and performance optimization
- Use dbt (data build tool) for data transformation and modeling
- Utilize Apache Airflow for scheduling and orchestrating data workflows (a minimal sketch follows this list)
- Employ expert-level SQL skills, including complex joins, window functions, and performance tuning
- Use Python for data manipulation, scripting, and automation for edge cases
- Work with PySpark, AWS Athena, and Google BigQuery (as source systems)
- Apply understanding of data warehousing concepts, dimensional modeling, and ELT principles
- Build CI/CD pipelines for code deployment
- Use version control systems (e.g., GitHub)
- Solve problems, analyze data, and communicate effectively
- Work independently and as part of a collaborative team in an agile environment
- Communicate effectively in fluent spoken and written English
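For context on the orchestration responsibilities above, here is a minimal sketch of an Airflow DAG that schedules a daily dbt transformation run. It is illustrative only and not part of the role description: it assumes Airflow 2.x with the BashOperator, and the DAG name, schedule, and dbt project path (/opt/analytics_dbt) are hypothetical placeholders.

```python
# Illustrative sketch only -- assumes Airflow 2.x; the DAG id, schedule,
# and dbt project path are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_transformations",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",          # run the pipeline once a day
    catchup=False,
) as dag:
    # Build dbt models against the warehouse (e.g., Snowflake), then test them.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/analytics_dbt && dbt run",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/analytics_dbt && dbt test",
    )

    # Orchestration: tests only run after the models build successfully.
    dbt_run >> dbt_test
```

In practice the schedule, commands, and task dependencies would be tailored to the specific migration pipelines and warehouse setup.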
Benefits
Access to LinkedIn Learning and Pluralsight