Senior Data Engineer

Aimpoint Digital

πŸ“Remote - Colombia

Summary

Join Aimpoint Digital, a fully remote data and analytics consultancy, and become a trusted advisor to clients across various industries. You will work independently and collaboratively to design and develop end-to-end analytical solutions, building cloud data warehouses, data lakes, and ETL/ELT pipelines. Leverage modern tools like Snowflake, Databricks, and dbt, while writing code in SQL, Python, and Spark. As a Senior Data Engineer, you will manage individual workstreams, contribute to business development, and introduce innovative ideas. This role requires strong communication skills and extensive experience in data engineering, data modeling, and software engineering best practices.

Requirements

  • Degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent experience
  • Experience managing stakeholders and collaborating with customers
  • Strong written and verbal communication skills required
  • 3+ years working with relational databases and query languages
  • 3+ years building production data pipelines and the ability to work across structured, semi-structured, and unstructured data
  • 3+ years of data modeling (e.g., star schema, entity-relationship)
  • 3+ years writing clean, maintainable, and robust code in Python, Scala, Java, or similar programming languages
  • Ability to manage an individual workstream independently
  • Expertise in software engineering concepts and best practices

Responsibilities

  • Become a trusted advisor working with our clients, from data owners and analytics users to C-level executives
  • Work independently and as part of a small team to solve complex data engineering use cases across a variety of industries
  • Design and develop the analytical layer, building cloud data warehouses, data lakes, ETL/ELT pipelines, and orchestration jobs
  • Work with modern tools such as Snowflake, Databricks, Fivetran, and dbt, and earn certifications to validate your skills
  • Write code in SQL, Python, and Spark, and follow software engineering best practices such as Git and CI/CD
  • Support the deployment of data science and ML projects into production
  • Note: You will not be developing machine learning models or algorithms

Preferred Qualifications

  • DevOps experience
  • Experience with cloud data warehouses (Snowflake, Google BigQuery, AWS Redshift, Microsoft Synapse)
  • Experience with cloud ETL/ELT tools (Fivetran, dbt, Matillion, Informatica, Talend, etc.)
  • Experience with cloud platforms (AWS, Azure, GCP) and container technologies (Docker, Kubernetes)
  • Experience with Apache Spark
  • Experience preparing data for analytics and following a data science workflow to drive business results
  • Consulting experience (strongly preferred)
  • Willingness to travel
