Principal Data Engineer

Aimpoint Digital
Summary
Aimpoint Digital, a fully remote data and analytics consultancy, is seeking a Principal Data Engineer. You will work with clients across a range of industries to design and develop end-to-end analytical solutions that improve their data-driven insights. Responsibilities include managing client engagements, contributing to practice development, and supporting business development. You will design and build analytical layers and cloud data warehouses using modern tools such as Snowflake and Databricks. The role requires strong communication skills; experience with relational databases, data pipelines, and data modeling; and proficiency in coding languages such as Python. Experience with cloud platforms and container technologies is also desirable.
Requirements
- Degree educated in Computer Science, Engineering, Mathematics, or equivalent experience
- Experience with managing stakeholders and collaborating with customers
- Strong written and verbal communication skills
- 5+ years working with relational databases and query languages
- 5+ years building production data pipelines and the ability to work across structured, semi-structured, and unstructured data
- 5+ years data modeling (e.g. star schema, entity-relationship)
- 5+ years writing clean, maintainable, and robust code in Python, Scala, Java, or similar coding languages
- Ability to independently manage an individual workstream and to lead a small team of 1-2 people
- Expertise in software engineering concepts and best practices
Responsibilities
- Become a trusted advisor working together with our clients, from data owners and analytic users to C-level executives
- Work independently or manage small teams to solve complex data engineering use-cases across a variety of industries
- Design and develop the analytical layer, building cloud data warehouses, data lakes, ETL/ELT pipelines, and orchestration jobs
- Work with modern tools such as Snowflake, Databricks, Fivetran, and dbt, and validate your skills by earning certifications
- Write code in SQL, Python, and Spark, and use software engineering best-practices such as Git and CI/CD
- Support the deployment of data science and ML projects into production
- Note: You will not be developing machine learning models or algorithms
Preferred Qualifications
- DevOps experience preferred
- Experience working with cloud data warehouses (Snowflake, Google BigQuery, AWS Redshift, Microsoft Synapse) preferred
- Experience working with cloud ETL/ELT tools (Fivetran, dbt, Matillion, Informatica, Talend, etc.) preferred
- Experience working with cloud platforms (AWS, Azure, GCP) and container technologies (Docker, Kubernetes) preferred
- Experience working with Apache Spark preferred
- Experience preparing data for analytics and following a data science workflow to drive business results preferred
- Consulting experience strongly preferred
- Willingness to travel
Benefits
- We are actively seeking candidates for full-time, remote work within the US or UK
- Atlanta-based applicants will have the opportunity to work in our headquarters in Sandy Springs, GA