Summary
Join Help at Home as a Data Engineer II and play a key role in enhancing our data infrastructure. Collaborate with stakeholders to design scalable, high-quality data solutions, lead the creation of flexible data frameworks, and translate business needs into technical solutions. This remote position involves building CI/CD pipelines, ensuring data security, and contributing to data-driven decision-making. You will work with various technologies including Snowflake, AWS services, and programming languages like Python and Go. Help at Home offers a competitive salary and comprehensive benefits package.
Requirements
- Cloud-first mindset
- Ability to thrive in a fast-paced, dynamic environment, delivering impactful solutions
- Knowledge of data orchestration (Airflow, MWAA) and batch/stream processing platforms (Spark, Kafka, AWS Kinesis)
- Familiarity with testing frameworks and TDD/BDD methodologies
- Self-starter with strong problem-solving skills, quick learning, and a collaborative attitude
- Strong data analysis and relational database skills, including advanced SQL and the ability to create complex queries and stored procedures
- In-depth understanding of data warehouse design principles, relational and dimensional modeling, and ETL/ELT methods
- Understanding of trunk-based development
- Working knowledge of Snowflake and JSON
- Bachelor's Degree in Computer Science, Data Science, or a related field required
- 5+ years of experience in data engineering, with expertise in cloud (AWS), data warehousing, and Snowflake
- AWS architecture, development, security, and networking experience, including hands-on work with services like S3, Lambda, Glue, EMR, CloudFormation, MWAA, Kinesis, MSK
- Demonstrated experience with automation (e.g., CI/CD, deployment pipelines)
- Proficient in Go, Python, TypeScript, or similar programming languages
- Strong experience with data warehousing models (e.g., Kimball, Inmon) and design fundamentals
Responsibilities
- Collaborates with stakeholders to understand business requirements and translates them into technical designs and solutions, conveying these designs to the team
- Leads or partners with the DevOps engineer to build CI/CD pipelines, enforce standards, and improve the SDLC and data security posture
- Builds frameworks, creates GitHub actions, and promotes Infrastructure as Code to streamline development and deployment processes
- Maps source system data structures into the data warehouse model, implementing best practices like change data capture (CDC) and slowly changing dimensions (SCD)
- Ensures releases meet defined quality standards for code, data, and security, blocking or delaying implementations when necessary
- Designs and develops flexible, reusable data solutions in alignment with data warehouse architecture standards
- Builds ingestion, integration, and sharing frameworks to improve data access and support Data Mesh initiatives
- Works closely with data engineers to ensure seamless communication and alignment across teams
- Maintains knowledge of current trends and emerging technologies, actively exploring new innovations to enhance our data infrastructure
Benefits
- Direct deposit
- Healthcare, dental, and vision insurance
- Paid time off and parental leave
- 401k
- Ongoing, in-depth training opportunities
- Meaningful work with clients who need your help
- Career growth and experience with an industry leader with 40+ years of history in a high-demand field