Cloud DevOps Engineer


Collectivei

💵 $100k-$170k
📍 Remote - Canada

Job highlights

Summary

Join Collective[i], a company on a mission to help people and companies prosper, as a Senior Data Engineer. You will manage and optimize data infrastructure, focusing on both data engineering and DevOps responsibilities. This role requires expertise in AWS and SageMaker, with experience in Snowflake highly desirable. You will collaborate with Data Scientists to deploy machine learning models and automate deployment and monitoring. Collective[i] offers a remote work environment and is backed by experienced entrepreneurs with a proven track record.

Requirements

  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
  • 5+ years of experience in Data Engineering, with at least 3 years working in AWS environments
  • Strong knowledge of AWS services, specifically SageMaker, Lambda, Glue, and Redshift
  • Hands-on experience deploying machine learning models in AWS SageMaker
  • Proficiency in DevOps practices, including CI/CD pipelines, containerization (Docker, ECS, EKS), and infrastructure-as-code (IaC) tools like Terraform or CloudFormation
  • Advanced SQL skills and experience in building and maintaining complex ETL workflows
  • Proficiency in Python, with additional skills in Java or Scala
  • Practical experience with Airflow for DAG management and data orchestration (see the DAG sketch after this list)
  • Proficiency in version control (Git) and containerized deployment with Docker and managed services such as AWS Fargate, ECS, or EKS
  • Effective communication and a results-oriented approach
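
As a rough illustration of the Airflow experience listed above, here is a minimal sketch of a daily ETL DAG; the DAG id, task names, and schedule are hypothetical rather than taken from the posting.

```python
# Minimal sketch of a daily ETL DAG of the kind this role would own.
# The DAG id, task names, and schedule are hypothetical, not from the posting.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    """Placeholder extract step; a real task would pull from the source system."""


def load_to_snowflake(**context):
    """Placeholder load step; a real task would write into Snowflake."""


default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="orders_daily_etl",  # hypothetical pipeline name
    default_args=default_args,
    schedule_interval="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)

    extract >> load
```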

Responsibilities

  • Design, develop, and maintain ETL pipelines to ensure reliable data flow and high-quality data for analytics and reporting
  • Build and optimize data models, implementing best practices to handle large volumes of data efficiently in Snowflake
  • Create and maintain complex SQL queries and transformations for data processing and analytics
  • Conduct orchestration and scheduling through Apache Airflow
  • Document data pipelines, architecture, and processes, maintaining clear and updated technical documentation
  • Architect, build, and maintain the data and model infrastructure that supports data science on AWS, focusing on scalability, performance, and cost-efficiency
  • Collaborate with Data Scientists to deploy machine learning models on AWS SageMaker, optimizing model performance and ensuring secure deployments (see the deployment sketch after this list)
  • Automate deployment and monitoring of ML models using CI/CD pipelines and infrastructure-as-code (IaC) tools such as Terraform or AWS CloudFormation
  • Perform AWS-specific tasks (EC2, S3, RDS, VPC, CloudFormation, Auto Scaling, CodePipeline, CodeBuild, CodeDeploy, ECS/EKS, cost management, etc.)
  • Set up and manage monitoring solutions (e.g., CloudWatch) to ensure data pipelines and deployed models are operating effectively
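
For the SageMaker deployment responsibility above, a minimal sketch using the sagemaker Python SDK might look like the following; the model artifact path, IAM role, entry-point script, instance type, and endpoint name are illustrative assumptions.

```python
# Minimal sketch of deploying a trained model to a real-time SageMaker endpoint
# with the sagemaker Python SDK. The S3 artifact path, IAM role, entry-point
# script, and endpoint name are illustrative assumptions.
import sagemaker
from sagemaker.sklearn.model import SKLearnModel

session = sagemaker.Session()

model = SKLearnModel(
    model_data="s3://example-bucket/models/churn/model.tar.gz",    # hypothetical artifact
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # hypothetical role
    entry_point="inference.py",                                    # hypothetical inference script
    framework_version="1.2-1",
    sagemaker_session=session,
)

# Creates the model, endpoint config, and endpoint in one call.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="churn-model-prod",  # hypothetical endpoint name
)
print(predictor.endpoint_name)
```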
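
And for the monitoring responsibility, a minimal boto3 sketch that puts a CloudWatch alarm on latency for the hypothetical endpoint above; the metric threshold, evaluation window, and SNS topic are assumptions.

```python
# Minimal boto3 sketch alarming on average latency of the hypothetical endpoint.
# The threshold, evaluation window, and SNS topic are assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="churn-model-prod-high-latency",
    Namespace="AWS/SageMaker",
    MetricName="ModelLatency",  # reported in microseconds per invocation
    Dimensions=[
        {"Name": "EndpointName", "Value": "churn-model-prod"},
        {"Name": "VariantName", "Value": "AllTraffic"},
    ],
    Statistic="Average",
    Period=300,                 # 5-minute windows
    EvaluationPeriods=3,
    Threshold=500_000,          # ~500 ms average latency
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:data-alerts"],  # hypothetical SNS topic
)
```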

Preferred Qualifications

Experience with Snowflake is highly desirable, as our data environment is built around Snowflake for analytics and data warehousing
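
Since the data environment is described as Snowflake-centric, here is a minimal sketch of querying Snowflake from Python with snowflake-connector-python; the account, credentials, warehouse, and table names are placeholders.

```python
# Minimal sketch of running an analytics query against Snowflake with
# snowflake-connector-python. Account, credentials, warehouse, and table
# names are placeholders, not details from the posting.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",  # hypothetical account identifier
    user="ETL_SERVICE",         # hypothetical service user
    password="********",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute(
        """
        SELECT order_date, SUM(amount) AS daily_revenue
        FROM orders                         -- hypothetical table
        GROUP BY order_date
        ORDER BY order_date
        """
    )
    for order_date, daily_revenue in cur.fetchall():
        print(order_date, daily_revenue)
finally:
    conn.close()
```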

Benefits

  • $100,000 - $170,000 a year
  • Remote work
