Data Engineering Developer

Workleap

πŸ“Remote - Canada

Summary

Join Workleap, a company building employee experience software, as a Data Engineering Developer. You will develop and maintain Python services, build and manage data pipelines using Airbyte and Dagster, and implement integrations with external APIs. You will collaborate with data engineers, data scientists, and application developers to ensure data accessibility, reliability, and security, contribute to internal platform components supporting AI integration, and participate in pipeline observability and testing. This hybrid role blends data engineering and application development, offering opportunities to grow your architectural instincts and contribute to a modern data platform.

Requirements

  • Strong experience with Python and object-oriented programming
  • Proficient in HTTP API consumption and integration, including handling pagination, authentication, and rate limits (an illustrative sketch follows this list)
  • Comfortable working with modern Python tooling and conventions, including FastAPI and Pydantic
  • Hands-on experience with Airbyte and Dagster, used as part of structured pipeline development
  • Good understanding of data modeling, schema evolution, and API/data contract management
  • Solid awareness of security practices in a data engineering context (e.g., RBAC, credential management, scoped execution)
  • Comfortable with Git and collaborative software engineering practices
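
To give a concrete feel for the API-integration and Pydantic points above, here is a minimal sketch of consuming a cursor-paginated HTTP API with bearer-token authentication, basic rate-limit handling, and response validation. The endpoint, field names, and pagination scheme are illustrative assumptions, not part of Workleap's actual stack.

    from __future__ import annotations

    import time
    from typing import Iterator, Optional

    import requests
    from pydantic import BaseModel


    class Employee(BaseModel):
        """Validates one record returned by a hypothetical HR API."""
        id: int
        email: str
        team: Optional[str] = None


    def fetch_employees(base_url: str, token: str) -> Iterator[Employee]:
        """Walk a cursor-paginated endpoint, honouring auth and rate limits."""
        session = requests.Session()
        session.headers["Authorization"] = f"Bearer {token}"

        url: Optional[str] = f"{base_url}/employees"
        while url:
            response = session.get(url, timeout=30)
            if response.status_code == 429:
                # Back off for the period the server asks for, then retry.
                time.sleep(int(response.headers.get("Retry-After", "5")))
                continue
            response.raise_for_status()

            payload = response.json()
            for record in payload.get("results", []):
                yield Employee.model_validate(record)

            # Follow the next-page link until the API stops providing one.
            url = payload.get("next")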

Responsibilities

  • Develop and maintain Python services, libraries, and utilities to support data workflows and platform-level orchestration
  • Use Airbyte and Dagster to build and manage ingestion and transformation pipelines (a minimal Dagster sketch follows this list)
  • Implement reliable integrations with external APIs and cloud services
  • Apply object-oriented design principles and leverage modern Python tools and conventions (e.g., FastAPI, Pydantic) to write maintainable, production-grade code
  • Collaborate on data modeling, schema design, and contract management between services
  • Ensure security, including access controls, secrets management, and pipeline isolation
  • Contribute to internal platform components that support AI integration, including retrieval-augmented generation (RAG) pipelines and data preparation for embeddings (a small chunking sketch follows this list)
  • Participate in pipeline observability and testing, including logging, versioning, and validation strategies
  • Work cross-functionally to deliver robust, accessible data that powers our SaaS analytics and AI features
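
As a taste of the orchestration work described above, below is a minimal Dagster sketch with one ingestion asset and one downstream transformation asset. The asset names and the stubbed records are illustrative assumptions; a real pipeline would read from Airbyte-synced sources rather than an in-memory list.

    from dagster import AssetExecutionContext, asset


    @asset
    def raw_events(context: AssetExecutionContext) -> list[dict]:
        # Ingestion step: in practice this would pull from a source system.
        records = [{"user_id": 1, "event": "login"}]
        context.log.info(f"Ingested {len(records)} records")
        return records


    @asset
    def cleaned_events(raw_events: list[dict]) -> list[dict]:
        # Transformation step: Dagster wires the dependency via the parameter name.
        return [r for r in raw_events if "user_id" in r and "event" in r]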
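
The AI-integration bullet above mentions data preparation for embeddings; here is a hedged sketch of one small piece of that work, splitting documents into overlapping chunks before they are sent to an embedding model. The chunk size and overlap are placeholder values, not a prescribed configuration.

    from __future__ import annotations


    def chunk_document(text: str, max_chars: int = 500, overlap: int = 50) -> list[str]:
        """Split a document into overlapping chunks sized for an embedding model."""
        chunks: list[str] = []
        start = 0
        while start < len(text):
            end = min(start + max_chars, len(text))
            chunks.append(text[start:end])
            if end == len(text):
                break
            # Overlap chunks slightly so context is not cut mid-thought.
            start = end - overlap
        return chunks


    if __name__ == "__main__":
        sample = "Workleap builds employee experience software. " * 40
        pieces = chunk_document(sample)
        print(f"{len(pieces)} chunks ready for embedding")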

Preferred Qualifications

  • Exposure to data warehouse technologies (e.g., Snowflake, BigQuery, Redshift)
  • Experience with infrastructure-as-code (e.g., Terraform, Pulumi)
  • Familiarity with containerized environments (Docker, Kubernetes)
  • Understanding of RAG pipelines, vector search, or AI feature engineering workflows
  • Interest in dbt, Snowpark, or related data transformation tooling

