Summary
Join Storyblok, a growing company, and contribute to impactful data initiatives as a Data Engineer. You will design, build, and maintain scalable data pipelines using Amazon Redshift and dbt. Collaborate with data scientists, product managers, and engineers to deliver high-quality data. This role requires strong SQL skills, experience with ETL/ELT workflows, and a solid understanding of data warehousing principles. The position offers a remote work opportunity, flexible schedules, and various benefits including a remote work stipend, home office equipment, paid time off, a personal development fund, and stock options.
Requirements
- Bachelor’s degree in Computer Science, Information Systems, or related field—or equivalent professional experience
- 3+ years of experience in data engineering or a closely related field
- Strong SQL skills and experience with scripting languages (Python, Bash, JS/TS)
- Proven experience working with Amazon Redshift as a primary data warehouse
- Hands-on experience with dbt, including model design, testing, and documentation
- Solid understanding of data warehousing principles, including ETL/ELT workflows and dimensional modeling
- Experience with cloud-based infrastructure, preferably AWS
- Strong collaboration and communication skills in cross-functional settings
- Ability to balance technical excellence with practical delivery in a fast-paced environment
Responsibilities
- Design, build, and maintain reliable, scalable data pipelines for ingestion, transformation, and loading (ETL/ELT)
- Develop and optimize data models in Amazon Redshift, aligned with analytical and operational use cases
- Build and maintain dbt models to enable modular, testable, and well-documented transformations
- Implement robust data quality checks and monitoring to ensure high data integrity
- Work cross-functionally to understand data needs and deliver relevant, well-structured datasets
- Continuously refine and improve performance, cost-efficiency, and scalability of data workflows
- Document pipeline architecture, business logic, and data lineage
- Mentor junior team members and contribute to a culture of best practices in data engineering
Preferred Qualifications
- Experience in a SaaS or product-led growth environment
- Familiarity with orchestration tools like Airflow or Dagster
- Experience implementing CI/CD practices in data pipelines (e.g., GitHub Actions)
- Knowledge of data governance, access control, and security best practices
Benefits
- Monthly remote work stipend (home internet costs, electricity)
- Home office equipment package right at the start (laptop, keyboard, monitor…)
- Home office equipment upgrade (furniture, ear plugs…) or a membership to a local co-working space after your onboarding
- Sick leave benefit, parental leave, and 25 days of annual leave plus your local national holidays
- Personal development fund for courses, books, conferences, and material
- VSOP (Virtual Stock Option Plan)
- Annual international team-building trip, plus quarterly and monthly online get-togethers
- Flexible schedules, as a fully remote company with work-life balance at its core