Senior Data Engineer

Dataroid

πŸ“Remote - Turkey

Summary

Join Dataroid, Turkey's fastest-growing data analytics platform, as a Senior Data Engineer. You will design and build large-scale, resilient data pipelines using frameworks such as Apache Spark, Apache Flink, and Kafka. The role requires extensive data engineering experience, proficiency in Python or Java, and familiarity with a broad range of data technologies. Dataroid offers a competitive compensation package, including private health insurance, pension plans, and remote work benefits, along with opportunities for professional development in a thriving work environment. The ideal candidate combines strong analytical and communication skills with a passion for innovation. Apply now to join a dynamic team shaping the future of data analytics.

Requirements

  • BSc/MSc/PhD in Computer Science or a related field, or equivalent work experience
  • 5+ years of experience in data engineering or a similar role
  • Strong experience with data modeling and ETL/ELT practices
  • Proficiency in one or more high-level, Python- or Java-based batch and/or stream processing frameworks such as Apache Spark, Apache Flink, or Kafka Streams
  • Strong experience with relational and non-relational data stores, key-value stores, and search engines (PostgreSQL, ScyllaDB, Druid, ClickHouse, Redis, Hazelcast, Elasticsearch, etc.)
  • Experience with data workflow orchestration and transformation tools such as Airflow or dbt
  • Deep understanding of storage formats such as Parquet, ORC, and/or Avro
  • Proficiency with Python or Java
  • Strong experience with distributed systems and concurrent programming
  • Experience with distributed storage systems like HDFS and/or S3
  • Familiarity with data lake and data warehouse solutions, including Hive, Iceberg, and/or Delta Lake
  • Proficiency in code versioning tools such as Git
  • Strong sense of analytical thinking and problem-solving skills
  • Strong verbal and written communication skills

Responsibilities

  • Design and build large-scale, resilient data pipelines using frameworks such as Apache Spark, Apache Flink, and Kafka
  • Write high-quality code that is well designed, reusable, testable, secure, and scalable
  • Collaborate with cross-functional teams
  • Discover, learn and implement new technologies

Preferred Qualifications

  • Familiarity with containerization and orchestration tools such as Docker and/or Kubernetes
  • Familiarity with generative models and a strong enthusiasm for generative AI, large language models (LLMs), and agentic systems
  • Prior experience with Scrum/Agile methodologies

Benefits

  • Private health insurance
  • Company-supported pension plans
  • Meal vouchers
  • Commute assistance
  • Remote work benefits
  • A paid day off for your birthday
  • Flexible working hours
  • Online events
  • Inspiring guest speakers
  • Office snacks
  • A culture that limits unnecessary meetings
  • Access to premier online learning platforms like Udemy, digital libraries, and tailored training programs to support your career journey
  • Happy hours
  • Workshops
  • Seasonal celebrations
  • Other events that bring us together
