Remote Full Stack Data Engineer

rockITdata

πŸ“Remote - Worldwide

Job highlights

Summary

Join rockITdata as a Full Stack Data Engineer to design and implement scalable data ingestion pipelines, integrate data from different systems, develop and maintain data storage solutions, and build software applications that expose data services.

Requirements

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field
  • Proficiency in programming languages such as Python, Java, or Scala for data engineering and software development
  • Strong understanding of database concepts, data modeling techniques, and SQL programming
  • Hands-on experience with cloud platforms such as AWS, Azure, or GCP for building and deploying data solutions
  • Knowledge of data warehousing concepts and technologies (e.g., Redshift, BigQuery, Snowflake)
  • Familiarity with version control systems (e.g., Git) and software development best practices (e.g., Agile, CI/CD)

Responsibilities

  • Design and implement scalable data ingestion pipelines to efficiently collect and process data from various sources
  • Integrate data from different systems and platforms to create unified datasets for analysis and reporting
  • Develop and maintain data storage solutions such as data lakes, data warehouses, and NoSQL databases
  • Optimize data storage and retrieval mechanisms for performance, scalability, and cost-effectiveness
  • Implement data processing workflows for cleaning, transforming, and enriching raw data into usable formats
  • Apply data transformation techniques such as ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes
  • Design and implement data models to support analytical and reporting requirements
  • Optimize data models for query performance, data integrity, and storage efficiency
  • Build software applications and APIs to expose data services and functionality to other systems and applications
  • Integrate data engineering workflows with existing software systems and platforms
  • Establish monitoring and alerting mechanisms to track the health and performance of data pipelines and systems
  • Conduct regular maintenance activities to ensure the reliability, availability, and scalability of data infrastructure
  • Document data engineering processes, architectures, and solutions to facilitate knowledge sharing and collaboration
  • Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders to understand requirements and deliver solutions

Preferred Qualifications

  • Experience building solutions for Commercial clients in the Pharma, Biotech, CPG, Retail, or Manufacturing industries
  • Experience with containerization technologies such as Docker and orchestration tools like Kubernetes
  • Knowledge of streaming data processing frameworks (e.g., Apache Flink, Apache Kafka Streams)
  • Familiarity with data governance and security practices for protecting sensitive data
  • Strong problem-solving skills and the ability to troubleshoot complex data engineering issues
  • Excellent communication skills and the ability to collaborate effectively in a team environment
