Remote Python Developer

NTD Software

πŸ“Remote

Job highlights

Summary

Join our team as a highly skilled GCP Python Developer with expertise in data engineering, cloud-native solutions, and big data technologies. The ideal candidate has strong Python programming skills, experience with Spark, PySpark, Hadoop, and other big data tools, and excellent problem-solving abilities.

Job description

We are seeking a highly skilled GCP Python Developer with 5-7 years of experience in Python programming and data engineering. The ideal candidate has a deep understanding of production-level coding techniques, including testing, object-oriented programming (OOP), and code optimization. The role requires strong expertise in big data technologies such as Spark, PySpark, Hadoop, Hive, BigQuery, and Pub/Sub to build scalable ETL pipelines. This is an exciting opportunity for an individual with a passion for cloud-native solutions and data visualization, and a drive to solve complex technical challenges.
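For a concrete flavor of the day-to-day work, here is a minimal PySpark ETL sketch of the read-transform-write pattern the role centers on. The bucket paths, column names, and aggregation are illustrative assumptions, not details from this posting:

```python
# Minimal, hypothetical PySpark ETL sketch: read raw events, clean and
# aggregate them, and write a partitioned output table. All paths and
# column names are illustrative assumptions, not from the job posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Extract: read raw JSON events (hypothetical bucket path).
raw = spark.read.json("gs://example-bucket/raw/events/")

# Transform: drop malformed rows, derive a date column, aggregate per day.
daily = (
    raw.filter(F.col("user_id").isNotNull())
       .withColumn("event_date", F.to_date(F.col("event_ts")))
       .groupBy("event_date", "event_type")
       .agg(F.count("*").alias("event_count"))
)

# Load: write a date-partitioned Parquet table (hypothetical destination).
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "gs://example-bucket/curated/daily_event_counts/"
)

spark.stop()
```

Partitioning the output by date is a common choice here because downstream queries that filter on the partition column only scan the partitions they need.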

Responsibilities:

  • Utilize advanced data engineering skills, including expert-level SQL, data modeling, and query optimization, to build efficient ETL pipelines (see the query sketch after this list).
  • Build cloud-native solutions hands-on, preferably in Google Cloud Platform (GCP) or Azure environments.
  • Leverage data visualization and dashboarding techniques to effectively communicate complex data insights to stakeholders.
  • Debug, troubleshoot, and implement solutions for complex technical problems, ensuring high performance and scalability.
  • Continuously learn new technologies, prototype solutions, and propose innovative approaches to optimize data engineering processes.
  • Collaborate with cross-functional teams to integrate data solutions across platforms and services.
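
As a sketch of the SQL and query-optimization work named above, the snippet below runs a parameterized analytical query with the google-cloud-bigquery Python client. The project, dataset, and table names are assumptions made up for illustration:

```python
# Hypothetical sketch of running an analytical query with the
# google-cloud-bigquery client. The project, dataset, and table names
# are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# Filtering on the partition column keeps the scan (and cost) small on
# large tables; this is a common BigQuery optimization.
query = """
    SELECT event_type, COUNT(*) AS event_count
    FROM `example-project.analytics.daily_event_counts`
    WHERE event_date BETWEEN @start_date AND @end_date
    GROUP BY event_type
    ORDER BY event_count DESC
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start_date", "DATE", "2024-01-01"),
        bigquery.ScalarQueryParameter("end_date", "DATE", "2024-01-31"),
    ]
)

for row in client.query(query, job_config=job_config).result():
    print(row.event_type, row.event_count)
```

Query parameters are used instead of string interpolation so that user-supplied dates cannot alter the SQL itself.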

Requirements:

  • Proficiency in Python programming, with a strong emphasis on data engineering.
  • Extensive experience with big data technologies: Spark, PySpark, Hadoop, Hive, BigQuery, and Pub/Sub (a subscriber sketch follows this list).
  • Expertise in SQL, data modeling, and query optimization for large-scale data processing.
  • Experience with data visualization and dashboarding tools.
  • Strong debugging and problem-solving skills to resolve complex technical issues.
  • Ability to work independently, learn new technologies, and prototype innovative solutions.
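
To illustrate the Pub/Sub experience listed above, here is a minimal streaming-pull subscriber sketch; the project and subscription IDs are assumptions, not details from this posting:

```python
# Hypothetical Pub/Sub streaming-pull subscriber sketch using the
# google-cloud-pubsub client. Project and subscription IDs are
# illustrative assumptions.
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("example-project", "events-sub")

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    # Process the message payload, then ack so it is not redelivered.
    print(f"Received: {message.data!r}")
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=callback)

# Block the main thread; stop pulling after 30 seconds for this demo.
with subscriber:
    try:
        streaming_pull.result(timeout=30)
    except TimeoutError:
        streaming_pull.cancel()
        streaming_pull.result()  # block until shutdown completes
```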

Disclaimer: Please verify that this job is legitimate before applying. The application may take you to an external website that we do not own. Any actions taken during the application process are solely your responsibility; we bear no liability for any outcomes.
Please let NTD Software know you found this job on JobsCollider. Thanks! πŸ™