Senior Big Data Engineer


Rackspace Technology

πŸ’΅ $116k-$198k
πŸ“Remote - United States

Summary

Join Rackspace Technology as a Senior Big Data Engineer and design, develop, and manage scalable batch processing systems using Hadoop, Oozie, Pig, and Hive alongside GCP tools. You will write efficient, production-ready code, optimize data workflows, and implement DevOps best practices. This fully remote position requires a Bachelor's degree in a related field, 5+ years of relevant experience, and proficiency in Java or Python, along with expertise in the Apache Hadoop ecosystem and GCP. The ideal candidate is an independent, self-driven engineer with strong communication skills. Compensation varies by location and experience.

Requirements

  • Bachelor's degree in Computer Science, Software Engineering, or a related field of study
  • Experience with managed cloud services and understanding of cloud-based batch processing systems are critical
  • Proficiency in Oozie, Airflow, MapReduce, and Java
  • Strong programming skills in Java (particularly with Spark), Python, Pig, and SQL
  • Expertise in public cloud services, particularly in GCP
  • Proficiency in the Apache Hadoop ecosystem, including Oozie, Pig, Hive, and MapReduce
  • Familiarity with BigTable and Redis
  • Experienced in applying infrastructure and DevOps principles in daily work, using tools for continuous integration and continuous deployment (CI/CD) and Infrastructure as Code (IaC), such as Terraform, to automate and improve development and release processes
  • Proven experience in engineering batch processing systems at scale
  • 5+ years of experience in customer-facing software/technology or consulting
  • 5+ years of experience with β€œon-premises to cloud” migrations or IT transformations
  • 5+ years of experience building and operating solutions on GCP

Responsibilities

  • Design and develop scalable batch processing systems using technologies like Hadoop, Oozie, Pig, Hive, MapReduce, and HBase, with hands-on coding in Java or Python
  • Write clean, efficient, and production-ready code with a strong focus on data structures and algorithmic problem-solving applied to real-world data engineering tasks
  • Develop, manage, and optimize complex data workflows within the Apache Hadoop ecosystem, with a strong focus on Oozie orchestration and job scheduling
  • Leverage Google Cloud Platform (GCP) tools such as Dataproc, GCS, and Composer to build scalable and cloud-native big data solutions
  • Implement DevOps and automation best practices, including CI/CD pipelines, infrastructure as code (IaC), and performance tuning across distributed systems
  • Collaborate with cross-functional teams to ensure data pipeline reliability, code quality, and operational excellence in a remote-first environment

Benefits

  • Fully remote position
  • The anticipated starting pay range for Colorado is: $116,100 - $170,280
  • The anticipated starting pay range for the states of Hawaii and New York (not including NYC) is: $123,600 - $181,280
  • The anticipated starting pay range for California, New York City and Washington is: $135,300 - $198,440
  • Unless already included in the posted pay range and based on eligibility, the role may include variable compensation in the form of bonus, commissions, or other discretionary payments


Disclaimer: Please check that the job is real before you apply. Applying might take you to another website that we don't own. Please be aware that any actions taken during the application process are solely your responsibility, and we bear no responsibility for any outcomes.