Senior Data Software Engineer

PsiQuantum

๐Ÿ“Remote - United States, Canada

Summary

Join PsiQuantum's Quantum Applications Software Team as a Senior Data Software Engineer. You will architect, build, and maintain the scalable data pipelines that support critical quantum software applications. This hands-on role involves collaborating with scientists, gathering requirements, and building foundational data infrastructure, while championing engineering excellence across cross-functional teams. The position requires expertise in data engineering, cloud technologies, and programming languages; experience with high-performance computing and large-scale scientific computations is essential.

Requirements

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field
  • 8+ years in Data Engineering with hands-on cloud and SaaS experience
  • Proven experience designing data pipelines and workflows, preferably for high-performance or large-scale scientific computations
  • Strong knowledge of database design principles (relational and/or NoSQL) for complex or high-volume datasets
  • Proficiency in one or more programming languages commonly used for data engineering (e.g., Python, C++, Rust)
  • Hands-on experience with orchestration tools such as Prefect, Apache Airflow, or equivalent frameworks (a minimal Prefect-style sketch follows this list)
  • Hands-on experience with cloud data services, e.g., Databricks, AWS Glue/Athena, Amazon Redshift, Snowflake, or similar
  • Excellent teamwork and communication skills, especially in collaborative, R&D-focused settings
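
To make the orchestration requirement concrete, here is a minimal sketch of a Prefect-style pipeline. It is illustrative only: the task names, retry settings, and the s3:// input path are assumptions, and the same shape translates to Apache Airflow or an equivalent framework.

```python
# Minimal sketch of an orchestrated ETL pipeline with Prefect 2.x.
# Task names, retry settings, and the input path are illustrative assumptions.
from prefect import flow, task


@task(retries=2, retry_delay_seconds=30)
def extract(path: str) -> list[dict]:
    # Stand-in for reading raw scientific records from object storage.
    return [{"run_id": 1, "value": 0.42}, {"run_id": 2, "value": None}]


@task
def transform(records: list[dict]) -> list[dict]:
    # Stand-in for cleaning/normalizing records before load.
    return [r for r in records if r["value"] is not None]


@task
def load(records: list[dict]) -> None:
    # Stand-in for writing to a warehouse (Snowflake, Redshift, ...).
    print(f"loaded {len(records)} records")


@flow(name="scientific-data-pipeline")
def pipeline(path: str = "s3://example-bucket/raw_events.parquet") -> None:
    load(transform(extract(path)))


if __name__ == "__main__":
    pipeline()
```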

Responsibilities

  • Develop and refine data processing pipelines to handle complex scientific or computational datasets
  • Design and implement scalable database solutions to efficiently store, query, and manage large volumes of domain-specific data
  • Refactor and optimize existing codebases to enhance performance, reliability, and maintainability across various data workflows
  • Collaborate with cross-functional teams (e.g., research scientists, HPC engineers) to support end-to-end data solutions in a high-performance environment
  • Integrate workflow automation tools, ensuring the smooth operation of data-intensive tasks at scale
  • Contribute to best practices for versioning, reproducibility, and metadata management of data assets
  • Implement observability: deploy monitoring and logging tools (e.g., CloudWatch, Prometheus, Grafana) to preempt issues, optimize performance, and ensure SLA compliance (an instrumentation sketch follows this list)
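
As a sketch of the observability item above, the snippet below instruments a batch job with the prometheus_client Python library and exposes a /metrics endpoint for scraping. The metric names and port are assumptions; the same pattern feeds CloudWatch or Grafana-backed dashboards.

```python
# Minimal sketch of pipeline instrumentation with prometheus_client.
# Metric names and the port are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

RECORDS_PROCESSED = Counter(
    "pipeline_records_processed_total",
    "Total records processed by the pipeline",
)
BATCH_SECONDS = Histogram(
    "pipeline_batch_duration_seconds",
    "Wall-clock time spent processing one batch",
)


def process_batch(batch: list[float]) -> None:
    # Time each batch so SLA regressions surface in the histogram.
    with BATCH_SECONDS.time():
        time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work
        RECORDS_PROCESSED.inc(len(batch))


if __name__ == "__main__":
    start_http_server(8000)  # serves /metrics for Prometheus to scrape
    while True:
        process_batch([random.random() for _ in range(100)])
```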

Preferred Qualifications

  • Knowledge of and experience with containerization and orchestration tools such as Docker and Kubernetes, as well as event-driven architectures
  • Knowledge of HPC job schedulers (e.g., Slurm, LSF, or PBS) and distributed computing best practices is a plus (a job-submission sketch follows this list)
  • Experience with Infrastructure as Code (IaC) tools such as Terraform or AWS CDK
  • Experience deploying domain-specific containerization (Apptainer/Singularity) or managing GPU/ML clusters
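
As a rough illustration of the HPC-scheduler item, here is a sketch of submitting a Slurm batch job from Python and polling until it leaves the queue. The simulate.sbatch script name and polling interval are hypothetical; production code would add error handling or use a scheduler client library.

```python
# Sketch: submit a Slurm batch job and poll until it leaves the queue.
# The script name and polling cadence are illustrative assumptions.
import subprocess
import time


def submit(script: str) -> str:
    # `sbatch --parsable` prints just the job ID (optionally ";cluster").
    result = subprocess.run(
        ["sbatch", "--parsable", script],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip().split(";")[0]


def wait(job_id: str, poll_seconds: int = 30) -> None:
    # `squeue -h -j <id>` prints nothing once the job has left the queue.
    while True:
        result = subprocess.run(
            ["squeue", "-h", "-j", job_id],
            capture_output=True, text=True,
        )
        if not result.stdout.strip():
            return
        time.sleep(poll_seconds)


if __name__ == "__main__":
    job_id = submit("simulate.sbatch")  # hypothetical batch script
    print(f"submitted Slurm job {job_id}")
    wait(job_id)
    print("job finished; inspect final state with sacct")
```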

Benefits

PsiQuantum provides equal employment opportunity for all applicants and employees.
