Data Engineer

AccelOne

πŸ“Remote - Argentina

Summary

Join AccelOne as a Data Engineer to design, build, and maintain real-time and batch data pipelines using Kafka, AWS, and Python. You will collaborate with cross-functional teams to develop data products and keep the data infrastructure reliable, scalable, and performant. The role calls for expertise in data modeling, schema design, and performance optimization, along with proficiency in Python data libraries. Experience with AWS services, event-driven architectures, and Agile development is essential. AccelOne offers remote work, professional growth opportunities, and a supportive, inclusive environment.

Requirements

  • 4+ years of experience as a data engineer in a production environment
  • Fluency in Python, with strong experience in a Linux environment
  • Strong SQL skills, with hands-on experience in Snowflake and MySQL
  • Solid understanding of data modeling, schema design, and performance optimization
  • Proficiency with Python data libraries: NumPy, Pandas, SciPy
  • Experience building and maintaining real-time data pipelines using Kafka (Apache Kafka or Amazon MSK)
  • Familiarity with AWS services including Lambda, Kinesis, S3, and CloudWatch
  • Experience working with event-driven architectures and streaming data patterns
  • Knowledge of RESTful APIs, data integration, and version control workflows (e.g., Git)
  • Comfortable in Agile development environments with strong testing and code review practices

Responsibilities

  • Design, build, and maintain scalable real-time and batch data pipelines using Kafka, AWS, and Python
  • Develop event-driven data workflows leveraging Kafka topics, Lambda functions, and downstream sinks (e.g., Snowflake, S3); see the illustrative sketch after this list
  • Work with cross-functional teams to translate business requirements into reliable data products
  • Build and maintain robust ETL processes across Snowflake and MySQL for reporting and analytics
  • Ensure high-quality data delivery by writing modular, well-tested Python code
  • Monitor data pipelines using tools such as CloudWatch, Grafana, or custom metrics
  • Contribute to technical design, architectural reviews, and infrastructure improvements
  • Maintain data integrity and scalability in a high-volume data environment
  • Continuously explore and implement best practices in data engineering, streaming, and analytics infrastructure
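
For illustration only, and not part of the role description: a minimal sketch of the kind of Kafka-to-S3 sink the responsibilities above describe, assuming the confluent-kafka and boto3 client libraries and hypothetical topic, bucket, and broker names. A production pipeline would add schema validation, error handling, retries, and monitoring.

  import time

  import boto3                         # AWS SDK for Python
  from confluent_kafka import Consumer

  TOPIC = "events"                     # hypothetical topic name
  BUCKET = "example-data-lake"         # hypothetical S3 bucket

  consumer = Consumer({
      "bootstrap.servers": "localhost:9092",  # or an Amazon MSK broker endpoint
      "group.id": "s3-sink",
      "auto.offset.reset": "earliest",
      "enable.auto.commit": False,            # commit only after a successful write
  })
  consumer.subscribe([TOPIC])
  s3 = boto3.client("s3")

  def flush(batch):
      """Write the batch to S3 as one newline-delimited object."""
      key = f"{TOPIC}/batch-{int(time.time())}.jsonl"
      s3.put_object(Bucket=BUCKET, Key=key, Body=b"\n".join(batch))

  batch = []
  try:
      while True:
          msg = consumer.poll(1.0)             # block up to 1 s for a message
          if msg is None:
              continue
          if msg.error():
              raise RuntimeError(msg.error())
          batch.append(msg.value())            # raw message bytes
          if len(batch) >= 500:
              flush(batch)
              consumer.commit(asynchronous=False)  # at-least-once delivery
              batch.clear()
  finally:
      consumer.close()

Disabling auto-commit and committing offsets only after each successful S3 write gives at-least-once delivery, the usual baseline guarantee for pipelines of this kind.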

Benefits

  • Remote Work: Enjoy flexibility and a competitive compensation package
  • Professional Growth: Access to career development opportunities, training, and certifications
  • Inclusive Environment: We foster a people-first culture where everyone can thrive professionally and personally
