Forward Deployed Engineer

Turing
Summary
Join Turing, a rapidly growing AI company, as a Forward Deployed Engineer (FDE) working on cutting-edge Generative AI deployments. Partner directly with customers to design, build, and deploy intelligent applications using Python, LangChain/LangGraph, and large language models, bridging engineering excellence with customer empathy to solve real-world problems. This hands-on role involves leading the end-to-end deployment of GenAI applications, architecting scalable solutions, acting as a technical advisor, collaborating across teams, writing clean code, operating on major cloud platforms, and continuously improving deployment processes. The ideal candidate has significant software engineering experience, expertise in Python, LangChain/LangGraph, and SQL, and a strong understanding of cloud services. This is a full-time remote opportunity with flexible working hours.
Requirements
- 5–8+ years of experience in software engineering or solutions engineering, ideally in a customer-facing capacity
- Proven expertise in Python, LangChain, LangGraph, and SQL
- Deep experience with engineering architecture, including APIs, microservices, and event-driven systems
- Demonstrated success in designing and deploying GenAI applications into production environments
- Strong proficiency with cloud services such as AWS, GCP, and/or Azure
- Excellent communication skills, with the ability to translate technical complexity into clear, customer-facing narratives
- Comfortable working autonomously and managing multiple deployment tracks in parallel
- Bachelor's, Master's, or Ph.D. in Computer Science, Engineering, or a related technical field
Responsibilities
- Lead the end-to-end deployment of GenAI applications for customers, from discovery to delivery
- Architect and implement robust, scalable solutions using Python, LangChain/LangGraph, and LLM frameworks
- Act as a trusted technical advisor to customers, understanding their needs and crafting tailored AI solutions
- Collaborate closely with product, ML, and engineering teams to influence roadmap and core platform capabilities
- Write clean, maintainable code and build reusable modules to streamline future deployments
- Operate across cloud platforms (AWS, Azure, GCP) to ensure secure, performant infrastructure
- Continuously improve deployment tools, pipelines, and methodologies to reduce time-to-value
Preferred Qualifications
- Familiarity with CI/CD, infrastructure-as-code (Terraform, Pulumi), and container orchestration (Docker, Kubernetes)
- Background in LLM fine-tuning, retrieval-augmented generation (RAG), or AI/ML operations
- Previous experience in a startup, consulting, or fast-paced customer-obsessed environment
Benefits
- Amazing work culture (Super collaborative & supportive work environment; 5 days a week)
- Awesome colleagues (Surround yourself with top talent from Meta, Google, LinkedIn, and other leading companies, as well as people with deep startup experience)
- Competitive compensation
- Flexible working hours
- Full-time remote opportunity