Machine Learning Solutions Architect
phData
📍Remote - United States
Please let phData know you found this job on JobsCollider. Thanks! 🙏
Summary
Join phData, a leading innovator in the modern data stack, and become a Solutions Architect on our Machine Learning Engineering team. We partner with major cloud data platforms and help global enterprises solve complex data challenges. As a remote-first global company, we offer a casual, collaborative work environment. In this role, you will design and implement data solutions, provide thought leadership, and work with data scientists to deploy machine learning models. You will need extensive experience in machine learning engineering, strong programming skills, and experience with various data technologies. We offer competitive compensation, excellent benefits, and opportunities for professional development.
Requirements
- At least 6 years of experience as a Machine Learning Engineer, Software Engineer, or Data Engineer
- 4-year Bachelor's degree in Computer Science or a related field
- Experience deploying machine learning models in a production setting
- Expertise in Python, Scala, Java, or another modern programming language
- The ability to build and operate robust data pipelines using a variety of data sources, programming languages, and toolsets
- Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries
- Hands-on experience in one or more big data ecosystem products/languages such as Spark, Snowflake, Databricks, etc
- Familiarity with multiple data sources (e.g. JMS, Kafka, RDBMS, DWH, MySQL, Oracle, SAP)
- Systems-level knowledge in network/cloud architecture, operating systems (e.g., Linux), and storage systems (e.g., AWS, Databricks, Cloudera)
- Production experience in core data technologies (e.g. Spark, HDFS, Snowflake, Databricks, Redshift, and Amazon EMR)
- Development of APIs and web server applications (e.g. Flask, Django, Spring)
- Complete software development lifecycle experience, including design, documentation, implementation, testing, and deployment
- Excellent communication and presentation skills; previous experience working with internal or external customers
Responsibilities
- Design and implement data solutions best suited to customer needs, spanning model inference, retraining, monitoring, and beyond, across an evolving technical stack
- Provide thought leadership by recommending technologies and solution designs for a given use case, from the application layer to infrastructure; apply team leadership and coding skills (e.g. Python, Java, and Scala) to build and operate solutions in production; and help ensure performance, security, scalability, and robust data integration
- Design and create environments for data scientists to build models and manipulate data
- Work within customer systems to extract data and place it within an analytical environment
- Learn and understand customer technology environments and systems
- Define the deployment approach and infrastructure for models and be responsible for ensuring that businesses can use the models we develop
- Demonstrate the business value of data by working with data scientists to manipulate and transform it into actionable insights and into formats suitable for deploying machine learning models
- Partner with data scientists to ensure solutions can be deployed at scale, integrated with existing business systems and pipelines, and maintained throughout their life cycle
- Create operational testing strategies, validate and test models in QA, and support implementation, testing, and deployment
- Ensure the quality of the delivered product
Preferred Qualifications
- A Master’s or other advanced degree in data science or a related field
- Hands-on experience with one or more ecosystem technologies (e.g., Spark, Databricks, Snowflake, AWS/Azure/GCP)
- Relevant side projects (e.g. contributions to an open source technology stack)
- Experience working with data science and machine learning software and libraries such as H2O, TensorFlow, Keras, scikit-learn, etc
- Experience with Docker, Kubernetes, or another containerization technology
- Experience with AWS SageMaker (or Azure ML) and MLflow
- Experience building enterprise ML models
Benefits
- Remote-First Work Environment
- Casual, award-winning small-business work environment
- Collaborative culture that prizes autonomy, creativity, and transparency
- Competitive compensation, excellent benefits, 4 weeks of PTO plus 10 holidays (and other cool perks)
- Accelerated learning and professional development through advanced training and certifications