Summary
Join Discord's Adaptive Data Analytics team as a highly analytical professional responsible for protecting users from scaled abuse, including spam, account compromise, and payment fraud. You will develop strategies to address existing and emerging threats, use data-driven insights, and collaborate with stakeholders across the company. The role requires expertise in data science, machine learning, and anti-abuse engineering: you will use scripting languages, SQL, and machine learning classifiers to stop bad actors, produce reports and recommendations, and own key metrics. Success in this position demands strong problem-solving, communication, and stakeholder management skills. The role offers a competitive salary and benefits package.
Requirements
- 5+ years of experience working with scripting languages like Python or JavaScript, and expert-level proficiency with SQL on large datasets
- 5+ years of experience combating scaled abuse online, with proactive and innovative approaches to handling adversarial actors and an understanding of the tradeoffs and risks inherent to this space
- Strong problem-solving, troubleshooting, and investigative skills: we believe in getting to the root of the matter instead of just addressing symptoms
- Strong stakeholder management skills: you're adept at improving processes and know how to leverage the skills of multiple disparate teams
- Ability to work in a fast-paced environment and make important decisions under pressure
- Excellent communication skills: you can efficiently and eloquently present ideas, updates, and results to both technical and non-technical stakeholders at various levels
- Experience conducting experimentation in production: formulating and testing hypotheses to assess the impact of anti-abuse automation, including both rules and machine learning models
- Familiarity with standard software engineering practices, such as spec creation, code review, version control, and writing technical documentation
- A passion for protecting users, bringing a highly empathetic, tenacious, scrappy, creative, and positive mindset to combating bad actors
Responsibilities
- Understand and drive down existing and emerging scaled abuse threats across spam- and fraud-related problem spaces through anti-abuse automation, using heuristic rules and machine learning classifiers to stop bad actors
- Think both quantitatively and qualitatively to create reports and recommendations that shape safety strategy, improving, optimizing, and innovating on our scaled anti-abuse methodology
- Propose projects, features, and other investments through well-crafted RFCs, and secure buy-in from leadership for these initiatives
- Consult with product teams on new features being introduced on Discord, and propose plans to help reduce or remove abusive behavior before it happens
- Manage metrics and incoming streams of reports and appeals from users and communities to identify developing patterns of abuse
- Work cross-functionally with other teams, including Product, Safety Engineering, ML, and CX, in response to complex and time-sensitive issues
Preferred Qualifications
- Experience creating and operationalizing innovative new metrics to bring visibility and insights into unmeasured problem spaces
- Experience developing machine learning models or performing feature engineering to improve existing models
- A strong background in statistics or causal inference
Benefits
- The US base salary range for this full-time position is $180,000 to $202,500 + equity + benefits