Research Scientist, Frontier Red Team
San Francisco, CA, USA
Posted on Wednesday, July 26, 2023
We’re building a team that will hunt and mitigate extreme risks from future models.
This team will red team models to test for the most significant risks they might pose in critical areas like biosecurity, cybersecurity, and deception. We believe that clear risk demonstrations can significantly advance technical research and mitigations, as well as identify effective policy interventions that promote and incentivize safety. And if we figure out how to prevent the most serious risks, we unlock some of the most valuable applications of AI.
As part of this team, you will lead research to baseline current models and test whether future frontier capabilities could cause significant harm. Day-to-day, you may decide you need to finetune a model to see whether it becomes superhuman on an eval you’ve designed; whiteboard a threat model with a national security expert; test a new training procedure or probe how a model uses a tool; brief government, labs, and research teams. Our goal is to see the frontier before we get there.
By nature, this team will bring together an unusual combination of backgrounds. In particular, we are looking for people like these:
Science: For example, you’re a chemist who builds LLM agents to help with your research. Or, you’ve built a protein language model and enjoyed exploring its embedding space. You’re a team lead at an ML-for-drug-discovery company. You’ve built software for astronauts or materials scientists.
Cybersecurity: You’re a white hat hacker who is curious about LLMs. You’re an academic who researches RL for cybersecurity. You’ve participated in CTFs and you want to automate one.
Alignment: You’ve written detailed, concrete scenarios of significant risk from AI in a way that can be tested. You have built model evals and have ideas about how they can be better.
Do not rule yourself out if you do not fit one of those categories. It’s very likely the people we’re looking for do not fit any of the above.
This is a technical research team that sits within the Policy team and collaborates closely with the broader Anthropic Research team.
Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our customers and for society as a whole. Our interdisciplinary team has experience across ML, physics, policy, business and product.
You might be a good fit if you:
- Have solid ML-focused Python engineering and research skills, particularly around using and training models.
- Have led and executed technical research with a team.
- Are comfortable with messy experimental science. A lot of this is uncharted territory. We optimize for fast feedback loops. You may need to build your own tooling.
- Can clearly articulate and discuss the findings and importance of your work.
- Are mission-driven. You’re inspired to advance AI safety as fast as possible while deploying AI as positively as possible.
In this role, you will be:
- Leading technical research into frontier risks.
- Developing and testing models with future capabilities.
- Designing evals.
- Collaborating with outside experts.
- Briefing external stakeholders like labs and government.
Annual Salary (USD)
- The expected salary range for this position is $250k - $450k.
Hybrid policy & US visa sponsorship: Currently, we expect all staff to be in our office at least 25% of the time. We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate; operations roles are especially difficult to support. But if we make you an offer, we will make every effort to get you into the United States, and we retain an immigration lawyer to help with this.
Role-specific policy: For this role, we prefer candidates who are able to be in our office more than 25% of the time, though we encourage you to apply even if you don’t think you will be able to do that.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Compensation and Benefits*
Anthropic’s compensation package consists of three elements: salary, equity, and benefits. We are committed to pay fairness and aim for these three elements collectively to be highly competitive with market rates.
Equity - On top of this position's salary (listed above), equity will be a major component of the total compensation. We aim to offer higher-than-average equity compensation for a company of our size, and communicate equity amounts at the time of offer issuance.
Benefits - Our benefits include:
- Optional equity donation matching at a 3:1 ratio, up to 50% of your equity grant.
- Comprehensive health, dental, and vision insurance for you and all your dependents.
- 401(k) plan with 4% matching.
- 21 weeks of paid parental leave.
- Unlimited PTO – most staff take 4-6 weeks each year, sometimes more!
- Stipends for education, home office improvements, commuting, and wellness.
- Fertility benefits via Carrot.
- Daily lunches and snacks in our office.
- Relocation support for those moving to the Bay Area.
* This compensation and benefits information is based on Anthropic’s good-faith estimate for this position, in San Francisco, CA, as of the date of publication, and may be modified in the future. The level of pay within the range will depend on a variety of job-related factors, including where you place on our internal performance ladders, which is determined by factors such as past work experience, relevant education, and performance in our interviews or in a work trial.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, over work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. We're trying to build a core of knowledge and intuition about the most robustly effective innovations in AI, and so thoroughly documented null results are almost as valuable as positive discoveries. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation based in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.