Buck Shlegeris

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
Jan 17, 2024
Evan Hubinger, Carson Denison, Jesse Mu, Mike Lambert, Meg Tong, Monte MacDiarmid, Tamera Lanham, Daniel M. Ziegler, Tim Maxwell, Newton Cheng, Adam Jermyn, Amanda Askell, Ansh Radhakrishnan, Cem Anil, David Duvenaud, Deep Ganguli, Fazl Barez, Jack Clark, Kamal Ndousse, Kshitij Sachan, Michael Sellitto, Mrinank Sharma, Nova DasSarma, Roger Grosse, Shauna Kravec, Yuntao Bai, Zachary Witten, Marina Favaro, Jan Brauner, Holden Karnofsky, Paul Christiano, Samuel R. Bowman, Logan Graham, Jared Kaplan, Sören Mindermann, Ryan Greenblatt, Buck Shlegeris, Nicholas Schiefer, Ethan Perez

AI Control: Improving Safety Despite Intentional Subversion
Dec 14, 2023
Ryan Greenblatt, Buck Shlegeris, Kshitij Sachan, Fabien Roger

Generalized Wick Decompositions
Oct 10, 2023
Chris MacLeod, Evgenia Nitishinskaya, Buck Shlegeris

Benchmarks for Detecting Measurement Tampering
Sep 07, 2023
Fabien Roger, Ryan Greenblatt, Max Nadeau, Buck Shlegeris, Nate Thomas

Language models are better than humans at next-token prediction
Dec 21, 2022
Buck Shlegeris, Fabien Roger, Lawrence Chan, Euan McLean

Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small
Nov 01, 2022
Kevin Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, Jacob Steinhardt

Polysemanticity and Capacity in Neural Networks
Oct 04, 2022
Adam Scherlis, Kshitij Sachan, Adam S. Jermyn, Joe Benton, Buck Shlegeris

Adversarial Training for High-Stakes Reliability
May 04, 2022
Daniel M. Ziegler, Seraphina Nix, Lawrence Chan, Tim Bauman, Peter Schmidt-Nielsen, Tao Lin, Adam Scherlis, Noa Nabeshima, Ben Weinstein-Raun, Daniel de Haas, Buck Shlegeris, Nate Thomas

Supervising strong learners by amplifying weak experts
Oct 19, 2018
Paul Christiano, Buck Shlegeris, Dario Amodei
