Jennifer C. White

A Transformer with Stack Attention

May 07, 2024

Context versus Prior Knowledge in Language Models

Apr 06, 2024

Schrödinger's Bat: Diffusion Models Sometimes Generate Polysemous Words in Superposition

Nov 23, 2022

Equivariant Transduction through Invariant Alignment

Sep 22, 2022

Examining the Inductive Bias of Neural Language Models with Artificial Languages

Jun 02, 2021

A Non-Linear Structural Probe

May 21, 2021