Mark E. Nunnally
LLMs Understand Glass-Box Models, Discover Surprises, and Suggest Repairs

Aug 07, 2023
Benjamin J. Lengerich, Sebastian Bordt, Harsha Nori, Mark E. Nunnally, Yin Aphinyanaphongs, Manolis Kellis, Rich Caruana


Estimating Discontinuous Time-Varying Risk Factors and Treatment Benefits for COVID-19 with Interpretable ML

Nov 15, 2022
Benjamin Lengerich, Mark E. Nunnally, Yin Aphinyanaphongs, Rich Caruana


Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values

Jun 30, 2022
Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark E. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana
