
Vinitra Swamy
InterpretCC: Conditional Computation for Inherently Interpretable Neural Networks

Feb 05, 2024
Vinitra Swamy, Julian Blackwell, Jibril Frej, Martin Jaggi, Tanja Käser

MEDITRON-70B: Scaling Medical Pretraining for Large Language Models

Nov 27, 2023
Zeming Chen, Alejandro Hernández Cano, Angelika Romanou, Antoine Bonnet, Kyle Matoba, Francesco Salvi, Matteo Pagliardini, Simin Fan, Andreas Köpf, Amirkeivan Mohtashami, Alexandre Sallinen, Alireza Sakhaeirad, Vinitra Swamy, Igor Krawczuk, Deniz Bayazit, Axel Marmet, Syrielle Montariol, Mary-Anne Hartley, Martin Jaggi, Antoine Bosselut

Unraveling Downstream Gender Bias from Large Language Models: A Study on AI Educational Writing Assistance

Nov 06, 2023
Thiemo Wambsganss, Xiaotian Su, Vinitra Swamy, Seyed Parsa Neshaei, Roman Rietsche, Tanja Käser

MultiModN- Multimodal, Multi-Task, Interpretable Modular Networks

Sep 25, 2023
Vinitra Swamy, Malika Satayeva, Jibril Frej, Thierry Bossy, Thijs Vogels, Martin Jaggi, Tanja Käser, Mary-Anne Hartley

The future of human-centric eXplainable Artificial Intelligence (XAI) is not post-hoc explanations

Jul 01, 2023
Vinitra Swamy, Jibril Frej, Tanja Käser

Trusting the Explainers: Teacher Validation of Explainable Artificial Intelligence for Course Design

Dec 26, 2022
Vinitra Swamy, Sijia Du, Mirko Marras, Tanja Käser

Ripple: Concept-Based Interpretation for Raw Time Series Models in Education

Dec 13, 2022
Mohammad Asadi, Vinitra Swamy, Jibril Frej, Julien Vignoud, Mirko Marras, Tanja Käser

Bias at a Second Glance: A Deep Dive into Bias for German Educational Peer-Review Data Modeling

Sep 22, 2022
Thiemo Wambsganss, Vinitra Swamy, Roman Rietsche, Tanja Käser

Evaluating the Explainers: Black-Box Explainable Machine Learning for Student Success Prediction in MOOCs

Jul 01, 2022
Vinitra Swamy, Bahar Radmehr, Natasa Krco, Mirko Marras, Tanja Käser

Meta Transfer Learning for Early Success Prediction in MOOCs

Apr 25, 2022
Vinitra Swamy, Mirko Marras, Tanja Käser
