Toon Calders

How to be fair? A study of label and selection bias

Mar 21, 2024
Marco Favier, Toon Calders, Sam Pinxteren, Jonathan Meyer

Beyond Accuracy-Fairness: Stop evaluating bias mitigation methods solely on between-group metrics

Jan 24, 2024
Sofie Goethals, Toon Calders, David Martens

Model-based Counterfactual Generator for Gender Bias Mitigation

Nov 06, 2023
Ewoenam Kwaku Tokpo, Toon Calders

How Far Can It Go?: On Intrinsic Gender Bias Mitigation for Text Classification

Jan 30, 2023
Ewoenam Tokpo, Pieter Delobelle, Bettina Berendt, Toon Calders

Text Style Transfer for Bias Mitigation using Masked Language Modeling

Jan 21, 2022
Ewoenam Kwaku Tokpo, Toon Calders

Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models

Dec 14, 2021
Pieter Delobelle, Ewoenam Kwaku Tokpo, Toon Calders, Bettina Berendt

Detecting and Explaining Drifts in Yearly Grant Applications

Oct 16, 2018
Stephen Pauwels, Toon Calders

Extending Dynamic Bayesian Networks for Anomaly Detection in Complex Logs

Aug 17, 2018
Stephen Pauwels, Toon Calders

Mining All Non-Derivable Frequent Itemsets

Jun 03, 2002
Toon Calders, Bart Goethals
