CP researchers present their latest work on Trustworthy Machine Learning
Despite the successes of Deep Neural Networks in many domains, they have been shown to be very brittle when confronted with adversarial examples: instances with small, intentional perturbations that cause these models to make incorrect predictions.
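To make this notion concrete, the following minimal sketch (not taken from the paper; it assumes a PyTorch classifier model, inputs x scaled to [0, 1], and labels y) crafts such a perturbation with the classic Fast Gradient Sign Method:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples with the Fast Gradient Sign Method (FGSM).

    x: input batch in [0, 1], y: true labels, epsilon: perturbation budget.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that maximally increases the loss,
    # bounded by epsilon in the L-infinity norm.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Even with a tiny epsilon, the perturbed inputs are often visually indistinguishable from the originals yet flip the model's prediction.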
In their latest work, CP researchers Hamid Eghbal-zadeh, Khaled Koutini, Paul Primus, Verena Haunschmid, and Gerhard Widmer look for the causes of this brittleness by studying the inductive biases commonly used in Deep Learning.
They investigate Data Augmentation, a widely used technique in deep learning that extends the training data by exploiting inductive biases and domain expertise.
Their study reveals that although these methods were proposed to improve performance, they can introduce severe adversarial vulnerabilities.
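For readers unfamiliar with the technique under study, a generic augmentation pipeline of the kind the paper analyses might look as follows (a sketch assuming torchvision image transforms; the exact augmentations and parameters used by the authors may differ):

```python
from torchvision import transforms

# A typical augmentation pipeline encoding domain knowledge
# (invariance to small crops, flips, and colour changes).
# Illustrative only; not the authors' exact setup.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
```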
Read their full story here:
View the paper (external link)
Watch the talk (external link)
See the workshop (external link)
News
29.07.2020