Student Projects in Machine Learning and Artificial Intelligence
Much of our research applies AI and Machine Learning (see further below), but we are also interested in developing, improving or understanding Deep Learning or Statistical Learning/Modeling in general: How does it work? When does it fail? What can we do when data is scarce? Projects may involve re-implementing methods from the literature, systematic experiments with designed data sets, attempts at qualitative or performance improvements, etc. These projects generally require a background in Machine Learning.
Contact (if not stated otherwise): Jan Schlüter
These student projects can be started at any time (including holidays) and can span semester boundaries.
Remark: We are open to new proposals - if you are interested in Machine Learning / Artificial Intelligence, feel free to contact us!
Topics:
Interpretable Machine Learning / Explainable Artificial Intelligence (Contact: Katharina Hoedt):
- Concept-based Explanation Methods (e.g. TCAV, Non-negative Concept Activation Vectors, Concept Bottleneck Models, ...; see the sketch after this list)
- Evaluating Explanations
- Mechanistic Interpretability
- Counterfactual Explanations
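For the concept-based methods above, a typical first step (as in TCAV) is to derive a concept direction from a trained network's internal activations. The following is a minimal, illustrative sketch, not the reference TCAV implementation: a linear probe is trained to separate a layer's activations for concept examples from those for random examples, and its normalised weight vector serves as the concept activation vector. The model, layer and input batches are placeholders.

```python
import torch
import torch.nn as nn

def concept_activation_vector(model, layer, concept_batch, random_batch, steps=200):
    """Sketch of the first TCAV step: fit a linear probe that separates a
    layer's activations for concept examples from those for random examples;
    the probe's normalised weight vector is the concept activation vector."""
    acts = []
    hook = layer.register_forward_hook(lambda m, inp, out: acts.append(out.flatten(1).detach()))
    with torch.no_grad():
        model(concept_batch)   # activations for the concept examples
        model(random_batch)    # activations for the random counter-examples
    hook.remove()
    x = torch.cat(acts)
    y = torch.cat([torch.ones(len(acts[0])), torch.zeros(len(acts[1]))])
    probe = nn.Linear(x.shape[1], 1)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
    for _ in range(steps):  # plain logistic regression on the frozen activations
        opt.zero_grad()
        loss = nn.functional.binary_cross_entropy_with_logits(probe(x).squeeze(1), y)
        loss.backward()
        opt.step()
    cav = probe.weight.detach().squeeze(0)
    return cav / cav.norm()  # unit-length concept direction in activation space
```

The full TCAV method would then measure how sensitive the model's predictions are to this direction via directional derivatives.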
Adversarial Examples (Contact: Katharina Hoedt; a minimal attack sketch follows the list):
- Adversarial Robustness
- Reasons for Adversarial Vulnerability
- Linking Adversarial Vulnerability and Explainability
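To illustrate the vulnerability these projects study, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest attacks from the literature; the model, input and label are placeholders, and a project would typically also consider stronger iterative attacks such as PGD.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=8 / 255):
    """Sketch of the fast gradient sign method: take one signed gradient
    step of size epsilon in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()     # move each pixel by +/- epsilon
    return x_adv.clamp(0.0, 1.0).detach()   # keep the result a valid image
```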
Representation Learning:
- Deep InfoMax
- Invariant information clustering (see the loss sketch after this list)
- Disentangled representation learning
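A possible entry point for the invariant information clustering topic above is to re-implement its objective: maximising the mutual information between the (soft) cluster assignments of two augmented views of the same input. A minimal sketch of that loss, assuming the network outputs softmax cluster probabilities, could look as follows.

```python
import torch

def iic_loss(p1, p2, eps=1e-8):
    """Sketch of the invariant information clustering objective: p1 and p2
    are (batch, clusters) softmax outputs for two augmented views of the
    same inputs; the loss is the negative mutual information of the joint
    cluster assignment distribution."""
    joint = p1.t() @ p2 / p1.shape[0]        # (clusters, clusters) joint distribution
    joint = (joint + joint.t()) / 2          # symmetrise: the two views are exchangeable
    joint = joint.clamp(min=eps)
    marg1 = joint.sum(dim=1, keepdim=True)   # marginal over the first view
    marg2 = joint.sum(dim=0, keepdim=True)   # marginal over the second view
    return -(joint * (joint.log() - marg1.log() - marg2.log())).sum()
```

Minimising this loss encourages cluster assignments that are consistent across augmentations while remaining balanced across clusters.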
Generative Models:
- Generating raw audio with GANs / WaveNet
- DDSP: Differentiable Digital Signal Processing
- Flow-based models (see the sketch below)
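For the flow-based models above, a natural starting point is the change-of-variables formula they are built on: an invertible map sends data to a simple base distribution, and the exact log-likelihood is the base log-density plus the log-determinant of the map's Jacobian. Below is a minimal sketch with a single learnable element-wise affine transform (chosen purely because its Jacobian is trivial); real flows such as RealNVP or Glow stack many coupling layers instead.

```python
import math
import torch
import torch.nn as nn

class AffineFlow(nn.Module):
    """Sketch of the simplest possible normalizing flow: an element-wise
    affine map z = (x - shift) * exp(-log_scale) with a standard normal
    base distribution and an exact log-likelihood."""

    def __init__(self, dim):
        super().__init__()
        self.shift = nn.Parameter(torch.zeros(dim))
        self.log_scale = nn.Parameter(torch.zeros(dim))

    def log_prob(self, x):
        z = (x - self.shift) * torch.exp(-self.log_scale)
        # log-density of z under the standard normal base distribution
        base = -0.5 * (z ** 2).sum(dim=1) - 0.5 * z.shape[1] * math.log(2 * math.pi)
        # log |det dz/dx| = -sum(log_scale) for this element-wise map
        return base - self.log_scale.sum()
```

Training then amounts to minimising `-flow.log_prob(x).mean()` over the data.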
Machine Learning Theory: