Institute of Computer Graphics

Open Student Topics

  • Potential students need to send the following information when applying for a project: (1) a motivation letter stating their skills and background with respect to the project, as well as information about their work commitments and physical presence in Linz, and (2) transcripts and recommendation letters (optional). Note that we only assign topics to students who are enrolled in KUSSS for the corresponding course (i.e., practicum, seminar, thesis) of the institute. Dropping out after topic assignment will result in a negative grade.


    Registration period: Tue 03.09.24 (07:00) - Tue 15.10.24 (23:59)

Sound augmentation for digitally enhanced play: adding sound to cartographic video game maps

Topics: cartographic game maps, sound, video game
Supervision: Claire Dormann and Günter Wallner
Contact: claire.dormann(at)jku.at
Type: BSc Thesis, MSc Practicum, MSc Thesis

Description

Sound is an important design aspect of both cartography and video games. However, it is not typically used in conjunction with game maps. In cartography, auditory icons are utilised to translate or complement visual data. Sound could likewise be used to convey gameplay information on a map. In games, sound is designed to create atmosphere and/or trigger emotions. The ambient music in Assassin's Creed Mirage, for example, creates a mood evocative of ancient Middle Eastern culture and is designed to absorb players in the game. While players explore a game map, they could hear the many sounds of a game city or of a region (part of a world map).

Tasks

The first step of this project consists of exploring the use of sound (including music and sound effects) for cartographic game maps. Next, examples will be selected and implemented for a specific game map. The sonified map(s) will be produced as a modded game map, for example using World of Warcraft tools.
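
To make the task concrete, here is a minimal sketch (Python with pygame, purely illustrative; the actual implementation would live inside the chosen game or modding toolkit) of how ambient sounds could be tied to map regions and faded with cursor distance. The region coordinates and sound files are hypothetical placeholders.

    # Illustrative sketch only: fade region ambiences by cursor distance.
    import math
    import pygame

    pygame.mixer.init()

    # Hypothetical map regions, each with a map position and an ambient loop.
    REGIONS = [
        {"pos": (120, 340), "sound": pygame.mixer.Sound("harbor_ambience.ogg")},
        {"pos": (610, 205), "sound": pygame.mixer.Sound("market_crowd.ogg")},
    ]
    AUDIBLE_RADIUS = 250.0  # map units within which a region can be heard

    for region in REGIONS:
        region["sound"].play(loops=-1)  # loop forever; volume controls audibility

    def update_ambience(cursor_pos):
        """Fade each region's ambience in/out based on cursor distance."""
        for region in REGIONS:
            dist = math.hypot(cursor_pos[0] - region["pos"][0],
                              cursor_pos[1] - region["pos"][1])
            # Linear falloff: full volume at the region center, silent beyond radius.
            region["sound"].set_volume(max(0.0, 1.0 - dist / AUDIBLE_RADIUS))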

Requirements:

  • Interest in sound
  • Good programming skills
  • Interest in game development and game map design
  • Knowledge of World of Warcraft

Call of Duty: Warzone Movement Analytics and Visualization

Topics: game analytics, visualization
Supervision: Günter Wallner
Contact: guenter.wallner(at)jku.at
Type: MSc Thesis, MSc Practicum

Description
Activision recently released one of the largest available datasets from an AAA video game. The Call of Duty: Caldera dataset includes the full map geometry as well as player data from 1 million players, in particular end points of play and trajectories. Navigation is a key element of video games, and understanding movement patterns is of central interest for level design. Such analysis can benefit from visualizations that display the data directly within the level environment.

Tasks
The goal of this project is to first analyze and document the dataset and to set up a research environment for working with it. Next, the included trajectory data should be analyzed and a visualization method for displaying navigational patterns developed. This will involve researching appropriate methods, such as the application of abstraction and/or aggregation methods (e.g., graph-based approaches).
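
As a rough illustration of the kind of aggregation meant here, the following Python sketch counts transitions between grid cells to turn raw trajectories into a weighted movement graph. The trajectory format is an assumption; the actual Caldera data layout needs to be studied first.

    # Sketch: aggregate (x, y) trajectories into a cell-to-cell movement graph.
    from collections import Counter

    CELL = 50.0  # grid cell size in map units (arbitrary choice)

    def to_cell(p):
        return (int(p[0] // CELL), int(p[1] // CELL))

    def aggregate(trajectories):
        """Count transitions between grid cells across all trajectories."""
        edges = Counter()
        for traj in trajectories:
            cells = [to_cell(p) for p in traj]
            for a, b in zip(cells, cells[1:]):
                if a != b:  # ignore movement within the same cell
                    edges[(a, b)] += 1
        return edges  # edge weight = how often players moved from cell a to b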

Requirements

  • Good programming skills
  • Good data processing skills
  • Interest in game development and design
  • Interest and/or experience with visualization
  • Willingness to crack problems and to show self-initiative
  • Knowledge of Call of Duty is an advantage

Links:

Caldera



Understanding the Transition to Commercial Dashboarding Systems

Topics: visualization, dashboarding systems, qualitative study
Supervision: Conny Walchshofer, Marc Streit
Contact:  vds-lab(at)jku.at
Type:  BSc Practical Work, and possibly a subsequent BSc Thesis


Many long-established, traditional manufacturing businesses are becoming more digital and data-driven to improve their production. These companies are embracing visual analytics in these transitions through their adoption of commercial dashboarding systems such as Tableau and MS Power BI.

However, transitioning work practices to a new software environment is often confronted with technical as well as socio-technical challenges [Walchshofer et al. 2023]. Walchshofer et al. conducted an interview study with 17 workers from an industrial manufacturing company and reported observations such as hidden/underappreciated labor or a visualization knowledge gap leading to discomfort with interacting with dashboards. As this study represents just a snapshot in time of a lengthy transition process, this student topic focuses on a follow-up study to understand how these challenges change over time.

The goal of this project is to contribute to the design and execution of a qualitative study. No programming knowledge is required. In collaboration with Linköping University (Sweden), we will develop questions for an interview series and conduct the interviews. Audio recordings of these interviews then need to be transcribed and analyzed, and the findings compared with the results of the study by Walchshofer et al.

Changes of observations when transitioning to a commercial dashboarding system



Lost My Way

Topics: game development, educational games, web programming
Supervision: Günter Wallner
Contact: guenter.wallner(at)jku.at
Type: MSc Thesis, MSc Practicum

Description

Games have been shown to be engaging and valuable tools for education. As such, many educational games have been developed to date for a variety of educational topics, ranging from language learning to mathematics. At the same time, educational games are challenging to design, as they need to effectively communicate educational content while being entertaining to play. The goal of this work is to a) convert a geometry-learning game previously developed in Adobe Flash to HTML5 and deploy it, and b) conduct a user study to ascertain its value.

Tasks

Work will include converting the game from Adobe Flash (source code and assets will be provided) to HTML5. The game needs to include logging facilities so that the solutions to the levels can be reconstructed. The converted game needs to be deployed on a web server (space will be provided) and subsequently evaluated with players (e.g., via an online study).
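
For orientation, a minimal sketch of what the server-side logging could look like (Flask, with a hypothetical /log endpoint and JSON event format; the real schema should be designed around what is needed to reconstruct level solutions):

    # Sketch: append-only JSON-lines logging endpoint for game events.
    import json
    import time
    from flask import Flask, request

    app = Flask(__name__)

    @app.post("/log")
    def log_event():
        event = request.get_json(force=True)  # e.g., {"level": 3, "move": ...}
        event["server_time"] = time.time()
        # One JSON object per line so level solutions can be replayed later.
        with open("events.jsonl", "a") as f:
            f.write(json.dumps(event) + "\n")
        return {"status": "ok"}

    if __name__ == "__main__":
        app.run(port=5000)  # development server for local testing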

Requirements
● Good programming skills
● Knowledge of HTML5 and web development
● Knowledge of Adobe Flash is an advantage
● Interest in game development and design

An image of the game Lost My Way, showing a grid, a road, and sheep



Bottom-Up Synthetic Aperture Imaging

Topics: image processing, machine learning, forest ecology, climate change
Supervision: Oliver Bimber, Mohamed Youssef
Contact: oliver.bimber(at)jku.at
Type: BSc Practicum, MSc Practicum, BSc Thesis, MSc Thesis

Airborne Optical Sectioning (AOS) is a wide synthetic-aperture imaging technique that employs manned or unmanned aircraft to sample images within large (synthetic aperture) areas from above occluded volumes, such as forests. Based on the poses of the aircraft during capture, these images are computationally combined into integral images by light-field technology. These integral images suppress strong occlusion and reveal targets that remain hidden in single recordings. AOS is being used for search and rescue, wildfire detection, wildlife observation, archaeology, and forest ecology.

In this project we want to explore the reverse principle. Instead of capturing and processing top-down aerial recordings of forests with a drone, we capture bottom-up recordings of forests from the ground. The goal is to reconstruct forest structure in the lower layers despite occlusion, and to determine vegetation indices that provide information about vegetation health. We want to use wearable (helmet-mounted) and fixed-position multispectral cameras in forests. This project requires physical presence in Linz, as field experiments with special equipment are involved.
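
As a conceptual sketch (not the institute's actual AOS pipeline), the core of light-field integration is averaging images that have been registered to a common focal plane, and a standard vegetation index such as NDVI can then be computed from the multispectral channels:

    import numpy as np

    def integrate(registered_images):
        """Average images already warped to a common focal plane: the in-focus
        layer reinforces while out-of-focus occluders blur away."""
        stack = np.stack([img.astype(np.float32) for img in registered_images])
        return stack.mean(axis=0)

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index from multispectral bands."""
        return (nir - red) / (nir + red + 1e-6)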

Details on AOS:
https://github.com/JKU-ICG/AOS/

Bottom-Up Synthetic Aperture Imaging



Single-Shot Learning for Drone Swarms

Topics: classification, machine learning, autonomous drone swarms
Supervision: Oliver Bimber, Rakes Nathan
Contact: oliver.bimber(at)jku.at
Type: BSc Practicum, MSc Practicum, BSc Thesis, MSc Thesis

Airborne Optical Sectioning (AOS) is a wide synthetic-aperture imaging technique that employs manned or unmanned aircraft to sample images within large (synthetic aperture) areas from above occluded volumes, such as forests. Based on the poses of the aircraft during capture, these images are computationally combined into integral images by light-field technology. These integral images suppress strong occlusion and reveal targets that remain hidden in single recordings. AOS is being used for search and rescue, wildfire detection, wildlife observation, archaeology, and forest ecology.

We have developed our own autonomous drone swarm. It is capable of detecting and tracking anomalies on the ground despite heavy occlusion caused by vegetation, such as forest. First field experiments in cooperation with the German Aerospace Center have been carried out. In this project, we want to explore single-shot learning for classification (to be used in addition to anomaly detection).
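
One common realization of single-shot classification, shown here as a hedged sketch rather than our existing pipeline, keeps a single reference embedding per class and assigns new detections to the nearest reference; the embeddings would come from any pretrained feature extractor, which is assumed here:

    import numpy as np

    def classify(query_embedding, references):
        """references: dict mapping class name -> one reference embedding."""
        def cosine(a, b):
            return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        # Pick the class whose single reference is most similar to the query.
        return max(references, key=lambda name: cosine(query_embedding, references[name]))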

Details on AOS:
https://github.com/JKU-ICG/AOS/

Details on the JKU Drone Swarm:
https://www.youtube.com/playlist?list=PLgGsWgs4hgaMXzo7QhSwNRctz9JTvh1JM
https://github.com/JKU-ICG/AOS/tree/stable_release/AOS%20for%20Drone%20Swarms

Single-Shot Learning for Drone Swarms



Drone-Based Wildfire Detection

Topics: drones, image processing, classification, machine learning
Supervision: Oliver Bimber, Mohamed Youssef
Contact: oliver.bimber(at)jku.at
Type: BSc Practicum, MSc Practicum, BSc Thesis, MSc Thesis

Airborne Optical Sectioning (AOS) is a wide synthetic-aperture imaging technique that employs manned or unmanned aircraft to sample images within large (synthetic aperture) areas from above occluded volumes, such as forests. Based on the poses of the aircraft during capture, these images are computationally combined into integral images by light-field technology. These integral images suppress strong occlusion and reveal targets that remain hidden in single recordings.

In this project we want to extend our AOS simulator to simulate wildfire as realistically as possible. The simulation data can then be used to train classifiers that detect wildfire early and under densely occluded conditions.

Concretely, we want to explore AI-based image generation to produce many synthetic ground-fire images from a limited number of real ones. These images are then used to train a classifier. This project has two main aspects, suitable for two students: the first is the generation of synthetic ground-fire images from real ones using, for instance, autoencoder or vision transformer architectures together with our forest simulator, which allows the generation of an extensive training dataset; the second is the training and evaluation of classifiers (for instance, using semantic labeling) to detect ground-fire patterns automatically.
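
For the second aspect, training could start from something as simple as the following PyTorch sketch (folder layout, architecture, and hyperparameters are illustrative assumptions, not the project's prescribed setup):

    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Hypothetical layout: synthetic_fire_dataset/fire/ and .../no_fire/
    tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    data = datasets.ImageFolder("synthetic_fire_dataset/", transform=tf)
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

    # Fine-tune a pretrained backbone for binary fire / no-fire classification.
    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, 2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for images, labels in loader:  # one epoch shown
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()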



Details on AOS:
https://github.com/JKU-ICG/AOS/

Wildfire. © Image source: https://www.thomasnet.com/insights/3-ways-technology-can-fight-the-australian-wildfires/



Visualization and Explanation of ML-Based Indication Expansion in Knowledge Graphs

Topics: visualization, explainable AI, knowledge graphs, healthcare
Supervision: Christian Steinparz, Marc Streit
Contact: vds-lab(at)jku.at
Type:  MSc Practical Work, MSc Thesis

Indication expansion involves identifying new potential uses or "indications" for existing drugs. One way to find new indications is to analyze the connections and relationships within pharmaceutical data, including experimental data and literature. Our collaboration partner employs a machine learning model to predict new links in their knowledge graphs, intending to discover relationships between current drugs and diseases for which they have not been previously used. The project focuses on effectively visualizing and explaining these newly predicted links, enabling domain experts to determine the potential of further research into each drug's new possible uses.
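
To give a feel for the visualization side, here is a small hedged sketch with networkx/matplotlib that renders a model-predicted drug-disease link dashed next to the known facts that could explain it; all entities and the predicted edge are made up for illustration:

    import matplotlib.pyplot as plt
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([("DrugA", "GeneX"), ("GeneX", "DiseaseY")])  # known facts
    predicted = ("DrugA", "DiseaseY")  # new link proposed by the ML model
    G.add_edge(*predicted)

    pos = nx.spring_layout(G, seed=42)
    nx.draw(G, pos, with_labels=True, node_color="lightgray")
    # Draw the predicted edge dashed so experts can distinguish what is
    # inferred and inspect the existing paths (DrugA-GeneX-DiseaseY)
    # that may explain it.
    nx.draw_networkx_edges(G, pos, edgelist=[predicted], style="dashed",
                           edge_color="red")
    plt.show()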

Relevant Paper: Knowledge Graphs for Indication Expansion: An Explainable Target-Disease Prediction Method, Gurbuz et al. 2022

Indication Expansion. Image adapted from the presentation of Knowledge Graphs for Indication Expansion: An Explainable Target-Disease Prediction Method (Gurbuz et al. 2022): https://www.youtube.com/watch?v=ADszHqJhr2Y&ab_channel=BiorelateLtd.



LLM-Controlled Drones and Drone Swarms

Topics: large language models, prompt engineering, autonomous drones and drone swarms
Supervision: Oliver Bimber, Rakes Nathan
Contact: oliver.bimber(at)jku.at
Type: BSc Practicum, MSc Practicum, BSc Thesis, MSc Thesis

We have developed our own autonomous drone swarm. It is capable of detecting and tracking anomalies on the ground despite heavy occlusion caused by vegetation, such as forest. First field experiments in cooperation with the German Aerospace Center have been carried out. Previously, we have handcrafted control and vision models to drive the swarm. In this project, we want to explore how much of these models can be replaced by solutions generated by large language models. We have already integrated an LLM into our groundstation architecture that controls single drones and whole swarms. Based on input prompts, it automatically executes code generated by the LLM to operate the drones. We want to explore how complex the tasks can be that LLMs carry out for operating drones and swarms autonomously.

This project can be split into several subtasks for multiple students: One is to explore the computer vision capabilities of LLMs (especially in the context of spatial awareness). For instance, can the drones' video images be interpreted directly by the LLM to support subsequent control actions? Another is to explore the capability of LLMs to devise path-planning strategies themselves. Examples include decisions on splitting and merging swarms, collision avoidance, tracking targets, etc.
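
Conceptually, the prompt-to-execution loop looks like the following Python sketch. The groundstation API names (fly_to, get_positions) are hypothetical placeholders, and executing LLM-generated code must obviously be sandboxed and validated in any real system:

    def control_step(llm, groundstation, user_prompt):
        # Ask the LLM to write code against a small, whitelisted drone API.
        code = llm.generate(
            "Write Python using fly_to(drone_id, x, y, z) and get_positions() "
            f"to accomplish the following task: {user_prompt}"
        )
        # Expose only the whitelisted API to the generated code.
        api = {"fly_to": groundstation.fly_to,
               "get_positions": groundstation.get_positions}
        exec(code, {"__builtins__": {}}, api)  # sketch only; not safe as-is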
 

Details on the JKU Drone Swarm and Groundstation:
https://www.youtube.com/playlist?list=PLgGsWgs4hgaMXzo7QhSwNRctz9JTvh1JM
https://github.com/JKU-ICG/AOS/tree/stable_release/AOS%20for%20Drone%20Swarms
https://github.com/JKU-ICG/AOS/tree/stable_release/AOS%20Groundstation

LLM-controlled drone (first simple example):
https://www.youtube.com/watch?v=6r2Dofud2_A

LLM-Controlled Drones and Drone Swarms

Visual Analysis of Medical Patient Event Sequences From the MIMIC-IV Database

Topics: medical data, healthcare, patient data, data visualization, event sequence visualization
Relevant Paper: https://www.nature.com/articles/s41597-022-01899-x
Supervision: Christian Steinparz, Marc Streit
Contact: vds-lab(at)jku.at
Type:  BSc Practical Work, potential subsequent BSc Thesis, MSc Practical Work, potential subsequent MSc Thesis

We are collaborating with the Nanosystems Engineering Lab (NSEL) at ETH Zürich. The researchers at NSEL are analyzing data on anastomotic leakage, a serious complication where the surgical connection between two body parts, such as blood vessels or intestines, fails and leaks.

In this project, you will use the MIMIC-IV database, which includes hospital patient data related to procedures and diagnoses associated with anastomotic leakage. You will create a suitable data structure, extract the relevant data from MIMIC-IV, and develop visualizations to analyze certain aspects of the data. For instance, the visualization should answer questions such as “What is the ratio of procedures of type A that lead to an anastomotic leakage diagnosis of type B?”

An example patient event sequence looks like this:
Patient 1: admission → discharge → admission → procedure “Other small to large intestinal anastomosis” which could trigger leakage → diagnosis “digestive system complications, not elsewhere classified” → diagnosis “Abscess of intestine”
Each event has a small amount of additional metadata, such as the date.
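
As a hedged sketch of the kind of question the visualization should answer, the following Python function computes the procedure-to-diagnosis ratio from event sequences shaped like the example above; the (type, label) tuple format is an assumption, not the provided pipeline's actual schema:

    def ratio(patients, procedure_a, diagnosis_b):
        """Fraction of patients with procedure A later followed by diagnosis B."""
        had_a, a_then_b = 0, 0
        for events in patients:  # events: time-ordered list of (type, label)
            first_a = next((i for i, (t, label) in enumerate(events)
                            if t == "procedure" and label == procedure_a), None)
            if first_a is None:
                continue
            had_a += 1
            if any(t == "diagnosis" and label == diagnosis_b
                   for t, label in events[first_a + 1:]):
                a_then_b += 1
        return a_then_b / had_a if had_a else 0.0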

We have already created, and will provide, a JSON data file together with a data processing pipeline and a basic Sankey diagram for the patient event data, which you can refine and build on.

Basic Observable Notebook For Visualizing Patient Event Data

MIMIC-IV Sankey Diagram

Reimplementing Graph Visualization for Better Design and Interactivity

Topics: knowledge graphs, data visualization, node-link diagrams
Supervision: Christian Steinparz, Marc Streit
Contact: vds-lab(at)jku.at
Type: BSc Practical Work or MSc Practical Work

In a collaboration with Boehringer Ingelheim, we developed a tool for exploring constraint violations in knowledge graphs. It is a web app built in React that consists of multiple coordinated views. Each of these views provides a different visual representation of, and interaction with, the data, and interactions in one view update all the others.
One of these views is a node-link diagram (NLD) that visualizes the ontology of the graph data. This NLD is currently implemented using cytoscape.js. In this project you would rework the NLD as a cleaner d3.js implementation with a more aesthetically pleasing and interactive feel and better visual encoding. The rework should preserve the NLD's integration in the React web app and its interaction with the other views.
A small example of a simple interactive d3-based NLD implementation can be found in this observable notebook.

NLD Reimplementation Figure

Automating Visualization Configurations to Show/Hide Relevant Aspects of the Data

Topics: data visualization, degree of interest, projection, trajectory data, user intent, human-computer interaction (HCI)
Supervision: Christian Steinparz, Marc Streit
Contact: vds-lab(at)jku.at
Type: MSc Seminar, Practical work, and potential follow-up MSc Thesis

We are developing a modular specification for calculating the Degree of Interest (DoI) in visualizing high-dimensional trajectory data. DoI defines interest in visual elements based on user input, hiding less relevant elements and emphasizing the most interesting ones. 

Using chess data as an example, each game is a trajectory, with moves and board states visualized in a scatter plot. A user might query for games starting with a specific opening, driving the DoI to focus on relevant points and lines and also propagating high DoI to their surroundings.

This DoI specification includes a large number of possible user configurations: How strongly should DoI be propagated via embedding proximity, along the future of trajectories, along the past, or in dense versus sparse areas, for points, for edges, for labels, etc.?
Your task in this project is to conceptualize a way to find (near-)optimal configurations automatically and to create a web-based prototype. This includes devising, categorizing, and reasoning about aspects such as data properties and user intent, and figuring out which configuration is optimal under which circumstances, e.g., when data is extremely dense and hierarchical and users want to compare sparse outliers. The underlying DoI specification is already given.
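
One ingredient of such a specification, sketched here with made-up parameters rather than the actual given DoI specification, is propagating interest from queried items to their embedding-space neighbors with a tunable falloff; sigma below is exactly the kind of configuration knob this project would learn to set automatically:

    import numpy as np

    def propagate_doi(points, query_mask, sigma=0.1):
        """points: (n, d) embedding; query_mask: boolean array of queried items."""
        doi = query_mask.astype(float)
        queried = points[query_mask]
        if len(queried):
            # Distance from every point to its nearest queried point.
            d = np.linalg.norm(points[:, None] - queried[None, :], axis=-1).min(axis=1)
            # Gaussian falloff: interest decays with embedding distance.
            doi = np.maximum(doi, np.exp(-(d ** 2) / (2 * sigma ** 2)))
        return doi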
An interactive example prototype can be found here.

Trajectory Data Query Example Figure



Single Image De-raining Using Lightweight Model

Topics: de-raining, deep learning, convolutional neural networks, vision transformers
Supervision: Mohammed Abbass
Contact: mohammed.abbass(at)jku.at
Type: MSc Thesis, MSc Practicum

Undesirable weather conditions can lead to poor image quality, degrading both the content and colors of images. Rain, in particular, can degrade the performance of image processing and computer vision tasks such as surveillance, recognition, object detection, and tracking. Therefore, it is necessary to separate the clean background from rain-affected images and remove the rain. This is the focus of image rain removal.

This project will therefore focus on developing an image rain-removal algorithm based on a deep learning model. The work involves proposing a lightweight deep learning model and using a popular dataset for both training and evaluating it. Additionally, we will design custom transformers or exploit existing models to help our model outperform the current state-of-the-art models. The project may also involve collecting real-world images in challenging conditions, such as low-light or blurred scenes.
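
A minimal PyTorch sketch of what “lightweight” could mean here: a small residual CNN that estimates the rain-streak layer and subtracts it from the input. Depth, width, and the residual formulation are illustrative assumptions, not a proposed final architecture.

    import torch
    import torch.nn as nn

    class LightDerain(nn.Module):
        def __init__(self, channels=16):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, 3, 3, padding=1),
            )

        def forward(self, rainy):
            rain = self.body(rainy)  # estimated rain-streak layer
            return rainy - rain      # clean background estimate

    model = LightDerain()
    out = model(torch.randn(1, 3, 128, 128))  # smoke test on a random image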

Requirements:
● Good background in Deep Learning
● Good skills in PyTorch programming
● Interest in computer vision

Single image de-raining using lightweight model



Visual Tracking of LLM Output Evolution

Topics: LLM, Human-AI Interaction, Visualization
Supervision: Amal Alnouri, Andreas Hinterreiter, Marc Streit
Contact: vds-lab@jku.at
Type: BSc Practical Work, BSc Thesis, MSc Practical Work, MSc Thesis

As Large Language Models (LLMs) become integral to text generation and reformulation tasks, there is a growing need for tools that enhance user understanding and control over the output. While LLMs can produce highly sophisticated text, users often struggle to evaluate and control this output, especially when dealing with iterative edit prompts. This project aims to address this gap in transparency and user comprehension by developing an interactive tool that allows users to visually track the evolution of LLM-generated text over multiple interactions. By clearly displaying differences between the generated texts and providing key metrics as users refine their prompts, the tool will help them assess the extent to which the LLM outputs align with their expectations, make informed decisions, and gain deeper insights into how the model's responses evolve with each input variation.
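
The underlying diff computation could be as simple as the following Python sketch, which compares successive LLM outputs word by word and reports a similarity score; the interactive visualization built on top of such diffs is the actual contribution of this project:

    import difflib

    def track_evolution(outputs):
        """outputs: list of LLM responses across iterative prompt refinements."""
        for i in range(1, len(outputs)):
            prev, curr = outputs[i - 1].split(), outputs[i].split()
            sm = difflib.SequenceMatcher(a=prev, b=curr)
            print(f"iteration {i}: similarity {sm.ratio():.2f}")
            for tag, i1, i2, j1, j2 in sm.get_opcodes():
                if tag != "equal":  # report inserted, deleted, replaced words
                    print(f"  {tag}: {' '.join(prev[i1:i2])!r} -> "
                          f"{' '.join(curr[j1:j2])!r}")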

LLM Prompting Teaser Image



Monkeypox disease classification

Topics: image classification, medical imaging, monkeypox virus
Supervision: Mohammed Abbass
Contact: mohammed.abbass(at)jku.at
Type: BSc Practicum, BSc Thesis

Monkeypox is an infectious viral disease caused by the mpox virus. This virus infects humans and some other animals. It can be transmitted to humans from animals and presents with a wide range of symptoms, such as fever, rash, and swollen lymph nodes, which typically last from two to four weeks. Recently, monkeypox has received significant attention, as many people around the world have been infected with the virus.

In this project, we will focus on developing an image classification model based on a convolutional neural network (CNN). The project includes building on existing deep learning models and exploiting a popular dataset to train and evaluate the model. Moreover, we will augment the dataset to increase the robustness of the model. Finally, we will compare our model with state-of-the-art methods.
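
As a hedged starting point (dataset path and backbone are placeholder assumptions, not the project's prescribed setup), augmentation and transfer learning in torchvision could look like this:

    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Augmentations increase robustness to pose, lighting, and color variation.
    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(15),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    data = datasets.ImageFolder("mpox_dataset/train", transform=augment)

    # Start from an ImageNet-pretrained backbone and replace the classifier head.
    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, len(data.classes))
    # ...then train with a standard cross-entropy loop and compare against
    # state-of-the-art methods as described above.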

Requirements
● Good background in Deep Learning
● Good skills in PyTorch programming
● Interest in computer vision

Mpox Virus